The Latest in Market Research
The Power of 'Why' in Market Research: How to Improve Your Survey Methodology
Every researcher struggles with “the why.” WHY do people behave in a certain way? WHY do people prefer products of a certain shape, size, or color? WHY do people prefer a certain brand? These types of “why” questions often lead researchers to discussions about whether a qualitative or quantitative method is the best option for a particular market research project.
But “why” is not a question reserved for making a decision about using qualitative or quantitative research. Rather, it’s a question that social and market researchers should ask of themselves throughout the entire research process.
Why is your chosen methodology the most suitable methodology?
Researchers who specialize in questionnaire research are extremely skilled at their craft. In fact, they can probably answer any business question you bring to them by designing a comprehensive and detailed questionnaire. Similarly, skilled qualitative researchers can also solve a huge range of business problems with focus groups and IDIs. We know, however, that not every research question is best solved by using your favorite methodology.
- Baseline metrics: Are you seeking baseline metrics of frequency, magnitude, and duration? In such cases, quantitative research is your best option. While most people immediately turn to questionnaires as the best option, there are indeed many other quantitative options. Biometric methods like eye-tracking, EEGs, galvanic skin response, and heart rate variability offer valid and reliable metrics. Similarly, customer data analysis, web analytics and data mining could be more appropriate quantitative methods. Know WHY you chose your specific quantitative method.
- Generalizability: Are you trying to generalize behaviors or emotions from a small group of people to a vastly larger population? For this purpose, you’ll need to start with a fairly large random sample that is representative of the population in terms of key demographics and psychographics. Historically, quantitative methods were the only option but AI innovations have changed that. Today, qualitative research can be conducted at a vastly larger scale as tools like Ascribe can code and analyze vast quantities of qualitative data with high levels of accuracy.
There used to be a clear separation between what quantitative and qualitative research is and what it can do but that is no longer the case. Researchers need to ask themselves why they’re resorting to traditional methodologies when new options are being added to our toolbox every day.
Why are the questions so monotonous?
The bulk of marketing research data collection tools rely on asking people questions. Where do you shop? What do you buy? When do you buy? How many do you buy? Why do you buy? These are straightforward, simple questions. But, reducing our lives to these simple questions negates the complexity of our lives and our decision-making processes. Simple is good, but simple questions elicit shallow, habitual answers rather than deep, personal reflections.
Instead, we need to ask these questions: Why did I phrase the question like that? Why did I choose that set of answers? Why do I always rely on templated questions? Why do all of my questionnaires use the same questions? When we scrutinize our questionnaires in the same way we expect participants to scrutinize their answers, the result is more effective questions.
Simple questions don’t need to be mind-numbingly boring and lacking in complexity to generate thought-out answers. Choose the five most important questions in each questionnaire and take thirty minutes to brainstorm ten alternatives for each. Write out as many preposterous questions as you can. Imagine unusual scenarios, unexpected shopping partners, and unlikely shopping occasions. Aim for strange and unexpected. Even if none of the resulting questions are good, the process will push you towards questions that are more interesting and thought-provoking for your participants.
Why did you choose that type of report?
The fastest way to finish a report is to write it in PPT or Word. It’s what we’ve always done and it’s what clients have always expected. But, there is no reason besides convention that findings need to be shared that way. The purpose of a report is to share or teach a key learning to people, not to create pages of writing. So why did you choose PPT or Word?
Think about what you love to do in your spare time. Maybe you like to read. In that case, Word might be the perfect type of report for you to receive. Even better, imagine if it took on the flavor of your favorite author and was written as a romance, mystery, or historical fiction novel. That might be the most engaging book you’ll ever read and it will certainly be one you’ll remember forever.
Obviously, the creative report needs to be accompanied by an addendum of detailed results but there’s no reason for the most important teaching tool to be prose.
What’s next?
An extremely effective way to achieve business success is to reject the status quo. Instead, ask yourself why. Why did I choose this? Because you did, in fact, choose every aspect of the research process, even if you did it without thinking. Follow up that why with multiple probes until you know for sure that you’re choosing the best path, not the easiest, fastest, or simplest path.
For quantitative researchers, it could mean recommending qualitative interviews with AI coding and analysis for your next project. For questionnaire authors, it could mean rejecting your traditional template and developing a new template replete with creative options that inspire deep thinking. No matter what your why is, it’s sure to put you on the path to more effective and engaging research. We’d love to be on that path so please get in touch with one of our survey experts!
12/18/24
The Latest in Market Research
Growth Mindset Goals for 2025
This year, forget New Year’s resolutions. Instead, capitalize on your growth mindset and choose goals that fit naturally into your career path. Whether you’re seeking personal growth or business growth, here are a few ideas to get you started!
Get Comfortable with AI
Like it or not, AI is here to stay and it’s changing everything. In the insights industry, it’s infiltrated recruitment, sampling, research design, questionnaire design, interview moderation, data analysis, reporting, and more. There is no avoiding AI.
If you want to remain relevant and happily employed, you have no choice but to engage with and become knowledgeable about AI. You don’t need to become an expert programmer or developer, but you do need to be able to engage in meaningful conversations and make wise decisions. Here are some fantastic free and paid resources to get you started.
- Coursera: One of my favorite free sources for learning and improving skills, Coursera offers myriad free and paid courses from many accredited institutions you already know and trust. Perfect for beginners, you can sign up for their free online class called, “AI for Everyone” taught by Andrew Ng from Stanford University. This class focuses on understanding the terminology, what to expect from AI, how to use AI, and how to use AI ethically. You don’t need any technology skills or experience to benefit from this course.
- Harvard University: If you want to name-drop and you already know Python, this course is for you. Harvard offers many free courses to the public, including an AI class called “CS50’s Introduction to Artificial Intelligence with Python.” You’ll learn about algorithms, machine learning, AI principles, and how to use AI in Python. Make sure to drop this certification on your LinkedIn page!
- NewMR: If you’d prefer to be more of an observer and soak up tidbits along the way, sign up to receive Ray Poynter’s AI Newsletter. With lots of discussion about how to use AI and notices about webinars and new tools, Ray will keep you up to date with everything insights experts should be aware of. This is a great option for people who feel they’re too old or too far along in their career path to start learning something new – because Ray isn’t!
Increase Your Questionnaire Design Skills
No matter how many years of experience you have writing questionnaires, there’s always someone from a different industry with different experiences who can offer you something new to learn. Whether that’s new types of questions or perspectives you hadn’t considered before, grow your questionnaire design skills with a free or paid class, webinar, or book.
- ESOMAR Academy: Always a wealth of current knowledge, ESOMAR regularly offers online classes including this “Empathetic Survey Design” course by Jon Puleston and Martha Espley. Stuffed with data from their own research, you’re sure to pick up some helpful techniques you’ve not considered before.
- Coursera: The choices on Coursera are unending. Whether you prefer the perspective of psychologists, sociologists, anthropologists, or economists, there is a questionnaire design course for you. For a well-rounded approach, try this “Questionnaire Design for Social Surveys” course offered by the University of Michigan.
- People Aren’t Robots: Skip this option if you don’t like authors plugging their own books! In case you do, “People Aren’t Robots” is a short yet detailed book with lots of examples that will inspire you to write better questionnaires. Taking the point of view that people are imperfect, it offers a rare perspective on how to be kinder to people answering questionnaires.
Be the Change You Want to See
It’s easy to get frustrated when you see your industry moving down one path when you want to take a different path. Fortunately, there’s an easy solution. Join your national association and grab the steering wheel!
- Insights Association: If you’re based in the USA, the Insights Association works on your behalf to champion, protect, and create demand for the insights and analytics industry. Volunteers participate in developing quality standards, education, certification, and more. Get involved with a few small projects to see just how big of an impact you can make.
- CAIP and CRIC: If you’re based in Canada, you’ve got two ways to help steer your national association. Individuals can become Certified Analytics and Insights Professionals and companies can join the Canadian Research Insights Council. In either case, volunteer as a board or committee member, and make your priorities come to life.
- Esomar: Are you ready to make a bold move and create global impact? Then Esomar is your destination! You may feel you’re not yet ready to run for a board position or a National Representative position but there are plenty of other ways to be heard. That might be as part of project committees that work towards specific tasks like guidelines and publications, or program committees that plan events and content. Contact Esomar to find out how you can make your ideas a reality.
What Next?
As the saying goes, the best time to start is right now. Whether it’s January 1st or June 30th, exercise your mind with a course, a webinar, a book, or an email to your national association. And when you’re ready to implement your upgraded questionnaire design skills or test out an AI text analytics and verbatim coding system, talk with one of our survey and AI experts. We’d love to be a part of your growth journey!
12/18/24
The Latest in Market Research
How to Avoid Fostering False Precision
No matter how careful you are, false precision creeps into research results and reporting. From research design and statistical testing, to a general failure to accept the limitations of people and data, numbers often appear more accurate than they really are. There are a few things researchers can do, however, to try to minimize false precision such that marketers can make more informed business decisions.
Incorporate as much rigor as possible
Researchers have many tools to reduce the potential for false precision but three foundational techniques are particularly important.
First, use the largest sample sizes you can afford. While there is no “best” sample size that applies to every study, it’s fair to say more is better. In the market research space, 700 per group often offers the precision necessary to determine whether two groups are different - in this case, 10% vs 15% will probably be statistically different. When budgets lead to sample sizes of 200 to 300 per group, reliability will decrease and false precision will increase.
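Those sample-size claims can be checked with the standard two-proportion z-test. The sketch below is illustrative only (the function name and setup are ours, and the test is the usual normal approximation), but it shows why 10% vs 15% separates cleanly at 700 per group and not at 250:

```python
import math

def two_prop_z(p1, p2, n):
    """Two-sided two-proportion z-test, assuming equal group sizes n."""
    pooled = (p1 + p2) / 2                      # equal n, so a simple average
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    z = abs(p1 - p2) / se
    p_value = math.erfc(z / math.sqrt(2))       # two-sided p-value
    return z, p_value

# 10% vs 15% with 700 per group: clearly significant
z, p = two_prop_z(0.10, 0.15, 700)
print(f"n=700: z={z:.2f}, p={p:.4f}")           # p ≈ 0.005

# The same difference with only 250 per group: no longer significant
z, p = two_prop_z(0.10, 0.15, 250)
print(f"n=250: z={z:.2f}, p={p:.3f}")           # p ≈ 0.09
```

The identical five-point gap flips from “real difference” to “could be noise” purely on the strength of the sample size.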
Second, use comparison or control groups as often as possible. Without a comparison, it’s impossible to know how much random chance affected the data. Was recall of your brand actually 10% or would 10% of people have recalled a brand you just made up? Did 10% of people try or buy or like or recommend your product or would 10% of people have said the same of a brand you just made up? No matter how careful they are, people will always misremember and misunderstand seemingly obvious things.
Third, when the opportunity arises, use a true random sample. If you’re lucky enough to be working with students registered at a school or cashiers employed in a store, it may be possible to gain consent from a sample of the population. Unfortunately, most market researchers won’t have access to a population list of customers/shoppers/buyers/users and so won’t be able to benefit from this.
Use as few significant digits as possible
Numbers are easy to generate. Throw a questionnaire at 700 people, run chi-squares, calculate p-values, and build thousand-page tabulations. But those resulting numbers aren’t truth. They are representations of complex subjective constructs based on fallible, unreliable humans. Where truth is 68%, a survey result could be 61% or 69%. To say that 61.37% of people would recommend hypothetical Brand C is a gross misuse of decimal places.
Decimal places are perhaps the most problematic source of false precision, particularly in the marketing research world. To avoid this, don’t use any decimal places when percentage values are between 5% and 95%. Similarly, avoid using two decimal places when reporting Likert results. Only venture into one or more decimal places when you’re working with huge sample sizes from truly random samples.
Even better, if you’re brave and want to show you truly understand false precision, round 61.37% to ‘about 60%.’
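To see why those extra decimal places overstate what the data can support, compare them to the textbook 95% margin of error for a simple random sample. This back-of-the-envelope sketch uses a hypothetical 61.37% result at n=700:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.6137, 700)
print(f"61.37% ± {moe * 100:.1f} points")   # roughly ±3.6 points
```

With the estimate bouncing around inside a seven-point window, reporting anything finer than whole percentages implies a precision the sample simply cannot deliver.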
Use statistical testing wisely
Like artificial intelligence, statistical tests are meaningless when they aren’t accompanied by human oversight.
Tabulation reports can include thousands of t-tests and chi-square tests but, by design, a 5% significance level means that 1 in 20 tests of a true null hypothesis will come back significant by chance alone. Even worse, we don’t know which of those significant results are false. Because they are easy to find and exciting to report, it’s easy to overuse these significant results. To help readers grasp the concept of false precision, it’s a good idea to share corroborating trends from other sources such as last year’s research report, loyalty data, or economic or political data.
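A small simulation makes the point concrete. Here we repeatedly draw two groups from the same population, so any “significant” difference between them is by definition a Type I error; the sample size, true rate, and trial count below are purely illustrative:

```python
import math
import random

random.seed(1)
N, ALPHA, TRIALS = 700, 0.05, 1000

false_positives = 0
for _ in range(TRIALS):
    # Two groups sampled from the SAME population (true rate 30%),
    # so a "significant" difference can only be a Type I error.
    a = sum(random.random() < 0.30 for _ in range(N)) / N
    b = sum(random.random() < 0.30 for _ in range(N)) / N
    pooled = (a + b) / 2
    se = math.sqrt(pooled * (1 - pooled) * 2 / N)
    z = abs(a - b) / (se or 1)                  # guard against se == 0
    if math.erfc(z / math.sqrt(2)) < ALPHA:     # two-sided p-value
        false_positives += 1

print(f"{false_positives / TRIALS:.1%} significant by chance alone")  # ≈ 5%
```

Run a thousand-page tabulation through that lens and the “exciting” cells start to look less like discoveries and more like the expected background noise.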
If you’re lucky enough to be using random samples, always report margins of error. Further, always report available confidence intervals. While these numbers also incorporate a degree of false precision, readers need reminders that any statistics shared aren’t carved in stone.
Most importantly, ensure your reader understands that any numbers presented are not truth. Rather, they are vastly closer to truth than hypothesizing.
Summary
False precision is an easy trap to fall into, especially when the research results match your hypotheses. It can result in misleading interpretations, flawed decision-making, and ultimately, negative consequences for businesses. However, by being mindful of the limitations of research designs and data reporting, and offering clear instructions on how to best interpret numbers, researchers can help marketers better understand their data and make more informed and accurate decisions. If you’re curious about how false precision might present itself in your research, feel free to connect with one of our survey experts!
11/28/24
AI & Open End Analysis
Cautious Innovation: How to Win the AI Race in the Insights Industry
Companies are in a heated race to showcase how their AI innovations help businesses achieve significant gains in processing data more quickly and accurately. But, as with all things, jumping on the bandwagon without a plan is rarely a wise move.
As AI tools developed in a variety of industries over the years, their use has uncovered lessons for future innovators. Here are just a few of those lessons.
Sudden pivots are wise
Though it seems like a lifetime ago, it was only eight years ago that Microsoft released the Tay chatbot. It instantly became a playful conversational tool with cute little quirks. However, people quickly migrated from having fun conversations with it to engaging in more controversial use cases. They realized they could train Tay, and soon taught it to become racist, sexist, and hateful.
This unanticipated outcome led to two important learnings. First, Microsoft reacted quickly and removed Tay from public use. Tay’s removal did not necessarily reflect a failure but, rather, a calculated risk within the innovation funnel. Second, as we’ve already learned from “Privacy by design” methods, the Tay incident reinforced the need for AI tools to incorporate “Ethics by design” models. Thanks in part to Tay, most AI tools now incorporate ethical guardrails. Take the risks, bring your innovations to market, but ensure they are prebuilt with processes to detect and prevent misuse.
Minimum viable standards are relative
Remember when restroom hand dryers with sensors first came out? They worked great for many people, but it soon became apparent that they were unreliable for people with darker skin tones. Developers hadn’t tested the product on people who had darker skin. Across the board, we’ve seen that AI tools are often biased towards pale, male faces because other groups aren’t included in sufficient quantities in AI training data. As a result, we now have higher minimum standards for training datasets, and we ensure they include people reflecting a wide range of demographics, especially in social and market research.
Our standards have improved over time, but they also differ based on the use case. In the research industry, for example, if you need to code questionnaire verbatims to understand which color of bar soap people prefer, 85% accuracy is suitable for the job. Increasing the 85% to 95% won’t change the outcome of the research but it will take longer and cost more. On the other hand, if you need to understand the efficacy of different mental health therapies, achieving 99% accuracy via automated coding enhanced with manual coding is the better way to go. Life-and-death situations necessitate higher accuracy. Standards are relative.
Ensure people retain final oversight
If you ask several competitive AI image generation tools to create an image of fish in a river, and they all show sushi and maki rolls swimming upstream, that doesn’t make the image valid. In fact, after seeing just one image, people would know the result was invalid.
This is exactly why people are necessary to confirm the validity and accuracy of AI tools. For example, during the development of our qualitative coding tool, Ascribe, we compared the results generated by the AI tool with results generated by expert human coders. It takes time to continually generate results in various industries and topics, and then test those results with human coding. But, that ongoing process is time well-spent to ensure that the quality of AI results is comparable to or better than human results.
Cautious risk-taking will win the AI race
Perfection is elusive, even in the age of AI. Every product, no matter how advanced, has its limitations. While some flaws, like those seen with Tay, might demand drastic changes, most can be addressed with small tweaks or by discovering the optimal use cases. The secret to successful innovation is a balance of bold ideas, agile adaptation, and the courage to take small, calculated risks. If you’re curious to learn more about our approach to using AI wisely, please get in touch with one of our Ascribe or Artificial Intelligence experts.
11/28/24
The Latest in Market Research
Data to Insight to Action: Choosing the right type of research report
Research reports come in a full range of formats. From short and precise to long and detailed, there’s a report for that. In this post, we’ll outline the pros and cons of three formats to help you choose the one that will best suit your business needs.
Data reports are cheap, fast, and easy
Historically, the research industry has done a great job of producing reports with reams of data. Today, these types of reports are supported by automated systems that produce a nicely colored chart for every single question in a questionnaire. Supplemental charts might also be built manually using data pulled from tabulations. With charts in place, titles or headlines are easily prepared by converting the largest or smallest number in each chart to a phrase.
You know you’re working with a data report when the headlines read like these:
- 75% of shoppers seek out discounts
- 55% of people prefer package B
- 80% of people indicated child-care is a key issue
Data reports are fast and relatively cheap to create. Launch a survey, request the automated charts, and choose an interesting number to become the title on each slide. These reports require little thinking and creativity and, as such, nearly anyone can prepare them regardless of their skill or experience.
Despite their tendency to be quite long, these types of reports answer limited questions. And though automation means that they’re far less expensive, they are the least helpful and are rarely used again.
Insight reports take time, money, and care
In recent years, our industry has realized that data reports offer little value to research buyers. “What” without the “how, why, and when” doesn’t lead to repeatable business decisions that can be generalized for long-term value across products and audiences. Consequently, many report providers have turned their efforts from data reports to insight reports.
Insight reports require researchers to simultaneously consider multiple pieces of data and derive an understanding that isn’t inherently obvious in the data. This requires understanding the external, environmental context in which the research took place, and how it connects to the business problem at hand.
Insight reports are more expensive and take more time to create because they require more skill and experience to create. Rather than focusing on the largest or smallest number in a chart, researchers instead seek out the unexpected, nonsensical, out of place numbers which are most important. Turning those numbers into insight requires digging into external data – cultural, historical, geographical, political, technological, economic.
Headlines in an insight report might look like this:
- Because of the sudden cost of living increase, 50% more shoppers are value seekers
- Despite being less accessible, people preferred Format B because their favorite celebrity was recently seen using it
- The need for child-care declined to 80% due to higher unemployment rates
Although insight reports take more time and money to create, they create value beyond the cost of data collection. They offer insights and understanding that can be used not only for the business problems at hand, but also for other challenges still to come with similar products, categories, or audiences. In the end, these more valuable insights lead to long-term loyalty between the research buyer and provider.
Action reports are expensive but they generate ROI
The most valuable reports are action reports. These reports go beyond data points and contextual insights to offer actionable recommendations that identify long-term solutions for specific and generalized business problems. If you were to map out the questions and slides in these types of reports, you wouldn’t find a chart for every question and the slides wouldn’t be ‘in order.’ Rather, you would find that each slide title contributes to an overall coherent story complete with an introduction, key issues, and final conclusions.
Here are some examples of action statements in a report:
- With the increased cost of living creating more value seekers, switching to lower quality paper and using less ink on each printing will help decrease prices by 8%
- Despite lower ratings for Format A, it should be adopted with a marketing plan that demonstrates these 3 accessibility use cases
- Though demand for child-care has decreased, we must increase child-care openings to support job-seeking opportunities
Unlike lengthy and less expensive data reports, action reports are usually short and much more expensive mainly because they are vastly more difficult to create. Action reports depend on a true partnership between the research buyer and provider, a partnership wherein the provider has been privy to the confidential workings and challenges experienced by the research buyer. A true partnership leads to reports that offer long-term value and increasing ROI.
Key Takeaways
The choice between data reports, insight reports, and action reports ultimately boils down to your specific business needs. While data reports offer a quick and cost-effective solution, their limited insights may provide no actionable recommendations. Insight reports, on the other hand, dive deeper into the data, uncovering valuable patterns and trends. And, at their best, action reports truly deliver by providing not only insights but also concrete recommendations to drive business growth. The key to success lies in choosing the type of report that aligns with your strategic goals and provides the actionable insights you need to make informed decisions. If you’d like to understand more about research processes, connect with one of our survey experts. We always love chatting about the best ways to uncover valid and reliable insights!
11/21/24
Text Analytics & AI
Transform Open-End Analysis into Precise Results with Theme Extractor 2.0 and Ask Ascribe
Market researchers and data analysts are constantly looking for faster and more efficient ways to analyze and extract actionable insights from vast amounts of open-end feedback. The launch of Theme Extractor 2.0 and Ask Ascribe marks a major leap forward in open-end analysis, equipping organizations with advanced tools that offer exceptional speed, precision, and depth of insight.
Let’s explore how these innovative solutions enhance Coder, Ascribe’s verbatim coding platform, and CX Inspector, Ascribe’s text analytics solution, along with how they can improve your workflow.
What is Theme Extractor 2.0?
Theme Extractor 2.0 is Ascribe’s latest AI-powered innovation, designed to automatically analyze open-ended comments and survey responses with an accuracy of over 95%. It generates a human-like, richly descriptive codebook with well-structured nets, allowing users to spend less time on the technical aspects of analyzing open ends and more on interpreting results. By using Artificial Intelligence (AI) and Natural Language Processing (NLP), the tool navigates the complexities of text analysis, dramatically reducing manual analysis and delivering results with incredible speed.
Check out the theme-based codes and nets in the following snapshot of results, delivering a significant improvement over the one-word topics used in older technologies. A total of 1655 responses were analyzed, and 1638, or 99%, were immediately classified into 33 codes and 9 nets.
Benefits of Theme Extractor 2.0
1. Accelerated Workflow and Efficiency
Analyzing open-ended responses manually can be extremely time-consuming. Theme Extractor 2.0 automates this task, allowing users to process large datasets and free up valuable time for more in-depth analysis. With minimal manual intervention required, it streamlines your workflow and reduces the risk of errors.
2. Exceptional Accuracy
With over 95% accuracy, Theme Extractor 2.0 provides a level of precision that rivals human coding. By minimizing uncoded responses and overlapping codes, it ensures that your data is clean, structured, and primed for thorough analysis. You can trust that the data and insights are as accurate and reliable as possible.
3. Clear and Organized Codebook Structure
Theme Extractor 2.0 generates theme-based codes and nets that help uncover deeper patterns within your open-end responses. Say goodbye to one-word topics, and hello to helpful, descriptive results. It structures codes into clear, logical nets, making it easier for researchers to interpret data and spot key themes. These improved results add clarity to your analysis and ensure a seamless experience.
4. Minimal Manual Intervention
With minimal manual intervention needed, Theme Extractor 2.0 handles the labor-intensive tasks, allowing users to focus on more strategic work. Automating analysis allows researchers to quickly move from raw data to actionable results, saving time and reducing the risk of human error.
What is Ask Ascribe?
Ask Ascribe is an innovative AI-powered tool that lets you ask natural language questions about your data and get instant, actionable insights through answers, reports, and summaries. Powered by advanced Generative AI models, Ask Ascribe allows researchers to engage directly with their data, enabling them to ask specific questions and gain a deeper understanding of key themes, customer sentiments, and areas for improvement.
Benefits of Ask Ascribe
1. Instant Answers with Natural Language Interaction
You don’t have to sift through piles of data anymore—Ask Ascribe lets you ask natural language questions like “What are the main themes in this feedback?” or “How can I improve my Net Promoter Score (NPS)?” and receive accurate, data-driven answers in seconds. This AI-powered feature simplifies the analysis process, providing you with real-time insights into the story behind your data.
2. Uncover Actionable Insights Easily
Ask Ascribe goes beyond providing answers; it delivers actionable insights that empower you to make informed decisions. Whether you’re looking to explore customer emotions, identify pain points, or uncover opportunities for improvement, Ask Ascribe helps you quickly grasp what your data is revealing and what steps to take next.
3. Explore Data on a Deeper Level
With Ask Ascribe, you can easily drill down into responses to see the original comments tied to specific insights. This feature allows you to explore your data more thoroughly and gain a nuanced understanding of key drivers, customer feedback, and other qualitative insights. You’re not just scratching the surface—you’re engaging deeply with your data.
4. Empowers Better Decision-Making
By delivering quick, relevant answers to your questions, Ask Ascribe helps you make informed decisions based on real data. The ability to “interview” your data in real-time empowers researchers, analysts, and business leaders to respond faster and more strategically to emerging trends and customer feedback.
Why Theme Extractor 2.0 and Ask Ascribe Matter
Together, Theme Extractor 2.0 and Ask Ascribe – available as features in Ascribe’s Coder verbatim coding platform, CX Inspector text analytics and Services solutions – are changing how researchers and analysts engage with their data. By automating open-end analysis and coding processes and enabling AI-powered Q&A, these tools drastically reduce the time and effort needed to turn open-ended responses into clear results and actionable insights. Here’s why they make a difference:
- Unparalleled Speed to Results: Load a data file or ask a question and the results appear within seconds. Open-end datasets of any size and complexity can now be analyzed quickly and easily.
- Increased Productivity: These solutions streamline workflows by eliminating manual processes, enabling researchers to analyze more data in less time.
- Improved Accuracy: Both tools leverage advanced AI models to deliver precise, more descriptive results, ensuring you can trust the insights derived from your data.
- Deeper Understanding of Data: By offering theme-based codes and nets, and natural language interaction, researchers can dig deeper into their data, leading to more strategic decision-making.
Conclusion
The ability to transform open-ended data into clear, actionable results is essential for any organization that values customer feedback and market research. Ascribe’s Coder, CX Inspector, and Services, available with Theme Extractor 2.0 and Ask Ascribe, are powerful solutions that deliver a new level of speed, accuracy, and ease to researchers and analysts.
These solutions are the future of open-end data analysis—enabling users to unlock valuable insights with minimal effort and maximum efficiency. If you’re ready to transform how you work with data, it’s time to explore what Ascribe can do for you.
Interested in discussing how Ascribe can help you? Click here to sign up for a live demo with your data, or drop us your contact info and we will reach out to you.
11/18/24
Bridging the Gap: How Academic and Industry Researchers Can Learn From Each Other
Whether your environment is academic or industry, every social, market, and consumer insights expert has developed a unique set of skills shaped by their education and experience. Regardless of their depth or source, those skills remain incomplete. With that in mind, here are some key skills that industry and academic researchers can learn from one another.
Embrace the randomness of real life
Every researcher loves a fully factorial, experimentally designed retail aisle, but those don’t exist in real life. Real-world shopping means not finding a product, brand, size, or format you’re looking for, coming upon unexpectedly low or high prices, and dealing with rude customers and employees.
Practical researchers who conduct on-site shop-alongs and ethnographies have extensive experience analyzing and interpreting complex, messy, real-life scenarios. Their hands-on experience makes their work highly relevant to business leaders needing to understand current industry needs, market trends, and consumer preferences. Real life may be messy, but academic researchers should learn to embrace a bit more mess.
Incorporate more theoretical depth
Human behavior isn’t new. For more than a hundred years, academic researchers have worked to understand market and consumer behaviors and to build theoretical foundations like Cognitive Dissonance Theory and Diffusion of Innovations Theory that can be used to hypothesize and predict future behaviors. They’ve built on the work that came before them, knowing that it is the foundation of their own research. This is what elevates their work from simpler descriptive analyses and hypotheses to deeper understandings of who, what, when, where, why, and how certain consumer behaviors occur.
Rather than trying to understand a single problem, such as which package will be successful today, academic researchers work to build theories that will have a broader impact. By digging into the human behavior archives more often, industry researchers could also generate more robust conclusions and recommendations.
Practice agile research processes
Where academic researchers often have months or years to run a research project that accommodates a wide range of variables, industry researchers often have days or weeks. Industry researchers have learned to expect and adapt to changing circumstances so that they can meet rapid turnaround times. Their work is efficient and responsive to the real world, which can change literally overnight. Industry research is often simple and quickly actionable. Bang for the buck is clear and personally observable. Academic researchers could definitely benefit from tightening their timelines and getting their outcomes into the real world more quickly.
Engage more stakeholders
Determining whether customers like package A or package B more is not as simple as it seems. Yes, a highly controlled experiment will reveal a winner, but simply moving forward with the results of what is essentially a customer vote could lead to a massive failure.
To ensure an experiment doesn’t land in the proverbial file drawer, industry researchers are careful to engage many of their stakeholders, including not only customers but also the package designer who will need to make any subsequent tweaks to their beloved design, the brand manager who campaigned for the losing package, the category manager who must budget for a more expensive package, the business development team who must promote a design they don’t personally like, and the executive leadership who is divided on the decision.
By engaging a cross-disciplinary team from the beginning, industry researchers have learned how to strengthen the applicability and reach of their research. Sometimes, academic researchers need to remember that uncovering truth isn’t the automatic path to success.
Take risks with new innovations
If you’re not already using AI, your competitors will pull ahead and leave you behind. That doesn’t mean you should jump onto the AI bandwagon and use it everywhere you possibly can. It does mean you should find ways to incorporate new methodologies like AI as soon as you find practical and valid uses for them.
Rather than waiting for academic researchers to complete highly controlled studies, industry researchers are incorporating AI tools and techniques for experimentation along the way. With side-by-side comparisons, industry researchers allow their customers to see and get comfortable with the results while also ensuring the innovations are valid and reliable in real life. The key is to take safe and considered risks along the way.
Value long-term learning
You’re a researcher because throughout your life, as a baby, toddler, child, teenager, and young adult, you were exposed to a set of experiences and circumstances that consciously or unconsciously led you to reject some job opportunities and choose this one. You are a lifelong experiment, not a one-time study from last week.
Similarly, academic researchers value lifelong experiences, which means that many of them conduct studies that take 5, 10, or even 50 years to complete. These studies help us understand systemic, longitudinal issues that are not visible in one-time, cross-sectional studies. Longitudinal studies are how we understand the impacts of early education on later voting patterns or early community service experiences on later-life consumer issues. Industry researchers would be well-served to take a longer-term approach to some of their studies.
Incorporate more scientific rigor
Academic researchers are rigorously trained in statistics, sampling, and research methodologies. Through years of schooling, they’ve learned to scrutinize and interpret data with a critical eye. They know when to use a Scheffé correction and why they won’t use it this time. They know the pitfalls of data tabulations and when to use Python and R scripts. Consequently, they succeed with high research quality and validity, minimal biases, and maximum reliability.
Given that many market and consumer researchers are serendipitous members of the industry and only receive on-the-job or as-needed training, this is definitely a gap that needs to be filled.
Summary
Whether you’re an academic or industry researcher, we all have knowledge gaps across a range of areas. Having a growth mindset focused on uncovering and filling those gaps is how good researchers eventually become great researchers. We would all benefit from taking a masterclass from ESOMAR or a course with the MRII, so don’t be shy. Have a look at their offerings and see how you can add new skills to your repertoire. Or, if you’d like to chat with a passionate colleague about research techniques, please connect with one of our survey experts. We’d love to hear from you!
11/8/24
Lessons from Past Elections: Adapting Polling Methods for Tomorrow
Introduction
As the political landscape shifts beneath our feet, polling remains a critical tool for understanding public sentiment. What if the path to more accurate voter insights lies in learning from our past experiences?
To deepen our understanding, we had a conversation with Dr. Don Levy, Director of the Siena College Research Institute, who shared valuable perspectives from his extensive experience in polling. His insights, derived from our discussion as well as his appearances on podcasts with AAPOR and WXXI News, shed light on the factors that can skew polling results.
In this blog, we’ll explore key lessons from past elections and the innovative strategies pollsters like Dr. Levy are implementing to enhance their methodologies and restore public trust.
Key Lessons from the 2016 and 2020 Elections
The polling industry has learned vital lessons that have reshaped its approach to electoral forecasting. Dr. Don Levy discusses the factors contributing to bias observed in the last two elections:
“I think we all learned a great deal from 2016, which, at this point, feels like the distant past. There was an insufficient amount of state polling during that election, and some battleground states—like Michigan, Wisconsin, and, to some degree, Pennsylvania—were not even identified as such. Additionally, some polls did not weight responses by education in that cycle. We learned from those mistakes.
By 2020, the polling landscape had changed entirely. Everyone was considering education, recognizing it as one of the key fissures in the American electorate, and there was a greater focus on several key battleground states. We were not alone in polling Wisconsin, Michigan, and Pennsylvania at that point.”
Dr. Levy identifies two critical areas for improvement:
- State-Level Polling:
Many national polls failed to accurately gauge support for candidates due to an insufficient focus on state-level data. This oversight revealed how local dynamics and voter sentiment vary significantly across different regions, leading to inaccuracies in predictions.
- Education as a Variable:
The impact of education on voter preferences was underestimated in many polls. Dr. Levy notes, "Education proved to be a dividing line, especially in battleground states." This insight emphasizes the importance of moving beyond traditional demographics. Pollsters must now consider how educational background influences voter behavior to enhance the accuracy of their models.
In response to these lessons, methodologies have evolved to prioritize regional polling and implement nuanced weighting practices that better represent educational backgrounds. By learning from past missteps, pollsters are striving to produce more reliable results, thereby reinforcing the credibility of polling as a tool for democratic engagement.
This recognition of education as a crucial variable, alongside the emphasis on state-level insights, marks a significant shift in voter sampling approaches. The goal now is to capture a clearer picture of public sentiment, ensuring that polling remains a trusted resource in the ever-changing political landscape.
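The education-weighting lesson above can be illustrated with a minimal post-stratification sketch in Python. The population shares and respondent counts below are hypothetical, chosen only to show the mechanics; they are not drawn from any poll discussed here, nor from Siena's actual weighting procedure.

```python
# Hypothetical post-stratification weighting by education.
# If college-educated respondents answer the poll at a higher rate than
# their share of the electorate, they must be weighted down (and
# non-college respondents weighted up) before tabulating results.

population_share = {"college": 0.38, "non_college": 0.62}  # assumed electorate shares
sample = {"college": 550, "non_college": 450}              # hypothetical respondent counts

n = sum(sample.values())
weights = {
    group: population_share[group] / (count / n)  # target share / observed share
    for group, count in sample.items()
}

for group, w in weights.items():
    print(f"{group}: weight = {w:.3f}")
```

Here college respondents make up 55% of the sample but only an assumed 38% of the electorate, so each receives a weight below 1, while non-college respondents are weighted above 1. Real polls extend the same idea across several variables at once (region, age, race, education), often via raking.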
Addressing Challenges with Innovative Approaches
In response to recent election cycles, pollsters have made significant strides to tackle key challenges. Dr. Levy highlights how these adjustments contribute to better accuracy and engagement:
- Achieving Diverse Representation Through Stratified Sampling: Pollsters use stratified sampling to ensure diverse representation across demographics, selecting sample groups that mirror the broader population—not only by age, gender, and race but also by considering regional and socioeconomic nuances. This helps create a more accurate model of voter intentions.
“We faced a bias in 2020. Our view was that ardent Trump voters—not the shy ones—tended not to respond to polls. This issue was evident across all polling methods: phone, web-based, text-to-web, and IVR. In analyzing about 50 polls from reputable AAPOR members in the ten days before the election, only three correctly predicted outcomes or leaned Republican, while nearly all others were biased the other way.
In Pennsylvania, for instance, areas with the highest Trump support showed significant polling errors. In regions where Trump won by 70% to 30%, polls often showed him at around 60% to 40%. What occurred systemically was that as we filled our quotas, we simply got greater participation among Biden voters than from a representative sample of Trump voters, who were, to stereotype, white men without a college education.
So, do we face that threat again this election? Yes. To address it, we’re implementing rigorous quota management, stratified sampling, and actively reducing drop-offs, along with the benefit of a repeat election.”
- Minimizing Drop-Offs by Leveraging Historical Data and Voter Lists: Pollsters have also become more vigilant about minimizing drop-offs, aiming to retain as many responses as possible to enhance data robustness. "Drop-offs can skew results by unintentionally filtering out certain groups," says Dr. Levy. As a solution, historical data and detailed voter lists have become valuable tools, allowing pollsters to refine their models and develop weighting strategies that are more reflective of the electorate. This approach helps account for variances in response rates, especially among underrepresented groups.
- Increasing Voter Engagement using AI and Data Enhancement: With advancements in artificial intelligence and data enhancement, pollsters now have more sophisticated tools to reach and engage voters, even as response rates remain challenging. AI-driven insights can optimize contact strategies, enabling pollsters to identify when, where, and how to engage different voter groups effectively.
These innovations collectively improve polling accuracy, reinforce public trust, and ensure that polling remains a valuable tool in capturing voter sentiment.
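The stratified sampling described above can be sketched minimally in Python. The voter frame, the choice of region as the stratification variable, and the proportional allocation are all illustrative assumptions, not a description of any pollster's actual procedure.

```python
import random

# Build a hypothetical voter frame, each record tagged with a region stratum.
random.seed(0)
frame = [{"id": i, "region": random.choice(["urban", "suburban", "rural"])}
         for i in range(10_000)]

def stratified_sample(frame, key, n):
    """Draw n records so each stratum's share matches its share of the frame."""
    strata = {}
    for rec in frame:
        strata.setdefault(rec[key], []).append(rec)
    sample = []
    for members in strata.values():
        # Proportional allocation: stratum gets n * (stratum size / frame size).
        k = round(n * len(members) / len(frame))
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(frame, "region", 500)
```

Because each stratum is sampled in proportion to its size, no region can be over-represented by chance, which is the point Dr. Levy makes about mirroring the broader population. Quota management in live fielding works similarly, but caps responses per stratum as they arrive rather than drawing from a fixed frame.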
Upholding Integrity: The Importance of Non-Partisan Polling
Maintaining public trust is paramount to non-partisan polling, and organizations like the American Association for Public Opinion Research (AAPOR) are essential in promoting transparency and upholding high standards in the polling industry. Dr. Levy highlights that these organizations work to enhance public understanding of non-partisan polling, which serves as a critical check against the biases that can arise in politically charged environments.
“My hope—and my request—to my friends in the polling community is to broadcast to U.S. citizens and voters, explaining who we are, what we do, and why it’s important. It would be a positive act of citizenship to participate in high-quality, non-partisan political polls. We're not trying to sell you anything, convince you of anything, and/or manipulate you in any manner, shape or form.”
Pollsters are committed to contributing positively to the democratic process by fostering greater transparency around polling methods, data sources, and any limitations that may impact findings. Through collaboration and innovation, the polling industry continues to adapt and improve, addressing new challenges with integrity. As the political and social landscape evolves, the industry’s focus on high standards remains steadfast, ensuring that polling serves as an informative, unbiased reflection of public sentiment and helps advance informed democratic engagement.
Conclusion
Reflecting on the lessons of recent elections, it’s evident that the future of polling depends on how well the industry adapts and innovates. By embracing better sampling techniques and new technology, pollsters are working to paint a truer picture of where voters stand and what they care about. The path forward is one of continuous learning and evolution, ensuring that polling remains a trusted tool for understanding public sentiment.
11/4/24