No matter how careful you are, false precision creeps into research results and reporting. From research design and statistical testing to a general failure to accept the limitations of people and data, numbers often appear more accurate than they really are. There are a few things researchers can do, however, to minimize false precision so that marketers can make more informed business decisions.
Incorporate as much rigor as possible
Researchers have many tools to reduce the potential for false precision, but three foundational techniques are particularly important.
First, use the largest sample sizes you can afford. While there is no “best” sample size that applies to every study, it’s fair to say that more is better. In the market research space, 700 per group often offers the precision necessary to determine whether two groups are different: a gap like 10% versus 15% will usually test as statistically significant at that size. When budgets lead to sample sizes of 200 to 300 per group, reliability decreases and false precision increases.
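To make that concrete, here’s a rough Python sketch of a two-proportion z-test run at both sample sizes. The 10% and 15% figures and the group sizes simply mirror the example above, and the helper function is our own illustration rather than output from any particular survey platform.

```python
# A minimal sketch of a two-proportion z-test, showing how sample size decides
# whether a 10% vs. 15% gap registers as significant. All figures are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(p1, p2, n1, n2):
    """Return the z statistic and two-sided p-value for two observed proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

for n in (700, 250):
    z, p = two_proportion_z_test(0.10, 0.15, n, n)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"n = {n} per group: z = {z:.2f}, p = {p:.3f} -> {verdict} at the 5% level")
```

At 700 per group, the difference clears the 5% threshold comfortably; at 250 per group, the very same gap does not.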
Second, use comparison or control groups as often as possible. Without a comparison, it’s impossible to know how much random chance affected the data. Was recall of your brand actually 10%, or would 10% of people have recalled a brand you just made up? Did 10% of people try, buy, like, or recommend your product, or would 10% have said the same of a fictitious brand? No matter how careful they are, people will always misremember and misunderstand seemingly obvious things.
Third, when the opportunity arises, use a true random sample. If you’re lucky enough to be working with a complete population list, such as students registered at a school or cashiers employed in a store, it may be possible to draw a random sample from that list and gain consent from the people selected. Unfortunately, most market researchers won’t have access to a population list of customers/shoppers/buyers/users and so won’t be able to benefit from this technique.
Use as few significant digits as possible
Numbers are easy to generate. Throw a questionnaire at 700 people, run chi-squares, calculate p-values, and build thousand-page tabulations. But those resulting numbers aren’t truth. They are representations of complex subjective constructs based on fallible, unreliable humans. Where the truth is 68%, a survey result could be 61% or 69%. To say that 61.37% of people would recommend hypothetical Brand C is a gross misuse of decimal places.
Decimal places are perhaps the most problematic source of false precision, particularly in the marketing research world. To avoid this, don’t use any decimal places when percentage values are between 5% and 95%. Similarly, avoid using two decimal places when reporting Likert results. Only venture into one or more decimal places when you’re working with huge sample sizes from truly random samples.
Even better, if you’re brave and truly appreciate the dangers of false precision, round 61.37% to ‘about 60%.’
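To see how little those decimals actually convey, here’s a quick sketch using the hypothetical 61.37% figure and a sample of 700. The margin of error comes from the standard formula for a proportion; the numbers are illustrative only.

```python
# A small sketch of why decimal places overstate precision: a hypothetical 61.37%
# result from n = 700 carries a margin of error that dwarfs anything the decimals claim.
from math import sqrt

p_hat, n = 0.6137, 700                      # hypothetical survey estimate and sample size
moe = 1.96 * sqrt(p_hat * (1 - p_hat) / n)  # 95% margin of error for a proportion

print(f"Estimate: {p_hat:.2%}, 95% margin of error: +/- {moe:.1%}")
print(f"Report instead: {round(p_hat * 100)}% (or 'about {round(p_hat * 20) * 5}%')")
```

With a margin of error of roughly plus or minus 3.6 points, the “.37” is pure noise.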
Use statistical testing wisely
Like artificial intelligence, statistical tests are meaningless when they aren’t paired with human oversight.
Tabulation reports can include thousands of t-tests and chi-square tests but, by design, about 5% of the comparisons where no real difference exists will still come back significant: Type I errors. Even worse, we don’t know which of the significant results are the false ones. Because they are easy to find and exciting to report, it’s tempting to overuse these significant results. To help readers grasp the concept of false precision, it’s a good idea to share corroborating trends from other sources, such as last year’s research report, loyalty data, or economic and political data.
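Here’s a rough simulation of that 5% problem. Both groups are drawn from the same population (a hypothetical 30% incidence, 700 per group), so any ‘significant’ chi-square result is, by construction, a Type I error; the group sizes, incidence, and number of tests are all assumptions chosen for illustration.

```python
# Simulate many chi-square tests on groups drawn from the *same* population and
# count how often a "significant" difference appears anyway (Type I errors).
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
alpha, n_tests, n_per_group, true_rate = 0.05, 2000, 700, 0.30  # all hypothetical

false_positives = 0
for _ in range(n_tests):
    # Both groups share the same true 30% rate, so there is no real difference to find.
    yes_a = rng.binomial(n_per_group, true_rate)
    yes_b = rng.binomial(n_per_group, true_rate)
    table = [[yes_a, n_per_group - yes_a], [yes_b, n_per_group - yes_b]]
    _, p_value, _, _ = chi2_contingency(table, correction=False)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons ({false_positives / n_tests:.1%}) "
      f"came back 'significant' at alpha = {alpha}")
```

Expect the count to land near 5%, even though none of the differences are real.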
If you’re lucky enough to be using random samples, always report margins of error. Further, report confidence intervals whenever they’re available. While these numbers also incorporate a degree of false precision, readers need reminders that any statistics shared aren’t carved in stone.
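As a sketch of what that reporting might look like, here’s one way to present a hypothetical 10% recall figure with its 95% confidence interval, using the Wilson interval from the statsmodels library. The counts are made up for illustration.

```python
# Report a proportion with its confidence interval rather than as a bare point estimate.
from statsmodels.stats.proportion import proportion_confint

recalled, n = 70, 700  # hypothetical: 70 of 700 respondents recalled the brand
low, high = proportion_confint(recalled, n, alpha=0.05, method="wilson")

print(f"Brand recall: {recalled / n:.0%} (95% CI: {low:.0%} to {high:.0%}, n = {n})")
```

A line like ‘10% (95% CI: roughly 8% to 12%)’ keeps the estimate front and center while reminding readers how much wiggle room surrounds it.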
Most importantly, ensure your reader understands that any numbers presented are not truth. Rather, they are vastly closer to the truth than hypothesizing alone would be.
Summary
False precision is an easy trap to fall into, especially when the research results match your hypotheses. It can result in misleading interpretations, flawed decision-making, and, ultimately, negative consequences for businesses. However, by being mindful of the limitations of research designs and data reporting, and by offering clear instructions on how to best interpret numbers, researchers can help marketers better understand their data and make more informed and accurate decisions. If you’re curious about how false precision might present itself in your research, feel free to connect with one of our survey experts!