Determining The Sample Size

From Market Research

Precision and sample size

Consider a simple survey question such as:

How likely would you be to recommend Microsoft to a friend or colleague?

Not at all likely                                               Extremely likely
 0     1     2     3     4     5     6     7     8     9     10

This question was asked of a sample of 312 consumers and the following frequency table summarizes the resulting data:

[Figure: frequency table of the 0–10 likelihood-to-recommend ratings, n = 312]

From this table, we can compute that the average rating given to Microsoft is (0×22+1×12+…+10×22)/312=5.9 out of 10. However, only 312 people have provided data. As there are around seven billion people in the world, it is possible that we would have computed a different answer had we interviewed all of them. But how different? This is the question we seek to understand by evaluating precision.

The most common way of communicating the precision of an estimate to a non-technical audience is the confidence interval. Using the simplest formula, the 95% confidence interval for likelihood to recommend Microsoft is 5.6 to 6.2. Loosely interpreted, this is the range of likely true values that would have been obtained had everybody been interviewed (i.e., based on the survey, we would be surprised if the true figure, obtained by interviewing everybody in the world, fell outside the range of 5.6 to 6.2).

Formulas for computing confidence intervals take the sample size as an input. With a total of 312 respondents, the confidence interval is 5.6 to 6.2; a larger sample size produces a smaller confidence interval, and a smaller sample size a bigger one. For example, if we halve, double or quadruple the sample size (n), we get the 95% confidence intervals shown in the table below (if the table were not rounded to one decimal place, a more gradual change in the intervals across sample sizes would be evident).

  n      Lower bound   Upper bound   Interval
  156    5.4           6.3           5.87 ± 0.43
  312    5.6           6.2           5.87 ± 0.31
  624    5.6           6.1           5.87 ± 0.22
  1,248  5.7           6.0           5.87 ± 0.15
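The pattern in the table can be reproduced with the simple-random-sample formula from the notes (± = 1.96s/√n). The standard deviation s ≈ 2.8 used here is an illustrative value back-derived from the reported ±0.31 at n = 312, not a figure taken from the study itself:

```python
import math

def margin_of_error(s, n, z=1.96):
    """Half-width of a 95% confidence interval for a mean: z * s / sqrt(n)."""
    return z * s / math.sqrt(n)

s = 2.8      # assumed standard deviation of the 0-10 ratings (illustrative)
mean = 5.87  # sample mean from the survey

for n in (156, 312, 624, 1248):
    m = margin_of_error(s, n)
    print(f"n={n:5d}: {mean:.2f} ± {m:.2f}  ({mean - m:.1f} to {mean + m:.1f})")
```

Because the margin is proportional to 1/√n, quadrupling the sample size only halves the interval width, which is the diminishing-returns pattern visible in the table.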

The right-hand side of the table shows the confidence interval in a slightly different way, represented as the mean with a figure ±. If you have ever read a political poll you will have seen this before, where it is described as the margin of error (e.g., a political poll may report a result of 50% with a margin of error of 3%).

A key thing to note about the confidence intervals is that although they get smaller with larger sample sizes, there are diminishing returns.[note 1] In the example, the largest sample size of 1,248 is 8 times as big as n = 156, but the resulting confidence interval for n = 156 is less than three times as big as that for 1,248.

Guidelines for determining sample size

A practical challenge when designing a study is working out how large a sample is required. Although larger sample sizes reduce the sampling error – that is, the width of the confidence intervals[note 2] – larger sample sizes cost more money. Occasionally it is possible to work out the required sample size by working backwards from a desired confidence interval. For example, if Microsoft indicated they wanted to measure the average likelihood of being recommended to within ±0.22, we could determine that this required a sample size of 624. In practice, it is extraordinarily rare for such computations to be used to determine sample size. Instead, market researchers use a combination of gut feel, rules of thumb and understanding of their clients' need for accuracy when working out sample sizes. Some rough rules of thumb for sample size in quantitative studies are:

  n = (total budget − fixed costs) / cost per interview
  n ≥ 300 for basic studies
  n ≥ 600 for segmentation studies
  n ≥ 1,000 for important strategic studies
  n ≥ 100 per subgroup (e.g., segment)
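The "working backwards" computation can be sketched by inverting the margin-of-error formula to solve for n. As before, the standard deviation s ≈ 2.8 is an illustrative value back-derived from the reported intervals, not a figure from the study:

```python
import math

def required_n(s, margin, z=1.96):
    """Smallest n whose 95% CI half-width (z * s / sqrt(n)) is within `margin`."""
    return math.ceil((z * s / margin) ** 2)

# Target: measure mean likelihood to recommend to within +/- 0.22
print(required_n(2.8, 0.22))  # roughly 620-odd, in line with the n = 624 row
```

Halving the desired margin of error quadruples the required sample size, which is why very tight precision targets quickly become expensive.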

Sample sizes with small populations

It is intuitively obvious to most people that when sampling from a small population you can use a smaller sample size. This intuition is wrong. Unless the sample size is going to be 20% or more of the population, the actual size of the population is pretty much irrelevant to the width of the confidence interval, and thus is not a factor in working out the required sample size.
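One way to see this is through the finite population correction (FPC), a standard factor that scales the margin of error when sampling without replacement from a population of size N. The sketch below, with the survey's n = 312, shows that the factor stays close to 1 until the sample is a large fraction of the population:

```python
import math

def fpc(n, N):
    """Finite population correction factor, sqrt((N - n) / (N - 1)).
    Multiplies the margin of error when sampling n from a population of N."""
    return math.sqrt((N - n) / (N - 1))

# With n = 312, the correction barely matters until the population is small:
for N in (10_000_000, 100_000, 10_000, 1_560):
    print(f"N={N:>10,}: margin shrinks by factor {fpc(312, N):.3f}")
```

At N = 1,560 the sample is 20% of the population and the margin shrinks by only about 10%; for any large population the factor is indistinguishable from 1, which is why population size can usually be ignored.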

That is, the estimated confidence interval of 5.6 to 6.2 is the same width regardless of whether the study had been conducted in a population of 10,000 or 10 billion people. This is mostly good news: it means there is virtually never a need for super-large samples (e.g., 10,000 or more). Even in China, the sample used to measure TV ratings is only 14,650,[1] and it is that large not because the population is so large, but because there is a need to divide the country into regions and have relatively narrow confidence intervals within each region. The flipside is that market research in Australia is comparatively expensive: even though the population is relatively small, the sample sizes still need to be broadly equivalent to those in much larger countries (the Australian TV ratings sample is about one-third the size of the Chinese sample, even though the Chinese population is more than 50 times larger).

Nevertheless, when sampling from relatively 'small' populations (e.g., in business-to-business studies), it is commonplace to use a smaller sample size than in a survey of consumers. However, this is principally because the cost-per-interview is substantially greater in business-to-business surveys.

See also

Sample Sizes for Qualitative Research

Previous page

Deciding How to Conduct the Interviews

Next page

Setting Quotas

Notes

  1. For a simple random sample, the ± figure is 1.96s/√n, where s is the standard deviation; the width of the confidence interval is thus inversely proportional to the square root of the sample size.
  2. More precisely, sampling error is the general term to describe the existence of uncertainty due to the random selection of a sample, whereas a confidence interval is a specific measure of this sampling error.

References

  1. Green, Andrew (2010). From Prime Time to My Time: Audience Measurement in the Digital Age. London: Warc.