Asking the “right” people is half the challenge
9/9/14 / David Kennedy
We’ve been blogging a lot lately about potential problem areas for research, evaluation, and strategy. In thinking about research specifically, making sure you can trust results often boils down to these three points:
- Ask the right questions;
- Of the right people; and
- Analyze the data correctly.
As Kevin pointed out in a blog post nearly a year ago, #2 is often the crux. When I say “of the right people,” I mean making sure the people included in your research represent the people you want to study. Deceptively simple, but there are many examples of research gone awry due to poor sampling.
So, how do you find the right people?
Ideally, you have access to a source of contacts (e.g., all mailing addresses for a geography of interest, or email addresses for all members) and can randomly sample from that source (the “random” part is crucial, as it is what allows you to interpret the results for the overall, larger population later). However, those sources don’t always exist, and a purely random sample isn’t always possible.
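When a complete contact list does exist, the random draw itself is simple. Here is a minimal sketch in Python; the member emails and sample size are placeholders for illustration, not from any real project:

```python
import random

def draw_sample(contacts, sample_size, seed=None):
    """Draw a simple random sample from a complete contact list.
    Every contact has the same chance of being selected, which is
    what lets you generalize results back to the full population."""
    rng = random.Random(seed)  # a seed makes the draw reproducible
    return rng.sample(contacts, sample_size)

# Placeholder example: invite 500 of 10,000 members.
members = [f"member{i}@example.org" for i in range(10_000)]
invitees = draw_sample(members, 500, seed=2014)
print(len(invitees))  # 500
```

Regardless, here are three steps you can take to ensure a good-quality sample: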
- Don’t let just anyone participate in the research. As tempting as it is to simply email out a link or post a survey on Facebook, you can’t be sure who is actually taking the survey (or how many times they took it). While these open invitations can provide some useful feedback, they cannot be used to say “my audience overall thinks X.” The fix: Limit access through custom links, personalized invites, and/or passwords (see the first sketch after this list).
- Respondents should represent your audience. This may sound obvious, but getting your respondents to truly match your overall audience (e.g., customers, members) can be tricky. For example, some groups may be more likely to respond to a survey (females and older persons are often more likely to take a survey, leaving young males underrepresented). Similarly, very satisfied or dissatisfied customers may be more likely to voice an opinion than those who are indifferent or at least more passive. The fix: Use proper incentives up front to motivate all potential respondents, screen respondents to make sure they are who you think they are, and statistically weight the results on the backend to help overcome response bias (see the weighting sketch after this list).
- Ensure you have enough coverage. Coverage refers to the proportion of everyone in your population or audience that you can reach. For example, if you have contact information for 50% of your customers, then your coverage is only 50%. This may or may not be a big deal; it depends on whether those you can reach differ from those you cannot. A very real-world example is telephone surveys: coverage of the general population via landline phones is declining rapidly and is now nearing only half, and, more importantly, the type of person you reach via a landline vs. a cell phone survey is very different. The fix: The higher the coverage, the better. When you can only reach a small proportion via one mode of research, consider using multiple modes (e.g., online and mail) or look for a better source of contacts. One general rule we often use is that at least 80% coverage of a population is probably OK, but always ask yourself, “Who would I be missing?”
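On the first point, one simple way to limit access is to give each invitee a unique, hard-to-guess link. Here is a rough sketch of what that could look like; the survey URL and contact fields are hypothetical:

```python
import secrets

# Hypothetical base URL for the survey -- a placeholder, not a real endpoint.
BASE_URL = "https://survey.example.com/take"

def make_invites(contacts):
    """Attach a unique, unguessable token to each contact so every
    invitation link identifies one respondent and can be marked as
    used after a single submission."""
    invites = []
    for contact in contacts:
        token = secrets.token_urlsafe(16)  # per-person access token
        invites.append({"email": contact["email"],
                        "link": f"{BASE_URL}?token={token}"})
    return invites

# Placeholder contacts for illustration.
for invite in make_invites([{"email": "a@example.com"}, {"email": "b@example.com"}]):
    print(invite["email"], invite["link"])
```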
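On the second point, the backend weighting is, at its simplest, a ratio: each group’s weight is its share of the population divided by its share of the respondents. A small sketch with made-up shares, just to show the arithmetic:

```python
# Made-up shares for illustration: how each group breaks down in the
# population you care about vs. among the people who actually responded.
population_share = {"male 18-34": 0.20, "female 18-34": 0.20,
                    "male 35+": 0.28, "female 35+": 0.32}
sample_share = {"male 18-34": 0.10, "female 18-34": 0.22,
                "male 35+": 0.25, "female 35+": 0.43}

# weight = population share / sample share for each group
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Under-responding groups (young males here) get weights above 1, so each
# of their respondents counts for more in the weighted results.
for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
```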
Sometimes tradeoffs have to be made, and that can be OK when the alternative isn’t feasible. But at least being aware of those tradeoffs is helpful and will inform how you interpret the results later. Entire books have been written on survey sampling, but these initial steps will have you headed down the right path.
Have questions? Please contact us. We would be happy to help you reach the “right” people for your research.