Research gone wrong
11/29/09 / Kevin Raines
When conducting surveys, I often say that the best survey is one where 80 percent of the figures match your guesses and 20 percent surprise you.
Why? Well, you hope to learn something new, hence the 20 percent. But you also hope that a good proportion of the survey matches your view of reality. If not, it means either that you’re really out of touch with your topic or that something may be amiss with the survey. Neither of those is a happy thought.
When we first run the numbers on one of our surveys, we start with a common-sense check. If we were guessing the answers, what would we guess? Do the numbers seem reasonable in light of those guesses? Do the numbers make sense relative to each other? If all of the results in a survey seem counterintuitive, we start to worry and go through the numbers with a microscope. Even with a strong staff, we’ll sometimes mistype a weighting factor or mislabel a data field. The key is having the quality-control processes in place to fix those errors before they ever see the light of day.
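To make that concrete, here is a minimal sketch, in Python, of the kind of common-sense check I mean: write down your guesses before looking at the data, then flag any figure that lands far from its guess for manual review. The questions, guesses, and threshold below are hypothetical illustrations, not actual Corona data or procedure.

```python
# A minimal common-sense check: flag survey figures that deviate sharply
# from our prior guesses. All numbers here are hypothetical illustrations.

# Our best guesses before seeing the data (percent answering correctly).
guesses = {
    "named first U.S. president": 90.0,
    "named the two major parties": 85.0,
    "named the current vice president": 60.0,
}

# Figures reported by the survey (percent).
reported = {
    "named first U.S. president": 23.0,
    "named the two major parties": 43.0,
    "named the current vice president": 55.0,
}

# Any gap larger than this many percentage points gets a closer look.
THRESHOLD = 15.0

for question, guess in guesses.items():
    actual = reported[question]
    gap = abs(actual - guess)
    status = "REVIEW" if gap > THRESHOLD else "ok"
    print(f"{status:6} {question}: guessed {guess:.0f}%, "
          f"reported {actual:.0f}% (gap {gap:.0f} pts)")
```

By the 80/20 rule above, a healthy survey should leave most figures inside the threshold; a printout full of REVIEW flags is the signal to reach for the microscope.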
When I saw this study released recently by the Oklahoma Council on Public Affairs, alarm bells immediately went off. The survey, conducted more or less as a spot civics quiz among Oklahoma high school students, indicated a dismal grasp of basic American civics. Based on a sample of questions from the U.S. Citizenship Test given to immigrants, the survey reported that only 2.8 percent of Oklahoma high school students would pass the test. They also cite another survey with similar results in Arizona to back up their conclusion.
However, the responses to individual questions on this survey were brow-furrowing. Only 23 percent of students could name the first president of the United States? Only 43 percent could name the two major political parties in the U.S.? Those figures seem absurdly low. They don’t pass my common-sense test, and if we were finding those results on a Corona survey, I’d be hip-deep in the numbers trying to figure out whether they’re really accurate or we misplaced a decimal somewhere along the line.
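One way to check whether the numbers even make sense relative to each other is to ask what pass rate the item-level figures imply. Here is a rough sketch under loud assumptions: the quiz has 10 questions, passing means 6 or more correct (the citizenship test’s standard), and students answer questions independently. The first two per-question rates come from the reported results; the other eight are invented placeholders for illustration.

```python
# Rough internal-consistency check. Assumptions: 10 questions, pass = 6+
# correct, answers independent across questions. Only the first two rates
# below are from the reported results; the rest are made up.

p_correct = [0.23, 0.43, 0.30, 0.30, 0.30, 0.25, 0.25, 0.25, 0.20, 0.20]

# Distribution of the number of correct answers (Poisson binomial, via DP).
dist = [1.0]  # dist[k] = probability of k correct answers so far
for p in p_correct:
    new = [0.0] * (len(dist) + 1)
    for k, prob in enumerate(dist):
        new[k] += prob * (1 - p)   # this question answered wrong
        new[k + 1] += prob * p     # this question answered right
    dist = new

pass_rate = sum(dist[6:])  # probability of 6 or more correct
print(f"Implied pass rate: {pass_rate:.1%}")
```

The independence assumption is crude, since a student who knows one answer is likelier to know others, which fattens the tails and raises the pass rate. So this is only a first-pass coherence check, not a verdict on the survey.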
And I wasn’t the only one who found the results suspicious. Oklahoma State Representative Ed Cannaday, a former social studies teacher, questioned it based on his experience in the field, and it has been the subject of various blogs. The furor is rising.
Now, we haven’t seen the data and we haven’t reviewed the methodology. It’s possible that the results are accurate; after all, if we all knew the right answers ahead of time, we wouldn’t be doing surveys. It’s also possible that there was an innocent error in the methodology or the calculations, and of course there are other, less innocent scenarios as well. It would be imprudent to make any judgment until the data has been carefully reviewed.
So is this a rare survey where the results completely blow up our normal perception of reality? Or is it an example of “market research gone wrong”? I suspect we’ll find out soon enough.