
Radiance Blog

Musings on the art and science of research and strategy from the minds at Corona Insights

Latest Posts

How to creatively solve problems

I read an article in the Atlantic a few months ago describing how a surprising number of inventions and innovations in various fields are coming from people who are not experts in the field of interest. It reminded me of how some scientists have created computer games based on real-world problems, and people playing these games have helped solve some interesting problems (e.g., Foldit). In both the examples in the article and the real-world computer games, a slightly outside perspective helped companies or researchers solve an important problem.

It is easy to get into a rut when trying to solve a problem because our inclination is to immerse ourselves in the problem. Although carefully scanning a problem can help us spot errors, focusing on details does not help if the solution to your problem requires seeing the bigger picture and thinking creatively. Outsiders generally do not have the same specific level of detailed knowledge, so it’s easier for them to see the bigger picture. So here are some suggestions to help you come up out of your rut for a breath of fresh air and see your problems in a new light:

  1. Mentally distance yourself from the problem. When we think of something as distant from us, we tend to think about it more abstractly, and research has shown that thinking abstractly can lead to being more creative[1]. Imagining your problem as happening far in the future, in another country, or even to another person are all ways of creating mental distance between you and your problem.
  2. Walk away from the problem. No, seriously. And for those of you who made it a resolution to get more exercise, great news: you can now kill two birds with one stone. Leung and colleagues (2012) found that actually getting out of the box (i.e., the office) led to more creative problem solving[2]. You can make it even more impactful by walking outside, since immersion in nature also increases creativity[3].
  3. Watch some funny cat videos or whatever other videos make you laugh or put you into a good mood. If your boss walks by, just point to this blog post and explain that you are problem solving. Researchers have found in general that positive mood helps us creatively solve problems.[4]
  4. Sleep on the problem. Although getting enough sleep is sometimes difficult when there are tight deadlines, sleep can lead to insights that help us solve problems.[5]
  5. And of course, when you are completely stuck, find someone further removed from the problem to take a look at what you are doing. We all do this informally when we talk to friends and family about problems at work. (We often joke about how the friends and family of Corona employees are an invaluable asset to our company!) But sometimes you need to do it formally as well. Even though we are a research firm, we too have hired outsiders when we have needed to think creatively about our own company. Because sometimes you really are too close to a problem to see the forest for the trees.


[1] Forster, J., Friedman, R.S., & Liberman, N. (2004). Temporal construal effects on abstract and concrete thinking: Consequences for insight and creative cognition. Journal of Personality and Social Psychology, 87, 177-189.

[2] Leung, A.K.Y., et al. (2012). Embodied metaphors and creative “acts”. Psychological Science, 23, 502-509.

[3] Atchley, R.A., Strayer, D.L., & Atchley, P. (2012). Creativity in the wild: Improving creative reasoning through immersion in natural settings. PLoS One, 7, e51474.

[4] Isen, A.M., Daubman, K.A., & Nowicki, G.P. (1987). Positive affect facilitates creative problem solving. Journal of Personality and Social Psychology, 52, 1122-1131.

[5] Wagner, U., Gais, S., Haider, H., Verleger, R., & Born, J. (2004). Sleep inspires insight. Nature, 427, 352-355.

How to make sense of open-ended responses

As we’ve pointed out before, including an open-ended question or two on a survey can be incredibly enlightening. After all, these kinds of questions really bring the attitudes and beliefs of respondents to life and leave the researcher with a rich pool of genuine opinions on a topic. However, open-ended data can sometimes present an analytical challenge – just how do you make sense of all of these unique answers? As it turns out, there are several options when it comes to analyzing data from open-ended questions, outlined below.

Do nothing. There is always the option to not do anything with open-ended responses. Of course, it is certainly a best practice to review all of the open-ended responses to a question just to get a sense of what folks are saying. However, if open-ended questions yield little more insight than the closed questions within a survey or if there aren’t any clear trends in the data, a formal analysis may not be warranted. This might happen with surveys that have small samples or with data that is elicited from answer options such as “Other (please specify)”.

Coding. Probably one of the most commonly used approaches to understanding open-ended responses is to code them. While there are some software options available that can offer computer assistance to code and analyze open-ended responses, we manually code this data ourselves at Corona in order to ensure accuracy and consistency. As a first step, we read through responses so that we can get to know the data and see some of the trends that emerge. Then, we develop a set of codes or “buckets” for these different trends, re-read through each response, and assign the codes accordingly – sometimes even applying more than one code to a single text response. By doing this, we can make sure that we’re identifying clear trends, issues and ideas suggested by respondents.  Once we’re done applying the codes, we can then quantify these different categories and see how often each of them emerged in the data.
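
If it helps to picture that final quantification step, here is a minimal sketch in Python; the codes and coded responses are hypothetical, invented purely for illustration rather than drawn from an actual project.

```python
from collections import Counter

# Hypothetical coded responses; a single response may carry more than one code.
coded_responses = [
    ["price", "service"],
    ["service"],
    ["location"],
    ["price"],
    ["service", "location"],
]

# Count how often each code was applied across all responses.
code_counts = Counter(code for codes in coded_responses for code in codes)

total = len(coded_responses)
for code, count in code_counts.most_common():
    print(f"{code}: {count} mentions ({count / total:.0%} of responses)")
```

Note that because a single response can carry multiple codes, the percentages can sum to more than 100%.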

Sentiment analysis. Another option for analyzing open-ended data is to determine the attitude or inclinations of a participant based on the “sentiment” or “feelings” embedded in their response text. Again, this can be done using software or manually. One frequently used approach for this type of analysis is to determine the polarity of a certain response – in other words, is it positive, negative or neutral? For example, we might use sentiment codes to categorize the following responses:

  • “It was a great restaurant with amazing service.” – Code: Positive
  • “I had a terrible dining experience at the restaurant.” – Code: Negative
  • “I ordered a pizza at the restaurant.” – Code: Neutral

Although it is important to realize that sentiment analysis (like coding, above) can be subjective, it provides a relatively straightforward and efficient way to understand the general nature of responses.
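
As a rough illustration of how polarity coding can be automated, here is a minimal Python sketch using small hand-picked word lists. The word lists and scoring rule are purely illustrative assumptions, not a production sentiment model; real projects would typically use a fuller lexicon or a dedicated sentiment library.

```python
# A minimal, illustrative polarity check using small hand-picked word lists.
POSITIVE = {"great", "amazing", "excellent", "friendly", "delicious"}
NEGATIVE = {"terrible", "awful", "rude", "slow", "disappointing"}

def polarity(response: str) -> str:
    # Strip basic punctuation and lowercase each word before checking the lists.
    words = [w.strip(".,!?").lower() for w in response.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "Positive"
    if score < 0:
        return "Negative"
    return "Neutral"

for text in [
    "It was a great restaurant with amazing service.",
    "I had a terrible dining experience at the restaurant.",
    "I ordered a pizza at the restaurant.",
]:
    print(polarity(text), "-", text)
```

Running this reproduces the three example codes above; in practice, responses that mix positive and negative language are exactly where human judgment still matters.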

Word clouds. A fun way of making sense of open-ended data is to create a word cloud, which allows the researcher to quickly visualize common words and themes from qualitative feedback. There are several different ways a word cloud can be created, and they all can help a researcher see the bigger picture when it comes to open-ends. One popular method compiles a word cloud based on word frequency. The more often a particular word appears in the responses to a particular question, the larger the word will appear in the cloud. I created a sample cloud of this sort using text from Corona’s February Radiance blog posts. Clouds can also be created from “tags” that a researcher applies to different words in open-ended responses, much like the coding described above. Again, the more often a tag occurs, the larger it is displayed in the word cloud.

There are some important considerations to keep in mind when choosing to use a word cloud for open-ended analysis. For example, some words that are similar will show up separately in a cloud, such as “Texas” and “TX.” Also, if some words aren’t kept together, they can lose their meaning (e.g., seeing the words “not fun” in a word cloud is different than seeing “not” and “fun” separately).
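
For a frequency-based cloud, a minimal sketch along these lines is shown below. It assumes the third-party Python wordcloud package (pip install wordcloud) and matplotlib, and the sample responses are invented for illustration.

```python
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# Hypothetical open-ended responses.
open_ends = [
    "The staff were friendly and the service was fast.",
    "Service was slow but the food was great.",
    "Great food, great service, friendly staff.",
]

# Build a frequency-based cloud; common filler words are dropped via STOPWORDS.
text = " ".join(open_ends)
cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords=STOPWORDS).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```

Because this approach counts single words, phrases like “not fun” are split apart unless you take extra steps to keep them together, which is exactly the caveat noted above.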

Overall, although open-ended questions can have their disadvantages (e.g., time, cost, etc.), we know that they can provide a wealth of information and help tell a story behind the data. The key is to integrate open ends in a meaningful and strategic way when a survey is designed. As Dave said in his last blog post: better insights start with better data that start with better design.

Four questions to ask before starting your evaluation

Evaluation is a helpful tool to support many different decisions within an organization.  Evaluation can take on many forms (e.g., summative, formative, developmental, outcomes, process, implementation, etc.), and the first step is to identify what kind of evaluation will be most useful to you right now.  Regardless of whether you need to measure your outcomes or refine your processes, in order to plan your evaluation you will first need to get a handle on these four questions:

  1. What are you trying to accomplish with your work?  What are your goals?  How do you hope to change the community, the individuals you serve, the policies or systems in which you operate?
  2. What are you doing to get there?  What are the activities you’ve chosen to work toward your goals?  Do you operate one program or many?  Do you lobby for policy changes?  Do you run educational campaigns?  How do your activities align with your goals?
  3. How stable are your activities year over year?  Does your program run like a well-oiled machine with clear rules for operation?  Are you looking to make improvements to how you carry out your activities, or changes to your mix of activities?  Do you plan to remain nimble in your actions, responsive to changes in the environment, rather than pursuing a fixed set of activities?
  4. What are you hoping to gain from the evaluation?  Do you need to document your outcomes for a sponsor or granting agency? Are you looking for ways to improve your internal communications or efficiencies? Do you need to determine which of your strategies is the most effective to pursue going forward?

Answering these questions will help determine the kind of evaluation you need, and also help to identify any gaps between what you’re doing and where you’re trying to go.  Together they will put you on the path to a productive evaluation plan.

4 ways to report customer satisfaction

In my previous post we discussed two common types of satisfaction surveys.  In this post we’ll touch on the many ways to report results.

Suppose we have the following question:

Q: Taking into account all of your experiences with X, please rate your overall satisfaction with X:

  • Extremely satisfied
  • Moderately satisfied
  • Slightly satisfied
  • Neither satisfied nor dissatisfied
  • Slightly dissatisfied
  • Moderately dissatisfied
  • Extremely dissatisfied

How would you go about reporting the results? (For simplicity, I’m only focusing on one simple question; other question/response types may require different summaries, and several questions could be combined into a common metric as well.)

[Image: Corona Insights satisfaction measure examples]

Frequency distribution.  Report the number of responses for each response option.  This option reports everything, but leaves a lot of points to track.  What do you care about the most? Any satisfied response? Only the most satisfied?

Top Box/Top Two Box.  Here we look at the percent who responded “extremely satisfied” (top box) or “extremely + moderately satisfied” (top two box). It provides an easy, single metric to track.  It assumes we care most about the most satisfied customers while ignoring movement on the lower end of the scale (we miss seeing if we move people from dissatisfied to neutral).

Net.  To calculate a “net” score we subtract the percentage of responses on one part of the scale from the percentage on another.  Often, this means taking the most satisfied responses and subtracting the least satisfied.  This takes into account more of the scale, but you have to be careful where you make your breaks (Who do you count as satisfied? In this example, it probably would not be all of the top three boxes.)  The Net Promoter Score™ is the most popular version of net scoring.

Mean.  Finally, we can calculate the average level of satisfaction by assigning a number to each response option and then calculating the mean: “Extremely satisfied” = 7, “Moderately satisfied” = 6, and so on (yes, we’re treating ordinal data as interval data). This gives us a single metric that takes into account the entire scale, and it is often a good measure for tracking because it places the emphasis on trends rather than a single point.  However, it’s often not as intuitive for management.  What does a 6.2 mean? Saying 80% of customers are satisfied (as with a top box score) can be more concrete.
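
To make the arithmetic concrete, here is a minimal Python sketch that computes each of these summaries from one hypothetical frequency distribution on the 7-point scale above. The counts are invented, and where to place the break for the “net” score is a judgment call, as noted.

```python
# Hypothetical counts for the 7-point scale above, from most to least satisfied.
scale = ["Extremely satisfied", "Moderately satisfied", "Slightly satisfied",
         "Neither satisfied nor dissatisfied", "Slightly dissatisfied",
         "Moderately dissatisfied", "Extremely dissatisfied"]
counts = [120, 180, 90, 60, 30, 15, 5]   # frequency distribution
values = [7, 6, 5, 4, 3, 2, 1]           # ordinal scores used for the mean

n = sum(counts)
pct = [c / n for c in counts]

top_box = pct[0]                              # extremely satisfied only
top_two_box = pct[0] + pct[1]                 # extremely + moderately satisfied
net = (pct[0] + pct[1]) - (pct[5] + pct[6])   # top two minus bottom two (one possible break)
mean = sum(v * c for v, c in zip(values, counts)) / n

print(f"Top box:     {top_box:.0%}")
print(f"Top two box: {top_two_box:.0%}")
print(f"Net:         {net:.0%}")
print(f"Mean:        {mean:.1f} (on a 1-7 scale)")
```

With these made-up counts, the four summaries tell noticeably different stories about the same data, which is why it matters to decide up front what you most want to track.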

This, of course, is just a primer and there are additional ways to summarize and report data, satisfaction, loyalty, or otherwise.  Keeping your goals and audience in mind, as well as fine tuning as you go, are the best ways to effectively communicate results.  Corona works with each client to develop a research program and reporting process that is tailored to their goals.  There is no one-size-fits-all approach.

Have you had experience with these scales? What worked best for you?

How to ask demographic questions

Asking demographic questions (e.g., age, gender, marital status) should be the easiest of survey questions to ask, right?  What if I told you that asking someone how old they are will yield different results than asking what year they were born in, or that asking a sensitive question (e.g., How much money did you make last year?) in the wrong way or at the wrong time may cause a respondent to abandon the survey?

Considering today’s identity security concerns, social desirability bias, and declining response rates, asking demographic questions is full of potential pitfalls. Although they can be tricky to gather, demographic data are often critical to revealing key insights.  In this post, we present three tips on how best to ask demographic questions on a survey.

  • When to ask demographic questions: Our general rule of thumb is to ask demographic questions at the end of a survey, when survey fatigue is less likely to influence answers. Respondents are more likely to answer demographic questions honestly, and will have a better survey-taking experience, if they have already seen the other questions in the survey. However, we sometimes find it is best to ask a few easy demographic questions at the beginning, so survey-takers start to feel comfortable and see that their feedback will be useful. For example, when researching a specific place (like a city or county), I like asking participants how long they have lived in that place as the first question on the survey.
  • How you ask the question will determine the type of data you will collect: It is important to consider how demographic data will be used in analysis before finalizing a survey instrument; not doing so might make it difficult to answer the study’s research questions. One consideration is whether to collect specific numbers (e.g., Please enter the number of years you have lived in your current home) or to provide a range of values and ask participants to indicate which range best describes them (e.g., Have you lived in your current home for less than 1 year, 1-2 years, 3-5 years, etc.?).  This decision depends on several factors, the primary one being how the data will be used in analysis.  Collecting specific numbers (i.e., continuous data) typically allows for more advanced analyses than responses within a range of numbers (i.e., categorical data), but these advanced analyses may not be needed, or even suitable, to answer your research questions.  The sensitivity of the question is also a factor; for example, survey-takers are more likely to indicate that they fall within a range of income levels than they are to provide their exact income. In our experience, the benefit of collecting continuous income data is not worth the cost of causing participants to skip the income question.
  • Information needed to ensure the results represent the population: It is common for certain groups of people to be more likely to respond to a survey than other groups. To correct for this, we often calculate and apply weights so that the data more closely resemble the actual study population.  If you plan to collect demographic data in order to weight your results, then you will want to match survey question categories with the categories in the data you will use for weighting.  For example, if you would like to weight using data from the U.S. Census, then you will want to use the same ranges that are available at the geographic extent of your analysis.  Keep in mind that some demographic variables are reported in finer categories for larger geographic areas (e.g., states) and in coarser categories for smaller geographies (e.g., census tracts). All of these factors must be considered before collecting data from the population. (A minimal sketch of how weighting works follows this list.)

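Here is a minimal sketch of one common weighting approach (post-stratification on a single variable). The age categories, population shares, and respondent counts are hypothetical, and this is not a description of Corona’s actual weighting procedure.

```python
# Hypothetical example: weight survey respondents so the age mix matches the population.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}   # e.g., from Census data
sample_counts    = {"18-34": 50,   "35-54": 120,  "55+": 230}    # respondents per category

n = sum(sample_counts.values())
sample_share = {grp: cnt / n for grp, cnt in sample_counts.items()}

# Each respondent's weight is the population share divided by the sample share for their group.
weights = {grp: population_share[grp] / sample_share[grp] for grp in population_share}

for grp, w in weights.items():
    print(f"{grp}: weight = {w:.2f}")
```

Respondents in underrepresented groups get weights above 1 and those in overrepresented groups get weights below 1, so weighted results more closely match the population mix.
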
Haphazard demographic questions can decrease, rather than increase, the value of a survey. At Corona, we always carefully consider research goals, potential analyses, and the population when we design survey questions. Designing a survey might not be as easy as it appears, and we have many more tips and insights than we could share here.  If you would like guidance on how best to ask demographic questions on your survey, contact us.  We promise that asking us for guidance is easier than asking demographic questions.

Societal values in music

We stumbled across an interesting data visualization today, which shows how commonly different words or phrases have appeared in Billboard’s Top 100 songs over the past 50 years or so.

As we scroll through the tables, the most obvious pattern is the increase in profanity (described as “foul words”) since 1990.  Prior to that era, it was almost unheard of to include profanity in a popular song, but … times have changed.

However, we find some more subtle patterns to be more of interest.  The word “love” has become notably less common since the turn of the century, along with the word “home”, and in its place we now hear more references to “sex” and “money”.  Is this a reflection of a less grounded society?  Or is it a contributing factor?

Why pay research participants?

We’ve probably all received some type of payment for participating in research before, whether it was for completing a survey, participating in a focus group or online community, or taking part in some other form of research.

But should we be paying people for their participation?  While it would be nice if people were intrinsically motivated to take surveys, the fact is people are busy, and while your survey is very important to you, it may not be to them.

Among the reasons to include an incentive:

  • To say thanks.  Participating takes time, and whether the participant is a current customer or not, showing appreciation for their time is a nice gesture.
  • Boost response.  How many people do you need to invite to get one willing participant? Offering an incentive can reduce this number.  Sure, sometimes you could just send out even more research invitations, but many times you’re going to be limited.
  • Boost quality.  In research talk, this means reducing our non-response bias.  In other words, if only 1% of people take your survey, are those 1% different from the 99% who didn’t?  And if so, are the results really representative of your entire audience?  The higher response rate you get, the more confidently you can say your results aren’t suffering from this type of bias.
  • Lower cost.  While this may seem contradictory, if including an incentive increases willingness to participate enough, then sampling and recruiting may become easier and therefore less expensive.

Corona regularly tests incentives and their impact on response rates and data quality.  On more than one occasion we’ve not only seen a boost in overall response, but a boost in the very type of respondent we were hoping to reach most, including tough audiences like customers/donors who had left our client, unhappy customers, and so on.  The reason we conducted those surveys was to find out how our clients could improve, and an incentive provided the boost those audiences needed to be willing to give their feedback.

So, what type of paid incentive is best?  The answer is, “it depends.”  The incentive should be tailored to the audience and what’s being asked of them.  The incentive should have broad appeal so as not to inadvertently bias the results due to one group being significantly more attracted to the incentive than another group.

What incentives have you tried? Are there ones you found particularly effective in boosting response?

Imagine 2020 launch

Yesterday, Denver Mayor Michael Hancock revealed Denver’s first cultural plan in 25 years. This strategic plan, written by Corona Insights in partnership with Denver Arts & Venues, will fuel the next era for our city’s art, culture and creativity. What a treat it was to attend the press conference, see the final printed plan and hear firsthand the excitement felt by city leaders and residents.

Corona leveraged its expertise in strategy, data and market research to serve the company’s hometown. The result? A community-centered plan designed to achieve a seven-part vision. From finding more art around every corner, to learning over a lifetime and supporting local artists, Denverites hunger for more art.

What can you do? Go to www.imaginedenver2020.org and check out the plan.  There will be an official release party on Thursday at 6pm. Come early for a presentation by Corona Insights of the research behind the plan, starting at 4pm.  Then stay tuned to see how you can get involved.

We ART Denver.
Denver’s Mayor Hancock with a local model wearing a designer dress by Mona Lucero. The dress was designed specifically for the reveal of Denver’s Cultural Plan.
The Denver Convention Center installed 8 new art pieces by Colorado artists. These pieces were created by Ian Fisher (www.robischongallery.com).
Executive Director of Denver Arts and Venues, Kent Rice.
Another of the new Convention Center pieces, created by Roland Bernier (www.bernierart.com).
Another of the new Convention Center pieces, created by Derrick Velasquez (www.robischongallery.com).
Mayor Hancock in front of “The Heavy is the Root of the Light” by Mindy Bray.

What are you measuring when you ask your customers, “Are you satisfied?”

Businesses, governments, and nonprofits often ask those who come into contact with them how satisfied they are with X.  You’ve undoubtedly been asked this yourself in the past, and perhaps you’ve even run your own customer feedback (often dubbed Voice of the Customer) program.  Doing so is smart, as it can uncover problem areas and give you a chance to resolve lingering issues.

However, what are you measuring when you ask someone whether they are satisfied?  There are two general areas in which we measure satisfaction:

  1. Transaction- or event-based.  This is when we ask someone how satisfied they are with their recent interaction with an employee, a service received, or some other specific exchange between your organization and the customer.
  2. Relationship.  This is when we ask how satisfied they are overall with their relationship with your organization.

The former helps us diagnose very specific issues and uncover unresolved issues with their last interactions.  The latter gives us a snapshot of the organization overall and not only the most recent interaction.

The challenge is that organizations often use a transaction survey as a measurement of the broader relationship.  Depending on the nature of your interactions, asking people about their overall satisfaction may be appropriate, but often a question such as “How satisfied were you with your purchase/service call/donation?” is used as a proxy for overall organizational satisfaction.  The issue here is that someone can have one bad experience, and that won’t necessarily translate to low satisfaction overall.

For example, suppose you called your cell phone company and had a bad experience on the phone – it took too long to reach a human, you were transferred multiple times, and so on.  A survey of that experience would likely show dissatisfaction.  However, you may still be happy with the quality of service and the price you pay.  A survey about your overall relationship may show positive results, even if slightly dented by the recent episode.

Both types of satisfaction surveys have their place, and knowing what you’re trying to measure can help you refine the question you ask and how you interpret the results.  Better insights start with better data that start with better design.

In an upcoming blog post we’ll discuss the different ways to report satisfaction and which may be best for your needs.

Is cluster sampling a good fit for your survey?

Here at Corona, we strive to help our clients maximize the value of their research budgets, often by suggesting solutions that get the job done faster, better, or at a reduced cost. In survey research, developing an accurate sampling frame (i.e., a list of the study population and their contact information) is instrumental for success, but sometimes developing or acquiring a sampling frame can be time consuming, expensive, or impractical.  Using a cluster sampling technique is one potential solution that can save time or money while maintaining the integrity of the research and results.

What is cluster sampling?  Cluster sampling, as the name implies, groups your total study population into many small clusters, typically defined by a proximity variable.  For example, street blocks in a neighborhood are clusters of households and residents, and schools represent clusters of employees who work in the same school district. The main difference between simple random sampling and cluster sampling is that instead of selecting a random sample of individuals, you select a random sample of clusters.  This approach provides a representative sample that is appropriate for the use of inferential statistics that draw conclusions about the broader population.

How to use cluster sampling: First, make sure the nature of your research question is compatible with cluster sampling; if your analysis will require completely independent respondents, then this is probably not the best approach. Second, consider the configuration of your population; you must be able to group people by defined boundaries, such as city blocks or office building floors.  After grouping your population into small clusters, use a random number generator to draw a random sample of clusters (rather than a sample of individuals).  Typically, every individual from the selected clusters is sampled, although you can combine your sampling plan with other techniques such as stratified or systematic sampling. As long as 1) you can match every person in the population with a cluster, 2) you have an appropriate person-to-cluster ratio, and 3) you have a complete list of clusters, you can use these groupings as a sampling shortcut.
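
To illustrate the mechanics, here is a minimal Python sketch of drawing a random sample of clusters and then surveying everyone in the selected clusters. The routes, times, and rider IDs are hypothetical and trimmed to a handful of entries for readability.

```python
import random

# Hypothetical clusters: every rider is assigned to exactly one route/time cluster.
clusters = {
    "Route 10 - 8am": ["rider_001", "rider_002", "rider_003"],
    "Route 10 - 5pm": ["rider_004", "rider_005"],
    "Route 22 - 8am": ["rider_006", "rider_007", "rider_008", "rider_009"],
    "Route 22 - 5pm": ["rider_010", "rider_011"],
    "Route 31 - 8am": ["rider_012", "rider_013", "rider_014"],
}

# Draw a simple random sample of clusters, then survey everyone in each selected cluster.
random.seed(42)  # fixed seed only so this illustration is reproducible
selected = random.sample(list(clusters), k=2)

respondents = [person for cluster in selected for person in clusters[cluster]]
print("Selected clusters:", selected)
print("People to survey:", respondents)
```

The key point is that the random draw happens at the cluster level, not the individual level, so you never need a complete list of individual riders up front.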

When might cluster sampling be useful? Cluster sampling is useful when you don’t have enough resources to develop a complete sampling frame or when it takes significant effort to distribute or collect surveys (such as going door-to-door).  For example, if we wanted to survey bus riders within a city, it would be impractical to develop a list of all bus riders on any given day, let alone to find our random sample of individuals and give them all surveys.  Cluster sampling allows us to select a random sample of bus routes and times, and then survey everyone on those buses.  Although individual clusters may not be representative of the population as a whole, when you select enough clusters at random, your sample as a whole will be representative.

Potential problems: Cluster sampling should be applied with caution, and there are some disadvantages to using cluster sampling compared to a simple random approach.  It is better to sample more, smaller clusters than fewer, larger clusters.  For example, for a nationwide survey it is better to cluster by counties than by states. If your clusters are too few and too large, you might draw a sample that does not adequately represent the population.  The size and homogeneity of each cluster and your desired final sample size also impact the viability of cluster sampling.

At Corona, we start fresh with each research project, and we are full of solutions that can help maximize the value of your research budget and resources. If you are struggling with how to reach your population of interest, give us a call – maybe we can shed some light on the situation.
