A/B tests are common in our work at Corona Insights. An A/B test is performed when a client wants to compare two or more versions of a campaign, slogan, advertisement, or other creative materials to see which is most effective or appealing. For example, an organization could have a few different versions of an advertisement or a slogan, and they might want help deciding which is best. Using an A/B test, we can help ensure their strategies are data-driven and resonate with their audiences or communities.

Implementation

To perform an A/B test, each participant is randomly assigned one item, called a variant. We then ask participants questions about that variant. Sometimes the questions are very simple and straightforward: participants might be asked how much they like the material they were assigned, and the results for each variant can then be compared to declare a “winner.”

Through these questions, we can learn a lot about each item: how each one is perceived, and which is more popular among the public.
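As a minimal sketch of that basic workflow, the snippet below randomly assigns participants to variants and then compares average ratings to pick a winner. The function name, the 1–5 rating scale, and all of the data are hypothetical, not from an actual Corona study:

```python
import random
import statistics

def assign_variant(participant_ids, variants, seed=0):
    """Randomly assign each participant to exactly one variant."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    return {pid: rng.choice(variants) for pid in participant_ids}

assignments = assign_variant(range(10), ["A", "B"], seed=1)

# Hypothetical 1-5 "how much do you like it?" ratings, grouped by variant
ratings = {
    "A": [4, 5, 3, 4, 5, 4],
    "B": [3, 3, 4, 2, 3, 3],
}

# The variant with the higher mean rating is declared the "winner"
winner = max(ratings, key=lambda v: statistics.mean(ratings[v]))
```

In practice a formal significance test would accompany the comparison of means, but the shape of the analysis is the same: one group of responses per variant, compared head to head.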

But these tests can be used to learn so much more about the materials and can help organizations develop a deeper understanding of what resonates with their audience. In many circumstances, we use A/B tests to learn about both the effectiveness and the aesthetic qualities of each variant. To measure the effectiveness of an advertisement, for example, we might ask some of the following questions:

  • Do you understand what this advertisement is for?
  • Does this advertisement make sense to you?
  • Does this advertisement make you feel more or less confident in what is being advertised?

To measure the aesthetic quality of an advertisement, we might ask some of these questions:

  • Do you find this advertisement visually appealing?
  • Is this advertisement easy to read?
  • Does this advertisement grab your attention?

It may be the case that one variant is more effective than the other in sharing a message, but one is more aesthetically pleasing to viewers. In that case, the client might borrow some of the aesthetic characteristics from one and apply them to the other, creating a final product that is visually and informationally superior.

A/B Test Limitations

There are, of course, some limitations to A/B tests. The validity of the test depends largely on the sample of people seeing each variant, and presenting the results of an A/B test as averages might obscure how specific groups react to each variant. For example, suppose two social media posts advertise a public health service. Option A might be received well among young men, but Option B might perform better overall. If the advertised service is particularly relevant for young men, declaring Option B the winner of the test would be misleading for the client. Great care must be taken to keep the campaign's bigger-picture goals in view so the test results are interpreted correctly.
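The subgroup caveat above can be illustrated with a small sketch: the overall winner and the winner among a key subgroup can disagree. The ratings and group labels here are invented purely for illustration:

```python
from statistics import mean

# Hypothetical 1-5 favorability ratings, split by variant and subgroup
responses = {
    ("A", "young men"): [5, 5, 4, 5],
    ("A", "everyone else"): [2, 3, 2, 3, 2, 3],
    ("B", "young men"): [3, 2, 3, 2],
    ("B", "everyone else"): [4, 4, 5, 4, 4, 4],
}

def mean_rating(variant, group=None):
    """Mean rating for a variant, optionally restricted to one subgroup."""
    values = [r for (v, g), rs in responses.items() for r in rs
              if v == variant and (group is None or g == group)]
    return mean(values)

overall_winner = max("AB", key=mean_rating)
young_men_winner = max("AB", key=lambda v: mean_rating(v, "young men"))
```

With these made-up numbers, B edges out A on the overall average even though A is clearly preferred by young men, which is exactly the situation where reporting only the overall winner would mislead.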

Another limitation comes with the materials themselves. A/B tests can help determine which item is better, more appealing, more impactful, and so on, but they do not necessarily result in the ideal item. A/B tests can only compare materials to one another; they cannot necessarily produce the kinds of insights needed for a totally new concept.

Here at Corona, we also use A/B tests internally to improve our data collection. We regularly perform these tests to see which email subject lines produce the most survey takers, or whether there is a difference in response rates between mailed invitations printed in color versus black and white. We are constantly looking for ways to ensure higher quality data from a representative sample of the population, and these tests are a great way to track what is resonating with people, which ultimately benefits our clients.
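A response-rate comparison like the mailed-invitation example can be checked with a standard two-proportion z-test. The sketch below uses only the Python standard library; the counts (500 invitations per arm, 60 vs. 40 responses) are hypothetical and not from an actual Corona mailing:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two response rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical mailing: 500 color invitations (60 responses)
# vs. 500 black-and-white invitations (40 responses)
z, p = two_proportion_z(60, 500, 40, 500)
```

With these invented counts, the 12% vs. 8% response-rate gap comes out statistically significant at the usual 0.05 level, which is the kind of evidence that would justify standardizing on the better-performing invitation.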