Email A/B testing is generally used to select between two variations of an email message so that the winning version can be sent to the broader population. A/B testing is a comprehensive topic; we will go into it in depth in a future blog post. In preparation for that post, we want to examine the idea of a “confidence level,” which plays a big role in interpreting A/B testing results. For example, the result of an A/B test might say “Variation B wins, with a 96% Confidence Level.” What does that mean? And how is it estimated?
Let’s look at an example. Say there are two variations of email creative that we want to test, and our desired outcome is more clickthroughs. We want to identify the email variation that generates the better clickthrough rate using a small list, so that we can use the winner for a bigger campaign down the line.
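To make the idea concrete, here is a minimal sketch of how a confidence level like “96%” can be estimated from raw test counts, using a standard two-proportion z-test. The click and send counts below are hypothetical, and the one-sided convention used here is just one common choice among A/B testing tools.

```python
# Sketch: estimating the "confidence level" of an email A/B test
# via a two-proportion z-test. All numbers below are made up.
from math import sqrt, erf

def confidence_level(clicks_a, sends_a, clicks_b, sends_b):
    """Return a confidence (0..1) that variation B's clickthrough
    rate truly differs from A's, using a one-sided z-test."""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    # Pooled clickthrough rate under the null hypothesis (no real difference)
    p = (clicks_a + clicks_b) / (sends_a + sends_b)
    # Standard error of the difference in proportions
    se = sqrt(p * (1 - p) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Standard normal CDF of |z|, computed via the error function
    return 0.5 * (1 + erf(abs(z) / sqrt(2)))

# Hypothetical small-list test: A got 40 clicks out of 1,000 sends,
# B got 60 clicks out of 1,000 sends.
print(confidence_level(40, 1000, 60, 1000))
```

With these made-up counts the function reports a confidence of roughly 98%, which is how a tool might arrive at a statement like “Variation B wins, with a 98% Confidence Level.”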