Test Smarter: Achieving Growth With Email Experiments That Actually Work

It’s impossible to discuss the subject of growth marketing without touching on the topic of A/B testing. After all, creative problem solving, innovation and evolution are essential elements of a growth mindset.

In Part 3 of our growth marketing series, we explored iteration and Minimum Viable Campaigns. That’s the perfect starting point for any growth marketer, but testing is essential when it’s time to level up your game and refine campaign strategies.

But bad testing can be a tremendous waste of time and resources—and often, poor tests are the norm rather than the exception.

Even the best marketers make their share of testing fumbles. Email marketing tests, in particular, are tricky—and easy to get wrong. 

This article—Part 4 in our growth marketing series—explores how you might be missing the mark on email experiments, and some tips to get testing right for your brand.

Keep Your Eye on the Prize 

If there’s one big blunder every marketer has made (myself included!) while deep in testing, it’s operating with blinders on. We get entrenched in minutiae and lose sight of the big picture.

Always remember—growth marketing is about the long game. A “winning” test may not be a “winner” in the grand scheme of things. 

In Part 1 of our series that defined growth marketing (and Part 2!), we emphasized the importance of long-term loyalty and the role of empathy in driving success.

Bottom line: Short-term wins are important, but they’re not the ultimate objective. Keep this in the back of your mind as you create any email experiment.

And never, ever implement tests that capitalize on cheap tricks. As a refresher, here are some examples we touched on in Part 1:

  • Subject lines that use “RE:” or “FWD:” to give the impression of an ongoing personal email exchange
  • An exaggerated sense of urgency
  • Unnecessary or phony apology emails for campaign mistakes
  • Anxiety-inducing fake order confirmation emails

Will these spammy stunts result in pops in engagement? Oh, you betcha! You’ll see “winning” results for certain. But that little bump in your pivot chart isn’t sustainable, or good business. These silly games will also deliver spikes in unsubscribes, spam complaints and negative customer sentiment.

That negative sentiment doesn’t drive long-term growth, and it damages the relationship you are cultivating with your audience, which is your ultimate objective. Avoid any test that may detract from building a sense of connection with your brand.

Don’t Waste Your Time

An email goes to the spam folder every time a marketer says “test everything.” It’s time to stop testing for the sake of testing, my friends.

It’s a pretty involved process to create and execute a good email experiment, so use your time wisely. Growth marketing is all about moving fast. But poorly planned tests are a time suck—the polar opposite of growth marketing.

Only test the campaigns that are going to deliver business benefits from an increased response rate. 

That means you should focus your energy on the campaigns that drive your audience to take meaningful actions that push them through the customer lifecycle. Consider what steps are essential to driving conversion at each point in their journey and prioritize tests that move the needle at those points.

Don’t waste your time trying to optimize, for example, transactional messages, which already benefit from healthy engagement and often don’t play a significant role in advancing bottom-line goals.

Some things shouldn’t be tested at all! Yeah, that’s an unpopular opinion, but from a growth perspective, it’s a no-brainer.

Implement Email Experiments That Work

A good test achieves the following four benefits:

1. Maximizes return while minimizing opportunity cost.

This means the “bad” versions go to as few folks as possible, while the “good” version reaches the biggest possible swath of subscribers.

2. Produces statistically meaningful results.

This is tricky! The 10/10/80 test (variant A goes to 10% of your list, variant B to another 10%, and the winner to the remaining 80%) has become a standard, but this ratio may be way off depending on the size of your audience and its average engagement KPIs. Micro-tests are also not worthwhile (for example, testing the same subject line both with and without an emoji).
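
To make that ratio concrete, here’s a minimal sketch in Python of how a 10/10/80 split can be carved out of a subscriber list. The list size and the helper name are hypothetical, not anything prescribed by a particular ESP:

```python
import random

def split_10_10_80(subscribers, seed=42):
    """Shuffle the audience, then carve out 10% for variant A,
    10% for variant B, and hold the remaining 80% for the winning send."""
    pool = list(subscribers)
    random.Random(seed).shuffle(pool)  # fixed seed keeps the split reproducible
    tenth = len(pool) // 10
    variant_a = pool[:tenth]
    variant_b = pool[tenth:2 * tenth]
    holdout = pool[2 * tenth:]
    return variant_a, variant_b, holdout

# Hypothetical 50,000-subscriber list
a, b, rest = split_10_10_80(range(50_000))
print(len(a), len(b), len(rest))  # 5000 5000 40000
```

Whatever ratio you land on, the mechanics are the same. The real question is whether each test slice is big enough to tell you anything, which is the next point.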

You may also not be testing enough variations. An “A” version and a “B” version alone are usually inadequate and often influenced by cognitive bias; more variants lessen the possibility of bias.

Above all, you must be certain your database is large enough to reach statistical significance. Testing may not be worthwhile at all if your audience isn’t sizable, especially if your typical open and click rates are already on the low side.
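
If you want a rough way to sanity-check a result before trusting it, a standard two-proportion z-test on the open counts will do. Here’s a minimal Python sketch; the send counts and open rates below are hypothetical:

```python
import math

def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
    """Two-tailed z-test for the difference between two open rates."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    # Pool the rates under the null hypothesis that both variants perform equally
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed, via the normal CDF
    return z, p_value

# Hypothetical: a 21% vs. 23% open rate on only 500 sends per variant
z, p = two_proportion_z_test(105, 500, 115, 500)
print(f"z = {z:.2f}, p = {p:.2f}")  # z = -0.76, p = 0.45: not significant
```

The same two-point lift on 5,000 sends per variant comes out significant (p ≈ 0.02), which is exactly why list size decides whether a test is worth running at all.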

Our partners at Phrasee have some great insights and interactive tools on their blog that can help you understand whether your tests deliver accurate, meaningful results—look for the interactive tools that advise how to pick strategy by list size and audience parameters.

3. Enables learning.

Track test results and glean what they tell you about your audience. You should be able to apply these findings to future campaigns, so you can do less testing going forward.

4. Does not use downstream metrics to determine a winner.

This statement sure gets marketers fired up! But the truth is:

  • We don’t usually control what happens beyond the click,
  • We can’t account for every random variable on the journey to conversion,
  • And outliers can skew results.

While you should certainly be looking at your downstream metrics, you are only testing for success at the nearest KPI. For example, a subject line test measures only open rate, and a CTA test measures only click rate.

In addition to addressing the relationship between database size and test accuracy, the Phrasee blog referenced above sheds light on why downstream KPIs as success metrics sound good in theory but aren’t reliable indicators of a winning test.

Also, Iterable customer Strava gave a terrific presentation (available on-demand) at our Activate 2019 conference that explores the correlation between email experiments and downstream conversions, while confirming that much of what happens beyond the click is outside of your control.

If you’re still hung up on the idea of downstream KPIs as success metrics because you believe you’ve seen it first-hand, then you’re probably not looking at the situation from the right perspective.

An example I hear often goes something like this: subject line “A” delivers low opens and high clicks, while subject line “B” delivers high opens and low clicks. This gets perceived as evidence that a metric other than open rate should be used to determine a subject line test winner.

But the truth is—if you’re seeing those results, then you’re not actually conducting an apples-to-apples subject line test. Instead, you’re testing a bigger strategy—most likely, curiosity vs. specificity. And that’s a whole different ball of yarn!

The Bottom Line

If you’re gonna experiment, do it right.

If you can’t execute good tests right now, it’s okay to put them on the back burner until you have the luxury of focusing on refined optimization. Invest your energy into Minimum Viable Campaigns in the interim to maximize the impact you can make for your brand.

If you’re looking for more information on creating tests that move the needle, check out our on-demand webinar, “Email and Cross-Channel Testing: You’re Doin’ It Wrong.”
