How powerful is A/B testing?
Well, aren’t you curious when you read about how companies claim to get thousands more leads overnight from a tweak as simple as changing the color of a button?
… Or when one change to a form on an important landing page boosts their revenue by thousands?
Don’t you wish you could really tighten up your marketing, quickly?
Methodical & rigorous A/B testing can help you achieve outsized results without spending quarters or years to get them. In this article we explain how A/B testing works, and why your competitors are already doing it, so your brand can succeed online.
What is A/B testing?
A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment where two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.
Running an A/B test that directly compares a variation against a current experience lets you ask focused questions about changes to your website or app and then collect data about the impact of that change.
Testing takes the guesswork out of website optimization and enables data-informed decisions that shift business conversations from “we think” to “we know.” By measuring the impact that changes have on your metrics, you can make sure that every change you roll out actually produces a positive result.
Just to give you a simple example, you might test product images with and without trust badges in the transactional email sent after a user has purchased from you, to see which version drives more upsells.
You’d randomly send the email with trust badges on the product images to half of your most recent cohort of purchasers and the email without badges to the other half, then track which one generated more revenue after a set number of days.
Knowing which product images generated more sales will give you the knowledge you need to improve your post-purchase funnel.
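If you want to picture what that random split looks like in practice, here’s a minimal sketch in Python; the variant names and example addresses are purely illustrative, and any real send would go through your email platform.

```python
import random

def assign_variants(purchasers, seed=42):
    """Randomly split the most recent cohort of purchasers into two email groups."""
    rng = random.Random(seed)                  # fixed seed so the assignment is reproducible
    shuffled = purchasers[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "trust_badges": shuffled[:midpoint],   # gets the email with badged product images
        "no_badges": shuffled[midpoint:],      # gets the plain product images
    }

# Hypothetical cohort pulled from your own order system
cohort = ["a@example.com", "b@example.com", "c@example.com", "d@example.com"]
print(assign_variants(cohort))
```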
Simple, right?
Well, running one A/B test will improve your chosen metric by a certain amount, but the more tests you run and the more knowledge you gain, the better your business will perform.
There are tons of things you can A/B test, including but not limited to:
- Homepage headlines
- Call to action (CTA) copy
- Ad copy
- Social sharing buttons
- Items offered in a contest
- Trust badges
- Special offers & discounts
- Signup forms
- Landing pages
- Email subject line copy
- Product images
And, thanks to the variety of A/B testing tools out there, it’s never been easier to experiment with pretty much anything.
There’s never an excuse to not be A/B testing at least something…
Why is A/B testing important?
It’s a sad fact, but there’s only so far you can take your business by reading up on best practices and taking shots in the dark. A/B testing allows individuals, teams and companies to make careful changes to their user experiences while collecting data on the results. This allows them to construct hypotheses and to learn why certain elements of their experiences impact user behavior. Put another way, it allows them to be proven wrong: an A/B test can show that their opinion about the best experience for a given goal doesn’t hold up.
Your audience and market are unique, so no one can definitively tell you which subject line is best for reducing cart abandonment; they can only tell you their experiences. And what if that doesn’t work for you? Where do you go from there? More than just answering a one-off question or settling a disagreement, A/B testing can be used to continually improve a given experience or a single goal, like conversion rate, over time.
Testing one change at a time helps you pinpoint which changes had an effect on visitor behavior and which did not. Over time, you can combine the effect of multiple winning changes from experiments to demonstrate the measurable improvement of a new experience over the old one. The only way to make smart, data-backed decisions about changes to your site, app, or marketing material is to look at the metrics, come up with a hypothesis, and test it yourself.
Think of A/B testing as tuning up a machine. There are certain parts of the machine that are more important than others.
If only 50% of people can manage to open the door of a car you built, that’s a much bigger issue than if 5% complain of a strange humming noise.
That’s why A/B testing is something you do at every stage of the funnel.
Let’s look at the funnel for an example direct-to-consumer (DTC) company selling beauty products:
Example of top of funnel A/B testing
Top of funnel activities typically focus on generating awareness, regardless of whether you are selling a B2B or B2C product. Remember, in this scenario you’re a DTC company selling beauty products. Let’s say that leads find out about your company mostly through PPC ads. You want as many qualified leads to click your PPC ad as possible, so you set up an A/B test to see which ad performs best.
You might change the copy, the URL and the keyword targeted, making incremental changes until your ad gets a higher clickthrough rate and lower cost-per-lead.
After clicking on your ad, visitors will be taken to a landing page. You could test the headline, CTA copy, or design of the page to see if you can bring more leads into the top of the funnel and move them through the process.
Pro Tip: On both the ad and the landing page you are testing, try adding trust signals to help build a deeper relationship between your brand and your potential buyers. Shoppers look for visual cues when they’re browsing, and trust marks suggest your brand has been vetted by a trusted third party, leading to higher CTRs, add-to-carts and completed sales conversions.
Example of middle of funnel A/B testing
It’s no good bringing in leads if you’re going to lose them before they get a chance to be sold on the benefits of your product.
After coming in at the top of the funnel, leads need to be educated about what your product can do for them.
That means getting them on your email list, pushing them towards a free trial, sending them a discounted offer, or sharing other educational material that helps them decide that your product is their best option.
Here you’d A/B test your welcome email, your subscriber email sequence, the kinds of content that get clicked, and whether a certain test group converts at a higher rate than the rest.
Examples of bottom of funnel A/B testing
In this vital stage, the customer is close to making a decision about buying your product(s). You’d test the messaging of your sales outreach emails, sales scripts, demo tactics and discount amounts.
Putting all of this together, you will end up with a finely tuned sales machine that brings in the right leads, nurtures them with the right education, provides incentivized discounts when necessary and then can effectively close the deal.
But here’s the thing about A/B testing you should also keep in mind…
There is no magic bullet. Not every test will be groundbreaking. A lot of the time, tests can do absolutely nothing. And the big wins that you read about are likely the result of many tests over a period of time.
The important thing about running as many tests as you can is that even a flat or losing test gives you evidence for why a direction isn’t worth pursuing, rather than just a hunch with no data to back it up.
An example A/B testing framework you can use to start running tests:
- Collect data: Your analytics will often provide insight into where you can begin optimizing. It helps to begin with high traffic areas of your site or app to allow you to gather data faster. Look for pages with low conversion rates or high drop-off rates that can be improved.
- Identify goals: Your conversion goals are the metrics that you are using to determine whether or not the variation is more successful than the original version. Goals can be anything from clicking a button or link to product purchases and email signups.
- Generate hypotheses: Once you’ve identified a goal, you can begin generating A/B testing ideas and hypotheses for why you think they will be better than the current version. Once you have a list of ideas, prioritize them in terms of expected impact and difficulty of implementation.
- Create variations: Using your A/B testing software, make the desired changes to an element of your website or mobile app experience. This might be changing the color of a button, swapping the order of elements on the page, hiding navigation elements, or something entirely custom. Many leading A/B testing tools have a visual editor that makes these changes easy. Make sure to QA your experiment to confirm it works as expected.
- Run experiment: Kick off your experiment and wait for visitors to participate! At this point, visitors to your site or app will be randomly assigned to either the control or the variation of your experience. Their interaction with each experience is measured, counted and compared to determine how each performs.
- Analyze results: Once your experiment is complete, it’s time to analyze the results. Your A/B testing software will present the data from the experiment and show you the difference between how the two versions of your page performed, and whether there is a statistically significant difference (there’s a minimal sketch of that check just after this list).
If your variation is a winner, congratulations! See if you can apply learnings from the experiment to other pages of your site, and continue iterating on the experiment to improve your results. If your experiment generates a negative result or no result, don’t worry. Treat the experiment as a learning experience and generate new hypotheses that you can test.
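Your A/B testing software will normally run the significance check for you, but here’s a minimal sketch of the kind of math involved, assuming a simple two-proportion z-test; the conversion counts are made up for illustration.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of control (A) and variation (B) with a two-sided z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                    # two-sided p-value from the normal CDF
    return p_a, p_b, z, p_value

# Made-up numbers: 1,000 visitors per arm, 110 vs. 135 conversions
p_a, p_b, z, p = two_proportion_z_test(110, 1000, 135, 1000)
print(f"control {p_a:.1%} vs. variation {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")
# A p-value below 0.05 is the conventional bar for calling the difference significant.
```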
Below are a handful of additional A/B test examples you can glean insights from.
Obama increased campaign contributions by 5% by testing the form
An A/B tester for the Obama campaign increased overall donations by 5% by making the donation form multi-step instead of putting everything on one page.
This worked because it didn’t instantly overwhelm the donor with all kinds of fields, but instead got them to make a small investment first with an easy decision: the donation amount.
After they’ve clicked, it feels like a commitment, so they are more likely to follow through and enter their billing details.
It works in a similar way to the incremental commitment sales tactic. The deeper a visitor gets into your funnel, the less likely they are to back out at any given time.
You can emulate this by breaking your signup form into steps or removing some fields from your email opt-in box.
Adding a trust badge increased Blue Fountain Media’s sales by 42%
Numerous studies have found a positive correlation between trust badges and increased conversions and sales.
Most of the time, visitors aren’t willing to interact with something if it isn’t blindingly obvious.
One example involved New York-based digital marketing agency Blue Fountain Media. They performed a simple A/B test on their sign up page to see what effect adding a Verisign trust badge would have.
Here’s what the original page looked like without the Verisign trust badge.
And here’s what the test page looked like with the Verisign trust badge.
The result? Far more people trusted the test version with the trust badge, and it led to a whopping 42 percent increase in sales. Their data showed that this simple change helped visitors feel more confident about providing Blue Fountain Media with their personal information.
Another example is Virtual Sheet Music, a company that provides downloadable sheet music for piano, guitar, violin, and other instruments. The company initially used a Verisign trust badge but removed it due to contractual agreements.
When this happened, they experienced a noticeable drop in sales. But once they put the trust badge back on their site, conversions increased by 31 percent. Virtual Sheet Music then went on to switch to an EV SSL certificate, which resulted in an additional 13 percent lift in sales.
Yet another study involved solar energy solutions provider, Clean Energy Experts. Adding a Verisign seal to their information portal helped them boost conversions by an astonishing 137 percent.
While I can’t say that every single brand will experience a dramatic lift in conversions by adding a trust badge, it’s clear that many companies do see a positive impact.
The lessons you can learn from this?
If you’re experiencing issues with shopping cart abandonment, aren’t getting the conversions or sales you want, or generally want to build more trust with customers, adding trust badges is a great way to do it.
Additionally, if you want a visitor to click a link, fill out certain form fields or take specific action on a CTA on an important landing page, your brand needs to put it right out in the open. That might mean breaking down an element like a hamburger menu into its individual items, or moving a call to action to a more prominent place.
No matter how you slice it, the proof behind this comes from eye-tracking heatmaps.
We know that readers’ eyes move in an F-shape along a page or screen, like this:
So with that in mind, design your pages so they highlight the most important elements to get more clicks and higher engagement rates.
Reassuring visitors increased Highrise signups by 30%
Often, testing your landing page’s headline can be the single best way to improve overall business performance. A simple test like the one Highrise carried out can give you a massive boost in leads.
Control:
Winner (+30% signup rate):
Emphasizing the 30-day free trial prominently in the main headline reassured visitors that they had nothing to lose by signing up and giving it a go.
The strong call to action of ‘Pick a plan to get started!’ could have also had an effect, especially since it tells the visitor the exact next step to take rather than giving a vague instruction.
The power of guarantees has been proven time and time again, and you can see it even on opt-in forms with copy like ‘We promise never to spam you’ or ‘I hate spam as much as you do’, like the line I use below my own email subscription sign-up forms. Consider this another form of social proof: when you see it, your brain automatically assumes that others feel the way you do. And guess what, it works!
Performable increased clicks by 21% simply by changing button color
Colors can have a profound effect on your reader’s response to your site. Think about it in the extreme and imagine a site that was made up of the world’s least favorite colors.
Below is a simple but legendary A/B test carried out by Performable. By just tweaking the color of their main call to action button from green to red, they increased clicks by 21%.
HubSpot’s analysis of the test is that the color didn’t matter as much as the contrast. Because Performable’s logo is green, a green call to action button was muted and blended into the background.
Changing it to red, green’s complementary color, made it stand out and made visitors want to click.
Changing one word increased clicks by 161%
Never underestimate the power of one word A/B tests. Sometimes they can have insane results.
Software firm Veeam tested the wording of the call to action that would put visitors in touch with the sales team. Here are the two versions:
Control:
Winner (+161.66% clicks):
To many would-be customers, ‘Request a quote’ sounds like it might be slow, take too much time, and connect you with a sales rep whose life will now be devoted to hounding you.
‘Request pricing’ sounds and feels better psychologically because it implies the pricing is fixed, and all you have to do is ask to learn more. Evidently that’s how Veeam’s visitors felt, too.
Tracking and analyzing your A/B tests
Assuming you’re using different platforms to test different parts of your product and marketing, you need a central place to store it all.
What’s the answer?
Well, as it is in 90% of cases: a spreadsheet.
Whether you love them or hate them, they are unrivaled when it comes to being the best places to store, filter, and analyze structured data.
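As a minimal sketch of what that central log could look like, here’s a hypothetical Python script that appends each finished test to a shared CSV file. The column names are assumptions; adapt them to whatever your team actually tracks.

```python
import csv
import os
from datetime import date

# Hypothetical column layout for a central A/B test log
FIELDS = ["date", "channel", "element_tested", "hypothesis",
          "control_rate", "variation_rate", "p_value", "decision"]

def log_test(path, row):
    """Append one finished experiment to the shared test log, writing the header on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(row)

log_test("ab_test_log.csv", {
    "date": date.today().isoformat(),
    "channel": "landing page",
    "element_tested": "headline",
    "hypothesis": "Leading with the free trial reassures visitors",
    "control_rate": 0.041,
    "variation_rate": 0.053,
    "p_value": 0.03,
    "decision": "ship variation",
})
```

The exact tool matters less than the habit: one shared record of what you tested, what you expected, and what actually happened.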
Closing thoughts – Learning from A/B tests so you can optimize your processes going forward
When the data is sitting in front of you, it can be easy to be blinded by a false certainty about what it actually means.
Sure, you find that one headline did better than the other. But why? Without answering the ‘why’, you won’t make better choices in the future or learn anything from your tests.
That’s why it’s best to set up tests that isolate a single variable and that can realistically reach statistical significance.
For example, let’s say you’re making changes to a landing page and find that they boosted conversion by 43%. Nice!
The problem comes when the changes you made touched the copy, the button and the headline all at once. Then what? How do you know which change made the impact?
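Part of setting up tests that can reach significance is estimating, before you launch, how much traffic each variation needs. Here’s a rough sketch, assuming a simple two-proportion test; the function name, defaults and baseline numbers are illustrative, not a standard.

```python
from statistics import NormalDist

def sample_size_per_arm(base_rate, min_relative_lift, alpha=0.05, power=0.8):
    """Rough visitors needed per variation to detect a relative lift at the given power."""
    p1 = base_rate
    p2 = base_rate * (1 + min_relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for the significance level
    z_beta = NormalDist().inv_cdf(power)            # critical value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2) + 1

# Made-up numbers: 4% baseline conversion, hoping to detect a 20% relative lift
print(sample_size_per_arm(0.04, 0.20))   # roughly 10,000 visitors per variation
```

If that number is far beyond the traffic the page gets in a reasonable time frame, test a bolder change or a higher-traffic page, and test one element at a time so you know exactly which change made the impact.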