Every successful PPC campaign starts by finding the right combination of targeting, bidding, and creative copy. As soon as you find success with your campaigns, you'll want to start optimizing them to increase results. One way to increase performance is to run A/B tests to find what works, and then scale it.
In this article, we'll break down the process for running A/B tests in your PPC campaigns.
1. Define the Success Metric
The first step is to define the metric that will determine the success of your tests. This success metric will help you develop your test hypothesis and separate the winning variation from the losing one.
Here are the metrics you can use to measure the results of your PPC A/B tests:
- CTR (click-through rate)
- CPC (cost per click)
- Cost per conversion
- Conversion rate
- CPA (cost per action)
- ROAS (return on ad spend)
Which metric should you choose? That depends on what you want to learn from your tests. There's no universally right or wrong metric; there's only the right metric for your goal.
For example, if you want to see what specific attributes make people click on your ads, then CTR is the best metric. If one of your experimental ads gets a higher CTR than the control, you know the attribute you are testing is driving the increase, assuming everything else is the same.
- Write down your test goal.
- Pick a metric that is closest to that goal.
2. Define Your Hypothesis
Behind every successful A/B test, there's a clear hypothesis. The clearer the hypothesis, the better the outcome of your test. In the simplest terms, a hypothesis is a prediction of your test's outcome: it defines what you will test, what you expect to happen, and why you think so.
Chris Goward, CEO and Founder of conversion optimization agency WiderFunnel, puts it this way: "A hypothesis is simply a question you can ask of your target audience or test sample." Creating a test hypothesis is easy. In his book You Should Test That, Chris provides a simple structure to create one:
Changing [the thing you want to change] into [what you would change it into] will lift the conversion rate for [your conversion goal].
While you can test many variables on a website, ad networks offer only a few options, which simplifies the testing process. To create a hypothesis, pick one ad variable, then define the specific thing you will test in it and the result you expect. The variables you can test in your PPC campaigns include:
- Headline
- Ad description (on Facebook and LinkedIn)
- Image (on Facebook, LinkedIn, and Twitter)
- Sitelinks (in Google AdWords)
If you were to create a hypothesis for a PPC A/B test, it could look like this:
Changing the headline to feature our latest discounts will lift the CTR by 10%.
Although you can test only a few variables, you can create an unlimited number of hypotheses for each. For example, within just the headline, you can test adding discounts or social proof, mentioning the number of years in business, and more. You can get as creative as you want with your hypotheses.
- Define a testing variable for each ad network you are going to run a test on.
- Develop a hypothesis around the selected variable using the structure shown above.
3. Come Up with Test Ideas
Once you have defined your hypothesis, come up with as many test ideas as possible. Don't worry if you come up with more than you can test because you won't be using all of them. In the next step, you will see how to prioritize them.
For example, if you were to run a test on Facebook to see what kind of headline works best, you could test:
- The unique selling proposition (USP)
- The special deals or offers you have
- The most popular products you sell
- Key copy points and messaging
- A customer testimonial
- A specific result of a customer
- Take 15-20 minutes and brainstorm as many ideas as possible. Think about what things you could test for each hypothesis.
4. Prioritize the Test Ideas
Whenever you test your PPC campaigns, you are effectively splitting your traffic and conversions in half. To make the most of your budget, you must prioritize your test ideas, keeping only the ones with the highest likelihood of improving your campaign's performance.
There are many frameworks you can use to prioritize your ideas. My favorite one is the ICE Score, invented by Sean Ellis, the founder of GrowthHackers. The ICE Score is made up of three attributes:
- Impact: What will the impact be if this works?
- Confidence: How confident am I that this will work?
- Ease: What is the ease of implementation?
With this framework, you list all your test ideas (which you did in the previous step) and give each one a score from 1 (lowest) to 5 (highest) for each ICE attribute. Then you sum the three scores to get a total for each idea. Finally, you compare the totals: the idea with the highest total is the first one you test.
For example, if one of your ideas was to apply the scarcity principle to your LinkedIn ads' headline, you could say the expected impact is 4, the confidence is 3, and the ease is 4. This gives the idea an ICE score of 11. If this were the idea with the highest score, you would test it first.
- Using the ICE method, give each of your test ideas from the previous step a score from 1 to 5 for each of the three attributes.
- Organize all the test ideas by ICE score, and test the ones with the highest totals first.
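The ICE prioritization above can be sketched in a few lines of Python. The idea names and scores below are made-up examples, not recommendations:

```python
# Score each test idea on Impact, Confidence, and Ease (1-5),
# sum the three, and test the highest-scoring idea first.
# The ideas and scores are hypothetical examples.
ideas = {
    "Scarcity principle in LinkedIn headline": {"impact": 4, "confidence": 3, "ease": 4},
    "Customer testimonial in Facebook headline": {"impact": 3, "confidence": 4, "ease": 5},
    "Latest discount in ad description": {"impact": 5, "confidence": 2, "ease": 3},
}

def ice_score(scores):
    # ICE total = Impact + Confidence + Ease
    return scores["impact"] + scores["confidence"] + scores["ease"]

# Sort ideas from highest to lowest ICE score.
ranked = sorted(ideas.items(), key=lambda item: ice_score(item[1]), reverse=True)
for name, scores in ranked:
    print(f"{ice_score(scores):>2}  {name}")
```

Whether you sum or average the three attributes doesn't change the ranking; the point is a consistent, quick way to order your backlog.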
5. Define the Sample Size for each Metric
Before you start running your test, you must know your sample size for each metric. That is, define the minimum amount of traffic (or conversions) each ad group should receive. Once every ad group reaches that amount, you stop the test and analyze the results (which you will see how to do in step #7).
The amount of traffic to set depends on your current numbers. For example, if an ad group currently receives 500 visitors a day, you'd want 5 to 10 times that amount as a sample size. Your ad groups need enough traffic that a single visitor can't skew the overall results.
Also, make sure each ad group receives the pre-set amount of traffic (or conversions) before analyzing the results. If you defined a sample size of 500 conversions per ad group, and one of them received 600 while the other received 450, you need to wait until the latter reaches 500 before stopping the test.
- Define the minimum sample size for your metrics. You can use one of the many sample size calculators on the web; based on personal experience, I'd recommend this one.
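If you'd rather compute the number yourself instead of using a calculator tool, the standard two-proportion sample size formula can be sketched as follows. The baseline rate and minimum detectable effect below are example inputs you'd replace with your own numbers:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, min_detectable_effect,
                              alpha=0.05, power=0.8):
    """Visitors needed per ad group for a two-proportion test.

    baseline_rate: current conversion rate (e.g. 0.05 for 5%)
    min_detectable_effect: smallest absolute lift worth detecting (e.g. 0.01)
    """
    p1 = baseline_rate
    p2 = baseline_rate + min_detectable_effect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# E.g. detecting a 1-point lift over a 5% baseline needs roughly
# eight thousand visitors per variation.
print(sample_size_per_variation(0.05, 0.01))
```

Note how quickly the required sample grows as the effect you want to detect shrinks; this is why small lifts need big budgets to verify.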
6. Run the Test
With your ideas and sample sizes in order, you can start running the first tests. Don't stop them as soon as you see a result, even if you've already hit your sample sizes. People often behave differently depending on the day of the week, so let each test run for at least a full week before pausing it.
After each ad group reaches its sample size, you can pause the test. This, however, doesn't mean the testing is over. You must take the results and check whether they are statistically significant. Choose a confidence threshold you feel comfortable with (95% and 99% are the most common), and run the numbers through an A/B testing significance calculator, such as the one from Kissmetrics. Take a look at the following example:
The first variation got 300 fewer visitors and 30 fewer conversions than the second one. Statistically speaking, however, the former beats the latter. Notice that the confidence level in this test is 97%. If my chosen threshold were higher (say, 99%), I'd need to keep testing until the results reached it. Only after every variation of your test reaches statistical significance can you compare the results. If they don't reach significance, keep testing.
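A significance check like this can be sketched as a two-proportion z-test, which is what most A/B significance calculators run under the hood. The visitor and conversion counts below are illustrative, not the ones from the example above:

```python
import math
from statistics import NormalDist

def significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-sided two-proportion z-test.

    Returns the z statistic and the confidence level
    (1 minus the two-sided p-value).
    """
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    confidence = 1 - 2 * (1 - NormalDist().cdf(abs(z)))
    return z, confidence

# Hypothetical counts: variation A has fewer visitors yet a higher
# conversion rate (10% vs 7%).
z, conf = significance(1700, 170, 2000, 140)
print(f"z = {z:.2f}, confidence = {conf:.1%}")
```

If the confidence value comes out below your chosen threshold (95% or 99%), keep the test running rather than declaring a winner.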
- Based on your hypothesis developed before, start running the tests. Stop only after all your ad groups have reached minimum sample size.
- Analyze the statistical confidence. If your results haven't reached it, keep going until they do.
7. Analyze the Results
By now, you have taken the results of your tests and compared them with each other. If everything went well, you have a winner. But before you call it a day, there's one final thing to do. Take your metrics' numbers from before the tests and use them as benchmarks, then compare them against your current metrics to see how much you've improved.
Also, take the timeframe of the test's results into consideration. If you ran a 2-week test trying to reduce your CPA, you must compare the result with your CPA from the 2 weeks before the test.
- Compare your test's results with your previous performance. If the new results are better than the previous ones, the test was successful. Otherwise, you'll have to restart the process.
Any PPC specialist with some experience and skill can create a successful campaign. What's hard is replicating that success in every campaign you run. Today, you've seen the specific steps to get started with A/B testing your PPC campaigns. Running these tests will help you discover what your audience responds to, so you know what to scale down the road.
- Facebook Ad Fatigue Best Practices -- get ideas for your next test.
- Quick Guide to Writing Successful ETAs -- learn the basics of writing ad copy that converts.
- 2017 Demand Gen Benchmarks -- compare your ad performance with industry leaders.
- 2017 Facebook Ads Benchmarks -- check out the average Facebook CTRs and how much your PPC peers are paying per click.