Reaching Landing Page Nirvana
#Paid Search - PPC #Digital Marketing

Testing and improving landing pages on an ongoing basis is essential to the success of PPC campaigns. Creating a great user experience that encourages website engagement and ultimately reaches a client’s goal, be it buying a product or subscribing to a newsletter, is not always straightforward.

This is where A/B testing comes in. A/B testing, or split testing, is used to compare different versions of a web page to see which performs best: from trialling different colour schemes to varying calls to action. Performance is usually judged by comparing conversion rates but can also be measured through engagement and other metrics.

A/B Testing Methods

There are three main methods that can be used to effectively perform a split test. The setup, advantages and disadvantages vary slightly amongst these methods, but each will ultimately achieve the same end result - demonstrating which version of a webpage is performing best and should therefore be taken forward.

The first of the three methods is the Manual Landing Page Test (AdWords/Bing):

This method can be used to A/B test directly in AdWords or Bing. To set this up, create a campaign in which each ad group contains two ads with different final URLs; the ads should otherwise be identical. Within each ad group, assign the current landing page to one ad and the new landing page to the other. Don’t forget to set the campaign’s ad rotation to “Rotate indefinitely” so that impressions are split evenly between the two ads and the results are easily comparable. Also remember to remove any keyword-level URLs: if they exist, AdWords and Bing will use them instead of the ad URLs you are planning to test.
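Once the test has gathered data, the comparison itself is simple arithmetic. Below is a minimal sketch of that comparison in Python; the click and conversion figures are hypothetical stand-ins for the per-ad numbers you would export from the AdWords or Bing interface.

```python
# Minimal sketch: comparing the two landing pages after a manual test.
# The click and conversion figures below are hypothetical.

ads = {
    "current_page": {"clicks": 1200, "conversions": 48},
    "new_page": {"clicks": 1180, "conversions": 71},
}

for name, stats in ads.items():
    rate = stats["conversions"] / stats["clicks"]
    print(f"{name}: {rate:.2%} conversion rate "
          f"({stats['conversions']} conversions / {stats['clicks']} clicks)")

# A raw difference in rates is not conclusive on its own; see
# 'How to Measure the Results' below for statistical significance.
```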

A huge advantage of this method is that it is a very simple process. However, the setup is lengthy, it can only run a 50/50 test, and it includes paid search traffic only, which limits what you can test.

This is where Method 2, AdWords Campaign Experiments (ACE), comes into play:

This can only be activated in AdWords, and there are a number of steps to setting it up.

Firstly, head to the campaign settings and click through to ‘experiment settings’ at the bottom of the page. From here, choose a name, a traffic split and start and end dates for the experiment. The split option allows you to choose how traffic is divided between the control and experiment ad groups. Make sure to save your settings!

You will need to duplicate all ad groups in the campaign and change the destination URLs of the new ads to the new landing page. Once this is done, go to the campaign view, click on the beaker icon next to the ad group names, and select ‘Control’ for the old ad groups and ‘Experiment’ for the new ones. Don’t forget to remove any keyword-level URLs.

Once all these steps are complete you are ready to start the experiment. Go back into the campaign settings to initiate this.

When the experiment is over AdWords will automatically pause the ‘losing’ ad groups, whether they’re the control or experiment ones.

Although it seems a long process, it is in fact quicker to set up than manual testing. What’s more, AdWords Campaign Experiments features a built-in statistical significance calculator which will alert you if test results are valid – select ‘Experiment’ in the Segments dropdown to see this data. Please see the section below on ‘How to Measure the Results’ for further details.

As well as this, it offers the ability to choose how traffic is split between control and experiment ad groups. This is particularly useful if you’re testing a large change or are generally risk averse, as you can send a smaller proportion of traffic to the new landing page instead of sending half of all traffic to a page whose performance you are unsure of.

Similar to the first method, the AdWords Campaign Experiments method only includes paid search traffic, and because each ad group has its own (identical) keywords, bids and ads, any changes need to be duplicated across the control and experiment ad groups so as not to skew the results.

The final of the three methods is to use A/B testing tools:

A/B testing tools such as Unbounce or Visual Website Optimizer are extremely flexible and comprehensive in the way they allow A/B tests to be created and reported. They work by varying the page that visitors see. For example, if you send all AdWords traffic to one particular page you can choose what percentage of that traffic sees variant A and what percentage sees variant B; the change is performed directly on the website.

With features such as drag-and-drop landing page design tools, heat maps and built-in statistical significance calculators, these tools make A/B testing easy and provide information on how users engage with the pages, helping to drive future decisions.

This method is extremely quick to set up and provides comprehensive results. Not only do these tools have a built-in statistical significance calculator, they also let you choose how traffic is split between the control and experiment pages. Such tools have simple yet powerful landing page design features and templates, and unlike the other two methods they can include all website traffic (SEO, Paid Search, Email etc.), allowing for a much larger testing spectrum.

To enable these extra features, a snippet of code needs to be added to the webpage(s) during the initial setup, and of course there is an extra cost, so be prepared to convince others as to why it is worth spending money on this area.
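Under the hood, tools like these typically bucket each visitor deterministically, so a returning visitor always sees the same variant. The Python sketch below illustrates one common approach, hash-based bucketing; the visitor IDs and split value are purely illustrative, and real tools wrap this logic inside the snippet they ask you to install.

```python
import hashlib

def assign_variant(visitor_id: str, split: float = 0.5) -> str:
    """Deterministically bucket a visitor into variant A or B.

    `split` is the fraction of traffic sent to variant A (e.g. 0.9
    for a 90/10 test). Hashing the visitor ID means the same visitor
    always lands in the same bucket on every visit.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) / 16 ** len(digest)  # map the hash onto [0, 1)
    return "A" if bucket < split else "B"

# Hypothetical usage: a 90/10 split for a risk-averse test.
for vid in ["visitor-001", "visitor-002", "visitor-003"]:
    print(vid, "->", assign_variant(vid, split=0.9))
```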

How to Measure the Results

First and foremost, results need to be statistically significant. In essence, this means that enough people need to be part of the test to ensure that the results are valid and not simply down to random chance.

Think about it like throwing a die. If you throw it six times, it’s unlikely that every number will come up, but if you throw it 6,000 times you can expect each number to come up around 1,000 times. The more you throw it, the more even the occurrences of each number: random chance is effectively eliminated, or at least greatly reduced. Thus, the higher the number of visitors to each landing page in your test, the more likely it is that the difference in conversions is not random but due to the element you have changed on the page.
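You can see this evening-out effect with a quick simulation; the throw counts below mirror the example, and since the throws are random the exact shares will differ slightly on each run.

```python
import random
from collections import Counter

# Simulate the die-throwing analogy: with more throws, the share of
# each face converges towards 1/6 and random chance evens out.
for throws in (6, 60, 6000):
    counts = Counter(random.randint(1, 6) for _ in range(throws))
    shares = {face: f"{counts[face] / throws:.1%}" for face in range(1, 7)}
    print(f"{throws} throws: {shares}")
```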

Statistical significance is fairly difficult to calculate but don’t worry, you don’t need to be a statistician to do an A/B test. A great number of free calculators exist online where you can simply input the number of visitors and conversions for each page in the test and find out whether your results are valid or not.
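For the curious, most of those free calculators run some variant of a two-proportion z-test under the hood. The sketch below is a minimal Python version; the visitor and conversion numbers are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(visitors_a, conv_a, visitors_b, conv_b):
    """Two-proportion z-test: returns the z-score and two-sided p-value."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical test data: control page vs new landing page.
z, p = ab_significance(5000, 150, 5000, 190)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 => significant at the 95% level
```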

Winning at A/B Testing

This short list of test ideas should get you going in the right direction. Bear in mind that it is not a complete list of options by any means:

  • Test different colour schemes and different colours for confirmation buttons
  • Test different form and button locations
  • Test different calls to action e.g. Checkout Securely versus Buy Now
  • Test different content: different copy, different images, different formats
  • Test different promotions
  • Test duplication e.g. is it better to have one confirmation button or two buttons in different locations?
  • Test site navigation e.g. menu options and their order
  • Test forms e.g. field names, sizes, number of boxes
  • Test personalisation e.g. different messages based on visitor location, source of traffic, new versus returning visitors
  • Test different USPs (unique selling points)

Note: Test one thing at a time, otherwise you won’t know which change drove the improvement or decline in performance.

Always keep best practice in mind as this will usually provide a great starting point for any test. Do your research, see what works for others and base your initial landing page changes on that. Then take it one step further and leverage your user data to come up with new suggestions.

A/B tests don’t have to be 50/50. If you have sufficient data, try other proportions so that you don’t risk a large share of your traffic on changes that may perform worse than your current landing page. If you are really risk averse, try a 90/10 split (90% of traffic going to the control page, 10% of traffic going to the new page).
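Bear in mind that an uneven split makes the test take longer, because the smaller arm gathers data more slowly. As a rough guide, the Python sketch below uses the standard two-proportion sample size formula; the 3% baseline conversion rate, 20% target uplift and 1,000 visitors a day are hypothetical figures chosen purely for illustration.

```python
from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, uplift, alpha=0.05, power=0.8):
    """Rough visitors needed per variant to detect a relative uplift
    (standard two-proportion formula, two-sided test)."""
    p1, p2 = p_base, p_base * (1 + uplift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for 95% confidence
    z_b = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    return ceil(((z_a * sqrt(2 * p_bar * (1 - p_bar))
                  + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2)
                / (p2 - p1) ** 2)

# Hypothetical: 3% baseline conversion rate, hoping to detect a 20% uplift.
n = sample_size_per_variant(0.03, 0.20)
print(f"~{n} visitors needed in each variant")

# At 1,000 visitors/day, a 90/10 split feeds the new page only ~100/day,
# so it reaches that sample ~5x more slowly than it would under 50/50.
print(f"~{ceil(n / 100)} days for the 10% arm vs ~{ceil(n / 500)} under 50/50")
```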

Remember, the testing never stops; or at least, it should never stop if you plan on gaining and keeping an edge over your competitors.