There is little doubt that email marketing is more than holding its own within the digital arena, with the DMA’s recent research showing record numbers are signing up for regular brand communications.
But testing is the foundation of every successful marketing programme – and email is no exception. Only by running detailed tests can you find out which elements work for your customers and which don't. This allows you to continuously optimise and develop your email templates so that your campaigns remain successful in the long run.
Most marketers know that small changes can have a big impact on the performance of their email campaigns.
But where do you start? Which variables influence the effectiveness of a campaign, and how do you run the right tests and evaluate the results? Many marketers simply don't know where to begin or what opportunities there are to run tests.
Adjustment levers – What can you test?
The open rate depends mainly on three factors: the timing of your send, the sender name and the subject line. The click and conversion rates, on the other hand, depend mainly on the content of your email.
Is the offer relevant and interesting? Is the graphic design (including images and order of content) attractive, well arranged and easy to understand? Is there a clear and obvious call-to-action? Are you using the right language for your customer (business-like/casual/funny)? Are you using personalisation and, if so, which fields (first name, last name, location)? Those are just some of the questions you should ask yourself when composing a new template.
At this stage you've got a seemingly endless number of variations.
An urgent need for action can be derived from your performance (open, click and unsubscribe rates) or from feedback from customers, colleagues and friends. There is no right or wrong, only better and worse performing templates, and you have to find out which is which. The results can be surprising, but you'll never know unless you start testing.
Methods – how can you test?
There are basically two methods for testing different variations of your templates: A/B split testing and multivariate testing. A/B tests are widely known and used because they are easy to run. To set one up, copy a template and make a small variation, eg replacing a picture, adjusting the colour of the call-to-action button or using a different subject line. Both templates are then sent to the same number of recipients, chosen at random from your data list.
The better-performing template is then sent to the rest of the list. Professional email marketing software will do this automatically, so you only need to check the results at the end and apply what you learn to future campaigns. This kind of test is very simple to run and can improve a campaign's performance significantly.
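To make the mechanics concrete, here is a minimal Python sketch of the random split and winner selection described above. The list size, sample fraction and open counts are hypothetical, and a real email platform would handle this for you (ideally with proper statistical significance testing on top).

```python
import random

def ab_split_test(recipients, sample_fraction=0.2, seed=42):
    """Split a random sample of recipients into two equal test groups (A and B);
    the remaining recipients receive the winning template later."""
    rng = random.Random(seed)
    shuffled = recipients[:]
    rng.shuffle(shuffled)
    sample_size = int(len(shuffled) * sample_fraction)
    group_a = shuffled[:sample_size // 2]
    group_b = shuffled[sample_size // 2:sample_size]
    remainder = shuffled[sample_size:]
    return group_a, group_b, remainder

def pick_winner(opens_a, sends_a, opens_b, sends_b):
    """Compare open rates and return the better-performing variant."""
    return "A" if opens_a / sends_a >= opens_b / sends_b else "B"

# Example: 10,000 recipients, 20% used for the test, the rest get the winner.
recipients = [f"user{i}@example.com" for i in range(10_000)]
group_a, group_b, remainder = ab_split_test(recipients)
winner = pick_winner(opens_a=312, sends_a=1_000, opens_b=358, sends_b=1_000)
print(f"Send template {winner} to the remaining {len(remainder)} recipients")
```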
The downside to this method is that you can only test one element at a time. You can test 20 different subject lines simultaneously, but you cannot vary any other element alongside them: if the template is changed in more than one way, you cannot tell which change had which effect.
Making changes at multiple points one after another takes a huge amount of effort and time. And how will you know whether subject A in combination with picture B only performs well with text C and in sequence D? To find out, you need to run multivariate tests.
A multivariate test can be thought of as a set of A/B tests that all run at the same time. The different elements under test are varied simultaneously, so you can see how they interact, and at the end you have the best combination of all the tested variables, eg subject line, offer, style of writing and button colours.
Because of the high number of possible combinations, multivariate testing needs quite a large recipient database. Each combination requires a test sample large enough to give a meaningful result, while still leaving the majority of recipients to receive the best-performing template at the end.
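As a rough illustration of how quickly a full multivariate test grows, the short Python sketch below counts the test cells for a handful of hypothetical elements; the variant names and the 1,000-recipients-per-cell figure are illustrative assumptions, not recommendations.

```python
from itertools import product

# Hypothetical elements to vary; each added option multiplies the number of cells.
variants = {
    "subject":    ["10% off today", "Your weekend picks"],
    "image":      ["lifestyle", "product"],
    "cta_text":   ["Shop now", "See offers", "Browse the sale"],
    "cta_colour": ["green", "orange"],
}

combinations = list(product(*variants.values()))
print(len(combinations))  # 2 * 2 * 3 * 2 = 24 test cells

# Assuming, say, 1,000 recipients per cell for a meaningful read,
# a full factorial test already needs 24,000 test recipients.
per_cell = 1_000
print(f"Recipients needed for the full test: {len(combinations) * per_cell:,}")
```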
The solution is a reduced test design, such as the Taguchi method. Instead of sending every possible combination, only a carefully chosen subset is tested, and the influence of each individual variable is then calculated mathematically. In this way the number of test groups can be reduced drastically.
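Below is a minimal sketch of what such a reduced design can look like, assuming three two-level factors and a standard L4 orthogonal array; the open rates are made-up numbers purely to show how the main effect of each factor can be estimated from four test cells instead of eight.

```python
# Reduced (orthogonal-array) design for three two-level factors.
# Only 4 of the 2**3 = 8 combinations are sent; the levels are balanced
# so the main effect of each factor can still be estimated.
L4 = [  # (subject, image, cta_colour) -> level 0 or 1 of each factor
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
open_rates = [0.21, 0.24, 0.27, 0.31]  # measured per test cell (hypothetical)

factors = ["subject", "image", "cta_colour"]
for i, name in enumerate(factors):
    level0 = [r for run, r in zip(L4, open_rates) if run[i] == 0]
    level1 = [r for run, r in zip(L4, open_rates) if run[i] == 1]
    effect = sum(level1) / len(level1) - sum(level0) / len(level0)
    print(f"Main effect of {name}: {effect:+.3f}")
```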
Both methods have advantages and disadvantages. With an A/B split test you can test a single variable (subject line, new picture…) quickly and easily; to vary multiple elements and test their interdependencies, you need multivariate tests.
The perfect moment – when should you test?
There is a simple answer to this: always. Obviously testing makes a lot of sense when you are introducing something new or working on a new template design, for example. But continuous testing makes the most sense. Even if you are happy with your current performance, how do you know that a different sender name won't double your open rate, or that a green call-to-action button won't lift your click rate by 2%?
On top of that, your recipients' preferences and your competitors' designs will change over time. Test results will not stay valid forever and should be checked regularly, so take every opportunity and include a test with every send. Even small changes can have a huge impact on performance. Professional email marketing software will run these tests and track the results for you automatically – so why not use this simple and effective method of optimising your campaigns? Let your recipients decide which template works best for them and profit from higher open and click rates.
Simon Bowker is managing director of eCircle, a Teradata company