R2integrated

4 Tips for Launching Successful A/B Testing Plans

Kerrie Davis (Wuenschel), Director of Analytics

A/B testing is a great way for marketers to show two different variations of a campaign to segments of a target audience and determine which one performs better. Ultimately, A/B testing helps your company achieve specific goals such as increased conversion rates and other KPI improvements.

An A/B test can vary a landing page, a call to action (CTA), an email subject line, or a step in a form process. Many testing tools offer defined processes that make it easy to set up the variations.

Before your team creates an A/B test, here are four tips to strategize and implement a successful plan:

Leverage data throughout the testing process

Before outlining any A/B test, look at your data to see if something needs to be improved. For example, are you seeing large drop-offs during an ecommerce or a form-completion process? Is your bounce rate on a landing page almost 100%? Is a key CTA not getting any clicks? Do you have data on what you want to test?
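As a quick illustration, drop-offs like these can be spotted by comparing step-to-step counts exported from your analytics tool. This is a minimal sketch; the funnel steps and numbers below are hypothetical:

```python
# Hypothetical funnel counts exported from an analytics tool
funnel = {
    "product_page": 12000,
    "add_to_cart": 3000,
    "checkout": 1200,
    "purchase": 300,
}

# Compare each step with the next to find the biggest drop-offs
steps = list(funnel.items())
for (step, count), (next_step, next_count) in zip(steps, steps[1:]):
    drop_off = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop_off:.0%} drop-off")
```

A step losing three-quarters of its visitors, as in this made-up funnel, is a strong candidate for your first test.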

Creating a test based on "gut" feelings instead of data puts its success at risk. If you do not have historical performance data for what you want to test, there are tools that can proactively predict where users will look and click on a web page.

An example is 3M’s Visual Attention Software. If you do not have user or pageview data on what you want to test, it’s best to wait until you do. See the recommended sample size for testing in tip #2 below for more information.

Be sure to outline your hypothesis beforehand, clearly stating the variable and its anticipated impact. Also set up your tracking configuration in advance so you can capture the results. Most importantly, determine ahead of time what action you will take based on the results.

Finally, never change the overall meaning of a web page just to see what happens, e.g., by providing a completely different branding experience or offer to one group than another. Remember, the overall user experience should be cohesive and make sense so that page visitors complete the actions you want them to.

Select the appropriate sample size for testing

Make sure the page(s) you want to test receive enough visits to run a test in a timely manner. It's best to target highly trafficked web pages with the potential to record a lot of data on the specific user actions you want to improve.

If you are not sure what counts as "high traffic" for your test, you can leverage a sample size calculator to determine whether the level of traffic you expect makes sense. Here are three calculators you can use:

With historical data, note any seasonal peaks/lows, or any time when there would normally be a big swing in the data due to events unrelated to your test (for example, a slow winter season, or a new TV commercial airing during the Super Bowl). If you do not generate enough traffic to the page you want to test, you can leverage paid advertising or other marketing campaign efforts to drive more traffic to that page in order to start the test.
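If you prefer to compute it directly, here is a minimal Python sketch of the standard two-proportion sample-size formula that such calculators typically implement. The baseline conversion rate, minimum detectable lift, significance level, and power below are assumptions you would supply from your own data:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect an absolute lift of
    `mde` over a `baseline` conversion rate (two-sided z-test)."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_beta = NormalDist().inv_cdf(power)            # power threshold
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# Hypothetical example: 5% baseline conversion, detecting a lift to 6%
print(sample_size_per_variant(0.05, 0.01))
```

Note how sensitive the result is to the minimum detectable effect: halving the lift you want to detect roughly quadruples the traffic you need, which is why low-traffic pages make for slow tests.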

Test on one device-type first, not all 

A testing experience might not scale to all devices (desktops, mobile phones and tablets), even with a responsive site. Not all devices provide the same experience and behavior.

So decide which devices to test first (by leveraging data and based on practicality), and then continue the test only on that device type until the test is complete. If you see positive results from your first test and decide to scale it, make sure to QA the experience on other devices thoroughly before execution.

Don’t make mid-test changes

Making mid-test changes introduces too many variables, and you will not know why the test succeeded or failed. Test one variable at a time—patience is a virtue! 😊

We live in a quick information-turnaround world. However, a test needs to run until it reaches statistical significance and a "winner" can be declared. An early front-runner is not always the final winner.
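As a sketch of what "declaring a winner" means statistically, one common approach (your testing tool may use a different method) is a two-proportion z-test on the conversion counts. The numbers below are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def ab_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: p-value for the observed
    difference in conversion rate between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: 400/8000 conversions for A, 480/8000 for B
p = ab_test_p_value(400, 8000, 480, 8000)
print(f"p-value: {p:.4f}")  # declare a winner only if p < 0.05
```

Until the p-value drops below your chosen significance threshold (0.05 is conventional), the "front-runner" is just noise that could still flip.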

If the final test results are unclear, this does not mean the test was unsuccessful. Also, don't be afraid to perform a similar test again in the future. This may require some tweaks and more focus on your hypothesis and overall outcomes, but the same concept or idea can be used multiple times.

Leveraging Data and Proactive Planning = Keys to Success

Whether you run tests on your own or with the help of a consultant, ultimately, your success will come down to how well you leverage data to perform the test—and how carefully you plan, in advance, what to do with the results.

 

Now get out there and start testing!

About the author: Kerrie Davis (Wuenschel)

Kerrie Davis (Wuenschel) is the Director of Analytics for R2i. With 10 years' experience in digital marketing, she's passionate about helping clients apply best practices to their analytics programs through audits, analysis, configuration, implementation, optimization, and measuring and reporting success against business goals and KPIs. Kerrie has extensive experience with Adobe Analytics, Dynamic Tag Management, Google Analytics, and Google Tag Manager. Prior to joining R2i, Kerrie was a digital marketer in the staffing and technology industries and received her undergraduate and graduate degrees from McDaniel College.

 
