Multivariate and A/B split testing can provide substantial benefits to an overall marketing program, but the process of getting those insights can be intimidating. Between deciding what to test, choosing a methodology, writing test plans and doing the math, the task at hand can be daunting. With a bit of organization and attention to detail, a good marketer can mitigate much of this stress and put together a comprehensive, timely and organized plan of attack that ultimately delivers great insights.

A test plan is at the center of this effort and will be a living document that keeps the whole process in line with timelines, strategies, design and findings, while an organized calendar and team strategy will keep everybody on time and on task. With this procedural layout, marketers will stay informed and organized in an environment where a plethora of variables can be quite distracting.  In five simple steps, a test can be created, updated and completed with ease.

Step 1: Identify a testing strategy

The first thing to do is ensure that you have a cohesive strategy for attacking the problem and finding a valuable answer. This is where the test plan is initially developed and the details are sorted out. A great place to start is articulating your hypothesis about what is causing the issue. Next, identify the essential components of your test plan; a sketch of how these choices might be captured appears after the list. Consider the following:

  • A/B/n or MVT: Is there only one element or many elements that you need to test? Is this a re-design of a whole page or parts of the page?
  • Primary KPI: What key performance indicator (KPI) will ultimately determine a winning creative?
  • Secondary KPIs: What key performance indicators will support that primary KPI?
  • Traffic segments: Are you only looking at a particular segment of site visitors, such as paid search, Apple users, or some other segment you want to track?
  • Exclusions: Are you going to exclude any user types? For instance, are you removing all IE6 users or any user from the affiliate channel?
  • Dates/Times: Create a calendar and determine how much time your design team, development team, QA team, etc. will need to work on their parts of the plan. This will help keep you on track and set expectations on when the project will be complete.
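To make these decisions concrete, here is one way the plan's key parameters could be captured in a small structured config that travels alongside the written document. Every field name and value below is hypothetical and purely illustrative; it is not tied to any particular testing platform.

```python
# Hypothetical test-plan summary; all names and values are illustrative only.
test_plan = {
    "name": "homepage_cta_button_test",
    "type": "A/B/n",                      # or "MVT" when testing many elements at once
    "hypothesis": "A larger, higher-contrast CTA button will lift click-through.",
    "primary_kpi": "order_conversion_rate",
    "secondary_kpis": ["add_to_cart_rate", "revenue_per_visitor"],
    "traffic_segments": ["paid_search"],   # only these visitor segments enter the test
    "exclusions": ["affiliate_channel"],   # user types removed from the test
    "variants": ["control", "challenger_a", "challenger_b"],
    "schedule": {                          # example dates for each team's deadline
        "design_complete": "2024-03-01",
        "development_complete": "2024-03-08",
        "qa_complete": "2024-03-12",
        "launch": "2024-03-14",
    },
}
```

Keeping the plan in a structured form like this makes it easy to hand the same source of truth to the design, development and QA teams as the project moves through the steps below.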

Step 2: Test design and development

Once the test plan has been sorted out, you can start developing your creative assets. Depending on the size and scope of the test, this could take just a few hours of work if you are performing a simple button test, or a week or two in the case of a flow re-design such as a checkout process. For larger projects, page mapping and wireframes can save time in the long run, helping you iron out all the details before the brunt of the work begins with mock-ups and code development.

Once the creative has been finalized, the assets should be packaged with the test plan and handed off to the code development team. That's when the technical process and development within the testing platform can begin. After this is complete, have a quick huddle with the program managers and the creative and development teams to walk through the project and ensure that everything meets expectations.

Step 3: QA and tracking verification

Once the initial development is completed, the test plan, creative assets and development pages can be handed off to the QA team. This is often the most crucial step in the whole process: details such as browser compatibility, JavaScript errors and load-time issues need to be scrutinized and sorted out. Anything missed during this QA step could rear its ugly head at launch, ultimately impacting the success of your project and the quality of your test data.

A very detailed test plan will benefit the QA process. Remember, it is the only document your QA team will have to use to make test comparisons and find potential problems or functionality issues.

At the same time, confirm that your test is tracking properly. It is your responsibility to ensure that every creative is visited a few times and that test orders are placed. Don't forget to clear your cookies each time you test so the system views you as a new visitor, then check your reporting dashboard to confirm you are seeing the correct numbers.
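If your reporting tool lets you export raw visitor counts per creative, a quick extra sanity check is to compare the observed traffic split against the split you configured; a badly skewed split usually points to a tracking or assignment problem. The sketch below assumes a simple 50/50 test and an arbitrary 5% tolerance; the function name and threshold are illustrative, not part of any testing product.

```python
# Minimal sketch: flag a suspicious traffic split between creatives.
# Assumes you can export raw visitor counts per variant from your reporting tool.

def split_looks_wrong(observed, expected_share, tolerance=0.05):
    """Return True if any variant's share of traffic deviates from its
    configured share by more than `tolerance` (an arbitrary threshold)."""
    total = sum(observed.values())
    if total == 0:
        return True  # no traffic recorded at all is itself a red flag
    for variant, count in observed.items():
        share = count / total
        if abs(share - expected_share[variant]) > tolerance:
            return True
    return False

# Example: a 50/50 test where the challenger appears to be under-reporting.
observed = {"control": 1100, "challenger": 860}
expected = {"control": 0.5, "challenger": 0.5}
print(split_looks_wrong(observed, expected))  # True -> investigate tracking
```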

Step 4: Review and launch

Development is done, QA is complete and tracking works; it is now time to launch the test.  Generally it is best to get the e-marketing team on the phone with your development and design team and walk through the test one last time to ensure everybody is satisfied. Visit all versions of the creative, open the floor to questions and confirm that all parties are content. Once everybody agrees that you are ready, hit the switch and turn the test on.

After initiating the test, it is always a good idea to check back every 45-60 minutes for the first half of the day to ensure that everything is still working as expected. This is one of the reasons it is best to avoid launching tests on Fridays unless they go out early in the morning; the same reasoning applies to launches late in the afternoon on weekdays. You want a team on hand to remedy any issue that may arise.

Step 5: Final analysis and recommendations

As the test is running, check the reporting periodically to ensure it is running smoothly and no particular creative is absolutely tanking. Remember that when a test first launches there may be large disparities between the challengers and the control group; this is initial chance variation that will settle down as the law of large numbers takes effect and the data becomes representative. Because of this, avoid broad distribution of the initial reports; allow time for the data to reflect the actual performance of the test.

Usually, a test should run for a minimum of two weeks to ensure a repetition of days and to allow for a large number of conversions across all creatives. Many tests will need much longer than two weeks due to low traffic, low conversion rates, a large number of challengers, and so on.
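How long is "long enough" depends mostly on your baseline conversion rate and the smallest lift you care about detecting. As a rough guide, the standard two-proportion sample-size formula at roughly 95% confidence and 80% power can be sketched as follows; the baseline rate and lift in the example are made up for illustration.

```python
import math

def visitors_per_variant(baseline_rate, minimum_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per creative to detect a relative
    `minimum_lift` over `baseline_rate` at ~95% confidence and ~80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + minimum_lift)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: 3% baseline conversion rate, detecting a 10% relative lift.
print(visitors_per_variant(0.03, 0.10))  # roughly 53,000 visitors per creative
```

Low-traffic sites or small expected lifts push that number up quickly, which is why many tests end up running well past the two-week minimum.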

Once the test has run for the expected duration and accumulated enough conversions for the results to be statistically significant, you can begin finalizing the test.
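The article does not prescribe a particular statistical method, but one common way to check whether a challenger's lift is significant is a pooled two-proportion z-test on the conversion counts. The sketch below is a minimal illustration with made-up numbers.

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference in conversion rate
    between two creatives (pooled two-proportion z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Example: control converts 300 of 10,000 visitors, challenger 360 of 10,000.
p = two_proportion_p_value(300, 10_000, 360, 10_000)
print(f"p-value = {p:.3f}")  # below 0.05 -> conventionally "significant"
```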

Based on the results of your test, decide which creative won and whether further testing is needed. Provide reasoning behind that recommendation and add this to the test plan.

Once there is consensus on a decision that a particular creative won, push that to 100% of traffic in your testing tool and begin to move that testing code to the production servers. If a new test is needed, fall back to the control and begin this process again, completing the necessary testing to come to an objective decision. The important part here is that this decision is objective. The reason you run a test is to take personal opinions and guesswork out of the decision making process and let the visitors tell you what works.

Testing can be a great marketing strategy and can often lead to some interesting and unforeseen solutions to consumer hurdles on your site. It is a process that can provide amazing results in many cases and in others prove that the control or current state is the best solution. The important thing to remember is that a losing test is still a winning endeavor. With each test you run, you'll learn more about the people who visit your site.
