More often than not, a website is about conversion. Conversion can mean many things depending on the site: selling more products, getting more reservations, or better educating your users. Regardless of your site’s goals, one of the best ways to improve performance is to make changes to your site and test those changes against an existing control.

Improving form submission

A national franchising client wanted to improve conversions on an online estimate form. The problem split into two parts. First, we needed to determine not only how many users were abandoning the existing form mid-stream, but also where they were abandoning it. Armed with this information, we could solve the second part: give users a simpler, shorter form to fill out.

We needed to build both the abandonment measurement metrics and the secondary short form. Finally, we needed to implement the A/B trigger and begin logging the results.
We accomplished this task with the following steps:

  1. Converted the existing longer form into a multi-step form, logging both user entry and exit (bounce) on each step. This told us not only the overall bounce rate, but at which step each user bounced. An added advantage was collecting vital user data earlier in the process: even if a user bounced before completing the form, the contact information was still retained.
  2. Designed and built a shorter, more compact version of the primary form. While not collecting as much information, the information collected was still enough to provide an accurate real-time estimate.
  3. Designed and built the randomization engine to place users in either the control or test group and automatically switch the version of the form delivered to each group.
  4. Implemented an audit system capable of providing meaningful bounce rate comparisons between the two test groups.
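A minimal sketch of steps 1 and 3 above, assuming the real system tracked visitors by an ID stored in a cookie. The function and event names here are illustrative, not the production code; the key design choice is that hashing the visitor ID makes assignment deterministic, so a returning user always sees the same form version.

```python
import hashlib

def assign_variant(visitor_id: str, split: float = 0.5) -> str:
    """Deterministically place a visitor in 'control' (long form)
    or 'test' (short form) by hashing their ID into [0, 1]."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "test" if bucket < split else "control"

def log_step(visitor_id: str, step: int, event: str, audit_log: list) -> None:
    """Record entry/exit per form step so the audit system (step 4)
    can compute where in the process each user bounced."""
    audit_log.append({"visitor": visitor_id, "step": step, "event": event})
```

Logging an "enter" without a matching "complete" on a given step marks that step as the bounce point for that visitor.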

Complex multi-variant A/B/C/D… testing

A national client needed to test everything from messaging to clicks to actual conversions within their main e-commerce funnel. Due to the complexity of their main purchase funnel as well as the varying sources of traffic to their site (paid search, organic, third-party national campaign, traditional media), they needed to run multiple tests simultaneously and declare winners of each test independently of any other tests currently running. They also needed to include dynamic data from their back-office systems within the testing.

To support these requirements we gathered information on how many tests would be required as well as the differences between each test. In addition, we discovered the trigger mechanism for each test. Some triggers were based on entry point: did the user come from a particular URL or click on a particular keyword? Other triggers were random: 50% of site visitors would see test A while the other 50% would see test B.
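The trigger logic described above could be sketched as follows. The URLs, query parameter, and test names are hypothetical stand-ins; the point is that entry-point triggers take priority, and visitors who match none of them fall back to a random 50/50 split.

```python
import random
from urllib.parse import urlparse, parse_qs

def pick_test(referrer_url: str) -> str:
    """Choose which test a visitor enters based on where they came
    from. Entry-point rules are checked first; otherwise the visitor
    is randomly split between two default tests."""
    query = parse_qs(urlparse(referrer_url).query)
    keyword = (query.get("kw") or [""])[0]
    if "campaign.example.com" in referrer_url:
        return "national-campaign-test"   # third-party campaign entry
    if keyword == "free-estimate":
        return "paid-search-test"         # paid-search keyword trigger
    return "test-A" if random.random() < 0.5 else "test-B"
```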
We then fulfilled the following tasks:

  1. Designed and built a custom multi-variant testing engine capable of administering multiple simultaneous tests depending on the user’s entry point and method.
  2. Designed and built an audit system capable of logging the various tests and user results and paths through each of the variants.
  3. Designed and built a testing framework capable of changing both static as well as real-time dynamic data such as inventory, pricing and specials delivered via back-office integrations.
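The audit side of steps 2 and 3 might look like the sketch below, assuming each logged event records the test name, a visitor ID, and an event type. Computing rates per test from a shared log is what allows each test to declare a winner independently of the others running at the same time.

```python
from collections import defaultdict

def conversion_rates(events: list) -> dict:
    """Compute per-test conversion rates from an audit log.
    Each event is assumed to look like
    {"test": "A", "visitor": "u1", "event": "enter" | "convert"}."""
    entered = defaultdict(set)
    converted = defaultdict(set)
    for e in events:
        if e["event"] == "enter":
            entered[e["test"]].add(e["visitor"])
        elif e["event"] == "convert":
            converted[e["test"]].add(e["visitor"])
    # Rate = unique converters / unique entrants, per test.
    return {t: len(converted[t]) / len(entered[t]) for t in entered}
```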

Traditional Single-Variant A/B Testing

Sometimes A/B testing is fairly straightforward and complex custom solutions are not necessary. If the data to be tested is relatively static and the number of test scenarios and entry points can be limited, we can employ a third-party tool such as Google Website Optimizer.

For more information or questions about this case study, please contact us today.