A/B Testing

Summary

Fig 1 - Source: The Complete Guide to A/B Testing, VWO

Does your manager make decisions based only on intuition and experience rather than data? Do you like to experiment and test results? Would you like data to validate your assumptions? Do you prefer to make data-driven decisions?

If you answered "yes" to the questions above, you are in the right place: read on to learn more about A/B Testing.

A/B Testing, also known as Split Testing, is a method to compare two or more variations of an app, webpage or email and determine which one works better. Users are randomly assigned to the different variants, and statistical analysis is performed to determine which variation performs best for a defined business goal.[1]
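To make the random assignment concrete, here is a minimal Python sketch (the function name, salt and user ID are hypothetical, for illustration only) that buckets users deterministically by hashing their ID, so a returning user always sees the same variant:

    import hashlib

    def assign_variant(user_id, variants=("A", "B"), salt="exp-signup-cta"):
        """Deterministically assign a user to a variant.

        Hashing (salt + user_id) yields a stable pseudo-random bucket,
        so the same user always lands in the same variant.
        """
        digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    print(assign_variant("user-42"))  # same output on every call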

Milestones

1863

Austin Flint conducted an early placebo-controlled comparison, directly testing the efficacy of a dummy remedy against an active treatment.[6]

1920

Statistician and biologist Ronald Fisher discovered the most important principles behind A/B testing and randomized controlled experiments in general, and ran the first such experiments in agriculture.[7]

1923

In the book "Scientific Advertising", Claude Hopkins writes,[5]

Almost any question can be answered, cheaply, quickly and finally, by a test campaign. And that’s the way to answer them - not by arguments around a table. Go to the court of last resort - the buyers of your product.

1948

The first ever randomized clinical trial was conducted by the Medical Research Council to determine the efficacy of streptomycin in treating pulmonary tuberculosis.[6]

1970

Marketers used the method in direct response campaigns, asking, for example, "Would a postcard or a letter to target customers result in more sales?"[7]

2000

A/B Testing on the web dates back to early 2000, when Google engineers tested the optimal number of search results to display per page. The test itself was a failure, but Google learned many lessons about the impact of speed on the user experience.[8]

2007

Obama's presidential campaign brought in a new digital advisor, Dan Siroker. He introduced A/B Testing, a technique Google relied on for developing and testing new product features, into the campaign with the goal of converting visitors into donors.[8]

2011

Google ran 7,000 experiments on its search algorithm. Netflix, Amazon and eBay conducted similar experiments.[8]

Discussion

  • Why A/B Testing?

    With A/B Testing, you can get more out of your existing traffic. Since even small changes can lead to better conversion or lead generation, the return on investment (ROI) can sometimes be massive.[1] The results help us understand user behavior better and determine what impacts the user experience. Because testing validates assumptions, it removes guesswork and moves decision-making from "we think" to "we know".[2]

  • How should you do A/B Testing?

    The correct way to run an A/B test is to follow a scientific process:[2]

    • Identify Goal: Based on the business goal, decide on a conversion metric. Example: number of visitors who sign up for a free trial.
    • Observe User Behavior: Observe how users behave and understand their motivations.
    • Construct Hypothesis: Brainstorm hypotheses and prioritize them by expected impact and difficulty of implementation.
    • Create Variations: Decide which feature to vary, such as the color or size of the CTA button.
    • Run Experiment: Randomly assign users to test and control groups and collect relevant data for analysis.
    • Analyze Results: Once the experiment is complete, analyze which variation performed better and whether the difference is statistically significant (see the sketch after this list).
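
    As a minimal sketch of the analysis step, assuming a binary conversion metric such as sign-ups (the function and numbers below are illustrative, not from the source), a two-proportion z-test can check whether the observed difference between variants is statistically significant:

        import math

        def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
            """Compare conversion rates of control (A) and variant (B).

            Returns the z statistic; |z| > 1.96 indicates a difference
            that is significant at the 95% level (two-sided test).
            """
            p_a, p_b = conv_a / n_a, conv_b / n_b
            p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
            se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
            return (p_b - p_a) / se

        # e.g. 200/5000 sign-ups on A versus 250/5000 on B
        z = two_proportion_z_test(200, 5000, 250, 5000)
        print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")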
  • When should you not do A/B Testing?
    Fig 2 - Source: A/B Testing Literature Review, Information Architecture and Web Usability

    A/B Testing cannot make a good design choice on its own. It can only compare two or more design choices and help us decide which is better.

    Incremental changes can improve a design only up to a point. To improve it radically, the designer should think more creatively and understand user needs better.[5]

  • How does A/B testing affect SEO? What are the best practices?

    A/B Testing or multivariate testing does not pose any inherent risk to a website's search rank if done properly. However, it is important to follow the best practices (a sketch illustrating points 2 and 3 follows the list):[2]

    1. No Cloaking: Don't show search engine crawlers a different version of the page than users see.

    2. Use rel="canonical": Point variant URLs to the original page so that search engines don't index the variants separately.

    3. Use 302 Redirects Instead Of 301s: A temporary 302 tells search engines to keep the original URL in the index; a permanent 301 would replace it.

    4. Run Experiments Only As Long As Necessary: Once you have a conclusive result, remove the variants and update the page.
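
    To illustrate points 2 and 3, here is a minimal Flask sketch in Python (the routes, URLs and markup are hypothetical, for illustration only): the variant page declares the original URL as canonical, and some users are sent to it via a temporary 302 redirect:

        import random
        from flask import Flask, redirect, render_template_string

        app = Flask(__name__)

        # The <link rel="canonical"> tells search engines that the
        # original URL is the authoritative version of this content.
        VARIANT_B = """<html><head>
          <link rel="canonical" href="https://example.com/landing">
        </head><body><h1>Variant B headline</h1></body></html>"""

        @app.route("/landing")
        def landing():
            # 302 = temporary redirect: the variant URL should not
            # replace the original in the search index (a 301 would).
            if random.random() < 0.5:
                return redirect("/landing-b", code=302)
            return "<h1>Original headline</h1>"

        @app.route("/landing-b")
        def landing_b():
            return render_template_string(VARIANT_B)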


See Also

  • Multivariate Testing
  • Discrete Choice Modelling
  • Statistical Inference
  • Spurious Correlation

Cite As

Devopedia. 2018. "A/B Testing." Version 4, May 21. Accessed 2018-06-18. https://devopedia.org/a-b-testing