A/B tests

Overview

Optimizely gives you all the power of traditional A/B tests, with the additional advantage of feature flags. Since they're created on top of feature flags, your experiments fit seamlessly into your software-development lifecycle.

📘

See plan details for information on which of these features are included in your plan.

Why experiment?

Experiments let you go far beyond simple "flag on/flag off" comparisons. They help you tackle difficult questions like "Which feature or component should we invest in building?", while flag percentage deliveries are for when you are already confident in that answer.

Imagine that you're building a product sorting feature. With experiments, you could create flag variables for different versions of the sorting algorithm, different labels in the interface, and different numbers of catalog items. You can test each of these variables across variations concurrently without deploying new code. Run an A/B test to learn how the variations perform with your user base, pick the winning variation, and roll it out with a feature flag.
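For example, here is a minimal sketch of that experiment using the Optimizely Python SDK's Decide API. The flag key product_sort, the variable keys sort_algorithm, sort_label, and catalog_page_size, and the datafile.json path are all assumptions for illustration; substitute the keys defined in your own flag.

```python
from optimizely import optimizely

def render_catalog(algorithm, label, page_size):
    # Placeholder for your application's rendering code.
    print(f"Sorting by {algorithm}, label={label!r}, showing {page_size} items")

# Initialize the client from a local datafile (you can also use an SDK key).
with open("datafile.json") as f:
    optimizely_client = optimizely.Optimizely(datafile=f.read())

# Create a context for the user being bucketed into the experiment.
user = optimizely_client.create_user_context("user-123", {"plan": "premium"})

# Decide which variation of the hypothetical "product_sort" flag this user gets.
decision = user.decide("product_sort")

if decision.enabled:
    # Read the flag variables configured on the variation the user received.
    algorithm = decision.variables.get("sort_algorithm", "relevance")
    label = decision.variables.get("sort_label", "Sort by")
    page_size = decision.variables.get("catalog_page_size", 20)
else:
    # Default experience for users not in the experiment.
    algorithm, label, page_size = "relevance", "Sort by", 20

render_catalog(algorithm, label, page_size)
```

Because the algorithm, label, and item count are flag variables rather than code paths, you can change what each variation serves from the Optimizely app without redeploying.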

You can shorten your market feedback time by comparing two variations of a flag in a controlled experiment over a given time frame, then view analytics on which variation was most successful, right in the Optimizely app. You can interpret your metrics with the help of our sophisticated Stats Engine, which is built for the modern world of business decision-making. For more information, see The New Stats Engine Whitepaper.
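For your metrics to have anything to measure, each variation also needs to report conversion events. A minimal sketch, continuing from the user context above; the purchase event key is an assumption and must match an event defined in your Optimizely project.

```python
# When the user converts, record the event your experiment metrics are built on.
# "purchase" is a hypothetical event key defined in your Optimizely project.
user.track_event("purchase")
```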

📘

Stats Engine is not available in the current Optimizely app, but it will be re-enabled in a future release.

Next

Run A/B tests
