A/B tests

An overview of A/B testing in Optimizely Feature Experimentation.

Overview

Optimizely Feature Experimentation gives you all the power of traditional A/B tests, with the additional advantage of feature flags. Since they are created on top of feature flags, your experiments fit seamlessly into your software development lifecycle.

📘

Note

See plan details for information on which features are available with your plan.

Why experiment?

Experiments let you go far beyond simple "flag on/flag off" comparisons. Experiments help you tackle difficult questions like "which feature or component should we invest in building?", while flag percentage deliveries are for when you are already confident in the answer.

Imagine that you're building a product sorting feature. With experiments, you could define flag variables for different versions of the sorting algorithm, different labels in the interface, and different numbers of catalog items. You can test each of these variables in variations concurrently without deploying new code. Run an A/B test to learn how these variations perform with your user base, pick the winning variation, and roll it out with a feature flag.
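To make the idea concrete, the sketch below shows how variation assignment for a flag like the product sorting example could work conceptually. The flag key, variable names, and values are hypothetical, and the hash-based bucketing is a simplified stand-in for what a real SDK does internally (real implementations also honor traffic allocation and audience targeting), not Optimizely's actual API:

```python
import hashlib

# Hypothetical variation configs for a "product_sort" flag. Each variation
# bundles several flag variables that are tested together, so no new code
# deploy is needed to change what a user sees.
VARIATIONS = {
    "control":   {"sort_algorithm": "popularity", "label": "Sort by",  "page_size": 20},
    "treatment": {"sort_algorithm": "relevance",  "label": "Order by", "page_size": 40},
}

def assign_variation(flag_key: str, user_id: str) -> str:
    """Deterministically bucket a user into a variation (50/50 split).

    Hashing the flag key and user ID together means the same user always
    sees the same variation for this flag, without storing any state.
    """
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000  # bucket range 0..9999
    return "control" if bucket < 5000 else "treatment"

variation = assign_variation("product_sort", "user-42")
variables = VARIATIONS[variation]  # the variable values this user experiences
```

Because the assignment is a pure function of the flag key and user ID, it is stable across sessions and across services, which is what lets all variations run concurrently against a live user base.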

You can shorten your market feedback time by comparing two variations of a flag in a controlled experiment over a given time frame, then viewing analytics on which variation was most successful, right in the Optimizely app. You can interpret your metrics with the help of our sophisticated Stats Engine, built for the modern world of business decision-making. See the New Stats Engine Whitepaper.
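For intuition on what "comparing two variations" means statistically, here is a classical fixed-horizon two-proportion z-test on conversion counts. Note that Stats Engine uses sequential testing, which differs from this fixed-horizon approach, so this is only a conceptual baseline, not how Optimizely computes results; the sample numbers are made up:

```python
import math

def two_proportion_z(conversions_a: int, visitors_a: int,
                     conversions_b: int, visitors_b: int) -> float:
    """Classical fixed-horizon z-statistic comparing two conversion rates.

    A positive value means variation B converts better than variation A;
    |z| > 1.96 corresponds to roughly 95% confidence in this fixed-horizon
    framing.
    """
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variations are equal.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Illustrative numbers: 12% vs 15% conversion over 1,000 visitors each.
z = two_proportion_z(120, 1000, 150, 1000)
```

The practical advantage of a sequential approach like Stats Engine over this fixed-horizon test is that you can monitor results continuously and stop the experiment as soon as a conclusion is reached, rather than committing to a sample size up front.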

Next

Run A/B tests