The availability of features may depend on your plan type. Contact your Customer Success Manager if you have any questions.


A/B tests

An overview of A/B testing in Optimizely Feature Experimentation.

Overview

Optimizely Feature Experimentation gives you the power of traditional A/B tests with the additional advantage of feature flags. Because your experiments are created on top of feature flags using flag rules, they fit seamlessly into your software development lifecycle.

Experiments

Experiments let you go far beyond simple "flag on" or "flag off" comparisons. They help you tackle difficult questions like, "Which feature or component should we invest in building?" In contrast, flag deliveries (targeted deliveries) are for when you are already confident in the answer.

For example, suppose you are building a product sorting feature. With experiments, you could define flag variables for different versions of the sorting algorithm, different interface labels, and different numbers of catalog items. You can test each variable in variations concurrently without deploying new code. Run an A/B test to learn how these variations perform with your user base, pick the winning variation, and roll it out with a feature flag.
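The sketch below shows what this can look like in application code, using the Optimizely Python SDK. The flag key (`product_sort`) and variable names (`sort_algorithm`, `ui_label`, `catalog_page_size`) are hypothetical placeholders; substitute the keys defined for your own flag.

```python
from optimizely import optimizely

# Initialize the client with your project's SDK key.
optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")

# Create a user context; the experiment's traffic allocation determines
# which variation this user is bucketed into.
user = optimizely_client.create_user_context("user_123")

# Decide the (hypothetical) "product_sort" flag for this user.
decision = user.decide("product_sort")

if decision.enabled:
    # Each variation carries its own variable values, so every version of
    # the sorting behavior ships in a single deploy.
    algorithm = decision.variables.get("sort_algorithm")
    label = decision.variables.get("ui_label")
    page_size = decision.variables.get("catalog_page_size")
    print(f"Variation {decision.variation_key}: {algorithm}, {label}, {page_size}")
```

Because the variation logic lives in flag variables rather than in separate code paths, switching the winning variation on for everyone later is a configuration change, not a new release.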

You can shorten your market feedback time by comparing two variations of a flag in a controlled experiment over a set time frame, then view analytics right in the Optimizely app to see which variation was most successful. You can interpret your metrics with the help of Optimizely's sophisticated Stats Engine, built for the modern world of business decision-making.
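For Stats Engine to compare variations, your application reports the conversion events that back your experiment's metrics. Continuing the sketch above, the event key `purchase` is hypothetical and must match an event configured in your Optimizely project.

```python
# Record a conversion for the same user context decided above. Tracked
# events feed the experiment's metrics, which Stats Engine analyzes to
# determine which variation performed best.
user.track_event("purchase")
```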