Feature tests give you all the power of traditional A/B tests, with the additional advantage of feature flags. Since they are created on top of feature flags, your experiments fit seamlessly into your software development lifecycle.
Feature tests give you access to powerful functionality like:
- Enterprise-scale experimentation
- Feature variables and variations
- Stats Engine for interpreting your experiment metrics
See [plan details](https://www.optimizely.com/contentassets/b09cd50720234d228c839a1c5db5e9e7/current-optimizely-full-stack-features.pdf) for information on which of the preceding features are offered by your plan.
Experiments let you go far beyond simple "feature on/feature off" comparisons. They help you tackle difficult questions like, "Which feature or component should we invest in building?" Feature rollouts, by contrast, are for when you are confident in that answer.
Imagine that you are building a product recommendations feature. With feature tests (experiments), you could create feature variables for the version of the recommendation algorithm, the label shown in the interface, and the number of catalog items displayed. You can test variations of each of these concurrently without deploying new code. Run an experiment to learn how the variations perform with your user base, pick the winning variation, and roll it out with a feature flag.
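To make the idea concrete, here is a minimal sketch of how feature variables drive different behavior per variation without a redeploy. The flag client here is a hand-rolled stand-in, and the variable names (`algorithm_version`, `label_text`, `num_items`) are hypothetical; a real setup would use the Optimizely SDK configured with your project's datafile.

```python
# Illustrative stand-in for a feature-flag/experiment client.
# All keys and values below are hypothetical examples.
import hashlib

VARIATIONS = {
    "control":   {"algorithm_version": "v1", "label_text": "Recommended",    "num_items": 3},
    "treatment": {"algorithm_version": "v2", "label_text": "Picked for you", "num_items": 5},
}

def bucket(user_id: str) -> str:
    """Deterministically assign a user to a variation (50/50 split)."""
    digest = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    return "control" if digest % 2 == 0 else "treatment"

def get_variables(user_id: str) -> dict:
    """Fetch the feature-variable values for this user's variation."""
    return VARIATIONS[bucket(user_id)]

# The application reads its configuration from the variables, so changing
# a variation's values changes behavior without shipping new code.
variables = get_variables("user-42")
print(variables["num_items"])
```

Because bucketing is deterministic, a returning user always sees the same variation, which is what makes the experiment's metrics comparable across variations.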
You can shorten your market feedback time by comparing two variations of a feature in a controlled experiment over a given time frame, then viewing analytics on which variation was most successful, right in the Optimizely app. Our metrics are powered by our sophisticated Stats Engine, which is built for the modern world of business decision-making. For more information, see our documentation on Stats Engine: How Optimizely calculates results, or read The New Stats Engine whitepaper.