If you're new to experimentation, you can get a lot done with a simple ON/OFF A/B test. This configuration has one flag with two variations:
One "flag_on" variation
One "flag_off" variation
In a flag's ruleset, your experiment must always be the first rule and must be the only experiment in the ruleset. In other words, you can run only one experiment at a time for a given flag.
To configure a basic A/B test:
- (Prerequisite) Create a flag.
- (Prerequisite) Handle user IDs.
- Create and configure an experiment rule in the Optimizely app. See the following section: Create an experiment.
- Integrate the example code that the Optimizely app generates with your application (a minimal sketch follows these steps). See the following section: Implement the test.
- QA your experiment in a non-production environment. See QA and test.
- Discard any QA user events, and then enable your experiment in a production environment.
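The generated code varies by SDK, but the Decide pattern looks roughly like the following minimal sketch. It assumes the Python SDK; the SDK key, flag key, and `handle_request` function are hypothetical placeholders, so substitute the sample code the Optimizely app generates for your flag:

```python
from optimizely import optimizely

# Hypothetical SDK key; copy the real initialization from the app's sample code.
client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")

def handle_request(user_id):
    # Use the same user ID here that you use elsewhere, so bucketing stays stable.
    user = client.create_user_context(user_id)

    # "my_flag" is a hypothetical flag key; substitute your own.
    decision = user.decide("my_flag")

    if decision.enabled:
        ...  # "flag_on" variation: run the new code path
    else:
        ...  # "flag_off" variation (control): run the existing code path
```

Branching on `decision.enabled` is what lets the same integration serve both the experiment and any later targeted delivery rules on the flag.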
To create a new experiment in the Optimizely app:
- Navigate to Flags, select your flag, and select your environment.
- Click Add Rule.
- Select A/B test.
- Configure your experiment in the following steps:
  - (Optional) Search for and add audiences. To create an audience, see Target audiences. Audiences evaluate in the order in which you drag and drop them, and you can choose whether to match each user on any or all of the audience conditions. For information about creating and passing advanced audience combinations with JSON, see Create advanced audience combinations.
  - Set the percentage slider to allocate the percentage of your audience(s) to bucket into the experiment. If you plan to change traffic allocation after the experiment starts, for example to take advantage of the Stats Accelerator feature, implement a user profile service before starting the experiment (see the user profile service sketch below). For more information, see Ensure consistent visitor bucketing.
  - Choose the flag variations to compare in the experiment. For a basic experiment, include one variation in which your flag is on and one in which it is off. For a more advanced A/B/n experiment, create variations with multiple flag variables. No matter how many variations you create, leave one variation with the feature flag off as a control. For more information about creating variations, see Create flag variations.
  - If you already implemented the flag using a Decide method, you don't need to take further action; Optimizely SDKs are designed so you can reuse the same flag implementation for different flag rules. If the flag isn't implemented yet, copy the sample integration code into your application code and edit it so that your feature code runs or does not run based on the decision Optimizely returns.
Remember, a user is evaluated against each flag rule in an ordered ruleset before being bucketed (or not) into a given rule's variation. For more information, see Interactions between flag rules.
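If you need consistent bucketing while changing traffic allocation, you supply a user profile service when you initialize the SDK. The following is a minimal in-memory sketch assuming the Python SDK's lookup/save interface; a real implementation would persist profiles to a database or cache rather than a process-local dict:

```python
from optimizely import optimizely

class InMemoryUserProfileService:
    """Stores bucketing decisions so users keep their variation across traffic changes."""

    def __init__(self):
        self._profiles = {}

    def lookup(self, user_id):
        # Return the saved profile dict for this user, or None if none exists.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # user_profile is a dict containing "user_id" and "experiment_bucket_map".
        self._profiles[user_profile["user_id"]] = user_profile

client = optimizely.Optimizely(
    sdk_key="YOUR_SDK_KEY",  # hypothetical key
    user_profile_service=InMemoryUserProfileService(),
)
```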
Once you've run a basic "on/off" A/B test, you can increase the power of your experiments by adding remote feature configurations, or flag variables.
Flag variables enable you to avoid hard-coding values in your application. Instead of deploying to update the variables, you can update them remotely in the Optimizely app. For more information, see Flag variations.
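For example, your code can read a variation's variable values from the decision at runtime. This sketch assumes the Python SDK and hypothetical variable keys ("sort_algorithm" and "page_size") defined on a hypothetical flag:

```python
from optimizely import optimizely

client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")  # hypothetical key
user = client.create_user_context("user123")
decision = user.decide("my_flag")  # hypothetical flag key

if decision.enabled:
    # decision.variables maps variable keys to this variation's values,
    # so you can change them in the Optimizely app without redeploying.
    sort_algorithm = decision.variables.get("sort_algorithm", "popularity")
    page_size = decision.variables.get("page_size", 20)
```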
To set up an A/B test with multiple variations:
- Create and configure a basic A/B test. See previous steps.
- Create flag variations containing multiple variables. See Create flag variations.
- Integrate the example code with your application (a sketch follows this list). See Implement flag variations.
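Putting it together, a multi-variation test typically branches on the variation key. This sketch makes the same assumptions as the ones above, with hypothetical variation keys:

```python
from optimizely import optimizely

client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")  # hypothetical key
user = client.create_user_context("user123")
decision = user.decide("my_flag")  # hypothetical flag key

if not decision.enabled:
    ...  # control variation with the flag off
elif decision.variation_key == "variation_a":
    ...  # first treatment (hypothetical variation key)
elif decision.variation_key == "variation_b":
    ...  # second treatment (hypothetical variation key)
```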