
Run A/B tests

How to set up a simple A/B or ON/OFF test in Optimizely Feature Experimentation.

If you are new to experimentation, you can get a lot done with a simple ON/OFF A/B test. This configuration has one flag with two variations:

  • One "flag_on" variation

  • One "flag_off" variation

Restrictions

In your rulesets for a flag, your experiment must always be the first rule and must be the only experiment in the ruleset. In other words, you can only run one experiment at a time for a flag.

📘

Note

Having multiple experiments per flag is currently in Beta. Contact your customer success manager (CSM) if you want to be added to the Beta. See Interactions between flag rules for more information.

Setup overview

To configure a basic A/B test:

  1. (Prerequisite) Create a flag.

  2. (Prerequisite) Handle user IDs; a code sketch of both prerequisites follows this list.

  3. Create and configure an experiment rule in the Optimizely app. See the section: Create an experiment.

  4. Integrate the example code that the Optimizely Feature Experimentation app generates with your application. See the section: Implement the experiment.

  5. QA your experiment in a non-production environment. See QA and troubleshoot.

  6. Discard any QA user events and enable your experiment in a production environment.
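
As a rough illustration of the two prerequisites, the sketch below initializes the SDK client and creates a user context with a stable user ID. It assumes the Optimizely Python SDK; the SDK key, user ID, and attributes are placeholders, and the same pattern applies in the other SDK languages.

```python
from optimizely import optimizely

# Initialize the SDK client once at application startup.
optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")  # placeholder SDK key

# Handle user IDs: pass a stable, unique ID for each user so they are
# bucketed consistently across sessions. Attributes are optional and are
# evaluated against any audience conditions on your rules.
user = optimizely_client.create_user_context(
    "user-123",                            # placeholder user ID
    {"device": "iphone", "plan": "free"},  # placeholder attributes
)
```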

Create an experiment

To create a new experiment in the Optimizely app:

  1. Navigate to Flags, select your flag and select your environment.
  2. Click Add Rule.
Add an A/B test rule

  1. Select A/B Test.
  2. Configure your experiment in the following steps:

  1. (Optional) Search for and add audiences. To create an audience, see Target audiences. Audiences evaluate in the order in which you drag and drop them. You can choose whether to match each user on any or all of the audience conditions.

  2. Use the percentage slider to set what percentage of your audience to bucket into the experiment.

📘

Note

If you plan to change traffic allocation after the experiment has started, you must implement a user profile service before starting the experiment. For more information, see Ensure consistent user bucketing. A minimal sketch appears after the list below.

One reason to change traffic allocation is if you are using Stats Accelerator.

  3. Add metrics based on tracked user events. See Create events to create and track events. For more information about selecting metrics, see Choose metrics.

  4. Choose how your audience is distributed across variations using Distribution Mode. Use the drop-down to select either:

    1. Manual–By default, variations are given equal traffic distribution. Customize this value for your experiment's requirements.
    2. Stats Accelerator–To reach statistical significance faster or to maximize the return of the experiment, use Optimizely’s machine learning engine, the Stats Accelerator. For more information, see Get to statistical significance faster with Stats Accelerator. For information about when to use Stats Accelerator versus running a multi-armed bandit optimization, see Multi-armed bandits vs Stats Accelerator.
  5. Choose the flag variations to compare in the experiment. For a basic experiment, you can include one variation in which your flag is on and one in which your flag is off. For a more advanced A/B/n experiment, create variations with multiple flag variables. No matter how many variations you create, leave one variation with the feature flag off as a control. For more information about creating variations, see Create flag variations.
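
As a rough sketch of the user profile service mentioned in the note above, the Optimizely Python SDK's user_profile_service parameter accepts an object exposing lookup and save methods. The in-memory dictionary here is a placeholder for durable storage such as a database or cache.

```python
from optimizely import optimizely


class InMemoryUserProfileService:
    """Sketch of a user profile service: persists each user's experiment
    bucket map so their variation stays sticky even if traffic changes."""

    def __init__(self):
        self._profiles = {}  # placeholder store; use a database or cache in production

    def lookup(self, user_id):
        # Return the previously saved profile dict for this user, or None.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # user_profile is a dict containing "user_id" and "experiment_bucket_map".
        self._profiles[user_profile["user_id"]] = user_profile


optimizely_client = optimizely.Optimizely(
    sdk_key="YOUR_SDK_KEY",  # placeholder SDK key
    user_profile_service=InMemoryUserProfileService(),
)
```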

Implement the experiment

If you have already implemented the flag using a Decide method, you do not need to take further action (Optimizely Feature Experimentation SDKs are designed so you can reuse the exact flag implementation for different flag rules). If the flag is not implemented yet, copy the sample integration code into your application code and edit it so that your feature code runs or does not run based on the output of the decision received from Optimizely.
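
For example, a minimal implementation with the Python SDK might look like the sketch below; the flag key, event key, and print statements are placeholders for your own keys and feature code.

```python
from optimizely import optimizely

optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")   # placeholder SDK key
user = optimizely_client.create_user_context("user-123")            # placeholder user ID

# Decide evaluates the flag's ruleset (your experiment rule first) for this user.
decision = user.decide("my_flag")  # placeholder flag key

if decision.enabled:
    # "flag_on" variation: run the new feature code.
    print("showing the new experience")      # stand-in for your feature code
else:
    # "flag_off" variation (control): keep the existing behavior.
    print("showing the current experience")  # stand-in for your feature code

# Track the user event that backs your experiment metrics.
user.track_event("purchase")  # placeholder event key
```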

Remember, a user evaluates each flag rule in an ordered ruleset before being bucketed into a given rule variation or not. See Interactions between flag rules for more information.

Test with flag variables

Once you have run a basic "on/off" A/B test, you can increase the power of your experiments by adding remote feature configurations or flag variables.

Flag variables enable you to avoid hard-coding values in your application. Instead of updating variables by deploying new code, you can edit them remotely in the Optimizely Feature Experimentation app. For more information, see Flag variations.
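
As a rough sketch with the Python SDK, variable values can be read from the decision at runtime; the flag key, variable keys, and defaults below are placeholders.

```python
from optimizely import optimizely

optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")  # placeholder SDK key
user = optimizely_client.create_user_context("user-123")           # placeholder user ID
decision = user.decide("my_flag")                                  # placeholder flag key

if decision.enabled:
    # decision.variables holds the variable values for the variation this user
    # was bucketed into, so edits in the Optimizely app apply without a redeploy.
    button_text = decision.variables.get("button_text", "Buy now")  # placeholder variable
    discount = decision.variables.get("discount_percent", 0)        # placeholder variable
    print(f"render checkout: {button_text} ({discount}% off)")      # stand-in for feature code
```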

To set up an A/B test with multiple variations:

  1. Create and configure a basic A/B test. See previous steps.
  2. Create flag variations containing multiple variables. See Create flag variations.
  3. Integrate the example code with your application. See Implement flag variations.
