
Run A/B tests

This topic describes how to set up a simple on/off A/B test in Optimizely Full Stack.

If you are new to experimentation, you can get a lot done with a simple on/off A/B test. This configuration has one flag with two variations:

  • One "flag_on" variation

  • One "flag_off" variation

Restrictions

In a flag's ruleset, your experiment must always be the first rule and must be the only experiment. In other words, you can run only one experiment at a time for a flag.

Setup overview

To configure a basic A/B test:

  1. (Prerequisite) Create a flag.

  2. (Prerequisite) Handle user IDs.

  3. Create and configure an experiment rule in the Optimizely app. See the following section: Create an experiment.

  4. Integrate the example code that the Optimizely app generates with your application. See the following section: Implement the experiment.

  5. QA your experiment in a non-production environment. See QA and troubleshoot.

  6. Discard any QA user events, and enable your experiment in a production environment.

Create an experiment

To create a new experiment in the Optimizely app:

  1. Navigate to Flags, select your flag, and select your environment.
  2. Click Add Rule.

  3. Select A/B test.
  4. Configure your experiment in the following steps:

  1. (Optional) Search for and add audiences. To create an audience, see Target audiences. Audiences evaluate in the order in which you drag and drop them. You can choose whether to match each user on any or all of the audience conditions.

  2. Set the percentage slider to allocate the percentage of your audience(s) to bucket into the experiment. If you plan to change the traffic allocation after you start running the experiment, for example to take advantage of the Stats Accelerator feature, you'll need to implement a user profile service before starting the experiment (see the sketch after this list). For more information, see Ensure consistent user bucketing.

  3. Add metrics based on tracked user events. To create and track events, see Create events. For more information about selecting metrics, see Choose metrics.

  4. Choose the flag variations to compare in the experiment. For a basic experiment, you can include one variation in which your flag is on, and one in which your flag is off. For a more advanced A/B/n experiment, create variations with multiple flag variables. No matter how many variations you create, leave one variation with the feature flag off as a control. For more information about creating variations, see Create flag variations.
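If you need consistent bucketing (step 2), the Python SDK accepts a user profile service object that implements `lookup` and `save`. The following is a minimal in-memory sketch with a placeholder SDK key; a production service would persist profiles to a durable store such as a database:

```python
from optimizely import optimizely


class InMemoryUserProfileService:
    """Minimal user profile service: the SDK calls lookup/save to pin
    each user to the variation they were first bucketed into."""

    def __init__(self):
        self._profiles = {}

    def lookup(self, user_id):
        # Return the saved profile dict, or None for a first-time user.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # user_profile looks like:
        # {"user_id": "...", "experiment_bucket_map": {"<exp_id>": {"variation_id": "..."}}}
        self._profiles[user_profile["user_id"]] = user_profile


optimizely_client = optimizely.Optimizely(
    sdk_key="YOUR_SDK_KEY",  # placeholder
    user_profile_service=InMemoryUserProfileService(),
)
```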

Implement the experiment

If you already implemented the flag using a Decide method, you do not need to take further action; Optimizely SDKs are designed so you can reuse the same flag implementation for different flag rules. If the flag is not implemented yet, copy the sample integration code into your application code and edit it so that your feature code runs, or does not run, based on the decision that your application receives from Optimizely.
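For orientation, a decision check generally has the following shape. This is a hedged sketch using the Python SDK; the SDK key, user ID, flag key `product_sort`, and event key `purchase` are placeholders, not values from your project:

```python
from optimizely import optimizely

# Placeholder SDK key; use the key for your own project and environment.
optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")

# Create a user context with the user ID your application manages.
user = optimizely_client.create_user_context("user-123", {"logged_in": True})

# Evaluate the flag's ruleset for this user.
decision = user.decide("product_sort")  # placeholder flag key

if decision.enabled:
    print("flag_on variation: run the feature code under test")
else:
    print("flag_off variation: run the control experience")

# Record a conversion event backing one of the experiment's metrics.
user.track_event("purchase")  # placeholder event key
```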

Remember, a user is evaluated against each flag rule in an ordered ruleset before being bucketed into a given rule's variation, or not bucketed at all. For more information, see Interactions between flag rules.

Test with flag variables

Once you have run a basic "on/off" A/B test, you can increase the power of your experiments by adding remote feature configurations, or flag variables.

Flag variables enable you to avoid hard-coding values in your application. Instead of updating the values by deploying, you can update them remotely in the Optimizely app. For more information, see Flag variations.
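As an illustration of reading a variable at runtime (the flag key `product_sort` and variable key `sort_method` here are hypothetical, not from your project), a Python SDK sketch might look like:

```python
from optimizely import optimizely

optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")  # placeholder
user = optimizely_client.create_user_context("user-123")

decision = user.decide("product_sort")  # hypothetical flag key

if decision.enabled:
    # decision.variables holds the variable values for whichever variation
    # this user landed in, so changing them in the Optimizely app requires
    # no redeploy.
    sort_method = decision.variables.get("sort_method", "popular")  # hypothetical variable
    print(f"variation {decision.variation_key}: sorting by {sort_method}")
else:
    print("flag off: fall back to the hard-coded default")
```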

To set up an A/B test with multiple variations:

  1. Create and configure a basic A/B test. See previous steps.
  2. Create flag variations containing multiple variables. See Create flag variations.
  3. Integrate the example code with your application. See Implement flag variations.
