
Run A/B tests

How to set up a simple A/B or ON/OFF test in Optimizely Feature Experimentation.

If you are new to experimentation, you can get a lot done with a simple ON or OFF A/B test. This configuration has one flag with two variations:

  • One "flag_on" variation.

  • One "flag_off" variation.

Restrictions

If you intend to have multiple experiments and flag deliveries (targeted deliveries) for a flag, your experiment must always be the first rule in the flag's ruleset.

Setup overview

To configure a basic A/B test:

  1. (Prerequisite) Create a flag.

  2. (Prerequisite) Handle user IDs.

  3. Create and configure an A/B Test rule in the Optimizely app.

  4. Integrate the example decide code that the Optimizely Feature Experimentation app generates with your application.

  5. Test your experiment in a non-production environment. See QA and troubleshoot.

  6. Discard any QA user events and enable your experiment in a production environment.

Create an experiment

Create A/B test rule

  1. Select a flag from the Flags list.
  2. Select the environment you want to target.
  3. Click Add Rule.
  4. Select A/B Test.

Configure your A/B test rule

  1. Enter a Name.
  2. The Key is automatically created based on the Name. You can optionally update it.
  3. (Optional) Click Add description to add a description. Consider adding your hypothesis for the A/B test rule as the description.
  4. (Optional) Search for and add audiences. To create an audience, see Target audiences. Audiences are evaluated in the order in which you drag and drop them. You can choose whether to match each user on any or all of the audience conditions.
  5. Set the Ramp percentage to allocate the percentage of your audience to bucket into the experiment.

📘

Note

If you plan to change the traffic's Ramp percentage after starting the experiment, or if you select Stats Accelerator as the Distribution Mode, you must implement a user profile service before starting the experiment.

For more information, see Ensure consistent user bucketing.
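
For illustration, here is a minimal sketch of a user profile service for the Python SDK. It assumes an in-memory store, which is a stand-in; production code would persist profiles somewhere durable and shared across processes. The SDK calls lookup and save with profile dicts containing user_id and experiment_bucket_map:

from optimizely import optimizely

class InMemoryUserProfileService:
    """Minimal user profile service sketch; the in-memory dict is a stand-in."""

    def __init__(self):
        self._profiles = {}

    def lookup(self, user_id):
        # Return the saved profile for this user, or None if not seen before.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # user_profile contains 'user_id' and 'experiment_bucket_map'.
        self._profiles[user_profile["user_id"]] = user_profile

optimizely_client = optimizely.Optimizely(
    sdk_key="YOUR_SDK_KEY",  # placeholder SDK key
    user_profile_service=InMemoryUserProfileService(),
)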

  6. Add Metrics based on tracked user events. See Create events to create and track events (a short tracking sketch follows this list). For information about selecting metrics, see Choose metrics.

  7. Choose how your audience will be distributed using Distribution Mode. Use the drop-down list to select either:

    1. Manual – By default, variations are given equal traffic distribution. Customize this value for your experiment's requirements.
    2. Stats Accelerator – Stats Accelerator automatically adjusts traffic distribution to minimize the time to statistical significance. For information, see Stats accelerator.
  8. Choose the flag variations to compare in the experiment. For a basic experiment, you can include one variation in which your flag is on and one in which your flag is off. For a more advanced A/B/n experiment, create variations with multiple flag variables. No matter how many variations you make, leave one variation with the feature flag off as a control. For information about creating variations, see Create flag variations.

  9. (Optional) Click Allowlist to force up to 50 users into specific variations, and enter each User ID. See Allowlisting.

  10. (Optional) Add the experiment to an Exclusion Group.

  11. Click Save.

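As noted in the metrics step, metrics are built on events that your code records with the SDK's track call. Here is a minimal sketch using the Python SDK, assuming an initialized client; the event key "purchase" is a placeholder that must match an event you created in the Optimizely app:

from optimizely import optimizely

optimizely_client = optimizely.Optimizely(sdk_key="YOUR_SDK_KEY")  # placeholder SDK key
user = optimizely_client.create_user_context("user123", {"logged_in": True})
# Record a conversion event for this user; "purchase" must match an event
# key created in the Optimizely app.
user.track_event("purchase")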

Implement the experiment using the decide method

Flag is implemented in your code

If you have already implemented the flag using a Decide method, you do not need to take further action (Optimizely Feature Experimentation SDKs are designed so you can reuse the exact flag implementation for different flag rules).

Flag is not implemented in your code

If the flag is not implemented yet, copy the sample flag integration code into your application code and edit it so that your feature code runs or does not run based on the output of the decision received from Optimizely.

Use the Decide method to enable or disable the flag for a user:

Go

// Decide if user sees a feature flag variation
user := optimizely.CreateUserContext("user123", map[string]interface{}{"logged_in": true})
decision := user.Decide("flag_1", nil)
enabled := decision.Enabled

C#

// Decide if user sees a feature flag variation
var user = optimizely.CreateUserContext("user123", new UserAttributes { { "logged_in", true } });
var decision = user.Decide("flag_1");
var enabled = decision.Enabled;

Flutter

// Decide if user sees a feature flag variation
var user = await flutterSDK.createUserContext(userId: "user123");
var decision = await user.decide("flag_1");
var enabled = decision.enabled;

Java

// Decide if user sees a feature flag variation
OptimizelyUserContext user = optimizely.createUserContext("user123", new HashMap<String, Object>() { { put("logged_in", true); } });
OptimizelyDecision decision = user.decide("flag_1");
Boolean enabled = decision.getEnabled();

JavaScript

// Decide if user sees a feature flag variation
const user = optimizely.createUserContext('user123', { logged_in: true });
const decision = user.decide('flag_1');
const enabled = decision.enabled;

PHP

// Decide if user sees a feature flag variation
$user = $optimizely->createUserContext('user123', ['logged_in' => true]);
$decision = $user->decide('flag_1');
$enabled = $decision->getEnabled();

Python

# Decide if user sees a feature flag variation
user = optimizely.create_user_context("user123", {"logged_in": True})
decision = user.decide("flag_1")
enabled = decision.enabled

React

// Decide if user sees a feature flag variation
const [decision] = useDecision('flag_1', null, { overrideUserId: 'user123', overrideAttributes: { logged_in: true } });
const enabled = decision.enabled;

Ruby

# Decide if user sees a feature flag variation
user = optimizely_client.create_user_context('user123', {'logged_in' => true})
decision = user.decide('flag_1')
enabled = decision.enabled

Swift

// Decide if user sees a feature flag variation
let user = optimizely.createUserContext(userId: "user123", attributes: ["logged_in": true])
let decision = user.decide(key: "flag_1")
let enabled = decision.enabled

For more detailed examples, see the reference documentation for your SDK.

Adapt the integration code in your application so that it shows or hides the flag's functionality for a given user ID based on the boolean value your application receives.

The goal of the Decide method is to separate the process of developing and releasing code from the decision to turn a flag on. The value this method returns is determined by your flag rules. For example, the method returns false if the current user is assigned to a control or "off" variation in an experiment.
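
For example, here is a minimal Python sketch of gating feature code on the returned boolean, assuming an initialized client (see the sketches above); the show_* functions are placeholders for your own feature code:

user = optimizely_client.create_user_context("user123", {"logged_in": True})
decision = user.decide("flag_1")

if decision.enabled:
    # The user was bucketed into a variation with the flag on.
    show_new_experience()      # placeholder for your feature code
else:
    # Control / "off" variation: keep the existing behavior.
    show_current_experience()  # placeholder for your feature code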

Remember, each flag rule in an ordered ruleset is evaluated for a user, in order, until the user is bucketed into a rule's variation (or falls through all rules). See Interactions between flag rules for information.

Start your rule and flag

After creating your A/B test and implementing the experiment using the decide method, you can start your test.

  1. Click Run on your A/B test rule.

  2. Toggle your flag On.


Congrats! Your A/B test is running.

Test with flag variables

After you have run a basic "on/off" A/B test, you can increase the power of your experiments by adding remote feature configurations, or flag variables.

Flag variables enable you to avoid hard-coding values in your application. Instead of updating the values by redeploying, you can edit them remotely in the Optimizely Feature Experimentation app. For information about flag variations, see flag variations.

To set up an A/B test with multiple variations:

  1. Create and configure a basic A/B test. See previous steps.
  2. Create flag variations containing multiple variables.
  3. Integrate the example code with your application.
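
As a sketch of step 3, the Python SDK exposes the variable values for the variation a user received on the decision's variables dict; the "sort_method" key and apply_sort function are hypothetical placeholders for your own configuration and feature code:

user = optimizely_client.create_user_context("user123", {"logged_in": True})
decision = user.decide("flag_1")

# 'variables' holds the variable values defined on the variation this
# user received; "sort_method" is a hypothetical variable key.
sort_method = decision.variables.get("sort_method")
if decision.enabled:
    apply_sort(sort_method)  # placeholder for your feature code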