If you are new to experimentation, you can get a lot done with a simple A/B test. This configuration has one flag with two variations:

- One "flag_on" variation
- One "flag_off" variation
In a flag's ruleset, your experiment must always be the first rule and must be the only experiment in the ruleset. In other words, you can run only one experiment at a time for a flag.
Having multiple experiments per flag is currently in Beta. Contact your customer success manager (CSM) if you want to be added to the Beta. See [interactions between flag rules](🔗) for more information.
## Setup overview
To configure a basic A/B test:
1. (Prerequisite) [Create a flag](🔗).
2. (Prerequisite) [Handle user IDs](🔗).
3. Create and configure an experiment rule in the Optimizely app. See [Create an experiment](🔗).
4. Integrate the example code that the Optimizely Feature Experimentation app generates with your application. See [Implement the test](🔗).
5. QA your experiment in a non-production environment. See [QA and troubleshoot](🔗).
6. Discard any QA user events, then enable your experiment in a production environment.
## Create an experiment
To create a new experiment in the Optimizely app:

1. Navigate to **Flags**, select your flag, and select your environment.
2. Click **Add Rule**.
3. Select **A/B Test**.
4. Configure your experiment in the following steps:
   1. (Optional) Search for and add audiences. To create an audience, see [Target audiences](🔗). Audiences evaluate in the order in which you drag and drop them. You can choose whether to match each user on any or all of the audience conditions.
   2. Use the percentage slider to allocate how much of your audience to bucket into the experiment.
      If you plan to change the traffic allocation after the experiment starts, you must implement a user profile service before starting the experiment. For more information, see [Ensure consistent user bucketing](🔗). For example, you might change traffic allocation if you are using Stats Accelerator.
   3. Add metrics based on tracked user events. See [Create events](🔗) to create and track events. For more information about selecting metrics, see [Choose metrics](🔗).
   4. Choose how your audience is distributed using **Distribution Mode**. Use the drop-down to select either:
      - **Manual** – By default, variations are given equal traffic distribution. Customize this value for your experiment's requirements.
      - **Stats Accelerator** – To reach statistical significance faster or to maximize the return of the experiment, use Optimizely's machine learning engine, the Stats Accelerator. For more information, see [Get to statistical significance faster with Stats Accelerator](🔗). For information about when to use Stats Accelerator versus running a multi-armed bandit optimization, see [Multi-armed bandits vs Stats Accelerator](🔗).
   5. Choose the flag variations to compare in the experiment. For a basic experiment, you can include one variation in which your flag is on and one in which your flag is off. For a more advanced A/B/n experiment, create variations with multiple flag variables. No matter how many variations you make, leave one variation with the feature flag off as a control. For more information about creating variations, see [Create flag variations](🔗).
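To see why changing traffic allocation without a user profile service is risky, consider how deterministic bucketing works. The sketch below is illustrative only, not the SDK's actual algorithm or constants (the real SDKs use MurmurHash; this sketch uses MD5, and `profile_store` is a stand-in for a real user profile service): shifting the distribution moves variation cutoffs, so some users are silently reassigned unless their original assignment was persisted.

```python
import hashlib

BUCKET_SPACE = 10000  # illustrative bucket space, not the SDK's actual constant

def bucket(user_id: str, experiment_key: str) -> int:
    """Deterministically map a user to a bucket in [0, BUCKET_SPACE)."""
    digest = hashlib.md5(f"{experiment_key}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % BUCKET_SPACE

def assign(user_id: str, experiment_key: str, distribution) -> str:
    """distribution: list of (variation_key, percent). Assign by bucket cutoffs."""
    b = bucket(user_id, experiment_key)
    cutoff = 0.0
    for variation, pct in distribution:
        cutoff += pct / 100 * BUCKET_SPACE
        if b < cutoff:
            return variation
    return distribution[-1][0]

profile_store = {}  # stand-in for a user profile service

def sticky_assign(user_id: str, experiment_key: str, distribution) -> str:
    """Return the stored assignment if one exists; otherwise assign and store it."""
    key = (user_id, experiment_key)
    if key not in profile_store:
        profile_store[key] = assign(user_id, experiment_key, distribution)
    return profile_store[key]

users = [f"user-{i}" for i in range(500)]
before = {u: assign(u, "exp", [("on", 50), ("off", 50)]) for u in users}
# Changing the distribution re-buckets some users...
after = {u: assign(u, "exp", [("on", 80), ("off", 20)]) for u in users}
moved = [u for u in users if before[u] != after[u]]
# ...unless assignments were persisted first.
for u in users:
    sticky_assign(u, "exp", [("on", 50), ("off", 50)])
sticky = {u: sticky_assign(u, "exp", [("on", 80), ("off", 20)]) for u in users}
```

Here `moved` is non-empty (buckets between the old and new cutoffs change variation), while `sticky` matches `before` for every user because the stored profile wins.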
## Implement the experiment
If you have already implemented the flag using a Decide method, you do not need to take further action; Optimizely Feature Experimentation SDKs are designed so you can reuse the same flag implementation across different flag rules. If you have not implemented the flag yet, copy the sample integration code into your application code and edit it so that your feature code runs, or does not run, based on the decision returned from Optimizely.
Remember, each flag rule in an ordered ruleset is evaluated for a user in order, before the user is bucketed into a given rule's variation (or not). See [Interactions between flag rules](🔗) for more information.
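The shape of a Decide-based integration can be sketched as follows. This is a schematic with a stub client, not the real SDK: the flag key `product_sort`, the event key `page_view`, and the `StubUserContext` class are hypothetical, though the fields on the decision object (`enabled`, `variation_key`, `variables`) mirror what the SDKs return. In a real integration you would create a user context from an initialized Optimizely client instead.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Stub decision object; the SDK's decision exposes similar fields."""
    enabled: bool
    variation_key: str
    variables: dict

class StubUserContext:
    """Stand-in for the SDK's user context; decide() normally evaluates the ruleset."""
    def __init__(self, user_id: str, decisions: dict):
        self.user_id = user_id
        self._decisions = decisions

    def decide(self, flag_key: str) -> Decision:
        return self._decisions[flag_key]

    def track_event(self, event_key: str) -> None:
        print(f"tracked {event_key} for {self.user_id}")

def render_page(user) -> str:
    # Gate the feature code on the decision, not on a hard-coded condition.
    decision = user.decide("product_sort")  # hypothetical flag key
    if decision.enabled:
        page = f"sorted page ({decision.variation_key})"
    else:
        page = "default page"
    user.track_event("page_view")  # hypothetical metric event for the experiment
    return page

user = StubUserContext("user-123", {"product_sort": Decision(True, "flag_on", {})})
print(render_page(user))  # → sorted page (flag_on)
```

Because the decision object carries both the on/off state and the variation key, the same `render_page` code serves every rule in the ruleset unchanged.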
## Test with flag variables
Once you have run a basic "on/off" A/B test, you can increase the power of your experiments by adding remote feature configurations or flag variables.
Flag variables enable you to avoid hard-coding values in your application. Instead of updating the values by deploying code, you can edit them remotely in the Optimizely Feature Experimentation app. For more information, see [Flag variations](🔗).
To set up an A/B test with multiple variations:

1. Create and configure a basic A/B test. See the previous steps.
2. Create flag variations containing multiple variables. See [Create flag variations](🔗).
3. Integrate the example code with your application. See [Implement flag variations](🔗).
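In application code, the pattern is to read variable values from the decision and fall back to hard-coded defaults when the flag is off. The sketch below is a minimal illustration, assuming hypothetical variable names (`sort_method`, `page_size`) and a decision already obtained from a Decide call:

```python
# Hypothetical defaults you would otherwise hard-code in the application.
DEFAULTS = {"sort_method": "alphabetical", "page_size": 20}

def get_sort_config(decision_enabled: bool, decision_variables: dict) -> dict:
    """Merge remotely configured flag variables over hard-coded defaults."""
    if not decision_enabled:
        return dict(DEFAULTS)
    return {**DEFAULTS, **decision_variables}

# A variation might deliver only some variable values remotely;
# the rest fall back to the defaults.
config = get_sort_config(True, {"sort_method": "popularity"})
print(config["sort_method"])  # → popularity
print(get_sort_config(False, {})["sort_method"])  # → alphabetical
```

Editing a variable value in the Optimizely app then changes `config` on the next decision, with no deploy required.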