

Not all experiments are tied to specific features that you have already flagged in your code. Sometimes, you will want to run a standalone test to answer a specific question: which of two (or more) variations performs best? For example, is it more effective to sort the products on a category page by price or by category?

These one-off experiments are called **A/B tests**, as opposed to [feature tests](🔗) that run on features you have already flagged. With A/B tests, you define two or more **variation keys** and then implement a different code path for each variation. From the Optimizely interface, you can determine which users are eligible for the experiment and how to split traffic between the variations, as well as the [metrics](🔗) you will use to measure each variation's performance.

## 1. Select A/B Test in your project

In the **Experiments** tab, click **Create New Experiment** and select **A/B Test**.





## 2. Set an experiment key

Specify an experiment key. Your experiment key must contain only alphanumeric characters, hyphens, and underscores. The key must also be unique for your Optimizely project so you can correctly disambiguate experiments in your application.

Do not change the experiment key without making the corresponding change in your code.
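
For example, keeping the key in one place and checking it against the allowed character set can help prevent the UI and your code from drifting apart. A minimal sketch in Python; the key name is hypothetical:

```python
import re

# Hypothetical experiment key; it must match the key entered in the Optimizely UI exactly.
EXPERIMENT_KEY = "product_sort_test"

# Experiment keys may contain only alphanumeric characters, hyphens, and underscores.
assert re.fullmatch(r"[A-Za-z0-9_-]+", EXPERIMENT_KEY), "invalid experiment key"
```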





## 3. Set experiment traffic allocation

The traffic allocation is the fraction of your total traffic to include in the experiment, specified as a percentage.

For example, suppose you set an allocation of 50% for an experiment that is triggered when a user performs a search. This means:

  • The experiment is triggered when a visitor performs a search, but not for every such visitor: 50% of users who search are included in the experiment, and the other 50% are not.

  • Users who never perform a search are not in the experiment at all. In other words, the traffic allocation percentage may not apply to all of your application's traffic, as the sketch below illustrates.
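
As a rough worked example of how the allocation applies only to triggering traffic (all numbers are made up for illustration):

```python
# Hypothetical traffic numbers, for illustration only.
total_visitors = 10_000
visitors_who_search = 4_000      # only these visitors trigger the experiment
traffic_allocation = 0.50        # the 50% allocation set in the Optimizely UI

in_experiment = visitors_who_search * traffic_allocation
print(in_experiment)  # 2000.0 -- 50% of searching traffic, but only 20% of all traffic
```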





You can stick with the default 50%/50% split that Optimizely sets you up with, or you can increase the traffic allocation to get to statistical significance faster.

For more information, see our help center article [Changing traffic allocation and distribution in Optimizely](🔗).

Optimizely determines the traffic allocation at the point where you call the Activate method in the SDK.

You can also add your experiment to an [exclusion group](🔗) at this point.

## 4. Set variation keys and traffic distribution

Variations are the different code paths you want to experiment on. Enter a unique variation key to identify the variation in the experiment and optionally a short, human-readable description for reporting purposes.

You must specify at least one variation. There is no limit to how many variations you can create.

Use the _Distribution Mode_ dropdown to select how you distribute traffic between your variations:

  • _Manual_ - By default, variations are given equal traffic distribution. Customize this value for your experiment's requirements.

  • _Stats Accelerator_ - To get to statistical significance faster or to maximize the return of the experiment, use Optimizely’s machine learning engine, the Stats Accelerator. For more information, see [Get to statistical significance faster with Stats Accelerator](🔗). For information about when to use Stats Accelerator versus running a multi-armed bandit optimization, see [Multi-armed bandits vs Stats Accelerator](🔗).


## 5. (Optional) Add an audience

You can opt to define audiences if you want to show your experiment only to certain groups of users. See [Define attributes](🔗) and [Create audiences](🔗).
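
Audience conditions are evaluated against the user attributes you pass when you activate the experiment. A minimal sketch, assuming the Python SDK, an already-initialized `optimizely_client`, and a `user_id` string; the attribute names and keys are illustrative:

```python
# Hypothetical attributes; audience conditions defined in the Optimizely UI
# are evaluated against the values passed here.
attributes = {"device": "mobile", "is_logged_in": True}

variation_key = optimizely_client.activate("product_sort_test", user_id, attributes)
```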

## 6. Add a metric

[Add events](🔗) that you’re tracking with the Optimizely SDKs as metrics to measure impact. Whether you use existing events or create new events to use as metrics, you must add at least one metric to an experiment. To re-order the metrics, click and drag them into place. A tracking sketch follows the note below.

Important

The top metric in an experiment is the primary metric. Stats Engine uses the primary metric to determine whether an A/B test wins or loses, overall. Learn about the [strategy behind primary and secondary metrics](🔗).
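
Metrics are computed from the events your application sends through the SDK. A minimal sketch, assuming the Python SDK, an already-initialized `optimizely_client`, and a `user_id` string; the event key and revenue tag are illustrative:

```python
# Track a conversion event; the event key must match an event created in the Optimizely UI.
# The optional event tags (here, a revenue value in cents) attach metadata to the event.
optimizely_client.track("purchase", user_id, None, {"revenue": 4999})
```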





## 7. Complete your experiment setup

Click **Create Experiment** to complete your experiment setup.

## 8. Implement the code sample into your application

Once you have defined an A/B test, you will see a code sample for implementing it in your application.





For each A/B test, you use the Activate method to decide which variation a user falls into, then use an `if` statement to apply the code for that variation. See the example below.
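
A minimal sketch of that pattern, assuming the Python SDK, an already-initialized `optimizely_client`, and hypothetical experiment and variation keys:

```python
# Activate decides which variation this user falls into and records an exposure event.
variation_key = optimizely_client.activate("product_sort_test", user_id)

if variation_key == "sort_by_price":
    # Code path for the first variation (hypothetical helper).
    show_products_sorted_by_price()
elif variation_key == "sort_by_category":
    # Code path for the second variation (hypothetical helper).
    show_products_sorted_by_category()
else:
    # None is returned when the user is not eligible for the experiment;
    # fall back to the baseline experience.
    show_default_product_sorting()
```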





The Activate method:

  • Evaluates whether the user is eligible for the experiment and returns a variation key if so. For more on how the variation is chosen, see [How bucketing works](🔗) and the SDK reference guide for the Activate method.

  • Sends an event to Optimizely to record that the current user has been exposed to the A/B test. Call Activate at the point where you want to record an A/B test exposure to Optimizely. If you don't want to record an exposure, use the Get Variation method instead (see the sketch below).
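
For comparison, a bucketing decision without an impression event might look like this (again a Python SDK sketch with an illustrative key):

```python
# get_variation buckets the user the same way activate does, but does not send
# an impression event, so no A/B test exposure is recorded.
variation_key = optimizely_client.get_variation("product_sort_test", user_id)
```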

Note

If any of the conditions for the experiment are not met, the response is `null`. Make sure that your code adequately handles this default case. In general, you will want to run the baseline experience.