Not all experiments are tied to specific features that you've already flagged in your code. Sometimes, you'll want to run a standalone test to answer a specific question—which of two (or more) variations performs best? For example, is it more effective to sort the products on a category page by price or category?
These one-off experiments are called **A/B tests**, as opposed to [feature tests](🔗) that run on features you've already flagged. With A/B tests, you define two or more **variation keys** and then implement a different code path for each variation. From the Optimizely interface, you can determine which users are eligible for the experiment and how to split traffic between the variations, as well as the [metrics](🔗) you'll use to measure each variation's performance.
### 1. Select A/B Test in your project
In the _Experiments_ tab, click _Create New Experiment_ and select _A/B Test_.

### 2. Set an experiment key
Specify an experiment key. Your experiment key must contain only alphanumeric characters, hyphens, and underscores. The key must also be unique for your Optimizely project so you can correctly disambiguate experiments in your application.
Do not change the experiment key without making the corresponding change in your code.
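To reduce the chance of a mismatch, you can define the key once in code and reuse it everywhere the experiment is activated. A minimal Python sketch; the key name, user ID, and `optimizely_client` (an already-initialized Optimizely SDK client) are assumptions for illustration:

```python
# Hypothetical experiment key; it must match the key entered in Optimizely exactly.
SEARCH_SORT_EXPERIMENT = "search_sort_experiment"

# `optimizely_client` is assumed to be an already-initialized Optimizely SDK client.
variation = optimizely_client.activate(SEARCH_SORT_EXPERIMENT, "user_123")
```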

### 3. Set experiment traffic allocation
The traffic allocation is the fraction of your total traffic to include in the experiment, specified as a percentage. For example, suppose you set an allocation of 50% for an experiment that is triggered when a user does a search. This means:

- The experiment is triggered when a visitor does a search, but not for every visitor: 50% of users who do a search will be in the experiment, and the other 50% will not.
- Users who don't do a search won't be in the experiment at all. In other words, the traffic allocation percentage may not apply to all traffic for your application.
For more information, see our support documentation article [Traffic allocation and distribution](🔗).
Optimizely determines the traffic allocation at the point where you call the `Activate` method in the SDK.
You can also add your experiment to an [exclusion group](🔗) at this point.
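To illustrate where that evaluation happens, here is a minimal Python sketch in which the `activate` call sits inside a hypothetical search handler, so only users who actually search are considered for the allocation. The handler, experiment key, and `optimizely_client` (an already-initialized SDK client) are assumptions:

```python
def handle_search(optimizely_client, user_id, query):
    # The traffic allocation is evaluated at this activate() call, so only users
    # who reach this code path (i.e., who perform a search) can be bucketed in.
    variation = optimizely_client.activate("search_sort_experiment", user_id)
    if variation is None:
        # User is outside the experiment allocation (or other conditions failed):
        # serve the baseline search experience.
        return f"baseline results for {query}"
    return f"results for {query} using variation {variation}"
```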

### 4. Set variation keys and traffic distribution
Variations are the different code paths you want to experiment on. Enter a unique variation key to identify the variation in the experiment and optionally a short, human-readable description for reporting purposes.
You must specify at least one variation. There’s no limit to how many variations you can create.
By default, variations are given equal traffic distribution. Customize this value for your experiment's requirements.
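To connect the variation keys back to code paths, here is a minimal Python sketch that branches on two hypothetical variation keys, `sort_by_price` and `sort_by_category`, matching the category-page example from the introduction; the product data is a placeholder:

```python
def sort_products(products, variation):
    # `variation` is the variation key returned by activate() for this user.
    if variation == "sort_by_price":
        return sorted(products, key=lambda p: p["price"])
    if variation == "sort_by_category":
        return sorted(products, key=lambda p: p["category"])
    # Unknown key, or None (user not in the experiment): keep the baseline order.
    return products
```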

### 5. (Optional) Add an audience
You can opt to define audiences if you want to show your experiment only to certain groups of users. See [Define audiences and attributes](🔗).
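If you target an audience, pass the relevant user attributes when you activate the experiment so the SDK can evaluate the audience conditions. A minimal Python sketch; the attribute names and values are hypothetical and must match the attributes defined in your Optimizely project, and `optimizely_client` is an already-initialized SDK client:

```python
# Hypothetical user attributes used to evaluate audience conditions.
attributes = {
    "device": "mobile",
    "is_returning_customer": True,
}

# Pass attributes as the third argument so audience targeting can be applied.
variation = optimizely_client.activate("search_sort_experiment", "user_123", attributes)
```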
### 6. Add a metric
[Add events](🔗) that you’re tracking with the Optimizely SDKs as metrics to measure impact. Whether you use existing events or create new events to use as metrics, you must add at least one metric to an experiment. To re-order the metrics, click and drag them into place.
**Important:** The top metric in an experiment is the primary metric. Stats Engine uses the primary metric to determine whether an A/B test wins or loses overall. Learn about the [strategy behind primary and secondary metrics](🔗).
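Metrics are computed from events that your application tracks through the SDK. As a minimal Python sketch, a hypothetical `purchase` event key is tracked for the same user ID that was passed to `activate`, so Optimizely can attribute the conversion to that user's variation (`optimizely_client` is an already-initialized SDK client):

```python
# Track a conversion event so it can be used as a metric for the experiment.
# "purchase" is a hypothetical event key defined in your Optimizely project;
# use the same user ID that was passed to activate().
optimizely_client.track("purchase", "user_123")
```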

### 7. Complete your experiment setup
Click _Create Experiment_ to complete your experiment setup.
### 8. Implement the code sample into your application
Once you've defined an A/B test, you'll see a code sample for implementing it in your application.

For each A/B test, you use the [Activate](🔗) method to decide which variation a user falls into, then use an `if` statement to apply the code for that variation. See the example below.
The Activate method:

- Evaluates whether the user is eligible for the experiment and returns a variation key if so. For more on how the variation is chosen, see [User bucketing](🔗) and the API reference for [Activate](🔗).
- Sends an event to Optimizely to record that the current user has been exposed to the A/B test. Call Activate at the point where you want to record an A/B test exposure to Optimizely. If you don't want to record an exposure, use the `GetVariation` method instead.
**Note:** If any of the conditions for the experiment aren't met, the response is `null`. Make sure that your code adequately handles this default case. In general, you'll want to run the baseline experience.
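A minimal Python sketch of this pattern, assuming an already-initialized `optimizely_client`; the experiment key, variation keys, and user ID are hypothetical, and in the Python SDK the methods are named `activate` and `get_variation`:

```python
user_id = "user_123"

# activate() buckets the user (or returns None) and records an impression.
variation = optimizely_client.activate("search_sort_experiment", user_id)

if variation == "sort_by_price":
    print("Showing products sorted by price")      # code path for variation 1
elif variation == "sort_by_category":
    print("Showing products sorted by category")   # code path for variation 2
else:
    # variation is None: the user isn't in the experiment, an audience condition
    # wasn't met, or the experiment isn't running. Serve the baseline experience.
    print("Showing the default product listing")

# To look up the variation without recording an exposure, use get_variation():
# variation = optimizely_client.get_variation("search_sort_experiment", user_id)
```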