After you set up a new feature in Optimizely, you can run a feature test on it. A feature test enables you to change the user's experience by creating one variation where the feature is on and another where the feature is off. By tracking performance and engagement metrics in each variation, you can measure the overall impact of the feature on your user experience and your business goals.
You can also test more than two variations, each with different configurations of the [feature variables](🔗). For example, with a product recommendations feature, you could build variations with different versions of the algorithm, a different label in the interface, or a different number of items. You can test each of these variants without deploying new code.
See also [Interactions between rollouts and feature tests](🔗).
Important
Don't use the [Activate](🔗) method with feature tests. If a feature test is running for a given user, [Is Feature Enabled](🔗) will activate it automatically. It will trigger an impression, post to notification listeners, and return the correct values for your feature configuration. Use the Activate method only for a one-off A/B test that doesn't affect a feature you've already implemented.
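For example, with the Python SDK (a minimal sketch; the method name is from the pre-4.0 interface, and the client, feature key, and user ID are placeholders), gating the feature is all the code you need:

```python
# is_feature_enabled() buckets the user into any running feature test,
# sends the impression, and posts to notification listeners on its own.
enabled = optimizely_client.is_feature_enabled('product_recommendations', 'user123')

# There is no need to also call activate() on the underlying experiment;
# reserve activate() for standalone A/B tests.
```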
## Create a feature test
You implement feature tests using [features that you've already defined](🔗). Once you've instrumented a feature flag using the [Is Feature Enabled](🔗) and [Get Feature Variable](🔗) methods, you don't need any additional code. These methods automatically determine whether the user qualifies for your experiment, which variation they should receive, and the resulting settings for each variable.
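For instance, here is a minimal sketch of that instrumentation with the Python SDK, using the product recommendations scenario from above (the SDK key, feature key `product_recommendations`, variable keys `algorithm` and `num_items`, and the two rendering helpers are all hypothetical):

```python
from optimizely import optimizely

# Hypothetical client setup; in practice, supply your own SDK key or datafile.
optimizely_client = optimizely.Optimizely(sdk_key='YOUR_SDK_KEY')

user_id = 'user123'

# If a feature test is running, this call buckets the user into a
# variation and fires an impression automatically.
if optimizely_client.is_feature_enabled('product_recommendations', user_id):
    # Read the assigned variation's variable values with the typed getters.
    algorithm = optimizely_client.get_feature_variable_string(
        'product_recommendations', 'algorithm', user_id)
    num_items = optimizely_client.get_feature_variable_integer(
        'product_recommendations', 'num_items', user_id)
    show_recommendations(algorithm, num_items)  # your own rendering code
else:
    show_default_experience()  # feature off / control experience
```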
To create a new feature test:
1. Navigate to _Experiments > Create New..._.
2. Select _Feature Test_ from the dropdown menu.
3. Choose the feature you want to test, or click _Create New Feature..._ to add a new feature. After you select a feature, Optimizely automatically generates an experiment key by appending “_test” to the feature key. You can edit the experiment key if you like, as long as the key is unique.

## Create feature test variations
Optimizely automatically suggests two customizable variation keys for your feature tests: “variation_1” and “variation_2”.
When this feature test is live, Optimizely assigns a user to a variation, then returns the values associated with that variation for the [Is Feature Enabled](🔗) method and your feature configuration (that is, its variables).
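If you want to observe these assignments at runtime, one option is to register a decision listener (a sketch assuming the Python SDK 3.x notification center; names and callback signatures vary by SDK version):

```python
from optimizely.helpers import enums

# Called whenever the SDK makes a decision, such as on is_feature_enabled();
# decision_info includes the feature key and whether the feature was enabled.
def on_decision(decision_type, user_id, attributes, decision_info):
    print(f'{user_id}: {decision_type} -> {decision_info}')

optimizely_client.notification_center.add_notification_listener(
    enums.NotificationTypes.DECISION, on_decision)
```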
To create test variations:
1. Navigate to _Experiments_ and select the appropriate feature test.
2. (Optional) Edit the _Variation Key_ and _Description_ fields in _Variations_.
3. Adjust the _Traffic Distribution_ percentage value, as needed.
4. Toggle the feature _On_ or _Off_.
5. Enter or select the appropriate value for each variable key.
6. Click _Save_.

Note
If you add variations, Optimizely provides automatic suggestions according to the variation number: “variation_3”, “variation_4”, and so on. Deleting a variation won't affect the numbering of subsequent automatic variation key suggestions.
## Use feature toggles and configurations
Feature test variations include a feature toggle and the feature configuration (if one exists). By default, the toggle is set to ON and the configuration's default variable values load.
A common feature test includes a feature with no configuration, with one variation set to test “toggle=ON” and another set to test “toggle=OFF.” This lets you compare the performance of your application in its current form against its performance with your new feature enabled.
If the feature includes a feature configuration and you set a variation to “toggle=OFF,” Optimizely disables the option to modify variable values and reverts to the default variable values.
To create variations using feature configurations, update the variable values under each variation.

When this feature test is live, the [Get Feature Variable](🔗) method returns the values specified for the variation assigned to a visitor. Experimenting with a feature configuration enables you to iterate on a feature between code deploys. Run a sequence of experiments with different combinations of variable values to determine the optimal experience for your users.
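The instrumentation itself doesn't change between iterations; only the values each variation serves change in the Optimizely UI. As a sketch (the `cta_label` string variable is hypothetical):

```python
# Reads whatever value the visitor's assigned variation carries.
label = optimizely_client.get_feature_variable_string(
    'product_recommendations', 'cta_label', user_id)

# The typed getters return None for unknown keys or other errors,
# so guard with a hard-coded fallback.
if label is None:
    label = 'Recommended for you'
```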
Note
If a feature test is running on a feature that is configured with variables, the [feature configuration](🔗) is locked until you pause the test.
## Launch a feature test
You assign metrics, traffic allocation, and optionally audiences and mutually exclusive groups to feature tests; see [Run A/B tests](🔗) for more information. After saving your changes, launch your feature test in a [staging environment before using it in production](🔗).
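One way to keep those environments separate in code (a sketch; the per-environment SDK keys are placeholders, and each Optimizely environment has its own key) is to select the key by deployment environment, so the same code serves the staging datafile in staging and the production datafile in production:

```python
import os
from optimizely import optimizely

# Hypothetical per-environment SDK keys; staging traffic then never
# counts toward the production test's results.
SDK_KEYS = {
    'staging': 'STAGING_SDK_KEY',
    'production': 'PRODUCTION_SDK_KEY',
}

environment = os.environ.get('APP_ENV', 'staging')
optimizely_client = optimizely.Optimizely(sdk_key=SDK_KEYS[environment])
```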