The QA approaches here are used to put yourself into a specific variation of an experiment. They aren't supported for rollouts yet.
Whitelisting is one of the easiest ways to QA an experiment. Publish your experiment in staging and enable whitelists to show specified variations to a few select users. When you activate the experiment for these users, they bypass audience targeting and traffic allocation and always see the variation you specify for them. Users who aren't whitelisted must pass audience targeting and traffic allocation to see the live experiment and variations.
For example, imagine that you create an experiment that compares Variation A and Variation B. You want to QA the experiment's live behavior and show the variations to a few key stakeholders:
- Create a whitelist that includes the user IDs for the people who should see the live experiment.
- To ensure that only your whitelisted users can see the experiment, create an audience targeted to an attribute no user will have or set the experiment's traffic allocation to 0%.
After QA is complete, establish your production settings for audience targeting and traffic allocation.
You don't need to do anything differently in the SDK; if you've set up a whitelist, experiment activation forces the variation output based on the whitelist you've provided. Whitelisting does not work while the experiment is not running, but you can set an experiment to 0% traffic, or start it in a staging environment, to test with whitelisting.
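The behavior above can be sketched in plain Python. This is an illustrative simulation of the bucketing order, not the SDK's actual implementation; the experiment dictionary's shape (`running`, `trafficAllocation`, `forcedVariations`) is a simplified stand-in for the real datafile.

```python
# Illustrative sketch: activation consults the whitelist (the datafile's
# forcedVariations map) before normal audience/traffic bucketing.
# This is NOT the real Optimizely SDK implementation.

def activate(experiment, user_id):
    """Return the variation key for user_id, honoring the whitelist."""
    if not experiment.get("running", False):
        return None  # whitelisting has no effect on a stopped experiment
    # The whitelist check happens before audience targeting and
    # traffic allocation, so whitelisted users bypass both.
    forced = experiment.get("forcedVariations", {})
    if user_id in forced:
        return forced[user_id]
    # Fall through to normal bucketing, stubbed out here.
    return bucket_normally(experiment, user_id)

def bucket_normally(experiment, user_id):
    # Stand-in for audience evaluation and traffic allocation.
    if experiment.get("trafficAllocation", 0) == 0:
        return None
    return "variation_a"

experiment = {
    "running": True,
    "trafficAllocation": 0,  # 0% traffic: only whitelisted users get in
    "forcedVariations": {"qa_user_1": "variation_1"},
}

print(activate(experiment, "qa_user_1"))   # whitelisted -> "variation_1"
print(activate(experiment, "other_user"))  # not whitelisted -> None
```

Note how setting traffic allocation to 0% keeps everyone out except the whitelisted users, which is exactly the QA setup described above.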
The following table summarizes key aspects of whitelisting:

| Aspect | Detail |
| --- | --- |
| Availability | Experiments only, not rollouts |
| Limit | Maximum of 10 user IDs per experiment |
We limit you to a maximum of 10 whitelisted users per experiment because:
- Forcing variations with a large number of user IDs will bias your experiment results.
- Whitelisting increases the size of the datafile.
Whitelisting evaluates after these user bucketing methods:

- Set Forced Variation method

Note: The `forcedVariations` field in the datafile is for whitelisted variations. It is not related to the Set Forced Variation method.
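For reference, a whitelisted experiment appears in the datafile roughly like this. This is a trimmed, hypothetical fragment; real datafiles contain many more fields, and the keys shown here are examples:

```json
{
  "experiments": [
    {
      "key": "my_experiment",
      "status": "Running",
      "forcedVariations": {
        "qa_user_1": "variation_1",
        "qa_user_2": "variation_2"
      }
    }
  ]
}
```

Each additional entry in `forcedVariations` grows the datafile, which is one reason for the 10-user limit mentioned above.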
Whitelisting evaluates before these user bucketing methods:
Note: If there's a conflict over how a user should be bucketed, then the first user-bucketing method to be evaluated overrides any conflicting method.
Use whitelisting only for previewing, experimenting, and QA:
- As a developer, you can use whitelisting to mock a datafile and test a feature flag or feature variable you're implementing.
- As a QA engineer, you can get whitelisted to perform manual tests of feature flags and feature variables in a web UI, or whitelist a test runner's user ID to automate these tests.
- As a test author, you can use whitelisting in a mock datafile, then copy that datafile into your unit or integration test suite in place of the real datafile.
To target an experiment to a larger group of users for QA, such as all employees in your organization or a staging environment, use audiences instead. Create an attribute that every user in the group will share, and target the experiment to an audience that contains that attribute.
Here's how to create a whitelist for an experiment in a Full Stack project.
- Navigate to the Experiments dashboard.
- Click the Actions icon (...) for the experiment and select Whitelist.
- Specify user IDs and corresponding variations you want to force for those users.
In this example, we forced two visitors into variation_1 and two visitors into variation_2.
- Pass in the whitelisted user IDs to the SDK using the Activate or Is Feature Enabled method.
The user IDs in the whitelist must exactly match the user IDs passed to the SDK's Activate or Is Feature Enabled methods; otherwise, whitelisting will not work. These user IDs are often anonymous and cryptic (for example, a cookie value), so copy and paste them carefully.
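The exact-match requirement can be demonstrated with a small stub. This is not the real SDK client; the UUID-style user ID is a made-up example of the kind of cryptic ID (such as a cookie value) you might copy from a browser:

```python
# Illustrative: the user ID passed to activate() must exactly match a
# whitelist key, or the forced variation is not applied. Stubbed client.

WHITELIST = {"AEBE52E7-03EE-455A-B3C4-E57283966239": "variation_1"}

class StubClient:
    def activate(self, experiment_key, user_id):
        # Exact, case-sensitive match against the whitelist keys.
        return WHITELIST.get(user_id)

client = StubClient()
# Exact match -> forced variation applies.
print(client.activate("my_experiment", "AEBE52E7-03EE-455A-B3C4-E57283966239"))
# Case mismatch -> no match, whitelisting silently does nothing (None).
print(client.activate("my_experiment", "aebe52e7-03ee-455a-b3c4-e57283966239"))
```

Because a mismatch fails silently, copying the ID verbatim from its source (for example, the cookie) is safer than retyping it.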