If you are an advanced user of experiments, this detailed topic is for you. If you primarily use flag deliveries, this topic is not as relevant.
Bucketing is the process of assigning users to a flag variation according to the flag rules. The Full Stack SDKs evaluate user IDs and attributes to determine which variation they should see.
- Deterministic: A user sees the same variation across all devices they use every time they see your experiment, thanks to how we hash user IDs. In other words, a returning user is not reassigned to a new variation.
- Sticky unless reconfigured: If you reconfigure a "live," running flag rule, for example by decreasing and then increasing traffic, a user may get rebucketed into a different variation.
During bucketing, the SDKs rely on the MurmurHash function to hash the user ID and experiment ID to an integer that maps to a bucket range, which represents a variation. MurmurHash is deterministic, so a user ID will always map to the same variation as long as the experiment conditions don’t change. This also means that any SDK will always output the same variation, as long as user IDs and user attributes are consistently shared between systems.
For example, imagine you are running an experiment with two variations (A and B), with an experiment traffic allocation of 40% and a 50/50 distribution between the two variations. Optimizely assigns each user a number between 0 and 9999 to determine whether they qualify for the experiment and, if so, which variation they see. Users in buckets 0 to 1999 see variation A; users in buckets 2000 to 3999 see variation B. Users in buckets 4000 to 9999 do not participate in the experiment at all. These bucket ranges are deterministic: if a user falls in bucket 1083, they will always be in bucket 1083.
This operation is highly efficient because it occurs in memory, and there is no need to block on a request to an external service. It also permits bucketing across channels and multiple languages, as well as experimenting without strong network connectivity.
The most common way to change "live" traffic for an enabled feature flag is to increase it. In this scenario, you can monotonically increase overall traffic without rebucketing users.
However, if you change traffic non-monotonically (for example, decreasing and then increasing traffic), your users can get rebucketed. Ideally, avoid non-monotonic traffic changes for a running experiment, because they can produce statistically invalid metrics. One exception is if you are using our Stats Accelerator (typically as part of a mature progressive delivery culture). If you are using Stats Accelerator or otherwise need to change "live" experiment traffic, you can keep user variation assignments sticky by implementing a user profile service. For more information, see user profile service. User profile service is compatible with experiments, not with flag deliveries.
| Reconfiguring delivery traffic | Do users get rebucketed? |
| --- | --- |
| Increase overall traffic monotonically | No |
| Change traffic non-monotonically | Yes |
| Reconfiguring experiment traffic | Do users get rebucketed?* |
| --- | --- |
| Increase overall traffic allocation monotonically | No |
| Pause variations in the experiment (by setting traffic to 0% for the variation) | No |
| Change overall traffic allocation non-monotonically | Yes |
| Change traffic distribution between variations or add/remove variations | Yes |
Mutually exclusive experiments are not available in non-legacy Optimizely projects created after October 2020, but will be re-enabled in a future release.
| Reconfiguring mutually exclusive experiment groups | Do users get rebucketed?* |
| --- | --- |
| Increase overall traffic allocation for each experiment monotonically | No |
| Pause/play experiments in the group | No |
| Change overall traffic allocation for each experiment non-monotonically | Yes |
| Change traffic distribution between experiments or add/remove experiments in the exclusion group | Yes |
\* To avoid rebucketing, implement a user profile service. User profile service is not compatible with rollouts.
Let's look at a detailed example of how Optimizely attempts to preserve bucketing if you reconfigure a running experiment.
Imagine you are running an experiment with two variations (A and B), with a total experiment traffic allocation of 40% and a 50/50 distribution between the two variations. If you change the experiment allocation to any percentage except 0%, Optimizely preserves all variation bucket ranges whenever possible, so that users are not rebucketed into other variations. If you add variations and increase the overall traffic, Optimizely tries to put the new users into the new variation without rebucketing existing users.
To continue the example, if you change the experiment traffic allocation from 40% to 0%, Optimizely does not preserve your variation bucket ranges. After changing the experiment allocation to 0%, if you change it again, perhaps to 50%, Optimizely starts the assignment process for each user from scratch: it will not preserve the variation bucket ranges from the 40% setting.
To completely prevent variation reassignments, implement sticky bucketing with the User Profile Service, which uses a caching layer to persist user IDs to variation assignments.
The following table highlights how various features interact with each other when bucketing users:
| User bucketing method | Evaluates after these: | Evaluates before these: |
| --- | --- | --- |
| User profile service | Allowlisting | Audience targeting, traffic-allocation hashing |
If there is a conflict over how a user should be bucketed, then the first user-bucketing method to be evaluated overrides any conflicting method.
Let us walk through how the SDK evaluates a decision. This chart serves as a comprehensive example that explores all possible factors, including QA tools like specific users and forced variations.
1. The Decide call executes and the SDK begins its bucketing process.
2. The SDK verifies that the flag rule is running.
3. If the user is in an experiment rule, the SDK compares the user ID to the allowlist. Users in the allowlist are forced into their designated variation.
4. If provided, the SDK checks the user profile service implementation to determine whether a profile exists for this user ID. If it does, the variation is returned immediately and the evaluation ends. Otherwise, proceed to step 5.
5. The SDK evaluates audience conditions based on the user attributes provided. If the user meets the criteria for inclusion in the target audience, the SDK continues the evaluation; otherwise, the user is no longer eligible for the experiment.
6. The hashing function returns an integer value that maps to a bucket range. The ranges are based on the traffic allocation breakdowns set in the Optimizely dashboard, and each corresponds to a specific variation assignment.
7. (Beta) If you use a bucketing ID, the SDK hashes the bucketing ID (instead of the user ID) with the experiment ID and returns a variation.
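The steps above can be condensed into a short sketch. The helper names and data shapes here are hypothetical (the real SDK internals differ); the point is the short-circuit order: each step can end the evaluation before the hash-based bucketing ever runs.

```python
# Condensed sketch of the decision flow above (hypothetical helpers and
# data shapes; real SDK internals differ). Steps are checked in order,
# and each can short-circuit the evaluation.
def decide(user_id, attributes, rule, allowlist, ups=None, bucketing_id=None):
    if not rule["running"]:                        # rule must be running
        return None
    if user_id in allowlist:                       # allowlist forces a variation
        return allowlist[user_id]
    if ups is not None:                            # sticky profile, if stored
        profile = ups.lookup(user_id)
        if profile and rule["id"] in profile.get("experiment_bucket_map", {}):
            return profile["experiment_bucket_map"][rule["id"]]["variation_id"]
    if not rule["audience"](attributes):           # audience conditions
        return None
    hash_input = bucketing_id or user_id           # optional bucketing ID (beta)
    return rule["bucket"](hash_input)              # hash to a bucket range

# Illustrative rule: a deterministic stand-in for the hash-to-range step.
rule = {
    "running": True,
    "id": "exp-1",
    "audience": lambda attrs: attrs.get("plan") == "pro",
    "bucket": lambda hid: "A",  # real SDKs hash hid to a bucket range
}
assert decide("u1", {"plan": "pro"}, rule, allowlist={}) == "A"
assert decide("u2", {"plan": "free"}, rule, allowlist={}) is None
assert decide("vip", {"plan": "free"}, rule, allowlist={"vip": "B"}) == "B"
```

Note how the allowlisted user receives variation B even though they fail the audience check: earlier steps override later ones, which is the same precedence the table above describes.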