
How bucketing works

Bucketing is the process of assigning users to a feature rollout or to different variations of an experiment. The Full Stack SDKs evaluate user IDs and attributes to determine which variation they should see.

Bucketing is:

  • Deterministic: Because bucketing hashes the user ID, a user sees the same variation every time they encounter your experiment, across every device they use.
  • Sticky unless reconfigured: If you reconfigure a running ("live") experiment, for example by decreasing and then increasing traffic, a user may be rebucketed into a different variation.

How users are bucketed

During bucketing, the SDKs rely on the MurmurHash function to hash the user ID and experiment ID to an integer that maps to a bucket range, which represents a variation. MurmurHash is deterministic, so a user ID will always map to the same variation as long as the experiment conditions don’t change. This also means that any SDK will always output the same variation, as long as user IDs and user attributes are consistently shared between systems.
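The hashing step above can be sketched in a few lines. This is an illustrative stand-in, not the SDK's implementation: the real SDKs use MurmurHash3, while this sketch uses MD5 from the standard library, so the bucket values differ, but the determinism it demonstrates is the same.

```python
import hashlib

MAX_BUCKETS = 10000  # Optimizely maps hashes onto 10,000 buckets


def bucket_value(user_id: str, experiment_id: str) -> int:
    """Hash a user ID and experiment ID to a bucket in [0, 10000).

    The real SDKs use MurmurHash3; MD5 is a stdlib stand-in here,
    so only the determinism (not the exact values) matches.
    """
    digest = hashlib.md5(f"{user_id}:{experiment_id}".encode()).hexdigest()
    return int(digest, 16) % MAX_BUCKETS
```

Because the hash depends only on its inputs, any system that hashes the same user ID and experiment ID arrives at the same bucket, with no coordination between SDKs.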

For example, imagine you are running an experiment with two variations (A and B), a traffic allocation of 40%, and a 50/50 distribution between the two variations. Optimizely assigns each user a bucket number between 0 and 9999 to determine whether they qualify for the experiment and, if so, which variation they see. Users in buckets 0 to 1999 see variation A; users in buckets 2000 to 3999 see variation B; users in buckets 4000 to 9999 do not participate in the experiment at all. These bucket assignments are deterministic: a user who falls in bucket 1083 will always be in bucket 1083.
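The bucket ranges from this example can be expressed directly as a lookup. The function name and return values are illustrative; the ranges restate the 40% allocation with a 50/50 split described above.

```python
from typing import Optional


def assign_variation(bucket: int) -> Optional[str]:
    """Map a bucket value to a variation for the example above:
    40% total allocation, split 50/50 between A and B."""
    if 0 <= bucket <= 1999:
        return "A"
    if 2000 <= bucket <= 3999:
        return "B"
    return None  # buckets 4000-9999: not in the experiment
```

For instance, the user in bucket 1083 always lands in variation A, and a user in bucket 5000 is simply excluded from the experiment.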

This operation is highly efficient because it occurs in memory, and there is no need to block on a request to an external service. It also permits bucketing across channels and multiple languages, as well as experimenting without strong network connectivity.

See also Interactions between rollouts and feature tests and our knowledge base article Traffic allocation and distribution.

Changing traffic can rebucket users

The most common way to change "live" traffic for an enabled feature flag is to increase it. In this scenario, you can monotonically increase overall traffic without rebucketing users.

However, if you change traffic non-monotonically (for example, by decreasing and then increasing traffic), your users can get rebucketed. Ideally, avoid non-monotonic traffic changes for a running experiment, because they can result in statistically invalid metrics. One exception is if you're using our Stats Accelerator (typically as part of a mature progressive delivery culture). If you're using Stats Accelerator or otherwise need to change experiment traffic "live," you can ensure sticky user variation assignments by implementing a user profile service. For more information, see user profile service. A user profile service is compatible with experiments, not with rollouts.
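A minimal sketch of such a user profile service is shown below. The `lookup`/`save` shape follows the Full Stack SDKs' user profile service interface, but the in-memory storage and profile structure here are simplifying assumptions; a production implementation would persist profiles in a durable store such as Redis or a database.

```python
class InMemoryUserProfileService:
    """Sketch of a user profile service: persists user-to-variation
    decisions so that reconfiguring traffic does not rebucket
    returning users. In-memory storage is for illustration only."""

    def __init__(self):
        self._profiles = {}

    def lookup(self, user_id):
        # Return the saved profile dict, or None for a new user.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # user_profile is assumed to look like:
        # {"user_id": ..., "experiment_bucket_map": {...}}
        self._profiles[user_profile["user_id"]] = user_profile
```

When a profile exists for a user, the SDK can return the stored variation instead of re-running the hash, which is what makes assignments sticky across traffic changes.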

| Reconfiguring rollouts traffic | Do users get rebucketed? |
| --- | --- |
| Increase overall traffic monotonically | No. Existing users aren't rebucketed. |
| Change traffic non-monotonically | Yes. For example, if you start with 80% of an audience, reduce the percentage to 50%, then increase it back to 80%, users previously exposed to the feature might no longer see it when you increase the percentage again. |

| Reconfiguring experiment traffic | Do users get rebucketed?* |
| --- | --- |
| Increase overall traffic allocation monotonically | No |
| Pause/play variations in the experiment | Yes. Where possible, Optimizely preserves existing buckets. |
| Change overall traffic allocation non-monotonically | Yes. Where possible, Optimizely preserves existing buckets. |
| Change traffic distribution between variations or add/remove variations | Yes. Where possible, Optimizely preserves existing buckets. |

| Reconfiguring mutually exclusive experiment groups | Do users get rebucketed?* |
| --- | --- |
| Increase overall traffic allocation for each experiment monotonically | No |
| Pause/play experiments in the group | No |
| Change overall traffic allocation for each experiment non-monotonically | Yes. Where possible, Optimizely preserves existing buckets. |
| Change traffic distribution between experiments or add/remove experiments in the exclusion group | Yes. Where possible, Optimizely preserves existing buckets. |

*To avoid rebucketing, implement a user profile service. A user profile service is not compatible with rollouts.

Rebucketing example

Let's walk through an example in detail of how Optimizely attempts to preserve bucketing if you reconfigure a running experiment.

Imagine you are running an experiment with two variations (A and B), a total experiment traffic allocation of 40%, and a 50/50 distribution between the two variations. If you change the experiment allocation to any percentage except 0%, Optimizely preserves the variation bucket ranges whenever possible so that users are not rebucketed into other variations. If you add variations and increase the overall traffic, Optimizely tries to put the new users into the new variation without rebucketing existing users.

To continue the example: if you change the experiment's traffic allocation from 40% to 0%, Optimizely does not preserve your variation bucket ranges. If you later change the allocation again, perhaps to 50%, Optimizely starts the assignment process for each user from scratch; it does not restore the variation bucket ranges from the 40% setting.

To completely prevent variation reassignments, implement sticky bucketing with the User Profile Service, which uses a caching layer to persist mappings from user IDs to variation assignments.

End-to-end bucketing workflow

Let's walk through how the SDK evaluates a decision. The steps below serve as a comprehensive example that covers all possible factors, including QA tools like whitelisting and forced variations.

  1. The Enable Feature call is executed and the SDK begins its bucketing process.
  2. The SDK ensures that the experiment is running.
  3. It compares the user ID against the whitelisted user IDs. Whitelisting forces specific users into specific variations. If the user ID is found on the whitelist, the corresponding variation is returned and evaluation ends.
  4. If provided, the SDK checks the User Profile Service implementation to determine whether a profile exists for this user ID. If it does, the variation is immediately returned and the evaluation process ends. Otherwise, proceed to step 5.
  5. The SDK evaluates the audience conditions against the user attributes provided. If the user meets the criteria for inclusion in the target audience, the SDK continues with the evaluation; otherwise, the user is not eligible for the experiment and evaluation ends.
  6. The hashing function returns an integer value that maps to a bucket range. The ranges are based on the traffic allocation breakdowns set in the Optimizely dashboard, and each corresponds with a specific variation assignment.
  7. (Beta) If the bucketing ID feature is used, the SDK will hash the bucketing ID (instead of the user ID) with the experiment ID and return a variation.
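The steps above can be condensed into a sketch. Every name and data shape here is an assumption for illustration (the SDKs' internals differ), and MD5 again stands in for MurmurHash; only `$opt_bucketing_id` in step 7 is a real reserved attribute.

```python
import hashlib


def decide(experiment, user_id, attributes, whitelist=None, profile_service=None):
    """Illustrative walkthrough of the decision flow above.

    experiment: {"running": bool, "id": str,
                 "audience": callable(attributes) -> bool,
                 "ranges": {variation: (low_bucket, high_bucket)}}
    Returns a variation key, or None if the user is excluded.
    """
    if not experiment["running"]:                 # step 2: experiment running?
        return None
    if whitelist and user_id in whitelist:        # step 3: whitelist wins
        return whitelist[user_id]
    if profile_service:                           # step 4: sticky profile
        stored = profile_service.lookup(user_id)  # simplified: returns a variation
        if stored is not None:
            return stored
    if not experiment["audience"](attributes):    # step 5: audience check
        return None
    # steps 6-7: hash the bucketing ID (or user ID) to a bucket
    bucketing_id = attributes.get("$opt_bucketing_id", user_id)
    digest = hashlib.md5(f"{bucketing_id}:{experiment['id']}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000
    for variation, (low, high) in experiment["ranges"].items():
        if low <= bucket <= high:
            return variation
    return None  # outside the experiment's traffic allocation
```

Note how the early returns mirror the ordering above: whitelists override stored profiles, which override the hash-based assignment.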
