After you start running an experiment, you can increase total traffic and rest assured that the same users always get the enabled feature.
But things are more complicated if you decrease traffic or otherwise reconfigure an experiment after it starts running. Bucketing is the process of assigning users to the variations of an experiment; sticky bucketing ensures that a user keeps the same variation for the life of the experiment. If you plan to reconfigure an experiment while it is running, you need to implement a user profile service to ensure sticky bucketing. There are two scenarios in which you might need to implement sticky bucketing:
- Reconfigure a running experiment to troubleshoot
- Enable Stats Accelerator
To see if Stats Accelerator is available for your plan, see
Stats Accelerator is currently compatible only with A/B tests, not with feature tests.
Optimizely buckets users into experiment variations using a deterministic hash of the user ID and experiment key. As long as your systems consistently share user IDs and user attributes, this permits highly efficient bucketing across channels and across SDK languages, as well as experimentation without strong network connectivity.
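The idea behind deterministic bucketing can be sketched as follows. This is a simplified illustration, not Optimizely's actual algorithm (the SDKs use MurmurHash3 and a more involved traffic-allocation scheme); the point is that the same user ID and experiment key always hash to the same value, so assignment is stable as long as the configuration does not change.

```python
import hashlib

def bucket(user_id: str, experiment_key: str, traffic_allocation: float):
    """Deterministically map a user to a variation (illustrative sketch only).

    traffic_allocation is the fraction of users in the experiment, 0.0-1.0.
    Returns a variation name, or None if the user falls outside the traffic.
    """
    # Hash the user ID and experiment key together into a stable value in [0, 1).
    digest = hashlib.md5(f"{user_id}:{experiment_key}".encode()).hexdigest()
    point = (int(digest, 16) % 10000) / 10000

    if point >= traffic_allocation:
        return None  # user is not in the experiment at this allocation

    # Split in-experiment traffic evenly between two hypothetical variations.
    return "variation_a" if point < traffic_allocation / 2 else "variation_b"
```

Because the hash depends only on the inputs, the same user always lands on the same `point`. But notice that changing `traffic_allocation` moves the boundaries, which is exactly why reconfiguring a running experiment can rebucket users who have no saved profile.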
However, if you add variations or change traffic allocation while an experiment is running, users without saved profiles can be rebucketed. For example, if you dial traffic on an experiment down from 50% to 0%, then increase it back to 50%, Optimizely resets the bucketing for that experiment, and some users will begin to see different variations than before.
For more information, see How bucketing works.
To ensure sticky bucketing, implement a user profile service: a service that uses a caching layer to persist each user's variation assignments, so the SDK returns the saved variation instead of recomputing the bucket. For implementation details, refer to the SDK topics for your programming language.
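As a starting point, here is a minimal in-memory sketch of a user profile service in the shape the Optimizely SDKs expect: an object exposing `lookup` and `save`. The in-memory dict is an assumption for illustration; a production service would back these methods with a shared store such as Redis or a database so assignments persist across processes.

```python
class InMemoryUserProfileService:
    """Minimal user profile service sketch (process-local, for illustration)."""

    def __init__(self):
        # Maps user_id -> saved profile dict.
        self._profiles = {}

    def lookup(self, user_id):
        # Return the saved profile for this user, or None if none exists yet.
        # When a profile is found, the SDK reuses the saved variation
        # instead of rebucketing the user.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # The SDK passes a profile dict of the form:
        # {"user_id": ..., "experiment_bucket_map": {experiment_id: {"variation_id": ...}}}
        self._profiles[user_profile["user_id"]] = user_profile
```

An instance of this class is passed to the SDK client at construction time (see your SDK's user profile service topic for the exact parameter), after which variation assignments survive traffic changes for any user with a saved profile.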