This topic is most helpful if you run experiments and need an in-depth understanding of how your audiences bucket into both experiments and deliveries.
Use rules to refine and target those who see a flag. You can configure the following types of rules:
- Targeted Delivery
- Experiment (A/B test)
Currently, a flag can have only one experiment rule, and it must be the first rule evaluated (see the Beta note below). If you create targeted delivery rules and an experiment rule for the same flag, Optimizely first evaluates whether a user meets the experiment's audience conditions and traffic allocation. If they do not qualify for the experiment, Optimizely then evaluates them against the delivery rules.
Having multiple experiments per flag is currently in Beta. If you would like to be added to the Beta, contact your customer success manager (CSM).
If you define multiple overlapping audiences in your flag rules, a user "moves down" through those rules differently for experiments than for deliveries:
- For experiments, users can still move down to the next rule if they fail traffic bucketing.
- For targeted deliveries, users "fall through" to the Everyone Else rule if they fail traffic bucketing. They move down to the next rule only if they fail audience conditions.
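The fall-through behavior above can be sketched in Python. This is a hypothetical simulation of the documented evaluation order, not the Optimizely SDK; `in_audience` and `in_traffic` are stand-ins for the real audience-condition and traffic-bucketing checks:

```python
def evaluate_rules(rules, in_audience, in_traffic):
    """Walk an ordered list of flag rules and return the rule that buckets the user.

    rules: list of dicts with "name" and "type" ("experiment" or "delivery").
    in_audience, in_traffic: callables taking a rule name and returning bool.
    """
    for rule in rules:
        if not in_audience(rule["name"]):
            continue  # audience miss: always move down to the next rule
        if in_traffic(rule["name"]):
            return rule["name"]  # user buckets into this rule
        if rule["type"] == "delivery":
            return "Everyone Else"  # traffic miss on a delivery falls through
        # traffic miss on an experiment: keep moving down the rule list

    return "Everyone Else"


rules = [
    {"name": "ab_test", "type": "experiment"},
    {"name": "rule2", "type": "delivery"},
    {"name": "rule3", "type": "delivery"},
]

# Matches every audience but fails every traffic allocation: the experiment
# is skipped, but the first delivery's traffic miss falls through to Everyone Else.
print(evaluate_rules(rules, lambda r: True, lambda r: False))
```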
For example, in the following screenshot, if user A failed to bucket into the A/B test, they could still bucket into rule #2 if they matched its audience. However, if user A matched rule #2's audience but failed its traffic allocation, they would fall through to Everyone Else, even if they would have matched the criteria for rule #3.
The following diagram shows this behavior in detail.
The following table elaborates on the preceding diagram by showing all possible outcomes for audience and traffic conditions for a combination of one experiment rule and one delivery rule.
| User | Experiment audience | Experiment traffic | Delivery audience | Delivery traffic | Result |
| --- | --- | --- | --- | --- | --- |
As an example of how traffic evaluates against multiple rules: if you configure 80% of total traffic to an experiment, and set 50% traffic for a delivery with the same audience, then 80% of traffic goes to the test, and half of the remaining 20%, or 10%, ends up in the delivery.
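A quick check of that arithmetic, using the numbers from the example above:

```python
# Traffic-split arithmetic: the delivery's percentage applies only to the
# traffic that did not enter the experiment.
total = 1.0
experiment_share = 0.80 * total       # 80% of total traffic enters the experiment
remaining = total - experiment_share  # 20% moves down to the delivery rule
delivery_share = 0.50 * remaining     # the delivery's 50% applies to the remainder

print(f"{delivery_share:.0%}")        # prints "10%"
```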
The following diagrams illustrate some of the ways that traffic can be split across flag rules, depending on the traffic allocations and audiences you set.