
Interactions between flag rules

Describes flag rules and how they interact for Optimizely Feature Experimentation.

📘

Note

This topic is most helpful if you run experiments and need an in-depth understanding of how your audiences bucket into both experiments and deliveries.

Use rules to refine and target which users see a flag. You can configure the following types of rules:

  • Targeted Delivery
  • Experiment (A/B test)
  • Multi-armed Bandit optimization

Currently, you can have only one experiment per flag, and it must be the first rule evaluated (see the beta note below). This means that if you create targeted delivery rules and an experiment rule for the same flag, Optimizely first evaluates whether a user qualifies for the experiment's audience conditions and traffic allocation. If the user does not qualify for the experiment, Optimizely then evaluates them against the delivery rule or rules.

📘

Note

Having multiple experiments per flag is currently in Beta. If you would like to be added to the Beta, contact your customer success manager (CSM).

Rulesets and audiences

If you define multiple flag rules with overlapping audiences, a user "moves down" through those rules differently for experiments than for deliveries:

  • For experiments, users can still move down to the next rule if they fail traffic bucketing.
  • For targeted deliveries, users "fall through" to the Everyone Else rule if they fail traffic bucketing. They move down to the next rule only if they fail audience conditions.
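The two bullets above can be sketched as pseudocode. This is an illustrative model, not the Optimizely SDK's implementation: audience and traffic checks are reduced to precomputed booleans on each rule, and `None` stands in for the Everyone Else fallback.

```python
# Illustrative sketch of the "moving down" behavior described above;
# NOT the Optimizely SDK's actual bucketing code.

def evaluate_rules(rules):
    """Return the name of the rule the user buckets into, or None for
    the Everyone Else fallback."""
    for rule in rules:
        if not rule["audience_match"]:
            # Failing audience conditions moves the user to the next rule,
            # for both experiments and deliveries.
            continue
        if rule["traffic_bucket"]:
            return rule["name"]
        if rule["kind"] == "experiment":
            # Experiments: failing traffic bucketing still moves the user down.
            continue
        # Deliveries: failing traffic bucketing falls through to Everyone Else,
        # skipping any remaining rules.
        return None
    return None

# The screenshot scenario: user A fails the A/B test's traffic bucketing,
# matches rule #2's audience but fails its traffic, and would have matched
# rule #3 — yet never reaches it.
rules = [
    {"name": "A/B test", "kind": "experiment", "audience_match": True, "traffic_bucket": False},
    {"name": "rule #2",  "kind": "delivery",   "audience_match": True, "traffic_bucket": False},
    {"name": "rule #3",  "kind": "delivery",   "audience_match": True, "traffic_bucket": True},
]
print(evaluate_rules(rules))  # None: user A falls through at rule #2
```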

For example, in the following screenshot, if user A failed to bucket into the A/B test, they could still bucket into rule #2 if they matched its criteria. However, if user A then failed rule #2, they'd fall through to Everyone Else, even if they would have matched the criteria for rule #3.


The following diagram shows this behavior in detail.

Bucketing behavior varies for different rule types

The following table elaborates on the preceding diagrams by showing all possible outcomes for audience and traffic conditions for a combination of one experiment rule and one delivery rule.

User     Experiment audience   Experiment traffic   Delivery audience   Delivery traffic   Result
user1    pass                  pass                 N/A                 N/A                Experiment
user2    pass                  fail                 pass                pass               Delivery
user3    fail                  N/A                  pass                pass               Delivery
user4    fail                  N/A                  pass                fail               No action
user5    fail                  N/A                  pass                fail               No action
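The table's outcomes for one experiment rule followed by one delivery rule can be checked with a small sketch (again illustrative, not SDK code); N/A cells are passed as `None`.

```python
# Sketch of the one-experiment-plus-one-delivery outcomes from the table
# above; NOT Optimizely SDK code. N/A cells are modeled as None.

def outcome(exp_aud, exp_traf, del_aud, del_traf):
    # The experiment rule is always evaluated first.
    if exp_aud and exp_traf:
        return "Experiment"
    # Whether the user failed the experiment's audience or its traffic,
    # they are evaluated next against the delivery rule.
    if del_aud and del_traf:
        return "Delivery"
    return "No action"

assert outcome(True,  True,  None,  None)  == "Experiment"  # user1
assert outcome(True,  False, True,  True)  == "Delivery"    # user2
assert outcome(False, None,  True,  True)  == "Delivery"    # user3
assert outcome(False, None,  True,  False) == "No action"   # user4
assert outcome(False, None,  False, None)  == "No action"   # user5
```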

Rulesets and total traffic

As an example of how traffic evaluates against multiple rules: if you configure an experiment with 80% of total traffic and a delivery rule with 50% traffic for the same audience, then 80% of traffic goes to the experiment, and half of the remaining 20%, or 10% of total traffic, ends up in the delivery.
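The arithmetic above, worked out step by step:

```python
# The 80% / 10% traffic split from the example above.
experiment_share = 0.80                 # experiment takes 80% of total traffic
remaining = 1.0 - experiment_share      # the other 20% move down to the delivery rule
delivery_share = remaining * 0.50       # the delivery buckets 50% of what reaches it

print(f"{experiment_share:.0%} experiment, {delivery_share:.0%} delivery")
# prints: 80% experiment, 10% delivery
```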

The following diagrams illustrate some of the ways that traffic can be split across flag rules, depending on the traffic allocations and audiences you set.
