
Interactions between flag rules

This topic describes how your audiences bucket into both experiments and deliveries.

📘

Advanced

This topic is most useful if you run experiments and need an in-depth understanding of how your audiences bucket into both experiments and deliveries.

Use rules to refine and target who sees a flag. You can configure the following types of rules:

  • Targeted Delivery
  • Experiment (A/B test)

Currently, you can have only one experiment per flag, and it must be the first rule evaluated (see the beta note below). This means that if you create both targeted delivery rules and an experiment rule for the same flag, Optimizely first evaluates the user against the experiment's audience conditions and traffic allocation. If the user doesn't qualify, Optimizely then evaluates them against the delivery rule(s).

📘

Beta Feature

Having multiple rules per flag is currently in Beta. If you would like to be added to the beta, please reach out to your customer support representative.
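
If you want to confirm at runtime which rule a user ultimately bucketed into, the decision object reports the rule key. The following is a minimal sketch, assuming the Feature Experimentation Python SDK's create_user_context/decide API; the datafile, user ID, attributes, and flag key are placeholders for illustration only:

```python
from optimizely import optimizely

# Placeholder: load your project's datafile JSON however you normally fetch it.
datafile = "..."

client = optimizely.Optimizely(datafile)

# Hypothetical user ID, attributes, and flag key.
user = client.create_user_context("user123", {"plan": "premium"})
decision = user.decide("checkout_redesign")

# rule_key identifies the experiment or delivery rule that served the flag;
# enabled reflects whether any rule served the flag to this user.
print(decision.rule_key, decision.enabled)
```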

Rulesets and audiences

If you define multiple overlapping audiences in your flag rules, note that users "move down" the ruleset differently for experiments than for deliveries. In brief:

  • For experiments, users can still move down to the next rule if they fail traffic bucketing.
  • For deliveries, users "fall through" to the Everyone Else rule if they fail traffic bucketing. They move down to the next rule only if they fail audience conditions.

For example, suppose a flag's ruleset has an A/B test as rule #1, followed by targeted delivery rules #2 and #3 and an Everyone Else rule. If user A failed to bucket into the A/B test, they could still bucket into rule #2 if they matched its criteria. However, if user A matched rule #2's audience but failed its traffic allocation, they'd fall through to Everyone Else, even if they would have matched the criteria for rule #3.

The following diagram shows this behavior in detail.

Bucketing behavior varies for different rule types

The following table elaborates on the preceding diagram by showing all possible outcomes of the audience and traffic conditions for a combination of one experiment rule and one delivery rule.

| User  | Experiment audience | Experiment traffic | Delivery audience | Delivery traffic | Result     |
|-------|---------------------|--------------------|-------------------|------------------|------------|
| user1 | pass                | pass               | N/A               | N/A              | Experiment |
| user2 | pass                | fail               | pass              | pass             | Delivery   |
| user3 | fail                | N/A                | pass              | pass             | Delivery   |
| user4 | fail                | N/A                | pass              | fail             | No action  |
| user5 | fail                | N/A                | fail              | N/A              | No action  |
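
The decision flow in this table can be summarized as a short cascade. The sketch below is an illustrative model only, not the SDK's actual bucketing code; each condition from the table is a boolean, and None marks a condition that is never evaluated:

```python
from typing import Optional


def evaluate_flag(exp_audience: bool, exp_traffic: Optional[bool],
                  del_audience: Optional[bool], del_traffic: Optional[bool]) -> str:
    """Illustrative model of the table above; None means the condition is never checked."""
    # The experiment rule is always evaluated first.
    if exp_audience:
        if exp_traffic:
            return "Experiment"
        # Failing experiment traffic still lets the user move down to the delivery rule.
    if del_audience:
        # For a delivery, failing traffic means falling through to Everyone Else
        # (shown here as "No action" because no other rule serves the flag).
        return "Delivery" if del_traffic else "No action"
    return "No action"


# Reproduces user1 through user5 from the table.
assert evaluate_flag(True, True, None, None) == "Experiment"   # user1
assert evaluate_flag(True, False, True, True) == "Delivery"    # user2
assert evaluate_flag(False, None, True, True) == "Delivery"    # user3
assert evaluate_flag(False, None, True, False) == "No action"  # user4
assert evaluate_flag(False, None, False, None) == "No action"  # user5
```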

Rulesets and total traffic

As an example of how traffic is evaluated across multiple rules: if you allocate 80% of total traffic to an experiment and set 50% traffic for a delivery with the same audience, then 80% of users go to the experiment, and half of the remaining 20%, or 10% of total traffic, ends up in the delivery.
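
In other words, the second rule's traffic percentage applies only to the users the first rule did not claim. A quick calculation using the percentages from the example above:

```python
experiment_allocation = 0.80   # 80% of total traffic goes to the experiment
delivery_allocation = 0.50     # 50% of the *remaining* traffic goes to the delivery

remaining = 1.0 - experiment_allocation           # 20% of total traffic
delivery_share = remaining * delivery_allocation  # 10% of total traffic

print(f"Experiment: {experiment_allocation:.0%}, Delivery: {delivery_share:.0%}")
# Experiment: 80%, Delivery: 10%
```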

The following diagrams illustrate some of the ways that traffic can be split across flag rules, depending on the traffic allocations and audiences you set.

