
Interactions between flag rules

Describes flag rules and how they interact for Optimizely Feature Experimentation.

📘

Note

This topic is most helpful if you run experiments and need an in-depth understanding of how your audiences bucket into both experiments and deliveries.

Use rules to refine and target who sees a flag. You can configure experiment rules and targeted delivery rules, and the two types evaluate differently when a user fails traffic bucketing.

Rulesets and audiences

If you define multiple overlapping audiences in your flag rules, a user who matches more than one rule "moves down" the ruleset differently for experiments than for deliveries:

  • For experiments, users can still move down to the next rule if they fail traffic bucketing.
  • Users "fall through" to the Everyone Else rule for targeted deliveries if they fail traffic bucketing. They only move down to the next rule if they fail audience conditions.

For example, in the following screenshot, if user A failed to bucket into the A/B test, they could still bucket into rule #2 if they matched its criteria. However, if user A then failed rule #2, they'd fall through to Everyone Else, even if they would have matched the criteria for rule #3.

The following diagram shows this behavior in detail.

Diagram: Bucketing behavior varies for different rule types.

The following table elaborates on the preceding diagram by showing all possible outcomes for audience and traffic conditions for a combination of one experiment rule and one delivery rule.

| User  | Experiment audience | Experiment traffic | Delivery audience | Delivery traffic | Result     |
| ----- | ------------------- | ------------------ | ----------------- | ---------------- | ---------- |
| user1 | pass                | pass               | N/A               | N/A              | Experiment |
| user2 | pass                | fail               | pass              | pass             | Delivery   |
| user3 | fail                | N/A                | pass              | pass             | Delivery   |
| user4 | fail                | N/A                | pass              | fail             | No action  |
| user5 | fail                | N/A                | fail              | N/A              | No action  |
Rulesets and total traffic

As an example of how traffic evaluates against multiple rules: if you configure 80% of total traffic to an experiment, and set 50% traffic for a delivery with the same audience, then 80% of traffic goes to the test, and half of the remaining 20%, or 10%, ends up in the delivery.
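The arithmetic above can be spelled out directly: the delivery's 50% allocation applies only to the 20% of traffic the experiment did not claim.

```python
# Traffic split from the example: 80% to the experiment first,
# then the delivery's 50% applies to what remains.
total = 1.0
experiment_share = 0.80 * total       # 80% of all traffic goes to the experiment
remaining = total - experiment_share  # 20% is never bucketed into the experiment
delivery_share = 0.50 * remaining     # half of that 20% -> 10% of total traffic

print(f"experiment: {experiment_share:.0%}, delivery: {delivery_share:.0%}")
# experiment: 80%, delivery: 10%
```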

The following diagrams illustrate some of the ways that traffic can be split across flag rules, depending on the traffic allocations and audiences you set.

Diagram: Rulesets and total traffic.

Diagram: Roll out a feature to a particular audience.

Diagram: Turn a feature on for a group.