
The availability of features may depend on your plan type. Contact your Customer Success Manager if you have any questions.


Run a multi-armed bandit optimization

How to run a multi-armed bandit optimization in Optimizely Feature Experimentation.

You might want to run a test focusing on maximizing conversions from your variations instead of finding the variation most likely to perform consistently better than your baseline. A multi-armed bandit (MAB) optimization is a different type of experiment, compared to an A/B test, because it uses reinforcement learning to allocate traffic to variations that perform well while allocating less traffic to underperforming variations.

⚠️ Important

MAB optimizations do not generate statistical significance. Instead, the algorithm pushes traffic to variations with the most conversions; the reason for a variation's performance is unimportant.

MAB is for optimization, not experimentation. MAB tests are best suited for maximizing conversions during short, temporary experiences, such as headline testing or a holiday weekend sale. You should never use MAB tests for exploratory hypotheses or variation selection.

MAB's primary goal is to answer the question, "Which variation gets us the largest reward?" where the largest reward means the highest revenue or the most conversions. For more information on MABs, see Maximize lift with multi-armed bandit optimizations.
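The details of Optimizely's allocation algorithm are covered in the linked end-user documentation. As a rough intuition only, the following is a minimal sketch in Python of a Thompson-sampling-style bandit; none of these names come from the Feature Experimentation SDKs, and this is not the production implementation.

import random

# Each variation keeps a Beta(a, b) posterior over its conversion rate:
# a counts observed conversions + 1, b counts observed non-conversions + 1.
variations = {"control": [1, 1], "variation_a": [1, 1], "variation_b": [1, 1]}

def choose_variation():
    # Sample a plausible conversion rate for each variation and serve the
    # highest sample, so better performers win a growing share of traffic.
    samples = {name: random.betavariate(a, b) for name, (a, b) in variations.items()}
    return max(samples, key=samples.get)

def record_outcome(name, converted):
    # Update that variation's posterior with the observed result.
    a, b = variations[name]
    variations[name] = [a + 1, b] if converted else [a, b + 1]

Because the allocation keeps adapting, a variation that keeps converting keeps receiving more traffic, which is the behavior described in the Important note above.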

Best use cases

The following cases may be a better fit for a multi-armed bandit optimization than a traditional A/B experiment:

  • Promotions and offers – Users who sell consumer goods on their site often focus on driving higher conversion rates. One way to do this is to offer special promotions that run for a limited time. Your changes will not be permanent, and an MAB optimization will send more traffic to the over-performing variations and less traffic to the underperforming variations for the duration of the promotion.
  • Headline testing – Headlines are short-lived content that loses relevance after a fixed time. If a headline experiment takes as long to reach statistical significance as the lifespan of the headline itself, then the insights gained from the experiment are irrelevant. An MAB optimization lets you maximize your impact without having to balance experiment runtime against the natural lifespan of a headline.
  • Webinars – You can boost registration for webinars or other events by experimenting with several versions of the call to action to sign up for your webinar.

For algorithmic details of MABs at Optimizely, see the end-user documentation.

Setup overview

To configure an MAB:

  1. (Prerequisite) Create a flag in your Feature Experimentation project.

  2. (Prerequisite) Handle user IDs.

📘 Note

If you are using a server-side SDK, set up a user profile service to ensure consistent user bucketing (a minimal sketch follows this list).

  3. Create and configure an MAB rule.
  4. If you have not done so yet, implement the Optimizely Feature Experimentation SDK's Decide method in your application's codebase through a flag.
  5. Test your MAB rule in a development environment. See Test and troubleshoot.
  6. Discard any test user events and enable your MAB optimization rule in a production environment.
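In the server-side SDKs, a user profile service implements lookup and save methods that the SDK calls before and after bucketing. The following is a minimal sketch for the Python SDK, assuming an in-memory store; the class name and storage backend are illustrative only (use Redis, a database, or similar in production).

from optimizely import optimizely

class InMemoryUserProfileService:
    # Stores each user's experiment bucket map so bucketing stays sticky.
    def __init__(self):
        self._profiles = {}

    def lookup(self, user_id):
        # Called by the SDK before bucketing; return None for new users.
        return self._profiles.get(user_id)

    def save(self, user_profile):
        # Called by the SDK after bucketing; user_profile is a dict with
        # 'user_id' and 'experiment_bucket_map' keys.
        self._profiles[user_profile['user_id']] = user_profile

optimizely_client = optimizely.Optimizely(
    sdk_key='<YOUR_SDK_KEY>',  # replace with your environment's SDK key
    user_profile_service=InMemoryUserProfileService(),
)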

Create an MAB optimization in your Feature Experimentation project

To create an optimization in the Optimizely app:

  1. Go to Flags, select your flag, and select your environment.

  2. Click Add Rule and select Multi-Armed Bandit.

    (Image: Create Multi-Armed Bandit rule)
  3. Configure your MAB rule:

    1. (Optional) Search for and add audiences. To create an audience, see Target audiences. Audiences evaluate in the order in which you drag and drop them. You can match each user on any or all of the audience conditions.
    2. Use the Traffic allocation slider to set the percentage of your total traffic to bucket into the experiment.
    3. Add metrics based on tracked user events. See Create events to create and track events. For information about selecting metrics, see Choose metrics. For instructions on setting up metrics, see Create a metric in Optimizely using the metric builder. (A sketch of tracking a conversion event from your code follows these steps.)
    4. Choose the variations you want to optimize. Unlike A/B experiments, you do not need to compare against a baseline because statistical significance is not calculated for MAB optimizations. See Why MABs do not use a baseline.
    5. (Optional) Add the MAB to an Exclusion Group.
    6. Click Save.

📘 Note

If you plan to change the traffic allocation after starting the experiment, then implement a user profile service. See Ensure consistent user bucketing. Also, create a user profile service if you plan on using Stats Accelerator.

  4. Click Run on the MAB rule. If the ruleset (flag) is not already running, click Run on it as well.
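The metrics you added in step 3 are computed from events your application tracks with the SDK, so the MAB can only shift traffic toward a variation when conversions are reported for it. The following is a minimal sketch using the Python SDK, assuming your metric is built on an event with the key 'purchase' (substitute the event key you created in your project).

# optimizely_client is the Optimizely client instance initialized in your application
user = optimizely_client.create_user_context('user123', {'logged_in': True})

# ...after the user completes the conversion you are measuring...
user.track_event('purchase')  # the event key must match the event behind your metric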

Implement the MAB

If you have already implemented the flag in your application's codebase, no further configuration is required for the flag delivery. If you have not, implement the Decide method call in your code to enable or disable the flag for a user:

Swift
// Decide if user sees a feature flag variation
let user = optimizely.createUserContext(userId: "user123", attributes: ["logged_in":true])
let decision = user.decide(key: "flag_1")
let enabled = decision.enabled

Go
// Decide if user sees a feature flag variation
user := optimizely.CreateUserContext("user123", map[string]interface{}{"logged_in": true})
decision := user.Decide("flag_1", nil)
enabled := decision.Enabled

Python
# Decide if user sees a feature flag variation
user = optimizely.create_user_context("user123", {'logged_in': True})
decision = user.decide("flag_1")
enabled = decision.enabled

PHP
// Decide if user sees a feature flag variation
$user = $optimizely_client->createUserContext('user123', ['logged_in' => true]);
$decision = $user->decide('flag_1');
$enabled = $decision->getEnabled();

Ruby
# Decide if user sees a feature flag variation
user = optimizely_client.create_user_context('user123', {'logged_in' => true})
decision = user.decide('flag_1')
enabled = decision.enabled

C#
// Decide if user sees a feature flag variation
var user = optimizely.CreateUserContext("user123", new UserAttributes { { "logged_in", true } });
var decision = user.Decide("flag_1");
var enabled = decision.Enabled;

Java
// Decide if user sees a feature flag variation
OptimizelyUserContext user = optimizely.createUserContext("user123", new HashMap<String, Object>() { { put("logged_in", true); } });
OptimizelyDecision decision = user.decide("flag_1");
Boolean enabled = decision.getEnabled();

JavaScript
// Decide if user sees a feature flag variation
const user = optimizely.createUserContext('user123', { logged_in: true });
const decision = user.decide('flag_1');
const enabled = decision.enabled;

React
// Decide if user sees a feature flag variation
const [decision] = useDecision('flag_1', null, { overrideUserAttributes: { logged_in: true } });
const enabled = decision.enabled;

Flutter
// Decide if user sees a feature flag variation
var user = await flutterSDK.createUserContext(userId: "user123");
var decisionResponse = await user!.decide("flag_1");
var decision = decisionResponse.decision;
var enabled = decision!.enabled;

See the SDK reference documentation for more detailed examples.

Optimizely Feature Experimentation uses the Decide method call to decide whether a user qualifies for the rule and which variation they receive. The Optimizely Feature Experimentation SDKs let you reuse the same flag implementation across different flag rules.

Remember, a user is evaluated against all the rules in a ruleset, in order, before being bucketed into a rule's variation. See Create feature flags.
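To confirm which rule and variation a user was bucketed into (for example, while testing your MAB rule in a development environment), you can inspect the returned decision. The following is a minimal sketch that continues the Python example above; 'flag_1' is the example flag key.

decision = user.decide('flag_1')
print(decision.flag_key)       # the flag the decision was made for
print(decision.rule_key)       # the rule in the ruleset that matched, such as your MAB rule
print(decision.variation_key)  # the variation the user was bucketed into
print(decision.enabled)        # whether the flag is on for this user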

MAB results

Because MAB optimizations do not generate statistical significance, their results page differs from an A/B test's results page. Instead of calculating statistical significance, MABs push traffic to variations with the most conversions; the reason for a variation's performance is unimportant.

(Image: MAB results page)

For more information, see the user documentation on the Optimizely Experimentation Results page.