
The availability of features may depend on your plan type. Contact your Customer Success Manager if you have any questions.


Choose QA tests

Describes the different QA options and gives best practices for QA in Optimizely Feature Experimentation.

Best practices for what to test

The traditional method of automating QA tests does not scale to a large number of flags. If you try to unit test, integration test, and end-to-end test every combination of every flag and its variations, the number of tests explodes: ten boolean flags alone produce 2^10 = 1,024 combinations.

At a high level, here is some guidance for automating tests selectively:

  • Unit tests – Should be agnostic of flags. If a unit test has to be aware of flags, mock or stub them.
  • Integration tests – Should also have as little awareness of flags as possible. Focus on individual code paths to ensure proper business logic and integration, and force the particular flag variations you want to test by using mocks and stubs. For example, you can mock a Decide call to return true in an integration test, as in the sketch after this list.
  • Manual verification – Expensive, so reserve a human QA tester for business-critical flags and variations.
  • End-to-end tests – The most expensive to write and maintain, so reserve them for the most business-critical experiment and flag paths. Include a test that checks what happens when all flags are enabled, and another that checks the system degrades gracefully in the unlikely event your flag system goes down.
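To make the Decide mock concrete, here is a minimal Python sketch using unittest.mock. The checkout_total function, the checkout_discount flag key, and the flags client interface are hypothetical stand-ins for your own code that wraps the SDK; the point is only that stubbing the decide call lets each test exercise one code path deterministically.

```python
import unittest
from unittest.mock import MagicMock

def checkout_total(flags, user_id, subtotal):
    """Code under test: apply a 10% discount when the flag decision is enabled."""
    decision = flags.decide(user_id, "checkout_discount")  # hypothetical flag key
    return subtotal * 0.9 if decision.enabled else subtotal

class CheckoutDiscountTest(unittest.TestCase):
    def test_discount_path(self):
        # Stub the decide call so the test deterministically hits the "on" path.
        flags = MagicMock()
        flags.decide.return_value = MagicMock(enabled=True)
        self.assertAlmostEqual(checkout_total(flags, "tester-1", 100.0), 90.0)

    def test_default_path(self):
        # And the "off" path, without touching any real flag configuration.
        flags = MagicMock()
        flags.decide.return_value = MagicMock(enabled=False)
        self.assertAlmostEqual(checkout_total(flags, "tester-1", 100.0), 100.0)

if __name__ == "__main__":
    unittest.main()
```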

Which QA tool to choose

Depending on your testing needs, Optimizely Feature Experimentation offers a variety of QA tools to choose from.

Use an allowlist

  • Granularity – Flag variables, flag variations, and experiments.
  • Example use cases – Mocks during development; allowlisting a test runner so you see the variation you want the test to assert against.
  • Ease of use – Easiest. An allowlist can override all other configurations, allowing you to leave your intended flag rule configuration intact while you QA.
  • Available for – A/B experiments.
  • Comments – You can only allowlist up to 50 user IDs per experiment.

Use a QA audience

  • Granularity – Experiments.
  • Example use cases – Mocks during development; automated web UI tests; manual web UI tests.
  • Ease of use – Medium. Requires some flag rule configuration changes.
  • Available for – A/B tests and targeted deliveries.
  • Comments – Two easy implementations are to audience-match on a URL query parameter or a cookie, as in the sketch that follows this comparison.

Use forced bucketing

  • Granularity – Flag variations.
  • Example use cases – Tests run on a behavior-driven development (BDD) framework.
  • Ease of use – Harder.
  • Available for – A/B experiments.
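For the QA audience's query-parameter implementation, here is a minimal Python sketch. The qa_user attribute name and the checkout_flow flag key are hypothetical; the corresponding QA audience in Optimizely would be defined to match qa_user equals true.

```python
from urllib.parse import parse_qs, urlparse

def qa_attributes_from_url(url):
    """Derive user attributes from the request URL; ?qa_user=true marks QA traffic."""
    params = parse_qs(urlparse(url).query)
    return {"qa_user": params.get("qa_user", ["false"])[0] == "true"}

attrs = qa_attributes_from_url("https://example.com/checkout?qa_user=true")
assert attrs == {"qa_user": True}

# With the real SDK, pass the attributes when creating the user context so the
# QA audience can match on them, for example:
#   user = optimizely_client.create_user_context("tester-1", attrs)
#   decision = user.decide("checkout_flow")
```

A cookie-based implementation works the same way: read the cookie value instead of the query parameter and map it onto the same user attribute.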

Keep your experiment data clean

The preceding QA tools are for flag rules that are already running or enabled. But how do you keep end users from being exposed to an experiment that is still in development? Run the experiment in a preproduction environment.

Feature Experimentation projects automatically come with a Development and Production environment. Each environment has its own results on the Optimizely Experiment results page. After completing QA testing in your preproduction environment, copy your rules to the production environment and run your experiments on your target audience. See Duplicate rules across environments.
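Each environment has its own SDK key, so one way to keep QA traffic in the Development environment is to select the key by deployment environment at startup. Here is a minimal Python sketch; the APP_ENV variable name and the placeholder keys are hypothetical.

```python
import os

# Hypothetical placeholder keys; each Feature Experimentation environment in a
# project has its own SDK key.
SDK_KEYS = {
    "production": "YOUR_PRODUCTION_SDK_KEY",
    "development": "YOUR_DEVELOPMENT_SDK_KEY",
}

def sdk_key_for_current_env():
    """Select the SDK key from a hypothetical APP_ENV variable, defaulting to development."""
    return SDK_KEYS.get(os.environ.get("APP_ENV", "development"), SDK_KEYS["development"])

# A client initialized with the development key buckets users and records results
# in the Development environment only, for example (Python SDK):
#   client = optimizely.Optimizely(sdk_key=sdk_key_for_current_env())
```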