


Choose QA tests

This topic describes the different QA options and gives best practices for QA in Optimizely Full Stack.

Best practices for what to test

The traditional method for automating tests will not scale with a large number of feature flags. If you try to unit test, integration test, and end-to-end test every combination of every feature and its variations, you will get an explosion of tests.

At a high level, here is some guidance for automating tests selectively:

  • Unit tests should be agnostic of feature flags. If a unit test has to be aware of flags, then mock and stub flags.
  • Integration tests should also have as little awareness of feature flags as possible. Focus on individual code paths to ensure proper business logic and integration. Force the particular experimental variations and feature flag states you want to test by using mocks and stubs. For example, you can mock an isFeatureEnabled SDK call to always return true in an integration test (see the sketch after this list).
  • Manual verification is expensive, so reserve a human QA tester for business-critical variations and flags.
  • End-to-End tests are the most expensive to write and maintain, so reserve these only for the most business-critical experimental or flag paths. Include a test that checks what happens if all feature flags are enabled. Include another test to check that the system can degrade gracefully in the unlikely event your flag system goes down.
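To make the stubbing advice concrete, here is a minimal sketch of a flag-aware test, assuming TypeScript and Jest. The applyDiscount function and the "new_checkout" flag key are hypothetical examples; only isFeatureEnabled mirrors the Optimizely Full Stack SDK method named in the list above.

```typescript
// Minimal sketch, assuming Jest + TypeScript.
// applyDiscount and the "new_checkout" flag key are hypothetical;
// isFeatureEnabled mirrors the Full Stack SDK method of the same name.
type FlagClient = {
  isFeatureEnabled(flagKey: string, userId: string): boolean;
};

// Hypothetical business logic that branches on a feature flag.
function applyDiscount(client: FlagClient, userId: string, total: number): number {
  return client.isFeatureEnabled('new_checkout', userId) ? total * 0.9 : total;
}

test('discount path runs when the flag is stubbed on', () => {
  // Stub the SDK call so the test is deterministic and never depends on real bucketing.
  const stubClient: FlagClient = {
    isFeatureEnabled: jest.fn().mockReturnValue(true),
  };

  expect(applyDiscount(stubClient, 'qa-user', 100)).toBe(90);
  expect(stubClient.isFeatureEnabled).toHaveBeenCalledWith('new_checkout', 'qa-user');
});
```

A second test can stub the call to return false to cover the flag-off path, so both branches are verified without touching the flag configuration itself.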

Which QA tool to choose

For a sense of how we at Optimizely use these tools, see Automation Testing Feature Flags. A short SDK-level sketch of the forced-variation and audience-attribute approaches follows the table.

| QA tool | Whitelist | Audience attribute = "QA" | Forced Variation |
| --- | --- | --- | --- |
| Granularity | Experiment variations | Experiments | Experiment variations |
| Example use cases | Mocks during development. Whitelist a test runner so you always see the variation you want the test to assert against. | Mocks during development. Automated web UI tests. Manual web UI tests. | Tests that run on a behavior-driven development (BDD) framework |
| Ease of use | Easiest (it can override all other configurations, so you can leave your intended feature test configuration intact while you QA) | Medium (requires some feature test configuration changes) | Harder |
| Available for | Experiments | Experiments, feature rollouts | Experiments |
| Comments | Limited to 10 user IDs per experiment (allowing more would skew experimental results) | Two easy implementations are to audience match on a URL query parameter or a cookie | |
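Assuming the Forced Variation column refers to the SDK's setForcedVariation call, here is a minimal sketch of the forced-variation and audience-attribute approaches with the JavaScript (TypeScript) SDK; whitelisting itself is configured in the Optimizely app rather than in code. The experiment key, variation key, user ID, and the "qa" attribute name are hypothetical, and the matching "QA" audience would need to exist on the experiment.

```typescript
// Minimal sketch, assuming @optimizely/optimizely-sdk.
// Experiment key, variation key, user ID, and the "qa" attribute are placeholders.
import * as optimizelySdk from '@optimizely/optimizely-sdk';

const client = optimizelySdk.createInstance({ sdkKey: 'YOUR_SDK_KEY' });

client?.onReady().then(() => {
  // Forced Variation: pin a known test user to one variation so automated
  // tests always assert against the same experience.
  client.setForcedVariation('checkout_experiment', 'qa-test-runner', 'variation_b');

  // Audience attribute = "QA": pass an attribute that only QA traffic sends,
  // so the user matches a "QA" audience configured on the experiment.
  const variation = client.activate('checkout_experiment', 'qa-test-runner', { qa: 'true' });
  console.log('Bucketed into:', variation);
});
```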

Keep your experiment data clean

The preceding QA tools are for experiments or feature flags that are already running or enabled. But how do you keep end users from being exposed to an experiment that is still in development but already running? You can:

  • Run in a preproduction environment (see the sketch after this list)
  • Set a custom audience attribute that only works for QA
  • If you are using whitelisting, set traffic allocation to zero
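For the first option, here is a minimal sketch of environment selection, assuming the JavaScript (TypeScript) SDK and that your preproduction and production Optimizely environments each have their own SDK key. The key placeholders and the NODE_ENV check are hypothetical.

```typescript
// Minimal sketch, assuming @optimizely/optimizely-sdk and one SDK key per
// Optimizely environment. Key values and the NODE_ENV check are placeholders.
import * as optimizelySdk from '@optimizely/optimizely-sdk';

// QA traffic uses the preproduction environment's datafile, so its impression
// and conversion events never mix with production results.
const sdkKey =
  process.env.NODE_ENV === 'production'
    ? 'PRODUCTION_SDK_KEY'
    : 'PREPRODUCTION_SDK_KEY';

const optimizelyClient = optimizelySdk.createInstance({ sdkKey });
```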

On the flip side, when you are done with QA and ready for your experiment to go live with real data, you want to discard the events your QA testers triggered so their testing does not show up on your results page. Optimizely does not yet support discarding data automatically when you switch your experiment from a non-production environment to a production environment. Instead, you can get rid of the QA events by using Reset Results.