Preview experiment variations
Before you publish a test or roll out a feature live to your visitors, it's important to make sure it works the way you expect.
Full Stack provides tools to help you verify your tests end-to-end: check that each variation of your app looks, feels, and operates correctly, and that any metrics being tracked report correctly on the Results page.
When activating a test, you must provide a userId. Normally, Optimizely uses the userId to randomize which bucket to respond with, so the same userId always receives the same variation. When QA testing, however, you should check the behavior of every variation. Two options tell Optimizely to skip the randomized bucketing and return a specific variation: forced bucketing and whitelisting. Both methods require the test to be running, so start the test in a staging environment. If you can't use Environments, start your test with 0% of traffic allocated to it and use these methods for testing.
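To see why a forced bucket is needed, here is a minimal stdlib-only sketch of the idea: a userId is hashed to a deterministic bucket, and a forced-bucketing map short-circuits that hash. This is an illustration of the concept, not the Optimizely SDK's implementation; the names `bucket`, `FORCED`, and `VARIATIONS` are hypothetical.

```python
import hashlib

VARIATIONS = ["control", "treatment"]
FORCED = {}  # userId -> variation key, populated during QA

def bucket(user_id: str) -> str:
    # Forced bucketing / whitelisting bypasses the hash entirely.
    if user_id in FORCED:
        return FORCED[user_id]
    # Hashing the userId means the same user always lands in the same bucket.
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return VARIATIONS[int(digest, 16) % len(VARIATIONS)]

# During QA, force a test user into each variation in turn:
FORCED["qa-user"] = "treatment"
print(bucket("qa-user"))      # forced: "treatment"
print(bucket("visitor-123"))  # hashed deterministically
```

Because the hash is deterministic, repeated calls for the same visitor return the same variation, which is exactly why you need an override to see every experience.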
Here's a QA checklist:
- Start your test in a non-production environment.
- Force yourself into each experience using either forced bucketing or whitelisting.
- Trigger each conversion event being tracked by the test.
- If you don’t see the variation or event tracking you expect, increase the log level and consult your error handler to debug.
- Repeat steps 2 and 3 for each variation of your test.
- Confirm your results. The Results page updates on a 5-minute interval, so you might have to wait up to 5 minutes for it to refresh; the time of the last update is shown on the page.
- Launch in your production environment.
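The forcing and tracking steps of the checklist above can be sketched as a QA loop. This uses a stub client rather than a real Optimizely client; the method names mirror the SDK's `set_forced_variation`, `activate`, and `track` calls, but the experiment key, event keys, and stub behavior here are hypothetical stand-ins.

```python
class StubClient:
    """Stand-in for an Optimizely client, for illustration only."""

    def __init__(self):
        self.forced = {}   # (experiment_key, user_id) -> variation key
        self.events = []   # (event_key, user_id) pairs fired during QA

    def set_forced_variation(self, experiment_key, user_id, variation_key):
        self.forced[(experiment_key, user_id)] = variation_key
        return True

    def activate(self, experiment_key, user_id):
        # With a forced variation set, activation returns it directly.
        return self.forced.get((experiment_key, user_id))

    def track(self, event_key, user_id):
        self.events.append((event_key, user_id))

client = StubClient()
QA_USER = "qa-user"

# Checklist steps 2-4: force each variation, then trigger each event.
for variation in ["control", "treatment"]:
    client.set_forced_variation("checkout_test", QA_USER, variation)
    assert client.activate("checkout_test", QA_USER) == variation
    for event in ["add_to_cart", "purchase"]:
        client.track(event, QA_USER)

print(f"Fired {len(client.events)} events")  # 2 events x 2 variations = 4
```

Running the same loop against a real client in a staging environment, then checking the Results page after each pass, covers the whole checklist before launch.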