
Analyze results

How to analyze the results of your experiments using the Reports page in the Flags version of Optimizely Full Stack.

The Reports page helps you interpret experiment metrics using Optimizely's Stats Engine, a unique approach to statistics for the digital age developed in conjunction with leading researchers at Stanford University. Stats Engine embeds innovations combining sequential testing and false discovery rate control to deliver speed and accuracy for businesses making decisions based on real-time data.

📘

Reports page versus Results page

Previous versions of Full Stack, Optimizely Web Experimentation and Performance Edge refer to the Reports page as the Results page. Both pages show your experiment results, but the Flags version of Reports shows all experiment results across all environments.

Once you run an experiment, explore the Reports page to learn how users respond to your experiment.

[Screenshot: Full Stack Flags Reports navigation]

[Screenshot: Results page, Flags version]

Optimizely A/B tests do not currently support retroactive results calculation.

📘

Note

When you use deliveries, Optimizely does not send an impression to the Reports page, because deliveries are for rolling out flags that have already been tested in experiments. To measure the impact of a feature flag, run an experiment instead.

Segment results

Segment your results to see how different groups of users behave compared to users overall.

By default, Optimizely shows results for all users who enter your experiment. However, not all users behave like your average users. Optimizely lets you filter your results so you can see if certain groups of users behave differently from your users overall. This is called segmentation.

For example, imagine you run an experiment with a pop-up promotional offer. This generates a positive lift overall, but when you segment for users on mobile devices, it is a statistically significant loss. Maybe the pop-up is disruptive or difficult to close on a mobile device. When you implement the change or run a similar experiment in the future, you might exclude mobile users based on what you have learned from segmenting.
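To make the example concrete, the toy Python snippet below computes conversion rates overall and for a mobile segment from a handful of made-up events. The numbers and the simple rate calculation are illustrative only; Optimizely's Stats Engine uses sequential testing and false discovery rate control, not a raw rate comparison like this.

```python
# Illustrative only: toy conversion events, not Optimizely's Stats Engine.
# Shows how a change can look fine overall yet underperform for one segment.
events = [
    # (device, converted)
    ("desktop", True), ("desktop", True), ("desktop", True), ("desktop", False),
    ("mobile", False), ("mobile", False), ("mobile", True), ("mobile", False),
]

def conversion_rate(rows):
    """Fraction of events that converted."""
    return sum(converted for _, converted in rows) / len(rows)

overall = conversion_rate(events)
mobile = conversion_rate([e for e in events if e[0] == "mobile"])

print(f"overall: {overall:.0%}, mobile segment: {mobile:.0%}")
# Here the overall rate is 50%, but the mobile segment converts at only 25%.
```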

Segmenting results is one of the best ways to gain deeper insight beyond the average user's behavior. It is a powerful way to step up your experimentation program.

Experiments do not include "out-of-the-box" attributes such as browser, device, or location (those attributes are included in our Web Experimentation product). Optimizely's SDKs are platform-agnostic: we do not assume which attributes are available in your application or what format they take. In experiments, all segmentation is based on the custom attributes that you create.

Here's how to set them up:

  1. Create the custom attributes that you want to use for Results page segmentation.
  2. Target audiences based on your custom attributes.
  3. Pass the custom attributes into the user context for your experiment or app. For an example, see the Create User Context topic in your language's SDK reference.

After you define custom attributes and pass them in the SDK, they'll be available as segmentation options from the drop-down menu at the top of the Results page.
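In code, step 3 amounts to attaching an attributes dictionary when you create the user context. The sketch below uses a simplified stand-in function rather than a real SDK call, and the attribute names (`device_type`, `plan_tier`) are hypothetical examples; in a real project you would call the create-user-context method documented in your language's SDK reference.

```python
# Simplified stand-in for an SDK user context. Real projects would call the
# Optimizely SDK's create-user-context method for their language instead.
def create_user_context(user_id, attributes):
    """Bundle a user ID with the custom attributes used for segmentation."""
    return {"user_id": user_id, "attributes": dict(attributes)}

# Custom attributes you defined in step 1; the values come from your app.
user = create_user_context("user123", {
    "device_type": "mobile",   # hypothetical custom attribute
    "plan_tier": "premium",    # hypothetical custom attribute
})

print(user["attributes"])
```

Every attribute passed this way (and defined in your project) then appears as a segmentation option on the Results page.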

Interpret the Reports page

You can segment your entire Reports page or the results for an individual metric. Segmenting results helps you get more out of your data by generating valuable insights about your users.

  1. Navigate to your Reports page.
  2. Click Segment and select an attribute from the dropdown.

Under the Segment dropdown, you will find default segments and custom segments all in one place.

📘

Note

For Optimizely to use attributes for segmentation, the attribute must be defined in the datafile, and it must be included in the user context object. However, it does not have to be added as an audience to the test.

When segmenting results, a user who belongs to more than one segment will be counted in every segment they belong to. However, if a user has more than one value for a single segment, the user is only counted for the last-seen value they had in the session.
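The "last-seen value wins" rule above can be sketched in a few lines of Python. This is a minimal illustration of the counting behavior, not Optimizely's actual implementation:

```python
from collections import Counter

# Each event records the attribute value a user had when the event fired.
# If a user reports several values for one attribute during the session,
# only the last-seen value determines which segment the user counts in.
session_events = [
    ("user1", "mobile"),
    ("user1", "desktop"),   # user1's last-seen value is "desktop"
    ("user2", "mobile"),
]

last_seen = {}
for user_id, device in session_events:
    last_seen[user_id] = device   # later events overwrite earlier ones

# Count each user once, under their last-seen value.
segment_counts = Counter(last_seen.values())
print(segment_counts)   # user1 counts only toward "desktop", not "mobile"
```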

See our support article on how visitors and conversions are counted in segments.