> These docs are for version 2.0, which is no longer officially supported. See the latest version, 3.0.

The Results page is powered by Optimizely's Stats Engine, a [unique approach to statistics for the digital age](🔗) developed with leading researchers at Stanford University. Stats Engine combines sequential testing and false discovery rate control to deliver [speed and accuracy for businesses](🔗) making decisions based on real-time data.

Once you run a feature test or standalone A/B test, dig into the [Experimentation Results page](🔗) to learn how users respond to your experiment.


For more information, see these KB articles:

  • [Stats Accelerator](🔗)

  • [How Optimizely calculates results](🔗)

Full Stack does not currently support retroactive results calculation.


Feature rollouts let you launch new features behind a feature flag. Because Optimizely doesn't track events for rollouts, no extra network traffic is generated and no results are recorded. To measure the impact of a feature, run a feature test instead.

## Segment results

Segment your results to see how different groups of users behave, compared to users overall.

By default, Optimizely shows results for all users who enter your experiment. However, not all users behave like your average user. Optimizely lets you filter your results to see whether certain groups of users behave differently from your users overall. This is called segmentation.

For example, imagine you run an experiment with a pop-up promotional offer. This generates positive lift overall, but when you segment for users on mobile devices, it's a statistically significant loss. Maybe the pop-up is disruptive or difficult to close on a mobile device. When you implement the change or run a similar experiment in the future, you might exclude mobile users based on what you've learned from segmenting.

Segmenting results is one of the best ways to gain deeper insight beyond the average user's behavior. It's a powerful way to step up your experimentation program.

Experiments in Full Stack projects don't include the out-of-the-box attributes (such as browser, device, or location) available in Web projects. Optimizely's SDKs are platform-agnostic: we don't assume which attributes are available in your application or how they're formatted. In Full Stack, all segmentation is based on the custom attributes that you create.

Here's how to set them up:

  1. [Create the custom attributes](🔗) that you want to use for Results page segmentation.

  2. [Create audiences](🔗) based on your custom attributes.

  3. Pass the custom attributes into the [Activate](🔗), [Track](🔗), and [Get Variation](🔗) functions for your experiment or app. For an example, see [Pass in attributes for segmenting results](🔗).

After you define custom attributes and pass them to the SDK, they're available as segmentation options in the drop-down menu at the top of the Results page.
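The steps above can be sketched as follows. This is a minimal sketch of the call pattern, not the real SDK: the stub class below stands in for an initialized Optimizely client so the snippet runs on its own, and the experiment key, event key, and attribute names (`device`, `plan_type`) are illustrative.

```python
class StubOptimizelyClient:
    """Stand-in for an initialized Optimizely client (illustrative only)."""

    def __init__(self):
        self.calls = []  # record each call so we can inspect what was passed

    def activate(self, experiment_key, user_id, attributes=None):
        # The real client buckets the user and returns a variation key.
        self.calls.append(("activate", experiment_key, user_id, attributes))
        return "variation_a"

    def track(self, event_key, user_id, attributes=None):
        self.calls.append(("track", event_key, user_id, attributes))


# Custom attributes you defined in the Optimizely app;
# the values come from your own application at runtime.
attributes = {"device": "mobile", "plan_type": "premium"}

client = StubOptimizelyClient()

# Pass the same attributes to both the activation and the conversion event
# so the conversion can be segmented on the Results page.
variation = client.activate("checkout_flow_test", "user_123", attributes)
client.track("purchase", "user_123", attributes)
```

The key point is that the identical attributes dictionary accompanies both calls; if it is omitted from either one, that user's data can't be segmented on those attributes.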

## Interpret the Results page

You can segment your entire Results page or the results for an individual metric. Segmenting results helps you get more out of your data by generating valuable insights about your users.

  1. Navigate to your [Results page](🔗).

  2. Click _Segment_ and select an attribute from the dropdown.

Under the _Segment_ dropdown, you'll find default segments and custom segments all in one place.



For Optimizely to use attributes for segmentation, the attribute must be defined in the datafile, and it must be included in both the [Activate](🔗) and [Track](🔗) calls. However, it does not have to be added as an audience to the test.

When segmenting results, a user who belongs to more than one segment will be counted in every segment they belong to. However, if a user has more than one value for a single segment, the user is only counted for the last-seen value they had in the session.
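A toy model of this counting rule, not Optimizely's actual implementation: the event shape and attribute names are made up for illustration, but the logic mirrors the rule above (a user is counted in every segment they belong to, and the last-seen value wins when one attribute has several values).

```python
from collections import defaultdict


def count_visitors_by_segment(events):
    """events: chronological list of (user_id, {attribute: value}) pairs.

    Returns a mapping of (attribute, value) -> visitor count, where the
    last-seen value for each (user, attribute) pair wins.
    """
    last_seen = {}  # (user_id, attribute) -> most recent value
    for user_id, attrs in events:
        for attr, value in attrs.items():
            last_seen[(user_id, attr)] = value

    segments = defaultdict(set)  # (attribute, value) -> set of user_ids
    for (user_id, attr), value in last_seen.items():
        segments[(attr, value)].add(user_id)

    return {segment: len(users) for segment, users in segments.items()}


events = [
    ("u1", {"plan": "free"}),
    ("u1", {"plan": "paid"}),  # u1's last-seen plan is "paid"
    ("u2", {"plan": "free", "device": "ios"}),  # u2 lands in two segments
]
counts = count_visitors_by_segment(events)
```

Here `u1` is counted only under `plan=paid` (the last-seen value), while `u2` is counted in both the `plan=free` and `device=ios` segments.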

See our support documentation article on [how visitors and conversions are counted in segments](🔗).

## See also

  • [Discrepancies in third-party data](🔗)

  • [How long to run an experiment](🔗)

  • [How Optimizely counts conversions](🔗)

  • [Interpret your results](🔗)

  • [Stats Accelerator: Use algorithms to boost results](🔗)

  • [Take action based on the results of an experiment](🔗)