
Event tracking

This topic provides an overview of best practices for effectively tracking events in Optimizely Feature Experimentation.

If you can install an SDK and identify a user, you can use Optimizely Feature Experimentation to track any event, from users interacting with specific elements of a web page or mobile app to more complex metrics like lifetime value that are calculated in backend systems.

Tracking and measuring user behavior is a prerequisite for understanding how your customers use your product and why they use it that way. Without that information, it is much more difficult to judge the performance of a new feature flag rule, such as an A/B test, or apply any lessons learned to future rules. You will not know which specific components of your product your customers are interacting with—and you will not have the information you need to build a user-centric roadmap for future development.

To track events effectively, you must strategically plan for metrics important to your organization and implement the appropriate infrastructure. This is a two-step process:

  1. Identify where to track the metrics in your technology stack.
  2. Instrument the SDK to actively track the events.

The following sections cover best practices for both steps.

See also Choose metrics.
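At the code level, step 2 usually amounts to initializing an SDK client, identifying the user, and tracking an event that is already defined in your Feature Experimentation project. The following is a minimal sketch using the Java SDK; the SDK key, user ID, and purchased_ticket event key are placeholder values.

    import com.optimizely.ab.Optimizely;
    import com.optimizely.ab.OptimizelyFactory;
    import com.optimizely.ab.OptimizelyUserContext;

    public class TrackingExample {
        public static void main(String[] args) {
            // Initialize the SDK client with your project's SDK key (placeholder value).
            Optimizely optimizely = OptimizelyFactory.newDefaultInstance("YOUR_SDK_KEY");

            // Identify the user, then track an event defined in your project.
            OptimizelyUserContext user = optimizely.createUserContext("user123");
            user.trackEvent("purchased_ticket");
        }
    }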

Use a customer data platform

We recommend using a customer data platform like Segment when setting up event tracking in Optimizely Feature Experimentation. Doing so can help you bypass a common friction point in getting up and running on Feature Experimentation: configuring and validating API calls throughout your entire stack.

Without a customer data platform, you may run into a double-tagging problem. Double-tagging can occur when your stack includes multiple technologies with overlapping functions, such as tracking conversions. Unless you are diligent and organized from the beginning, it is easy to inadvertently tag the same events in more than one tool.

When the feature flags and functions of your own site are double-tagged like this, your engineering team often inherits a level of technical debt that can require substantial effort to resolve.

Avoid this by incorporating a product like Segment into your Optimizely Feature Experimentation implementation from the beginning and carefully planning your strategy for defining and tagging events early on. In particular, consider these factors:

  • The specific use cases you wish to address.
  • The feasibility of addressing these use cases.
  • The metrics that will best enable you to do this.
  • The most efficient and effective means to embed these metrics into your existing infrastructure.

Implementation

After you identify the events you want to track, the next step is deciding how best to implement Optimizely Feature Experimentation on your site. This involves adapting the Feature Experimentation SDK to meet your organization's needs.

Typical Optimizely Feature Experimentation use cases that require adapting the SDK include:

  • The application handles a high volume of events.
  • Events are routed through a proxy server.
  • Results are required in real time.

Specific approaches to implementation will vary depending on how many of these conditions apply to your application.
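For example, if the application handles a high volume of events, the Java SDK can be adapted to batch events before dispatching them. The sketch below assumes the SDK's batch event processor and async event handler; the batch size, flush interval, and SDK key are illustrative placeholders rather than tuning recommendations, so verify the builder options against the SDK reference for your version.

    import com.optimizely.ab.Optimizely;
    import com.optimizely.ab.config.HttpProjectConfigManager;
    import com.optimizely.ab.event.AsyncEventHandler;
    import com.optimizely.ab.event.BatchEventProcessor;

    import java.util.concurrent.TimeUnit;

    public class HighVolumeClientFactory {
        // Builds an Optimizely client that batches events before dispatching them,
        // reducing outbound requests when event volume is high. All values are
        // illustrative placeholders.
        public static Optimizely build(String sdkKey) {
            AsyncEventHandler eventHandler = AsyncEventHandler.builder()
                    .withQueueCapacity(20_000)
                    .withNumWorkers(2)
                    .build();

            BatchEventProcessor eventProcessor = BatchEventProcessor.builder()
                    .withEventHandler(eventHandler)
                    .withBatchSize(50)
                    .withFlushInterval(TimeUnit.SECONDS.toMillis(1))
                    .build();

            HttpProjectConfigManager configManager = HttpProjectConfigManager.builder()
                    .withSdkKey(sdkKey)
                    .build();

            return Optimizely.builder()
                    .withConfigManager(configManager)
                    .withEventProcessor(eventProcessor)
                    .build();
        }
    }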

Instrument once

In many cases, SDK instrumentation can and should be done in the code itself rather than further down the stack. In a multi-system or service-oriented architecture, all products tie into one service, so you can instrument in a single place. This simplifies the initial deployment as well as later updates and maintenance.

This approach can work when you need results in real time, but it is not always the best way to get them. If you need real-time results, consider encapsulating the information in an API call instead.
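As a rough illustration of instrumenting in one place, the hypothetical wrapper below centralizes all tracking calls so that individual modules never call the SDK directly; the EventTracker class and its method names are invented for this sketch.

    import com.optimizely.ab.Optimizely;
    import com.optimizely.ab.OptimizelyUserContext;

    // Hypothetical single point of instrumentation: every module routes its
    // tracking calls through this class instead of calling the SDK directly.
    public class EventTracker {
        private final Optimizely optimizely;

        public EventTracker(Optimizely optimizely) {
            this.optimizely = optimizely;
        }

        public void track(String userId, String eventKey) {
            OptimizelyUserContext user = optimizely.createUserContext(userId);
            user.trackEvent(eventKey);
        }
    }

Any service in the stack can then report events the same way, for example tracker.track("user123", "purchased_ticket").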

For example, consider an event ticketing application. The application includes a module called buy_tickets that is driven by Java on the backend. In a queue-based model, all information about a user interaction is captured in a slot in the queue. The application processes the sign-in in real time, but it will not necessarily do the same for all the actions related to that sign-in. Event processing speed in the queue depends on the volume of events and the backend's ability to handle the traffic. As a result, there may be some lag in delivering A/B test results.

Now imagine 1 million people using this application at the same time. If the app tries to record the success or failure of an A/B test somewhere on the site, those results will likely not be delivered in real time under a queueing model. If, however, these events are tracked through an Optimizely Feature Experimentation API, the results are available immediately, with no noticeable lag.

If real-time results are important to you, instrument your events at the place where they actually happen.
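Continuing the hypothetical buy_tickets example, the sketch below tracks the conversion at the moment the purchase succeeds instead of placing it on a queue. The BuyTicketsHandler class, completePurchase method, and purchased_ticket event key are illustrative names, not part of any real application.

    import com.optimizely.ab.Optimizely;
    import com.optimizely.ab.OptimizelyUserContext;

    import java.util.Map;

    // Hypothetical handler in the buy_tickets module: the event is tracked at the
    // point where the purchase actually happens, so results reach Optimizely
    // Feature Experimentation without waiting on a backend queue.
    public class BuyTicketsHandler {
        private final Optimizely optimizely;

        public BuyTicketsHandler(Optimizely optimizely) {
            this.optimizely = optimizely;
        }

        public void completePurchase(String userId, double orderTotal) {
            // ... existing purchase logic ...

            OptimizelyUserContext user = optimizely.createUserContext(userId);
            // Optional event tags attach details such as revenue (in cents) to the event.
            user.trackEvent("purchased_ticket", Map.of("revenue", (int) (orderTotal * 100)));
        }
    }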