
The availability of features may depend on your plan type. Contact your Customer Success Manager if you have any questions.

Centralized datafile service options

Explores two options for creating a centralized datafile service and recommends Redis as the best solution.

When you run the Optimizely Agent in a distributed or containerized setting, each instance needs fast, reliable access to the same, up-to-date datafile. A centralized datafile service ensures the following:

  • Consistency – All Agent instances read from a single, synchronized source of truth.
  • Scalability – You avoid overhead caused by each Agent calling out to Optimizely's CDN individually.
  • Speed – You can rapidly propagate datafile changes to every instance in near real-time.

Two common approaches

Option A: Redis backplane with webhook-driven datafile updates

You configure a webhook in Optimizely. When changes occur (such as starting or stopping an experiment or adjusting traffic allocations), Optimizely sends a POST request to your webhook endpoint. Your system (a small web service or Lambda function) receives the webhook, fetches the updated datafile from Optimizely's CDN, and stores (or overwrites) that datafile in Redis. Instead of each Agent calling Optimizely's CDN, they all read the datafile from Redis. When Redis is updated, every Agent effectively has the new datafile immediately.
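As a rough sketch of the Option A flow, the webhook handler's core logic might look like the following Python. The Redis key layout and function names are assumptions for illustration, not part of Optimizely's API; `redis_client` can be any object with a `set` method, such as `redis.Redis(...)` from the redis-py package.

```python
import json
import urllib.request

def datafile_key(sdk_key):
    # Hypothetical key layout: one Redis key per SDK key (environment).
    return f"optimizely:datafile:{sdk_key}"

def refresh_datafile(redis_client, sdk_key):
    """Fetch the latest datafile from Optimizely's CDN and store it in Redis.

    Called from the webhook handler whenever Optimizely signals a change.
    """
    url = f"https://cdn.optimizely.com/datafiles/{sdk_key}.json"
    with urllib.request.urlopen(url) as resp:
        datafile = resp.read().decode("utf-8")
    # Overwrite the previous revision; all Agents read this single key.
    redis_client.set(datafile_key(sdk_key), datafile)
    return json.loads(datafile)["revision"]
```

Because every Agent reads the same key, a single `set` call propagates the new revision to the whole fleet on their next read.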

Pros

  • Real-time updates – Webhooks push updates instantly, no polling necessary.
  • High performance – Redis is an in-memory store so reads are extremely fast.
  • Single source of truth – All Agents see the same datafile from one location.
  • Simplicity at scale – Especially effective if you have many Agents or multiple regions.

Option B: AWS storage (for example, S3) with custom service

A custom service or lambda function periodically (or with a webhook) fetches the latest datafile from Optimizely's CDN. It stores the datafile in an AWS S3 bucket. The Agents are configured to pull from the S3 bucket or a small service that fronts S3. This ensures a centralized location within AWS.
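A minimal sketch of the Option B fetch-and-upload step might look like this; the bucket name and object key layout are assumptions, and `s3_client` is expected to behave like `boto3.client("s3")`.

```python
import urllib.request

def publish_datafile_to_s3(s3_client, bucket, sdk_key):
    """Fetch the latest datafile and upload it to S3 (Option B).

    Runs periodically or in response to a webhook. If bucket versioning
    is enabled, S3 retains prior revisions for auditing or rollback.
    """
    url = f"https://cdn.optimizely.com/datafiles/{sdk_key}.json"
    with urllib.request.urlopen(url) as resp:
        datafile = resp.read()
    s3_client.put_object(
        Bucket=bucket,
        Key=f"datafiles/{sdk_key}.json",  # assumed key layout
        Body=datafile,
        ContentType="application/json",
    )
```

Agents (or a small fronting service) then read `datafiles/<sdk_key>.json` from the bucket instead of calling the CDN directly.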

Pros

  • Familiar AWS tools – If your infrastructure already relies heavily on S3, it may be simpler operationally.
  • Persistence and versioning – S3 can store multiple datafile versions for auditing or rollback.
  • Simple backup – Files remain in S3 even if your containers go down.

Cons

  • Slightly slower updates – Even with a webhook, you are writing to S3, which is typically less "real-time" than in-memory Redis.
  • Complexity with many Agents – You need to ensure S3 is regionally replicated or handle cross-region latencies.
  • No in-memory speed – Each request has to hit S3 or a proxy.

Recommendation

For most production use cases, especially if you have multiple Agent instances requiring fast, consistent updates, you should use Redis with a webhook-driven approach.

  • Event-driven sync – You get immediate datafile updates whenever you publish changes in Optimizely.
  • Central in-memory store – Redis ensures low-latency reads for all Agents.
  • Reduced external dependencies – Only one service (the webhook) contacts Optimizely's CDN. Your Agents talk to Redis internally.

Implementation guidelines

To create a centralized datafile service, complete the following steps. See the Webhooks for Agent documentation for details on configuring webhooks.

  1. Create a webhook endpoint – This could be a lightweight web server, AWS Lambda, or another microservice that can receive POST requests from Optimizely. Ensure it can handle authentication (for example, a shared secret or token) to confirm requests genuinely come from Optimizely.
  2. Configure the webhook in Optimizely – In Optimizely's project settings, add your endpoint URL as the webhook receiver. Use secrets or IP allowlisting for security.
  3. Upon webhook trigger – Your endpoint receives the JSON payload indicating an update. It fetches the new datafile from Optimizely's CDN.
  4. Write the datafile to Redis – Overwrite the existing datafile key in Redis with the new datafile contents. If you have multiple environments (Production, Development, and so on), store each datafile in a separate Redis key.
  5. Maintain high availability – Use a managed Redis solution (for instance, AWS ElastiCache) with replication and failover to minimize downtime. Monitor health and configure alerts for potential failures or long response times.
  6. Agent configuration – Instead of pointing the Agent to Optimizely's CDN, configure it to retrieve the datafile from Redis (for example, a local microservice that reads from Redis or direct Redis integration if available). Ensure the Agent updates its in-memory cache when the Redis datafile key changes.
  7. Version logging – Have each Agent log the datafile revision it is using. This helps troubleshoot if one Agent is somehow out of sync.
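For the authentication check in step 1, one common pattern is an HMAC signature header on each webhook request. The sketch below follows the widely used `X-Hub-Signature: sha1=<hexdigest>` convention; confirm the exact header name and digest algorithm against the current Optimizely webhook documentation.

```python
import hashlib
import hmac

def verify_webhook_signature(secret, payload, signature_header):
    """Confirm a webhook request was signed with the shared secret.

    `payload` is the raw request body (bytes); `signature_header` is the
    value of the signature header sent with the request.
    """
    expected = "sha1=" + hmac.new(
        secret.encode("utf-8"), payload, hashlib.sha1
    ).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature_header)
```

Reject any request whose signature does not verify before fetching the datafile or touching Redis.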

Additional recommendations

You should also consider the following for your implementation of a centralized datafile service:

  • Protect your Redis cluster (for example, VPC security group rules and network ACLs). Ensure the webhook endpoint is not publicly accessible without authentication.
  • Create fallback logic if Redis is temporarily unavailable (for example, Agents use a locally cached datafile version). If your environment demands it, store the datafile versions in S3 as a backup.
  • Configure CloudWatch or similar tooling to monitor the following:
    • Redis memory usage and latency.
    • Webhook invocation success or failure.
    • Agent logs for datafile version changes.
    • Alerts that trigger if datafiles are not updated within the expected time window.
  • Test how quickly updates propagate from Optimizely's webhook to Redis to the Agent. If you handle high traffic volumes or large datafiles, adjust your Redis instance and network configuration as needed.
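The fallback logic above can be sketched as follows. The key layout and cache path are assumptions for illustration; `redis_client` is any object with a `get` method, such as `redis.Redis(...)`.

```python
def read_datafile(redis_client, sdk_key, cache_path):
    """Read the datafile from Redis, falling back to a local cached copy."""
    try:
        raw = redis_client.get(f"optimizely:datafile:{sdk_key}")
        if raw is not None:
            text = raw.decode("utf-8") if isinstance(raw, bytes) else raw
            # Refresh the local cache so a later Redis outage is survivable.
            with open(cache_path, "w") as f:
                f.write(text)
            return text
    except Exception:
        pass  # Redis unavailable; fall through to the cached copy.
    with open(cache_path, "r") as f:
        return f.read()
```

A stale local copy is usually preferable to failing the request outright, since the Agent can keep serving the last known experiment configuration until Redis recovers.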

Why Redis is the best solution

Redis provides the following, which makes it the best solution for a centralized datafile service:

  • Consistency – All Agent instances see the latest datafile at the same time, reducing the risk of serving outdated experiment configurations.
  • Simplicity at scale – A single webhook and Redis solution is easier to maintain than multiple direct CDN fetches, especially as you add more Agents or microservices.
  • Real-time updates – Webhooks push changes as they happen, ensuring your feature flags and experiments stay in sync with minimal lag.
  • Proven pattern – Many enterprise Optimizely customers follow the Redis + Webhook approach for reliability and performance reasons.

Summary

A centralized datafile service is essential for consistent, high-performing use of the Optimizely Feature Experimentation Agent. At a high level, complete the following steps to create a centralized datafile service:

  • Implement a webhook in Optimizely that triggers on datafile changes.
  • Fetch and store the datafile in a Redis backplane.
  • Configure your Agents to read from Redis.

This approach ensures near real-time updates, consistent data, and a simpler path to scale. If your environment heavily relies on AWS S3 and does not need instant changes, you may consider a custom S3-based solution, but for most use cases, Redis and webhooks are the most straightforward and robust.