
Optimizely Agent

Optimizely Agent is a standalone, open-source, highly available microservice that provides major benefits over the Optimizely Full Stack SDKs in certain use cases.

The Agent REST API offers consolidated and simplified endpoints for accessing all the functionality of the Optimizely Full Stack SDKs.
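As a minimal sketch of how a service talks to Agent over REST, the snippet below builds (but does not send) a request to Agent's config endpoint. The host, port, and SDK key are placeholder values for illustration; verify the endpoint path against the Agent API reference for your version.

```python
import urllib.request

# Hypothetical deployment values -- replace with your Agent host and SDK key.
AGENT_URL = "http://localhost:8080"
SDK_KEY = "YOUR_SDK_KEY"

# Agent identifies the Optimizely project by the SDK key sent in a request
# header, so one Agent cluster can serve many projects.
req = urllib.request.Request(
    url=f"{AGENT_URL}/v1/config",
    headers={"X-Optimizely-Sdk-Key": SDK_KEY},
    method="GET",
)
# To actually fetch the project configuration: urllib.request.urlopen(req)
```

Because the project is selected per-request via the header, no client-side SDK initialization is needed.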

Example implementation

A typical production installation of Optimizely Agent runs two or more service instances behind a load balancer or proxy. The service can run as a Docker container, on Kubernetes through Helm, or be installed from source. See Setup Optimizely Agent for instructions on running Optimizely Agent.
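As a sketch, one common way to try a single Agent instance locally is via Docker (the image name is assumed from Optimizely's public Docker Hub listing; verify the image and tag for your environment):

```shell
# Run one Agent instance, exposing the default API port 8080.
docker run -p 8080:8080 optimizely/agent
```

In production you would run several such instances behind your load balancer, as described above.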


Reasons to use Optimizely Agent

Here are some of the top reasons to consider using Optimizely Agent:

1. You follow a Service-Oriented Architecture (SOA)

If you already separate some of your logic into services that might need to access the Optimizely decision APIs, we recommend using Optimizely Agent.

The images below compare implementation styles in a service-oriented architecture.

First, without Optimizely Agent, showing six embedded SDK instances:

A diagram showing the use of SDKs installed on each service in a service-oriented architecture

Now with Optimizely Agent. Instead of installing the SDK six times, you create a single Optimizely instance: an HTTP API that every service can access as needed:
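To make the "one HTTP API for every service" idea concrete, the sketch below builds a decision request against Agent's decide endpoint. The endpoint path, header name, and body fields follow the Agent API as documented; the host, SDK key, flag key, and user ID are placeholder assumptions.

```python
import json
import urllib.request

# Hypothetical deployment values -- replace with your own.
AGENT_URL = "http://localhost:8080"
SDK_KEY = "YOUR_SDK_KEY"

def build_decide_request(user_id, flag_key):
    """Build (but do not send) a flag-decision request to Agent."""
    body = json.dumps({"userId": user_id, "userAttributes": {}}).encode("utf-8")
    return urllib.request.Request(
        url=f"{AGENT_URL}/v1/decide?keys={flag_key}",
        data=body,
        headers={
            "X-Optimizely-Sdk-Key": SDK_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_decide_request("user-123", "my_flag")
# Any service in any language can make this same call:
# urllib.request.urlopen(req) returns the decision as JSON.
```

Every service issues the same plain HTTP call, so no per-service SDK installation or datafile management is required.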

A diagram showing the use of Optimizely Agent in a single service

2. You want standardized access across teams

If you want to deploy Optimizely Full Stack once, then roll out the single implementation across many teams, we recommend using Optimizely Agent.

By standardizing your teams' access to the Optimizely service, you can better enforce processes and implement governance around feature management and experimentation as a practice.

A diagram showing the central and standardized access to the Optimizely Agent service across an arbitrary number of teams.

3. You want networking centralization

You do not want many SDK instances connecting to Optimizely's cloud service from every node in your application. Optimizely Agent centralizes your network connections: only one cluster of Agent instances connects to Optimizely for tasks like updating datafiles and dispatching events.

4. Your preferred programming language is not offered as a native SDK

You are using a programming language that is not supported by a native SDK, such as Elixir, Scala, or Perl. While you could create your own service using an Optimizely SDK of your choice, you can also customize the open-source Optimizely Agent to your needs without building the service layer on your own.

Reasons not to use Optimizely Agent

If your use case would not benefit greatly from Optimizely Agent, consider the reasons below not to use it and review Optimizely's many open-source SDKs instead.

1. You worry about latency

If the time to return bucketing decisions is your primary concern, you may want to use an embedded Full Stack SDK rather than Optimizely Agent.

Implementation option | Decision latency
Embedded SDK          | microseconds
Optimizely Agent      | milliseconds

2. You wrote your app using a monolithic architecture

If your app is constructed as a monolith, embedded SDKs might be easier to install and a more natural fit for your application and development practices.

3. You worry about initial team velocity

If you are looking for the fastest way to get a single team up and running with deploying feature management and experimentation, embedding an SDK is the best option for you at first. You can always start using Optimizely Agent later, and it can even be used alongside Optimizely Full Stack SDKs running in another part of your stack.

Important information about Agent

Scaling

Optimizely Agent can scale to large decision/event tracking volumes with relatively low CPU/memory specs. For example, at Optimizely, we scaled our deployment to 740 clients with a cluster of 12 Agent instances, using 6 vCPUs and 12 GB RAM. You will likely need to focus more on network bandwidth than compute power.

Load balancer

Any standard load balancer should let you route traffic across your Agent cluster. At Optimizely, we used an AWS Elastic Load Balancer (ELB) for our internal deployment, which allowed us to easily scale the Agent cluster as internal demands increased.

Datafile synchronization across Agent instances

Agent offers eventual consistency, rather than strong consistency, across datafiles.

Each Agent instance maintains a dedicated, separate cache and persists an SDK instance for each SDK key your team uses. Agent instances automatically keep datafiles up to date for each SDK key instance, so the cluster eventually becomes consistent. The datafile update rate can be set via the configuration value OPTIMIZELY_CLIENT_POLLINGINTERVAL (the default is 1 minute).
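For example, assuming a standard Agent deployment configured through environment variables, you could shorten the polling interval to 30 seconds (the duration format is assumed; verify it against the configuration reference for your Agent version):

```shell
export OPTIMIZELY_CLIENT_POLLINGINTERVAL=30s
```

A shorter interval tightens the window of inconsistency between instances at the cost of more frequent datafile fetches.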

Because SDKs are generally stateless, they should not need to share data. Optimizely plans to add a common backing data store, so we invite you to share your feedback through your technical support manager.

If you require strong consistency across datafiles, we recommend an active/passive deployment where all requests are made to a single vertically scaled host, with a passive, standby cluster available for high availability in the event of a failure.

