The Optimizely Agent REST API offers consolidated and simplified endpoints for accessing all the functionality of the Optimizely Full Stack SDKs.
A typical production installation of Optimizely Agent runs two or more service instances behind a load balancer or proxy. The service can be run in a Docker container, on Kubernetes through Helm, or installed from source. See Setup Optimizely Agent for instructions on how to run Optimizely Agent.
Here are some of the top reasons to consider using Optimizely Agent:
If you already separate some of your logic into services that might need to access the Optimizely decision APIs, we recommend using Optimizely Agent.
The images below compare implementation styles in a service-oriented architecture.
First, without Optimizely Agent, showing six embedded SDK instances:
Now with Optimizely Agent: instead of installing the SDK six times, you create just one Optimizely instance, an HTTP API that every service can access as needed:
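For example, a service might request a feature decision from Agent over plain HTTP instead of linking an SDK. This is a minimal sketch assuming Agent's `/v1/activate` endpoint and `X-Optimizely-SDK-Key` header; the host, SDK key, feature key, and user attributes are placeholders:

```python
import requests

AGENT_URL = "http://optimizely-agent.internal:8080"  # placeholder: your Agent cluster or load balancer
HEADERS = {"X-Optimizely-SDK-Key": "YOUR_SDK_KEY"}   # placeholder SDK key

# Ask Agent for a feature decision on behalf of a user.
resp = requests.post(
    f"{AGENT_URL}/v1/activate",
    params={"featureKey": "checkout_redesign"},  # hypothetical feature key
    json={"userId": "user-123", "userAttributes": {"plan": "enterprise"}},
    headers=HEADERS,
)
resp.raise_for_status()

# Agent returns a list of decision objects.
for decision in resp.json():
    print(decision.get("featureKey"), decision.get("enabled"))
```

Any service in the diagram above can make the same call, so the decision logic lives in one place rather than in six SDK installations.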
If you want to deploy Optimizely Full Stack once, then roll out the single implementation across many teams, we recommend using Optimizely Agent.
By standardizing your teams' access to the Optimizely service, you can better enforce processes and implement governance around feature management and experimentation as a practice.
You do not want many SDK instances connecting to Optimizely's cloud service from every node in your application. Optimizely Agent centralizes your network connection: only one cluster of Agent instances connects to Optimizely for tasks like updating datafiles and dispatching events.
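As a sketch of that centralized event dispatch, a service could report a conversion to Agent's `/v1/track` endpoint and let the Agent cluster forward it to Optimizely; the host, event key, and tags below are hypothetical:

```python
import requests

# Report a conversion event; the Agent cluster forwards it to Optimizely,
# so individual services never open their own connections to the cloud.
resp = requests.post(
    "http://optimizely-agent.internal:8080/v1/track",  # placeholder host
    params={"eventKey": "purchase"},                   # hypothetical event key
    json={"userId": "user-123", "eventTags": {"revenue": 4200}},
    headers={"X-Optimizely-SDK-Key": "YOUR_SDK_KEY"},
)
resp.raise_for_status()
```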
You are using a programming language that is not supported by a native SDK, such as Elixir, Scala, or Perl. While you could build your own service using an Optimizely SDK of your choice, you can also customize the open-source Optimizely Agent to your needs without building the service layer on your own.
If your use case would not benefit greatly from Optimizely Agent, consider the reasons below not to use it and review Optimizely's many open-source SDKs instead.
If time to provide bucketing decisions is your primary concern, you may want to use an embedded Full Stack SDK rather than Optimizely Agent.
| Implementation Option | Decision Latency |
| --- | --- |
| Embedded SDK | microseconds |
| Optimizely Agent | milliseconds |
If your app is constructed as a monolith, embedded SDKs might be easier to install and a more natural fit for your application and development practices.
If you are looking for the fastest way to get a single team up and running with feature management and experimentation, embedding an SDK is the best option for you at first. You can always start using Optimizely Agent later, and it can even run alongside Optimizely Full Stack SDKs deployed in another part of your stack.
Optimizely Agent can scale to large decision/event tracking volumes with relatively low CPU/memory specs. For example, at Optimizely, we scaled our deployment to 740 clients with a cluster of 12 Agent instances, using 6 vCPUs and 12GB RAM. You will likely need to focus more on network bandwidth than compute power.
Any standard load balancer should let you route traffic across your Agent cluster. At Optimizely, we used an AWS Elastic Load Balancer (ELB) for our internal deployment, which allowed us to easily scale our Agent cluster as internal demand increased.
Each Agent instance maintains its own dedicated cache and persists an SDK instance for each SDK key your teams use. Agent instances automatically keep the datafile up to date for each SDK key, so you will eventually have consistency across the cluster. The datafile update rate can be set via the configuration value OPTIMIZELY_CLIENT_POLLINGINTERVAL (the default is 1 minute).
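To observe that eventual consistency, you could compare the datafile revision reported by each instance's `/v1/config` endpoint. This is a minimal sketch; the hostnames are placeholders, and the `revision` field comes from the OptimizelyConfig payload:

```python
import requests

# Compare the datafile revision cached by each Agent instance.
for host in ("agent-1.internal:8080", "agent-2.internal:8080"):  # placeholder hosts
    config = requests.get(
        f"http://{host}/v1/config",
        headers={"X-Optimizely-SDK-Key": "YOUR_SDK_KEY"},
    ).json()
    print(host, "datafile revision:", config.get("revision"))
```

Within one polling interval, all instances should converge on the same revision.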
Because SDKs are generally stateless, they should not need to share data. Optimizely plans to add a common backing data store, so we invite you to share your feedback through your technical support manager.
If you require strong consistency across datafiles, we recommend an active/passive deployment where all requests are made to a single vertically scaled host, with a passive, standby cluster available for high availability in the event of a failure.