The Optimizely Full Stack Python SDK lets you batch events and includes options to set a maximum batch size and flush interval timeout. Event batching reduces network traffic for the same number of decision and conversion events tracked.
Event batching works by placing decision events from Activate and Is Feature Enabled, and conversion events from Track, into a queue. The queue is drained when it either reaches its maximum size limit or when the flush interval elapses.
By default, event batching is disabled in Python SDK 3.3.0. Follow along to learn how to take advantage of event batching in the Python SDK.
Event batching works with both out-of-the-box and custom event dispatchers.
Make sure that you aren't sending personally identifiable information (PII) to Optimizely. The event batching process doesn't remove PII from events.
Event batching is enabled by using BatchEventProcessor. Two main options configure event batching: batch_size and flush_interval. You can pass both options when creating an instance of BatchEventProcessor, then pass that instance during Optimizely client creation. When using BatchEventProcessor, events are held in a queue until either:
- The number of events reaches the defined batch_size.
- The oldest event has been in the queue for longer than the defined flush_interval, which is specified in seconds. The queue is then flushed and all queued events are sent to Optimizely in a single network request.
- A new datafile revision is received. This occurs only when live datafile updates are enabled. See Enable automatic datafile updates.
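To make the size and interval rules concrete, here is a minimal, hypothetical model of the draining logic. This is an illustration only, not the SDK's actual implementation; MiniBatcher and its names are invented for this sketch.

```python
import time

class MiniBatcher:
    """Toy model of batch_size / flush_interval draining; not the real SDK."""

    def __init__(self, dispatch, batch_size=10, flush_interval=30.0):
        self.dispatch = dispatch        # callable that receives one batch (a list) per flush
        self.batch_size = batch_size
        self.flush_interval = flush_interval
        self.queue = []
        self.oldest = None              # enqueue time of the oldest queued event

    def process(self, event, now=None):
        now = time.time() if now is None else now
        if not self.queue:
            self.oldest = now
        self.queue.append(event)
        if len(self.queue) >= self.batch_size:
            self.flush()                # drain: size limit reached
        elif now - self.oldest >= self.flush_interval:
            self.flush()                # drain: oldest event waited too long

    def flush(self):
        if self.queue:
            self.dispatch(self.queue)   # one network request per batch
            self.queue = []
            self.oldest = None

# With batch_size=3, seven events produce two full batches and one queued leftover:
batches = []
batcher = MiniBatcher(batches.append, batch_size=3, flush_interval=60.0)
for i in range(7):
    batcher.process(i, now=0.0)
# → batches == [[0, 1, 2], [3, 4, 5]], one event (6) still queued
```

The real BatchEventProcessor additionally drains the queue on a background consumer thread, so the flush interval fires even when no new events arrive.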
```python
from optimizely import optimizely
from optimizely import event_dispatcher as optimizely_event_dispatcher
from optimizely.event import event_processor

# Set the event dispatcher.
# You can reference your own implementation of an event dispatcher here.
event_dispatcher = optimizely_event_dispatcher.EventDispatcher

# Create an instance of BatchEventProcessor.
#
# In this example we set the batch size to 15 events
# and the flush interval to 50 seconds.
# Setting start_on_init starts the consumer
# thread so it can start receiving events.
#
# See the table below for an explanation of these
# and other configuration options.
batch_processor = event_processor.BatchEventProcessor(
    event_dispatcher,
    batch_size=15,
    flush_interval=50,
    start_on_init=True,
)

# Create the Optimizely client and pass in the instance
# of BatchEventProcessor to enable batching.
optimizely_client = optimizely.Optimizely(
    sdk_key='<Your SDK Key here>',
    datafile='<Your datafile here>',
    event_processor=batch_processor
)
```
The table below defines these and other options that you can use to configure the BatchEventProcessor.

| Parameter | Description |
| --- | --- |
| event_dispatcher | An event handler to manage network calls. No default value; this parameter is required. |
| logger | A logger implementation to log issues. |
| flush_interval | The maximum duration in seconds that an event can exist in the queue before being flushed. |
| batch_size | The maximum number of events to hold in the queue. Once this number is reached, all queued events are flushed and sent to Optimizely. |
| start_on_init | Boolean which, if set to True, starts the consumer thread to begin receiving events when the BatchEventProcessor is initialized. The default value is False, in which case the consumer thread is not ready to receive events; you can start it at any time by calling start(). |
| timeout_interval | Number representing the time interval in seconds to wait before joining the consumer thread. |
| event_queue | Component that allows you to accumulate events until they are dispatched. |
For more information, see Initialize SDK.
The maximum payload size is 3.5 MB. If the resulting batch payload exceeds this limit, requests are rejected with a 400 (Bad Request) response code.
The table lists other Optimizely functionality that may be triggered by using this class.
Whenever the event processor produces a batch of events, a LogEvent object is created using the EventFactory.
The flush invokes the LOG_EVENT notification listener if one is subscribed.
To register a LogEvent notification listener:

```python
from optimizely.helpers import enums

def on_log_event(log_event):
    pass

optimizely_client.notification_center.add_notification_listener(
    enums.NotificationTypes.LOG_EVENT, on_log_event
)
```
The LogEvent object is created using the EventFactory. It represents the batch of decision and conversion events sent to the Optimizely backend.

| Property | Description |
| --- | --- |
| http_verb | The HTTP verb to use when dispatching the log event. It can be GET or POST. |
| url | The URL to which the log event is dispatched. |
| params | The EventBatch. It contains all the information regarding every batched event, including the list of visitors, each of which contains a UserEvent. |
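For illustration, a LOG_EVENT listener might inspect these properties before the batch is dispatched. The StubLogEvent class, the endpoint URL, and the payload below are invented stand-ins for this sketch; in practice the SDK passes its own LogEvent instance to the listener.

```python
class StubLogEvent:
    """Minimal stand-in mirroring the LogEvent properties described above."""

    def __init__(self, http_verb, url, params):
        self.http_verb = http_verb
        self.url = url
        self.params = params

def summarize_log_event(log_event):
    # Summarize the outgoing batch, e.g. for debugging or auditing.
    visitors = log_event.params.get('visitors', [])
    return '{} {} ({} visitors)'.format(
        log_event.http_verb, log_event.url, len(visitors))

# Hypothetical batch payload containing two visitors:
event = StubLogEvent(
    'POST',
    'https://example.invalid/v1/events',
    {'visitors': [{'visitor_id': 'user-a'}, {'visitor_id': 'user-b'}]},
)
print(summarize_log_event(event))
# → POST https://example.invalid/v1/events (2 visitors)
```

A real listener would receive one such object per flush, so it sees the whole batch at once rather than individual events.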
If you enable event batching, it's important that you call the stop method, batch_processor.stop(), prior to exiting. This ensures that queued events are flushed as soon as possible to avoid any data loss.
Because the BatchEventProcessor maintains a buffer of queued events, you must call stop() on the BatchEventProcessor instance before shutting down your application.
Stops and flushes the event queue.
Note: We recommend that you connect this method to a kill signal for the running process.
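One way to wire this up is with Python's standard signal and atexit modules. In this sketch, StubProcessor stands in for your real BatchEventProcessor so the wiring is runnable anywhere; in your application, register the real instance's stop method instead.

```python
import atexit
import signal
import sys

class StubProcessor:
    """Stand-in for BatchEventProcessor; only for this sketch."""

    def __init__(self):
        self.stopped = False

    def stop(self):
        # The real BatchEventProcessor.stop() flushes and stops the event queue.
        self.stopped = True

batch_processor = StubProcessor()  # replace with your real BatchEventProcessor

def handle_sigterm(signum, frame):
    # Raise SystemExit so atexit hooks (including stop) run on SIGTERM.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
atexit.register(batch_processor.stop)
```

With this wiring, a SIGTERM sent by a process manager triggers a clean interpreter shutdown, and atexit then calls stop() to flush any queued events before the process exits.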