The Optimizely Ruby SDK batches decision and conversion events into a single payload before sending it to Optimizely. This is achieved through an SDK component called the event processor.
Event batching reduces the number of outbound requests to Optimizely, depending on how you define, configure, and use the event processor. The result is less network traffic for the same number of decision and conversion events tracked.
In the Ruby SDK, BatchEventProcessor provides an implementation of the EventProcessor interface and batches events. You can control batching based on two parameters:
- Batch size: Defines the number of events that are batched together before sending to Optimizely.
- Flush interval: Defines the amount of time after which any batched events should be sent to Optimizely.
The batched payload is sent as soon as either the batch size or the flush interval reaches its specified limit, whichever comes first.
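To illustrate how the two limits interact, here is a deliberately simplified batcher sketch. It is not the SDK's implementation (the real BatchEventProcessor uses a background consumer thread); this version checks both flush conditions each time an event is processed.

```ruby
# Illustrative only: flushes when the buffer reaches batch_size or when
# flush_interval (in seconds here, for simplicity) has elapsed.
class TinyBatcher
  attr_reader :flushed

  def initialize(batch_size:, flush_interval:)
    @batch_size = batch_size
    @flush_interval = flush_interval
    @buffer = []
    @deadline = Time.now + flush_interval
    @flushed = []
  end

  def process(event)
    @buffer << event
    flush if @buffer.size >= @batch_size || Time.now >= @deadline
  end

  def flush
    return if @buffer.empty?
    @flushed << @buffer # stands in for dispatching one batched payload
    @buffer = []
    @deadline = Time.now + @flush_interval
  end
end
```

With a batch size of 3, the fourth and fifth events wait in the buffer until either the size limit or the interval triggers the next flush.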
BatchEventProcessor options are described in more detail below.
```ruby
require 'optimizely'
require 'optimizely/optimizely_factory'

# Initialize an Optimizely client
optimizely_instance = Optimizely::OptimizelyFactory.default_instance(
  'put_your_sdk_key_here'
)
```
By default, batch size is 10 and flush interval is 30 seconds.
Set the batch size and flush interval using BatchEventProcessor's constructor.
```ruby
require 'optimizely'
require 'optimizely/event/batch_event_processor'

# Initialize BatchEventProcessor
event_processor = Optimizely::BatchEventProcessor.new(
  event_dispatcher: event_dispatcher,
  batch_size: 50,
  flush_interval: 1000
)

# Initialize an Optimizely client
optimizely_client = Optimizely::Project.new(
  datafile,
  event_dispatcher,
  logger,
  error_handler,
  skip_json_validation,
  user_profile_service,
  sdk_key,
  config_manager,
  notification_center,
  event_processor
)
```
BatchEventProcessor is an implementation of EventProcessor in which events are batched. The class maintains a single consumer thread that pulls events off of the queue and buffers them until either the configured batch size is reached or the maximum duration elapses, at which point the resulting LogEvent is sent to the event dispatcher.
The following properties can be used to customize the BatchEventProcessor:

| Property | Default value | Description |
| --- | --- | --- |
| event_queue | SizedQueue.new(100) or Queue.new | Queue in which events are buffered before batching. |
| event_dispatcher | | Used to dispatch the event payload to Optimizely. |
| batch_size | 10 | The maximum number of events to batch before dispatching. Once this number is reached, all queued events are flushed and sent to Optimizely. |
| flush_interval | 30,000 | The maximum time to wait, in milliseconds, before batching and dispatching events. |
| notification_center | | Notification center instance used to trigger any notifications. |
For more information, see Initialize SDK.
Using this class may trigger other Optimizely functionality:
- Whenever the event processor produces a batch of events, it creates a LogEvent object using EventFactory.
- A flush invokes the LOG_EVENT notification listener, if one is subscribed.
To register a LogEvent notification listener:
```ruby
callback_reference = lambda do |*args|
  puts 'Notified!'
end

optimizely_client.notification_center.add_notification_listener(
  Optimizely::NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT],
  callback_reference
)
```
A LogEvent object is created using EventFactory. It represents the batch of decision and conversion events sent to the Optimizely backend, and has the following fields:

| Field | Description |
| --- | --- |
| http_verb | The HTTP verb to use when dispatching the log event. It can be GET or POST. |
| url | The URL to which the log event is dispatched. |
| params | Contains all the information about every batched event, including the list of visitors, each of which contains UserEvent data. |
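A LOG_EVENT listener receives the LogEvent, so its fields can be inspected before the batch is dispatched. The sketch below assumes the listener is passed the LogEvent as its argument and that the accessors match the field names above; the payload shape inside params is also an assumption for illustration.

```ruby
# Hedged sketch: log the verb, URL, and visitor count of each batch.
# Accessor names (http_verb, url, params) and the :visitors key are
# assumptions based on the LogEvent field table.
log_event_callback = lambda do |log_event|
  visitors = Array(log_event.params[:visitors])
  puts "#{log_event.http_verb} #{log_event.url} (#{visitors.length} visitors)"
end
```

Register it the same way as the listener shown above, using the LOG_EVENT notification type.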
If you enable event batching, make sure that you call the close method, optimizely.close(), prior to exiting. This ensures that queued events are flushed as soon as possible to avoid any data loss.

Because the Optimizely client maintains a buffer of queued events, you must call close() on the Optimizely instance before shutting down your application or whenever you dereference the instance.
Stops all timers and flushes the event queue. This method also stops any timers running in the datafile manager.
Note: We recommend that you connect this method to a kill signal for the running process.
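One way to wire this up is sketched below. The helper name install_shutdown_hooks is hypothetical, and optimizely_client is assumed to be an initialized client. Note that Ruby restricts mutex-based work inside trap handlers, so the handlers only convert signals into a normal exit and let at_exit perform the flush.

```ruby
# Hedged sketch: flush queued events on process shutdown.
# `install_shutdown_hooks` is a hypothetical helper, not part of the SDK.
def install_shutdown_hooks(optimizely_client)
  # Flush queued events when the process exits normally or via `exit`.
  at_exit { optimizely_client.close }

  # Convert termination signals into a normal exit so at_exit hooks run.
  # (Mutex-based work such as close() can't run directly in a trap handler.)
  %w[INT TERM].each do |sig|
    Signal.trap(sig) { exit }
  end
end
```

Call install_shutdown_hooks(optimizely_client) once after initializing the client.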