
The availability of features may depend on your plan type. Contact your Customer Success Manager if you have any questions.


Event batching for the Ruby SDK

How the Optimizely Feature Experimentation Ruby SDK uses the event processor to batch impressions and conversion events into a single payload before sending it to Optimizely.

The Optimizely Feature Experimentation Ruby SDK batches decision and conversion events into a single payload before sending it to Optimizely. This is achieved through an SDK component called the event processor.

Event batching reduces the number of outbound requests to Optimizely Feature Experimentation, depending on how you define, configure, and use the event processor. Fewer requests mean less network traffic for the same number of decision and conversion events tracked.

In the Ruby SDK, BatchEventProcessor provides an implementation of the EventProcessor interface and batches events. You can control batching with two parameters:

  • Batch size – Defines the number of events that are batched together before sending to Optimizely Feature Experimentation.
  • Flush interval – Defines the amount of time after which any batched events should be sent to Optimizely Feature Experimentation.

An event consisting of the batched payload is sent as soon as either the batch size reaches its specified limit or the flush interval elapses. BatchEventProcessor options are described in more detail below.
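The interaction of the two triggers can be sketched in plain Ruby. This is an illustration of the flush rule only, not the SDK's internal implementation; TinyBatcher and its method names are made up for the example:

```ruby
# Minimal illustration of the two flush triggers: a batch is flushed when it
# reaches batch_size, or when flush_interval_ms has elapsed since the last flush.
class TinyBatcher
  attr_reader :flushed

  def initialize(batch_size:, flush_interval_ms:)
    @batch_size = batch_size
    @flush_interval_ms = flush_interval_ms
    @buffer = []
    @last_flush = Time.now
    @flushed = []
  end

  def add(event)
    @buffer << event
    flush if @buffer.size >= @batch_size || elapsed_ms >= @flush_interval_ms
  end

  def flush
    return if @buffer.empty?
    @flushed << @buffer # one payload per batch
    @buffer = []
    @last_flush = Time.now
  end

  private

  def elapsed_ms
    (Time.now - @last_flush) * 1000
  end
end

batcher = TinyBatcher.new(batch_size: 3, flush_interval_ms: 60_000)
5.times { |i| batcher.add("event_#{i}") }
batcher.flush # flush the remainder, as close() does in the real SDK
# batcher.flushed => [["event_0", "event_1", "event_2"], ["event_3", "event_4"]]
```

In the real SDK, the flush interval is tracked by a background consumer thread rather than checked on each add, but the outcome is the same: whichever limit is hit first produces a payload.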

Basic example

require 'optimizely'
require 'optimizely/optimizely_factory'

# Initialize an Optimizely client
optimizely_instance = Optimizely::OptimizelyFactory.default_instance(
  'put_your_sdk_key_here'
)

By default, batch size is 10 and flush interval is 30 seconds.

Advanced example

Set the batch size and flush interval using BatchEventProcessor's constructor.

require 'optimizely'
require 'optimizely/event/batch_event_processor'

# Create an event dispatcher for the processor to use
event_dispatcher = Optimizely::EventDispatcher.new

# Initialize BatchEventProcessor
event_processor = Optimizely::BatchEventProcessor.new(
  event_dispatcher: event_dispatcher,
  batch_size: 50,
  flush_interval: 1000
)

# Initialize an Optimizely client
optimizely_client = Optimizely::Project.new(
  sdk_key: '<your sdk key>',
  event_processor: event_processor
)

❗️

Warning

The maximum payload size is 3.5 MB. Optimizely rejects requests whose batch payload exceeds this limit with a 400 Bad Request response.

This size limit comes from the Optimizely Events API, which Feature Experimentation uses to send data to Optimizely.

The most common cause of a large payload size is a high batch size. If your payloads exceed the size limit, try configuring a smaller batch size.
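As a rough pre-flight sanity check while tuning batch size, you can estimate a batch's serialized size against the 3.5 MB limit. This is a sketch: payload_within_limit? and the sample batch shape are illustrative, not part of the SDK.

```ruby
require 'json'

# Events API payload limit stated in the warning above (3.5 MB)
MAX_PAYLOAD_BYTES = (3.5 * 1024 * 1024).to_i

# Rough estimate: serialize the batch and compare its byte size to the limit.
def payload_within_limit?(batch)
  batch.to_json.bytesize <= MAX_PAYLOAD_BYTES
end

small_batch = Array.new(10) { |i| { visitor_id: "user_#{i}", events: [] } }
payload_within_limit?(small_batch) # => true
```

If a representative batch comes in near the limit, lowering batch_size is the simplest fix, since payload size scales roughly linearly with the number of batched events.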

BatchEventProcessor

BatchEventProcessor is an implementation of EventProcessor where events are batched. The class maintains a single consumer thread that pulls events off of the queue and buffers them for either a configured batch size or a maximum duration before the resulting LogEvent is sent to the EventDispatcher and NotificationCenter.

The following properties can be used to customize the BatchEventProcessor configuration.

| Property | Default value | Description |
| --- | --- | --- |
| event_queue | 1000 | SizedQueue.new(100) or Queue.new. Queues individual events to be batched and dispatched by the executor. |
| event_dispatcher | nil | Used to dispatch the event payload to Optimizely Feature Experimentation. |
| batch_size | 10 | The maximum number of events to batch before dispatching. Once this number is reached, all queued events are flushed and sent to Optimizely Feature Experimentation. |
| flush_interval | 30000 | Maximum time, in milliseconds, to wait before batching and dispatching events. |
| notification_center | nil | Notification center instance used to trigger any notifications. |

For more information, see Initialize the Ruby SDK.

Side effects

The table lists other Optimizely Feature Experimentation functionality that may be triggered by using this class.

| Functionality | Description |
| --- | --- |
| LogEvent | Whenever the event processor produces a batch of events, a LogEvent object is created using EventFactory. It contains the batch of conversion and decision events. This object is dispatched using the provided event dispatcher and is also sent to the notification subscribers. |
| Notification listeners | Flush invokes the LOG_EVENT notification listener if this listener is subscribed to. |

Register a LogEvent listener

To register a LogEvent notification listener:

optimizely_client.notification_center.add_notification_listener(
  Optimizely::NotificationCenter::NOTIFICATION_TYPES[:LOG_EVENT]
) do |*_args|
  puts 'Notified!'
end

LogEvent

The LogEvent object is created using EventFactory. It represents the batch of decision and conversion events the SDK sends to the Optimizely Feature Experimentation backend.

| Object | Type | Description |
| --- | --- | --- |
| http_verb (required, non-null) | String | The HTTP verb to use when dispatching the log event. Can be GET or POST. |
| url (required, non-null) | String | URL to dispatch the log event to. |
| params (required, non-null) | EventBatch | Contains all the information for every batched event, including the list of visitors, which contains UserEvent. |
| headers (required) | Hash | Request headers. |

Close Optimizely Feature Experimentation on application exit

If you enable event batching, you must call the close method, optimizely.close(), before exiting. This ensures that queued events are flushed as soon as possible to avoid data loss.

Warning

Because the Optimizely client maintains a buffer of queued events, you must call close() on the Optimizely Feature Experimentation instance before shutting down your application or whenever dereferencing the instance.

| Method | Description |
| --- | --- |
| close() | Stops all timers and flushes the event queue. This method also stops any timers scheduled by the datafile manager. Note: Optimizely recommends connecting this method to a kill signal for the running process. |
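One common pattern for connecting close() to process shutdown is an at_exit hook plus signal traps that turn kill signals into a normal exit. The sketch below uses a stub object in place of the real client so the wiring is self-contained; with a real client you would call optimizely_client.close the same way:

```ruby
# Stub standing in for an initialized Optimizely client with batching enabled.
class StubClient
  attr_reader :closed

  def initialize
    @closed = false
  end

  def close
    @closed = true # the real close() flushes the event queue and stops timers
  end
end

optimizely_client = StubClient.new

# Flush queued events on normal interpreter exit...
at_exit { optimizely_client.close unless optimizely_client.closed }

# ...and convert kill signals into a normal exit so the at_exit hook runs.
# (Code inside trap handlers is restricted, so keep them minimal.)
%w[INT TERM].each do |sig|
  Signal.trap(sig) { exit }
end
```

Calling exit from the trap handler (rather than closing the client directly inside it) avoids doing non-trivial work in a signal context while still guaranteeing the flush happens via at_exit.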