Layer 2: Shipping to Collectors

Shipping is the process by which layers 1 and 2 (local collection and shipping) interact to guarantee the integrity and confidentiality of events sent from the log and metric generators to The Platform.

Events are removed from the local collection layer's (Forwarder) storage only once the collector actively responds indicating that it has successfully received them. This happens in real time, so the layer 1 storage acts as a first local buffer for the events generated at the original sources.
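The acknowledgment-gated removal described above can be sketched as follows. This is a minimal illustration, not The Platform's actual implementation; the `Forwarder` class and its method names are hypothetical. The key property it demonstrates is that an event leaves the local buffer only after an explicit acknowledgment arrives from the collector.

```python
import collections


class Forwarder:
    """Hypothetical sketch of the layer 1 ack-gated local buffer.

    Events stay in local storage until the collector explicitly
    acknowledges receipt; only then is the local copy dropped.
    """

    def __init__(self):
        self._buffer = collections.OrderedDict()  # event_id -> payload
        self._next_id = 0

    def enqueue(self, payload):
        """Store a newly generated event in the local buffer."""
        event_id = self._next_id
        self._next_id += 1
        self._buffer[event_id] = payload
        return event_id

    def pending(self):
        """Events awaiting acknowledgment (these would be sent over the tunnel)."""
        return list(self._buffer.items())

    def ack(self, event_id):
        """Collector confirmed receipt: it is now safe to drop the local copy."""
        self._buffer.pop(event_id, None)


# An unacknowledged event survives in the buffer; an acknowledged one is removed.
fwd = Forwarder()
first = fwd.enqueue({"msg": "login ok"})
second = fwd.enqueue({"msg": "disk full"})
fwd.ack(first)
assert [eid for eid, _ in fwd.pending()] == [second]
```

Because removal is tied to the acknowledgment rather than to the send, a crash of the network link or the collector leaves the events in place, and the forwarder can simply retransmit them (at-least-once delivery).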

The element of layer 2 that maintains the encrypted tunnel with the forwarders, and that is responsible for receiving events and certifying that they have been transferred correctly, is called the Collector. Collectors also buffer events before transmitting them to the next layer (processing), so that events are not lost if a problem occurs in the processing layers. This two-layer buffering system is a robust measure to guarantee the availability of log and metric flows in monitored environments with massive event generation, where it is critical not to lose events. Events are needed not only for anomaly detection and problem analytics, but also for regulatory compliance and future forensic actions, in which all past events must be recoverable to reconstruct scenarios and carry out incident investigations for your customers.
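The two-stage handoff can be sketched like this: the collector acknowledges the forwarder as soon as it has buffered an event, but drops its own copy only once the processing layer confirms consumption. The `Collector` class and its methods below are hypothetical names chosen for illustration, assuming the acknowledgment semantics described above.

```python
class Collector:
    """Hypothetical sketch of the layer 2 buffer.

    Receipt is acknowledged to the forwarder immediately, but the event is
    kept locally until the processing layer confirms it has consumed it.
    """

    def __init__(self):
        self._buffer = {}  # event_id -> payload

    def receive(self, event_id, payload):
        """Buffer an incoming event and acknowledge it to the forwarder."""
        self._buffer[event_id] = payload
        return True  # ack sent back over the encrypted tunnel

    def flush_to_processing(self, process):
        """Hand buffered events to the processing layer.

        `process` is a callable standing in for the processing layer; an
        event is dropped from the buffer only if `process` accepts it.
        """
        delivered = []
        for event_id, payload in list(self._buffer.items()):
            if process(payload):  # processing-layer acknowledgment
                del self._buffer[event_id]
                delivered.append(event_id)
        return delivered


# Events rejected by the processing layer stay buffered for a later retry.
col = Collector()
col.receive(1, "a")
col.receive(2, "b")
accepted = col.flush_to_processing(lambda payload: payload != "b")
assert accepted == [1]
```

If the processing layer goes down entirely, `flush_to_processing` delivers nothing, the buffer keeps every event, and the flow resumes without loss once processing recovers, which is the availability guarantee the two-layer design aims for.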

The collector layer provides autoscaling capacity to absorb large spikes in event reception: The Platform itself decides how many collectors to deploy in parallel to cope with the volume of events received from the forwarders. This design strategy, in which elements in charge of the same role are deployed in the same layer with autoscaling capacity, guarantees that the platform can serve highly distributed environments integrating a multitude of devices from infrastructures with massive event generation, from light telemetry events based on serial data to large events of several kilobytes with thousands of fields per event.
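As a rough illustration of this kind of autoscaling decision, one common approach is to size the pool from the observed ingest rate divided by per-collector capacity, bounded by a replica floor and ceiling. The function, its parameters, and the capacity figures below are assumptions for the sketch, not The Platform's documented sizing rules.

```python
import math


def collectors_needed(events_per_sec, per_collector_capacity=50_000,
                      min_replicas=2, max_replicas=64):
    """Hypothetical sizing rule for the collector pool.

    Deploy enough parallel collectors to absorb the observed ingest rate,
    never fewer than `min_replicas` (for redundancy) and never more than
    `max_replicas` (to cap resource usage).
    """
    raw = math.ceil(events_per_sec / per_collector_capacity)
    return max(min_replicas, min(max_replicas, raw))


# Low traffic still keeps the redundancy floor of two collectors.
assert collectors_needed(10_000) == 2
# A spike of 240k events/s needs ceil(240000 / 50000) = 5 collectors.
assert collectors_needed(240_000) == 5
```

A real controller would also smooth the input rate over a window and add hysteresis so the pool does not flap on short bursts, but the core decision reduces to a calculation of this shape.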
