Component Back Pressure Behavior
When downstream components slow down, back pressure propagates upstream, with each component slowing its intake to match. This is how pipelines avoid dropping data.
Components that buffer internally break this chain. When their internal buffers fill, they drop data silently, with no signal upstream to slow down. This is especially dangerous with UDP or Windows event sources, where data dropped inside the pipeline cannot be recovered from the source.
Components That Break Back Pressure
These components buffer data internally and can silently drop data under load.
Connectors
Span to Metrics
Caches span data and flushes derived metrics on an interval (default 5s).
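If the OpenTelemetry contrib spanmetrics connector underlies this component (an assumption; verify field names against your distribution), the flush interval and span-attribute cache are bounded like this:

```yaml
connectors:
  spanmetrics:
    metrics_flush_interval: 5s    # matches the 5s default described above
    dimensions_cache_size: 1000   # bounds cached span-attribute combinations
```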
Processors
Batch
Buffers telemetry and sends on a size threshold (8192 items) or a timer (200ms). No upper bound on batch size by default (send_batch_max_size: 0); set it to cap memory growth under sustained load.
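These knobs map directly onto the stock OpenTelemetry batch processor configuration; the cap value below is illustrative:

```yaml
processors:
  batch:
    send_batch_size: 8192        # flush once this many items accumulate...
    timeout: 200ms               # ...or when this timer fires, whichever is first
    send_batch_max_size: 10000   # hard per-batch cap; the default 0 disables it
```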
Deduplicate Logs
Holds logs in memory over a time window (default 10s). Memory scales with the number of unique log bodies, so high-cardinality logs defeat deduplication and can cause unbounded memory growth under load. Logs still held at restart are lost, and their deduplicated aggregate is never emitted.
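Assuming this maps to the observIQ logdedup processor (the field names below come from that processor and should be checked against your distribution), a minimal configuration looks like:

```yaml
processors:
  logdedup:
    interval: 10s                   # dedup window; held logs flush at each tick
    log_count_attribute: log_count  # attribute recording how many copies were merged
```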
Count Telemetry
Accumulates counters and emits them on an interval (default 60s). Up to one full window of counts is lost on restart or crash.
Compute Metric Statistics
Accumulates data over an interval (default 60s). Up to one full window of statistics is lost on restart or crash.
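Both interval-based components expose the same trade-off: the flush interval is also the worst-case loss window. A purely illustrative sketch (the processor IDs and fields below are hypothetical, not a real schema):

```yaml
processors:
  count_telemetry:       # hypothetical ID for the Count Telemetry processor
    interval: 60s        # up to 60s of counts lost on a crash
  metric_statistics:     # hypothetical ID for Compute Metric Statistics
    interval: 30s        # halving the window halves the worst-case loss
```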
All other processors and connectors are synchronous; they process inline and maintain back pressure.
Placement Guidelines
Filter early. Reduce volume before async components to minimize buffer pressure (a combined pipeline sketch follows this list).
For UDP/Windows sources, ensure sufficient upstream buffering or accept potential drops during spikes.
For high-volume pipelines (100k+ events/sec), consider a gateway architecture for durable buffering. See more.
Configure destination queuing to match your durability requirements. See more.
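Putting these guidelines together, here is a sketch of a pipeline that filters before buffering and enables the exporter's sending queue; the receiver choice, filter rule, endpoint, and sizes are illustrative, not prescriptive:

```yaml
receivers:
  udplog:                          # UDP source: drops here are unrecoverable
    listen_address: "0.0.0.0:5140"

processors:
  filter:
    logs:
      log_record:
        - 'severity_number < SEVERITY_NUMBER_INFO'   # shed low-value volume early
  batch:
    send_batch_max_size: 10000     # cap batch growth under sustained load

exporters:
  otlp:
    endpoint: gateway.example.com:4317   # hypothetical gateway address
    sending_queue:
      enabled: true
      queue_size: 5000             # size to your durability requirements

service:
  pipelines:
    logs:
      receivers: [udplog]
      processors: [filter, batch]  # filter first, batch immediately before export
      exporters: [otlp]
```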
For throughput and latency impact of processors, see Processor Impact on Collector Performance.