Using Google SecOps with Bindplane Best Practices
This document outlines best practices for Bindplane's Google SecOps Destination. We will explain how to optimize API request payload sizes, how grouping logs by log type improves API performance, and how the Google SecOps gRPC and HTTP API endpoints compare.
We will walk through the components of a pipeline from left to right: Sources, Processors, and finally the Google SecOps Destination.
Sources
After logs arrive in Google SecOps, they are parsed. Google SecOps expects logs in a format that closely resembles the specified log type, so we recommend gathering all logs raw and unparsed. If your source offers an option for sending raw data, enable it. If there is no raw option, you may need to use a more generic source.
The following are examples of sources that support capturing raw logs (a minimal raw-collection sketch follows the list):
Windows Events (With Advanced → "Raw Logs" enabled)
Microsoft SQL Server
CSV
File
HTTP
TCP
UDP
Azure Event Hub (Using Log Format: Raw Option)
Kafka (Using Raw Option)
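To illustrate what "raw and unparsed" means at the OpenTelemetry collector layer underneath Bindplane, here is a minimal sketch of a File source that forwards each log line untouched. The file path is a placeholder assumption, not a recommended location:

```yaml
receivers:
  filelog:
    include: [ "/var/log/myapp/*.log" ]  # placeholder path
    start_at: beginning
    # No parsing operators are defined, so each line is kept as the
    # raw record body, leaving all parsing to Google SecOps.
```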
Processors
Google SecOps Standardization Processor
Best practice is to use the Google SecOps Standardization Processor. It lets you define a log type, which determines the parser SecOps uses. The processor also makes it simpler to specify namespaces and ingestion labels for custom UDM events.
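Conceptually, the processor attaches metadata attributes that the SecOps destination reads when building requests. A rough equivalent using the OpenTelemetry transform processor is sketched below; the attribute names and values are assumptions for illustration, not the processor's documented contract — in Bindplane the Standardization Processor manages this for you:

```yaml
processors:
  transform/secops_standardization:
    log_statements:
      - context: log
        statements:
          # Assumed attribute names, shown for illustration only.
          - set(attributes["chronicle_log_type"], "WINEVTLOG")
          - set(attributes["chronicle_namespace"], "corporate")
```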
Batch Processor
Batching by log type improves the performance of the SecOps API. For each source that feeds a SecOps destination, add a batch processor as the first processor in the pipeline. The goal of these settings is to maximize the payload of each SecOps API request while remaining under its 4 MB limit; sending fewer, fuller requests lowers the overhead on the API.
We recommend starting with the following batch settings, though it is best to tune them for each pipeline based on volume and message size. Note that for these settings, size refers to a count of log records, not a size in bytes (a configuration sketch follows the list):
A "Send Batch Size" of 1365
A "Send Batch Max Size" of 2048
A "Timeout" of 2s
Google SecOps Destination
The destination's batch settings limit the size of each batch it receives, splitting it into multiple requests if needed. Batching should happen before the destination/exporter in your configuration via a batch processor, preferably one batch processor per log type. If you don't have a batch processor configured, the exporter sends a request for each log as soon as it is received, which, depending on volume, can overwhelm the SecOps API and reduce your throughput.
To maximize what is sent per request, set the batch size limit for the destination/exporter to 4000000 bytes (to match the SecOps API request size limit of 4 MB) and set up batch processors after each source that feeds the destination; see Log Batch Creation Limits. You can configure the batch processors to produce batches larger than 4 MB, but latency is usually lower if you stay closer to the SecOps API limit.
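For reference, a sketch of what this can look like at the collector layer for the Chronicle exporter that backs the SecOps destination. The batch-limit field names below are assumptions and should be verified against your agent version's exporter documentation:

```yaml
exporters:
  chronicle:
    # Assumed field names; verify against your Bindplane agent's
    # Chronicle exporter documentation before use.
    batch_request_size_limit_http: 4000000  # bytes, matching the 4 MB API limit
    batch_log_count_limit_http: 2048        # records per request
```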
Payload Size Guidance
The Bindplane SecOps destination's default 'Batch limit' setting caps the payload at 4 MB (4096 KB) per API request in Bindplane versions 1.90.1 and later. This is a hard limit of the SecOps API and must be considered during batching to avoid the performance hit of re-batching. The goal of our batch processor settings is to keep the batch size as close to 4 MB as possible without exceeding it.
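As a rough worked example (the record sizes here are assumptions, not measured averages): a 'Send Batch Size' of 1365 fills a 4 MB request when log records average about 3 KB, since 4096 KB / 1365 ≈ 3 KB per record. If your records average closer to 1 KB, a 1365-record batch only reaches about 1.3 MB, so you could raise the batch size to push payloads nearer the limit.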
Setting the Consumers
Especially on busy systems, raising the 'Number of Consumers' in the SecOps destination's advanced settings will greatly increase performance. Recommendation: increase the number of consumers to 40 or 50 as a start, then monitor for success by watching for 'Deadline Exceeded' messages in the agent logs.
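This setting corresponds to the consumer count on the exporter's sending queue. A sketch, assuming the standard OpenTelemetry exporter queue options are exposed on the Chronicle exporter:

```yaml
exporters:
  chronicle:
    sending_queue:
      enabled: true
      num_consumers: 40  # parallel workers draining the queue to the SecOps API
      queue_size: 5000   # assumed capacity; tune for your volume
```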
Use HTTPS as the Export Destination
There are noticeable gains from switching to the 'https' (Data Plane API) destination endpoint. Why: it handles batches more intelligently and tolerates mixed ingestion labels better.
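In collector terms, this corresponds to the exporter's protocol selection. A sketch, assuming the exporter exposes a protocol field accepting 'grpc' and 'https':

```yaml
exporters:
  chronicle:
    protocol: https  # Data Plane API endpoint, instead of gRPC
```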
Try to Avoid Mixing Ingestion Labels
If using gRPC, each log entry should maintain consistent metadata across batches. Avoid mixing, within a single batch, logs with differing values for labels such as:
The silent host monitoring ingestion_source label
Namespaces that are unique
This helps ensure there are no problems sending the data to the SecOps API. It is not a hard limit, but an opportunity for tuning if a pipeline is underperforming. If mixing is unavoidable, that is okay; the goal is simply to minimize problems, and mixing alone won't break the pipeline.
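One way to keep metadata consistent is to give each log type its own pipeline, with its own standardization and batch processors, so every batch carries a single label set. A sketch where the receiver, processor, and pipeline names are illustrative:

```yaml
service:
  pipelines:
    logs/windows:
      receivers: [windowseventlog]
      processors: [transform/secops_windows, batch/windows]
      exporters: [chronicle]
    logs/linux:
      receivers: [filelog]
      processors: [transform/secops_linux, batch/linux]
      exporters: [chronicle]
```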