AWS Security Lake
This destination is in Alpha stability.
Supported Types
Logs
The AWS Security Lake destination exports OCSF-formatted logs as Parquet files to an Amazon Security Lake S3 bucket, partitioned by account, region, and OCSF class. This destination requires the OCSF Standardization processor upstream in the pipeline.
Prerequisites
Logs must be transformed into OCSF format before reaching this destination. Add the OCSF Standardization processor upstream in your pipeline.
An AWS Security Lake custom source must be registered in your AWS account for each OCSF class you intend to export.
AWS credentials must be available to the collector via the standard AWS credential chain (environment variables, shared credentials file, or IAM role).
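For reference, a custom source can be registered with the AWS CLI. The flags and configuration shape below are illustrative assumptions (source name, event class, role ARN, and external ID are placeholders); verify them against the current `aws securitylake create-custom-log-source` reference before use:

```shell
# Illustrative sketch: register a custom source for the OCSF Authentication class.
# Flag names and the --configuration shape should be checked against the AWS CLI docs.
aws securitylake create-custom-log-source \
  --source-name my-auth-source \
  --event-classes AUTHENTICATION \
  --configuration '{
    "crawlerConfiguration": {"roleArn": "arn:aws:iam::123456789012:role/ExampleGlueRole"},
    "providerIdentity": {"externalId": "example-external-id", "principal": "123456789012"}
  }'
```

The custom source name you register here is what you later reference in the destination's custom_sources mapping.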
Configuration
region* (enum): The AWS region where the Security Lake S3 bucket resides.
s3_bucket* (string): The name of the Security Lake S3 bucket.
account_id* (string): The AWS account ID used in the S3 partition path.
ocsf_version* (enum): The OCSF schema version to use for Parquet output. Supported values: 1.0.0, 1.1.0, 1.2.0, 1.3.0.
custom_sources* (map): A mapping of custom source names registered in Security Lake to their integer OCSF class IDs. The key is the custom source name and the value is the OCSF class ID. At least one entry is required.
*required field
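A hypothetical configuration using these fields might look like the following (the bucket name, account ID, and source name are placeholders, and the exact file layout for a standalone destination is an assumption):

```yaml
# Hypothetical values; field names follow the table above.
region: us-east-1
s3_bucket: aws-security-data-lake-us-east-1-example
account_id: "123456789012"
ocsf_version: 1.3.0
custom_sources:
  my-auth-source: 3002  # OCSF Authentication class
```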
Advanced Configuration
role_arn (string): An optional IAM role ARN to assume for S3 writes.
endpoint (string): An optional custom endpoint for S3 writes. Overrides the default AWS endpoint; generally not needed.
timeout (int, default: 5): Timeout in seconds for S3 write operations.
batch_size (int, default: 10000): Number of events to buffer before flushing to S3.
batch_timeout (int, default: 5): Maximum time in minutes to wait before flushing to S3, regardless of batch size.
batch_size applies across all OCSF classes handled by a single destination instance. To buffer independently per class ID, configure a separate destination instance for each class, each with a single entry in custom_sources.
Retry and Queuing
This destination supports retry, sending queue, and persistent queue settings.
How It Works
The destination processes each batch of OCSF-formatted logs through the following steps:
1. Route by class - The class_uid field is read from each log record's body and looked up against the configured custom sources. Records with a class_uid that doesn't match any configured source are skipped with a warning (see Dropped Records).
2. Partition - Matching records are grouped into partitions by custom source name, OCSF class ID, and event day (derived from the time field in the log body). Each partition becomes a separate Parquet file.
3. Serialize to Parquet - Records within each partition are sorted by time ascending, then serialized to Parquet using the OCSF schema for the configured ocsf_version, with ZSTD compression.
4. Upload to S3 - Each Parquet file is uploaded to the Security Lake bucket under a key that encodes the custom source name, region, account ID, and event day.
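The routing and partitioning steps can be sketched in Python. This is a minimal illustration, not the destination's implementation: it assumes the OCSF time field is epoch milliseconds, and the build_s3_key layout follows the Security Lake custom-source partition convention (ext/<source>/region=.../accountId=.../eventDay=YYYYMMDD/), which should be verified against the AWS documentation:

```python
import uuid
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical config: custom source name -> OCSF class ID.
CUSTOM_SOURCES = {"my-auth-source": 3002, "my-file-source": 1001}

def route_and_partition(records):
    """Group OCSF records into (source, class_uid, event_day) partitions.

    Records whose class_uid matches no configured custom source are
    skipped with a warning, mirroring the Dropped Records behavior.
    """
    by_class = {cid: name for name, cid in CUSTOM_SOURCES.items()}
    partitions = defaultdict(list)
    for rec in records:
        cid = rec.get("class_uid")
        source = by_class.get(cid)
        if source is None:
            print(f"warning: no custom source for class_uid={cid}; skipping")
            continue
        # Event day derived from the record's `time` field (assumed epoch ms).
        day = datetime.fromtimestamp(
            rec["time"] / 1000, tz=timezone.utc
        ).strftime("%Y%m%d")
        partitions[(source, cid, day)].append(rec)
    # Each partition is sorted by time ascending before Parquet serialization.
    for recs in partitions.values():
        recs.sort(key=lambda r: r["time"])
    return partitions

def build_s3_key(source, region, account_id, event_day):
    """Assumed Security Lake custom-source object key layout."""
    return (f"ext/{source}/region={region}/accountId={account_id}/"
            f"eventDay={event_day}/{uuid.uuid4()}.parquet")
```

Each partition produced by route_and_partition would then be serialized to one Parquet file and uploaded under the key returned by build_s3_key.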
Dropped Records
Log records that are not in valid OCSF format are dropped, and a warning is logged for each dropped record. Ensure the OCSF Standardization processor is upstream in the pipeline to avoid data loss.
Credentials
With AWS Security Lake, you must provide AWS credentials that allow s3:PutObject access to the Security Lake S3 bucket. There are two ways to configure this:
Using the AWS CLI to set up a credentials profile.
Specifying environment variables with access keys.
The AWS CLI getting started guide explains how to install it for your current user or for all users.
The Bindplane OTel Collector runs as root by default, so the AWS CLI and its credentials file should be set up under the root account on the collector system.
Environment Variables
Alternatively, AWS environment variables can be specified to override a credentials file. You can modify the collector's environment variables by configuring a systemd override. Run sudo systemctl edit observiq-otel-collector and add your access key, secret key, and region:
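For example, the drop-in override might contain the following (placeholder values shown; set the region to match your Security Lake bucket):

```ini
[Service]
Environment=AWS_ACCESS_KEY_ID=<your-access-key-id>
Environment=AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
Environment=AWS_REGION=us-east-1
```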
After making that change, reload systemd and restart the collector service.
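On a systemd host, that reload and restart looks like:

```shell
sudo systemctl daemon-reload
sudo systemctl restart observiq-otel-collector
```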
Example Configuration
Web Interface

Standalone Destination