# Google SecOps Pipelines

<figure><img src="https://3570577618-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FDBruzp1JKFyzeaBOInfR%2Fuploads%2FIYkENroOIMUWCuMAqKdk%2Fimage.png?alt=media&#x26;token=9e2485ca-c3e9-4633-9449-0323aab61ade" alt=""><figcaption></figcaption></figure>

### Overview

Bindplane's SecOps Pipelines let you create, configure, and manage [Data Processing Pipelines](https://docs.cloud.google.com/chronicle/docs/ingestion/data-processing-pipeline) directly in Bindplane. These pipelines apply custom OpenTelemetry processors to your log data after it reaches Google SecOps but before parsing and ingestion. You can easily transform, enrich, filter, and redact your SecOps data through Bindplane's interface, all without the need to write raw OTel configurations or manage agent deployments.

### Key Benefits

* **No agent management**: Pipelines automatically run on your Google SecOps data
* **Simplified configuration**: Use Bindplane's visual interface instead of editing raw OTel configurations and OTTL statements
* **Native integration**: Access your pipelines directly from your Google SecOps instance via "Open in Bindplane" links
* **Pre-ingestion processing**: Transform data before it's fully ingested into SecOps

### Important Limitations

SecOps Pipelines have some constraints compared to traditional Bindplane configurations:

* **Single destination only**: All data is exclusively sent to SecOps
* **10 processor limit**: Maximum of 10 processors per pipeline
* **Limited processor types**: Only the Bindplane processors built on [OpenTelemetry](https://opentelemetry.io/docs/collector/components/processor/)'s [transform](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor#transform-processor), [filter](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/filterprocessor#filter-processor), and [redaction](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/redactionprocessor#redaction-processor) processors are supported at this time
  * See the list of supported processors [here](https://docs.cloud.google.com/chronicle/docs/ingestion/data-processing-pipeline#configure-processors)
  * See the list of all Bindplane processors [here](https://docs.bindplane.com/integrations/processors)
* **No connector support:** [Connectors](https://docs.bindplane.com/integrations/connectors) are not supported in SecOps Pipelines at this time
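
Under the hood, these supported processor families correspond to standard OpenTelemetry collector processors. As a rough illustration of what each family does (you don't write this YAML yourself; Bindplane's interface generates the equivalent configuration, and the specific statements below are hypothetical examples):

```yaml
processors:
  # transform: modify fields with OTTL statements
  transform:
    log_statements:
      - context: log
        statements:
          - set(attributes["environment"], "prod")
  # filter: drop log records matching an OTTL condition
  filter:
    logs:
      log_record:
        - IsMatch(body, ".*debug.*")
  # redaction: mask attribute values matching blocked patterns
  redaction:
    blocked_values:
      - "\\d{3}-\\d{2}-\\d{4}"  # e.g. a US SSN-shaped value
```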

### When to Use SecOps Pipelines vs. a Bindplane Configuration

SecOps Pipelines are a streamlined subset of Bindplane's full capabilities, scoped specifically for in-flight processing on data already flowing into Google SecOps. A full Bindplane configuration extends beyond that, enabling collection from any source, delivery to any destination, and access to the complete set of processors, routing logic, and more.

**Use SecOps Pipelines when:**

* You want to configure processing on data being sent to Google SecOps
* Your processing needs are limited to transformations, filtering, and redaction, and don't require complex processors or connectors
* You want to avoid managing agents and infrastructure

**Use Bindplane configurations when:**

* You need to send data to multiple destinations (e.g., Google SecOps + Cloud Storage + ClickHouse)
* You require advanced processors like resource detection, batching, or sampling
* You need complex routing logic with multiple processor nodes
* You need more than 10 processors in your pipeline

### Connecting the Google SecOps Integration

{% hint style="warning" %}
It is highly recommended to connect a given Google SecOps tenant to only one Bindplane Project
{% endhint %}

#### Prerequisites

* A Google SecOps instance with the "Data Processing Pipeline Preview" enabled
  * Contact your Google SecOps Account Manager for more information and to get access to this preview
* A supported Bindplane Plan
  * [Bindplane Enterprise](https://docs.bindplane.com/plans-and-pricing/enterprise)
  * [Bindplane (Google Edition)](https://docs.bindplane.com/plans-and-pricing/google-edition#bindplane-google-edition)
  * [Bindplane Enterprise (Google Edition)](https://docs.bindplane.com/plans-and-pricing/google-edition#bindplane-enterprise-google-edition)

#### Setup

<figure><img src="https://3570577618-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FDBruzp1JKFyzeaBOInfR%2Fuploads%2FC3Aunx6UpvLGRnHwcCHY%2Fimage.png?alt=media&#x26;token=1be72b16-f686-4d6f-8429-09bd1e2507e5" alt=""><figcaption></figcaption></figure>

1. Navigate to your Bindplane Project Settings page
2. Scroll down to the Integrations section and click **Connect**
3. Provide details about your SecOps instance:
   * Customer ID
   * GCP Project Number
4. Configure an Authentication Method
   1. *Service Account JSON*\
      \
      The Service Account JSON authentication method requires providing the JSON key for a service account residing in the same GCP Project as your Google SecOps instance. The service account must have permissions according to [Google's Documentation](https://docs.cloud.google.com/chronicle/docs/ingestion/data-processing-pipeline#prerequisites).<br>
   2. *Workload Identity Federation (WIF)*\
      \
      WIF authentication allows you to authenticate the Google SecOps Integration without providing raw credentials. This authentication method is only supported in Bindplane Cloud. The following documentation provides instructions on how to set up WIF auth.\
      [How to Connect the Google SecOps Integration with WIF Auth](https://docs.bindplane.com/how-to-guides/google-secops/connect-the-google-secops-integration-with-wif-auth)<br>
5. Once connected, the **SecOps Pipelines** tab will appear in Bindplane

### Working with SecOps Pipelines

#### Creating a Pipeline

Create a new SecOps Pipeline by going to the **SecOps Pipelines** page, clicking the **Create SecOps Pipeline** button, and filling out your desired name and description.

#### Managing Sources

SecOps Pipelines work with data sources (log types, ingestion methods, feeds) in your SecOps instance. Each data source is referred to as a "Stream" and can be added by clicking the **Add Stream** button and completing the dialog.

Clicking the node of an existing stream in your pipeline allows you to edit or delete that stream.

To save the changes you've made to your pipeline, click the **Start Rollout** button.

#### Configuring Processors

{% hint style="info" %}
Not all Bindplane processors are supported in SecOps Pipelines. [Read more](https://docs.cloud.google.com/chronicle/docs/ingestion/data-processing-pipeline#configure-processors)
{% endhint %}

Configure processors as you normally would in Bindplane by clicking the processor node in the center of your pipeline.

To save the changes you've made to your pipeline, click the **Start Rollout** button.

#### Managing Multiple Log Types

If you need different processing for different log types, you must create a separate SecOps Pipeline for each. A pipeline has only one processor configuration, and it applies to all sources within that pipeline.

#### Accessing from Google SecOps

From your Google SecOps instance, you can access your pipelines directly:

1. Navigate to **Settings -> Data Processing**
2. Click a Data Pipeline
3. Click **Open in Bindplane** to open the associated SecOps Pipeline in Bindplane

### Common Pitfalls

SecOps Pipelines behave differently from standard Bindplane configurations under the hood. Even if the interface feels familiar, there are a few differences that can be confusing at first.

**Logs Ingested through the Backstory API (gRPC) are not processed by SecOps Pipelines**

If you want to process logs being ingested through [the Backstory Ingestion API (gRPC)](https://docs.cloud.google.com/chronicle/docs/reference/ingestion-api) with SecOps Pipelines, you will need to import the logs using [the new Chronicle API Ingestion Methods](https://docs.cloud.google.com/chronicle/docs/reference/ingestion-methods).

**Sample Logs May Not Reflect Active Pipeline Data**

When viewing a snapshot of a pipeline's logs, only logs that have been ingested from the configured sources **after** the pipeline was created will be displayed. If no matching logs have been ingested yet, Bindplane will automatically fall back to displaying a sample of logs with the same log types from your existing Google SecOps logs so you can still build and test your processor configuration.

Importantly, logs ingested through the Backstory API (gRPC) may appear in this fallback sample, but, as described above, they are not processed by the pipeline.

> **Note:** When this fallback is active, an info banner will appear at the top of the snapshot experience to let you know. These sample logs are **not** being actively processed by the pipeline; they are provided solely to help you validate your processor logic before live data begins flowing.

Keep this in mind when testing early: the logs visible in the UI may not be the ones your pipeline is actually acting on yet.

***

**Bindplane Displays Google SecOps-Formatted Logs, Not OTel Fields**

SecOps Pipelines process data using OpenTelemetry processors behind the scenes. As a result, logs pass through a conversion layer: from SecOps format → OTel format → back to SecOps format after processing.

The Bindplane UI always shows the **final Google SecOps-formatted log**, which includes:

* Log Entry Time
* Collection Time
* Namespace
* Body (string)
* Labels
  * A map of keys to Label objects
    * Label objects consist of a string `value` and a boolean `rbacEnabled`
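
As a sketch, a single SecOps-formatted log shown in the UI has roughly this shape (field names as listed above; the exact rendering and timestamp formats may vary):

```yaml
log_entry_time: "2024-05-01T12:00:00Z"   # hypothetical values throughout
collection_time: "2024-05-01T12:00:05Z"
namespace: "production"
body: "<raw log line as a string>"
labels:
  myLabel:              # one Label object per key
    value: "some-value"
    rbacEnabled: false
```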

OTel-specific fields, such as structured key-value map bodies, attributes, and resource attributes, are not visible in the UI. However, **processing that reads or modifies these fields does work correctly**. The results just won't be directly visible in the log preview; you'll see the downstream effect once the data has been converted back into SecOps format.

For example, if a user sets the attribute `myKey` using the Add Fields processor, the attribute will not be visible in the log snapshot. However, any processing logic performed on the `myKey` attribute (filtering, transformation, redaction) will still execute correctly. Only the resulting Google SecOps-formatted logs are displayed, and those do not include attributes or resource attributes.
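
As a hypothetical sketch in raw OTel YAML (Bindplane's processor UI generates the equivalent for you), the `myKey` attribute set in one step can drive a filter in the next, even though it never appears in the snapshot view:

```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # set a working attribute; invisible in the SecOps-formatted preview
          - set(attributes["myKey"], "drop-me") where IsMatch(body, ".*heartbeat.*")
  filter:
    logs:
      log_record:
        # this condition still evaluates against the attribute set above
        - attributes["myKey"] == "drop-me"
```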

***

**Referencing Google SecOps-Specific Fields Requires Special OTTL Syntax**

Google SecOps log fields like namespace and labels don't map directly to standard OTel field paths. To reference these fields when configuring processors, use the following syntax:

<table><thead><tr><th width="235">Field</th><th>OTTL Path</th></tr></thead><tbody><tr><td>Log Entry Time</td><td><code>attributes["log_entry_time"]</code></td></tr><tr><td>Collection Time</td><td><code>attributes["collection_time"]</code></td></tr><tr><td>Namespace</td><td><code>attributes["environment_namespace"]</code></td></tr><tr><td>Label value</td><td><code>attributes["labels.&#x3C;labelName>.value"]</code></td></tr><tr><td>Label RBAC enabled</td><td><code>attributes["labels.&#x3C;labelName>.rbac_enabled"]</code></td></tr></tbody></table>

For example, to reference a label named `myLabel`, you would use `attributes["labels.myLabel.value"]` for its string value or `attributes["labels.myLabel.rbac_enabled"]` for its boolean RBAC flag.
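
Putting the paths together, a transform processor could use them like this (sketched here as raw OTTL statements; in Bindplane you enter the paths in the processor's form fields, and the label name `myLabel` is a hypothetical example):

```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # overwrite the value of a SecOps label named "myLabel"
          - set(attributes["labels.myLabel.value"], "REDACTED")
          # default the namespace field when it is unset
          - set(attributes["environment_namespace"], "default") where attributes["environment_namespace"] == nil
```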

### Additional Resources

* [Google SecOps Data Processing Pipeline Documentation](https://docs.cloud.google.com/chronicle/docs/ingestion/data-processing-pipeline)
