Google Kubernetes Engine Gateway Collector Load Balancer

Use a Google Cloud Load Balancer to proxy traffic to your Gateway Collector.

Use a Kubernetes LoadBalancer Service to expose the Bindplane Gateway Collector to external clients on Google Kubernetes Engine (GKE). This guide provides production-ready examples for creating an external TCP load balancer that mirrors your existing ClusterIP Service, optional configuration for a static public IP, and an internal load balancer (ILB) for private access. Validation steps and troubleshooting guidance are included.

A LoadBalancer Service is preferable to a ClusterIP + Ingress configuration here because it can carry HTTP, gRPC, and raw TCP traffic alike. Traditional HTTP Ingress solutions operate at L7 and do not support L4 TCP load balancing.

Overview

Expose the Bindplane Gateway Collector to external senders on Google Kubernetes Engine (GKE) using a Kubernetes LoadBalancer Service. This creates a Google Cloud TCP Network Load Balancer that proxies traffic to the gateway on all required ports (OTLP gRPC/HTTP, Splunk HEC/TCP).

This guide builds on the default deployment, which includes a ClusterIP Service in the bindplane-agent namespace. You will create a second LoadBalancer Service that mirrors the same ports and selector for external traffic.

Assumptions

The Bindplane Gateway Collector is already deployed in the bindplane-agent namespace.

kubectl -n bindplane-agent get all
NAME                                               READY   STATUS    RESTARTS   AGE
pod/bindplane-gateway-agent-847d69c756-4nbmj       1/1     Running   0          52s
pod/bindplane-gateway-agent-847d69c756-4vjb5       1/1     Running   0          8d
pod/bindplane-gateway-agent-847d69c756-5tgnb       1/1     Running   0          22d
pod/bindplane-gateway-agent-847d69c756-pd4ld       1/1     Running   0          157m
pod/bindplane-gateway-agent-847d69c756-rghj4       1/1     Running   0          8d

NAME                                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
service/bindplane-gateway-agent            ClusterIP   10.4.226.118   <none>        4317/TCP,4318/TCP   694d

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/bindplane-gateway-agent        8/8     8            8           561d
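
To confirm the selector and ports that the new Service must mirror, you can inspect the existing ClusterIP Service (a quick check, assuming the default service name shown above):

# Print the selector and port list of the existing ClusterIP Service
kubectl -n bindplane-agent get svc bindplane-gateway-agent \
  -o jsonpath='{.spec.selector}{"\n"}{.spec.ports}{"\n"}'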

Deployment

1) Create an external LoadBalancer Service

Create a new service that matches the ports and labels of the gateway collector deployment, with type: LoadBalancer. This provisions a Google Cloud external TCP load balancer that forwards each of the listed ports.

# bindplane-gateway-agent-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: bindplane-gateway-agent-lb
  namespace: bindplane-agent
  labels:
    app.kubernetes.io/name: bindplane-agent
    app.kubernetes.io/component: gateway
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  sessionAffinity: None
  selector:
    app.kubernetes.io/name: bindplane-agent
    app.kubernetes.io/component: gateway
  ports:
  - name: otlp-grpc
    appProtocol: grpc
    protocol: TCP
    port: 4317
    targetPort: 4317
  - name: otlp-http
    appProtocol: http
    protocol: TCP
    port: 4318
    targetPort: 4318
  - name: splunk-tcp
    appProtocol: tcp
    protocol: TCP
    port: 9997
    targetPort: 9997
  - name: splunk-hec
    appProtocol: tcp
    protocol: TCP
    port: 8088
    targetPort: 8088

  • The externalTrafficPolicy: Cluster setting is recommended for even traffic distribution across nodes. If you need to preserve the original client IP, set externalTrafficPolicy: Local and ensure enough pods run on the nodes receiving traffic to pass health checks.

Apply the service:

kubectl apply -f bindplane-gateway-agent-lb.yaml

1a) Optional: Use a static external IP

Reserve a regional static IP in the same region as your GKE cluster, then reference it via spec.loadBalancerIP.

# Replace REGION
gcloud compute addresses create bindplane-gateway-public --region=REGION
gcloud compute addresses describe bindplane-gateway-public --region=REGION --format='get(address)'

Add the IP to the service and apply:

spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10
  externalTrafficPolicy: Cluster
  ...
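
Then re-apply the manifest so the load balancer uses the reserved address (same file as above):

kubectl apply -f bindplane-gateway-agent-lb.yaml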

1b) Optional: Create an internal load balancer (ILB)

For private access inside your VPC, annotate the service to use an internal TCP load balancer.

apiVersion: v1
kind: Service
metadata:
  name: bindplane-gateway-agent-ilb
  namespace: bindplane-agent
  labels:
    app.kubernetes.io/name: bindplane-agent
    app.kubernetes.io/component: gateway
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app.kubernetes.io/name: bindplane-agent
    app.kubernetes.io/component: gateway
  ports:
  - name: otlp-grpc
    protocol: TCP
    port: 4317
    targetPort: 4317
  - name: otlp-http
    protocol: TCP
    port: 4318
    targetPort: 4318
  - name: splunk-tcp
    protocol: TCP
    port: 9997
    targetPort: 9997
  - name: splunk-hec
    protocol: TCP
    port: 8088
    targetPort: 8088

Optionally specify a reserved internal IP from your subnet via loadBalancerIP.
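
As a sketch, assuming your cluster's subnet is named SUBNET and 10.128.0.50 is a free address in it (both placeholders), you could reserve the internal address and reference it in the ILB Service:

# Reserve a regional static internal IP in the cluster's subnet
gcloud compute addresses create bindplane-gateway-internal \
  --region=REGION --subnet=SUBNET --addresses=10.128.0.50

Then set it on the ILB Service before applying:

spec:
  type: LoadBalancer
  loadBalancerIP: 10.128.0.50
  ...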

2) Verify provisioning

Wait for the EXTERNAL-IP (or internal IP) to be assigned:

kubectl get svc -n bindplane-agent bindplane-gateway-agent-lb

Example output once ready:

NAME                         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                             AGE
bindplane-gateway-agent-lb   LoadBalancer   10.12.34.56    203.0.113.10    4317:32xxx/TCP,4318:32yyy/TCP,9997:32zzz/TCP,8088:32aaa/TCP   1m
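
If the EXTERNAL-IP stays <pending>, inspect the Service events for provisioning errors (for example, exhausted quota or a loadBalancerIP that cannot be reserved):

kubectl -n bindplane-agent describe svc bindplane-gateway-agent-lb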

3) Test connectivity

Simple TCP checks from a client on the Internet (or from within the VPC for the ILB):

nc -vz 203.0.113.10 4317   # OTLP gRPC
nc -vz 203.0.113.10 4318   # OTLP HTTP
nc -vz 203.0.113.10 9997   # Splunk TCP
nc -vz 203.0.113.10 8088   # Splunk HEC

For functional tests, point an OpenTelemetry SDK or collector at 203.0.113.10:4317 (gRPC) or 203.0.113.10:4318 (HTTP). For Splunk HEC, you will need a valid token, and TLS if your configuration requires it.
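
For example, a minimal smoke test of the OTLP HTTP and Splunk HEC endpoints might look like the following; the HEC token is a placeholder, and both commands assume plain HTTP (switch to https and supply certificates if TLS is enabled on the gateway):

# Empty-but-valid OTLP/HTTP trace export; a 200 response confirms the receiver is reachable
curl -s -o /dev/null -w '%{http_code}\n' \
  -X POST http://203.0.113.10:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d '{"resourceSpans":[]}'

# Splunk HEC test event; replace YOUR-HEC-TOKEN with a token your gateway accepts
curl -s -X POST http://203.0.113.10:8088/services/collector/event \
  -H 'Authorization: Splunk YOUR-HEC-TOKEN' \
  -d '{"event": "bindplane gateway connectivity test"}'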

Notes and troubleshooting

  • GKE creates firewall rules and health checks automatically. If you use NetworkPolicy or custom firewall rules, ensure NodePort and health-check traffic is permitted; see the NetworkPolicy sketch after this list.

  • Multiple ports are supported on a single Service; GKE configures the load balancer's forwarding rules to cover each listed port.

  • If using externalTrafficPolicy: Local, ensure there is at least one ready pod on nodes receiving traffic, otherwise health checks may fail and the LB will not route to that node.

  • Keep the original ClusterIP service for in-cluster traffic and service discovery; the LoadBalancer service is for external/VPC ingress.
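
As a sketch, the following NetworkPolicy (the name is illustrative) allows ingress to the gateway pods on the load-balanced ports when a default-deny policy is in place; if you also restrict source CIDRs, remember to include Google's published health-check ranges:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-ingress        # illustrative name
  namespace: bindplane-agent
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: bindplane-agent
      app.kubernetes.io/component: gateway
  policyTypes:
  - Ingress
  ingress:
  - ports:                           # allow the load-balanced ports from any source
    - protocol: TCP
      port: 4317
    - protocol: TCP
      port: 4318
    - protocol: TCP
      port: 9997
    - protocol: TCP
      port: 8088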

Reference: Exposed ports

The LoadBalancer service should mirror your ClusterIP ports and targets so both in-cluster and external clients reach the same endpoints.
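
A quick way to confirm the two Services stay in sync (assuming the default names used in this guide):

# Compare the port lists of the ClusterIP and LoadBalancer Services
kubectl -n bindplane-agent get svc bindplane-gateway-agent bindplane-gateway-agent-lb \
  -o custom-columns='NAME:.metadata.name,PORTS:.spec.ports[*].port'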

Name         Port   Protocol   Purpose
otlp-grpc    4317   TCP        OpenTelemetry OTLP gRPC
otlp-http    4318   TCP        OpenTelemetry OTLP HTTP
splunk-tcp   9997   TCP        Splunk forwarder (TCP)
splunk-hec   8088   TCP        Splunk HEC
