Google GKE
Collect logs from Google Kubernetes Engine clusters
Collect and forward logs from Google GKE to Parseable.
Overview
Integrate Google GKE with Parseable to:
- Kubernetes Logs - Collect pod and container logs
- GCP Integration - Native Google Cloud support
- Autopilot Support - Works with GKE Autopilot
- Rich Metadata - Include GCP and K8s context
Prerequisites
- A running GKE cluster
- kubectl configured for the cluster
- Helm (recommended for Method 1)
- Parseable instance accessible from GKE (see the checks below)
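A quick sanity check for the last two prerequisites (a minimal sketch; my-gke-cluster, us-central1, and the Parseable URL are placeholders for your own values, and the liveness path assumes Parseable's standard health endpoint):

# Point kubectl at the cluster
gcloud container clusters get-credentials my-gke-cluster --region us-central1
kubectl get nodes

# Confirm Parseable is reachable from inside the cluster
kubectl run parseable-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://parseable.example.com:8000/api/v1/liveness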
Method 1: Fluent Bit DaemonSet
Deploy Fluent Bit as a DaemonSet to tail container logs from every node.
Install with Helm
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit \
  --namespace logging \
  --create-namespace \
  -f fluent-bit-values.yaml

Values File
config:
  inputs: |
    [INPUT]
        Name           tail
        Path           /var/log/containers/*.log
        Parser         cri
        Tag            kube.*
        Mem_Buf_Limit  5MB

  filters: |
    [FILTER]
        Name                kubernetes
        Match               kube.*
        Merge_Log           On
        Keep_Log            Off
        K8S-Logging.Parser  On

    [FILTER]
        Name   modify
        Match  *
        Add    cloud.provider gcp
        Add    cluster.name ${CLUSTER_NAME}

  outputs: |
    [OUTPUT]
        Name    http
        Match   *
        Host    parseable.example.com
        Port    8000
        URI     /api/v1/ingest
        Format  json
        Header  Authorization Basic YWRtaW46YWRtaW4=
        Header  X-P-Stream gke-logs
        tls     On

env:
  - name: CLUSTER_NAME
    value: "my-gke-cluster"

tolerations:
  - operator: Exists
    effect: NoSchedule

Method 2: Google Cloud Logging Export
Export from Cloud Logging to Pub/Sub, then to Parseable.
Create Log Sink
gcloud logging sinks create gke-to-pubsub \
  pubsub.googleapis.com/projects/PROJECT_ID/topics/gke-logs \
  --log-filter='resource.type="k8s_container"'

Then use the GCP Pub/Sub integration to forward the logs to Parseable.
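Cloud Logging writes to the topic through a sink-specific service account, which needs publish rights. A sketch, assuming the sink and topic names above:

# Find the sink's writer identity (already prefixed with serviceAccount:)
gcloud logging sinks describe gke-to-pubsub --format='value(writerIdentity)'

# Grant it publish rights on the topic, substituting the identity returned above
gcloud pubsub topics add-iam-policy-binding gke-logs \
  --member="WRITER_IDENTITY" \
  --role="roles/pubsub.publisher"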
Method 3: OpenTelemetry Collector
Collector Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
data:
  config.yaml: |
    receivers:
      filelog:
        include: [/var/log/containers/*.log]
        operators:
          - type: regex_parser
            regex: '^(?P<time>[^ ]+) (?P<dataset>stdout|stderr) (?P<logtag>[^ ]*) (?P<log>.*)$'
            timestamp:
              parse_from: attributes.time
              layout: '%Y-%m-%dT%H:%M:%S.%LZ'
    processors:
      k8sattributes:
        extract:
          metadata:
            - k8s.pod.name
            - k8s.namespace.name
            - k8s.deployment.name
      batch:
        timeout: 10s
    exporters:
      otlphttp:
        endpoint: "http://parseable:8000"
        headers:
          Authorization: "Basic YWRtaW46YWRtaW4="
          X-P-Stream: "gke-logs"
    service:
      pipelines:
        logs:
          receivers: [filelog]
          processors: [k8sattributes, batch]
          exporters: [otlphttp]

Workload Identity
Use Workload Identity so collectors can authenticate to Google Cloud APIs without long-lived service account keys.
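Workload Identity must be enabled on the cluster, and existing node pools need the GKE metadata server, before the bindings below take effect. A sketch, assuming the my-gke-cluster name from earlier and a node pool called default-pool:

# Enable Workload Identity on the cluster
gcloud container clusters update my-gke-cluster \
  --region us-central1 \
  --workload-pool=PROJECT_ID.svc.id.goog

# Roll the metadata server out to an existing node pool
gcloud container node-pools update default-pool \
  --cluster my-gke-cluster \
  --region us-central1 \
  --workload-metadata=GKE_METADATA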
# Create GCP service account
gcloud iam service-accounts create fluent-bit-sa

# Grant permissions
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:fluent-bit-sa@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/logging.viewer"

# Bind to Kubernetes service account
gcloud iam service-accounts add-iam-policy-binding \
  fluent-bit-sa@PROJECT_ID.iam.gserviceaccount.com \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:PROJECT_ID.svc.id.goog[logging/fluent-bit]"

Kubernetes Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
  annotations:
    iam.gke.io/gcp-service-account: fluent-bit-sa@PROJECT_ID.iam.gserviceaccount.com

GKE Autopilot
Autopilot clusters restrict privileged workloads and hostPath volumes, so DaemonSet-based log tailing generally won't work; use the Cloud Logging export method (Method 2) instead.
Best Practices
- Use Workload Identity - Secure credential management
- Add GCP Metadata - Include project, cluster info
- Filter Logs - Exclude noisy system namespaces if needed (see the sketch after this list)
- Monitor Resources - Watch collector resource usage
- Use Labels - Leverage GKE labels for filtering
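For the namespace filtering suggested above, a minimal sketch of a grep filter that could be appended to the filters section of the Fluent Bit values file (the record accessor path assumes the kubernetes filter has already enriched the records):

[FILTER]
    Name     grep
    Match    kube.*
    Exclude  $kubernetes['namespace_name'] ^(kube-system|kube-node-lease)$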
Troubleshooting
Permission Denied
- Verify the Workload Identity binding (see the check below)
- Check service account permissions
- Verify RBAC configuration
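To verify the binding, inspect the IAM policy on the Google service account (names as configured in the Workload Identity section):

gcloud iam service-accounts get-iam-policy \
  fluent-bit-sa@PROJECT_ID.iam.gserviceaccount.com

# Expect roles/iam.workloadIdentityUser bound to
# serviceAccount:PROJECT_ID.svc.id.goog[logging/fluent-bit]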
Missing Logs
- Check the collector pod logs (see the commands after this list)
- Verify log paths
- Check network policies
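A few commands that usually narrow this down (the label selector assumes the Helm chart's default labels; NODE_NAME is a placeholder):

# Tail the collector's own logs
kubectl logs -n logging -l app.kubernetes.io/name=fluent-bit --tail=50

# Confirm the DaemonSet is scheduled on every node
kubectl get daemonset -n logging

# Spot-check that container log files exist on a node
kubectl debug node/NODE_NAME -it --image=busybox -- \
  sh -c 'ls /host/var/log/containers | head'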
Next Steps
- Configure Azure AKS for multi-cloud
- Set up alerts for GKE events
- Create dashboards for cluster monitoring