
Load testing Parseable with K6

· 6 min read

Integrating K6 with Kubernetes allows developers to run load tests in a scalable and distributed manner. By deploying K6 in a Kubernetes cluster, you can use Kubernetes orchestration capabilities to manage and distribute the load testing across multiple nodes. This setup ensures you can simulate real-world traffic and usage patterns more accurately, providing deeper insights into your application's performance under stress.


Before we dive into the setup, ensure you have the following tools installed and configured:

  • kubectl: A command-line tool for interacting with Kubernetes clusters.
  • Helm: A package manager for Kubernetes which helps you deploy complex applications quickly.

Configure these tools to point to your relevant Kubernetes cluster.


Setup MinIO

MinIO is a high-performance distributed object storage server for large-scale data infrastructure. It is compatible with Amazon S3 cloud storage service and easily integrates into your applications for storing and retrieving data.

MinIO serves as the backend object store for Parseable, allowing it to manage and store logs efficiently. Using MinIO in standalone mode simplifies our setup while providing the necessary capabilities.

Install MinIO:

helm repo add minio https://charts.min.io/
helm install --namespace minio --create-namespace --set "buckets[0].name=parseable,buckets[0].policy=none,buckets[0].purge=false,rootUser=minioadmin,rootPassword=minioadmin,replicas=1,persistence.enabled=false,resources.requests.memory=128Mi,mode=standalone" minio minio/minio

This installs MinIO in a new namespace called 'minio' (Kubernetes namespace and Helm release names must be lowercase). It sets up a bucket named 'parseable' and configures the MinIO server with basic root credentials. The persistence.enabled=false option disables persistent storage to keep this tutorial simple.

Access MinIO Console:

kubectl port-forward svc/minio-console -n minio 9001:9001

You can now access the MinIO console at http://localhost:9001. Log in using the credentials 'minioadmin' for both username and password, and you will see a bucket named 'parseable' already created.

Setup Parseable

Parseable supports a distributed mode of operation, which allows it to handle high throughput and large data volumes. Running Parseable in distributed mode across multiple nodes ensures high availability and improves performance by balancing load across servers. This setup is particularly beneficial for handling large-scale log data efficiently.

Configuration and Installation

Create the Configuration Secret:

First, create a secret file for Parseable configuration:

cat << EOF > parseable-env-secret
addr=0.0.0.0:8000
staging.dir=./staging
fs.dir=./data
s3.url=http://minio.minio.svc.cluster.local:9000
s3.access.key=minioadmin
s3.secret.key=minioadmin
s3.region=us-east-1
s3.bucket=parseable
username=admin
password=admin
EOF

This file sets the necessary environment variables for Parseable to connect to the MinIO object store and specifies local directories for staging and data storage.

Create Kubernetes Secret:

kubectl create ns parseable
kubectl create secret generic parseable-env-secret --from-env-file=parseable-env-secret -n parseable

These commands create a 'parseable' namespace and, using the configuration file from the earlier step, a Kubernetes secret within it. Secrets in Kubernetes allow you to store and manage sensitive information securely.
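Note that Kubernetes stores secret values base64-encoded, not encrypted. The round-trip below shows exactly what kubectl does with a value such as the MinIO password when writing and reading a secret:

```shell
# Secret values are stored base64-encoded (not encrypted) by default.
# This round-trip mirrors how kubectl encodes and decodes them:
ENCODED=$(printf '%s' 'minioadmin' | base64)
echo "$ENCODED"                      # bWluaW9hZG1pbg==
printf '%s' "$ENCODED" | base64 -d   # minioadmin
```

Because base64 is an encoding rather than encryption, treat access to secrets (and to etcd) as equivalent to access to the plaintext credentials.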

Install Parseable in High-Availability Mode:

helm repo add parseable https://charts.parseable.com
helm install parseable parseable/parseable -n parseable --set "parseable.local=false" --set "parseable.highAvailability.enabled=true"

This command installs Parseable in high-availability mode, distributing it across multiple nodes to handle large volumes of log data. The parseable.local=false setting tells Parseable to use the S3-compatible object store (our MinIO install) rather than local drive storage.
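If you prefer keeping configuration in version control over long --set strings, the same two flags can live in a values file. This is a sketch; the file name is arbitrary:

```shell
# Equivalent values file for the two --set flags used above:
cat << 'EOF' > parseable-values.yaml
parseable:
  local: false
  highAvailability:
    enabled: true
EOF
# Then install with:
#   helm install parseable parseable/parseable -n parseable -f parseable-values.yaml
```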

Verify Parseable Installation:

kubectl get pods -n parseable

This command lists all the pods in the 'parseable' namespace. In high-availability mode you should see separate pods for the different Parseable components (for example, ingestors and a querier); once they are all Running, the installation succeeded.

Access Parseable Console:

kubectl port-forward svc/parseable 8000:80 -n parseable

Open your browser and navigate to http://localhost:8000. Log in with 'admin' for both the username and password to access the Parseable console.
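Before moving on to load testing, you can push a single event through the ingest API by hand to confirm the data path works end to end. This is a sketch: the stream name 'smoketest' is an arbitrary example (Parseable creates the stream on first write), and it assumes the port-forward from the previous step is still running:

```shell
# Send one hand-rolled event to the Parseable ingest API.
PAYLOAD='[{"level":"info","message":"hello from curl"}]'
curl -su admin:admin --max-time 5 \
  -X POST http://localhost:8000/api/v1/ingest \
  -H 'X-P-Stream: smoketest' \
  -H 'Content-Type: application/json' \
  -d "$PAYLOAD" || echo "Parseable not reachable - is the port-forward running?"
```

If the request succeeds, a 'smoketest' stream with one event appears in the Parseable console.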

Setup Grafana

Grafana allows you to visualize and analyze metrics collected from various sources, providing insights into your systems' performance and health. Grafana enables you to monitor the resource utilization of each node in your Kubernetes cluster, helping you identify bottlenecks and optimize performance.

Installation Steps

Install Prometheus and Grafana:

kubectl create ns monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack -n monitoring

These commands add the Prometheus community chart repository and install Prometheus and Grafana in the 'monitoring' namespace. Prometheus is a monitoring system and time-series database that collects the metrics data Grafana visualizes.

Verify Grafana Installation:

kubectl get svc -n monitoring

This command lists all the services in the 'monitoring' namespace. Look for the 'monitoring-grafana' service to confirm that Grafana is running.

Access Grafana Dashboard:

kubectl port-forward svc/monitoring-grafana 3000:80 -n monitoring

Open your browser and navigate to http://localhost:3000. Log in with 'admin' as the username and 'prom-operator' as the password to access the Grafana dashboard.

Setup K6 for Load Testing

K6 helps you define and run complex load test scenarios to assess the performance of your applications. It provides a simple and efficient way to generate load, simulate real-world usage patterns, and identify performance bottlenecks. Its integration with Kubernetes makes it ideal for testing distributed systems like Parseable.

Installation and Configuration

Download Load Test Script:


Download our K6 script, 'load_batch_events.js', which loads batch events into Parseable.
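If the script isn't handy, a minimal stand-in with the same shape can be written locally. Everything specific here is an assumption to adapt: the URL points at an assumed in-cluster Parseable service, the stream name 'k6test' is arbitrary, and the credentials are the tutorial's default admin/admin:

```shell
# Minimal stand-in for load_batch_events.js (URL, stream name, and
# credentials are assumptions; the real script sends larger batches):
cat << 'EOF' > load_batch_events.js
import http from 'k6/http';
import { check } from 'k6';
import encoding from 'k6/encoding';

export const options = { vus: 10, duration: '1m' };

export default function () {
  // A small batch of events for the Parseable ingest API.
  const events = JSON.stringify([
    { level: 'info', message: 'k6 batch event 1' },
    { level: 'info', message: 'k6 batch event 2' },
  ]);
  const res = http.post(
    'http://parseable.parseable.svc.cluster.local/api/v1/ingest',
    events,
    {
      headers: {
        'Content-Type': 'application/json',
        'X-P-Stream': 'k6test',
        Authorization: 'Basic ' + encoding.b64encode('admin:admin'),
      },
    }
  );
  check(res, { 'ingest succeeded': (r) => r.status === 200 });
}
EOF
```

Tuning the vus and duration values in options is how you scale the generated load up or down.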

Install Grafana K6 Operator:

kubectl create ns k6
kubectl create configmap load-test --from-file=load_batch_events.js -n k6
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install k6-operator grafana/k6-operator

These commands create the 'k6' namespace, store the load test script in a ConfigMap, and install the K6 operator from the Grafana Helm repository, which facilitates running and managing K6 load tests on Kubernetes.

Apply the K6 TestRun:

kubectl apply -f k6-ing1.yaml -n k6

This YAML file defines a K6 TestRun CR targeting one Parseable ingestor node. The parallelism: 1 setting specifies a single K6 instance to run the test, and the nodeSelector ensures that the test runs on a specific node in the cluster. This command applies the K6 TestRun configuration to the 'k6' namespace, starting the load test on the specified node.
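For reference, a TestRun with the properties described above might look like the following sketch. The hostname label under nodeSelector is an assumption; replace it with a label from one of your own nodes:

```shell
# Hypothetical k6-ing1.yaml matching the description above. The hostname
# value is an assumption -- point it at one of your cluster's nodes.
cat << 'EOF' > k6-ing1.yaml
apiVersion: k6.io/v1alpha1
kind: TestRun
metadata:
  name: load-test-ing1
spec:
  parallelism: 1
  script:
    configMap:
      name: load-test
      file: load_batch_events.js
  runner:
    nodeSelector:
      kubernetes.io/hostname: worker-node-1
EOF
```

Raising parallelism spreads the same test across multiple K6 runner pods, which is how you scale the load generators themselves.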

Verify K6 TestRun:

kubectl get pods -n k6

This command lists all the pods in the 'k6' namespace. You should see the K6 pod running the load test.

Edit and Update K6 Configuration:

kubectl edit cm load-test -n k6

Use this command to edit the K6 ConfigMap if you need to modify the load test configuration, such as the number of virtual users (VUs), test duration, or event count.

Delete and Reapply K6 TestRun:

kubectl delete testrun load-test-ing1 -n k6
kubectl apply -f k6-ing1.yaml -n k6

These commands delete the existing K6 TestRun and reapply the updated configuration to restart the load test.


Congratulations! You've just completed the intricate yet rewarding task of setting up and load-testing Parseable in a distributed mode on Kubernetes with Helm and K6.

We'd love to hear how you are using this! Check out the Parseable demo site to explore further.
