AWS S3
Use AWS S3 as the storage backend for Parseable
Configure AWS S3 as the storage backend for Parseable to store and query your observability data.
Overview
Using AWS S3 with Parseable provides:
- Scalable Storage - Virtually unlimited storage capacity
- Cost Effective - Pay only for what you use
- Durability - 99.999999999% (11 9's) durability
- Integration - Native AWS ecosystem integration
Prerequisites
- AWS account with S3 access
- S3 bucket created for Parseable data (must be fully empty for new clusters)
- AWS credentials with S3 read/write permissions
- For optimum performance, ensure the S3 bucket is in the same region as your compute instances
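As a sketch, the bucket can be created and checked with the AWS CLI; the bucket name and region below are placeholders (note that regions other than us-east-1 also need `--create-bucket-configuration LocationConstraint=<region>`):

```bash
# Create the bucket in the same region as your compute instances
# (us-east-1 shown; other regions need a LocationConstraint)
aws s3api create-bucket \
  --bucket your-parseable-bucket \
  --region us-east-1

# Confirm the bucket is empty before pointing a new cluster at it
aws s3api list-objects-v2 --bucket your-parseable-bucket --max-items 1
```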
IAM Permissions
Create an IAM policy with the required S3 permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::your-parseable-bucket",
        "arn:aws:s3:::your-parseable-bucket/*"
      ]
    }
  ]
}
```

Parseable Configuration
Environment Variables
Configure Parseable to use S3 storage:
```bash
# S3 Storage Configuration
P_S3_URL=https://s3.us-east-1.amazonaws.com
P_S3_BUCKET=your-parseable-bucket
P_S3_REGION=us-east-1
P_S3_ACCESS_KEY=your-access-key
P_S3_SECRET_KEY=your-secret-key
# Optional: S3 path prefix
P_S3_PATH_PREFIX=parseable-data/
```

Docker Compose
```yaml
version: '3.8'
services:
  parseable:
    image: parseable/parseable:latest
    ports:
      - "8000:8000"
    environment:
      - P_S3_URL=https://s3.us-east-1.amazonaws.com
      - P_S3_BUCKET=your-parseable-bucket
      - P_S3_REGION=us-east-1
      - P_S3_ACCESS_KEY=${AWS_ACCESS_KEY_ID}
      - P_S3_SECRET_KEY=${AWS_SECRET_ACCESS_KEY}
      - P_USERNAME=admin
      - P_PASSWORD=admin
    command: ["parseable", "s3-store"]
```

Instance Metadata Service (IMDS)
For Parseable instances running on EC2, AWS credentials can be sourced from the Instance Metadata Service (IMDS), avoiding the need for explicit access keys:
- Ensure that Instance Metadata Service (IMDS) is enabled when creating the EC2 instance (under the Advanced details section)
- Select the Metadata version `V1 and V2 (token optional)`
- Set `P_AWS_IMDSV1_FALLBACK` to `true` if you want to use the V1 method
- Use `P_AWS_METADATA_ENDPOINT` to specify a custom endpoint URL if needed
Refer to the AWS metadata service docs for more details.
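With IMDS-based credentials the access and secret key variables are simply omitted; a minimal environment sketch (the endpoint value shown is the standard IMDS address, and both optional variables are only needed when diverging from defaults):

```bash
# Credentials come from IMDS; no P_S3_ACCESS_KEY / P_S3_SECRET_KEY needed
P_S3_URL=https://s3.us-east-1.amazonaws.com
P_S3_BUCKET=your-parseable-bucket
P_S3_REGION=us-east-1
# Optional: allow fallback to the IMDSv1 method
P_AWS_IMDSV1_FALLBACK=true
# Optional: custom metadata endpoint (standard address shown)
P_AWS_METADATA_ENDPOINT=http://169.254.169.254
```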
Kubernetes with IRSA
Use IAM Roles for Service Accounts (IRSA) for secure authentication:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: parseable
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/ParseableS3Role
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: parseable
spec:
  template:
    spec:
      serviceAccountName: parseable
      containers:
        - name: parseable
          image: parseable/parseable:latest
          env:
            - name: P_S3_URL
              value: "https://s3.us-east-1.amazonaws.com"
            - name: P_S3_BUCKET
              value: "your-parseable-bucket"
            - name: P_S3_REGION
              value: "us-east-1"
          args: ["parseable", "s3-store"]
```

S3 Bucket Configuration
Bucket Policy (Optional)
Restrict bucket access to specific IAM roles:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/ParseableS3Role"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-parseable-bucket",
        "arn:aws:s3:::your-parseable-bucket/*"
      ]
    }
  ]
}
```

Lifecycle Rules
Configure lifecycle rules for cost optimization:
```json
{
  "Rules": [
    {
      "ID": "TransitionToIA",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "parseable-data/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        }
      ]
    }
  ]
}
```

Server-Side Encryption
Enable default encryption on the bucket:
```bash
aws s3api put-bucket-encryption \
  --bucket your-parseable-bucket \
  --server-side-encryption-configuration '{
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms",
          "KMSMasterKeyID": "your-kms-key-id"
        }
      }
    ]
  }'
```

SSE-C (Customer-Provided Keys)
Parseable supports server-side encryption with customer-provided keys (SSE-C). With SSE-C, you store your data encrypted with your own encryption keys while Amazon S3 manages encryption/decryption transparently.
SSE-C requires HTTPS. Amazon S3 will reject requests made over HTTP when using SSE-C.
Generate a 256-bit AES key and Base64 encode it:
```bash
ENCRYPTION_KEY=$(openssl rand -base64 32)
echo "Encryption Key: $ENCRYPTION_KEY"
```

Add the Base64 encoded encryption key to the environment variable:

```bash
P_S3_SSEC_ENCRYPTION_KEY=SSE-C:AES256:$ENCRYPTION_KEY
```

For distributed deployments, set `P_S3_SSEC_ENCRYPTION_KEY` on both Query and Ingestor nodes.
If you lose the encryption key, you'll lose access to the log data. We recommend secure storage such as AWS Secrets Manager.
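Before deploying, it is worth sanity-checking that the generated key really decodes to 32 bytes (256 bits), since S3 rejects SSE-C keys of any other length; a small sketch:

```shell
# Generate a 256-bit key and verify it decodes to exactly 32 bytes
ENCRYPTION_KEY=$(openssl rand -base64 32)
KEY_BYTES=$(printf '%s' "$ENCRYPTION_KEY" | base64 -d | wc -c)
echo "Decoded key length: $KEY_BYTES bytes"
```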
Configuration Options
| Parameter | Description |
|---|---|
| P_S3_URL | S3 endpoint URL |
| P_S3_BUCKET | S3 bucket name |
| P_S3_REGION | AWS region |
| P_S3_ACCESS_KEY | AWS access key ID |
| P_S3_SECRET_KEY | AWS secret access key |
| P_S3_PATH_PREFIX | Optional path prefix in bucket |
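Pulling these parameters together, a single-node instance can be started directly with docker run (the credentials shown are placeholders):

```bash
docker run -p 8000:8000 \
  -e P_S3_URL=https://s3.us-east-1.amazonaws.com \
  -e P_S3_BUCKET=your-parseable-bucket \
  -e P_S3_REGION=us-east-1 \
  -e P_S3_ACCESS_KEY=your-access-key \
  -e P_S3_SECRET_KEY=your-secret-key \
  -e P_USERNAME=admin \
  -e P_PASSWORD=admin \
  parseable/parseable:latest parseable s3-store
```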
S3-Compatible Storage
Parseable also works with S3-compatible storage providers:
MinIO
```bash
P_S3_URL=http://minio:9000
P_S3_BUCKET=parseable
P_S3_REGION=us-east-1
P_S3_ACCESS_KEY=minioadmin
P_S3_SECRET_KEY=minioadmin
```

DigitalOcean Spaces
```bash
P_S3_URL=https://nyc3.digitaloceanspaces.com
P_S3_BUCKET=your-space-name
P_S3_REGION=nyc3
P_S3_ACCESS_KEY=your-spaces-key
P_S3_SECRET_KEY=your-spaces-secret
```

Cloudflare R2
```bash
P_S3_URL=https://account-id.r2.cloudflarestorage.com
P_S3_BUCKET=your-bucket
P_S3_REGION=auto
P_S3_ACCESS_KEY=your-r2-access-key
P_S3_SECRET_KEY=your-r2-secret-key
```

Best Practices
- Use IRSA in EKS - Avoid hardcoding credentials
- Enable Encryption - Use SSE-S3 or SSE-KMS
- Configure Lifecycle Rules - Optimize storage costs
- Use VPC Endpoints - Reduce data transfer costs
- Enable Versioning - Protect against accidental deletion
- Monitor Costs - Set up billing alerts
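For example, versioning can be enabled with a single AWS CLI call (the bucket name is a placeholder):

```bash
# Enable versioning to protect against accidental deletion
aws s3api put-bucket-versioning \
  --bucket your-parseable-bucket \
  --versioning-configuration Status=Enabled
```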
Troubleshooting
Access Denied Errors
- Verify IAM permissions are correct
- Check bucket policy allows access
- Verify credentials are not expired
- Check bucket region matches configuration
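The checks above can be run from a shell using the same credentials Parseable uses; these are standard AWS CLI commands, with placeholder bucket and region values:

```bash
# Which identity do the current credentials resolve to?
aws sts get-caller-identity

# Can that identity list the bucket?
aws s3 ls s3://your-parseable-bucket --region us-east-1

# Does the bucket's actual region match P_S3_REGION?
aws s3api get-bucket-location --bucket your-parseable-bucket
```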
Connection Issues
- Verify S3 endpoint URL is correct
- Check network connectivity to S3
- Verify VPC endpoints if using private networking
- Check security group rules
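A basic reachability check against the configured endpoint can rule out DNS and network problems before digging into VPC or security group configuration (the endpoint shown is a placeholder):

```bash
# An HTTP status line (even an error such as 403) confirms the endpoint is reachable
curl -sI https://s3.us-east-1.amazonaws.com | head -n 1
```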
Performance Issues
- Use S3 Transfer Acceleration for global access
- Consider using S3 Express One Zone for low latency
- Optimize object sizes for your workload
Next Steps
- Configure CloudWatch integration for AWS logs
- Set up alerts for storage metrics
- Create dashboards for monitoring