The True Cost of Observability: Why $0.03/GB Is Never $0.03/GB

Debabrata Panigrahi
February 27, 2026
Dissecting hidden observability storage costs behind cheap per-GB headlines. Learn how compute, egress, and lock-in inflate your real TCO.

You are evaluating a new observability backend. The pricing page says "$0.03/GB" and your brain does quick math: 500 GB/day times $0.03 is $15/day, roughly $450/month, about $5,400/year. For half a terabyte of daily telemetry, that sounds almost too good to be true.

It is.

That $0.03 is the storage line item — the cost of parking compressed bytes on disk. It does not include the compute cycles to ingest, index, merge, and query those bytes. It does not include the egress fees when you pull data out. It does not include the operational overhead of running the database cluster underneath. And it certainly does not account for what happens to your data and your leverage when you need to leave.

This guide breaks down the real-world total cost of ownership (TCO) for observability platforms that advertise cheap per-GB storage rates, with ClickHouse's Managed ClickStack (formerly HyperDX) as the primary example. We will show exactly where the hidden costs live, why they compound faster than the headline number suggests, and how to evaluate observability pricing with your eyes open.

The Anatomy of Observability Costs

Before we dig into specific numbers, it helps to understand the five cost categories that make up the true price of any observability platform. Most vendor pricing pages highlight only one or two of these. The rest show up on your cloud bill or in your team's time sheets.

1. Storage

This is the number on the pricing page — the per-GB cost of storing compressed telemetry data on disk or object storage. It is the smallest component of total cost for most workloads, but it is the one vendors love to advertise because it is the most flattering.

2. Compute

Ingesting data, compressing it, building indices, running background merge operations, and executing queries all require CPU and memory. Compute is almost always the largest cost driver in an observability stack, yet it is often billed separately or buried in instance-level charges that don't appear on the pricing page.

3. Egress and Data Movement

Moving data out of the platform — whether for backup, migration, feeding into analytics pipelines, or simply querying from a different region — incurs egress charges. These are easy to overlook during evaluation and painful to discover during an incident.

4. Operational Overhead

Someone has to keep the system running. Capacity planning, cluster scaling, version upgrades, replication monitoring, backup validation — these tasks consume engineering hours that have a real dollar cost. A system that requires 0.5 FTE of ongoing management is costing you $75,000-$150,000/year in personnel before you look at the infrastructure bill.

5. Lock-in and Switching Costs

When your data lives in a proprietary format on a vendor's infrastructure, the cost of leaving includes data export, query rewriting, dashboard rebuilding, alert reconfiguration, and team retraining. These costs are zero on day one and enormous on day 1,000 — which is exactly when you are most likely to want to leave.

ClickStack: A Case Study in Headline Pricing

ClickHouse launched Managed ClickStack (formerly HyperDX) as a unified observability solution built on ClickHouse Cloud. The headline: "$0.03/GB" for storage. Let's trace what actually happens to your bill.

Storage: The $0.03 That Started It All

At $0.03/GB, storing 500 GB/day for 30 days costs roughly:

  • 500 GB/day x 30 days = 15,000 GB (raw)
  • With ClickHouse's typical 5-10x compression, that is ~1,500-3,000 GB on disk
  • At $0.03/GB/month: $45-$90/month for storage

So far, so good. That is genuinely cheap storage. But your data is not useful sitting on disk. It needs to be queryable.
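The storage arithmetic above fits in a few lines. This is a back-of-envelope sketch using the assumptions already stated (500 GB/day, 5-10x compression, $0.03/GB/month), not measured values:

```python
# Back-of-envelope storage cost for 500 GB/day over 30 days.
DAILY_INGEST_GB = 500
DAYS = 30
RATE_PER_GB_MONTH = 0.03           # advertised storage rate
COMPRESSION_RANGE = (5, 10)        # typical ClickHouse compression, per the text

raw_gb = DAILY_INGEST_GB * DAYS    # 15,000 GB raw
on_disk = [raw_gb / r for r in COMPRESSION_RANGE]            # ~3,000 and ~1,500 GB
monthly_cost = [gb * RATE_PER_GB_MONTH for gb in on_disk]    # ~$90 and ~$45

print(f"raw: {raw_gb} GB, on disk: {on_disk[1]:.0f}-{on_disk[0]:.0f} GB")
print(f"storage bill: ${monthly_cost[1]:.0f}-${monthly_cost[0]:.0f}/month")
```

Adjust the constants to your own volume and the storage line barely moves; as the next section shows, it is the wrong line to watch.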

Compute: The Line Item That Changes Everything

ClickStack charges for compute separately, at $0.22-$0.39 per compute-unit per hour. A compute unit roughly corresponds to a vCPU with associated memory.

For a 500 GB/day observability workload, you need compute for three continuous activities:

  • Ingestion: Receiving, parsing, compressing, and writing data to MergeTree tables
  • Background merges: ClickHouse continuously merges small data parts into larger ones — this is not optional, it is how MergeTree works
  • Queries: Engineers running searches, dashboards refreshing, alerts evaluating

A realistic compute allocation for this workload is 8-16 compute units running 24/7. Let's use a moderate estimate of 12 compute units at a mid-range rate of $0.30/compute-unit/hour:

  • 12 units x $0.30/hour x 730 hours/month = $2,628/month

That single line item is 29-58x larger than the storage cost. The $0.03/GB headline just became $0.03 + $0.175/GB once you amortize compute across ingested volume.
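To see how compute swamps storage, amortize the compute bill over the same month's ingestion. The 12-unit allocation and the $0.30/unit-hour rate are the working estimates used above, not a vendor quote:

```python
# Amortize a 24/7 compute bill over monthly ingested volume.
UNITS = 12
RATE_PER_UNIT_HOUR = 0.30
HOURS_PER_MONTH = 730
INGESTED_GB = 500 * 30             # 15,000 GB/month

compute_monthly = UNITS * RATE_PER_UNIT_HOUR * HOURS_PER_MONTH   # ~$2,628
effective_per_gb = compute_monthly / INGESTED_GB                 # ~$0.175/GB

print(f"compute: ${compute_monthly:,.0f}/month")
print(f"effective compute cost: ${effective_per_gb:.3f}/GB ingested")
```

Note that the effective per-GB compute cost is independent of compression: compute is driven by how much you ingest and query, not by how small the data sits on disk.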

Egress: The Tax on Data Access

Querying data that lives on ClickHouse Cloud infrastructure means data flows from the storage layer to the compute layer and then to your client. Cross-region or cross-cloud queries, data exports for compliance, and API calls from external tools all accumulate egress costs.

Egress pricing varies, but at standard cloud rates (~$0.09/GB for inter-region traffic), a team that moves even 2-5 TB/month out of region through exports, cross-region dashboards, and API pulls adds another $150-$500/month.

The Real Monthly Bill

Here is what 500 GB/day on ClickStack actually looks like:

| Cost Category | Monthly Cost | % of Total |
| --- | --- | --- |
| Storage ($0.03/GB) | $45-$90 | 1-3% |
| Compute (12 units @ $0.30/hr) | $2,628 | 82-93% |
| Egress and data movement | $150-$500 | 5-15% |
| Total | $2,823-$3,218 | 100% |
| Annualized | $33,876-$38,616 | |

The advertised "$0.03/GB" storage cost represents roughly 2% of your actual bill. The real effective rate is closer to $0.19-$0.21/GB ingested when all costs are included.

This is not unique to ClickStack. Any platform that separates storage and compute pricing while advertising only the storage rate is doing the same thing. The mechanism is always the same: a low storage headline draws you in, and compute charges drive the real bill.

A Worked Example: 500 GB/Day, 90-Day Retention

Let's make the comparison concrete. Three scenarios, same workload: 500 GB of daily log ingestion, 90 days of retention, a team of five engineers querying the system daily for operational work and incident response.

Scenario A: ClickStack (Managed ClickHouse)

| Line Item | Monthly Cost |
| --- | --- |
| Storage: ~4,500 GB compressed x $0.03/GB | $135 |
| Compute: 12 units x $0.30/hr x 730 hrs | $2,628 |
| Egress: ~1 TB data movement | $90 |
| Monthly total | ~$2,853 |
| Annual total | ~$34,236 |

Note: This does not include the operational overhead of ClickHouse cluster management. If you are running self-managed ClickHouse instead of the managed service, replace the compute line with EC2/GCE instances (similar cost) and add 0.5 FTE of engineering time ($75,000-$100,000/year).

Scenario B: Self-Hosted ClickHouse (No Managed Service)

Some teams choose to run ClickHouse themselves to avoid managed service markups. Here is what that looks like:

| Line Item | Monthly Cost |
| --- | --- |
| EC2 instances: 3x i4i.xlarge (NVMe) | $966 |
| ClickHouse Keeper: 3x t3.medium | $93 |
| EBS for Keeper: 3x 50 GB gp3 | $12 |
| Data transfer | $50-$100 |
| Infrastructure monthly total | ~$1,121-$1,171 |
| Infrastructure annual total | ~$13,452-$14,052 |
| Engineering overhead: 0.5 FTE | ~$6,250/month |
| Fully-loaded monthly total | ~$7,371-$7,421 |
| Fully-loaded annual total | ~$88,452-$89,052 |

The infrastructure cost is lower, but the operational burden is significant. Keeper management, merge tuning, capacity planning, upgrade coordination, and incident response for the ClickHouse cluster itself consume real engineering hours. Those hours have a cost, even if they don't appear on an AWS invoice.

Scenario C: Parseable Cloud (Pro Plan)

Parseable Cloud Pro pricing is all-inclusive at $0.39/GB ingested. Here is what the same workload costs:

Line ItemMonthly Cost
| Line Item | Monthly Cost |
| --- | --- |
| Ingestion: 500 GB/day x 30 days = 15,000 GB x $0.39 | $5,850 |
| Retention: 365 days (included) | $0 |
| Query scanning: up to 10x ingestion (included) | $0 |
| Dashboards, alerts, AI analysis (included) | $0 |
| Unlimited users (included) | $0 |
| 99.9% uptime SLA (included) | $0 |
| Monthly total | $5,850 |
| Annual total | $70,200 |

At first glance, $0.39/GB looks more expensive than $0.03/GB. And on a pure storage comparison, it is. But the $0.39 includes everything: compute for ingestion, merges, and queries; 365 days of retention (not just 90); scanning up to 10x your monthly ingestion volume; unlimited users and dashboards; AI-native analysis; anomaly detection; and a 99.9% uptime SLA.

There are no hidden line items. No compute units to estimate. No egress surprises. Your monthly bill is your daily ingestion volume times $0.39, period.
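That predictability can be expressed as a one-line pricing function. The $0.39/GB rate is Parseable's published Pro price; with all-inclusive pricing there are no other terms to model:

```python
def parseable_monthly_bill(daily_ingest_gb: float, rate_per_gb: float = 0.39) -> float:
    """All-inclusive bill: monthly ingestion volume times the per-GB rate."""
    return daily_ingest_gb * 30 * rate_per_gb

print(f"${parseable_monthly_bill(500):,.2f}/month")        # the 500 GB/day workload
print(f"${parseable_monthly_bill(500) * 12:,.2f}/year")    # annualized
```

Contrast this with the ClickStack model, where the equivalent function needs compute units, unit-hour rates, uptime hours, and an egress estimate as inputs before it can produce a number.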

Side-by-Side Summary

| | ClickStack (Managed) | Self-Hosted ClickHouse | Parseable Cloud Pro |
| --- | --- | --- | --- |
| Headline rate | $0.03/GB storage | N/A (infrastructure) | $0.39/GB ingested |
| Annual infra cost | ~$34,236 | ~$13,752 | $70,200 |
| Annual personnel cost | Minimal (managed) | ~$75,000 (0.5 FTE) | $0 |
| Annual total | ~$34,236 | ~$88,752 | $70,200 |
| Retention included | Depends on storage | Depends on disk | 365 days |
| Query compute included | No (billed per CU) | Yes (your infra) | Yes (up to 10x ingestion) |
| Pricing predictability | Low (usage-based compute) | Medium (infra is fixed, but scaling isn't) | High (per-GB, all-inclusive) |
| Data format | Proprietary ClickHouse | Proprietary ClickHouse | Open Apache Parquet |
| Vendor lock-in risk | High | Medium (OSS but proprietary format) | Low (open format, your choice of deployment) |

The comparison shifts depending on your priorities. If raw infrastructure cost is the only metric, self-hosted ClickHouse wins on paper, until you account for the engineering hours. If total cost including personnel is the metric, ClickStack's managed service and Parseable Cloud land within the same order of magnitude, but Parseable includes a full year of retention and has far more predictable billing. If you value data ownership, pricing transparency, and the ability to leave without a migration project, Parseable's open-format approach is in a different category entirely.
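The comparison table reduces to a small script. All figures are this article's midpoint estimates, not vendor quotes, so substitute your own numbers before drawing conclusions:

```python
# Annual TCO for the three scenarios, using this article's midpoint estimates.
scenarios = {
    "ClickStack (managed)":   {"infra_monthly": 2853, "personnel_annual": 0},
    "Self-hosted ClickHouse": {"infra_monthly": 1146, "personnel_annual": 75_000},
    "Parseable Cloud Pro":    {"infra_monthly": 5850, "personnel_annual": 0},
}

annual_totals = {}
for name, s in scenarios.items():
    annual_totals[name] = s["infra_monthly"] * 12 + s["personnel_annual"]
    print(f"{name}: ${annual_totals[name]:,}/year")
```

The structure of the dictionary is the point: only the self-hosted scenario carries a personnel term, and only the Parseable scenario needs no separate compute or egress estimate folded into its infra figure.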

The Hidden Cost Nobody Talks About: Lock-In

Cost comparisons typically stop at the annual bill. But there is a structural cost that compounds silently over time: the cost of leaving.

Data Format Lock-In

ClickHouse stores data in its proprietary MergeTree format. Your observability data — every log line, every trace span, every metric point — lives in a format that only ClickHouse can read natively. Want to run an ad-hoc query with DuckDB? You need to export the data first. Want to feed your logs into a data science pipeline? Export. Want to switch to a different observability backend? Export everything, transform it, and re-ingest.

At 500 GB/day with 90-day retention, you are sitting on roughly 45 TB of raw data (compressed to ~4.5-9 TB on disk). Exporting that volume is not a weekend project. It is a multi-week engineering effort with significant data transfer costs.

Parseable stores data in Apache Parquet on object storage. Parquet is an open standard readable by Spark, DuckDB, Athena, Polars, Pandas, Trino, and dozens of other tools. If you are on the Enterprise plan with Bring Your Own Bucket (BYOB), the Parquet files sit in an S3 bucket you own. There is no export step. There is no migration project. Your data is already in an open format on storage you control.

Query and Dashboard Lock-In

Every dashboard you build, every alert rule you configure, and every saved query you write on a platform is a small investment in that platform's ecosystem. After two years, you may have hundreds of dashboards and alert rules that would need to be rebuilt from scratch on a new system.

This is true of any platform, including Parseable. But the magnitude differs. Parseable uses standard SQL. Your saved queries are portable to any SQL-compatible system. ClickHouse uses its own SQL dialect with extensions that do not exist elsewhere — ARRAY JOIN, FINAL, SAMPLE BY, and other ClickHouse-specific syntax that would need to be rewritten.

The Switching Cost Multiplier

A reasonable estimate for the total switching cost away from a deeply embedded observability platform:

| Switching Cost Category | Estimated Cost |
| --- | --- |
| Data export and migration | $10,000-$50,000 (engineering time) |
| Query rewriting | $5,000-$20,000 |
| Dashboard and alert rebuilding | $5,000-$15,000 |
| Team retraining | $5,000-$10,000 |
| Parallel running period (2-4 weeks) | 2x infrastructure cost |
| Total estimated switching cost | $25,000-$95,000 |

This is the cost you pay when you leave. It does not appear on any pricing page, but it is real. Choosing an open-format platform does not eliminate switching costs, but it substantially reduces the data migration and query rewriting components — typically the two largest items.

What "All-Inclusive" Actually Means

The observability industry has developed a pattern where vendors decompose their pricing into granular line items — storage, compute, queries, users, alerts, retention — and then advertise the cheapest one. This makes comparison shopping nearly impossible without a spreadsheet and several hours of reading documentation.

Parseable Cloud's Pro plan takes a different approach: $0.39/GB ingested, everything included. Here is what "everything" means:

  • 365 days of retention — no tiered storage charges, no archival fees
  • Query scanning up to 10x monthly ingestion — query your data freely without per-query billing anxiety. Additional scans beyond 10x are $0.02/GB
  • AI-native analysis and anomaly detection — built into the platform, not an add-on
  • Unlimited users — no per-seat fees that penalize collaboration
  • Unlimited dashboards and alerts — build what you need without worrying about dashboard limits
  • Full API access — integrate with your tooling without API call charges
  • 99.9% uptime SLA — availability guarantees without premium support tiers
  • 14-day free trial — evaluate with your real data before committing

The Pro plan runs on Parseable Cloud's shared, multi-tenant infrastructure. You get a fully managed experience — no infrastructure to provision, no upgrades to coordinate, no capacity to plan.

For organizations that need more control, the Enterprise plan (starting at $15,000/year) adds Bring Your Own Bucket for full data ownership, Apache Iceberg support, premium support, and flexible deployment options across Parseable Cloud, BYOC, or self-hosted. BYOB means your telemetry data lives in an S3 bucket in your own AWS, GCS, or Azure account — Parseable never stores a copy. Enterprise deployments also offer lower per-GB rates: $0.25/GB for BYOC and $0.20/GB for self-hosted.

Five Questions to Ask Any Observability Vendor

Before you sign anything, ask these questions. The answers will tell you more about your real TCO than any pricing page.

1. "What is my total monthly cost at X GB/day, including compute?"

If the vendor cannot give you a single number that includes storage, compute, and query costs, the pricing model is designed to obscure total cost. Insist on a worked example at your actual volume.

2. "What format is my data stored in, and can I read it with other tools?"

If the answer is a proprietary format, your data is locked in. If the answer is an open format like Parquet but the files live on the vendor's infrastructure, you have format portability but not data ownership. The ideal answer is "open Parquet on storage you own." See our guide on why your observability data should live in Apache Parquet.

3. "What does 365-day retention cost?"

Many platforms price retention aggressively — short retention is cheap, but extending to a year multiplies costs. If the platform charges premium hot-storage rates ($0.30-$1.00/GB/month) for data that stays queryable and your compressed data is 3 TB, you may be looking at an additional $1,000-$3,000/month just to keep old data accessible. With Parseable Pro, 365-day retention is included in the per-GB price.

4. "What happens to my data if I leave?"

If the answer involves an export process measured in days or weeks, that is a red flag. With an open-format, data lake approach, leaving means pointing a new query engine at your existing Parquet files. There is no export step.

5. "How many engineers does it take to keep this running?"

For managed SaaS, the answer should be zero. For self-hosted, get an honest assessment. Running a ClickHouse cluster requires real expertise — merge tuning, Keeper management, capacity planning, shard rebalancing. Running Parseable self-hosted requires deploying a single binary and pointing it at an S3 bucket. The operational complexity gap directly translates to personnel costs.

The Pricing Transparency Principle

The observability industry has a pricing transparency problem. Decomposed pricing — where storage, compute, queries, users, alerts, and retention are all separate line items — benefits vendors because it makes comparison impossible and creates opportunities for cost creep. Engineers evaluate on the headline number, finance approves a budget based on that headline, and the real bill arrives three months later at 5-10x the estimate.

All-inclusive per-GB pricing is not always the cheapest option at every volume and every workload profile. But it is always the most predictable. When your finance team asks "what will observability cost next quarter?", you can answer with a single multiplication: expected daily volume times 30 times $0.39. No spreadsheets, no compute-unit estimation, no egress modeling.

Predictability has real value. It means no surprise budget conversations. It means engineering teams can instrument freely without worrying that additional telemetry will blow through cost limits. And it means you can plan capacity and retention without running financial simulations.

Where This Fits in the Bigger Picture

This article is part of a series on the emerging observability data lake architecture.

The Bottom Line

A "$0.03/GB" headline is a storage price, not an observability price. Real observability costs include compute (the dominant expense), egress, operational overhead, and the long-term tax of vendor lock-in. When you account for all five cost categories, the cheap per-GB number evaporates.

Parseable Cloud's Pro plan at $0.39/GB is not trying to win a storage-cost comparison. It is trying to give you a number you can actually budget against — one that includes everything you need to run observability in production: storage, compute, 365 days of retention, unlimited users, AI-native analysis, and a 99.9% uptime SLA.

The question is not "which platform has the cheapest storage?" The question is "which platform gives me the most predictable, transparent, and defensible total cost of ownership?" When you ask that question, all-inclusive pricing starts to look like the better deal.

Start a 14-day free trial on Parseable Cloud and see what predictable observability pricing feels like. Pay-as-you-go at $0.39/GB ingested with everything included.
