Your Dependencies Are Under Attack. Here's How to Watch Them in Real Time.

Debabrata Panigrahi
April 1, 2026
Build continuous supply-chain audit monitoring using SBOM generation, OpenTelemetry Collector, and Parseable Keystone AI to detect CVEs before attackers exploit them.

454,000 malicious packages shipped in a single year. axios compromised yesterday. The npm debug library hijacked last September. Supply-chain auditing can't be a CI-only checkbox anymore; it needs to be a continuous, observable signal. Here's how to build that with SBOMs, OpenTelemetry, and Parseable Keystone.

The supply-chain attack surface in 2026

Let's talk about the last six months.

In September 2025, attackers phished the maintainer of debug and chalk, two npm packages with billions of combined weekly downloads, through a convincing 2FA reset email from a spoofed npmjs.help domain. Malicious code targeting cryptocurrency wallets was pushed to npm within hours.

In March 2026, a group calling themselves TeamPCP compromised four widely used open-source projects in eight days: the Trivy vulnerability scanner itself (the irony), the KICS infrastructure-as-code scanner, the LiteLLM AI proxy library on PyPI, and the Telnyx communications library. And just yesterday, March 31, 2026, attackers compromised the official axios npm package, a library with over 100 million downloads per week, injecting a remote-access trojan into what most teams consider a harmless HTTP client.

Metric                                        Value
Malicious packages published in 12 months     454K+
Weekly downloads of compromised axios         100M+
Days for TeamPCP to compromise 4 projects     8
Downloads of dYdX typosquatted packages       121K

These aren't theoretical risks in a CISO's slide deck. They're packages your services pulled in this morning.

Why CI-only scanning is not enough

Most engineering teams today handle dependency security with one of these approaches:

  1. An npm audit or pip-audit step in CI that blocks the build on known CVEs.
  2. A container image scanner like Trivy or Grype that runs on push to the registry.
  3. An SCA platform like Snyk or Dependabot that watches the Git repo and opens PRs.

All of these share the same blind spot: they're build-time-only. A CVE disclosed six hours after your last deploy goes unnoticed until someone pushes the next commit, and that could be days or weeks. Worse, SCA platforms are repo-centric, not deployment-centric. They tell you what's vulnerable in your package-lock.json, not what's actually running in production right now.

The gap is clear: you need continuous, post-deployment monitoring of your dependency tree, and that monitoring data needs to land in the same observability stack where your logs, metrics, and traces already live, so the team that's on-call is also the team that sees the alert.

The insight that drives this entire architecture: Supply-chain audit data is telemetry. It belongs in your observability platform, not in a separate SCA dashboard that your SREs never check. Treat every vulnerability finding as a structured log event, ship it through OpenTelemetry, and let your existing alerting and query infrastructure do what it already does well.

The architecture: SBOMs + OTel + Parseable

The system we're going to build has four components that form a continuous loop:

  1. SBOM Generator: Produces a Software Bill of Materials (CycloneDX format) for every build artifact. This is your source of truth for "what packages are in this service."
  2. Vulnerability Scanner: Matches SBOMs against CVE databases (NVD, GitHub Advisory, OSV). Runs at build time and on a schedule against all deployed SBOMs.
  3. OpenTelemetry Collector: The universal transport. Receives all audit events as structured OTLP log records, enriches them with Kubernetes metadata, and exports to Parseable.
  4. Parseable + Keystone: The observability backend. Stores audit events on S3 in columnar format. Keystone, Parseable's AI orchestration layer, lets you ask natural-language questions like "Which production services are running a package with a known exploit?" and get an instant, query-backed answer.
  CI/CD Pipeline          Scheduled Re-scanner         CVE Feed Syncer
       │                        │                          │
       ▼                        ▼                          │
  ┌──────────┐           ┌──────────┐                      │
  │   Syft   │──SBOM──▶  │  Grype   │◀─── NVD/GHSA/OSV ◀──┘
  │  (SBOM)  │           │  (Scan)  │      (every 2 hrs)
  └──────────┘           └────┬─────┘
                              │
                    Structured findings
                     (OTel log records)
                              │
                              ▼
                 ┌────────────────────────┐
                 │   OpenTelemetry        │
                 │   Collector            │
                 │                        │
                 │  otlp → transform →    │
                 │  k8sattributes →       │
                 │  batch → otlphttp      │
                 └───────────┬────────────┘
                             │
                             ▼
                 ┌────────────────────────┐
                 │      Parseable         │
                 │                        │
                 │  ┌──────────────────┐  │
                 │  │  Columnar Store  │  │
                 │  │  (S3 / Parquet)  │  │
                 │  └────────┬─────────┘  │
                 │           │            │
                 │  ┌────────▼─────────┐  │
                 │  │   Keystone AI    │  │
                 │  │  "Any critical   │  │
                 │  │   CVEs in prod?" │  │
                 │  └──────────────────┘  │
                 └────────────────────────┘

Generate SBOMs on every build

An SBOM is a machine-readable inventory of every package, library, and module in your artifact. Think of it as a nutrition label for software. We use Syft (from Anchore) to generate SBOMs in CycloneDX 1.6 JSON format, the standard with the strongest vulnerability tooling ecosystem.
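For reference, here is roughly what a heavily trimmed CycloneDX document looks like; real Syft output carries many more fields per component (licenses, hashes, dependency graph edges), but the `components` array with `name`, `version`, and `purl` is the part the vulnerability matcher consumes:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "axios",
      "version": "1.7.2",
      "purl": "pkg:npm/axios@1.7.2"
    }
  ]
}
```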

Add this to your CI pipeline (GitHub Actions example):

- name: Generate SBOM
  uses: anchore/sbom-action@v0
  with:
    image: ${{ env.IMAGE_TAG }}
    format: cyclonedx-json
    output-file: sbom.cdx.json
 
- name: Scan SBOM for vulnerabilities
  uses: anchore/scan-action@v4
  with:
    sbom: sbom.cdx.json
    fail-build: false          # Don't block - alert instead
    severity-cutoff: critical
 
- name: Emit audit events to OTel Collector
  run: |
    # Convert scan results to OTel log records and ship via OTLP
    ./emit-audit-events.sh \
      --sbom sbom.cdx.json \
      --scan-results results.json \
      --service ${{ env.SERVICE_NAME }} \
      --environment production \
      --otel-endpoint http://otel-collector:4317

The SBOM itself gets stored alongside the container image, either as an OCI artifact in your registry or in an S3 bucket tagged by service name and Git SHA. This store becomes the basis for continuous re-scanning later.
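The `emit-audit-events.sh` step above is left abstract. As a rough sketch of what such a script could do, here is a possible Python equivalent that converts Grype's JSON output into OTLP/JSON log records and POSTs them to the Collector's OTLP/HTTP port (4318). The Grype field names (`matches[].vulnerability`, `matches[].artifact`) follow Grype's JSON output, but the exact attribute set and envelope here are a sketch to adapt, not a drop-in implementation:

```python
import json
import sys
import time
import urllib.request

def grype_match_to_log_record(match: dict, service: str, environment: str) -> dict:
    """Map one Grype finding onto an OTLP/JSON log record."""
    vuln, artifact = match["vulnerability"], match["artifact"]
    attrs = {
        "event.type": "vuln.found",
        "service.name": service,
        "deploy.environment": environment,
        "vuln.cve_id": vuln["id"],
        "vuln.severity": vuln["severity"].upper(),
        "vuln.package_name": artifact["name"],
        "vuln.installed_ver": artifact["version"],
    }
    return {
        "timeUnixNano": str(time.time_ns()),
        "severityText": "ERROR",
        "body": {"stringValue": f"{vuln['id']} found in "
                                f"{artifact['name']}@{artifact['version']}"},
        "attributes": [{"key": k, "value": {"stringValue": str(v)}}
                       for k, v in attrs.items()],
    }

def build_otlp_payload(records: list) -> dict:
    """Wrap log records in the OTLP/JSON resourceLogs envelope."""
    return {"resourceLogs": [{
        "resource": {"attributes": [{"key": "service.name",
                                     "value": {"stringValue": "supply-chain-scanner"}}]},
        "scopeLogs": [{"logRecords": records}],
    }]}

# Invoked as: emit_audit_events.py results.json payment-gateway http://otel-collector:4318
if __name__ == "__main__" and len(sys.argv) == 4:
    results_file, service, endpoint = sys.argv[1:4]
    with open(results_file) as f:
        matches = json.load(f)["matches"]
    records = [grype_match_to_log_record(m, service, "production") for m in matches]
    body = json.dumps(build_otlp_payload(records)).encode()
    req = urllib.request.Request(f"{endpoint}/v1/logs", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

Because every finding becomes a plain OTLP log record, the CI path and the re-scanner path can share this exact code.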

Why CycloneDX over SPDX? Both are valid SBOM standards, but CycloneDX was purpose-built for security use cases. It has native VEX (Vulnerability Exploitability eXchange) support, and the tooling ecosystem (Syft, Grype, Dependency-Track) speaks CycloneDX natively. If regulatory compliance mandates SPDX (it's an ISO standard), you can always generate both.

Ship audit events through the OTel Collector

Here's the key design decision: every audit event (SBOM generated, scan completed, vulnerability found, SBOM diff) is modeled as a structured OTel log record. This means your supply-chain audit data flows through the exact same pipeline as your application logs and traces, and lands in Parseable ready to query with SQL.

The event schema

Each vulnerability finding is a log record with rich, queryable attributes:

{
  "severityText": "ERROR",
  "body": "CVE-2026-31415 found in axios@1.7.2 (CRITICAL, CVSS 9.8)",
  "attributes": {
    "event.type":           "vuln.found",
    "service.name":         "payment-gateway",
    "deploy.environment":   "production",
    "vuln.cve_id":          "CVE-2026-31415",
    "vuln.severity":        "CRITICAL",
    "vuln.cvss_score":      9.8,
    "vuln.package_name":    "axios",
    "vuln.installed_ver":   "1.7.2",
    "vuln.fixed_ver":       "1.7.9",
    "vuln.exploit_maturity": "active-in-the-wild",
    "vuln.feed_source":     "ghsa",
    "scan.id":              "scan-a1b2c3d4"
  },
  "resource": {
    "service.name":         "supply-chain-scanner",
    "k8s.namespace.name":   "platform",
    "k8s.deployment.name":  "scanner"
  }
}

OTel Collector configuration

The Collector receives these events via OTLP, enriches them with Kubernetes metadata, and exports to Parseable's native OTLP endpoint. Parseable accepts OTel logs on port 4318 out of the box: no proprietary agent, no format conversion.

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
 
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(attributes["audit.pipeline"], "supply-chain")
          # OTel severity numbers: 17 = ERROR, 21 = FATAL
          - set(severity_number, 17) where attributes["vuln.severity"] == "HIGH"
          - set(severity_number, 21) where attributes["vuln.severity"] == "CRITICAL"
 
  k8sattributes:
    auth_type: serviceAccount
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.deployment.name
        - k8s.pod.name
 
  batch:
    send_batch_size: 200
    timeout: 10s
 
exporters:
  otlphttp:
    endpoint: https://your-instance.parseable.com/v1/logs
    headers:
      Authorization: "Basic ${PARSEABLE_CREDENTIALS}"
      X-P-Stream: "supply-chain-audit"
      X-P-Log-Source: "otel-logs"
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 60s
 
service:
  pipelines:
    logs/audit:
      receivers: [otlp]
      processors: [k8sattributes, transform, batch]
      exporters: [otlphttp]

That's it. The X-P-Stream header tells Parseable which log stream to route these events into, and X-P-Log-Source: otel-logs lets Parseable apply schema-on-write optimizations for OTel data. Every attribute in the log record becomes a queryable column in Parseable's columnar store: no ETL, no mapping, no delay.

Continuous re-scanning with fresh CVE feeds

Build-time scanning catches what's known at build time. But the axios CVE wasn't disclosed when you built your service last week. Continuous re-scanning closes this gap.

Deploy two Kubernetes CronJobs:

# 1. CVE Feed Syncer - refreshes the local vulnerability database.
# Point GRYPE_DB_CACHE_DIR at a shared volume so the re-scanner below
# reads the refreshed database rather than its own stale copy.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cve-feed-syncer
spec:
  schedule: "0 */2 * * *"      # Every 2 hours
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: syncer
              image: anchore/grype:latest
              command: ["grype", "db", "update"]

---
# 2. Scheduled Re-scanner - re-scans all deployed SBOMs
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sbom-rescanner
spec:
  schedule: "30 */6 * * *"     # Every 6 hours, 30 min after a feed sync
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: scanner
              image: your-org/sbom-rescanner:latest
              env:
                - name: SBOM_STORE
                  value: "s3://sbom-registry/production/"
                - name: OTEL_ENDPOINT
                  value: "http://otel-collector:4317"

The re-scanner fetches every active SBOM from the registry, runs Grype against the freshly synced CVE database, and emits the results as OTel log records: same schema, same pipeline, same destination in Parseable. When a new CVE hits a package you've had in production for weeks, it shows up in your observability stack within one re-scan cycle, at most a few hours after the feed picks it up.
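The re-scanner image itself can be a thin wrapper. Here is a minimal sketch, assuming the SBOM store has been synced to a local directory (e.g. with `aws s3 sync`) and `grype` is on the PATH; `grype sbom:<path> -o json` is real Grype CLI syntax, while the directory layout and file naming are assumptions:

```python
import json
import pathlib
import subprocess
import sys

# Mirrors the Collector's transform processor: OTel severity numbers
# (9 = INFO, 13 = WARN, 17 = ERROR, 21 = FATAL)
SEVERITY_TO_OTEL = {"LOW": 9, "MEDIUM": 13, "HIGH": 17, "CRITICAL": 21}

def severity_number(severity: str) -> int:
    """OTel severity number for a vulnerability severity (defaults to INFO)."""
    return SEVERITY_TO_OTEL.get(severity.upper(), 9)

def scan_sbom(sbom_path: pathlib.Path) -> list:
    """Run Grype against one stored SBOM and return its raw findings."""
    out = subprocess.run(["grype", f"sbom:{sbom_path}", "-o", "json"],
                         capture_output=True, check=True, text=True)
    return json.loads(out.stdout).get("matches", [])

# Invoked as: sbom_rescanner.py /var/sbom-store
if __name__ == "__main__" and len(sys.argv) == 2:
    for sbom in sorted(pathlib.Path(sys.argv[1]).glob("*.cdx.json")):
        for match in scan_sbom(sbom):
            vuln = match["vulnerability"]
            print(sbom.stem, vuln["id"], severity_number(vuln["severity"]))
            # ...then emit each finding as an OTel log record, using the
            # same schema as the build-time pipeline
```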

On top of scheduled scans, an SBOM diff engine runs on every new build, comparing the new SBOM to the previous version for the same service. Added a new dependency? Upgraded a library? Removed one? Each change becomes an sbom.diff event in Parseable: an invaluable audit trail for understanding exactly when your dependency surface changed.
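The diff logic itself is small. A sketch, keyed on the `components[].name` and `components[].version` fields that CycloneDX documents carry (a production version would likely key on the full `purl` to distinguish ecosystems):

```python
def diff_sboms(old: dict, new: dict) -> list:
    """Compare the component lists of two CycloneDX SBOMs and return
    one change record per added, removed, or version-changed package."""
    old_pkgs = {c["name"]: c["version"] for c in old.get("components", [])}
    new_pkgs = {c["name"]: c["version"] for c in new.get("components", [])}
    events = []
    for name, version in sorted(new_pkgs.items()):
        if name not in old_pkgs:
            events.append({"change": "added", "package": name, "to": version})
        elif old_pkgs[name] != version:
            # covers both upgrades and downgrades
            events.append({"change": "changed", "package": name,
                           "from": old_pkgs[name], "to": version})
    for name, version in sorted(old_pkgs.items()):
        if name not in new_pkgs:
            events.append({"change": "removed", "package": name, "from": version})
    return events
```

Each returned record becomes the attribute payload of one sbom.diff log event, shipped through the same OTLP pipeline as the vulnerability findings.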

Ask Keystone, not dashboards

This is where the architecture stops being a better version of Dependabot and starts being something fundamentally different.

Parseable's Keystone is an AI orchestration layer that sits on top of your telemetry data lake. It uses LLMs (including Claude) with built-in schema awareness to turn natural-language questions into point-in-time answers: no query syntax, no dashboard spelunking, no context-switching between tools.

With your supply-chain audit events flowing into Parseable, here's what becomes possible:

Natural-language CVE triage

Instead of staring at a dashboard of 200 findings, you ask Keystone:

"Which production services are affected by CVEs with active exploits
 in the last 24 hours?"

Keystone resolves the schema, generates the SQL, executes it against Parseable's columnar store, and returns something like:

3 production services are affected by actively-exploited CVEs
in the last 24 hours:
 
1. payment-gateway - CVE-2026-31415 in axios@1.7.2
   CVSS 9.8 | Exploit: active-in-the-wild | Fix: upgrade to 1.7.9
 
2. checkout-api - CVE-2026-31415 in axios@1.6.8
   CVSS 9.8 | Exploit: active-in-the-wild | Fix: upgrade to 1.7.9
 
3. notification-service - CVE-2026-28901 in jsonwebtoken@9.0.0
   CVSS 8.1 | Exploit: proof-of-concept | Fix: upgrade to 9.0.3

Cross-signal correlation

Because Parseable is a unified observability platform (logs, metrics, and traces in the same store), Keystone can correlate supply-chain events with operational data. Ask it:

"Has the payment-gateway shown any anomalous behavior since the
 axios vulnerability was first detected?"

Keystone synthesizes across your audit events and your application traces and logs to give you a unified answer, the kind of correlation that's impossible when your vulnerability data lives in Snyk and your traces live in Datadog.

Under the hood: SQL on Parquet

For teams that prefer SQL, every Keystone answer is backed by a query you can inspect, modify, and save as a dashboard panel:

SELECT
  "service.name"                AS service,
  "vuln.cve_id"                 AS cve,
  "vuln.cvss_score"             AS cvss,
  "vuln.package_name"           AS package,
  "vuln.installed_ver"          AS version,
  "vuln.fixed_ver"              AS fix_available,
  "vuln.exploit_maturity"       AS exploit_status
FROM
  "supply-chain-audit"
WHERE
  "event.type" = 'vuln.found'
  AND "deploy.environment" = 'production'
  AND "vuln.exploit_maturity" IN ('active-in-the-wild', 'proof-of-concept')
  AND p_timestamp > NOW() - INTERVAL '24 hours'
ORDER BY
  "vuln.cvss_score" DESC;

The data sits on S3 in Apache Parquet format with up to 90% compression. That means you can retain months of audit history at a fraction of the cost of traditional logging platforms, which matters for compliance, trend analysis, and post-incident forensics.
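The same query can also be run programmatically, for example from an alerting script. Below is a stdlib-only sketch; the `/api/v1/query` path and `{query, startTime, endTime}` body follow Parseable's query API, but the relative time values (`24h`, `now`) are an assumption to verify against your deployed version's docs:

```python
import base64
import json
import sys
import urllib.request

def build_query_request(base_url: str, sql: str, start: str, end: str,
                        user: str, password: str) -> urllib.request.Request:
    """Build a request against Parseable's SQL query API
    (POST /api/v1/query with a JSON body and Basic auth)."""
    body = json.dumps({"query": sql, "startTime": start, "endTime": end}).encode()
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{base_url}/api/v1/query", data=body, method="POST",
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {token}"})

# Invoked as: query_audit.py <username> <password>
if __name__ == "__main__" and len(sys.argv) == 3:
    sql = ('SELECT "service.name", "vuln.cve_id", "vuln.cvss_score" '
           'FROM "supply-chain-audit" WHERE "vuln.severity" = \'CRITICAL\'')
    req = build_query_request("https://your-instance.parseable.com", sql,
                              "24h", "now", sys.argv[1], sys.argv[2])
    print(urllib.request.urlopen(req).read().decode())
```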

Putting it all together

Here's what the complete loop looks like in practice:

  1. Developer pushes code. CI generates an SBOM with Syft, scans it with Grype, emits findings as OTel log records.
  2. OTel Collector enriches and ships. Adds K8s metadata (namespace, deployment, pod), batches events, and exports to Parseable via /v1/logs.
  3. Parseable stores on S3. Events land in the supply-chain-audit stream, compressed into Parquet columns. Every attribute is queryable instantly.
  4. Every 6 hours, the re-scanner runs. It fetches all active SBOMs, scans them against the freshly synced CVE database, and emits new findings through the same OTel pipeline.
  5. New CVE drops. The re-scanner picks it up within 6 hours. The finding lands in Parseable. An alert fires to Slack or PagerDuty.
  6. On-call engineer asks Keystone. "Which services are affected? Is there a fix? Has any service shown anomalous behavior since the vulnerability was introduced?" Keystone answers in seconds, backed by SQL on your actual data.
  7. Fix deployed. New SBOM generated. SBOM diff engine confirms the vulnerable package was upgraded. The loop closes.

This isn't a separate security tool bolted onto your stack. It's your existing observability platform, Parseable, with supply-chain audit data flowing through the same OTel pipeline as everything else. One platform, one query language, one on-call team.

Cost matters. Parseable's object-storage-first architecture means you're paying S3 prices for data at rest, not SaaS-observability prices. With up to 90% compression on Parquet, a year of supply-chain audit data for 500 microservices costs less than a week of equivalent retention on legacy platforms. Keystone and all AI features are included in the Pro plan at $0.37/GB ingested.

Conclusion

Supply-chain attacks are no longer edge cases: they are the dominant threat vector targeting modern software teams. With hundreds of thousands of malicious packages published every year and trusted libraries like axios falling to compromise, the question isn't whether your dependencies will be affected, but how fast you'll know about it when they are.

The architecture outlined in this guide (continuous SBOM generation with Syft, automated vulnerability scanning with Grype, and structured event delivery through OpenTelemetry) turns supply-chain security from a periodic CI checkbox into a real-time observability signal. By routing every CVE finding, SBOM diff, and scan result into Parseable as structured log data, you get the same query, alerting, and correlation capabilities you already rely on for application monitoring, applied to your dependency attack surface.

Parseable's Keystone AI layer takes this further. Instead of manually triaging hundreds of vulnerability findings across dashboards, your on-call team asks plain-language questions and gets instant, SQL-backed answers that correlate CVE exposure with live application behavior. That's the difference between discovering a compromised dependency during a post-incident review and catching it within hours of disclosure.

Every tool in this pipeline (Syft, Grype, the OpenTelemetry Collector, CycloneDX) is open source or an open standard. Parseable stores everything on S3 in compressed Apache Parquet, so retaining months of audit history costs a fraction of what legacy observability platforms charge. Whether you're running 10 services or 500, the economics scale with you rather than against you.

Your dependencies won't stop being attacked. But with continuous supply-chain monitoring built into your observability stack, you stop being the last to know.

Further reading: Ingesting Kubernetes logs into Parseable with OTel Collector | OTel Collector vs FluentBit | Introducing Parseable Cloud
