Parseable Release v1.4.0

Parseable v1.4.0 is out now! We've added new features and enhancements based on inputs from the community.

This release puts the spotlight on the new SSD / NVMe based hot tier for faster data retrieval and reduced S3 costs, a versatile JSON view on the Explore page, saved filters in pb for streamlined workflows, and much more. Let's dive into the details.

New features

Hot Tier in distributed Parseable cluster

Parseable now supports tiering of data. This means you can store data locally on the query node for faster retrieval and reduced S3 costs. We recommend using dedicated SSDs on the query nodes to fully leverage the performance. This feature is especially useful for real-time data analysis and monitoring, in cases where most queries target the locally stored (recent) data. Refer to the documentation for more details.
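
As a rough sketch, enabling the hot tier for a stream could look like the call below. Treat the endpoint path, payload shape, and size format as illustrative assumptions, and the URL, stream name, and credentials as placeholders; refer to the documentation for the exact API.

```typescript
// Minimal sketch: enable the hot tier for a stream over the REST API.
// The endpoint path, payload shape, and size format are assumed here;
// verify against the Parseable documentation before use.
const PARSEABLE_URL = 'http://parseable-query.example.com:8000'; // placeholder query node URL
const STREAM = 'backend-logs';                                   // placeholder stream name

async function enableHotTier(sizeOnDisk: string): Promise<void> {
  const res = await fetch(`${PARSEABLE_URL}/api/v1/logstream/${STREAM}/hottier`, {
    method: 'PUT',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${btoa('admin:admin')}`, // default credentials, change in production
    },
    // How much local SSD/NVMe space to reserve for this stream's hot tier.
    body: JSON.stringify({ size: sizeOnDisk }),
  });
  if (!res.ok) {
    throw new Error(`hot tier request failed: ${res.status} ${await res.text()}`);
  }
}

enableHotTier('20 GiB').catch(console.error);
```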

JSON View from the Explore Page

Many of our users love the table view, even for unstructured log events, but we also received several requests for a raw view of the data. We have now added a JSON view to the Explore page, showing the raw JSON data for each log event.

The JSON view also allows you to run a text search or jq queries on a given query result. This offers a more flexible way to explore the data, especially for unstructured logs, exactly the way you want.

JSON View

Saved filters in pb

pb, the Parseable CLI client, now supports saving, applying, or deleting filters. We introduced the saved filter feature on the server in the previous release, and it was an instant hit among users.

CLI users, however, felt left out. With this release, we have added the saved filter feature to pb as well. You can now save, apply, or delete filters from the CLI itself. This is especially useful for users who prefer to work from the terminal.

Enhancements

Partition management for streams

If you use streams with custom partitions (or have historical data with a custom time partition) set at the time of stream creation, you can now manage the columns and partitions for those streams in the stream management page.

Unified filter and SQL modal

Filter Builder and SQL options are now merged into a single modal for easy switching between the two.

Delete offline ingestors

You can now delete all the entries for an offline ingestor from the cluster page. To remove a live ingestor, you need to stop the ingestor first and then delete it from the cluster page.

Copy capabilities for data

Copy column text directly from the Explore page, in both the table and JSON views.

Honor new line characters in table view

The table view now respects new line characters in log events, displaying field data across multiple lines where applicable.

CORS configuration

The server now supports configurable Cross-Origin Resource Sharing (CORS) through the environment variable P_CORS. CORS is enabled by default, and users can disable it by setting P_CORS=false. Refer to the documentation for more details.
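
For illustration, here is a minimal sketch of a cross-origin query from a browser app once CORS is enabled. The /api/v1/query endpoint, the startTime/endTime fields, and the stream name are illustrative assumptions; refer to the query API documentation for the exact request shape.

```typescript
// Minimal sketch: a cross-origin query from a browser app. This only works
// when the Parseable server has CORS enabled (P_CORS). The /api/v1/query
// endpoint and the startTime/endTime fields are assumed here; adjust to
// your deployment.
async function queryParseable(): Promise<unknown> {
  const res = await fetch('http://parseable.example.com:8000/api/v1/query', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Basic ${btoa('admin:admin')}`, // default credentials, change in production
    },
    body: JSON.stringify({
      query: 'SELECT * FROM frontend LIMIT 10', // placeholder stream name
      startTime: '10m', // assumed relative time format
      endTime: 'now',
    }),
  });
  if (!res.ok) {
    throw new Error(`query failed: ${res.status}`);
  }
  return res.json();
}

queryParseable().then(console.log).catch(console.error);
```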

Security

A security vulnerability was reported in one of our dependencies, the object_store crate. Link to the relevant CVE.

With v1.4.0, we have updated the object_store crate to version 0.10.2, which includes the fix for this vulnerability. We recommend that all users upgrade to the latest version to avoid any security risks.

Load testing

Since the last release, we have made it a point to include ingestion (load testing) performance numbers in the release notes. We tested ingestion performance for this release as well, and we are happy to report that it has improved by 10% compared to the last release.

Parseable setup

Parseable v1.4.0 was deployed on a Kubernetes cluster in distributed mode, with 3 ingest nodes and 1 query node in the Parseable cluster. Each Parseable pod was allocated 2 vCPU and 4 GiB of memory. We also ensured each pod was deployed on a separate Kubernetes node to avoid resource contention.

Load generation

We used K6 on Kubernetes for load testing. K6 is a modern load testing tool that lets you write test scripts in JavaScript. For simpler deployment and easier scale-out, we used the K6 Operator. Refer to the steps we followed to set up K6 on Kubernetes in this blog post.

The load testing script is available in the Quest repository.
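
For context, the sketch below shows the general shape of such a k6 script. It is only an illustration: the /api/v1/ingest endpoint, the X-P-Stream header, and the credentials are assumptions, and the batch size mirrors Test Run 1. The actual script we used is in the Quest repository.

```typescript
import http from 'k6/http';
import { check } from 'k6';
import encoding from 'k6/encoding';

// Mirrors the test setup below: 15 k6 clients, 10 minute run time.
export const options = {
  vus: 15,
  duration: '10m',
};

const PARSEABLE_URL = __ENV.P_URL || 'http://parseable-ingest.example.com:8000'; // placeholder ingest URL
const STREAM = __ENV.P_STREAM || 'loadtest';
const BATCH_SIZE = 300; // log events per HTTP request, as in Test Run 1

export default function () {
  // Build a batch of synthetic log events.
  const batch: Record<string, string>[] = [];
  for (let i = 0; i < BATCH_SIZE; i++) {
    batch.push({
      level: 'info',
      message: `synthetic log event ${i}`,
      source: 'k6-load-test',
    });
  }

  // The /api/v1/ingest endpoint and X-P-Stream header are assumed here;
  // see the Quest repository for the exact script.
  const res = http.post(`${PARSEABLE_URL}/api/v1/ingest`, JSON.stringify(batch), {
    headers: {
      'Content-Type': 'application/json',
      'X-P-Stream': STREAM,
      Authorization: `Basic ${encoding.b64encode('admin:admin')}`,
    },
  });

  check(res, { 'ingest succeeded': (r) => r.status === 200 });
}
```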

Results

Test Run 1: 1 query node, 3 ingestor nodes; 2 vCPU and 4 GiB memory per node; 15 k6 clients ingesting data; 300 batches per HTTP request; run time: 10 minutes

Test Run 1

Test Run 2: 1 query node, 3 ingestor nodes; 3 vCPU and 4 GiB memory per node; 15 k6 clients ingesting data; 525 batches per HTTP request; run time: 10 minutes

Test Run 2

Note: We're hard at work on a better, standardized load test for Parseable. We will share the results in upcoming release notes. We'll also add query performance results in future releases for a better overview.