20 posts tagged with "Parseable"

Read about Parseable features, integrations, and capabilities for efficient log management and data analysis.

Parseable Release v1.4.0

· 5 min read

Parseable v1.4.0 is out now! We've added new features and enhancements based on inputs from the community.

This release puts the spotlight on the new SSD / NVMe based hot tier for faster data retrieval and reduced S3 costs, a versatile JSON view on the Explore page, saved filters in pb for streamlined workflows, and much more. Let's dive into the details.

New features

Hot Tier in distributed Parseable cluster

Parseable now allows tiering of data: you can store data locally on the query node for faster retrieval and reduced S3 costs. We recommend using a dedicated SSD on the query nodes to fully leverage the performance. This feature is especially useful for real-time data analysis and monitoring, where most queries target data that is available locally. Refer to the documentation for more details.

JSON View from the Explore Page

Several of our users love the table view, even for unstructured log events, but we also received several requests for a raw view of the data. We have now added a JSON view to the Explore page, showing the raw JSON data for each log event.

The JSON view also allows you to run a text search or jq queries on a given query result. This offers a more flexible way to explore the data, especially for the unstructured logs, exactly the way you want.

JSON View
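As a rough illustration of the kind of filtering the jq support enables, here is a small Python sketch that applies a similar filter to a query result exported from the JSON view; the file name and field names are assumptions for illustration.

```python
import json

# Rough Python analogue of a jq filter such as:
#   jq '.[] | select(.status >= 500) | .message'
# applied to a query result exported from the JSON view.
# The file name and field names (status, message) are assumptions.
with open("query_result.json") as f:
    events = json.load(f)  # a JSON array of log events

errors = [e["message"] for e in events if int(e.get("status", 0)) >= 500]
print(f"{len(errors)} events with status >= 500")
```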

Saved filters in pb

pb, the Parseable CLI client, now supports saving, applying, and deleting filters. We introduced the saved filter feature on the server in the previous release, and it was an instant hit among users.

The CLI users, however, felt left out. With this release, we have added saved filters to pb as well. You can now save, apply, or delete filters from the CLI itself. This feature is especially useful for users who prefer to work from the terminal.

Enhancements

Partition management for streams

If you use streams with custom partitions, or have historical data with custom time partitions set at the time of stream creation, you can now manage the columns and partitions for those streams from the stream management page.

Unified filter and SQL modal

Filter Builder and SQL options are now merged into a single modal for easy switching between the two.

Delete offline ingestors

You can now delete all the entries for an offline ingestor from the cluster page. To remove a live ingestor, you need to stop the ingestor first and then delete it from the cluster page.

Copy capabilities for data

Copy column text directly from the Explore page, in both the table and JSON views.

Honor new line characters in table view

The table view now respects new line characters in log events, displaying field data across multiple lines where applicable.

CORS configuration

The server now supports configurable Cross-Origin Resource Sharing (CORS) through the environment variable P_CORS. CORS is enabled by default; users can disable it by setting P_CORS=false. Refer to the documentation for more details.
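As a quick sanity check, you can verify the CORS behavior from Python by sending a preflight request and inspecting the response headers; the URL and endpoint below are assumptions for a default local deployment.

```python
import requests

# Send a CORS preflight request to a Parseable instance and inspect the
# response. URL and endpoint are assumptions for a default local deployment.
PARSEABLE_URL = "http://localhost:8000"

resp = requests.options(
    f"{PARSEABLE_URL}/api/v1/logstream",
    headers={
        "Origin": "https://example.com",
        "Access-Control-Request-Method": "GET",
    },
)

# When CORS is enabled, the allowed origin is typically echoed back in this
# header; when disabled, the header is absent.
print(resp.status_code, resp.headers.get("Access-Control-Allow-Origin"))
```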

Security

One of our dependencies, the object_store crate, was found to have a security vulnerability. Link to the relevant CVE.

With v1.4.0, we have updated the object_store crate to version 0.10.2, which includes the fix for this vulnerability. We recommend that all users upgrade to the latest version to avoid any security risks.

Load testing

Since the last release, we have made it a point to include ingestion (load testing) performance in the release notes. We have tested the ingestion performance for this release as well, and we are happy to report that the performance has improved by 10% compared to the last release.

Parseable setup

Parseable v1.4.0 was deployed on a Kubernetes cluster in distributed mode. We set up 3 ingest nodes and 1 query node in the Parseable cluster. Each Parseable pod was allocated 2 vCPU and 4 GiB memory. We also ensured that each pod was deployed on a separate node to avoid resource contention.

Load generation

We use K6 on Kubernetes for load testing. K6 is a modern load testing tool that lets you write test scripts in JavaScript. For simpler deployment and easier scale-out, we used the K6 Operator. Refer to the steps we followed to set up K6 on Kubernetes in this blog post.

The load testing script is available in the Quest repository.

Results

Test Run 1: 1 query node, 3 ingestor nodes; 2 vCPU and 4 GiB memory per node; 15 k6 clients ingesting data; 300 batches per HTTP request; run time: 10 minutes.

Test Run 1

Test Run 2: 1 query node, 3 ingestor nodes; 3 vCPU and 4 GiB memory per node; 15 k6 clients ingesting data; 525 batches per HTTP request; run time: 10 minutes.

Test Run 2

Note: We're hard at work on a better, standardized load test for Parseable. We will share the results in the upcoming release notes. We'll also add query performance results in future releases for a better overview.

Five Drawbacks of CloudWatch - How to Switch to Parseable

· 5 min read
Shivam Soni
Guest Author

AWS CloudWatch is a popular choice for log management and monitoring, particularly for teams deeply integrated into the AWS ecosystem. However, despite its widespread use, several drawbacks make it less appealing for specific applications, especially those requiring flexibility, cost-efficiency, and high customizability.

In this article, we'll consider when to use AWS CloudWatch versus Parseable and explain how to make the switch to Parseable.

Optimize Data Transfer from Parseable with Apache Arrow Flight

· 6 min read
Nikhil Sinha
Senior Software Engineer

Written in Rust, Parseable leverages Apache Arrow and Parquet as its underlying data structures, offering high throughput and low latency without the overhead of traditional indexing methods. This makes it an ideal solution for environments that require efficient log management, whether deployed on public or private clouds, containers, VMs, or bare metal environments. This guide will delve into the integration of Arrow Flight with Parseable, providing a comprehensive setup for your client.
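As a starting point, here is a minimal Python sketch of an Arrow Flight client using pyarrow. The Flight address, port, and ticket format are assumptions for illustration, and authentication is omitted; check the Parseable documentation for the exact values your deployment expects.

```python
import pyarrow.flight as flight

# Minimal Arrow Flight client sketch. The address, port, and ticket payload
# are assumptions for illustration; authentication is omitted.
client = flight.FlightClient("grpc://localhost:8001")

# Assumption: the ticket carries the query and time range as JSON.
ticket = flight.Ticket(
    b'{"query": "SELECT * FROM frontend LIMIT 100", '
    b'"startTime": "1h", "endTime": "now"}'
)

reader = client.do_get(ticket)
table = reader.read_all()  # an in-memory pyarrow.Table
print(table.num_rows, table.schema)

# Arrow tables convert cheaply to pandas for further analysis.
df = table.to_pandas()
print(df.head())
```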

How to monitor your Parseable metadata in a Grafana dashboard

· 5 min read

As Parseable deployments in the wild are handling larger and larger volumes of logs, we needed a way to enable users to monitor their Parseable instances.

Typically, this would mean setting up Prometheus to capture Parseable ingest and query node metrics and visualizing those metrics on a Grafana dashboard. We added Prometheus metrics support in Parseable to enable this use case.

But we wanted a simpler, self-contained approach that allows users to monitor their Parseable instances without needing to set up Prometheus.

This led us to a way to store the Parseable server's internal metrics in a special log stream called pmeta. This stream keeps track of important information about all of the ingestors in the cluster, including the URL of the ingestor, its commit ID, the number of events it has processed, and its staging file location and size.
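To give a feel for what this looks like in practice, here is a hedged Python sketch that pulls recent pmeta events through Parseable's SQL query API; the URL, credentials, relative time-range syntax, and field names are assumptions for illustration.

```python
import requests

# Query the pmeta stream over Parseable's SQL query API. The URL,
# credentials, relative time-range syntax, and field names are assumptions.
PARSEABLE_URL = "http://localhost:8000"
AUTH = ("admin", "admin")  # default credentials; change for real deployments

body = {
    "query": "SELECT * FROM pmeta",
    "startTime": "10m",  # assumed relative time range: last 10 minutes
    "endTime": "now",
}

resp = requests.post(f"{PARSEABLE_URL}/api/v1/query", json=body, auth=AUTH)
resp.raise_for_status()

for event in resp.json():
    # Field names below are illustrative; inspect your own pmeta stream
    # for the exact schema.
    print(event.get("address"), event.get("parseable_events_ingested"))
```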

Load testing Parseable with K6

· 6 min read

Integrating K6 with Kubernetes allows developers to run load tests in a scalable and distributed manner. By deploying K6 in a Kubernetes cluster, you can use Kubernetes orchestration capabilities to manage and distribute the load testing across multiple nodes. This setup ensures you can simulate real-world traffic and usage patterns more accurately, providing deeper insights into your application's performance under stress.

Parseable Release v1.3.0

· 8 min read

Parseable v1.3.0 is out now! This release includes a good mix of new features, improvements, and bug fixes. In this post, we'll take a detailed look at what's new in this release.

We'd love to hear your feedback on this release. Please feel free to create an issue on our GitHub or join our Slack community.

New Features

Saved filters and queries in the explore logs view

A long-pending request from our community has been the ability to save a filter in order to return at a later date to a specific view without having to re-apply the filter from scratch. The same goes for queries.

We initially considered implementing this as a purely client-side feature, i.e. on the Console only, to deliver it more quickly. The idea was to use the browser's local store to keep a saved filter’s details and then load it from there on demand. But this approach would have been too limiting; for instance, the same user would not have been able to see their saved filters when logging in from a different browser or IP address. Also, sharing filters across users would not work and any browser event that cleared local storage would essentially mean the loss of all the saved filters, many of which are carefully created after months of analysis.

Build a robust logging system with Temporal and Parseable

· 6 min read

Temporal is a leading workflow orchestration platform. It provides a robust, durable execution platform with guarantees on workflow execution, state management, and error handling.

One of the key aspects of a production-grade application is the ability to reliably log and monitor workflow execution. The log and event data can be used for debugging, auditing, custom behavior analysis, and more. Temporal applications are no different. Temporal provides logging capabilities, but it can be challenging to manage and analyze logs at scale.

In this post, we'll see how to extend the default Temporal logging to ship logs to a Parseable instance. By integrating Parseable with Temporal you can:

  • Ingest workflow logs to create a centralized data repository.
  • Correlate events, errors, and activities across your workflows.
  • Analyze and query logs in Parseable for debugging and monitoring.
  • Set up reliable audit trails for your workflows using Parseable.
  • Set up alerts and notifications in Parseable for critical events in your workflows.

and much more.
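To make the idea concrete, here is a minimal Python sketch of a logging handler that forwards log records to Parseable's HTTP ingest endpoint. The URL, stream name, credentials, and the logger it attaches to are assumptions, and a production version would batch records on a background thread instead of posting one at a time.

```python
import json
import logging
import urllib.request

# Minimal sketch: a logging handler that ships log records to Parseable's
# HTTP ingest endpoint. URL, stream name, and credentials are assumptions.
PARSEABLE_INGEST = "http://localhost:8000/api/v1/ingest"
STREAM = "temporal_logs"

class ParseableHandler(logging.Handler):
    def emit(self, record: logging.LogRecord) -> None:
        event = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        req = urllib.request.Request(
            PARSEABLE_INGEST,
            data=json.dumps([event]).encode(),
            headers={
                "Content-Type": "application/json",
                "X-P-Stream": STREAM,  # target log stream
                # admin:admin, for local testing only
                "Authorization": "Basic YWRtaW46YWRtaW4=",
            },
            method="POST",
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except Exception:
            self.handleError(record)

# Attach to the logger your Temporal workers and workflows write to
# (assumed here to be the temporalio SDK's logger).
logging.getLogger("temporalio").addHandler(ParseableHandler())
```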

Ingesting Data to Parseable Using Pandas, A Step-by-Step Guide

· 4 min read

Managing and deriving insights from vast amounts of historical data is not just a challenge but a necessity. Imagine your team grappling with numerous log files, trying to pinpoint issues. Because the logs are stored as files, searching through them is very inefficient. This scenario is all too familiar for many developers.

Enter Parseable, a powerful solution to analyze your application logs. By integrating with pandas, the renowned Python library for data analysis, Parseable offers a seamless way to ingest and leverage historical data without the need to discard valuable logs.

In this blog post, we explore how Parseable can revolutionize your data management strategy, enabling you to unlock actionable insights from both current and archived log data effortlessly.
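As a preview of the approach, here is a minimal sketch that reads historical logs with pandas and posts them to Parseable's ingest API in batches; the file name, stream name, URL, and credentials are placeholders for illustration.

```python
import pandas as pd
import requests

# Minimal sketch: ingest a pandas DataFrame into Parseable. File name,
# stream name, URL, and credentials are placeholders for illustration.
PARSEABLE_URL = "http://localhost:8000"
STREAM = "historical_logs"
AUTH = ("admin", "admin")

# Load historical logs; a CSV here, but any source pandas can read works.
df = pd.read_csv("app_logs.csv")

# Parseable's ingest API accepts a JSON array of flat objects.
records = df.to_dict(orient="records")

# Send in batches to keep individual requests small.
BATCH = 1000
for start in range(0, len(records), BATCH):
    resp = requests.post(
        f"{PARSEABLE_URL}/api/v1/ingest",
        json=records[start:start + BATCH],
        headers={"X-P-Stream": STREAM},  # target log stream
        auth=AUTH,
    )
    resp.raise_for_status()
```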

Streamlining Boomi Container Logs with Parseable

· 6 min read
Nikhil Sinha
Senior Software Engineer

Boomi is a leading integration Platform as a Service (iPaaS) that allows you to design, deploy, and manage integration processes. However, managing logs can be challenging. Using Parseable with Boomi reduces the headache and enables the user to experience a seamless process.

Boomi logs are a goldmine, helping users get the information they need, yet there are significant challenges in accessing and interpreting them.

Traditionally, these logs are stored in files, and extracting meaningful insights often involves manual efforts that are both time-consuming and error-prone.

In this article, learn how to use Parseable with Boomi to utilize its full potential.

Improve Observability in Salesforce with Open Source Parseable

· 5 min read
Amit Malhotra
Founder @ Factorwise

Salesforce is a highly customizable and widely used CRM platform that uses observability to maintain and troubleshoot its applications. Observability in Salesforce primarily involves monitoring logs, setting up alerts, and tracking performance metrics.

The problem is that teams often lack the right insights, and the leading cause is the platform's inflexibility. This is nothing new, but how do we fix the problem?

Using an open source, flexible, and extensible tool like Parseable, along with Factorwise for Salesforce integration, you can improve observability in your Salesforce application for optimal health and performance, helping you serve customers and generate revenue. Let's see how.

Top 4 limitations for Observability in Salesforce

Salesforce offers built-in debugging tools like debug logs for tracking events, whether the logic is written in Apex or built with low-code features like Flow. However, these have significant limitations:

  • Real-time parsing challenges: Debug logs are not designed for reading in real-time, making it difficult for developers to identify and resolve issues quickly.
  • Developer console limitations: While the Developer Console can search through logs, it is not user-friendly and can be unstable, especially with large log files. Its session-based nature can lead to frequent crashes.
  • Performance impact: Enabling Debug logs in production environments can degrade performance, creating a trade-off between observability and system efficiency.
  • Low-level detail overload: Debug logs often contain low-level details that may not be relevant to developers, who typically look for application-level logs such as System.debug calls.

How we've previously addressed these challenges

Third-party services

Several third-party services have tried to address Salesforce's observability challenges in the past. However, these solutions are often not viable for all Salesforce operations teams, particularly those in medium and small businesses, due to cost constraints. As a result, many teams continue to rely on the built-in debug logs despite their limitations.

Another constraint they encounter is the need for user-friendly interfaces for log visualization and searching.

Custom tooling

Some organizations develop custom logging solutions to overcome these challenges, but they often involve storing logs within the Salesforce instance and publishing them on the Event Bus. An example of this is NebulaLogger. While effective to some extent, without a helpful user interface for log visualization and search capabilities, insights cannot be understood, evaluated, and acted upon in a way that improves observability in Salesforce. Additionally, Salesforce's native search on large text areas can be slow, hampering efficiency.

How we address Observability in Salesforce today with Parseable & Factorwise

Parseable is an open-source log analytics tool that simplifies log management and surfaces the insights teams need for a competitive edge.

Intuitive dashboard for log management

Parseable's intuitive UI and familiar SQL syntax make it simple yet powerful for developers to query logs.

Factorwise Logger generates unique transaction IDs for every invocation of custom Salesforce logic, simplifying the process of tracking and debugging specific transactions.

With Parseable, Salesforce logs can be parsed and searched in real time, enabling rapid identification and resolution of issues. This capability is a significant improvement over the traditional Salesforce Developer Console, providing a more stable and user-friendly interface for log management.

Salesforce logs debugging

The right alert for the right team at the right time

Parseable enables real-time alerts based on defined conditions, ensuring that abnormal events are promptly identified and addressed. Notifications are delivered to the right team via Slack, Alertmanager, or custom webhooks, keeping the team or person you choose informed and able to respond to potential issues sooner rather than later.

Factorwise Logger captures Salesforce governor limits on every log event generated, storing this data in Parseable. This feature enables teams to track performance bottlenecks and optimize their applications effectively.

Salesforce logs alerting

Visualize metrics that matter to your team

Parseable also supports the aggregation and visualization of business and operational metrics!

Salesforce logs to metrics

For example, teams can track the number of leads converted over various time intervals, gaining insights that are difficult to obtain using native Salesforce reports. Parseable's fast query results make real-time data analysis straightforward and efficient.

For Grafana fans and users, Parseable integrates seamlessly with this popular monitoring and observability platform. You can create real-time dashboards that provide a comprehensive view of system performance and business KPIs, enabling data-driven decision-making.

Salesforce logs visualization

Parseable x Factorwise is ideal for Salesforce log analytics

Factorwise is building a Salesforce-native logger that integrates closely with Parseable. The logger is built with detailed insights from hundreds of Salesforce users. Meanwhile, Parseable stands out as the optimal choice for Salesforce observability for several reasons:

  • Ease of use: Parseable is easy to deploy, operate, and scale, making it accessible for organizations of all sizes. Its intuitive UI and SQL-based query language reduce the learning curve and enable rapid adoption.
  • Comprehensive features: Parseable offers a range of built-in features, including real-time log parsing, alerts, RBAC (Role-Based Access Control), and soft multi-tenancy through stream-based segregation. These capabilities ensure that teams have everything they need to maintain robust observability.
  • Cost-effectiveness: For medium and small businesses, Parseable provides a cost-effective solution that doesn't require extensive resources to manage. This makes it an attractive alternative to more expensive third-party services.

Parseable x Factorwise

If you'd like to talk about how Parseable x Factorwise can help your Salesforce team, please schedule a demo.

We'd love for you to try Parseable today! Ask questions and tell us what you think by joining our community in Slack.
