
19 posts tagged with "Parseable"

Read about Parseable features, integrations, and capabilities for efficient log management and data analysis.


Five Drawbacks of CloudWatch - How to Switch to Parseable

· 5 min read
Shivam Soni
Guest Author

AWS CloudWatch is a popular choice for log management and monitoring, particularly for those deeply integrated into the AWS ecosystem. However, despite its widespread use, several drawbacks make it less appealing for specific applications, especially those requiring flexibility, cost-efficiency, and high customizability.

In this article, we'll consider when to use AWS CloudWatch versus Parseable and explain how to make the switch to Parseable.

Optimize Data Transfer from Parseable with Apache Arrow Flight

· 6 min read
Nikhil Sinha
Senior Software Engineer

Written in Rust, Parseable leverages Apache Arrow and Parquet as its underlying data structures, offering high throughput and low latency without the overhead of traditional indexing methods. This makes it an ideal solution for environments that require efficient log management, whether deployed on public or private clouds, containers, VMs, or bare metal environments. This guide will delve into the integration of Arrow Flight with Parseable, providing a comprehensive setup for your client.

How to monitor your Parseable metadata in a Grafana dashboard

· 5 min read

As Parseable deployments in the wild are handling larger and larger volumes of logs, we needed a way to enable users to monitor their Parseable instances.

Typically, this would mean setting up Prometheus to capture Parseable ingest and query node metrics and visualizing those metrics on a Grafana dashboard. We added Prometheus metrics support in Parseable to enable this use case.

But we wanted a simpler, self-contained approach that allows users to monitor their Parseable instances without needing to set up Prometheus.

This led us to figure out a way to store the Parseable server's internal metrics in a special log stream called pmeta. This stream keeps track of important information about all of the ingestors in the cluster, including each ingestor's URL, commit ID, number of events processed, and staging file location and size.
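As a rough illustration, pmeta can be queried like any other log stream over Parseable's SQL query API. The sketch below posts a query to the HTTP endpoint with default credentials; the endpoint shape, credentials, and limit are assumptions for this example rather than details from the post.

// query-pmeta.ts -- hedged sketch; endpoint shape and credentials are assumptions.
const PARSEABLE_URL = "http://localhost:8000";               // hypothetical local instance
const AUTH = Buffer.from("admin:admin").toString("base64");  // default credentials assumed

async function fetchPmeta(): Promise<void> {
  const end = new Date();
  const start = new Date(end.getTime() - 60 * 60 * 1000);    // look back one hour

  const response = await fetch(`${PARSEABLE_URL}/api/v1/query`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Basic ${AUTH}`,
    },
    body: JSON.stringify({
      query: "SELECT * FROM pmeta LIMIT 50",                 // ingestor URL, commit ID, event counts, staging info
      startTime: start.toISOString(),
      endTime: end.toISOString(),
    }),
  });

  console.log(await response.json());
}

fetchPmeta().catch(console.error);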

Load testing Parseable with K6

· 6 min read

Integrating K6 with Kubernetes allows developers to run load tests in a scalable and distributed manner. By deploying K6 in a Kubernetes cluster, you can use Kubernetes orchestration capabilities to manage and distribute the load testing across multiple nodes. This setup ensures you can simulate real-world traffic and usage patterns more accurately, providing deeper insights into your application's performance under stress.
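To give a flavour of what such a test looks like, here is a minimal k6 script that pushes JSON events at a Parseable ingestion endpoint; the ingest path, X-P-Stream header, stream name, and credentials are assumptions for illustration, not details from the original post.

// load-test.js -- minimal k6 sketch; the Parseable ingest endpoint, header, and credentials are assumed.
import http from 'k6/http';
import encoding from 'k6/encoding';
import { check, sleep } from 'k6';

export const options = {
  vus: 50,          // 50 virtual users
  duration: '1m',   // sustained for one minute
};

export default function () {
  const payload = JSON.stringify([
    { level: 'info', message: 'k6 load test event', vu: __VU, iteration: __ITER },
  ]);

  const res = http.post('http://parseable.local:8000/api/v1/ingest', payload, {
    headers: {
      'Content-Type': 'application/json',
      'X-P-Stream': 'k6demo',                                  // target log stream (assumed)
      Authorization: 'Basic ' + encoding.b64encode('admin:admin'),
    },
  });

  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}

Run through the k6 operator or as Kubernetes Jobs, the same script can be fanned out across multiple pods to generate distributed load.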

Parseable Release v1.3.0

· 8 min read

Parseable v1.3.0 is out now! This release includes a good mix of new features, improvements, and bug fixes. In this post, we'll take a detailed look at what's new in this release.

We'd love to hear your feedback on this release. Please feel free to create an issue on our GitHub or join our Slack community.

New Features

Saved filters and queries in the explore logs view

A long-standing request from our community has been the ability to save a filter, so you can return to a specific view later without having to re-apply the filter from scratch. The same goes for queries.

We initially considered implementing this as a purely client-side feature, i.e. on the Console only, to deliver it more quickly. The idea was to use the browser's local storage to keep a saved filter's details and then load it from there on demand. But this approach would have been too limiting; for instance, the same user would not have been able to see their saved filters when logging in from a different browser or IP address. Also, sharing filters across users would not work, and any browser event that cleared local storage would mean the loss of all saved filters, many of which are carefully created after months of analysis.

Build a robust logging system with Temporal and Parseable

· 6 min read

Picture this: You've built a workflow application using Temporal, and it's humming along nicely. But when things go sideways (as they inevitably do), you find yourself drowning in a sea of logs, desperately searching for that needle in the haystack. Sound familiar?

Today, we're diving into workflow logging, combining the power of Temporal with the flexibility of Parseable. By the end of this guide, you'll have a robust logging system that will make debugging feel less like archeology and more like time travel.

Let's get started.

Integrating Temporal and Parseable

Integrating Temporal with Parseable creates a robust system for workflow logging. Temporal handles the workflow orchestration, ensuring that tasks are executed in the right order and can recover from failures. Parseable captures and stores the logs generated by these workflows, providing a platform for analyzing and monitoring the logs.

With the integration of Temporal and Parseable, debugging workflows becomes more straightforward. Temporal's ability to manage state and retries, combined with Parseable's efficient log storage and querying capabilities, allows developers to quickly identify and resolve issues.
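To make this concrete, here is a hedged sketch of the usual pattern: activities (which can perform I/O) write through a shared Winston logger that forwards to Parseable, while the workflow only orchestrates them. The file names, task queue, and logger module below are hypothetical and are not taken from the sample repository.

// activities.ts -- hypothetical activity that logs through a shared Winston logger
import { logger } from './logging'; // assumed shared logger, sketched later in this guide

export async function chargeCustomer(orderId: string): Promise<void> {
  logger.info('charging customer', { orderId, activity: 'chargeCustomer' });
  // ... business logic ...
}

// workflows.ts -- the workflow only orchestrates; activities are invoked via a proxy
import { proxyActivities } from '@temporalio/workflow';
import type * as activities from './activities';

const { chargeCustomer } = proxyActivities<typeof activities>({
  startToCloseTimeout: '1 minute',
});

export async function orderWorkflow(orderId: string): Promise<void> {
  await chargeCustomer(orderId);
}

Keeping logging inside activities rather than workflow code helps respect Temporal's requirement that workflow code stay deterministic, while still giving Parseable a complete record of each step.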

Setting up your environment

Before we dive into the nitty-gritty, let's make sure we have everything set up:

  1. Ensure you have Temporal installed and running (either locally or in a cluster, or you can use the Cloud version).
  2. Install Parseable. We'll use the Quick Start version for simplicity.
  3. Have your Temporal workflow application ready. This application will use Temporal to run tasks and workflows, and we'll set it up to send logs to Parseable.

Starting the Parseable server

Let's get Parseable up and running:

docker run -p 8000:8000 \
-v /tmp/parseable/data:/parseable/data \
-v /tmp/parseable/staging:/parseable/staging \
-e P_FS_DIR=/parseable/data \
-e P_STAGING_DIR=/parseable/staging \
containers.parseable.com/parseable/parseable:latest \
parseable local-store

This command pulls the latest Parseable image and runs it. It exposes port 8000 and mounts local directories for data and staging persistence. This setup ensures that Parseable stores its data on your local machine and is accessible through port 8000 in your web browser.

Starting the Temporal server

Assuming your Temporal server is installed as per the previous steps, let's start the Temporal development server and create our first workflow:

temporal server start-dev

The Temporal server listens on localhost:7233, and you can access the Web UI at http://localhost:8233. The Temporal Web UI provides a comprehensive interface for monitoring and managing workflows, making it easier to visualize workflow states, debug issues, and understand the overall health of your system.

Next, let's get our sample application cloned and running!

Setting up the Demo Workflow application

In this example, we'll use the TypeScript implementation, but feel free to choose any stack you prefer. Let's first clone the repository locally. Open your terminal and run:

git clone https://github.com/parseablehq/blog-samples.git
cd blog-samples/temporal-workflow-blog

Cloning the repository gives us a pre-configured setup for a Temporal application integrated with Parseable. The example includes sample workflows and logging configurations, which can serve as a reference for your own applications. This step is crucial for getting hands-on experience with the integration process.

Installing the packages

Install the required Node.js packages using the command:

npm install

Integrating parseable-winston with our custom logger

We'll use the parseable-winston package to forward the logs generated and captured by Winston to the Parseable instance.

You'll find this configuration in the logging.ts file in the repository. Winston is a versatile logging library for Node.js, offering various transports for log data, including files, databases, and third-party services like Parseable. By integrating parseable-winston, we ensure that logs are seamlessly forwarded to Parseable, allowing for centralized log management and analysis.

Inside the parseable function, you can change the following values in the .env file as per your requirements:

  • PARSEABLE_LOGS_URL: Replace this with your Parseable URL. Example: https://demo.parseable.com/api/v1/logstream.
  • PARSEABLE_LOGS_USERNAME: Replace this with your username.
  • PARSEABLE_LOGS_PASSWORD: Replace this with your password.
  • PARSEABLE_LOGS_LOGSTREAM: Enter the logstream in which you want to ingest the workflow logs.

The environment variables allow for flexible configuration of the logging system.
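For reference, a logging.ts along these lines is what the integration typically looks like; treat the transport option names as assumptions that mirror the environment variables above, not a verbatim copy of the repository's file.

// logging.ts -- sketch of a Winston logger using the parseable-winston transport;
// option names mirror the environment variables above and are assumptions here.
import winston from 'winston';
import { ParseableTransport } from 'parseable-winston';

const parseable = new ParseableTransport({
  url: process.env.PARSEABLE_LOGS_URL,            // e.g. https://demo.parseable.com/api/v1/logstream
  username: process.env.PARSEABLE_LOGS_USERNAME,
  password: process.env.PARSEABLE_LOGS_PASSWORD,
  logstream: process.env.PARSEABLE_LOGS_LOGSTREAM,
});

export const logger = winston.createLogger({
  level: 'info',
  transports: [parseable, new winston.transports.Console()],
  defaultMeta: { service: 'temporal-workflow-demo' }, // hypothetical metadata, adjust as needed
});

Any code that imports logger from this module now ships its log events to Parseable and echoes them to the console.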

Starting the Worker

The Worker Process is where Workflow Functions and Activity Functions are executed. To start the worker, run the command:

npm run start.watch

This will start the worker, which will begin generating logs and interacting with the workflow. Now, to run the workflow, open a separate terminal and type:

npm run workflow

Once you run the workflow, you should see the worker's log output in your terminal. This two-step process keeps the worker process and the workflow execution independent, allowing for better management and debugging. The start.watch command starts the worker in watch mode, meaning it will automatically reload when the code changes, which is particularly useful during development.
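Under the hood, those two npm scripts roughly correspond to a worker process and a workflow starter like the sketch below; the task queue, workflow name, and file paths are hypothetical placeholders rather than the repository's exact code.

// worker.ts -- roughly what "npm run start.watch" executes (sketch; names are placeholders)
import { Worker } from '@temporalio/worker';
import * as activities from './activities';

async function runWorker(): Promise<void> {
  const worker = await Worker.create({
    workflowsPath: require.resolve('./workflows'), // workflow code bundled by the SDK
    activities,
    taskQueue: 'logging-demo',
  });
  await worker.run();
}
runWorker().catch((err) => { console.error(err); process.exit(1); });

// client.ts -- roughly what "npm run workflow" executes (sketch)
import { Connection, Client } from '@temporalio/client';
import { orderWorkflow } from './workflows';

async function runWorkflow(): Promise<void> {
  const connection = await Connection.connect({ address: 'localhost:7233' });
  const client = new Client({ connection });
  await client.workflow.execute(orderWorkflow, {
    taskQueue: 'logging-demo',   // must match the worker's task queue
    workflowId: 'order-' + Date.now(),
    args: ['order-123'],
  });
}
runWorkflow().catch((err) => { console.error(err); process.exit(1); });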

Log ingestion in Parseable

We have set up everything required to ingest logs into Parseable. Let's log in to the Parseable dashboard and check whether the logs are being ingested. You should see the logs arriving in your log stream. The Parseable dashboard provides a user-friendly interface to search, filter, and analyze logs. By confirming that logs are being ingested correctly, you ensure that your logging setup is functioning as expected, enabling effective debugging and monitoring.

Additional tips

  1. Configuring Alerts: Set up alerts in Parseable to notify you of specific events or errors in your workflows.
  2. Custom Log Streams: Create custom log streams for different parts of your application to better organize your logs.
  3. Advanced Queries: Use Parseable's query language to filter and search logs based on various criteria, making troubleshooting easier.

Conclusion

Congratulations! You've just leveled up your Temporal workflow observability game. By integrating Parseable, you've created a powerful logging system that allows for easy querying, analysis, and alerting. Remember, this is just the tip of the iceberg. As you become more familiar with Parseable, you can set up ingestion for more of your applications and fine-tune your alerts to match your specific needs.

Effective logging and observability are crucial for maintaining the health and performance of your workflow applications. By following the steps outlined in this guide, you’ve taken a significant step towards achieving robust observability. With Parseable’s advanced features and Temporal’s powerful workflow management capabilities, you can ensure that your applications run smoothly and efficiently.

Ingesting Data to Parseable Using Pandas: A Step-by-Step Guide

· 4 min read

Managing and deriving insights from vast amounts of historical data is not just a challenge but a necessity. Imagine your team grappling with numerous log files, trying to pinpoint issues. But because the logs are stored as files, searching through them is very inefficient. This scenario is all too familiar for many developers.

Enter Parseable, a powerful solution to analyze your application logs. By integrating with pandas, the renowned Python library for data analysis, Parseable offers a seamless way to ingest and leverage historical data without the need to discard valuable logs.

In this blog post, we explore how Parseable can revolutionize your data management strategy, enabling you to unlock actionable insights from both current and archived log data effortlessly.

Streamlining Boomi Container Logs with Parseable

· 6 min read
Nikhil Sinha
Senior Software Engineer

Boomi is a leading integration Platform as a Service (iPaaS) that allows you to design, deploy, and manage integration processes. However, managing logs can be challenging. Using Parseable with Boomi reduces the headache and enables the user to experience a seamless process.

Boomi logs are a goldmine of information, helping users get the information they need, yet there are significant challenges in accessing and interpreting the logs.

Traditionally, these logs are stored in files, and extracting meaningful insights often involves manual efforts that are both time-consuming and error-prone.

In this article, learn how to use Parseable with Boomi to utilize its full potential.

Improve Observability in Salesforce with Open Source Parseable

· 5 min read
Amit Malhotra
Founder @ Factorwise

Salesforce is a highly customizable and widely used CRM platform that uses observability to maintain and troubleshoot its applications. Observability in Salesforce primarily involves monitoring logs, setting up alerts, and tracking performance metrics.

The problem is that these tools rarely deliver enough of the right insights, and the leading cause is the platform's inflexibility. This is nothing new, but how do we fix the problem?

Using an open source, flexible, and extensible tool like Parseable, along with Factorwise for Salesforce integration, you can improve observability in your Salesforce application for optimal health and performance, helping you serve customers and generate revenue. Let's see how.

Top 4 limitations for Observability in Salesforce

Salesforce offers built-in debugging tools, such as debug log dumps, for tracking events in code written in Apex or in low-code features like Flow. However, this approach has significant limitations:

  • Real-time parsing challenges: Debug logs are not designed for reading in real-time, making it difficult for developers to identify and resolve issues quickly.
  • Developer console limitations: While the Developer Console can search through logs, it is not user-friendly and can be unstable, especially with large log files. Its session-based nature can lead to frequent crashes.
  • Performance impact: Enabling Debug logs in production environments can degrade performance, creating a trade-off between observability and system efficiency.
  • Low-level detail overload: Debug logs often contain low-level details that may not be relevant to developers, who typically look for application-level logs such as System.debug calls.

How we've previously addressed these challenges

Third-party services

Several third-party services have tried to address Salesforce's observability challenges in the past. However, these solutions are often not viable for all Salesforce operations teams, particularly those in medium and small businesses, due to cost constraints. As a result, many teams continue to rely on the built-in debug logs despite their limitations.

Another constraint is the lack of user-friendly interfaces for log visualization and search.

Custom tooling

Some organizations develop custom logging solutions to overcome these challenges, but these often involve storing logs within the Salesforce instance and publishing them on the Event Bus; NebulaLogger is one example. While effective to some extent, without a helpful user interface for log visualization and search, insights cannot be understood, evaluated, and acted upon in a way that improves observability in Salesforce. Additionally, Salesforce's native search on large text areas can be slow, hampering efficiency.

How we address Observability in Salesforce today with Parseable & Factorwise

Parseable is an open source log analytics tool that simplifies log management and surfaces the insights teams need for a competitive edge.

Intuitive dashboard for log management

Parseable's intuitive UI and familiar SQL syntax make it simple yet powerful for developers to query logs.

Factorwise Logger generates unique transaction IDs for every invocation of custom Salesforce logic, simplifying the process of tracking and debugging specific transactions.

With Parseable, Salesforce logs can be parsed and searched in real time, enabling rapid identification and resolution of issues. This capability is a significant improvement over the traditional Salesforce Developer Console, providing a more stable and user-friendly interface for log management.

Salesforce logs debugging

The right alert for the right team at the right time

Parseable enables real-time alerts based on defined conditions, ensuring that abnormal events are promptly identified and addressed. Notifications are delivered to the right team via Slack, Alertmanager, or custom webhooks, keeping the team or person you choose informed and able to respond sooner rather than later to potential issues.

Factorwise Logger captures Salesforce governor limits on every log event generated, storing this data in Parseable. This feature enables teams to track performance bottlenecks and optimize their applications effectively.

Salesforce logs alerting

Visualize metrics that matter to your team

Parseable also supports the aggregation and visualization of business and operational metrics!

Salesforce logs to metrics

For example, teams can track the number of leads converted over various time intervals, gaining insights that are difficult to obtain using native Salesforce reports. Parseable's fast query results make real-time data analysis straightforward and efficient.

For Grafana fans and users, Parseable integrates seamlessly with this popular platform for monitoring and observability to create real-time dashboards that provide a comprehensive view of system performance and business KPIs, enabling data-driven decision-making.

Salesforce logs visualization

Parseable x Factorwise is ideal for Salesforce log analytics

Factorwise is building a Salesforce-native logger that integrates very well with Parseable. This logger is built with detailed insights from hundreds of Salesforce users. Meanwhile, Parseable stands out as the optimal choice for Salesforce observability for several reasons:

  • Ease of use: Parseable is easy to deploy, operate, and scale, making it accessible for organizations of all sizes. Its intuitive UI and SQL-based query language reduce the learning curve and enable rapid adoption.
  • Comprehensive features: Parseable offers a range of built-in features, including real-time log parsing, alerts, RBAC (Role-Based Access Control), and soft multi-tenancy through stream-based segregation. These capabilities ensure that teams have everything they need to maintain robust observability.
  • Cost-effectiveness: For medium and small businesses, Parseable provides a cost-effective solution that doesn't require extensive resources to manage. This makes it an attractive alternative to more expensive third-party services.

Parseable x Factorwise

If you'd like to talk about how Parseable x Factorwise can help your Salesforce team, please schedule a demo.

We'd love for you to try Parseable today! Ask questions and tell us what you think by joining our community in Slack.

What are the Best Practices for Logging?

· 7 min read

If you are a programmer, you have probably come across various mainstream practices, one of them being logging. The process acts as your silent guardian, recording the changes and steps your application takes. While there is no standardized rule book to refer to, we have compiled a few rules of thumb to start your journey with. Effective logging isn't just about collecting data; it involves a well-defined strategy. Consider this blog post your beginner's guide to logging practices.
