
Apache Pinot vs Elasticsearch

Compare and contrast Apache Pinot and Elasticsearch by architecture, ingestion, queries, performance, and scalability.


Apache Pinot vs Elasticsearch Architecture

Deployment model
• Apache Pinot: PaaS or self-managed
• Elasticsearch: On-prem, PaaS options

Use of storage hierarchy
• Apache Pinot: Hot storage plus a Deep Store for backup and restore operations
• Elasticsearch: Hot, warm, and cold storage on disk; frozen storage on cloud storage

Isolation of ingest and query
• Apache Pinot: No
• Elasticsearch: No; there are dedicated ingest nodes, but indexing, compaction, and updates occur on the data nodes

Separation of compute and storage
• Apache Pinot: No, although StarTree separates compute and cloud storage, an important consideration for those weighing StarTree against open-source Pinot
• Elasticsearch: No

Isolation for multiple applications
• Apache Pinot: Full isolation with replication
• Elasticsearch: Full isolation with replication

Pinot is a real-time distributed OLAP datastore that ingests both batch and streaming data. It has a distributed systems architecture that scales both horizontally and vertically, but unlike alternative OLAP databases, it does not decouple storage and compute. It supports both self-managed and PaaS options.

Elasticsearch is an open-source distributed search engine built on Apache Lucene, a full text search library. Elasticsearch is a distributed system, which means that it is designed to operate across multiple nodes, each responsible for a part of the data.


Apache Pinot vs Elasticsearch Ingestion

Data sources
• Apache Pinot: Streaming (AWS Kinesis, Apache Kafka); batch (cloud storage, PostgreSQL, MySQL, Snowflake, Google BigQuery); file upload; write API
• Elasticsearch: Logstash JDBC input plugin for relational databases; open-source Kafka plugin or the Kafka Elasticsearch Service Sink Connector (available only to managed Confluent and Elasticsearch); REST APIs or client libraries to sync data directly from the application

Semi-structured data
• Apache Pinot: Supports a JSON column type, but JSON must first be converted to a string
• Elasticsearch: Yes; ingests JSON and XML without a predefined schema

Transformations and rollups
• Apache Pinot: Yes
• Elasticsearch: Yes; ingest pipelines can be configured to remove fields, extract values from text, and enrich data, and they require ingest nodes in the cluster. Rolling up historical data is in technical preview

Pinot supports high-performance ingest from streaming data sources. Each table is either offline or real time. Real-time tables have a smaller retention period and scale based on ingestion rate while offline tables have a larger retention period and scale based on the amount of data. In order to persistently store the generated segments that make up a table, you will need to change controller and server configs to add deep storage.
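As a rough sketch of what setting up streaming ingest looks like, the snippet below creates a real-time table that consumes from Kafka by posting a table config to the Pinot controller. The table name, topic, ports, and streamConfigs keys are illustrative and vary by Pinot version; deep store settings would additionally go into the controller and server configuration files.

```python
# Sketch: create a Pinot REALTIME table that ingests from Kafka via the
# controller REST API. Names, ports, and config keys are illustrative and
# may differ across Pinot versions. Assumes a matching "events" schema
# has already been created on the controller.
import requests

realtime_table = {
    "tableName": "events",
    "tableType": "REALTIME",
    "segmentsConfig": {
        "timeColumnName": "ts",
        "schemaName": "events",
        "replicasPerPartition": "1",
        "retentionTimeUnit": "DAYS",
        "retentionTimeValue": "3",   # real-time tables keep a shorter window
    },
    "tenants": {},
    "tableIndexConfig": {
        "loadMode": "MMAP",
        "streamConfigs": {           # streaming source definition
            "streamType": "kafka",
            "stream.kafka.topic.name": "events",
            "stream.kafka.broker.list": "kafka:9092",
            "stream.kafka.consumer.type": "lowlevel",
            "stream.kafka.decoder.class.name":
                "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
        },
    },
    "metadata": {},
}

# The controller validates the config and asks servers to start consuming.
resp = requests.post("http://localhost:9000/tables", json=realtime_table)
resp.raise_for_status()
```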

Elasticsearch has a number of integrations as well as a REST API. It is a NoSQL database and natively supports semi-structured data. Transformations typically occur upstream so that data can be modeled for optimal performance before it is indexed in Elasticsearch.
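For comparison, a minimal sketch of ingesting a semi-structured document through Elasticsearch's REST API using the official Python client; the index name and document are illustrative, and no schema is declared up front.

```python
# Sketch: index a semi-structured JSON document with the official Python
# client; dynamic mapping infers field types, so no schema is required.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # local cluster assumed

doc = {
    "user": "jane",
    "action": "checkout",
    "cart": {"items": 3, "total": 42.50},     # nested JSON is ingested as-is
    "@timestamp": "2024-01-01T00:00:00Z",
}

# Create or replace the document; Elasticsearch infers field mappings.
es.index(index="app-events", id="evt-1", document=doc)
```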


Apache Pinot vs Elasticsearch Performance

Updates
• Apache Pinot: By default, all data in Pinot is immutable; upserts are supported only for streaming ingest, with no support for upserts on data using the star-tree index or on out-of-order events
• Elasticsearch: The Update API can update, delete, or skip modifying a document; the entire document must be reindexed, as in-place updates are not supported

Indexing
• Apache Pinot: Manually configured; forward index, inverted index, star-tree index (columnar), bloom filter, range index, search index, JSON index, geospatial index, timestamp index
• Elasticsearch: Inverted index

Query latency
• Apache Pinot: 50-1,000 ms queries on 100s of TBs
• Elasticsearch: 50-1,000 ms queries on 100s of TBs

Storage format
• Apache Pinot: Columnar format with additional indexes to perform fast filtering, aggregation, and group by
• Elasticsearch: JSON documents

Streaming ingest
• Apache Pinot: 1-2 second ingest for streaming data
• Elasticsearch: Ingests on a per-record or batch basis; data latency on a per-record basis is typically 1-2 seconds

Like its competitors, Pinot can achieve sub-second query latency at high concurrency. However, this level of performance requires tuning, management, and deep expertise. The PaaS versions of Pinot address some of these issues relative to the open-source version, but they still require expertise and make tradeoffs that affect query performance.
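The indexes listed in the table above are declared manually in the Pinot table config. The sketch below shows roughly what that configuration looks like; the column names and exact star-tree options are illustrative and depend on the Pinot version.

```python
# Sketch: Pinot indexes are declared by hand inside the table config's
# tableIndexConfig section. Column names and options are illustrative.
table_index_config = {
    "loadMode": "MMAP",
    "invertedIndexColumns": ["country", "device"],   # fast filtering
    "rangeIndexColumns": ["latency_ms"],             # range predicates
    "bloomFilterColumns": ["user_id"],               # prune segments on point lookups
    "starTreeIndexConfigs": [
        {
            # pre-aggregates along these dimensions for group-by queries
            "dimensionsSplitOrder": ["country", "device"],
            "functionColumnPairs": ["SUM__latency_ms", "COUNT__*"],
            "maxLeafRecords": 10000,
        }
    ],
}
```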

Elasticsearch is a search engine that utilizes an inverted index. Although this approach leads to storage amplification, it also enables low-latency queries that demand less computation. Elasticsearch is tailored to accommodate large scale, append-only data such as logs, events, and metrics. To manage frequently updated data, users often utilize the Bulk API to minimize computational costs and ensure consistent query performance.
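A minimal sketch of that pattern with the Python client's bulk helper, assuming an illustrative index name and synthetic documents:

```python
# Sketch: batch writes with the bulk helper to amortize indexing cost.
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

actions = (
    {"_index": "app-events", "_id": str(i), "_source": {"value": i}}
    for i in range(10_000)
)

# One bulk request replaces 10,000 individual index calls.
bulk(es, actions)
```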


Apache Pinot vs Elasticsearch Queries

Joins
• Apache Pinot: SQL joins limited to inner join, left outer join, and semi-join
• Elasticsearch: No; need to use workarounds including data denormalization, application-side joins, nested objects, or parent-child relationships

Query language
• Apache Pinot: SQL
• Elasticsearch: DSL (domain-specific language)

Developer tooling
• Apache Pinot: API for querying SQL via a POST command; clients for JDBC, Java, Python, and Golang; integrations with Trino and Presto
• Elasticsearch: REST API; clients for Java, JavaScript, Go, .NET, PHP, Perl, Python, Ruby, and Rust

Visualization tools
• Apache Pinot: Integrations with Tableau and Superset
• Elasticsearch: Kibana; Power BI, Qlik, Tableau

In Pinot, SQL queries are received by brokers, which scatter the request across the real-time and offline servers. The servers process their portions of the query and return results to the broker, which merges them and responds with the final result. Joins are limited, as is support for UDFs and subqueries, making Pinot more or less useful depending on the use case.
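As a sketch, a SQL query can be sent to the broker either by POSTing to its /query/sql endpoint or through a client such as pinotdb; the host, port, table, and columns below are illustrative.

```python
# Sketch: query the Pinot broker with the pinotdb DB-API client.
# 8099 is the default broker query port; table and columns are illustrative.
from pinotdb import connect

conn = connect(host="localhost", port=8099, path="/query/sql", scheme="http")
curs = conn.cursor()
curs.execute("""
    SELECT country, COUNT(*) AS events
    FROM events
    WHERE ts > ago('PT1H')
    GROUP BY country
    ORDER BY events DESC
    LIMIT 10
""")
for row in curs:
    print(row)
```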

Elasticsearch has its own domain-specific language (DSL) based on JSON. Joins are not a first-class citizen in Elasticsearch, requiring a number of complex and expensive workarounds. Elasticsearch is known for its developer tooling and supports a number of client libraries. Kibana is the visualization layer for Elasticsearch and is frequently used for log analytics and monitoring.
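A sketch of a comparable aggregation expressed in the query DSL via the Python client; the index and field names are illustrative.

```python
# Sketch: a terms aggregation over the last hour, written in the JSON DSL.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="app-events",
    size=0,                                   # aggregation only, no hits
    query={"range": {"@timestamp": {"gte": "now-1h"}}},
    aggs={"by_country": {"terms": {"field": "country", "size": 10}}},
)
for bucket in resp["aggregations"]["by_country"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```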


Apache Pinot vs Elasticsearch Scalability

Vertical scaling
• Apache Pinot: All four Pinot node types can be resized manually
• Elasticsearch: Manually resize machines

Horizontal scaling
• Apache Pinot: Users can add additional Pinot nodes to scale horizontally; rebalancing is manual
• Elasticsearch: Horizontally scalable by adding nodes to the cluster. When using managed Elastic, autoscaling policies can be used to self-monitor cluster health, and it is the operator's responsibility to update resource allocations either manually or using APIs. Elasticsearch rebalances data automatically, obeying shard allocation rules. There are many cluster-level operations that need to be monitored when scaling

Pinot allows for vertical scaling by increasing CPU and memory for each node as well as horizontal scaling by adding additional nodes. Capacity planning is a time-consuming, iterative, and manual task. It involves load testing and tuning across multiple vectors including read QPS, write QPS, number of streaming partitions, daily data size, retention period, types of workloads, number and type of segments, and much more.
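Because rebalancing is manual, adding servers is typically followed by an explicit rebalance call against the controller REST API. The sketch below assumes an illustrative table name and the default controller port; exact parameters vary by Pinot version.

```python
# Sketch: trigger a segment rebalance after adding Pinot servers.
# Endpoint shape and parameters may differ across Pinot versions.
import requests

resp = requests.post(
    "http://localhost:9000/tables/events/rebalance",
    params={"type": "REALTIME", "dryRun": "false"},
)
resp.raise_for_status()
print(resp.json())   # job status describing the new segment assignment
```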

Elasticsearch is horizontally scalable and can scale by adding more nodes to the cluster. Its tightly coupled architecture means that compute and storage scale together for performance, which often results in resource contention and overprovisioning. Scaling Elasticsearch often requires deep expertise, as there are many levels of the system that need to be managed: the server, operating system, network, and software.
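A sketch of the kind of operational checks and settings involved when scaling a cluster out, using the Python client; the shard threshold shown is illustrative.

```python
# Sketch: inspect cluster health and adjust shard allocation while scaling.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

health = es.cluster.health()                  # shard-level cluster status
print(health["status"], health["number_of_nodes"], health["unassigned_shards"])

# Shard allocation can be tuned so automatic rebalancing obeys per-node limits.
es.cluster.put_settings(
    persistent={"cluster.routing.allocation.total_shards_per_node": 1000}
)
```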
