
Elasticsearch vs Apache Pinot

Compare and contrast Elasticsearch and Apache Pinot by architecture, ingestion, queries, performance, and scalability.

Compare Elasticsearch to Rockset here

Compare Apache Pinot to Rockset here

Elasticsearch vs Apache Pinot Architecture

| Architecture | Elasticsearch | Apache Pinot |
| --- | --- | --- |
| Deployment model | On-prem, PaaS options | PaaS or self-managed |
| Use of storage hierarchy | • Hot, warm and cold storage on disk • Frozen storage on cloud storage | Hot storage plus Deep Store for backup and restore operations |
| Isolation of ingest and query | No - there are dedicated ingest nodes, but indexing, compaction and updates occur on the data nodes | No |
| Separation of compute and storage | No | No, although StarTree separates compute and cloud storage, an important consideration when weighing StarTree vs. open-source Pinot |
| Isolation for multiple applications | Full isolation with replication | Full isolation with replication |

Elasticsearch is an open-source distributed search engine built on Apache Lucene, a full-text search library. Elasticsearch is a distributed system, which means that it is designed to operate across multiple nodes, each responsible for a part of the data.
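
For instance, here is a minimal sketch of indexing and searching a document with the official Python client (elasticsearch-py 8.x style); the cluster URL, index name and document fields are placeholders for illustration, not details from the comparison above.

```python
# A minimal sketch using the official elasticsearch-py client; the URL,
# index name and fields are hypothetical placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local single-node cluster

# Index a document; Elasticsearch routes it to a shard on one of the nodes.
es.index(index="logs", id="1", document={
    "service": "checkout",
    "level": "error",
    "message": "payment gateway timeout",
})

# Full-text search served by Lucene's inverted index on each shard.
resp = es.search(index="logs", query={"match": {"message": "timeout"}})
print(resp["hits"]["total"])
```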

Pinot is a real-time distributed OLAP datastore that ingests both batch and streaming data. It has a distributed systems architecture that scales both horizontally and vertically, but unlike alternative OLAP databases, it does not decouple storage and compute. It supports both self-managed and PaaS options.
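
As a concrete illustration of how datasets are defined in Pinot, below is a hedged sketch of registering a table schema with the controller's REST API; the controller URL, schema name and field specs are assumptions for illustration, and exact options may vary by Pinot version.

```python
# A minimal sketch of registering a Pinot schema via the controller REST API.
# Schema name, fields and the controller address are hypothetical.
import requests

schema = {
    "schemaName": "clicks",
    "dimensionFieldSpecs": [{"name": "userId", "dataType": "STRING"}],
    "metricFieldSpecs": [{"name": "clickCount", "dataType": "LONG"}],
    "dateTimeFieldSpecs": [{
        "name": "ts",
        "dataType": "LONG",
        "format": "1:MILLISECONDS:EPOCH",
        "granularity": "1:MILLISECONDS",
    }],
}

# The controller exposes a /schemas endpoint for registering table schemas.
resp = requests.post("http://localhost:9000/schemas", json=schema)
resp.raise_for_status()
```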


Elasticsearch vs Apache Pinot Ingestion

| Ingestion | Elasticsearch | Apache Pinot |
| --- | --- | --- |
| Data sources | • Logstash JDBC input plugin for relational databases • Open-source Kafka plugin or Kafka Elasticsearch Service Sink Connector (available only to managed Confluent and Elasticsearch) • REST APIs or client libraries to sync data directly from the application | • Streaming: AWS Kinesis, Apache Kafka • Batch: cloud storage, PostgreSQL, MySQL, Snowflake, Google BigQuery • File upload • Write API |
| Semi-structured data | Yes - ingests JSON and XML without a predefined schema | Support for JSON column types, but JSON must first be converted to a string |
| Transformations and rollups | Yes - ingest pipelines can be configured to remove fields, extract values from text and enrich data. Ingest pipelines require ingest nodes in the cluster. Rolling up historical data is in technical preview | Yes |

Elasticsearch has a number of integrations as well as a REST API. It is a NoSQL database and natively supports semi-structured data. Transformations typically occur upstream so that data can be modeled for optimal performance before it is indexed in Elasticsearch.
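
To illustrate the transformations row above, here is a minimal sketch of defining an ingest pipeline with the Python client; the pipeline id, processors and field names are hypothetical.

```python
# A hedged sketch of an ingest pipeline that drops a field and extracts a value;
# the pipeline id, fields and grok pattern are made up for illustration.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.ingest.put_pipeline(
    id="clean-logs",
    description="Drop noisy fields and extract the status code",
    processors=[
        {"remove": {"field": "debug_blob", "ignore_missing": True}},
        {"grok": {"field": "message", "patterns": ["%{NUMBER:status_code}"]}},
    ],
)

# Documents indexed with this pipeline are transformed on ingest nodes.
es.index(index="logs", document={"message": "502 upstream error"}, pipeline="clean-logs")
```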

Pinot supports high-performance ingest from streaming data sources. Each table is either offline or real time. Real-time tables have a smaller retention period and scale based on ingestion rate while offline tables have a larger retention period and scale based on the amount of data. In order to persistently store the generated segments that make up a table, you will need to change controller and server configs to add deep storage.
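
As a sketch of what a real-time table looks like in practice, the following posts a hypothetical REALTIME table config that consumes from Kafka to the controller's /tables endpoint; the retention values, topic and plugin class names are illustrative and may differ across Pinot versions, and deep store is configured separately in the controller and server configs as noted above.

```python
# A hedged sketch of a REALTIME table config consuming from Kafka; names and
# settings are illustrative, not a definitive configuration.
import requests

table_config = {
    "tableName": "clicks",
    "tableType": "REALTIME",
    "segmentsConfig": {
        "schemaName": "clicks",
        "timeColumnName": "ts",
        "retentionTimeUnit": "DAYS",
        "retentionTimeValue": "3",   # real-time tables typically keep a short retention window
        "replicasPerPartition": "1",
    },
    "tenants": {},
    "tableIndexConfig": {
        "loadMode": "MMAP",
        "streamConfigs": {
            "streamType": "kafka",
            "stream.kafka.topic.name": "clicks",
            "stream.kafka.broker.list": "localhost:9092",
            "stream.kafka.consumer.type": "lowlevel",
            "stream.kafka.consumer.factory.class.name":
                "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory",
            "stream.kafka.decoder.class.name":
                "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
        },
    },
    "metadata": {},
}

# The controller's /tables endpoint creates the table from this config.
requests.post("http://localhost:9000/tables", json=table_config).raise_for_status()
```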


Elasticsearch vs Apache Pinot Performance

| Performance | Elasticsearch | Apache Pinot |
| --- | --- | --- |
| Updates | Update API can update, delete or skip modifying the document. The entire document must be reindexed; in-place updates are not supported | • By default, all data in Pinot is immutable • Upserts only supported for streaming ingest • No support for upserts on data using the star-tree index • No support for upserts on out-of-order events |
| Indexing | Inverted index | Manually configured • Forward index, inverted index, star-tree index (columnar), bloom filter, range index, search index, JSON index, geospatial index, timestamp index |
| Query latency | 50-1000 ms queries on 100s of TBs | 50-1000 ms queries on 100s of TBs |
| Storage format | JSON documents | Columnar format with additional indexes for fast filtering, aggregation and group by |
| Streaming ingest | • Ingests on a per-record or batch basis • Data latency on a per-record basis is typically 1-2 seconds | 1-2 second ingest for streaming data |

Elasticsearch is a search engine that utilizes an inverted index. Although this approach leads to storage amplification, it also enables low-latency queries that demand less computation. Elasticsearch is tailored to accommodate large scale, append-only data such as logs, events, and metrics. To manage frequently updated data, users often utilize the Bulk API to minimize computational costs and ensure consistent query performance.
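
For example, here is a minimal sketch of batching writes with the bulk helper in the Python client; the index name and generated documents are placeholders.

```python
# A minimal sketch of amortizing indexing cost with the Bulk API helper;
# the index and documents are hypothetical.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

actions = (
    {"_index": "metrics", "_source": {"host": f"web-{i}", "cpu": 0.42}}
    for i in range(10_000)
)

# One bulk request indexes many documents instead of issuing a round trip per document.
helpers.bulk(es, actions)
```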

Like its competitors, Pinot can achieve sub-second query latency at high concurrency. However, this level of performance requires tuning, management, and deep expertise. Compared with the open-source version, the PaaS versions of Pinot address some of these issues, but similarly require expertise while making tradeoffs affecting query performance.
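
As an example of the tuning involved, the fragment below sketches the kind of tableIndexConfig changes used to add an inverted index and a star-tree index; the column names, aggregation pairs and threshold are assumptions, and the exact options vary by Pinot version.

```python
# A hedged illustration of Pinot index tuning; column names and thresholds are
# made up, and the fragment would be merged into a full table config.
table_index_config = {
    "invertedIndexColumns": ["userId"],
    "starTreeIndexConfigs": [{
        # Pre-aggregate along these dimensions to accelerate group-by queries.
        "dimensionsSplitOrder": ["country", "browser"],
        "functionColumnPairs": ["SUM__clickCount"],
        "maxLeafRecords": 10000,
    }],
}
# Applied by updating the table config through the controller
# (PUT /tables/{tableName}) and reloading segments.
```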


Elasticsearch vs Apache Pinot Queries

| Queries | Elasticsearch | Apache Pinot |
| --- | --- | --- |
| Joins | No - need to use workarounds including data denormalization, application-side joins, nested objects or parent-child relationships | SQL joins limited to inner join, left outer join and semi-join |
| Query language | DSL (domain-specific language) | SQL |
| Developer tooling | • REST API • Clients for Java, JavaScript, Go, .NET, PHP, Perl, Python, Ruby, Rust | • API for querying SQL via POST command • Clients for JDBC, Java, Python and Golang • Integrations with Trino and Presto |
| Visualization tools | • Kibana • PowerBI, Qlik, Tableau | Integrations with Tableau and Superset |

Elasticsearch has its own domain-specific language (DSL) based on JSON. Joins are not a first-class citizen in Elasticsearch, requiring a number of complex and expensive workarounds. Elasticsearch is known for its developer tooling and supports a number of client libraries. Kibana is the visualization layer for Elasticsearch and is frequently used for log analytics and monitoring.
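
For reference, a minimal sketch of a Query DSL request issued through the Python client; the index, fields and time window are placeholders.

```python
# A minimal sketch of a JSON Query DSL request; index and field names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="logs",
    query={
        "bool": {
            "must": [{"match": {"message": "timeout"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    aggs={"by_service": {"terms": {"field": "service.keyword"}}},
    size=10,
)
print(resp["aggregations"]["by_service"]["buckets"])
```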

In Pinot, SQL queries are received by brokers, which scatter the request across the real-time and offline servers. The servers process the request against their tables and send results back to the broker, which merges them and responds with the final result. Joins are limited, as is support for UDFs and subqueries, making Pinot more or less useful depending on the use case.
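
A hedged sketch of issuing SQL to a Pinot broker's /query/sql endpoint (8099 is the default broker port); the table, columns and use of the ago() function are illustrative.

```python
# A minimal sketch of querying a Pinot broker over HTTP; table and column
# names are hypothetical.
import requests

sql = """
  SELECT country, SUM(clickCount) AS clicks
  FROM clicks
  WHERE ts > ago('PT1H')
  GROUP BY country
  ORDER BY clicks DESC
  LIMIT 10
"""

resp = requests.post("http://localhost:8099/query/sql", json={"sql": sql})
resp.raise_for_status()
print(resp.json().get("resultTable"))
```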


Elasticsearch vs Apache Pinot Scalability

| Scalability | Elasticsearch | Apache Pinot |
| --- | --- | --- |
| Vertical scaling | Manually resize machines | All four Pinot node types (controller, broker, server, minion) can be resized manually |
| Horizontal scaling | • Horizontally scalable by adding nodes to the cluster • With managed Elastic, autoscaling policies can self-monitor cluster health; it is the operator's responsibility to update resource allocations manually or via APIs • Elasticsearch rebalances data automatically, obeying shard allocation rules • Many cluster-level operations need to be monitored when scaling | • Users can add Pinot nodes to scale horizontally • Rebalancing is manual |

Elasticsearch is horizontally scalable and can scale by adding more nodes to the cluster. Its tightly coupled architecture means that compute and storage scale together for performance. This often results in resource contention and overprovisioning. Scaling Elasticsearch often requires deep expertise, as there are many levels of the system that need to be managed: the server, operating system, network, and software.
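
To make the shard-level scaling decisions concrete, here is a minimal sketch of setting shard and replica counts per index via the Python client; the index name and counts are arbitrary.

```python
# A minimal sketch of shard and replica settings, which drive how an index
# spreads across nodes; values here are arbitrary examples.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="logs-example",
    settings={"number_of_shards": 6, "number_of_replicas": 1},
)

# Replica count can be changed later; primary shard count cannot without reindexing.
es.indices.put_settings(index="logs-example", settings={"number_of_replicas": 2})
```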

Pinot allows for vertical scaling by increasing CPU and memory for each node as well as horizontal scaling by adding additional nodes. Capacity planning is a time-consuming, iterative, and manual task. It involves load testing and tuning across multiple vectors including read QPS, write QPS, number of streaming partitions, daily data size, retention period, types of workloads, number and type of segments, and much more.
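
As an illustration of the manual rebalancing step, the sketch below calls the controller's rebalance endpoint after new servers have been added; the table name and query parameters are illustrative.

```python
# A hedged sketch of manually rebalancing a Pinot table after scaling out;
# the table name and flags are hypothetical.
import requests

resp = requests.post(
    "http://localhost:9000/tables/clicks/rebalance",
    params={"type": "REALTIME", "dryRun": "false"},
)
resp.raise_for_status()
print(resp.json())
```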
