Elasticsearch vs Apache Pinot
This article compares and contrasts Elasticsearch and Apache Pinot across architecture, ingestion, performance, queries, and scalability.
Elasticsearch vs Apache Pinot Architecture
Elasticsearch is an open-source distributed search engine built on Apache Lucene, a full-text search library. As a distributed system, it is designed to operate across multiple nodes, with each node responsible for a portion of the data in the form of index shards.
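As a minimal sketch of what that distribution looks like in practice, the Python snippet below asks the REST API which nodes hold which shards. It assumes a local development cluster at localhost:9200 and an illustrative "logs" index; neither comes from this article.

```python
import requests

ES = "http://localhost:9200"  # assumed local dev cluster

# Cluster health: how many nodes are in the cluster and how many shards are active.
health = requests.get(f"{ES}/_cluster/health").json()
print(health["number_of_nodes"], health["active_shards"])

# Which node holds each shard of the (illustrative) "logs" index.
shards = requests.get(f"{ES}/_cat/shards/logs?format=json").json()
for s in shards:
    print(s["index"], s["shard"], s["prirep"], s["node"])
```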
Pinot is a real-time distributed OLAP datastore that ingests both batch and streaming data. Its distributed architecture scales both horizontally and vertically, but unlike some alternative OLAP databases, it does not decouple storage and compute. It supports both self-managed and PaaS deployment options.
Elasticsearch vs Apache Pinot Ingestion
Elasticsearch has a number of integrations as well as a REST API. It is a NoSQL database and natively supports semi-structured data. Transformations typically occur upstream so that data can be modeled for optimal performance before it is indexed in Elasticsearch.
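As a small sketch of ingesting a semi-structured document through the REST API, the snippet below posts a JSON document; the cluster address, the "events" index, and the document fields are illustrative assumptions, and Elasticsearch will infer mappings on first write unless explicit mappings are provided.

```python
import requests

ES = "http://localhost:9200"  # assumed local dev cluster

# Index one semi-structured JSON document into an (illustrative) "events" index.
doc = {
    "service": "checkout",
    "level": "ERROR",
    "message": "payment timeout",
    "@timestamp": "2024-01-15T10:32:00Z",
}
resp = requests.post(f"{ES}/events/_doc", json=doc)
print(resp.json()["result"])  # e.g. "created"
```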
Pinot supports high-performance ingest from streaming data sources. Each table is either offline or real-time. Real-time tables have a smaller retention period and scale based on ingestion rate, while offline tables have a larger retention period and scale based on the amount of data. To persistently store the generated segments that make up a table, you will need to change the controller and server configs to add deep storage (for example, S3, GCS, or HDFS).
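As a rough sketch, a real-time table is typically defined by posting a table config to the controller's REST API. The example below uses illustrative names and values, assumes a local controller on port 9000, a matching "events" schema already uploaded, and a Kafka topic to consume from; real deployments usually need additional stream and deep-store settings.

```python
import requests

CONTROLLER = "http://localhost:9000"  # assumed local Pinot controller

# Minimal REALTIME table config (illustrative names/values); assumes a
# matching "events" schema was already uploaded to the controller.
table_config = {
    "tableName": "events",
    "tableType": "REALTIME",
    "segmentsConfig": {
        "schemaName": "events",
        "timeColumnName": "ts",
        "retentionTimeUnit": "DAYS",
        "retentionTimeValue": "3",   # real-time tables keep a short window
        "replication": "1",
    },
    "tenants": {},
    "tableIndexConfig": {
        "streamConfigs": {
            "streamType": "kafka",
            "stream.kafka.topic.name": "events",
            "stream.kafka.broker.list": "localhost:9092",
            "stream.kafka.consumer.type": "lowlevel",
            "stream.kafka.decoder.class.name":
                "org.apache.pinot.plugin.stream.kafka.KafkaJSONMessageDecoder",
        }
    },
    "metadata": {},
}

resp = requests.post(f"{CONTROLLER}/tables", json=table_config)
print(resp.status_code, resp.text)
```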
Elasticsearch vs Apache Pinot Performance
Elasticsearch is a search engine that uses an inverted index. Although this approach leads to storage amplification, it also enables low-latency queries that demand less computation. Elasticsearch is tailored to large-scale, append-only data such as logs, events, and metrics. To manage frequently updated data, users often rely on the Bulk API to minimize computational costs and keep query performance consistent.
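As a small illustration of that pattern, the snippet below (local cluster and "logs" index assumed, documents illustrative) batches several writes into a single _bulk request instead of issuing one HTTP call per document:

```python
import json
import requests

ES = "http://localhost:9200"  # assumed local dev cluster

# Batch several index operations into one _bulk request (NDJSON body).
docs = [
    {"service": "checkout", "level": "INFO", "message": "order placed"},
    {"service": "search", "level": "WARN", "message": "slow query"},
]
lines = []
for doc in docs:
    lines.append(json.dumps({"index": {"_index": "logs"}}))  # action line
    lines.append(json.dumps(doc))                             # document line
body = "\n".join(lines) + "\n"  # bulk bodies must end with a newline

resp = requests.post(
    f"{ES}/_bulk",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.json()["errors"])  # False if every operation succeeded
```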
Like its competitors, Pinot can achieve sub-second query latency at high concurrency. However, this level of performance requires tuning, management, and deep expertise. The PaaS versions of Pinot address some of these operational issues, but they still require expertise and make tradeoffs that affect query performance.
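To give a flavor of that tuning, one common knob is adding an inverted index on a frequently filtered column by updating the table config through the controller API. The sketch below assumes a local controller, the illustrative "events" real-time table from earlier, and an illustrative "service" column; the response shape and reload step are assumptions based on the documented controller API.

```python
import requests

CONTROLLER = "http://localhost:9000"  # assumed local Pinot controller

# Fetch the current table config (response is assumed to be keyed by table type).
resp = requests.get(f"{CONTROLLER}/tables/events").json()
config = resp["REALTIME"]

# Add an inverted index on a frequently filtered column (illustrative name).
idx = config["tableIndexConfig"]
idx.setdefault("invertedIndexColumns", []).append("service")

# Push the updated config back to the controller.
requests.put(f"{CONTROLLER}/tables/events", json=config)

# Existing segments typically need to be reloaded for the new index to
# take effect, e.g. POST /segments/{tableName}/reload.
```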
Elasticsearch vs Apache Pinot Queries
Elasticsearch has its own domain-specific language (DSL) based on JSON. Joins are not a first-class citizen in Elasticsearch, requiring a number of complex and expensive workarounds such as nested documents, parent-child mappings, or application-side joins. Elasticsearch is known for its developer tooling and supports a number of client libraries. Kibana is the visualization layer for Elasticsearch and is frequently used for log analytics and monitoring.
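A short example of the JSON DSL, sent from Python against an assumed local cluster and the illustrative "logs" index and fields, filtering on a term and aggregating counts per service:

```python
import requests

ES = "http://localhost:9200"  # assumed local dev cluster

# JSON DSL query: filter to error-level documents and count them per service.
query = {
    "size": 0,
    "query": {"bool": {"filter": [{"term": {"level": "ERROR"}}]}},
    "aggs": {"by_service": {"terms": {"field": "service.keyword"}}},
}
resp = requests.post(f"{ES}/logs/_search", json=query).json()
for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```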
In Pinot, SQL-like queries are received by brokers, which scatter the request across the relevant real-time and offline servers. The servers process their portions of the query and return results to the broker, which merges them and responds to the client. Joins are limited, as is support for UDFs and subqueries, making Pinot more or less useful depending on the use case.
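As a sketch of that query path, the broker exposes an HTTP SQL endpoint; the example below assumes a local broker on port 8099 and the illustrative "events" table and columns used above:

```python
import requests

BROKER = "http://localhost:8099"  # assumed local Pinot broker

# The broker accepts SQL over HTTP, scatters it to the servers, and
# returns the merged result (table and column names are illustrative).
sql = """
  SELECT service, COUNT(*) AS errors
  FROM events
  WHERE level = 'ERROR'
  GROUP BY service
  ORDER BY errors DESC
  LIMIT 10
"""
resp = requests.post(f"{BROKER}/query/sql", json={"sql": sql}).json()
print(resp["resultTable"]["rows"])
```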
Elasticsearch vs Apache Pinot Scalability
Elasticsearch is horizontally scalable and can scale by adding more nodes to the cluster. Its tightly coupled architecture means that compute and storage scale together, which often results in resource contention and overprovisioning. Scaling Elasticsearch often requires deep expertise, as there are many levels of the system that need to be managed: the server, operating system, network, and software.
Pinot allows for vertical scaling by increasing CPU and memory on each node, as well as horizontal scaling by adding more nodes. Capacity planning is a time-consuming, iterative, and manual task. It involves load testing and tuning across multiple vectors, including read QPS, write QPS, number of streaming partitions, daily data size, retention period, types of workloads, number and type of segments, and much more.