See Rockset
in action

Get a product tour with a Rockset engineer

Apache Druid vs Apache Pinot

Compare and contrast Apache Druid and Apache Pinot by architecture, ingestion, queries, performance, and scalability.

Compare Apache Druid to Rockset here

Compare Apache Pinot to Rockset here

Apache Druid vs Apache Pinot Architecture

Deployment model
• Apache Druid: SaaS or self-managed; Imply’s Druid cloud offering requires customers to configure, scale, and capacity plan
• Apache Pinot: PaaS or self-managed

Use of storage hierarchy
• Apache Druid: queries are served from data on disk and an in-memory cache; cloud storage or HDFS serves as deep storage
• Apache Pinot: hot storage, plus a Deep Store for backup and restore operations

Isolation of ingest and query
• Apache Druid: ingestion and queries run on the same node by default; the ingestion and query processes can run on separate nodes, though not for real-time data
• Apache Pinot: no

Separation of compute and storage
• Apache Druid: no, although Imply’s offering separates compute and storage
• Apache Pinot: no, although StarTree separates compute and cloud storage, an important consideration for those weighing StarTree against open-source Pinot

Isolation for multiple applications
• Apache Druid: no
• Apache Pinot: full isolation with replication

Druid’s architecture employs nodes called data servers that handle both ingestion and queries. Under high ingestion or query load, the two workloads contend for CPU and memory, a problem some Druid alternatives avoid. Breaking apart the pre-packaged ingestion and query server components requires planning ahead, adds complexity, and is not dynamic.

Pinot is a real-time distributed OLAP datastore that ingests both batch and streaming data. It has a distributed systems architecture that scales both horizontally and vertically, but unlike alternative OLAP databases, it does not decouple storage and compute. It supports both self-managed and PaaS options.


Apache Druid vs Apache Pinot Ingestion

Data sources
• Apache Druid: data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), and RDBMS and HDFS databases; CDC events from databases require manual conversion to Druid events
• Apache Pinot: streaming (AWS Kinesis, Apache Kafka); batch (cloud storage, PostgreSQL, MySQL, Snowflake, Google BigQuery); file upload; write API

Semi-structured data
• Apache Druid: requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time
• Apache Pinot: supports JSON column types, but JSON must first be converted to a string

Transformations and rollups
• Apache Druid: yes; ingestion specs written in JSON support rollups and simple transformations, and SQL ingest transformations are available with Imply
• Apache Pinot: yes

Druid has built-in connectors that manage ingestion from common data sources. Unlike some Druid competitors, it doesn’t support nested data, so data must be flattened at ingest. Denormalization is also required at ingest, increasing operational burden for certain use cases.
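To make the flattening requirement concrete, here is a hedged sketch of the relevant part of a Druid Kafka supervisor spec with a flattenSpec, built as a Python dict. The datasource, topic, broker address, and JSON paths are hypothetical, and the exact spec layout should be checked against the Druid version in use:

```python
import json

# Illustrative Druid Kafka supervisor spec (names are hypothetical).
supervisor_spec = {
    "type": "kafka",
    "spec": {
        "dataSchema": {
            "dataSource": "clickstream",
            "timestampSpec": {"column": "ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["userId", "country"]},
        },
        "ioConfig": {
            "topic": "clicks",
            "consumerProperties": {"bootstrap.servers": "kafka:9092"},
            "inputFormat": {
                "type": "json",
                # The flattenSpec pulls nested JSON fields up into top-level
                # columns; it must be maintained as the schema evolves.
                "flattenSpec": {
                    "useFieldDiscovery": True,
                    "fields": [
                        {"type": "path", "name": "userId", "expr": "$.user.id"},
                        {"type": "path", "name": "country", "expr": "$.user.geo.country"},
                    ],
                },
            },
        },
    },
}

payload = json.dumps(supervisor_spec)
```

In a typical deployment, this JSON would be POSTed to the Overlord's supervisor endpoint to start streaming ingestion; every new nested field you want queryable means another entry in the flattenSpec.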

Pinot supports high-performance ingestion from streaming data sources. Each table is either offline or real-time: real-time tables have a shorter retention period and scale with ingestion rate, while offline tables have a longer retention period and scale with data volume. To persistently store the generated segments that make up a table, you must update the controller and server configs to add deep storage.
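As a rough sketch of what adding deep storage involves, the snippet below assembles hypothetical Pinot controller properties for an S3-backed Deep Store. The bucket, region, and temp directory are placeholders, and the property names (which follow Pinot's `pinot.controller.storage.factory.*` convention) should be verified against the Pinot documentation for your version:

```python
# Hypothetical controller properties for S3 deep storage.
controller_conf = {
    "controller.data.dir": "s3://example-bucket/pinot/controller-data",
    "pinot.controller.storage.factory.class.s3":
        "org.apache.pinot.plugin.filesystem.S3PinotFS",
    "pinot.controller.storage.factory.s3.region": "us-east-1",
    "controller.local.temp.dir": "/tmp/pinot-tmp",
}

# Rendered in the .properties format Pinot expects:
rendered = "\n".join(f"{k}={v}" for k, v in controller_conf.items())
```

A matching change is needed on the servers so they can download segments from the same Deep Store.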


Apache Druid vs Apache Pinot Performance

Updates
• Apache Druid: segments become immutable once committed and published, making Druid more appropriate for append-only use cases
• Apache Pinot: all data is immutable by default; upserts are supported only for streaming ingest, with no support for upserts on data using the star-tree index or on out-of-order events

Indexing
• Apache Druid: bitmap index
• Apache Pinot: manually configured; forward index, inverted index, star-tree index (columnar), bloom filter, range index, search index, JSON index, geospatial index, timestamp index

Query latency
• Apache Druid: typically sub-second query latency for denormalized, flattened datasets up to 100s of TBs
• Apache Pinot: 50-1,000 ms queries on 100s of TBs

Storage format
• Apache Druid: columnar format partitioned by time
• Apache Pinot: columnar format with additional indexes for fast filtering, aggregation, and group by

Streaming ingest
• Apache Druid: sub-15-second ingest
• Apache Pinot: 1-2 second ingest for streaming data

Druid is designed to make streaming data queryable as quickly as possible. JOINs are either impossible or incur a large performance penalty. Updates are only possible via batch jobs. Druid leverages data denormalization and write-time aggregation at ingestion to reduce query latency.
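Write-time aggregation (rollup) is worth illustrating, since it is central to how Druid trades raw-event fidelity for query speed. The sketch below simulates, in plain Python with made-up events, what a rollup with hourly query granularity does at ingest: raw rows sharing the same (hour, dimensions) key are collapsed into one pre-aggregated row.

```python
from collections import defaultdict

# Simulated Druid-style write-time rollup: collapse raw events onto
# (hour, dimension) keys at ingest so queries scan fewer rows.
raw_events = [
    {"ts": "2024-01-01T10:05", "page": "/home", "bytes": 120},
    {"ts": "2024-01-01T10:40", "page": "/home", "bytes": 80},
    {"ts": "2024-01-01T11:10", "page": "/home", "bytes": 50},
]

rolled = defaultdict(lambda: {"events": 0, "bytes_total": 0})
for e in raw_events:
    hour = e["ts"][:13]  # truncate to the hour, like queryGranularity=HOUR
    key = (hour, e["page"])
    rolled[key]["events"] += 1
    rolled[key]["bytes_total"] += e["bytes"]

print(rolled[("2024-01-01T10", "/home")])  # → {'events': 2, 'bytes_total': 200}
```

In real Druid this behavior is configured in the ingestion spec's granularitySpec (`rollup`, `queryGranularity`) and metricsSpec; the individual raw events are not recoverable afterward.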

Like its competitors, Pinot can achieve sub-second query latency at high concurrency. However, this level of performance requires tuning, management, and deep expertise. Compared with the open-source version, the PaaS versions of Pinot address some of these issues, but similarly require expertise while making tradeoffs affecting query performance.


Apache Druid vs Apache Pinot Queries

Joins
• Apache Druid: yes, but only for datasets that fit in memory, and with a query latency penalty
• Apache Pinot: SQL joins limited to inner join, left outer join, and semi-join

Query language
• Apache Druid: Druid native queries and Druid SQL
• Apache Pinot: SQL

Developer tooling
• Apache Druid: Druid SQL API
• Apache Pinot: API for querying SQL via POST command; clients for JDBC, Java, Python, and Golang; integrations with Trino and Presto

Visualization tools
• Apache Druid: Pivot, maintained by Imply
• Apache Pinot: integrations with Tableau and Superset

Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into its native queries. JOINs are not recommended.
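To show the two query styles side by side, here is a hedged sketch of the same aggregation expressed as a Druid SQL payload and as a native timeseries query. The datasource, column names, and interval are hypothetical; Druid SQL is typically POSTed to the `/druid/v2/sql` endpoint and translated into a native query before execution:

```python
# Hypothetical Druid SQL payload (sent as JSON to the SQL endpoint).
sql_payload = {
    "query": (
        "SELECT COUNT(*) AS views FROM pageviews "
        "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR"
    ),
}

# Roughly equivalent native timeseries query in Druid's JSON language.
native_payload = {
    "queryType": "timeseries",
    "dataSource": "pageviews",
    "intervals": ["2024-01-01T00:00:00Z/2024-01-01T01:00:00Z"],
    "granularity": "all",
    "aggregations": [{"type": "count", "name": "views"}],
}
```

The SQL form is easier to write and maintain; the native form exposes knobs (granularity, query type) that the SQL planner chooses for you.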

In Pinot, brokers receive SQL queries and scatter each request across the real-time and offline servers. The servers process their portions of the request and send results back to the broker, which merges them and responds with the final result. Joins are limited, as is support for UDFs and subqueries, making Pinot more or less useful depending on the use case.
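The client's view of this flow is just an HTTP POST to a broker. The sketch below builds (but does not send) such a request with Python's standard library; the broker host, port, table, and column names are hypothetical, and the `/query/sql` path should be confirmed against your Pinot version's broker API:

```python
import json
import urllib.request

def pinot_query(broker: str, sql: str) -> urllib.request.Request:
    """Build a POST request carrying a SQL query to a Pinot broker."""
    body = json.dumps({"sql": sql}).encode("utf-8")
    return urllib.request.Request(
        f"http://{broker}/query/sql",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = pinot_query("broker:8099",
                  "SELECT country, COUNT(*) FROM clicks GROUP BY country")
# urllib.request.urlopen(req) would send it; the broker scatters the query
# to real-time and offline servers and merges their partial results.
```

The scatter/gather across table types is invisible to the client, which sees a single merged result set.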


Apache Druid vs Apache Pinot Scalability

Vertical scaling
• Apache Druid: users can manually increase or decrease server sizes
• Apache Pinot: all four Pinot node types can be resized manually

Horizontal scaling
• Apache Druid: users can manually add nodes to a scaled-out cluster; Imply automates some of the horizontal scaling process
• Apache Pinot: users can add Pinot nodes to scale horizontally; rebalancing is manual

Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.

Pinot allows for vertical scaling by increasing CPU and memory for each node as well as horizontal scaling by adding additional nodes. Capacity planning is a time-consuming, iterative, and manual task. It involves load testing and tuning across multiple vectors including read QPS, write QPS, number of streaming partitions, daily data size, retention period, types of workloads, number and type of segments, and much more.
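To give a feel for the arithmetic involved, here is a back-of-the-envelope storage estimate of the kind this capacity planning requires. Every number below is an illustrative assumption, not a benchmark or a recommendation:

```python
# Illustrative capacity-planning arithmetic (all inputs are assumptions).
daily_ingest_gb = 200        # raw data ingested per day
retention_days = 30          # table retention period
replication = 2              # replicas per segment
compression_ratio = 0.4      # assumed on-disk size vs. raw size

storage_gb = daily_ingest_gb * retention_days * replication * compression_ratio
print(storage_gb)  # → 4800.0
```

Real planning repeats this kind of estimate across read QPS, write QPS, partition counts, and segment sizing, then validates the result with load tests.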
