
Apache Druid vs ClickHouse

Compare and contrast Apache Druid and ClickHouse by architecture, ingestion, queries, performance, and scalability.

Compare Apache Druid to Rockset here

Compare ClickHouse to Rockset here

Apache Druid vs ClickHouse Architecture

Deployment model
• Apache Druid: SaaS or self-managed. Imply’s Druid cloud offering requires customers to configure, scale, and capacity plan.
• ClickHouse: Self-managed on premises or on cloud infrastructure; several managed cloud services are available.

Use of storage hierarchy
• Apache Druid: Queries are served from data on disk and an in-memory cache; cloud storage or HDFS is used for deep storage.
• ClickHouse: Designed to use hard disk drives for storage; can also use SSDs if available.

Isolation of ingest and query
• Apache Druid: Ingestion and queries run on the same node by default; the ingestion and querying processes can be run on separate nodes, though not for real-time data.
• ClickHouse: No.

Separation of compute and storage
• Apache Druid: No, although Imply’s offering separates compute and storage.
• ClickHouse: No, although ClickHouse Cloud decouples compute and cloud storage.

Isolation for multiple applications
• Apache Druid: No.
• ClickHouse: No.

Druid’s architecture employs nodes called data servers that handle both ingestion and queries. Under high ingestion or query load, the two workloads contend for CPU and memory on the same nodes. Breaking apart the pre-packaged ingestion and query server components requires planning ahead, adds complexity, and cannot be done dynamically.

ClickHouse is open source and can be deployed anywhere. Several vendors, including ClickHouse, Inc. and Tinybird, also offer managed cloud versions. Compute and storage are tightly coupled in open-source ClickHouse, although ClickHouse Cloud was rearchitected to decouple them. ClickHouse Cloud pricing is based on compute and storage usage.


Apache Druid vs ClickHouse Ingestion

Data sources
• Apache Druid: Data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), and RDBMS and HDFS databases. CDC events from databases require manual conversion to Druid events.
• ClickHouse: Core integrations for ingestion from Kafka, S3, and Google Cloud Storage; other partner and community integrations are available.

Semi-structured data
• Apache Druid: Requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time.
• ClickHouse: JSON Object type for handling nested JSON; automatically infers the schema from a subset of rows.

Transformations and rollups
• Apache Druid: Yes; ingestion specs written in JSON support rollups and simple transformations. SQL ingest transformations are available with Imply.
• ClickHouse: Yes; several storage engines can pre-aggregate data.

Druid has built-in connectors that manage ingestion from common data sources. Unlike some alternatives, it does not support nested data directly, so data must be flattened at ingest and the flattening spec maintained as schemas evolve. Denormalization is also required at ingest, which increases the operational burden for certain use cases.
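As a rough illustration of what flattening at ingest involves, the sketch below submits a hypothetical Kafka supervisor spec containing a flattenSpec to Druid’s supervisor API using Python and the requests library. The topic, datasource, field names, and endpoint (a Druid router at localhost:8888) are assumptions made for the example, not details from this comparison.

    import requests

    # Druid's supervisor API; a router at localhost:8888 is assumed here.
    DRUID_SUPERVISOR_URL = "http://localhost:8888/druid/indexer/v1/supervisor"

    # Hypothetical Kafka ingestion spec: nested JSON fields are pulled up into
    # flat columns via a flattenSpec, which must be updated as the schema evolves.
    supervisor_spec = {
        "type": "kafka",
        "spec": {
            "ioConfig": {
                "type": "kafka",
                "topic": "page_view_events",
                "consumerProperties": {"bootstrap.servers": "localhost:9092"},
                "inputFormat": {
                    "type": "json",
                    "flattenSpec": {
                        "useFieldDiscovery": True,
                        "fields": [
                            {"type": "path", "name": "user_city", "expr": "$.user.address.city"},
                            {"type": "path", "name": "utm_source", "expr": "$.campaign.utm.source"},
                        ],
                    },
                },
            },
            "dataSchema": {
                "dataSource": "page_views",
                "timestampSpec": {"column": "timestamp", "format": "iso"},
                "dimensionsSpec": {"dimensions": ["page", "country", "user_city", "utm_source"]},
                "granularitySpec": {"segmentGranularity": "hour", "queryGranularity": "none"},
            },
            "tuningConfig": {"type": "kafka"},
        },
    }

    resp = requests.post(DRUID_SUPERVISOR_URL, json=supervisor_spec)
    resp.raise_for_status()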

ClickHouse has core integrations for ingestion from common sources such as Kafka and S3. It recently improved its ability to handle semi-structured data with the JSON Object type and automatic schema inference.
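As a minimal sketch of the JSON Object type, the Python snippet below creates a table with a JSON column and inserts a nested row over ClickHouse’s HTTP interface (assumed to be at localhost:8123). Depending on the ClickHouse version, the type may still be experimental and require the allow_experimental_object_type setting; the table and field names are illustrative.

    import requests

    CLICKHOUSE_URL = "http://localhost:8123"  # default HTTP interface

    # Create a table with a JSON column; nested paths become queryable subcolumns.
    ddl = """
    CREATE TABLE IF NOT EXISTS raw_events
    (
        ts DateTime,
        payload JSON
    )
    ENGINE = MergeTree
    ORDER BY ts
    """
    requests.post(
        CLICKHOUSE_URL,
        params={"allow_experimental_object_type": 1},  # needed on older versions
        data=ddl,
    ).raise_for_status()

    # Insert a nested JSON row; the schema of payload is inferred automatically.
    row = '{"ts": "2024-01-01 00:00:00", "payload": {"user": {"city": "Berlin"}, "clicks": 3}}'
    requests.post(
        CLICKHOUSE_URL,
        params={"query": "INSERT INTO raw_events FORMAT JSONEachRow"},
        data=row,
    ).raise_for_status()

    # Query a nested field directly as a subcolumn.
    print(requests.post(
        CLICKHOUSE_URL,
        data="SELECT payload.user.city, payload.clicks FROM raw_events",
    ).text)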


Apache Druid vs ClickHouse Performance

Updates
• Apache Druid: Segments become immutable once committed and published, making Druid more appropriate for append-only use cases.
• ClickHouse: Writes to immutable files; updates rewrite and merge data files asynchronously. Frequent updates are not recommended due to the potential for large rewrites.

Indexing
• Apache Druid: Bitmap indexes.
• ClickHouse: Primary indexes use sparse indexing on data ordered by the primary key; secondary data-skipping indexes are also available.

Query latency
• Apache Druid: Typically sub-second query latency for denormalized, flattened datasets up to 100s of TBs.
• ClickHouse: Sub-100 ms to seconds; optimized for large-scale aggregations.

Storage format
• Apache Druid: Columnar format partitioned by time.
• ClickHouse: Column-oriented; heavily compressed to minimize storage footprint.

Streaming ingest
• Apache Druid: Streamed data is typically queryable in under 15 seconds.
• ClickHouse: Recommends inserting in batches of more than 1,000 rows and no more than one insert per second.

Druid is designed to make streaming data queryable as quickly as possible. JOINs are either unsupported or incur a large performance penalty, and updates are only possible via batch jobs. Druid relies on denormalization and write-time aggregation (rollup) at ingest to reduce query latency.
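For context, rollup is configured in the ingestion spec’s dataSchema: dimensions are kept, metrics are pre-aggregated, and rows are collapsed to the chosen query granularity. The fragment below is a hypothetical illustration written as a Python dict; the datasource, dimension, and metric names are assumptions for the example.

    # Illustrative Druid dataSchema fragment enabling ingestion-time rollup.
    rollup_data_schema = {
        "dataSource": "page_views",
        "timestampSpec": {"column": "timestamp", "format": "iso"},
        "dimensionsSpec": {"dimensions": ["country", "page"]},
        "metricsSpec": [
            {"type": "count", "name": "row_count"},
            {"type": "longSum", "name": "bytes_sum", "fieldName": "bytes"},
        ],
        "granularitySpec": {
            "segmentGranularity": "hour",
            "queryGranularity": "minute",  # rows are pre-aggregated per minute
            "rollup": True,
        },
    }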

ClickHouse leverages column orientation and heavy compression for better performance on analytics workloads, and uses indexing to accelerate queries. While ClickHouse use cases often involve streaming data from Kafka, batching inserts is recommended for efficient ingestion.
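A minimal sketch of batched ingestion over the HTTP interface, assuming a ClickHouse server at localhost:8123 and a hypothetical events_batched table: rows are buffered client-side and sent as a single INSERT rather than row by row.

    import json
    import requests

    CLICKHOUSE_URL = "http://localhost:8123"

    # Buffer rows client-side, then send one large INSERT instead of many small ones.
    rows = [{"ts": "2024-01-01 00:00:00", "user_id": i, "clicks": i % 5} for i in range(5000)]
    body = "\n".join(json.dumps(r) for r in rows)

    requests.post(
        CLICKHOUSE_URL,
        params={"query": "INSERT INTO events_batched FORMAT JSONEachRow"},
        data=body.encode("utf-8"),
    ).raise_for_status()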


Apache Druid vs ClickHouse Queries

Joins
• Apache Druid: Yes, but only for datasets that fit in memory, and with a query latency penalty.
• ClickHouse: Yes.

Query language
• Apache Druid: Druid native queries and Druid SQL.
• ClickHouse: SQL.

Developer tooling
• Apache Druid: Druid SQL API.
• ClickHouse: API for querying SQL via POST; Python, Java, Node.js, and Go language clients.

Visualization tools
• Apache Druid: Pivot, maintained by Imply.
• ClickHouse: Integrations with Metabase, Superset, Grafana, Tableau, Deepnote, and Rocket BI.

Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into its native queries. JOINs are not recommended.
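A minimal sketch of querying through the Druid SQL API with Python and requests, assuming a Druid router at localhost:8888 and a hypothetical page_views datasource; Druid translates the SQL into its native query format before execution.

    import requests

    # Druid SQL endpoint; the router proxies it to the brokers.
    DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

    query = {
        "query": """
            SELECT country, COUNT(*) AS views
            FROM page_views
            WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
            GROUP BY country
            ORDER BY views DESC
            LIMIT 10
        """
    }

    resp = requests.post(DRUID_SQL_URL, json=query)
    resp.raise_for_status()
    for row in resp.json():  # the default response is a JSON array of rows
        print(row)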

ClickHouse uses SQL for querying, with support for SQL joins. ClickHouse integrates with some common tools for visual analytics, including Superset, Grafana and Tableau.
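For comparison, a join in ClickHouse is ordinary SQL. The sketch below runs a hypothetical two-table join over the HTTP interface; the clicks and users tables and the server address are assumptions for illustration.

    import requests

    CLICKHOUSE_URL = "http://localhost:8123"

    # Hypothetical tables: clicks(user_id, clicks) and users(user_id, country).
    query = """
    SELECT u.country, sum(c.clicks) AS total_clicks
    FROM clicks AS c
    INNER JOIN users AS u ON c.user_id = u.user_id
    GROUP BY u.country
    ORDER BY total_clicks DESC
    FORMAT JSONEachRow
    """

    resp = requests.post(CLICKHOUSE_URL, data=query)
    resp.raise_for_status()
    print(resp.text)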


Apache Druid vs ClickHouse Scalability

Vertical scaling
• Apache Druid: Users can manually increase or decrease server sizes.
• ClickHouse: Scale up a single-node ClickHouse server.

Horizontal scaling
• Apache Druid: Users can manually add nodes to a scaled-out cluster; Imply automates some of the horizontal scaling process.
• ClickHouse: Compute and storage are scaled in lockstep, and data must be rebalanced to populate newly added nodes; cloud offerings automate some of the scaling and rebalancing effort.

Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.

ClickHouse can be used in both single-node and distributed modes. Tight coupling of compute and storage and the need to rebalance data make scaling out more complex, but cloud versions of ClickHouse help automate this process.
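A common scale-out pattern in self-managed ClickHouse is a local MergeTree table on each shard plus a Distributed table that fans queries out across the cluster. The sketch below is a hypothetical example: 'my_cluster' must already be defined in the server's remote_servers configuration, and in practice the DDL is run on every node (or with ON CLUSTER).

    import requests

    CLICKHOUSE_URL = "http://localhost:8123"

    # Per-shard local table holding the actual data.
    ddl_local = """
    CREATE TABLE IF NOT EXISTS events_local
    (
        ts DateTime,
        user_id UInt64,
        clicks UInt32
    )
    ENGINE = MergeTree
    ORDER BY (user_id, ts)
    """

    # Distributed table that routes reads and writes across the shards of 'my_cluster'.
    ddl_distributed = """
    CREATE TABLE IF NOT EXISTS events_all AS events_local
    ENGINE = Distributed('my_cluster', currentDatabase(), 'events_local', rand())
    """

    for ddl in (ddl_local, ddl_distributed):
        requests.post(CLICKHOUSE_URL, data=ddl).raise_for_status()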
