
Elasticsearch vs Apache Druid

Compare and contrast Elasticsearch and Apache Druid by architecture, ingestion, queries, performance, and scalability.


Elasticsearch vs Apache Druid Architecture

| Architecture | Elasticsearch | Apache Druid |
| --- | --- | --- |
| Deployment model | On-prem, PaaS options | SaaS or self-managed; Imply's Druid cloud offering requires customers to configure, scale, and capacity plan |
| Use of storage hierarchy | Hot, warm, and cold storage on disk; frozen storage on cloud storage | Queries are served from data on disk and an in-memory cache; cloud storage or HDFS for deep storage |
| Isolation of ingest and query | No; there are dedicated ingestion nodes, but indexing, compaction, and updates occur on the data nodes | Ingestion and queries run on the same node by default; the ingestion and querying processes can be run on separate nodes, but not for real-time data |
| Separation of compute and storage | No | No, although Imply's offering separates compute and storage |
| Isolation for multiple applications | Full isolation with replication | No |

Elasticsearch is an open-source distributed search engine built on Apache Lucene, a full-text search library. As a distributed system, it operates across multiple nodes, each responsible for a portion of the data.

Druid’s architecture employs nodes called data servers that are used for both ingestion and queries. Because both workloads share resources, high ingestion or query load can cause CPU and memory contention. Breaking apart the pre-packaged ingestion and query server components requires planning ahead and adds complexity; it is not a dynamic operation.


Elasticsearch vs Apache Druid Ingestion

| Ingestion | Elasticsearch | Apache Druid |
| --- | --- | --- |
| Data sources | Logstash JDBC input plugin for relational databases; open-source Kafka plugin or Kafka Elasticsearch Service Sink Connector (available only to managed Confluent and Elasticsearch); REST APIs or client libraries to sync data directly from the application | Data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), and RDBMS and HDFS databases; CDC events from databases require manual conversion to Druid events |
| Semi-structured data | Yes; ingests JSON and XML without a predefined schema | Druid requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time |
| Transformations and rollups | Yes; ingest pipelines can be configured to remove fields, extract values from text, and enrich data. Ingest pipelines require ingest nodes in the cluster. Rolling up historical data is in technical preview | Yes; ingestion specs written in JSON support rollups and simple transformations. SQL ingest transformations are available for Imply |

Elasticsearch has a number of integrations as well as a REST API. It is a NoSQL database and natively supports semi-structured data. Transformations typically occur upstream so that data can be modeled for optimal performance before it is indexed in Elasticsearch.
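As a hedged sketch of the ingest pipelines mentioned above, the snippet below builds the JSON body that would be sent to Elasticsearch via `PUT /_ingest/pipeline/<name>`. The pipeline name, field names, and grok pattern are illustrative assumptions, not from a real deployment.

```python
import json

# Hypothetical ingest pipeline: drops a field and extracts a value from text,
# two of the transformations described above. This only builds the request
# body; actually creating it requires an ingest node in the cluster.
pipeline = {
    "description": "Drop a debug field and parse the log level out of the message",
    "processors": [
        # Remove processor: deletes a field, tolerating documents that lack it
        {"remove": {"field": "debug_info", "ignore_missing": True}},
        # Grok processor: extracts a structured value from unstructured text
        {
            "grok": {
                "field": "message",
                "patterns": ["\\[%{LOGLEVEL:level}\\]"],
            }
        },
    ],
}

print(json.dumps(pipeline, indent=2))
```

Keeping pipeline definitions in version control alongside application code is a common way to track these transformations as they evolve.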

Druid has built-in connectors that manage ingestion from common data sources. Unlike some of its competitors, it does not support nested data, so data must be flattened at ingest. Denormalization is also required at ingest, increasing the operational burden for certain use cases.
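The flattening requirement above is expressed in Druid through a `flattenSpec` inside the ingestion spec's input format. Below is a minimal sketch; the field names and JSONPath expressions are hypothetical, and any change to the nested schema means revisiting this spec.

```python
import json

# Illustrative Druid JSON inputFormat with a flattenSpec that pulls
# nested fields up into top-level columns at ingest time.
input_format = {
    "type": "json",
    "flattenSpec": {
        # Also auto-discover any top-level fields
        "useFieldDiscovery": True,
        "fields": [
            # "path" fields use JSONPath to reach into nested objects/arrays
            {"type": "path", "name": "userCity", "expr": "$.user.address.city"},
            {"type": "path", "name": "firstTag", "expr": "$.tags[0]"},
        ],
    },
}

print(json.dumps(input_format, indent=2))
```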

Elasticsearch vs Apache Druid Performance

| Performance | Elasticsearch | Apache Druid |
| --- | --- | --- |
| Updates | The Update API can update, delete, or skip modifying the document. The entire document must be reindexed; in-place updates are not supported | Druid segments become immutable once committed and published, making it more appropriate for append-only use cases |
| Indexing | Inverted index | Bitmap index |
| Query latency | 50-1000 ms queries on 100s of TBs | Typically sub-second query latency for denormalized, flattened datasets up to 100s of TBs |
| Storage format | JSON documents | Columnar format partitioned by time |
| Streaming ingest | Ingests on a per-record or batch basis; data latency on a per-record basis is typically 1-2 seconds | Sub-15 seconds |

Elasticsearch is a search engine that utilizes an inverted index. Although this approach leads to storage amplification, it also enables low-latency queries that demand less computation. Elasticsearch is tailored to large-scale, append-only data such as logs, events, and metrics. To manage frequently updated data, users often utilize the Bulk API to minimize computational costs and ensure consistent query performance.
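The Bulk API mentioned above takes a newline-delimited JSON body: an action line, then (for most actions) a source line, ending with a trailing newline. Here is a minimal sketch of building such a body; the index name "events" and the documents are hypothetical.

```python
import json

# Hypothetical documents to index in one bulk request
docs = [
    {"_id": "1", "status": "open"},
    {"_id": "2", "status": "closed"},
]

lines = []
for doc in docs:
    doc_id = doc.pop("_id")
    # Action line: which index and document ID this operation targets
    lines.append(json.dumps({"index": {"_index": "events", "_id": doc_id}}))
    # Source line: the document body itself
    lines.append(json.dumps(doc))

# The Bulk API requires the body to end with a newline
bulk_body = "\n".join(lines) + "\n"
print(bulk_body)
```

The body would be sent to `POST /_bulk` with a `Content-Type` of `application/x-ndjson`; batching like this amortizes the per-request overhead of reindexing each updated document.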

Druid is designed to make streaming data queryable as quickly as possible. JOINs are either impossible or incur a large performance penalty. Updates are only possible via batch jobs. Druid leverages data denormalization and write-time aggregation at ingestion to reduce query latency.
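The write-time aggregation described above is configured in a Druid ingestion spec via `granularitySpec` and `metricsSpec`. The fragment below is an illustrative sketch; the metric names, field names, and granularities are assumptions.

```python
import json

# Illustrative fragment of a Druid ingestion spec showing write-time rollup:
# raw events are pre-aggregated to one row per minute at ingest.
rollup_config = {
    "granularitySpec": {
        "segmentGranularity": "hour",   # one segment per hour of data
        "queryGranularity": "minute",   # timestamps truncated to the minute
        "rollup": True,                 # enable write-time aggregation
    },
    "metricsSpec": [
        # Each rolled-up row keeps a count of the raw rows it summarizes
        {"type": "count", "name": "events"},
        # ...and a pre-computed sum of a (hypothetical) "bytes" field
        {"type": "doubleSum", "name": "bytes_total", "fieldName": "bytes"},
    ],
}

print(json.dumps(rollup_config, indent=2))
```

The trade-off is that individual raw events are no longer recoverable after rollup, which is part of why Druid favors append-only, pre-modeled data.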


Elasticsearch vs Apache Druid Queries

| Queries | Elasticsearch | Apache Druid |
| --- | --- | --- |
| Joins | No; workarounds include data denormalization, application-side joins, nested objects, or parent-child relationships | Yes, but only for datasets that fit in memory, and with a query latency penalty |
| Query language | Query DSL (domain-specific language) | Druid native queries; Druid SQL |
| Developer tooling | REST API; client libraries for Java, JavaScript, Go, .NET, PHP, Perl, Python, Ruby, and Rust | Druid SQL API |
| Visualization tools | Kibana; PowerBI, Qlik, Tableau | Pivot, maintained by Imply |

Elasticsearch has its own domain-specific language (DSL) based on JSON. Joins are not a first-class citizen in Elasticsearch, requiring a number of complex and expensive workarounds. Elasticsearch is known for its developer tooling and supports a number of client libraries. Kibana is the visualization layer for Elasticsearch and is frequently used for log analytics and monitoring.
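To make the JSON-based DSL concrete, here is a minimal sketch of a search body combining a full-text match with a range filter. The index, field names, and values are hypothetical; the body would be sent to `POST /<index>/_search`.

```python
import json

# Hypothetical Query DSL body: full-text match on "message",
# filtered to the last hour of data.
query = {
    "query": {
        "bool": {
            # Scored full-text clause
            "must": [{"match": {"message": "timeout"}}],
            # Non-scoring filter clause (cacheable)
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
    "size": 10,
}

print(json.dumps(query, indent=2))
```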

Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into its native queries. JOINs are not recommended.
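For comparison, a Druid SQL statement is posted to the SQL API (`POST /druid/v2/sql/`) as a small JSON envelope, which the broker translates into a native query. The datasource "events" and its columns below are illustrative assumptions.

```python
import json

# Sketch of a Druid SQL API request body. __time is Druid's built-in
# timestamp column; "events" and "channel" are hypothetical names.
sql_request = {
    "query": (
        "SELECT channel, COUNT(*) AS edits "
        "FROM events "
        "WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR "
        "GROUP BY channel "
        "ORDER BY edits DESC "
        "LIMIT 10"
    ),
    "resultFormat": "object",  # one JSON object per result row
}

print(json.dumps(sql_request))
```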


Elasticsearch vs Apache Druid Scalability

| Scalability | Elasticsearch | Apache Druid |
| --- | --- | --- |
| Vertical scaling | Manually resize machines | Users can manually increase or decrease server sizes |
| Horizontal scaling | Horizontally scalable by adding nodes to the cluster. With managed Elastic, autoscaling policies can self-monitor cluster health, but the operator is responsible for updating resource allocations, either manually or via APIs. Elasticsearch rebalances data automatically, obeying shard allocation rules. There are many cluster-level operations that need to be monitored when scaling | Users can manually add nodes to a scaled-out cluster; Imply automates some of the horizontal scaling process |

Elasticsearch is horizontally scalable and can scale by adding more nodes to the cluster. Its tightly coupled architecture means that compute and storage scale together for performance. This often results in resource contention and overprovisioning. Scaling Elasticsearch often requires deep expertise, as there are many levels of the system that need to be managed: the server, operating system, network, and software.
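The shard allocation rules mentioned above are largely driven by index-level settings chosen at creation time. Below is a hedged example of such settings; the shard counts and the custom "box_type" node attribute (a common hot-warm tagging convention) are illustrative assumptions.

```python
import json

# Hypothetical index creation body (PUT /<index>) showing settings that
# shape how Elasticsearch distributes and rebalances shards.
index_settings = {
    "settings": {
        "number_of_shards": 3,     # primary shards; fixed after creation
        "number_of_replicas": 1,   # replica copies; adjustable at any time
        # Require allocation to nodes tagged with a custom attribute,
        # e.g. node.attr.box_type: hot in elasticsearch.yml (hypothetical tier name)
        "index.routing.allocation.require.box_type": "hot",
    }
}

print(json.dumps(index_settings, indent=2))
```

Because the primary shard count cannot be changed after creation, sizing it is one of the capacity-planning decisions that makes scaling Elasticsearch demand expertise.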

Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.