
StarRocks vs Apache Druid

Compare and contrast StarRocks and Apache Druid by architecture, ingestion, queries, performance, and scalability.

Compare StarRocks to Rockset here

Compare Apache Druid to Rockset here

StarRocks vs Apache Druid Architecture

Deployment model
• StarRocks: PaaS or self managed
• Apache Druid: SaaS or self managed; Imply’s Druid cloud offering requires customers to configure, scale, and capacity plan

Use of storage hierarchy
• StarRocks: Data is stored on disk and in memory
• Apache Druid: Queries are served from data on disk and an in-memory cache; cloud storage or HDFS is used for deep storage

Isolation of ingest and query
• StarRocks: No, but you can limit resources for ingestion and querying separately
• Apache Druid: Ingestion and queries run on the same node by default; the ingestion and querying processes can be run on separate nodes, but not for real-time data

Separation of compute and storage
• StarRocks: No, but StarRocks supports nodes that don't store data locally
• Apache Druid: No, although Imply’s offering separates compute and storage

Isolation for multiple applications
• StarRocks: No
• Apache Druid: No

StarRocks is a high-performance OLAP database that can be deployed in the cloud or self-managed. It does not separate compute and storage and offers limited options for resource isolation. It offers a robust set of features and high performance but requires considerable expertise to operate and scale.

Druid’s architecture employs nodes called data servers that are used for both ingestion and queries. Under high ingestion or query load, the two can contend for CPU and memory, a limitation some Druid alternatives avoid. Separating the pre-packaged ingestion and query components requires advance planning and additional complexity, and is not dynamic.


StarRocks vs Apache Druid Ingestion

Data sources
• StarRocks: Streaming sources (Kafka, Flink); data lakes (HDFS-compatible and cloud storage)
• Apache Druid: Data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), and RDBMS and HDFS databases; CDC events from databases require manual conversion to Druid events

Semi-structured data
• StarRocks: Supports columns with JSON data; does not support mixed-type columns; supports star and snowflake schemas
• Apache Druid: Requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time

Transformations and rollups
• StarRocks: Yes, via materialized views
• Apache Druid: Yes; ingestion specs written in JSON support rollups and simple transformations, and SQL ingest transformations are available in Imply

StarRocks ingests data from a variety of sources, including both batch and streaming data. StarRocks can ingest nested JSON data, but enforces type at the column level.
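
For illustration, here is a minimal sketch of what a JSON-typed column looks like in practice, assuming a StarRocks cluster reachable over the MySQL protocol on the default FE query port (9030); the host, credentials, table, and field names are placeholders rather than anything taken from this comparison.

```python
# Minimal sketch (assumed host, credentials, and schema): StarRocks speaks the MySQL
# wire protocol, so a standard MySQL client library such as pymysql can run DDL and queries.
import pymysql

conn = pymysql.connect(host="starrocks-fe.example.com", port=9030,
                       user="root", password="", database="demo")

with conn.cursor() as cur:
    # The payload column is JSON-typed; the other columns are strictly typed,
    # reflecting StarRocks' column-level type enforcement.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS events (
            event_id   BIGINT,
            event_time DATETIME,
            payload    JSON
        )
        DUPLICATE KEY(event_id)
        DISTRIBUTED BY HASH(event_id) BUCKETS 8
    """)
    # Pull a nested field out of the JSON payload at query time.
    cur.execute("SELECT event_id, json_query(payload, '$.user.city') FROM events LIMIT 10")
    print(cur.fetchall())

conn.close()
```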

Druid has built-in connectors that manage ingestion from common data sources. Unlike some Druid competitors, it doesn’t support nested data, so data must be flattened at ingest. Denormalization is also required at ingest, increasing operational burden for certain use cases.
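
As a rough sketch of what that flattening spec entails, the snippet below submits a hypothetical Kafka ingestion supervisor whose flattenSpec promotes one nested field to a top-level column; the router URL, topic, datasource, and field paths are assumptions for illustration.

```python
# Hypothetical Druid Kafka supervisor spec (assumed endpoint, topic, and field names).
# The flattenSpec maps nested JSON paths to flat columns and must be kept in sync
# with the source schema as it evolves.
import requests

supervisor_spec = {
    "type": "kafka",
    "spec": {
        "ioConfig": {
            "type": "kafka",
            "topic": "events",
            "consumerProperties": {"bootstrap.servers": "kafka.example.com:9092"},
            "inputFormat": {
                "type": "json",
                "flattenSpec": {
                    "useFieldDiscovery": True,
                    "fields": [
                        # Promote a nested attribute to a queryable top-level column.
                        {"type": "path", "name": "user_city", "expr": "$.user.address.city"}
                    ]
                }
            }
        },
        "dataSchema": {
            "dataSource": "events",
            "timestampSpec": {"column": "event_time", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["user_city"]},
            "granularitySpec": {"segmentGranularity": "HOUR", "queryGranularity": "MINUTE"}
        },
        "tuningConfig": {"type": "kafka"}
    }
}

# Submit the supervisor through the router's proxy to the Overlord.
resp = requests.post("http://druid-router.example.com:8888/druid/indexer/v1/supervisor",
                     json=supervisor_spec)
resp.raise_for_status()
print(resp.json())
```

Any change to the nested structure of the source events means revisiting the `fields` list above, which is the maintenance burden referred to in the table.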


StarRocks vs Apache Druid Performance

Updates
• StarRocks: While StarRocks is mutable, the update rate is slow, which is why it is most often used for append-only workloads
• Apache Druid: Druid segments become immutable once committed and published, making it more appropriate for append-only use cases

Indexing
• StarRocks: Columnar index, with limited support for inverted indexes
• Apache Druid: Bitmap index

Query latency
• StarRocks: 50-1000 ms queries on 100s of TBs
• Apache Druid: Typically sub-second query latency for denormalized, flattened datasets up to 100s of TBs

Storage format
• StarRocks: Columnstore that organizes data into prefix indexes, per-column data blocks, and per-column indexes; all data is replicated 3 times to achieve both fault tolerance and concurrency
• Apache Druid: Columnar format partitioned by time

Streaming ingest
• StarRocks: Data latency is typically 1-2 seconds
• Apache Druid: Sub-15 seconds

StarRocks was purpose-built for high-performance ingest, low-latency queries, and high concurrency. Achieving that performance, however, requires significant manual tuning.

Druid is designed to make streaming data queryable as quickly as possible. JOINs are either impossible or incur a large performance penalty. Updates are only possible via batch jobs. Druid leverages data denormalization and write-time aggregation at ingestion to reduce query latency.
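
To make the write-time aggregation point concrete, the fragment below shows the general shape of a Druid dataSchema with rollup enabled; the metric and column names are illustrative assumptions.

```python
# Illustrative fragment of a Druid dataSchema (assumed column and metric names):
# with rollup enabled, rows sharing the same dimensions and truncated timestamp are
# pre-aggregated at ingest, so queries scan fewer, smaller rows.
rollup_schema_fragment = {
    "granularitySpec": {
        "segmentGranularity": "HOUR",
        "queryGranularity": "MINUTE",  # timestamps are truncated to the minute
        "rollup": True
    },
    "metricsSpec": [
        {"type": "count", "name": "events"},
        {"type": "longSum", "name": "bytes_total", "fieldName": "bytes"}
    ]
}
```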


StarRocks vs Apache Druid Queries

Joins
• StarRocks: Multi-table join support
• Apache Druid: Yes, but only for datasets that fit in memory, and with a query latency penalty

Query language
• StarRocks: SQL
• Apache Druid: Druid native queries (JSON); Druid SQL

Developer tooling
• StarRocks: Minimal
• Apache Druid: Druid SQL API

Visualization tools
• StarRocks: Compatibility with the MySQL protocol enables StarRocks to work with BI tools
• Apache Druid: Pivot, maintained by Imply

StarRocks uses a high-performance vectorized SQL engine and a custom-built cost-based optimizer, and supports materialized views.
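
As an illustration of that materialized-view support, here is a rough sketch of an asynchronous materialized view that pre-aggregates a fact table; it assumes a recent StarRocks release with async materialized views, and the table, column, and connection details are placeholders.

```python
# Rough sketch (assumed StarRocks version, schema, and connection details): an asynchronous
# materialized view that pre-aggregates events per city and hour. Matching aggregate queries
# can be rewritten against the view by the cost-based optimizer.
import pymysql

conn = pymysql.connect(host="starrocks-fe.example.com", port=9030,
                       user="root", password="", database="demo")
with conn.cursor() as cur:
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS events_per_city_hour
        DISTRIBUTED BY HASH(user_city)
        REFRESH ASYNC
        AS
        SELECT user_city,
               date_trunc('hour', event_time) AS event_hour,
               count(*) AS events
        FROM events_flat
        GROUP BY user_city, date_trunc('hour', event_time)
    """)
conn.close()
```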

Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into its native queries. JOINs are not recommended.
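
A minimal sketch of the Druid SQL path follows, assuming a router at its default port and an "events" datasource; Druid plans the SQL into a native JSON query before executing it.

```python
# Minimal sketch (assumed router URL and datasource): Druid SQL submitted over HTTP.
# Druid translates this statement into a native JSON query internally.
import requests

sql = """
    SELECT user_city, COUNT(*) AS events
    FROM events
    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
    GROUP BY user_city
    ORDER BY events DESC
    LIMIT 10
"""

resp = requests.post("http://druid-router.example.com:8888/druid/v2/sql",
                     json={"query": sql})
resp.raise_for_status()
for row in resp.json():
    print(row)
```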


StarRocks vs Apache Druid Scalability

Vertical scaling
• StarRocks: Both frontend and backend nodes can be manually resized
• Apache Druid: Users can manually increase or decrease server sizes

Horizontal scaling
• StarRocks: Both frontend and backend nodes can be manually scaled horizontally
• Apache Druid: Users can manually add nodes to a scaled-out cluster; Imply automates some of the horizontal scaling process

StarRocks can scale up or out, but because compute and storage are tightly coupled, they must be scaled together, which often results in resource contention and overprovisioning. Scaling StarRocks also requires deep expertise, as there are many levels of the system that need to be managed.
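
For a sense of what manual scale-out involves, the sketch below registers an additional backend (BE) node through the FE's MySQL interface; the hostnames and the heartbeat port are assumptions.

```python
# Sketch of manual scale-out (assumed hostnames and default BE heartbeat port 9050):
# new backend nodes are registered explicitly, and StarRocks then rebalances
# tablet replicas onto them over time.
import pymysql

conn = pymysql.connect(host="starrocks-fe.example.com", port=9030, user="root", password="")
with conn.cursor() as cur:
    cur.execute('ALTER SYSTEM ADD BACKEND "starrocks-be-04.example.com:9050"')
conn.close()
```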

Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.
