Apache Druid vs StarRocks

Compare and contrast Apache Druid and StarRocks by architecture, ingestion, queries, performance, and scalability.

Apache Druid vs StarRocks Architecture

Deployment model
• Apache Druid: SaaS or self-managed. Imply’s Druid cloud offering requires customers to configure, scale, and capacity plan.
• StarRocks: PaaS or self-managed.

Use of storage hierarchy
• Apache Druid: Queries are served from data on disk and an in-memory cache; cloud storage or HDFS is used for deep storage.
• StarRocks: Data is stored on disk and in memory.

Isolation of ingest and query
• Apache Druid: Ingestion and queries run on the same node by default. The ingestion and query processes can be run on separate nodes, though not for real-time data.
• StarRocks: No, but resources for ingestion and querying can be limited separately.

Separation of compute and storage
• Apache Druid: No, although Imply’s offering separates compute and storage.
• StarRocks: No, but StarRocks supports nodes that don’t store data locally.

Isolation for multiple applications
• Apache Druid: No.
• StarRocks: No.

Druid’s architecture employs nodes called data servers that handle both ingestion and queries. High ingestion or query load can therefore cause CPU and memory contention between the two workloads, a problem some Druid alternatives avoid. Breaking apart the pre-packaged ingestion and query server components requires planning ahead, adds complexity, and cannot be done dynamically.

StarRocks is a high-performance OLAP database that can be deployed in the cloud or self-managed. It does not separate compute and storage and offers limited options for resource isolation. It provides a robust set of features and high performance but requires considerable expertise to operate and scale.


Apache Druid vs StarRocks Ingestion

Data sources
• Apache Druid: Data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), and RDBMS and HDFS databases. CDC events from databases require manual conversion to Druid events.
• StarRocks: Streaming sources (Kafka, Flink) and data lakes (HDFS-compatible and cloud storage).

Semi-structured data
• Apache Druid: Requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time.
• StarRocks: Supports columns with JSON data, but not mixed-type columns. Supports star and snowflake schemas.

Transformations and rollups
• Apache Druid: Yes; ingestion specs written in JSON support rollups and simple transformations. SQL ingest transformations are available in Imply.
• StarRocks: Yes, via materialized views.

Druid has built-in connectors that manage ingestion from common data sources. Unlike some Druid competitors, it doesn’t support nested data, so data must be flattened at ingest. Denormalization is also required at ingest, increasing operational burden for certain use cases.
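
To make the flattening requirement concrete, here is a minimal sketch of where a flattenSpec sits inside a Druid ingestion spec’s JSON input format. The field names and JSON paths are hypothetical and only illustrate the shape of the configuration that must be kept in sync with the source schema.

```python
import json

# Hypothetical fragment of a Druid ingestion spec's inputFormat.
# Field names and JSON paths are invented for illustration; the
# flattenSpec must be updated whenever the nested source schema changes.
input_format = {
    "type": "json",
    "flattenSpec": {
        "useFieldDiscovery": True,  # auto-discover top-level fields
        "fields": [
            # pull a nested value out into a flat column
            {"type": "path", "name": "user_city", "expr": "$.user.address.city"},
            # extract a single element from a nested array
            {"type": "path", "name": "first_tag", "expr": "$.tags[0]"},
        ],
    },
}

print(json.dumps(input_format, indent=2))
```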

StarRocks ingests data from a variety of sources, including both batch and streaming data. StarRocks can ingest nested JSON data, but enforces type at the column level.
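
As a rough sketch of column-level typing in practice, the following creates a table with a typed JSON column over StarRocks’ MySQL-compatible protocol from Python. The host, credentials, database, and table definition are placeholders, and DDL details such as the key model and bucketing vary by StarRocks version.

```python
import pymysql  # StarRocks speaks the MySQL wire protocol

# Placeholder connection details; StarRocks FE nodes listen on port 9030 by default.
conn = pymysql.connect(host="starrocks-fe.example.com", port=9030,
                       user="root", password="", database="demo")

# Hypothetical table: nested data goes into a typed JSON column,
# while the remaining columns are strictly typed.
ddl = """
CREATE TABLE IF NOT EXISTS events (
    event_time DATETIME NOT NULL,
    user_id    BIGINT   NOT NULL,
    payload    JSON
)
DUPLICATE KEY(event_time, user_id)
DISTRIBUTED BY HASH(user_id)
"""

with conn.cursor() as cur:
    cur.execute(ddl)
conn.close()
```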

Apache Druid vs StarRocks Performance

Updates
• Apache Druid: Segments become immutable once committed and published, making Druid more appropriate for append-only use cases.
• StarRocks: Mutable, but the update rate is slow, so it is most often used for append-only workloads.

Indexing
• Apache Druid: Bitmap index.
• StarRocks: Columnar index, with limited support for inverted indexes.

Query latency
• Apache Druid: Typically sub-second query latency for denormalized, flattened datasets up to 100s of TBs.
• StarRocks: 50-1000 ms queries on 100s of TBs.

Storage format
• Apache Druid: Columnar format partitioned by time.
• StarRocks: Columnstore that organizes data into prefix indexes, per-column data blocks, and per-column indexes. All data is replicated 3 times for both fault tolerance and concurrency.

Streaming ingest
• Apache Druid: Data latency is typically under 15 seconds.
• StarRocks: Data latency is typically 1-2 seconds.

Druid is designed to make streaming data queryable as quickly as possible. JOINs are either impossible or incur a large performance penalty. Updates are only possible via batch jobs. Druid leverages data denormalization and write-time aggregation at ingestion to reduce query latency.
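
To make write-time aggregation concrete, the fragment below sketches the rollup-related portion of a Druid dataSchema: rows are aggregated to the minute at ingest so queries scan pre-aggregated data. The dimension and metric names are invented for illustration.

```python
import json

# Hypothetical rollup fragment of a Druid dataSchema.
# Dimensions and metrics are made up for illustration.
data_schema_fragment = {
    "granularitySpec": {
        "segmentGranularity": "hour",  # one segment per hour of data
        "queryGranularity": "minute",  # timestamps truncated to the minute
        "rollup": True,                # enable write-time aggregation
    },
    "dimensionsSpec": {"dimensions": ["country", "device"]},
    "metricsSpec": [
        {"type": "count", "name": "row_count"},
        {"type": "longSum", "name": "bytes_sum", "fieldName": "bytes"},
    ],
}

print(json.dumps(data_schema_fragment, indent=2))
```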

StarRocks was purpose-built for high-performance ingest, low-latency queries, and high concurrency. Achieving that performance, however, requires significant manual tuning.


Apache Druid vs StarRocks Queries

Joins
• Apache Druid: Yes, but only for datasets that fit in memory, and with a query latency penalty.
• StarRocks: Multi-table join support.

Query language
• Apache Druid: Druid native queries and Druid SQL.
• StarRocks: SQL.

Developer tooling
• Apache Druid: Druid SQL API.
• StarRocks: Minimal.

Visualization tools
• Apache Druid: Pivot, maintained by Imply.
• StarRocks: Compatibility with the MySQL protocol enables StarRocks to work with BI tools.

Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into its native queries. JOINs are not recommended.
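
As a rough sketch of issuing Druid SQL, the example below POSTs a SQL statement to Druid’s SQL HTTP endpoint, which Druid translates into a native query before execution. The host, datasource, and column names are placeholders.

```python
import requests

# Placeholder router/broker address; /druid/v2/sql is Druid's SQL endpoint.
DRUID_SQL_URL = "http://druid-router.example.com:8888/druid/v2/sql"

# Hypothetical datasource and columns; __time is Druid's built-in time column.
query = """
SELECT country, COUNT(*) AS events
FROM web_events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY country
ORDER BY events DESC
LIMIT 10
"""

resp = requests.post(DRUID_SQL_URL, json={"query": query})
resp.raise_for_status()
for row in resp.json():  # default result format is an array of JSON objects
    print(row["country"], row["events"])
```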

StarRocks uses a high-performance vectorized SQL engine and a custom-built cost-based optimizer, and supports materialized views.
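
Because StarRocks speaks the MySQL protocol, a materialized view can be created from any MySQL-compatible client. The sketch below reuses the hypothetical events table from the ingestion section; exact materialized-view syntax and capabilities vary by StarRocks version.

```python
import pymysql

# Placeholder connection details; see the earlier table-creation sketch.
conn = pymysql.connect(host="starrocks-fe.example.com", port=9030,
                       user="root", password="", database="demo")

# A simple single-table aggregation over the hypothetical `events` table;
# StarRocks maintains the view and can rewrite matching queries to use it.
mv_ddl = """
CREATE MATERIALIZED VIEW events_per_user AS
SELECT user_id, COUNT(*) AS event_count
FROM events
GROUP BY user_id
"""

with conn.cursor() as cur:
    cur.execute(mv_ddl)
conn.close()
```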


Apache Druid vs StarRocks Scalability

Vertical scaling
• Apache Druid: Users can manually increase or decrease server sizes.
• StarRocks: Both frontend and backend nodes can be manually resized.

Horizontal scaling
• Apache Druid: Users can manually add nodes to a scaled-out cluster. Imply automates some of the horizontal scaling process.
• StarRocks: Both frontend and backend nodes can be manually scaled horizontally.

Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.

StarRocks can scale up or out, but because its compute and storage are tightly coupled, they must scale together to maintain performance. This often results in resource contention and overprovisioning. Scaling StarRocks also requires deep expertise, as many layers of the system need to be managed.