
Apache Druid vs SingleStore

Compare and contrast Apache Druid and SingleStore by architecture, ingestion, queries, performance, and scalability.

Compare Apache Druid to Rockset here

Compare SingleStore to Rockset here

Apache Druid vs SingleStore Architecture

Deployment model
Apache Druid: SaaS or self-managed. Imply’s Druid cloud offering requires customers to configure, scale, and capacity plan.
SingleStore: Self-managed and SaaS deployment options.

Use of storage hierarchy
Apache Druid: Queries are served from data on disk and an in-memory cache; cloud storage or HDFS is used for deep storage.
SingleStore: Memory for data requiring the highest performance; high-performance block storage for the persistent cache (the working dataset should fit within the persistent cache); cloud object storage for long-term retention.

Isolation of ingest and query
Apache Druid: Ingestion and queries run on the same node by default. The ingestion and querying processes can be run on separate nodes, though not for real-time data.
SingleStore: No; databases share ingest and queries.

Separation of compute and storage
Apache Druid: No, although Imply’s offering separates compute and storage.
SingleStore: Yes; SingleStore Cloud uses cloud object storage to separate compute and storage.

Isolation for multiple applications
Apache Druid: No
SingleStore: No

Druid’s architecture employs nodes called data servers that are used for both ingestion and queries. High ingestion or query load can therefore cause CPU and memory contention, a problem some Druid alternatives avoid. Breaking apart the pre-packaged ingestion and query server components requires planning ahead, adds complexity, and is not dynamic.

SingleStore is a proprietary distributed relational database that handles both transactional and analytical workloads. It relies on memory and a persistent cache to deliver low-latency queries. For longer-term data retention, SingleStore Cloud separates compute from cloud object storage. SingleStore Cloud pricing is based on compute and storage usage.


Apache Druid vs SingleStore Ingestion

Data sources
Apache Druid: Data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), and RDBMS and HDFS databases. CDC events from databases require manual conversion to Druid events.
SingleStore: Integrations to Amazon S3, Apache Beam, GCS, HDFS, Kafka, Spark, Qlik Replicate, and HVR.

Semi-structured data
Apache Druid: Requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time.
SingleStore: Ingests JSON as a JSON column type.

Transformations and rollups
Apache Druid: Yes; ingestion specs written in JSON support rollups and simple transformations. SQL ingest transformations are available for Imply.
SingleStore: SingleStore pipelines do common data shaping, including normalizing and denormalizing data, adding computed columns, filtering data, mapping data, and splitting records into multiple destination tables.

Druid has built-in connectors that manage ingestion from common data sources. Unlike some Druid competitors, it doesn’t support nested data, so data must be flattened at ingest. Denormalization is also required at ingest, increasing operational burden for certain use cases.
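For illustration, here is a minimal sketch of the kind of flattenSpec a Druid ingestion spec uses to promote nested JSON fields to top-level columns. The field names and JSONPath expressions are hypothetical, and a real spec would also include dataSchema, ioConfig, and tuningConfig sections.

```python
import json

# Hypothetical fragment of a Druid ingestion spec's inputFormat section.
# Nested JSON has to be flattened at ingest; the field names and JSONPath
# expressions below are illustrative only.
input_format = {
    "type": "json",
    "flattenSpec": {
        "useFieldDiscovery": True,  # auto-discover top-level fields
        "fields": [
            # Promote nested attributes to top-level columns at ingest time.
            {"type": "path", "name": "user_city", "expr": "$.user.address.city"},
            {"type": "path", "name": "order_total", "expr": "$.order.total"},
        ],
    },
}

# This fragment would be embedded in a full ingestion or supervisor spec
# before being submitted to Druid; if the source schema changes, the
# flattenSpec has to be updated to match.
print(json.dumps(input_format, indent=2))
```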

SingleStore has integrations to common data lakes and streams. With SingleStore pipelines, users can perform common data transformations during the ingestion process. SingleStore provides limited support for semi-structured data with its JSON column type. Many users structure data prior to ingestion for optimal query performance.
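As a rough sketch of what this looks like in practice, the snippet below creates and starts a pipeline over a MySQL-compatible connection. The host, credentials, Kafka broker, topic, table, and column mappings are all assumptions, and the exact CREATE PIPELINE options should be checked against SingleStore's documentation.

```python
import pymysql  # SingleStore is compatible with MySQL drivers

# Hypothetical connection details for a SingleStore cluster.
conn = pymysql.connect(host="singlestore-host", port=3306,
                       user="admin", password="***", database="demo")

# Sketch of a pipeline that streams a Kafka topic into a table.
# Broker, topic, table, and the column <- key mappings are assumptions.
create_pipeline = """
CREATE PIPELINE clicks_pipeline AS
LOAD DATA KAFKA 'kafka-broker:9092/clicks'
INTO TABLE clicks
FORMAT JSON
(user_id <- user_id, url <- url, ts <- ts)
"""

with conn.cursor() as cur:
    cur.execute(create_pipeline)
    cur.execute("START PIPELINE clicks_pipeline")  # begin continuous ingestion
conn.close()
```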

Apache Druid vs SingleStore Performance

Updates
Apache Druid: Druid segments become immutable once committed and published, making it more appropriate for append-only use cases.
SingleStore: The columnar store/universal storage is immutable; updates are fast when the data still resides in memory.

Indexing
Apache Druid: Bitmap index
SingleStore: Indexes can be manually configured: skiplist index, hash index, full-text index, geospatial index

Query latency
Apache Druid: Typically sub-second query latency for denormalized, flattened datasets up to 100s of TBs
SingleStore: 50-1000 ms queries when the working set is contained in memory

Storage format
Apache Druid: Columnar format partitioned by time
SingleStore: Two table formats: rowstore or columnstore/universal storage

Streaming ingest
Apache Druid: Under 15 seconds
SingleStore: The columnar store/universal storage ingests on a batch basis; data latency is typically seconds by relying on memory

Druid is designed to make streaming data queryable as quickly as possible. JOINs are either impossible or incur a large performance penalty. Updates are only possible via batch jobs. Druid leverages data denormalization and write-time aggregation at ingestion to reduce query latency.

SingleStore has two storage formats: a rowstore and a columnar store referred to as universal storage. The columnar store is used for analytical workloads, loading data in batch and relying on memory to achieve seconds of data latency. The columnar store can also execute queries in seconds when the working set is contained in memory. SingleStore provides the ability to configure and manage additional indexes on the data for faster performance.
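A hedged sketch of how the two formats and an extra index might be declared follows; the table, columns, and key choices are illustrative assumptions rather than recommended settings, and the DDL would be run through any MySQL-compatible client.

```python
# Illustrative DDL only. In recent SingleStore versions, SORT KEY creates a
# columnstore (universal storage) table, SHARD KEY controls distribution
# across leaf nodes, and KEY ... USING HASH adds a secondary hash index.
# Column names and key choices here are assumptions.
columnstore_ddl = """
CREATE TABLE events (
    ts DATETIME(6) NOT NULL,
    user_id BIGINT NOT NULL,
    action VARCHAR(64),
    SORT KEY (ts),
    SHARD KEY (user_id),
    KEY (user_id) USING HASH
)
"""

# A rowstore variant of the same table drops the SORT KEY and uses an
# ordinary primary key instead.
rowstore_ddl = """
CREATE ROWSTORE TABLE events_row (
    ts DATETIME(6) NOT NULL,
    user_id BIGINT NOT NULL,
    action VARCHAR(64),
    PRIMARY KEY (user_id, ts)
)
"""

print(columnstore_ddl)
print(rowstore_ddl)
```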


Apache Druid vs SingleStore Queries

Joins
Apache Druid: Yes, but only for datasets that fit in memory, and with a query latency penalty.
SingleStore: Yes.

Query language
Apache Druid: Druid native queries and Druid SQL
SingleStore: SQL

Developer tooling
Apache Druid: Druid SQL API
SingleStore: API for querying data via POST command; JDBC driver and Python client; compatibility with MySQL and MariaDB to support additional drivers

Visualization tools
Apache Druid: Pivot, maintained by Imply
SingleStore: Integrations with Cognos Analytics, Dremio, Looker, MicroStrategy, Power BI, Sisense, Tableau, and TIBCO Spotfire

Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into its native queries. JOINs are not recommended.
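To make the Druid SQL path concrete, here is a minimal sketch that posts a SQL statement to Druid's HTTP SQL endpoint; the router URL and the web_events datasource are assumptions.

```python
import requests

# Assumed router/broker address; Druid exposes SQL at /druid/v2/sql.
DRUID_SQL_URL = "http://druid-router:8888/druid/v2/sql"

# Hypothetical datasource and columns; Druid SQL is translated into
# native JSON queries under the hood.
query = """
SELECT channel, COUNT(*) AS events
FROM web_events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
GROUP BY channel
ORDER BY events DESC
"""

resp = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
resp.raise_for_status()
for row in resp.json():  # results come back as a JSON array of rows
    print(row)
```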

SingleStore supports SQL as its native query language and can perform SQL joins. It is designed for querying structured data with static schemas. Users can create data APIs to execute SQL statements against the database over an HTTP connection. Common SingleStore use cases include business intelligence and analytics, and the database offers a number of integrations to visualization tools.
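As an illustration of that SQL surface, here is a short sketch that runs a join through a MySQL-compatible driver; the connection details and the orders/customers schema are assumptions.

```python
import pymysql  # SingleStore accepts MySQL-compatible drivers

# Hypothetical cluster and schema; the JDBC or Python clients mentioned
# above could be used instead.
conn = pymysql.connect(host="singlestore-host", port=3306,
                       user="admin", password="***", database="demo")

sql = """
SELECT c.region, SUM(o.total) AS revenue
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.order_date >= NOW() - INTERVAL 7 DAY
GROUP BY c.region
ORDER BY revenue DESC
"""

with conn.cursor() as cur:
    cur.execute(sql)
    for region, revenue in cur.fetchall():
        print(region, revenue)
conn.close()
```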


Apache Druid vs SingleStore Scalability

Vertical scaling
Apache Druid: Users can manually increase or decrease server sizes.
SingleStore: Cloud offering: resize compute workspaces in the UI or using the Management API. Self-managed offering: change the cluster configuration by updating command-line arguments or by modifying the cluster directly.

Horizontal scaling
Apache Druid: Users can manually add nodes to a scaled-out cluster; Imply automates some of the horizontal scaling process.
SingleStore: Self-managed offering: increase or decrease the number of nodes in the cluster; rebalancing is required.

Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.

SingleStore Cloud can be sized up or down using the UI or the Management API. There is no ability to scale out by increasing or decreasing the number of leaf and aggregator nodes in the cloud offering. In the self-managed offering, horizontal and vertical scaling can be done by updating command-line arguments or by modifying the cluster directly. Horizontal scaling does require rebalancing.