
Snowflake vs Apache Druid

Compare and contrast Snowflake and Apache Druid by architecture, ingestion, queries, performance, and scalability.

Compare Snowflake to Rockset here

Compare Apache Druid to Rockset here

Snowflake vs Apache Druid Architecture

| Architecture | Snowflake | Apache Druid |
| --- | --- | --- |
| Deployment model | SaaS; infrastructure, software and cluster ops managed by the service provider | SaaS or self-managed; Imply’s Druid cloud offering requires customers to configure, scale and capacity plan |
| Use of storage hierarchy | Cloud object storage for shared data, accessible from any virtual warehouse | Queries served from data on disk and an in-memory cache; cloud storage or HDFS for deep storage |
| Isolation of ingest and query | Yes; separate virtual warehouses for batch data loading, ELT jobs and queries | Ingestion and queries run on the same node by default; they can be run on separate nodes, but not for real-time data |
| Separation of compute and storage | Yes | No, although Imply’s offering separates compute and storage |
| Isolation for multiple applications | Yes; separate virtual warehouses for each workload | No |

Snowflake is the data warehouse built for the cloud, and is well known for separating storage and compute for better price performance. With Snowflake, multiple virtual warehouses can be spun up or down for batch data loading, transformations and queries, all operating on the same shared data.
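A minimal sketch of this workload isolation in Snowflake SQL. The warehouse, table and stage names (load_wh, bi_wh, events, @events_stage) are hypothetical, and the sizes and timeouts are illustrative:

```sql
-- Separate compute for loading and for BI; both see the same shared data.
CREATE WAREHOUSE load_wh
  WITH WAREHOUSE_SIZE = 'MEDIUM'
       AUTO_SUSPEND   = 300    -- suspend after 5 idle minutes
       AUTO_RESUME    = TRUE;

CREATE WAREHOUSE bi_wh
  WITH WAREHOUSE_SIZE = 'LARGE'
       AUTO_SUSPEND   = 60
       AUTO_RESUME    = TRUE;

-- Batch loading runs on its own compute, isolated from BI queries.
USE WAREHOUSE load_wh;
COPY INTO events FROM @events_stage;

-- Queries use a different warehouse against the same data.
USE WAREHOUSE bi_wh;
SELECT COUNT(*) FROM events;
```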

Druid’s architecture employs nodes called data servers that handle both ingestion and queries. High ingestion or query load can cause CPU and memory contention between the two workloads. Breaking apart the pre-packaged ingestion and query server components requires planning ahead, adds complexity, and is not dynamic.


Snowflake vs Apache Druid Ingestion

| Ingestion | Snowflake | Apache Druid |
| --- | --- | --- |
| Data sources | Third-party ETL tools (e.g. Fivetran, Hevo, Striim); bulk loading from S3, GCS and Azure Blob Storage; Sink Connector for Apache Kafka in Confluent Cloud | Data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), RDBMS and HDFS; CDC events from databases require manual conversion to Druid events |
| Semi-structured data | Ingests JSON and XML as a VARIANT data type | Requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time |
| Transformations and rollups | Third-party ELT/ETL tools like dbt; simple COPY commands at data loading for column reordering, omission and casts | Ingestion specs written in JSON support rollups and simple transformations; SQL ingest transformations available with Imply |

Snowflake is an immutable data warehouse built for batch ingestion, relying heavily on the modern data stack ecosystem for data connectors and transformations. It has a number of integrations with ETL and ELT solutions including Fivetran, Hevo, Striim and dbt. While Snowflake does support semi-structured data in the form of a VARIANT type, it is best to structure the data for optimal query performance.
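A short sketch of VARIANT ingestion, assuming a hypothetical raw_events table and @json_stage stage:

```sql
-- Land raw JSON documents as-is in a single VARIANT column.
CREATE OR REPLACE TABLE raw_events (v VARIANT);

COPY INTO raw_events
  FROM @json_stage
  FILE_FORMAT = (TYPE = 'JSON');

-- Query nested fields with path notation; cast to typed values.
SELECT v:user_id::STRING    AS user_id,
       v:event.type::STRING AS event_type,
       COUNT(*)             AS n
FROM raw_events
GROUP BY 1, 2;
```

Materializing frequently queried paths into typed columns is the usual way to get the structured-for-performance layout recommended above.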

Druid has built-in connectors that manage ingestion from common data sources. It does not support nested data, so records must be flattened at ingest. Denormalization is also required at ingest, which increases the operational burden for certain use cases.


Snowflake vs Apache Druid Performance

| Performance | Snowflake | Apache Druid |
| --- | --- | --- |
| Updates | Immutable storage; updates rewrite and merge entire partitions | Segments become immutable once committed and published, making Druid more appropriate for append-only use cases |
| Indexing | No | Bitmap index |
| Query latency | Seconds to minutes on petabytes of data | Typically sub-second for denormalized, flattened datasets up to 100s of TBs |
| Storage format | Compressed columnar format stored in cloud object storage | Columnar format partitioned by time |
| Streaming ingest | Batch basis; Snowpipe typically ingests in minutes | Sub-15 seconds |

Snowflake is designed for batch analytics, with analysts and data scientists infrequently accessing large-scale data for trend analysis. Like many data warehouses, Snowflake is immutable and does not handle frequently changing data efficiently. Its columnar store returns aggregations and metrics efficiently, with query response times typically in the seconds to minutes on petabytes of data.
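The trend-analysis queries this design targets are large scans and aggregations; a representative example against a hypothetical orders table:

```sql
-- Daily revenue by region: a full-scan aggregation the columnar
-- store handles well, even over very large tables.
SELECT DATE_TRUNC('day', order_ts) AS order_day,
       region,
       SUM(amount) AS revenue
FROM orders
GROUP BY 1, 2
ORDER BY 1, 2;
```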

Druid is designed to make streaming data queryable as quickly as possible, leveraging denormalization and write-time aggregation (rollup) at ingest to reduce query latency. JOINs are either impossible or incur a large performance penalty, and updates are only possible via batch jobs.


Snowflake vs Apache Druid Queries

| Queries | Snowflake | Apache Druid |
| --- | --- | --- |
| Joins | Yes | Yes, but only for datasets that fit in memory, and with a query latency penalty |
| Query language | SQL | Druid native queries (JSON); Druid SQL |
| Developer tooling | SQL API for making SQL calls programmatically; UDFs in JavaScript, Python, Java and SQL; Go, JDBC, .NET, Node.js, ODBC, PHP and Python drivers | Druid SQL API |
| Visualization tools | Integrations with QuickSight, Chartio, Domo, Looker, Power BI, Mode, Qlik, Sigma, Sisense, Tableau, ThoughtSpot and more | Pivot, maintained by Imply |

Snowflake supports SQL as its native query language and can perform SQL joins. For developers, Snowflake provides a SQL API, UDFs and drivers to support application development. As Snowflake was originally built for business intelligence workloads, it integrates with a number of visualization tools for trend analysis.
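As an illustration of the UDF support, a minimal SQL UDF with a hypothetical name and an assumed orders table; Snowflake also accepts JavaScript, Python and Java function bodies:

```sql
-- A simple SQL UDF: the body is a single SQL expression.
CREATE OR REPLACE FUNCTION order_margin(revenue FLOAT, cost FLOAT)
RETURNS FLOAT
AS 'revenue - cost';

SELECT order_id,
       order_margin(revenue, cost) AS margin
FROM orders;
```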

Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into native queries. JOINs are generally not recommended because of their performance cost.
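A small Druid SQL example over a hypothetical events datasource with a channel dimension; __time is Druid's built-in timestamp column and TIME_FLOOR its time-bucketing function:

```sql
-- Hourly event counts per channel for the last day,
-- translated by Druid into a native query.
SELECT TIME_FLOOR(__time, 'PT1H') AS hour_bucket,
       channel,
       COUNT(*) AS event_count
FROM events
WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
GROUP BY 1, 2
ORDER BY event_count DESC
LIMIT 10;
```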


Snowflake vs Apache Druid Scalability

| Scalability | Snowflake | Apache Druid |
| --- | --- | --- |
| Vertical scaling | Resize virtual warehouses via the web interface or DDL commands | Users manually increase or decrease server sizes |
| Horizontal scaling | Multi-cluster warehouses allocate additional clusters for higher-concurrency workloads; auto-scaling policies can be set | Users manually add nodes to a scaled-out cluster; Imply automates some of the horizontal scaling process |

Snowflake virtual warehouses can be scaled up for faster queries or scaled out using multi-cluster warehouses to support higher concurrency workloads. Snowflake has shared blob storage that scales automatically and independently.
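Both scaling paths are plain DDL in Snowflake; a sketch using the hypothetical bi_wh warehouse from earlier (multi-cluster warehouses require the Enterprise edition):

```sql
-- Vertical: resize for faster individual queries.
ALTER WAREHOUSE bi_wh SET WAREHOUSE_SIZE = 'XLARGE';

-- Horizontal: add clusters automatically under concurrent load.
ALTER WAREHOUSE bi_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY    = 'STANDARD';
```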

Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.
