SingleStore Architecture vs Apache Druid
Separation of compute and storage
• SingleStore: Yes. SingleStore Cloud uses cloud object storage to separate compute and storage.
• Druid: No, although Imply's offering separates compute and storage.
Isolation of ingest and query
• SingleStore: No. Databases share ingest and query workloads.
• Druid: Ingestion and queries run on the same node by default. The ingestion and querying processes can be run on separate nodes, though not for real-time data.
Isolation for multiple applications
Use of storage hierarchy
• SingleStore: Memory for data requiring the highest performance; high-performance block storage as a persistent cache (the working dataset should fit within the persistent cache); cloud object storage for long-term retention.
• Druid: Queries are served from data on disk and an in-memory cache; cloud storage or HDFS is used for deep storage.
Self-managed and SaaS deployment options
• SingleStore: SaaS or self-managed.
• Druid: Self-managed; Imply's Druid cloud offering requires customers to configure, scale, and capacity plan.
SingleStore is a proprietary distributed relational database that handles both transactional and analytical workloads. It relies on memory and a persistent cache to deliver low-latency queries. For longer-term data retention, SingleStore Cloud separates compute from cloud object storage. SingleStore Cloud pricing is based on compute and storage usage.
Druid's architecture employs nodes called data servers for both ingestion and queries. High ingestion or query load can therefore cause CPU and memory contention. Breaking apart the pre-packaged ingestion and query server components requires planning ahead, adds complexity, and is not dynamic.
SingleStore Ingestion vs Apache Druid
Data sources and integrations
• SingleStore: Integrations with Amazon S3, Apache Beam, GCS, HDFS, Kafka, Spark, Qlik Replicate, and HVR.
• Druid: Data connectors to event streams (e.g. Kafka, Kinesis), data lakes (e.g. S3, Google Cloud Storage), and RDBMS and HDFS sources. CDC events from databases require manual conversion to Druid events.
Semi-structured data
• SingleStore: Ingests JSON as a JSON column type.
• Druid: Requires flattening nested data at ingest and maintaining a flattening spec as the schema changes over time.
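"Flattening" here means mapping nested JSON fields onto top-level columns, which is what a Druid flattening spec does declaratively. A minimal Python sketch of the idea (a hypothetical helper, not Druid code):

```python
def flatten(event, prefix="", sep="."):
    """Recursively map nested dict fields to dotted top-level keys,
    similar in spirit to what a Druid flattening spec produces."""
    flat = {}
    for key, value in event.items():
        name = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            # Recurse into nested objects, carrying the dotted prefix.
            flat.update(flatten(value, prefix=name, sep=sep))
        else:
            flat[name] = value
    return flat

event = {"user": {"id": 7, "geo": {"country": "US"}}, "action": "click"}
print(flatten(event))
# {'user.id': 7, 'user.geo.country': 'US', 'action': 'click'}
```

The operational cost the comparison refers to is that whenever the nested schema changes, this mapping, like a real flattening spec, has to be updated to match.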
Transformations and rollups
• SingleStore: Pipelines perform common data shaping, including normalizing and denormalizing data, adding computed columns, filtering, mapping, and splitting records into multiple destination tables.
• Druid: Yes; ingestion specs written in JSON support rollups and simple transformations. SQL ingest transformations are available with Imply.
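Druid's rollups and transformations are declared in the JSON ingestion spec. An abbreviated, illustrative Kafka supervisor spec (the topic, datasource, and field names are hypothetical, and many required fields are omitted) might look like:

```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "topic": "clicks",
      "consumerProperties": { "bootstrap.servers": "kafka:9092" }
    },
    "dataSchema": {
      "dataSource": "clicks_rollup",
      "timestampSpec": { "column": "ts", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["country", "page"] },
      "metricsSpec": [
        { "type": "count", "name": "events" },
        { "type": "longSum", "name": "bytes_total", "fieldName": "bytes" }
      ],
      "granularitySpec": { "queryGranularity": "minute", "rollup": true },
      "transformSpec": {
        "transforms": [
          { "type": "expression", "name": "page", "expression": "lower(url)" }
        ]
      }
    }
  }
}
```

The `granularitySpec` enables rollup at minute granularity, `metricsSpec` defines the write-time aggregations, and `transformSpec` holds the simple row-level transformations the comparison mentions.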
SingleStore has integrations with common data lakes and streams. With SingleStore pipelines, users can perform common data transformations during the ingestion process. SingleStore provides limited support for semi-structured data through its JSON column type, and many users structure data prior to ingestion for optimal query performance.
Druid has built-in connectors that manage ingestion from common data sources. It doesn't support nested data, so data must be flattened at ingest. Denormalization is also required at ingest, increasing the operational burden for certain use cases.
SingleStore Queries vs Apache Druid
SQL support
• SingleStore: Yes; SQL is the native query language.
• Druid: Druid native queries (JSON-based); Druid SQL, which translates into native queries.
JOIN support
• SingleStore: Yes.
• Druid: Yes, but only for datasets that fit in memory, and with a query latency penalty.
APIs and drivers
• SingleStore: API for querying data via POST command; JDBC driver and Python client; compatibility with MySQL and MariaDB to support additional drivers.
• Druid: Druid SQL API.
Visualization and BI integrations
• SingleStore: Cognos Analytics, Dremio, Looker, MicroStrategy, Power BI, Sisense, Tableau, and Tibco Spotfire.
• Druid: Pivot, maintained by Imply.
SingleStore supports SQL as its native query language and can perform SQL joins. It is designed for querying structured data with static schemas. Users can create data APIs to execute SQL statements against the database over an HTTP connection. Common SingleStore use cases include business intelligence and analytics, and the database offers a number of integrations to visualization tools.
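As an illustration of the analytical SQL described above, a representative query with a join (the orders and customers tables here are hypothetical):

```sql
-- Hypothetical schema: orders(customer_id, amount, ordered_at),
--                      customers(id, region)
SELECT c.region,
       COUNT(*)      AS order_count,
       SUM(o.amount) AS revenue
FROM orders AS o
JOIN customers AS c ON c.id = o.customer_id
WHERE o.ordered_at >= NOW() - INTERVAL 30 DAY
GROUP BY c.region
ORDER BY revenue DESC;
```

Because SingleStore can join at query time, the orders stream does not have to be denormalized against the customers table before ingestion.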
Druid has a native JSON-based query language and provides Druid SQL as an alternative that translates into its native queries. JOINs are not recommended.
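For comparison, a simple query in Druid's native JSON format (the datasource and field names are hypothetical):

```json
{
  "queryType": "timeseries",
  "dataSource": "clicks",
  "granularity": "hour",
  "intervals": ["2023-01-01/2023-01-02"],
  "aggregations": [
    { "type": "longSum", "name": "bytes_total", "fieldName": "bytes" }
  ]
}
```

A Druid SQL statement such as `SELECT TIME_FLOOR(__time, 'PT1H'), SUM(bytes) FROM clicks GROUP BY 1` is translated into a native query of roughly this shape before execution.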
SingleStore Performance vs Apache Druid
Ingestion and data latency
• SingleStore: The columnar store (universal storage) ingests on a batch basis; data latency is typically seconds, achieved by relying on memory.
• Druid: Sub-15-second data latency.
Updates
• SingleStore: The columnar store/universal storage is immutable; updates are fast when the data still resides in memory.
• Druid: Segments become immutable once committed and published, making Druid more appropriate for append-only use cases.
Storage format
• SingleStore: Two table formats: rowstore or columnstore (universal storage).
• Druid: Columnar format partitioned by time.
Indexing
• SingleStore: Indexes can be manually configured: skiplist, hash, full-text, and geospatial indexes.
• Druid: Bitmap index.
Query latency
• SingleStore: 50-1000 ms queries when the working set is contained in memory.
• Druid: Typically sub-second query latency for denormalized, flattened datasets up to 100s of TBs.
SingleStore has two storage formats: a rowstore and a columnar store referred to as universal storage. The columnar store is used for analytical workloads, loading data in batches and relying on memory to achieve data latency of seconds. It can execute queries in 50-1000 milliseconds when the working set is contained in memory. SingleStore also lets users configure and manage additional indexes for faster performance.
Druid is designed to make streaming data queryable as quickly as possible. JOINs are either impossible or incur a large performance penalty, and updates are only possible via batch jobs. Druid leverages denormalization and write-time aggregation at ingestion to reduce query latency.
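Write-time aggregation (rollup) collapses raw events that share the same dimensions and truncated timestamp into one pre-aggregated row, so queries scan fewer rows. A minimal Python sketch of the effect (not Druid code):

```python
from collections import defaultdict

def rollup(events, dims, metric, granularity_s=60):
    """Collapse raw events into one row per (truncated timestamp, dims),
    summing the metric -- the effect of write-time rollup."""
    table = defaultdict(int)
    for e in events:
        # Truncate the timestamp to the rollup granularity (e.g. 1 minute).
        bucket = e["ts"] - e["ts"] % granularity_s
        key = (bucket,) + tuple(e[d] for d in dims)
        table[key] += e[metric]
    return table

events = [
    {"ts": 100, "country": "US", "bytes": 10},
    {"ts": 110, "country": "US", "bytes": 5},
    {"ts": 100, "country": "DE", "bytes": 7},
]
print(dict(rollup(events, ["country"], "bytes")))
# {(60, 'US'): 15, (60, 'DE'): 7}
```

The trade-off is that individual events are no longer recoverable after rollup, which is one reason Druid fits append-only, aggregate-oriented use cases.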
SingleStore Scalability vs Apache Druid
Vertical scaling
• SingleStore: Cloud offering: resize compute workspaces in the UI or using the Management API. Self-managed offering: change the cluster configuration by updating command-line arguments or by modifying the cluster directly.
• Druid: Users can manually increase or decrease server sizes.
Horizontal scaling
• SingleStore: Self-managed offering only: increase or decrease the number of nodes in the cluster; rebalancing is required.
• Druid: Users can manually add nodes to a scaled-out cluster; Imply automates some of the horizontal scaling process.
SingleStore Cloud can be sized up or down using the UI or the Management API, but there is no ability to scale out by increasing or decreasing the number of leaf and aggregator nodes in the cloud offering. In the self-managed offering, horizontal and vertical scaling can be performed by updating command-line arguments or by modifying the cluster directly; horizontal scaling does require rebalancing.
Druid users are exposed to complex decisions about the number and size of servers as clusters are scaled.