How to Handle Database Joins in Apache Druid vs Rockset

July 7, 2021


Apache Druid is a real-time analytics database, providing business intelligence to drive clickstream analytics, analyze risk, monitor network performance, and more.

When Druid was introduced in 2011, it did not initially support joins, but a join feature was added in 2020. This is important because it’s often helpful to include fields from multiple Druid datasources — or multiple tables in a normalized data set — in a single query, providing the equivalent of an SQL join in a relational database.

This article focuses on implementing database joins in Apache Druid, looks at some limitations developers face, and explores possible solutions.


We’ll start by acknowledging that the Druid documentation says query-time joins aren’t recommended and that, if possible, you should join your data before loading it into Druid. If you’ve worked with relational databases, you may recognize this pre-joining concept by another name: denormalization.

We don’t have space to dive into denormalization in depth, but it boils down to determining ahead of time which fields you’d like to include across several tables, creating a single table that contains all of those fields, and then populating that table with data. This removes the need to do a runtime join because all of the data you need is available in a single table.
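As a sketch, denormalizing before ingestion might look like the following plain SQL, run in your upstream database before loading the result into Druid. The customers and orders tables here are hypothetical, purely for illustration:

```sql
-- Pre-join (denormalize) two hypothetical tables into one wide table,
-- so Druid never has to join at query time.
CREATE TABLE orders_denormalized AS
SELECT
  o.order_id,
  o.order_total,
  o.ordered_at,
  c.customer_name,
  c.customer_region
FROM orders o
INNER JOIN customers c ON o.customer_id = c.customer_id;
```

Every field you might want to query together now lives in one table, at the cost of duplicating customer data on every order row.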

Denormalization is great when you know in advance what data you want to query. This doesn’t always match real-world needs, however. If you need to do a variety of ad-hoc queries on data that spans many tables, denormalization may be a poor fit. It’s also less-than-ideal when you need true real-time querying because the time needed to denormalize data before making it available to Druid may introduce unacceptable latency.

If we do need to perform a query-time join in Druid, what are our options?

Types of Database Joins in Druid

There are two approaches to Druid database joins: join operators and query-time lookups.

Join Operators

Join operators connect two or more datasources, such as data files and Druid tables. Essentially, datasources in Apache Druid are things that you can query. You can join datasources in a way similar to joins in a relational database, and you can even use an SQL query to do so. You can also stack joins on top of each other to combine many datasources in a single query.

Druid supports two types of queries — native queries and SQL queries — and you can do joins with both of them. Native queries are specified using JSON, and SQL queries are very similar to the kinds of SQL queries available on a relational database.

Joins in SQL Queries

Internally, Druid translates SQL queries into native queries using a data broker, and any Druid SQL JOIN operators that the native layer can handle are then translated into join datasources from which Druid extracts data. A Druid SQL join takes the form:

   SELECT <fields from tables>
   FROM <base table>
   [INNER | LEFT] JOIN <other table> ON <join condition>

The first important thing to note is that due to the broadcast hash-join algorithm Druid uses, the table on the right-hand side of the join must fit in memory, because Druid broadcasts it to every server involved in the query. If the table you want to join against is too large to fit in memory, see if denormalization is an option. If not, you’ll have to add more memory to the machines Druid is running on, or look to a different datastore.

The join condition in an SQL join query must be an equality that tells Druid which columns in each of the two tables contain identical data so Druid can determine which rows to combine data from. A simple join condition might look like dog.id = puppy.parent_id. You can also use functions in the join condition equality, for example LOWER(t1.x) = t2.x.
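Putting a function-based join condition into a complete query might look like the following sketch. The table and column names are illustrative, not from a real schema; it assumes both tables store the same codes in different letter cases:

```sql
-- Hypothetical tables: orders stores customer codes in mixed case,
-- customers stores them lowercased, so we normalize in the condition.
SELECT
  customers.name,
  SUM(orders.total) AS customer_total
FROM orders
INNER JOIN customers
  ON LOWER(orders.customer_code) = customers.customer_code
GROUP BY customers.name
```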

Note that Druid SQL is more permissive than native Druid queries. In some cases, Druid can’t translate an SQL join into a single native query, so an SQL join may result in several native subqueries to return the desired results. For instance, foo OUTER JOIN users ON foo.abc = UPPER(users.def) is an SQL join that cannot be directly translated to a join datasource because there is an expression on the right-hand side instead of simple column access.

Subqueries carry a substantial performance penalty, so use caution when specifying complex join conditions. Usually, Druid buffers the results from subqueries in memory in the data broker, and some additional processing occurs in the broker. Subqueries with large result sets can cause bottlenecks or run into memory limits in the broker — which is another reason to avoid subqueries if at all possible.

Be aware that Druid SQL does not support the following SQL join features:

  • Joins between some combinations of native datasources, including two regular tables
  • Join conditions that are not an equality between expressions from the two sides
  • Join conditions containing a constant value inside the condition

We’ll finish up with a complete example of a Druid SQL join. This query joins a purchases table against the shop_to_product lookup to total revenue per product:

   SELECT
      shop_to_product.v AS product,
      SUM(purchases.revenue) AS product_revenue
   FROM purchases
      INNER JOIN lookup.shop_to_product ON purchases.shop = shop_to_product.k
   GROUP BY shop_to_product.v

Join Datasources in Native Queries

Next, we’ll examine how to create join datasources in native queries. We’re assuming you’re already familiar with regular native JSON queries in Druid.

The following properties characterize join data sources in native queries:

Left — The left-hand side of the join must be a table, join, lookup, query, or inline datasource. Because the left-hand side can itself be a join, you can chain joins to connect many datasources.

Right — The right-hand data source must be a lookup, query, or inline datasource.

Right Prefix — This is a string prefix placed on columns from the right-hand data source to avoid a conflict with columns from the left-hand side. The string must be non-empty.

Condition — The condition must be an equality that compares an expression over columns from the left-hand side to a column from the right-hand side.

Join type — INNER or LEFT.

The following is an example of a Druid native join:

   {
     "queryType": "groupBy",
     "dataSource": {
       "type": "join",
       "left": "purchases",
       "right": {
         "type": "lookup",
         "lookup": "shop_to_product"
       },
       "rightPrefix": "r.",
       "condition": "shop == \"r.k\"",
       "joinType": "INNER"
     },
     "intervals": ["0000/3000"],
     "granularity": "all",
     "dimensions": [
       { "type": "default", "outputName": "product", "dimension": "r.v" }
     ],
     "aggregations": [
       { "type": "longSum", "name": "product_revenue", "fieldName": "revenue" }
     ]
   }

This will return a result set showing cumulative revenue for each product in a shop.

Query-Time Lookups

Query-time lookups are pre-defined key-value associations that reside in-memory on all servers in a Druid cluster. With query-time lookups, Druid substitutes stored values with the values they map to at query time. They’re a special case of Druid’s standard lookup functionality, and although we don’t have space to cover lookups in minute detail, let’s walk through them briefly.

Query-time lookups support one-to-one matching of unique values, such as user privilege ID and user privilege name. For example, P1 -> Delete, P2 -> Edit, P3 -> View. They also support many-to-one matching, where the operation must match multiple keys to a single value. Here’s a case where several user privilege IDs map to a single user account: P1 -> Admin, P2 -> Admin, P3 -> Admin.
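Assuming a registered lookup named privilege_names holding mappings like the ones above, you could apply it in Druid SQL with the LOOKUP function. The lookup name, table, and column here are illustrative:

```sql
-- 'privilege_names' is a hypothetical registered lookup (P1 -> Delete, ...),
-- applied to a hypothetical audit_log table at query time.
SELECT
  LOOKUP(privilege_id, 'privilege_names') AS privilege_name,
  COUNT(*) AS uses
FROM audit_log
GROUP BY LOOKUP(privilege_id, 'privilege_names')
```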

One advantage of query-time lookups is that they do not keep history; they always reflect the current data as it is updated. That means if a particular user privilege ID is mapped to an individual administrator (for example, P1 -> David_admin) and a new administrator comes in, a lookup of that privilege ID returns the name of the new administrator.

One drawback of query-time lookups is that they don’t support time-range-sensitive data lookups.

Some Disadvantages of Druid Join Operators

Although Druid does support database joins, they are relatively new and have some drawbacks.

Datasources on the right-hand side of joins must fit in memory. Druid uses a broadcast hash-join algorithm, so the right-hand table is copied to every server involved in the query. Druid also stores subquery results in memory to enable speedy retrieval, so subqueries with large result sets occupy (and may exhaust) that memory.

Not all joins are supported. Druid’s join operators do not cover every join type; non-broadcast hash joins, for example, are unavailable. Join conditions also cannot reference columns containing multi-value dimensions.

A single join query may generate multiple (possibly slow) subqueries. Some SQL joins cannot be expressed directly in Druid’s native language, so Druid must rewrite them as subqueries to make them executable. This sometimes produces multiple subqueries that consume a lot of memory, causing a performance bottleneck.

For these reasons, Druid’s documentation recommends against running joins at query time.

Rockset Compared to Apache Druid

Although Druid has many useful features for real-time analytics, it presents a couple of challenges, such as a lack of support for all database joins and significant performance overhead when doing joins. Rockset addresses these challenges with one of its core features: high-performance SQL joins.

In supporting full-featured SQL, Rockset was designed with join performance in mind. Rockset partitions the joins, and these partitions run in parallel on distributed Aggregators that can be scaled out if needed. It also has multiple ways of performing joins:

  • Hash Join
  • Nested loop Join
  • Broadcast Join
  • Lookup Join

The ability to join data in Rockset is particularly useful when analyzing data across different database systems and live data streams. Rockset can be used, for example, to join a Kafka stream with dimension tables from MySQL. In many situations, pre-joining the data is not an option because data freshness is important or the ability to perform ad hoc queries is required.
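As an illustration, a Rockset query joining a Kafka-backed events collection with a MySQL-sourced products dimension table might look like the following sketch. The collection and field names are assumed for this example, not taken from a real deployment:

```sql
-- 'events' is assumed to be ingested continuously from a Kafka stream;
-- 'products' is assumed to be a dimension table synced from MySQL.
SELECT
  p.product_name,
  COUNT(*) AS purchase_events
FROM events e
INNER JOIN products p ON e.product_id = p.id
WHERE e.event_type = 'purchase'
GROUP BY p.product_name
```

Because the join runs at query time, the results stay fresh as new events arrive from Kafka, with no pre-joining step.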

You can think of Rockset as an alternative to Apache Druid, with improved flexibility and manageability. Rockset enables you to perform schemaless ingestion and query that data immediately, without having to denormalize your data or avoid runtime joins.

If you are looking to minimize data and performance engineering needed for real-time analytics, Rockset may be a better choice.


Next Steps

Apache Druid processes high volumes of real-time data in online analytical processing applications. The platform offers a range of real-time analytics features, such as low-latency data ingestion. However, it also has its shortcomings, like not supporting all forms of database joins.

Rockset helps overcome Druid’s limited join support. As a cloud-native, real-time indexing database, Rockset offers both speed and scale and supports a wide range of features, including joins. Start a free trial today to experience the most flexible real-time analytics in the cloud.