
StarRocks vs Apache Pinot

Compare and contrast StarRocks and Apache Pinot by architecture, ingestion, queries, performance, and scalability.


StarRocks vs Apache Pinot Architecture

| Architecture | StarRocks | Apache Pinot |
| --- | --- | --- |
| Deployment model | PaaS or self-managed | PaaS or self-managed |
| Use of storage hierarchy | Data is stored on disk and in memory | Hot storage plus a Deep Store for backup and restore operations |
| Isolation of ingest and query | No, but resources for ingestion and querying can be limited separately | No |
| Separation of compute and storage | No, but StarRocks supports nodes that don't store data locally | No, although StarTree separates compute and cloud storage, an important consideration when weighing StarTree against open-source Pinot |
| Isolation for multiple applications | No | Full isolation with replication |

StarRocks is a high-performance OLAP database that can be deployed in the cloud or self-managed. It does not separate compute and storage and offers only limited options for resource isolation. It provides a robust feature set and high performance but requires considerable expertise to operate and scale.

Pinot is a real-time distributed OLAP datastore that ingests both batch and streaming data. It has a distributed systems architecture that scales both horizontally and vertically, but unlike some alternative OLAP databases, it does not decouple storage and compute. It supports both self-managed and PaaS options.


StarRocks vs Apache Pinot Ingestion

| Ingestion | StarRocks | Apache Pinot |
| --- | --- | --- |
| Data sources | Streaming: Kafka, Flink; data lakes: HDFS-compatible storage, cloud storage | Streaming: AWS Kinesis, Apache Kafka; batch: cloud storage, PostgreSQL, MySQL, Snowflake, Google BigQuery; file upload; write API |
| Semi-structured data | Supports columns with JSON data; does not support mixed-type columns; supports star and snowflake schemas | Supports JSON column types, but JSON must first be converted to a string |
| Transformations and rollups | Yes, via materialized views | Yes |

StarRocks ingests data from a variety of sources, including both batch and streaming data. It can ingest nested JSON data but enforces types at the column level.
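To illustrate the typed JSON handling, the sketch below shows a StarRocks table with a JSON column queried with a JSON path function. Table and column names are illustrative, and exact DDL options (key type, bucketing) vary by version:

```sql
-- Illustrative StarRocks table with a typed JSON column
CREATE TABLE events (
    event_id BIGINT,
    payload  JSON
)
DUPLICATE KEY (event_id)
DISTRIBUTED BY HASH (event_id);

-- Extract a nested field at query time
SELECT event_id,
       json_query(payload, '$.user.country') AS country
FROM events;
```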

Pinot supports high-performance ingest from streaming data sources. Each table is either offline or real-time. Real-time tables have a smaller retention period and scale based on ingestion rate, while offline tables have a larger retention period and scale based on the amount of data. To persistently store the generated segments that make up a table, you will need to change the controller and server configs to add deep storage.
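The deep-store wiring above can be sketched as controller and server properties. The example below assumes an S3-backed deep store; the bucket name and region are placeholders:

```properties
# controller.conf -- point segment storage at an S3 deep store
controller.data.dir=s3://example-bucket/pinot-segments
pinot.controller.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.controller.storage.factory.s3.region=us-east-1
pinot.controller.segment.fetcher.protocols=file,http,s3
pinot.controller.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher

# server.conf -- let servers fetch segments from the same deep store
pinot.server.storage.factory.class.s3=org.apache.pinot.plugin.filesystem.S3PinotFS
pinot.server.storage.factory.s3.region=us-east-1
pinot.server.segment.fetcher.protocols=file,http,s3
pinot.server.segment.fetcher.s3.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```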


StarRocks vs Apache Pinot Performance

| Performance | StarRocks | Apache Pinot |
| --- | --- | --- |
| Updates | While StarRocks is mutable, the update rate is slow, so it is most often used for append-only workloads | All data is immutable by default; upserts are supported only for streaming ingest, with no support for upserts on data using the star-tree index or on out-of-order events |
| Indexing | Columnar index; limited support for inverted indexes | Manually configured; forward index, inverted index, star-tree index (columnar), bloom filter, range index, search index, JSON index, geospatial index, timestamp index |
| Query latency | 50-1,000 ms queries on 100s of TB | 50-1,000 ms queries on 100s of TB |
| Storage format | Columnstore that organizes data into prefix indexes, per-column data blocks, and per-column indexes; all data is replicated three times for both fault tolerance and concurrency | Columnar format with additional indexes for fast filtering, aggregation, and group by |
| Streaming ingest | Data latency is typically 1-2 seconds | 1-2 second ingest for streaming data |

StarRocks was purpose-built for high-performance ingest, low-latency queries, and high concurrency, but optimized performance requires significant manual tuning.

Like its competitors, Pinot can achieve sub-second query latency at high concurrency. However, this level of performance requires tuning, management, and deep expertise. Compared with the open-source version, the PaaS versions of Pinot address some of these issues, but similarly require expertise while making tradeoffs affecting query performance.
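As one example of the tuning involved, upserts on a Pinot real-time table are enabled in the table config. The abbreviated, hypothetical sketch below shows only the upsert-related keys; a complete table config also needs schema, stream, and segment settings:

```json
{
  "tableName": "orders",
  "tableType": "REALTIME",
  "routing": { "instanceSelectorType": "strictReplicaGroup" },
  "upsertConfig": { "mode": "FULL" }
}
```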


StarRocks vs Apache Pinot Queries

| Queries | StarRocks | Apache Pinot |
| --- | --- | --- |
| Joins | Multi-table join support | SQL joins limited to inner join, left outer join, and semi-join |
| Query language | SQL | SQL |
| Developer tooling | Minimal | API for querying SQL via POST command; clients for JDBC, Java, Python, and Golang; integrations with Trino and Presto |
| Visualization tools | Compatibility with MySQL protocols enables StarRocks to work with BI tools | Integrations with Tableau and Superset |

StarRocks uses a high-performance vectorized SQL engine and a custom-built cost-based optimizer, and it supports materialized views.
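As a sketch of the materialized-view support, an asynchronous materialized view that pre-aggregates an assumed orders table might look like the following (names are illustrative; refresh options vary by version):

```sql
-- Illustrative async materialized view pre-aggregating daily revenue
CREATE MATERIALIZED VIEW daily_revenue
REFRESH ASYNC
AS
SELECT order_date, SUM(amount) AS revenue
FROM orders
GROUP BY order_date;
```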

In Pinot, SQL queries are received by brokers, which scatter the request across real-time and offline servers. The servers process their portions of the request and send results back to the broker, which merges them and responds with the final result. Joins are limited, as is support for UDFs and subqueries, making Pinot more or less useful depending on the use case.


StarRocks vs Apache Pinot Scalability

| Scalability | StarRocks | Apache Pinot |
| --- | --- | --- |
| Vertical scaling | Both frontend and backend nodes can be manually resized | All four Pinot node types can be resized manually |
| Horizontal scaling | Both frontend and backend nodes can be manually scaled horizontally | Users can add Pinot nodes to scale horizontally; rebalancing is manual |

StarRocks can scale up or out, but because its compute and storage are tightly coupled, they must scale together. This often results in resource contention and overprovisioning. Scaling StarRocks often requires deep expertise, as many levels of the system must be managed.

Pinot allows for vertical scaling by increasing CPU and memory for each node as well as horizontal scaling by adding additional nodes. Capacity planning is a time-consuming, iterative, and manual task. It involves load testing and tuning across multiple vectors including read QPS, write QPS, number of streaming partitions, daily data size, retention period, types of workloads, number and type of segments, and much more.
