Elasticsearch Architecture vs ClickHouse
Separation of compute and storage
No - although ClickHouse Cloud was rearchitected to decouple compute and storage
Isolation of ingest and query
No - there are dedicated ingestion nodes, but indexing, compaction, and updates occur on the data nodes
Isolation for multiple applications
Full isolation with replication
Use of storage hierarchy
• Hot, warm and cold storage on disk
• Frozen storage on cloud storage
• Designed to use hard disk drives for storage
• Can also use SSD if available
On-prem, PaaS options
• Self-managed on-premises or on cloud infrastructure
• Several managed cloud services available
Elasticsearch is an open-source distributed search engine built on Apache Lucene, a full-text search library. Elasticsearch is a distributed system, which means that it is designed to operate across multiple nodes, each responsible for a part of the data.
ClickHouse is open source and can be deployed anywhere. Several vendors such as ClickHouse, the company, and Tinybird also offer cloud versions. Compute and storage are tightly coupled, although ClickHouse Cloud was rearchitected to decouple compute and storage. ClickHouse Cloud pricing is based on compute and storage usage.
Elasticsearch Ingestion vs ClickHouse
• Logstash JDBC input plugin for relational databases
• Open-source Kafka plugin or Kafka Elasticsearch Service Sink Connector (available only with managed Confluent and Elastic deployments)
• REST APIs or client libraries to sync data directly from the application
• Core integrations for ingestion from Kafka, S3, Google Cloud Storage
• Other partner and community integrations available
Semi-structured data
Yes - ingests JSON and XML without a predefined schema
• JSON Object type for handling nested JSON
• Automatically infers schema from a subset of rows
Transformations and rollups
Yes - ingest pipelines can be configured to remove fields, extract values from text, and enrich data. Ingest pipelines require ingest nodes in the cluster. Rolling up historical data is in technical preview.
Yes - several table engines, such as SummingMergeTree and AggregatingMergeTree, can pre-aggregate data
Elasticsearch has a number of integrations as well as a REST API. It is a NoSQL database and natively supports semi-structured data. Transformations typically occur upstream so that data can be modeled for optimal performance before it is indexed in Elasticsearch.
ClickHouse has core integrations from common sources such as Kafka and S3. It recently introduced greater ability to handle semi-structured data using the JSON Object type and automatic schema inference.
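The automatic schema inference mentioned above can be illustrated with a short sketch. This is a toy Python version of the idea, not ClickHouse's actual implementation; the type names and the size of the sampled subset are simplified assumptions.

```python
import json

def infer_type(value):
    # Map a Python value to a simple column type name.
    if isinstance(value, bool):
        return "Bool"
    if isinstance(value, int):
        return "Int64"
    if isinstance(value, float):
        return "Float64"
    if isinstance(value, str):
        return "String"
    return "JSON"  # nested objects and arrays stay semi-structured

def infer_schema(rows, sample_size=3):
    # Inspect only a subset of rows, as automatic inference typically does.
    schema = {}
    for row in rows[:sample_size]:
        for key, value in json.loads(row).items():
            schema.setdefault(key, infer_type(value))
    return schema

rows = [
    '{"user": "alice", "latency_ms": 42, "tags": ["a"]}',
    '{"user": "bob", "latency_ms": 17, "ok": true}',
]
print(infer_schema(rows))
# → {'user': 'String', 'latency_ms': 'Int64', 'tags': 'JSON', 'ok': 'Bool'}
```

Because only a sample of rows is inspected, a column that first appears later in the stream would be missed, which is why inference works best on reasonably uniform data.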
Elasticsearch Queries vs ClickHouse
DSL - domain-specific language
No - requires workarounds including data denormalization, application-side joins, nested objects, or parent-child relationships
• REST API
• API for querying SQL via POST command
• Python, Java, Node.js and Go language clients
• PowerBI, Qlik, Tableau
Integrations with Metabase, Superset, Grafana, Tableau, Deepnote and Rocket BI
Elasticsearch has its own domain-specific language (DSL) based on JSON. Joins are not a first-class citizen in Elasticsearch, requiring a number of complex and expensive workarounds. Elasticsearch is known for its developer tooling and supports a number of client libraries. Kibana is the visualization layer for Elasticsearch and is frequently used for log analytics and monitoring.
ClickHouse uses SQL for querying, with support for SQL joins. ClickHouse integrates with some common tools for visual analytics, including Superset, Grafana and Tableau.
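The application-side join workaround mentioned above can be sketched in a few lines: fetch two result sets separately, then match them by key in application code. The table and field names below are hypothetical; in ClickHouse the same result would be a single SQL join.

```python
def application_side_join(users, orders, key="user_id"):
    # Emulate "orders JOIN users" in application code:
    # build a lookup table for one side, then probe it for each row
    # of the other side.
    users_by_id = {u[key]: u for u in users}
    joined = []
    for order in orders:
        user = users_by_id.get(order[key])
        if user is not None:
            joined.append({**order, "user_name": user["name"]})
    return joined

users = [{"user_id": 1, "name": "alice"}, {"user_id": 2, "name": "bob"}]
orders = [{"order_id": 10, "user_id": 2, "total": 99.5}]
print(application_side_join(users, orders))
# → [{'order_id': 10, 'user_id': 2, 'total': 99.5, 'user_name': 'bob'}]
```

The cost of this pattern is that both result sets travel over the network and the join logic lives in every consuming application, which is why first-class SQL joins are simpler to operate.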
Elasticsearch Performance vs ClickHouse
• Ingests on a per-record or batch basis
• Data latency on a per-record basis is typically 1-2 seconds
Recommends inserting in batches of at least 1,000 rows, at no more than 1 insert per second
The Update API can update, delete, or skip modifying a document. The entire document must be reindexed; in-place updates are not supported
• Writes to immutable files
• Updates rewrite and merge data files asynchronously
• Frequent updates are not recommended due to potential for large rewrites
• Heavily compressed to minimize storage footprint
• Primary indexes use sparse indexing on data ordered by primary key
• Secondary data skipping indexes
50-1000ms queries on 100s of TBs
Sub-100ms to seconds, optimized for large-scale aggregations
Elasticsearch is a search engine that utilizes an inverted index. Although this approach leads to storage amplification, it also enables low-latency queries that demand less computation. Elasticsearch is tailored to accommodate large scale, append-only data such as logs, events, and metrics. To manage frequently updated data, users often utilize the Bulk API to minimize computational costs and ensure consistent query performance.
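The inverted index at the heart of Elasticsearch can be sketched in miniature. This toy Python version shows why the structure makes term lookups fast while amplifying storage: every term is stored a second time, as a key in the index.

```python
from collections import defaultdict

def build_inverted_index(docs):
    # Map each term to the set of document ids that contain it --
    # the core structure behind Lucene/Elasticsearch full-text search.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, term):
    # A term lookup is a single dictionary access, not a scan of documents.
    return sorted(index.get(term.lower(), set()))

docs = {
    1: "error connecting to database",
    2: "database connection restored",
    3: "user login error",
}
index = build_inverted_index(docs)
print(search(index, "error"))     # → [1, 3]
print(search(index, "database"))  # → [1, 2]
```

Real Lucene indexes add positions, term frequencies, and compression on top of this structure, but the trade-off is the same: extra storage in exchange for low-latency lookups.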
ClickHouse leverages column orientation and heavy compression for better performance on analytics workloads, and uses indexing to accelerate queries. While ClickHouse use cases often involve streaming data from Kafka, batching data is recommended for efficient ingestion.
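The sparse primary index mentioned above can be sketched as follows. This is a simplified illustration that assumes ClickHouse's default granularity of 8,192 rows per index mark; the real implementation operates on compressed on-disk parts rather than in-memory lists.

```python
import bisect

# A sparse index keeps only every Nth primary-key value ("index marks"),
# so data sorted by primary key can be located without a per-row index.
GRANULE = 8192  # ClickHouse's default index granularity (rows per mark)

def build_sparse_index(sorted_keys, granule=GRANULE):
    return [sorted_keys[i] for i in range(0, len(sorted_keys), granule)]

def granule_for(marks, key):
    # Binary-search the marks to find which granule may contain `key`;
    # only that granule's rows then need to be read and scanned.
    return max(bisect.bisect_right(marks, key) - 1, 0)

keys = list(range(0, 100_000))      # data already sorted by primary key
marks = build_sparse_index(keys)
print(len(marks))                   # → 13 marks, not 100,000 index entries
print(granule_for(marks, 25_000))   # → 3 (only granule 3 is scanned)
```

Storing one mark per granule instead of one entry per row is what keeps the primary index small enough to stay in memory even for very large tables.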
Elasticsearch Scalability vs ClickHouse
Manually resize machines
Scale up single-node ClickHouse for vertical scaling
• Elasticsearch is horizontally scalable and can scale by adding nodes to the cluster
• When using managed Elastic, autoscaling policies can be used to self-monitor cluster health, but it is the responsibility of the operator to update resource allocations, either manually or using APIs. Elasticsearch rebalances data automatically according to shard allocation rules
• There are many cluster-level operations that need to be monitored when scaling
• Compute and storage scaled in lockstep
• Data rebalanced to populate newly added nodes
• Cloud offerings automate some of the scaling and rebalancing effort
Elasticsearch is horizontally scalable and can scale by adding more nodes to the cluster. Its tightly coupled architecture means that compute and storage scale together for performance. This often results in resource contention and overprovisioning. Scaling Elasticsearch often requires deep expertise, as there are many levels of the system that need to be managed: the server, operating system, network, and software.
ClickHouse can be used in both single-node and distributed modes. Tight coupling of compute and storage and the need to rebalance data make scaling out more complex, but cloud versions of ClickHouse help automate this process.
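The rebalancing cost of scaling out a tightly coupled cluster can be illustrated with a toy placement function. The naive round-robin scheme below is an assumption for illustration only, not how either system actually assigns shards, but it shows why adding a node forces data movement.

```python
def assign_shards(shards, nodes):
    # Naive round-robin shard placement: shard i goes to node i mod N.
    placement = {node: [] for node in nodes}
    for i, shard in enumerate(shards):
        placement[nodes[i % len(nodes)]].append(shard)
    return placement

shards = [f"shard-{i}" for i in range(6)]

before = assign_shards(shards, ["node-1", "node-2"])
after = assign_shards(shards, ["node-1", "node-2", "node-3"])  # scale out

# Every shard that lands on a different node must be physically copied
# across the network during rebalancing.
moved = sum(
    shard not in after[node]
    for node, held in before.items()
    for shard in held
)
print(moved)  # → 4 of 6 shards relocate
```

Smarter placement schemes (such as consistent hashing) reduce this movement, but as long as compute and storage are coupled, adding capacity always means shipping data to the new node.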