Elasticsearch Reindexing: When to Reindex, Best Practices and Alternatives

October 4, 2023

Elasticsearch is a popular technology for efficient and scalable data storage and retrieval. However, maintaining its performance and data integrity requires a crucial practice called reindexing. Indexing is the initial process of adding data to Elasticsearch, while reindexing is essential for maintaining data accuracy and optimizing search performance.

Whether you're a seasoned Elasticsearch user or just beginning your journey, understanding reindexing is important for maintaining an efficient Elasticsearch cluster. In this article, we'll delve into the essentials of Elasticsearch reindexing, answering when it's necessary, how to trigger it, and the best practices to get the most out of your Elasticsearch cluster.

Understanding Elasticsearch reindexing

In Elasticsearch, reindexing helps maintain data integrity and increase performance. Put simply, it's the process of copying data from one index to another. While this might sound straightforward, if not done correctly it can cause issues such as slow data retrieval or even incorrect results.

Imagine your Elasticsearch indices as well-organized libraries. Over time, books might need to be updated, rearranged, or even replaced. Reindexing is akin to rearranging the library shelves or updating the books to keep everything in order. Without it, your library can become disorganized, leading to slower searches and potential inaccuracies in your data.

This analogy underscores the importance of understanding reindexing in Elasticsearch. It's not just about copying data; it's about maintaining the integrity of your "library" for efficient searching and retrieval. Let’s take a look at when reindexing is required and how to keep on top of it.

When is reindexing necessary?

Reindexing becomes essential when changes occur in your Elasticsearch data models or mappings, or when you're seeking performance enhancements. In this section, we’ll look into these scenarios in more detail to understand the nuances around why reindexing is required.

Structural Changes in Data Models

Structural changes in data models refer to modifications in how data is structured within Elasticsearch. These can include adding new fields, removing existing ones, or altering the data types of existing fields.

Introducing new fields often requires a reindex to ensure Elasticsearch knows how to efficiently search the data stored in that field. Modifying data types requires a new index altogether, as you cannot change a field's data type in place. Once a new mapping has been created with the modified data type, the data needs to be reindexed into the new index.
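
For example, if a field was originally mapped as text but needs to be a date, a new index has to be created with the corrected mapping before the data is reindexed into it. The index and field names below are hypothetical; the request is just a sketch of what the new mapping might look like.

PUT /events_v2
{
  "mappings": {
    "properties": {
      "event_timestamp": {
        "type": "date"
      },
      "description": {
        "type": "text"
      }
    }
  }
}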

These structural changes require reindexing due to Elasticsearch's schema-on-write approach. Elasticsearch indexes data as it is ingested, and any changes to the data structure can lead to inconsistencies between existing data and data written with the new schema. As a result, without reindexing, search queries may yield unexpected or inaccurate results due to the schema mismatch of data items. This can have an impact on both data accuracy and search performance.

Mapping Updates or Changes

Mappings serve as the blueprint for how data is indexed and queried in Elasticsearch. When these mappings are modified, reindexing is usually required.

Mappings define the data types and properties of fields within Elasticsearch. Any change to these mappings affects how data is indexed, stored, and retrieved. For instance, altering a text field to a date field fundamentally changes how data is processed and queried. Elasticsearch enforces data consistency based on mapping definitions. Changes to mappings can lead to inconsistencies between existing data and the updated schema if the data is not reindexed.

When mappings are modified, particularly if it involves changing data types or field properties, backfilling also becomes important. Backfilling is the process of retroactively populating or updating existing data to align it with a new schema or data structure. This means that the existing data can still be queried efficiently and accurately after the mapping change.
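
As a hedged illustration, if a new full_name field were added to an index, existing documents could be backfilled in place with the Update By Query API. The index name, field names, and Painless script below are assumptions made for the example:

POST /users/_update_by_query
{
  "script": {
    "source": "ctx._source.full_name = ctx._source.first_name + ' ' + ctx._source.last_name",
    "lang": "painless"
  },
  "query": {
    "bool": {
      "must_not": {
        "exists": {
          "field": "full_name"
        }
      }
    }
  }
}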

Performance Enhancements and Index Optimizations

Reindexing isn't just a routine maintenance task; it's a powerful tool for optimizing search performance within Elasticsearch. For example, reindexing allows you to modify the number of shards in an index. Adjusting the shard count, or resharding, can distribute data more evenly, preventing uneven workloads on specific nodes and improving search performance.
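
Because the shard count of an existing index cannot be changed in place, resharding via reindexing generally means creating a new index with the desired number_of_shards and copying the data across. A rough sketch, with hypothetical index names and shard counts:

PUT /logs_v2
{
  "settings": {
    "number_of_shards": 6,
    "number_of_replicas": 1
  }
}

POST /_reindex
{
  "source": {
    "index": "logs_v1"
  },
  "dest": {
    "index": "logs_v2"
  }
}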

Reindexing can also be used to consolidate indices. Let's say you have multiple small indices that share the same data structure and are frequently queried together. Reindexing can consolidate them into a single, larger index. This reduces the overhead of managing numerous small indices, which can in turn enhance search speed.
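
The Reindex API accepts multiple source indices, so consolidation can be sketched as follows (the monthly index names and the combined index name are hypothetical):

POST /_reindex
{
  "source": {
    "index": ["logs-2023-01", "logs-2023-02", "logs-2023-03"]
  },
  "dest": {
    "index": "logs-2023-q1"
  }
}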

Finally, reindexing can be used to improve routing. By reindexing and applying routing strategies effectively, you can route queries to specific shards, minimizing the number of shards that need to be searched. This targeted approach can significantly speed up search queries if your data is frequently searched by specific keys such as a user ID.
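
For instance, if documents are indexed with a routing value such as a user ID, searches for that user can be directed at a single shard by passing the same routing value at query time. The index, field, and routing value below are illustrative, and the routing value also has to be supplied when documents are indexed so that documents for the same key land on the same shard:

GET /orders/_search?routing=user_42
{
  "query": {
    "term": {
      "user_id": "user_42"
    }
  }
}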

Upgrading Your Cluster

When upgrading from Elasticsearch 6.x to 8.0 (the current major version) and beyond, you will need to reindex any indices that were created in version 6, since version 8 cannot read them directly. Elasticsearch's data structures and underlying mechanisms changed significantly between these versions, requiring reindexing for compatibility and optimal performance.

The reindexing process ensures that data aligns with the updated structure and new functionality so that you can migrate seamlessly from old to new. Elasticsearch recommends using its Upgrade Assistant to help with this process.
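
Before upgrading, the deprecation info API can help surface indices and settings that will need attention; the Upgrade Assistant in Kibana builds on the same information. A minimal check looks like this:

GET /_migration/deprecations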

How to Trigger a Reindexing Operation

Reindexing in Elasticsearch is made possible through the Elasticsearch Reindex API. The Reindex API serves as the bridge between your existing index and the new index you want to create or modify. Its primary purpose is to enable the efficient transfer of data from one index to another. On top of this, you can also:

  • Selectively copy documents from the source index to the target index.
  • Apply complex data transformations, such as field renaming or type conversions.
  • Filter data based on specific criteria.
  • Control the indexing process with options like throttling and refresh intervals.

Before using the Reindex API, ensure that the target index, where you want to move or transform your data, is created and properly configured.
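
As a minimal sketch of that preparation step for the reindex example below, the target index might be created like this; the shard counts and field definitions are placeholders for whatever your new structure requires:

PUT /target_index
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "category": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      },
      "new_field": {
        "type": "keyword"
      }
    }
  }
}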

To trigger reindexing, you then need to formulate a POST request to the _reindex endpoint, specifying the source and target indices, as well as any desired transformations or filters. An example reindex POST request could look as follows.

POST /_reindex
{
  "source": {
    "index": "source_index",
    "query": {
      "term": {
        "category.keyword": "example"
      }
    }
  },
  "dest": {
    "index": "target_index"
  },
  "script": {
    "source": "ctx._source.new_field = 'transformed value'",
    "lang": "painless"
  }
}

Once your request is built, you can send it to Elasticsearch to initiate the reindexing process. Elasticsearch will start copying data from the source index to the target index, following your defined instructions; in the example above, only documents whose category keyword is "example" are copied, and a new_field is set on each document as it is written.
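
For larger indices, it is often better not to block the HTTP request while the copy runs. A hedged sketch: submit the reindex asynchronously and poll the task it returns.

POST /_reindex?wait_for_completion=false
{
  "source": {
    "index": "source_index"
  },
  "dest": {
    "index": "target_index"
  }
}

The response contains a task ID, which you can poll to track progress and completion (the ID below is a placeholder for whatever your cluster returns):

GET /_tasks/node_id:12345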

Once the reindexing is complete, thoroughly test the data in the target index to ensure it aligns with your expectations. For example, you can compare the field mappings between the source and target indices to confirm that fields were mapped correctly during reindexing. You could also retrieve a sample of documents from both the source and target indices and compare them to verify the data was reindexed accurately.
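
For example, document counts and mappings can be compared directly; note that if your reindex request filtered documents with a query, the counts will differ by design.

GET /source_index/_count
GET /target_index/_count

GET /source_index/_mapping
GET /target_index/_mapping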

Best Practices for Reindexing

When reindexing within Elasticsearch, you should follow these best practices to ensure the procedure runs smoothly, with no data loss and minimal impact on existing cluster operations.

Prioritize Data Backup

Before initiating any reindexing activity, it is important to back up your cluster. This precautionary step acts as a safety net, offering a way to revert to the original state should any unexpected issues arise during the reindexing process.

The source index should still exist after reindexing; however, it's a fundamental principle to always have a reliable copy of your data before making significant changes.
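
Snapshots are the standard way to back up a cluster. As a sketch, assuming a snapshot repository named my_backup_repo has already been registered, a snapshot can be taken before reindexing and restored later if you need to roll back:

PUT /_snapshot/my_backup_repo/pre_reindex_snapshot?wait_for_completion=true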

Conduct Reindexing in a Controlled Environment First

To mitigate potential risks and challenges during reindexing, it is advisable to perform the operation in a pre-production environment first. By doing so, you can identify and address any unforeseen issues without affecting the production system. Once the procedure has been completed and verified in the pre-production environment, it can then safely be run in production.

Monitor Resource Usage

It is important to monitor system resources during reindexing to prevent strain on your infrastructure. Reindexing can be resource-intensive, especially for larger datasets. Keeping a close eye on CPU, memory, disk usage, and network activity can help optimize resource allocation, ensuring the process runs efficiently without causing performance bottlenecks. To check resource usage you can use the node stats API.

GET /_nodes/stats

This will return a response that looks as follows.

{
  "_nodes": {
    "total": 2,
    "successful": 2,
    "failed": 0
  },
  "cluster_name": "my_cluster",
  "nodes": {
    "node_id1": {
      "name": "node_name1",
      "process": {
        "cpu": {
          "percent": 30,
        }
      },
      "jvm": {
        "mem": {
          "heap_used_percent": 40.3,
          "heap_used_in_bytes": 123456789,
          "heap_max_in_bytes": 256000000
        }
      }
    },
    "node_id2": {
      "name": "node_name2",
      "process": {
        "cpu": {
          "percent": 50,
        }
      },
      "jvm": {
        "mem": {
          "heap_used_percent": 60.8,
          "heap_used_in_bytes": 210987654,
          "heap_max_in_bytes": 256000000
        }
      }
    }
  }
}

If you find reindexing is too intensive, you can throttle the process by setting the requests_per_second parameter when submitting the reindex request. Elasticsearch then pads each batch of index operations with a wait time so that the overall rate stays at or below the value you set, giving the cluster a cooldown period between batches.
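
As a sketch, the throttle can be set when the reindex is submitted, and an in-flight reindex can be adjusted later with the rethrottle endpoint. The task ID below is a placeholder, and a requests_per_second value of -1 disables throttling entirely.

POST /_reindex?requests_per_second=500
{
  "source": {
    "index": "source_index"
  },
  "dest": {
    "index": "target_index"
  }
}

POST /_reindex/node_id:12345/_rethrottle?requests_per_second=-1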

Verify and Validate Results

Once the reindexing is complete, you should verify the data in the target index to ensure it looks as expected. This validation should encompass a variety of tests, including document counts, field mappings, and search queries.

Alternative Solutions

Elasticsearch has undoubtedly established itself as a prominent solution in the NoSQL search and analytics space. However, it's worth exploring alternative solutions that offer unique approaches to data indexing and querying, particularly one like Rockset.

Rockset is a cloud-native alternative to Elasticsearch and offers a different perspective on indexing and querying data. Unlike Elasticsearch's schema-on-write approach, Rockset allows schemaless ingestion. Data can be ingested and queried without the need for upfront schema definition, offering more flexibility in handling ever-evolving datasets without the need for reindexing.

In the area of index management, Rockset benefits from its converged indexing model, where a row index, a column index, and a search index are all created automatically for the data as it is ingested. This contrasts with Elasticsearch, where indices are created by users and structural changes often necessitate time-consuming reindexing procedures.

While Elasticsearch remains a robust solution for various use cases, exploring alternatives like Rockset may be useful, especially if you find reindexing in Elasticsearch becoming a frequent activity.

Conclusion

Reindexing is a fundamental process in Elasticsearch and is important for maintaining the efficiency and accuracy of search results as data structures evolve.

If you find that reindexing is becoming a constant time burden for your team it might be worth exploring alternative solutions like Rockset. Rockset offers a more streamlined index management process that enables developers to concentrate on more value-add activities.