Is using a load balancer with ElasticSearch unnecessary?

user2719100 · Jul 15, 2014 · Viewed 24.1k times

I have a cluster of 3 ElasticSearch nodes running on AWS EC2. These nodes are set up using OpsWorks/Chef. My intent is to design this cluster to be very resilient and elastic (nodes can come and go as needed).

From everything I've read about ElasticSearch, it seems like no one recommends putting a load balancer in front of the cluster; instead, it seems like the recommendation is to do one of two things:

  1. Point your client at the URL/IP of one node, let ES do the load balancing for you and hope that node never goes down.

  2. Hard-code the URLs/IPs of ALL your nodes into your client app and have the app handle the failover logic.
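A minimal sketch of what the failover logic in option 2 might look like (the host addresses and the `send_request` callable are hypothetical stand-ins for your actual HTTP client):

```python
import random

def request_with_failover(hosts, send_request):
    """Try each node in turn until one succeeds (client-side failover).

    `send_request` is a stand-in for the real HTTP call; it should raise
    ConnectionError for a dead node and return a response otherwise.
    """
    last_error = None
    # Shuffle so load spreads across nodes instead of hammering the first one.
    for host in random.sample(hosts, len(hosts)):
        try:
            return send_request(host)
        except ConnectionError as err:
            last_error = err  # node is down; try the next one
    raise last_error

# Simulated cluster: one dead node, two live ones.
def fake_send(host):
    if host == "10.0.0.1:9200":
        raise ConnectionError(f"{host} is down")
    return f"200 OK from {host}"

hosts = ["10.0.0.1:9200", "10.0.0.2:9200", "10.0.0.3:9200"]
print(request_with_failover(hosts, fake_send))
```

Real client libraries (e.g. the official ones) bundle a version of this retry/round-robin behavior, which is why option 2 is workable at all.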

My background is mostly in web farms where it's just common sense to create a huge pool of autonomous web servers, throw an ELB in front of them and let the load balancer decide which nodes are alive or dead. Why does ES not seem to support this same architecture?

Answer

Manchego · Feb 1, 2015

I believe load balancing an Elasticsearch cluster is a good idea: it helps you design a fault-tolerant system that is resilient to single-node failure.

To architect your cluster you'll need background on the two primary functions of Elasticsearch: 1. writing and updating documents, and 2. querying documents.

Writing / indexing documents in Elasticsearch:

  1. When a new document comes into Elasticsearch to be indexed, Elasticsearch determines the "primary shard" the document should be assigned to using the "shard routing algorithm".
  2. The Lucene process associated with that shard "maps" the fields in the document.
  3. The Lucene process adds the document to the shard's Lucene "inverted index".
  4. Any "replica shard(s)" then receive the document; each replica "maps" the document and adds it to its own Lucene "inverted index".
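Step 1's routing boils down to `shard = hash(routing) % number_of_primary_shards`, where the routing value defaults to the document id. Elasticsearch's real hash is murmur3 (older versions used djb2); the toy sketch below uses a djb2-style hash just to illustrate the modulo principle:

```python
def djb2_hash(s):
    """Simple djb2-style string hash (illustrative, not ES's exact hash)."""
    h = 5381
    for ch in s:
        h = ((h * 33) + ord(ch)) & 0xFFFFFFFF
    return h

def route_to_shard(doc_id, num_primary_shards):
    # shard = hash(routing) % number_of_primary_shards
    return djb2_hash(doc_id) % num_primary_shards

# Every node with a copy of the routing table can compute this locally,
# so any data node can forward a write to the correct primary shard.
for doc_id in ["user-1", "user-2", "user-3"]:
    print(doc_id, "-> shard", route_to_shard(doc_id, 5))
```

Because the shard count is the modulus, this is also why the number of primary shards is fixed at index creation time: changing it would re-route every existing document.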

Querying documents in Elasticsearch:

  1. By default, when a query is sent to Elasticsearch, the query hits a node -- this becomes the "query node" or "gateway query node" for that query.
  2. The node broadcasts the query to every shard in the index (primary & replica).
  3. Each shard performs the query against the shard's local Lucene inverted index.
  4. Each shard returns the top 10 - 20 results to the "gateway query node".
  5. The "gateway query node" then performs a merge-sort on the combined results returned from the other shards.
  6. Once the merge-sort is finished, the "gateway query node" returns the results to the client.
    • The merge-sort is CPU and memory resource heavy.
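Steps 5-6 above can be sketched as a k-way merge of already-sorted per-shard result lists (the doc ids and scores below are made up for illustration):

```python
import heapq

# Each shard returns its local top hits, already sorted by descending score.
shard_results = [
    [("doc7", 9.1), ("doc2", 4.0)],
    [("doc5", 8.3), ("doc9", 3.2)],
    [("doc1", 7.7), ("doc4", 6.5)],
]

# The gateway node merge-sorts the sorted per-shard lists and keeps the
# global top N -- this is the CPU/memory-heavy step mentioned above.
top_n = 4
merged = heapq.merge(*shard_results, key=lambda hit: -hit[1])
global_top = list(merged)[:top_n]
print(global_top)
# -> [('doc7', 9.1), ('doc5', 8.3), ('doc1', 7.7), ('doc4', 6.5)]
```

The merge itself is cheap per element, but the gateway node has to hold every shard's partial results in memory at once, which is why it pays to offload this work to dedicated nodes.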

Architect a Load Balancer for Writes / Indexing / Updates

Elasticsearch self-manages the location of shards on nodes. The "master node" keeps and updates the "shard routing table". The "master node" provides a copy of the shard routing table to other nodes in the cluster.

Generally, you don't want your master node doing much more than cluster health checks, shard routing table updates, and shard management.

It's probably best to point the load balancer for writes at the "data nodes" (data nodes are the nodes that hold the shards) and let the data nodes use their shard routing tables to get the writes to the correct shards.
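A sketch of the `elasticsearch.yml` settings for such a data node (these are the settings as of the 1.x releases current when this answer was written; later major versions changed how node roles are configured):

```yaml
# elasticsearch.yml on a data node (ES 1.x-era settings)
node.master: false   # never eligible to become the master node
node.data: true      # holds shards; receives writes from the load balancer
```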

Architecting for Queries

Elasticsearch provides a special node type, the "client node", which holds no data and cannot become a "master node". The client node's job is to perform the final, resource-heavy merge-sort at the end of the query.
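In the 1.x-era configuration, a client node is simply a node with both roles switched off (again, later versions moved to a different role syntax):

```yaml
# elasticsearch.yml on a "client" node (ES 1.x-era settings)
node.master: false   # cannot become the master node
node.data: false     # holds no shards; only coordinates queries and merge-sorts
```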

For AWS you'd probably use a c3 or c4 instance type as a "client node".

Best practice is to point the load balancer for queries to client nodes.

Cheers!

References:

  1. Elasticsearch Node Types
  2. Elasticsearch: Shard Routing Algorithm
  3. Elasticsearch: Replica Shards
  4. Elasticsearch: Cluster State i.e. the Shard Routing Table
  5. ElasticHQ - Introduction to Elasticsearch Video
  6. Elasticsearch: Shard numbers and Cluster Scaling