How to decide the memory requirement for my Elasticsearch server

siva · May 17, 2017 · Viewed 29.2k times

I have a scenario here,

The Elasticsearch cluster holds about 1.4 TB of data, with the following shard counts:

 "_shards": {
     "total": 202,
     "successful": 101,
     "failed": 0
}

Each index is roughly 3 GB to 30 GB in size, and in the near future we expect about 30 GB of new data per day.
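For reference, I have been pulling per-index sizes and shard counts with a small script along these lines (the host, port and lack of authentication are assumptions about my setup, not anything fixed):

 # List per-index primary/replica counts, document counts and store size
 # via the _cat/indices API. Assumes the cluster answers on localhost:9200
 # without authentication.
 import requests

 resp = requests.get(
     "http://localhost:9200/_cat/indices",
     params={"v": "true", "h": "index,pri,rep,docs.count,store.size", "bytes": "gb"},
 )
 resp.raise_for_status()
 print(resp.text)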

OS information:

 NAME="Red Hat Enterprise Linux Server"
 VERSION="7.2 (Maipo)"
 ID="rhel"
 ID_LIKE="fedora"
 VERSION_ID="7.2"
 PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)"

The system has 32 GB of RAM and a 2 TB filesystem (1.4 TB used). I have configured a maximum heap of 15 GB for the Elasticsearch server, but that is not enough to query this data: the server hangs on a single query.
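For completeness, the 15 GB heap is currently set through the JVM options (the exact file location depends on how Elasticsearch was installed, so take the snippet below as illustrative):

 -Xms15g
 -Xmx15g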

I will be adding 1 TB to the filesystem of this server, bringing the total to 3 TB. I am also planning to increase the memory to 128 GB, which is only a rough estimate.

Could someone help me work out the minimum RAM required for this server to handle at least 50 simultaneous requests?

I would greatly appreciate any tool or formula for analysing this requirement. It would also help if you could share another scenario with concrete numbers that I can use to estimate my resource needs.
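In the meantime I have been experimenting with a crude concurrency test along the lines below, just to see where the node falls over; the index name and the match_all query are placeholders rather than my real workload:

 # Fire 50 concurrent searches and report how many succeed and how long they take.
 # "my-index" and the match_all query are placeholders for a real index and query.
 import concurrent.futures
 import time
 import requests

 URL = "http://localhost:9200/my-index/_search"
 QUERY = {"query": {"match_all": {}}, "size": 10}

 def one_request(_):
     start = time.time()
     r = requests.post(URL, json=QUERY, timeout=60)
     return r.status_code, time.time() - start

 with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
     results = list(pool.map(one_request, range(50)))

 ok = [t for status, t in results if status == 200]
 print("successful:", len(ok), "of", len(results))
 if ok:
     print("avg latency: %.2fs, max latency: %.2fs" % (sum(ok) / len(ok), max(ok)))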

Answer

Christian Hubinger · Sep 12, 2017

You will need to scale out across several nodes to stay efficient. Elasticsearch's per-node memory sweet spot is 64 GB of RAM, with about 32 GB reserved for the Elasticsearch heap and the rest left to the operating system's filesystem cache.

See https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html#_memory for more details. The book is a very good read if you are using Elasticsearch for serious work.
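As a quick sanity check while you scale out (assuming the cluster answers on localhost:9200 without security enabled), you can watch per-node heap pressure with the _cat/nodes API, roughly like this:

 # Poll per-node heap usage vs. total RAM while queries are running.
 # Assumes the cluster is reachable at http://localhost:9200 without auth.
 import time
 import requests

 while True:
     resp = requests.get(
         "http://localhost:9200/_cat/nodes",
         params={"v": "true", "h": "name,heap.percent,heap.max,ram.percent"},
     )
     print(resp.text)
     time.sleep(10)  # poll every 10 seconds

If heap.percent stays pinned near 100% during a single query, you need either more heap (up to the ~32 GB per-node ceiling) or more nodes.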