Elasticsearch / Kibana error: "Data too large, data for [@timestamp] would be larger than limit"

spuder · Apr 23, 2015 · Viewed 20.9k times

On my test ELK cluster, I'm encountering the following error when trying to see data from the last week.

Data too large, data for [@timestamp] would be larger than limit

The accompanying warning about shards failing appears to be misleading, because the Elasticsearch monitoring plugins kopf and head show all shards working properly, and the cluster status is green.
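The cluster health API confirms the same status from the command line:

curl -XGET 'http://localhost:9200/_cluster/health?pretty'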


One user in the Elasticsearch Google group suggested increasing RAM. I've increased my 3 nodes to 8GB each with a 4.7GB heap, but the issue continues. I'm generating about 5GB to 25GB of data per day, with a 30-day retention.
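For reference, on the 1.x packages the heap is set via the ES_HEAP_SIZE environment variable; a minimal sketch, assuming the Debian/Ubuntu package layout (RPM-based systems use /etc/sysconfig/elasticsearch instead):

# /etc/default/elasticsearch
# Common guidance: give Elasticsearch roughly half the machine's RAM,
# and keep the heap below ~30.5GB so compressed object pointers stay enabled
ES_HEAP_SIZE=4g

# Restart the node for the change to take effect
sudo service elasticsearch restart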

Answer

spuder · Apr 23, 2015

Clearing the cache alleviates the symptoms for now.

http://www.elastic.co/guide/en/elasticsearch/reference/current/indices-clearcache.html

Clear a single index

curl -XPOST 'http://localhost:9200/twitter/_cache/clear'

Clear multiple indices

curl -XPOST 'http://localhost:9200/kimchy,elasticsearch/_cache/clear'

Clear all indices

curl -XPOST 'http://localhost:9200/_cache/clear'

Or, as suggested by a user in IRC, clear only the fielddata cache. This one seems to work best:

curl -XPOST 'http://localhost:9200/_cache/clear?fielddata=true'
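Clearing the cache is only a stopgap: the "Data too large ... would be larger than limit" message comes from the fielddata circuit breaker, which trips when loading a field's data (here @timestamp) would push fielddata past its configured limit. A longer-term option, assuming heavy fielddata use is the culprit here, is to cap the fielddata cache in elasticsearch.yml so entries are evicted instead of accumulating until the breaker trips. The 40% value below is an illustrative starting point, not a verified tuning for this cluster:

# elasticsearch.yml (each node) — fielddata cache is unbounded by default on 1.x
indices.fielddata.cache.size: 40%

You can watch per-node fielddata usage with the nodes stats API:

curl -XGET 'http://localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'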

Update: these errors went away as soon as the cluster was moved to a faster hypervisor.