Can someone assist, please? I need to fix this error so that CloudTrail logs stored in S3 can be shipped through Logstash to Elasticsearch and viewed in Kibana. I can't figure out how to increase the field limit to something higher. My configuration looks like this:
input {
  s3 {
    bucket          => "sample-s3bucket"
    region          => "eu-west-1"
    type            => "cloudtrail"
    codec           => cloudtrail {}
    sincedb_path    => "/tmp/logstash/cloudtrail"
    exclude_pattern => "/CloudTrail-Digest/"
    interval        => 300
  }
}

filter {
  if [type] == "cloudtrail" {
    json {
      source => "message"
    }
    geoip {
      source  => "sourceIPAddress"
      target  => "geoip"
      add_tag => ["cloudtrail-geoip"]
    }
  }
}

output {
  elasticsearch {
    hosts => "coordinate_node:9200"
    index => "cloudtrail-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
Here is what I am seeing on my Logstash machine about the limit:
[2018-10-04T17:49:49,883][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"cloudtrail-2018.09.27", :_type=>"doc", :_routing=>nil}, #], :response=>{"index"=>{"_index"=>"cloudtrail-2018.09.27", "_type"=>"doc", "_id"=>"lrMzQGYBOny1_iySNW6G", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Limit of total fields [1000] in index [cloudtrail-2018.09.27] has been exceeded"}}}
Thanks in advance
You can use the template below to apply these settings to every index that gets added to the cluster. Once you index via Logstash, the template will set the field limit to 2000 for all newly created indices.
PUT /_template/Global
{
  "index_patterns" : ["*"],
  "order" : 0,
  "settings" : {
    "index.mapping.total_fields.limit" : "2000"
  }
}
Note: You can change the pattern to "index_patterns" : ["cloudtrail-*"] if you want to apply the settings only to specific indices.
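For example, a scoped version of the template might look like this (the name global-cloudtrail is just an illustrative choice; a higher order lets it override the catch-all template):

PUT /_template/global-cloudtrail
{
  "index_patterns" : ["cloudtrail-*"],
  "order" : 1,
  "settings" : {
    "index.mapping.total_fields.limit" : "2000"
  }
}

Keep in mind that templates are only applied when an index is created, so an index that already exists (such as cloudtrail-2018.09.27 from your error) keeps its old limit. Since index.mapping.total_fields.limit is a dynamic setting, you can also raise it on the existing index directly:

PUT /cloudtrail-2018.09.27/_settings
{
  "index.mapping.total_fields.limit" : "2000"
}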
You should also consider restructuring your document mapping when the number of fields in a single document grows this large; not all fields are always required in the response. Look into creating relations (for example, a join field for parent/child documents) to keep individual documents smaller and more efficient.
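If you go that route, the parent/child relation is declared with a join field in the mapping. A minimal sketch follows; the index name cloudtrail-relations, the field name event_relation, and the event/detail relation names are just placeholders:

PUT /cloudtrail-relations
{
  "mappings" : {
    "doc" : {
      "properties" : {
        "event_relation" : {
          "type" : "join",
          "relations" : {
            "event" : "detail"
          }
        }
      }
    }
  }
}

Child documents then only carry the fields relevant to them, so each individual document stays smaller.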