I'm trying to parse Check Point firewall syslog logs with Logstash and grok.
Example of a log entry:
<190>2015 Mar 19 12:40:55 fw1 <60031> User admin failed to login (wrong authentication) (Source IP:123.123.123.123 Via:HTTP)
I use this pattern:
<%{POSINT:syslog_pri}>%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:device} <%{POSINT:status}> User %{WORD:account} %{DATA:msg} (?:[(])%{DATA:msg1}(?:[)]) (?:[(])Source IP:%{IPV4:src} Via:%{WORD:protocol}(?:[)])
All fields are parsed correctly and show up in Elasticsearch/Kibana, and the Grok Debugger works fine with this specific log/pattern combination. However, I keep receiving _grokparsefailure tags. Does anyone have a hint on how to get rid of them?
UPDATE: Here is my complete Logstash configuration (the most relevant part is the "Failed login" block):
input {
  syslog {
    type => "syslog"
    port => 514
  }
}
filter {
  if [type] == "syslog" {
    geoip { source => "host" }
    # Firewall rule fired
    if [message] =~ "packet" {
      grok {
        match => [ "message", "<%{POSINT:syslog_pri}>%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:device} <%{POSINT:status}> %{WORD:activity} %{DATA:inout} (?:[(])%{DATA:msg}(?:[)]) Src:%{IPV4:src} SPort:%{POSINT:sport} Dst:%{IPV4:dst} DPort:%{POSINT:dport} IPP:%{POSINT:ipp} Rule:%{INT:rule} Interface:%{WORD:iface}" ]
      }
    }
    # Failed login
    else if [message] =~ "failed" {
      grok {
        match => [ "message", "<%{POSINT:syslog_pri}>%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:device} <%{POSINT:status}> User %{WORD:account} %{DATA:msg} (?:[(])%{DATA:msg1}(?:[)]) (?:[(])Source IP:%{IPV4:src} Via:%{WORD:protocol}(?:[)])" ]
      }
    }
    # Successful login/out
    else if [message] =~ "logged" {
      mutate {
        add_field => [ "userlogged", "%{host}" ]
      }
      grok {
        match => [ "message", "<%{POSINT:syslog_pri}>%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:device} <%{POSINT:status}> User %{DATA:account} %{WORD} %{WORD:action} (?:[(])Source IP:%{IPV4:src} Via:%{WORD:protocol}(?:[)])" ]
      }
    }
    else {
      grok {
        match => [ "message", "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" ]
      }
    }
  }
}
output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}
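To narrow down which pattern is actually failing, each grok filter can be given its own failure tag via the standard tag_on_failure option; a sketch for the "Failed login" block (the tag name _grokparsefailure_failedlogin is just an illustrative choice):

```
grok {
  match => [ "message", "<%{POSINT:syslog_pri}>%{YEAR} %{SYSLOGTIMESTAMP:syslog_timestamp} %{DATA:device} <%{POSINT:status}> User %{WORD:account} %{DATA:msg} (?:[(])%{DATA:msg1}(?:[)]) (?:[(])Source IP:%{IPV4:src} Via:%{WORD:protocol}(?:[)])" ]
  # Emit a distinct tag on mismatch, so failures from this filter
  # can be told apart from _grokparsefailure tags produced elsewhere
  # (for example, inside an input plugin).
  tag_on_failure => [ "_grokparsefailure_failedlogin" ]
}
```

If the plain _grokparsefailure tag still appears after tagging every filter this way, the failure is coming from somewhere other than these filters.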
It turns out that the _grokparsefailure tag is set by the "syslog" input plugin, which internally also uses grok. After replacing the input block with
input {
  tcp {
    port => 514
    type => syslog
  }
  udp {
    port => 514
    type => syslog
  }
}
I no longer receive the failure messages. This blog post helped me a lot.
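One thing lost by dropping the syslog input is the automatic decoding of the numeric priority (e.g. 190) into facility and severity. A minimal sketch of how to restore that with the syslog_pri filter, assuming grok has already extracted the number into a field, as the syslog_pri field in the patterns above:

```
filter {
  # Decode the numeric syslog priority into facility/severity labels.
  # syslog_pri_field_name defaults to "syslog_pri"; it is spelled out
  # here to show where the value is expected to come from.
  syslog_pri {
    syslog_pri_field_name => "syslog_pri"
  }
}
```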