We are running a Kafka broker setup with a Kafka Streams application built on Spring Cloud Stream Kafka. Although everything seems to run fine, we keep getting the following error statements in our log:
2019-02-21 22:37:20,253 INFO kafka-coordinator-heartbeat-thread | anomaly-timeline org.apache.kafka.clients.FetchSessionHandler [Consumer clientId=anomaly-timeline-56dc4481-3086-4359-a8e8-d2dae12272a2-StreamThread-1-consumer, groupId=anomaly-timeline] Node 2 was unable to process the fetch request with (sessionId=1290440723, epoch=2089): INVALID_FETCH_SESSION_EPOCH.
I searched the internet, but there is not much information on this error. My guess was that it had something to do with a difference in time settings between the broker and the consumer, but both machines use the same time server settings.
Any idea how this can be resolved?
There is a concept of a fetch session, introduced with KIP-227 in the 1.1.0 release: https://cwiki.apache.org/confluence/display/KAFKA/KIP-227%3A+Introduce+Incremental+FetchRequests+to+Increase+Partition+Scalability
Kafka brokers that are replica followers fetch messages from the leader. To avoid sending full metadata for all partitions on every request, only the partitions that changed are sent within the same fetch session.
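To make the epoch bookkeeping concrete, here is a small toy model in plain Java (not Kafka's real classes; the session id and partition name are made up) of how a broker could validate the epoch of an incremental fetch request and answer with INVALID_FETCH_SESSION_EPOCH when it is out of step:

// Toy model of an incremental fetch session: NOT Kafka's actual implementation,
// just an illustration of the epoch check behind INVALID_FETCH_SESSION_EPOCH.
import java.util.HashMap;
import java.util.Map;

public class FetchSessionDemo {

    // Server-side view of one fetch session: an id plus the epoch it expects next.
    static class ServerSession {
        final int sessionId;
        int expectedEpoch = 1;          // in this toy, the first incremental request carries epoch 1
        ServerSession(int sessionId) { this.sessionId = sessionId; }
    }

    // Simulated broker: validates the epoch the client sent against the one it expects.
    static String handleIncrementalFetch(ServerSession session, int clientEpoch,
                                         Map<String, Long> changedPartitions) {
        if (clientEpoch != session.expectedEpoch) {
            // Same idea as the branch quoted from Kafka's code below:
            // the epoch is out of step, e.g. because of a duplicate or retried request.
            return "INVALID_FETCH_SESSION_EPOCH (expected " + session.expectedEpoch
                    + ", got " + clientEpoch + ")";
        }
        session.expectedEpoch++;        // the next request must use the next epoch
        return "OK, fetched " + changedPartitions.size() + " changed partition(s)";
    }

    public static void main(String[] args) {
        ServerSession session = new ServerSession(1290440723);

        // Only partitions whose fetch position changed are included in an incremental request.
        Map<String, Long> changed = new HashMap<>();
        changed.put("anomaly-timeline-0", 42L);

        System.out.println(handleIncrementalFetch(session, 1, changed)); // OK
        System.out.println(handleIncrementalFetch(session, 1, changed)); // reused epoch -> error
        System.out.println(handleIncrementalFetch(session, 2, changed)); // next valid epoch -> OK
    }
}

A duplicate or retried request reuses an epoch the broker has already seen, which is exactly the "Possible duplicate request" case logged by the real code below.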
When we look into Kafka's code, we can see an example of where this error is returned:
if (session.epoch != expectedEpoch) {
  info(s"Incremental fetch session ${session.id} expected epoch $expectedEpoch, but " +
    s"got ${session.epoch}. Possible duplicate request.")
  new FetchResponse(Errors.INVALID_FETCH_SESSION_EPOCH, new FetchSession.RESP_MAP, 0, session.id)
} else {
In general, if you don't have thousands of partitions and this doesn't happen very often, it shouldn't worry you.
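If you want a quick sanity check against that "thousands of partitions" condition, a small AdminClient sketch along these lines can count the partitions in your cluster (this assumes the kafka-clients library on the classpath; the bootstrap address is a placeholder):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.Map;
import java.util.Properties;
import java.util.Set;

public class PartitionCountCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; replace with your own brokers.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // List all topics, then sum the partition counts from their descriptions.
            Set<String> topics = admin.listTopics().names().get();
            Map<String, TopicDescription> descriptions =
                    admin.describeTopics(topics).all().get();

            int totalPartitions = descriptions.values().stream()
                    .mapToInt(d -> d.partitions().size())
                    .sum();

            System.out.println("Topics: " + topics.size()
                    + ", total partitions: " + totalPartitions);
        }
    }
}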