UPDATE: It turned out I had an issue with my ports in Docker. I'm not sure why fixing that made this go away.
I believe I have come across a strange error. I am using the Sarama library and am able to create a consumer successfully.
package main

import "github.com/Shopify/sarama"

func main() {
	config := sarama.NewConfig()
	config.ClientID = "go-kafka-consumer"
	config.Consumer.Return.Errors = true

	// Create new consumer; NewConsumer expects a slice of broker addresses.
	master, err := sarama.NewConsumer([]string{"localhost:9092"}, config)
	if err != nil {
		panic(err)
	}
	defer func() {
		if err := master.Close(); err != nil {
			panic(err)
		}
	}()

	partitionConsumer, err := master.ConsumePartition("myTopic", 0, sarama.OffsetOldest)
	if err != nil {
		panic(err)
	}
	defer partitionConsumer.Close()
	// ... read from partitionConsumer.Messages() here
}
As soon as I break this code up and move it out of main(), I run into the error:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
I have split my code up as follows: the previous main() function I have converted into a consumer package with a function called NewConsumer(), and my new main() calls NewConsumer() like so:
c := consumer.NewConsumer()
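Roughly, the extracted package looks like this (a simplified sketch; the exact signature may differ, but the Sarama calls are the same as above):

package consumer

import "github.com/Shopify/sarama"

// NewConsumer sets up the same Sarama consumer as before, just outside of main().
func NewConsumer() sarama.PartitionConsumer {
	config := sarama.NewConfig()
	config.ClientID = "go-kafka-consumer"
	config.Consumer.Return.Errors = true

	// This is the call that panics with "client has run out of available brokers".
	master, err := sarama.NewConsumer([]string{"localhost:9092"}, config)
	if err != nil {
		panic(err)
	}
	// (closing of master omitted here for brevity)

	partitionConsumer, err := master.ConsumePartition("myTopic", 0, sarama.OffsetOldest)
	if err != nil {
		panic(err)
	}
	return partitionConsumer
}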
The panic is triggered on the line with sarama.NewConsumer and prints out:
kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
Why would breaking up my code this way cause Sarama to fail to create the consumer? Does Sarama need to be run directly from main()?
I think that, set up this way, you are creating two or more consumers that get grouped into a single consumer group (probably go-kafka-consumer). Your broker has a topic with one partition, so one member of the group gets it assigned and the other one produces this error message. If you raised the partition count of that topic to 2, the error would go away.
But I think your actual problem is that you have somehow instantiated more consumers than before.
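For illustration, raising the partition count could be done from Go with Sarama's ClusterAdmin, roughly like this (the broker address and topic name are taken from your snippet; treat this as a sketch only):

package main

import "github.com/Shopify/sarama"

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V1_0_0_0 // the CreatePartitions API needs Kafka 1.0+

	admin, err := sarama.NewClusterAdmin([]string{"localhost:9092"}, config)
	if err != nil {
		panic(err)
	}
	defer admin.Close()

	// Grow "myTopic" to 2 partitions in total (the count is the new total, not a delta).
	if err := admin.CreatePartitions("myTopic", 2, nil, false); err != nil {
		panic(err)
	}
}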
From Kafka in a Nutshell:
Consumers can also be organized into consumer groups for a given topic — each consumer within the group reads from a unique partition and the group as a whole consumes all messages from the entire topic. If you have more consumers than partitions then some consumers will be idle because they have no partitions to read from. If you have more partitions than consumers then consumers will receive messages from multiple partitions. If you have equal numbers of consumers and partitions, each consumer reads messages in order from exactly one partition.
According to that, the extra consumers would just sit idle rather than produce an error, so the error itself would be Sarama-specific behaviour.
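For reference, this is roughly what a consumer that actually participates in a consumer group looks like with Sarama's ConsumerGroup API (the group ID, broker address and topic name are just taken from your question; the handler is a minimal sketch):

package main

import (
	"context"
	"log"

	"github.com/Shopify/sarama"
)

// handler implements sarama.ConsumerGroupHandler.
type handler struct{}

func (handler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (handler) Cleanup(sarama.ConsumerGroupSession) error { return nil }
func (handler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
	for msg := range claim.Messages() {
		log.Printf("partition=%d offset=%d value=%s", msg.Partition, msg.Offset, string(msg.Value))
		sess.MarkMessage(msg, "")
	}
	return nil
}

func main() {
	config := sarama.NewConfig()
	config.Version = sarama.V0_10_2_0 // consumer groups need at least this protocol version

	group, err := sarama.NewConsumerGroup([]string{"localhost:9092"}, "go-kafka-consumer", config)
	if err != nil {
		panic(err)
	}
	defer group.Close()

	ctx := context.Background()
	for {
		// Consume blocks for one session and returns after a rebalance, so call it in a loop.
		if err := group.Consume(ctx, []string{"myTopic"}, handler{}); err != nil {
			panic(err)
		}
	}
}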