Spark: processing multiple Kafka topics in parallel

nish · Dec 23, 2015 · Viewed 14.6k times

I am using Spark 1.5.2. I need to run a Spark Streaming job with Kafka as the streaming source, reading from multiple Kafka topics and processing each topic differently.

  1. Is it a good idea to do this in the same job? If so, should I create a single stream with multiple partitions or a different stream for each topic? (A sketch of the single-stream option follows this list.)
  2. I am using the Kafka direct stream. As far as I know, Spark launches long-running receivers for each partition. I have a relatively small cluster, 6 nodes with 4 cores each. If I have many topics and many partitions per topic, would efficiency suffer because most executors are busy with long-running receivers? Please correct me if my understanding is wrong here.
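
For context, the single-stream option subscribes one direct stream to several topics at once. The following is a minimal sketch against the Kafka 0.8 direct API (the spark-streaming-kafka artifact) used with Spark 1.5.x; the broker address, topic names, and batch interval are placeholders:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class SingleStreamAllTopics {
        public static void main(String[] args) throws Exception {
            // Master URL is supplied by spark-submit.
            SparkConf sparkConf = new SparkConf().setAppName("single-stream-all-topics");
            JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));

            Map<String, String> kafkaParams = new HashMap<String, String>();
            kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder broker

            // One direct stream subscribed to all topics. Records arrive as plain
            // (key, value) pairs, so routing by topic needs an extra step later.
            Set<String> topics = new HashSet<String>(Arrays.asList("topicA", "topicB"));
            JavaPairInputDStream<String, String> allTopics = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, topics);

            allTopics.print(); // placeholder for the real processing

            jssc.start();
            jssc.awaitTermination();
        }
    }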

Answer

nish · Dec 24, 2015

I made the following observations, in case it's helpful for someone:

  1. With the Kafka direct stream, the receivers are not run as long-running tasks. At the beginning of each batch interval, the data is first read from Kafka on the executors; once read, the processing part takes over.
  2. If we create a single stream subscribed to multiple topics, the topics are read one after the other. Also, filtering the DStream to apply different processing logic per topic would add another step to the job.
  3. Creating multiple streams helps in two ways: 1. You don't need a filter operation to process different topics differently. 2. You can read multiple streams in parallel (as opposed to one by one in the single-stream case), enabled by the undocumented config parameter spark.streaming.concurrentJobs. So I decided to create multiple streams (see the sketch after the config line below).

    sparkConf.set("spark.streaming.concurrentJobs", "4");
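
Putting the pieces together, a one-stream-per-topic layout might look like the sketch below. Again, this assumes the Kafka 0.8 direct API shipped with Spark 1.5.x; the broker address, topic names, and the per-topic logic (here just print()) are placeholders:

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Map;

    import kafka.serializer.StringDecoder;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaPairInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka.KafkaUtils;

    public class OneStreamPerTopic {
        public static void main(String[] args) throws Exception {
            SparkConf sparkConf = new SparkConf().setAppName("one-stream-per-topic");
            // Let several output operations of the same batch run at once,
            // so the independent per-topic jobs are not serialized.
            sparkConf.set("spark.streaming.concurrentJobs", "4");

            JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));

            Map<String, String> kafkaParams = new HashMap<String, String>();
            kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder broker

            // A dedicated direct stream per topic: no filter step, and each stream
            // gets its own processing pipeline and output operation.
            JavaPairInputDStream<String, String> topicA = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, new HashSet<String>(Arrays.asList("topicA")));
            JavaPairInputDStream<String, String> topicB = KafkaUtils.createDirectStream(
                    jssc, String.class, String.class, StringDecoder.class, StringDecoder.class,
                    kafkaParams, new HashSet<String>(Arrays.asList("topicB")));

            topicA.print(); // placeholder for topic-A-specific logic
            topicB.print(); // placeholder for topic-B-specific logic

            jssc.start();
            jssc.awaitTermination();
        }
    }

Since spark.streaming.concurrentJobs is undocumented, it is worth verifying on your own workload that running batches concurrently does not break any ordering assumptions your jobs rely on.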