Spark: group concat equivalent in scala rdd

Silverrose · Dec 8, 2015 · Viewed 7.3k times

I have the following DataFrame:

    |-----id-------|----value------|-----desc------|
    |     1        |     v1        |      d1       |
    |     1        |     v2        |      d2       |
    |     2        |     v21       |      d21      |
    |     2        |     v22       |      d22      |
    |--------------|---------------|---------------|

I want to transform it into:

    |-----id-------|----value------|-----desc------|
    |     1        |     v1;v2     |      d1;d2    |
    |     2        |     v21;v22   |      d21;d22  |
    |--------------|---------------|---------------|
  • Is it possible through DataFrame operations?
  • What would the RDD transformation look like in this case?

I presume rdd.reduce is the key, but I have no idea how to adapt it to this scenario.

Answer

Kaushal · Dec 8, 2015

You can transform your data using Spark SQL. (On Spark versions before 2.0, collect_list is a Hive UDAF, so you may need a HiveContext rather than a plain SQLContext.)

case class Test(id: Int, value: String, desc: String)

// toDF() needs the SQLContext implicits in scope
import sqlContext.implicits._

val data = sc.parallelize(Seq((1, "v1", "d1"), (1, "v2", "d2"), (2, "v21", "d21"), (2, "v22", "d22")))
  .map(line => Test(line._1, line._2, line._3))
  .toDF()

data.registerTempTable("data")
val result = sqlContext.sql(
  "select id, " +
  "concat_ws(';', collect_list(value)) as value, " +
  "concat_ws(';', collect_list(desc)) as desc " +
  "from data group by id")
result.show
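
To answer the first question directly: yes, the same aggregation is possible through DataFrame operations alone, without registering a temp table. A minimal sketch, assuming Spark 1.6+ where collect_list and concat_ws are exposed in org.apache.spark.sql.functions:

import org.apache.spark.sql.functions.{collect_list, concat_ws}

// groupBy + agg is the DataFrame-API equivalent of the SQL above
val aggregated = data.groupBy("id")
  .agg(
    concat_ws(";", collect_list("value")).as("value"),
    concat_ws(";", collect_list("desc")).as("desc"))
aggregated.show

As for the pure RDD version: rdd.reduce alone won't do, because the concatenation has to happen per key; reduceByKey is the natural fit. A sketch starting from the same raw tuples (note that the ordering of values within a group is not guaranteed):

// Key each record by id, carrying (value, desc) along
val pairs = sc.parallelize(Seq((1, "v1", "d1"), (1, "v2", "d2"), (2, "v21", "d21"), (2, "v22", "d22")))
  .map { case (id, value, desc) => (id, (value, desc)) }

// Concatenate pairwise within each key, without shuffling whole groups
val grouped = pairs.reduceByKey { case ((v1, d1), (v2, d2)) => (v1 + ";" + v2, d1 + ";" + d2) }

grouped.collect.foreach(println)
// e.g. (1,(v1;v2,d1;d2)) and (2,(v21;v22,d21;d22))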