Environment: Spark 1.6 on Hadoop, Hortonworks Data Platform (HDP) 2.5
I have a table with 10 billion records, and I would like to copy 300 million of them into a temporary table:
sqlContext.sql("select ....from my_table limit 300000000").repartition(50)
.write.saveAsTable("temporary_table")
I noticed that the LIMIT keyword makes Spark use only one executor, which means shuffling all 300 million records to a single node and writing them back to Hadoop from there. How can I avoid this single-partition reduce while still taking just 300 million records, so that more than one executor stays involved? I would like all nodes to write to Hadoop in parallel.
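For what it's worth, the single-partition behaviour is easy to confirm from the shell. A minimal sketch (Scala):

// In Spark 1.6 a LIMIT shuffles the entire result into one partition,
// so only a single executor performs the subsequent write.
val limited = sqlContext.sql("select .... from my_table limit 300000000")
println(limited.rdd.partitions.length) // typically prints 1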
Can sampling help me with that? If so, how?
Sampling can be used in the following ways:
select .... from my_table TABLESAMPLE(3 PERCENT)
or
select .... from my_table TABLESAMPLE(300000000 ROWS)
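Putting this together, here is a minimal sketch of the whole move (untested; it assumes my_table is registered with the HiveContext, that its storage format supports Hive block sampling, and that 3 percent of 10 billion rows approximates the 300 million you want — sampling never guarantees an exact count):

// Sample roughly 3% of the 10B rows in parallel, then fan the write
// out over 50 partitions so every executor writes to Hadoop.
val sampled = sqlContext.sql(
  "select .... from my_table TABLESAMPLE(3 PERCENT)")

sampled.repartition(50)
  .write
  .saveAsTable("temporary_table")

// Alternative without Hive's TABLESAMPLE: the DataFrame API's sample()
// is evaluated per partition, so it also avoids the single-node reduce.
val viaApi = sqlContext.table("my_table")
  .sample(withReplacement = false, fraction = 0.03) // 0.03 == 300M / 10B

viaApi.repartition(50).write.saveAsTable("temporary_table") // same write as above

The repartition(50) is what spreads the write across the cluster, and sample() runs independently inside each partition, so at no point do all 300 million rows pass through a single executor.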