Let's say I have a numpy array a that contains the numbers 1-10. So a is [1 2 3 4 5 6 7 8 9 10].
Now, I also have a Python Spark dataframe to which I want to add my numpy array a. I figure that a column of literals will do the job. So I do the following:
df = df.withColumn("NewColumn", F.lit(a))
This doesn't work. The error is "Unsupported literal type class java.util.ArrayList".
Now, if I try just one element of the array, as follows, it works.
df = df.withColumn("NewColumn", F.lit(a[0]))
Is there a way I can do what I'm trying? I've been working on this task for days, and this is the closest I've come to finishing it. I've looked at all the related Stack Overflow questions, but none gave quite the answer I was looking for. Any help is appreciated. Thanks.
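Edit: for reference, here is a minimal script showing what I'm doing (the SparkSession setup and the sample DataFrame are just stand-ins for my real data):
import numpy as np
from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()
# stand-in for my real DataFrame
df = spark.createDataFrame([("x",), ("y",)], ["col1"])
a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
df = df.withColumn("NewColumn", F.lit(a))     # fails with the error above
df = df.withColumn("NewColumn", F.lit(a[0]))  # works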
You can use the built-in array function as follows:
from pyspark.sql import functions as F

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
df = spark.createDataFrame([['a b c d e f g h i j '],], ['col1'])
df = df.withColumn("NewColumn", F.array([F.lit(x) for x in a]))
df.show(truncate=False)
df.printSchema()
You should get
+--------------------+-------------------------------+
|col1 |NewColumn |
+--------------------+-------------------------------+
|a b c d e f g h i j |[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]|
+--------------------+-------------------------------+
root
|-- col1: string (nullable = true)
|-- NewColumn: array (nullable = false)
| |-- element: integer (containsNull = false)
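One caveat: the question uses a numpy array rather than a plain Python list. On some Spark versions F.lit does not accept numpy scalar types, so if a really is a numpy array it may be safer to convert its values to plain Python ints first, e.g. with tolist() (just a sketch of one way to do it):
import numpy as np
from pyspark.sql import functions as F
a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
# a.tolist() yields plain Python ints, which F.lit always accepts
df = df.withColumn("NewColumn", F.array([F.lit(x) for x in a.tolist()]))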
from pyspark.sql import types as T

# udf function that simply returns the list a
def arrayUdf():
    return a

callArrayUdf = F.udf(arrayUdf, T.ArrayType(T.IntegerType()))

# calling udf function
df = df.withColumn("NewColumn", callArrayUdf())
The output is the same as with the list-comprehension approach above.
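The same caveat applies here: if a is a numpy array, returning it directly from the udf can fail when the result is deserialized on some Spark versions, so it is safer to have the udf return plain Python ints (a small tweak to the function above):
def arrayUdf():
    # convert numpy values to plain Python ints before returning
    return [int(x) for x in a]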
Update: as @pault pointed out in a comment, you can hide the loop using map:
df = df.withColumn("NewColumn", F.array(*map(F.lit, a)))
(On Python 3, map returns an iterator, so unpack it with * or wrap it in list() before passing it to array.)
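For the original numpy array, that same one-liner would look roughly like this (again using tolist() to get plain Python ints first):
df = df.withColumn("NewColumn", F.array(*map(F.lit, a.tolist())))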