Please suggest a PySpark DataFrame alternative for Pandas df['col'].unique().
I want to list out all the unique values in a PySpark DataFrame column.
Not the SQL way (registerTempTable and then a SQL query for the distinct values).
Also, I don't need groupBy followed by countDistinct; instead I want to check the distinct VALUES in that column.
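For reference, a minimal sketch of the Pandas behavior I mean (the sample data is just illustrative):

import pandas as pd

# Pandas: unique() returns the distinct values of a column as a NumPy array
df = pd.DataFrame({'col': ['a', 'b', 'a', 'c', 'b']})
print(df['col'].unique())  # ['a' 'b' 'c']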
This should get you the distinct values of a column:
df.select('column1').distinct().collect()
Note that .collect() has no built-in limit on how many values it can return, so this might be slow. Use .show() instead, or add .limit(20) before .collect(), to manage this.
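A minimal end-to-end sketch of the above (the SparkSession setup and the sample data are assumptions for illustration, not part of the original answer):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([('a',), ('b',), ('a',), ('c',)], ['column1'])

# Collect every distinct value as a list of Row objects
rows = df.select('column1').distinct().collect()
values = [r['column1'] for r in rows]  # ['a', 'b', 'c'] (order not guaranteed)

# Or print a bounded preview instead of pulling everything to the driver
df.select('column1').distinct().show()
df.select('column1').distinct().limit(20).collect()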