
Best Way To Get Null Counts, Min And Max Values Of Multiple (100+) Columns From A Pyspark Dataframe

Say I have a list of column names that all exist in the dataframe, cols = ['A', 'B', 'C', 'D']. I am looking for a quick way to get a table/dataframe with one row per metric (NA_counts, Min, Max) and one column per original column, like the output shown below.
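
For a concrete, runnable example, here is a toy dataframe consistent with the output in Solution 1. The actual data isn't given in the question, so these values are illustrative only:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative rows only, chosen to reproduce the example output below
df = spark.createDataFrame(
    [
        (1, 0, "Test", 2010),
        (9, 5, None, 2017),
        (None, 3, None, 2012),
        (2, 4, None, None),
    ],
    ["A", "B", "C", "D"],
)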

Solution 1:

You can calculate each metric separately and then union the results, like this:

import pyspark.sql.functions as F

cols = ['A', 'B', 'C', 'D']

# For each column: 1 when the value is null, 0 otherwise, summed over all rows
nulls_cols = [F.sum(F.when(F.col(c).isNull(), F.lit(1)).otherwise(F.lit(0))).alias(c) for c in cols]
max_cols = [F.max(F.col(c)).alias(c) for c in cols]
min_cols = [F.min(F.col(c)).alias(c) for c in cols]

nulls_df = df.select(F.lit("NA_counts").alias("count"), *nulls_cols)
max_df = df.select(F.lit("Max").alias("count"), *max_cols)
min_df = df.select(F.lit("Min").alias("count"), *min_cols)

# union matches columns by position; unionAll is its deprecated alias since Spark 2.0
nulls_df.union(max_df).union(min_df).show()

Output example:

+---------+---+---+----+----+
|    count|  A|  B|   C|   D|
+---------+---+---+----+----+
|NA_counts|  1|  0|   3|   1|
|      Max|  9|  5|Test|2017|
|      Min|  1|  0|Test|2010|
+---------+---+---+----+----+
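
Since the question mentions 100+ columns, a variant that scans the data only once may also be worth considering: compute all three metrics per column in a single agg and reshape the one-row result on the driver. A minimal sketch (the str() casts are there only so the three metric rows share one schema):

import pyspark.sql.functions as F

# Single pass over the data: three aggregates per column, disambiguated by alias prefix
aggs = []
for c in cols:
    aggs.append(F.sum(F.col(c).isNull().cast("int")).alias("na_" + c))
    aggs.append(F.max(F.col(c)).alias("max_" + c))
    aggs.append(F.min(F.col(c)).alias("min_" + c))

row = df.agg(*aggs).first()

# Reshape the one-row aggregate result into metric-per-row records on the driver
summary = spark.createDataFrame(
    [
        ["NA_counts"] + [str(row["na_" + c]) for c in cols],
        ["Max"] + [str(row["max_" + c]) for c in cols],
        ["Min"] + [str(row["min_" + c]) for c in cols],
    ],
    ["count"] + cols,
)
summary.show()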
