Apache Spark: generate a report of mismatched columns between 2 PySpark dataframes


Team, we need to generate a report of mismatched columns, matched on a key field, between 2 PySpark dataframes that have exactly the same structure.

Here is the first dataframe -

>>> df.show()
+--------+----+----+----+----+----+----+----+----+
|     key|col1|col2|col3|col4|col5|col6|col7|col8|
+--------+----+----+----+----+----+----+----+----+
|    abcd| 123| xyz|   a|  ab| abc| def| qew| uvw|
|   abcd1| 123| xyz|   a|  ab| abc| def| qew| uvw|
|  abcd12| 123| xyz|   a|  ab| abc| def| qew| uvw|
| abcd123| 123| xyz|   a|  ab| abc| def| qew| uvw|
|abcd1234| 123| xyz|   a|  ab| abc| def| qew| uvw|
+--------+----+----+----+----+----+----+----+----+
And here is the second dataframe -

>>> df1.show()
+--------+----+----+----+----+----+----+----+----+
|     key|col1|col2|col3|col4|col5|col6|col7|col8|
+--------+----+----+----+----+----+----+----+----+
|    abcd| 123| xyz|   a|  ab| abc| def| qew| uvw|
|   abcdx| 123| xyz|   a|  ab| abc| def| qew| uvw|
|  abcd12| 123| xyz|   a| abx| abc|defg| qew| uvw|
| abcd123| 123| xyz|   a|  ab| abc|defg| qew| uvw|
|abcd1234| 123| xyz|   a|  ab|abcd|defg| qew| uvw|
+--------+----+----+----+----+----+----+----+----+
A full outer join gives me this -

>>> dfFull=df.join(df1,'key','outer')
>>> dfFull.show()
+--------+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+
|     key|col1|col2|col3|col4|col5|col6|col7|col8|col1|col2|col3|col4|col5|col6|col7|col8|
+--------+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+
|  abcd12| 123| xyz|   a|  ab| abc| def| qew| uvw| 123| xyz|   a| abx| abc|defg| qew| uvw|
|   abcd1| 123| xyz|   a|  ab| abc| def| qew| uvw|null|null|null|null|null|null|null|null|
|abcd1234| 123| xyz|   a|  ab| abc| def| qew| uvw| 123| xyz|   a|  ab|abcd|defg| qew| uvw|
| abcd123| 123| xyz|   a|  ab| abc| def| qew| uvw| 123| xyz|   a|  ab| abc|defg| qew| uvw|
|   abcdx|null|null|null|null|null|null|null|null| 123| xyz|   a|  ab| abc| def| qew| uvw|
|    abcd| 123| xyz|   a|  ab| abc| def| qew| uvw| 123| xyz|   a|  ab| abc| def| qew| uvw|
+--------+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+
Looking at col6 alone, 5 values mismatch on the 'key' field (the only value that matches is in the last record).

I need to generate a report like this for all columns; the mismatch sample can be the value from any record in the dataframe:

colName,NumofMismatch,mismatchSampleFromDf,misMatchSamplefromDf1
col6,5,def,defg
col7,2,null,qew
col8,2,null,uvw
col5,3,null,abc
This is a column-wise summary, based on the key, of how many values mismatch between the two dataframes.

Sid

Assuming the two dataframes are df1 and df2, you can try the following:

from pyspark.sql.functions import when, array, count, first

# list of columns to be compared
cols = df1.columns[1:]

df_new = (df1.join(df2, "key", "outer")
    .select([ when(~df1[c].eqNullSafe(df2[c]), array(df1[c], df2[c])).alias(c) for c in cols ])
    .selectExpr('stack({},{}) as (colName, mismatch)'.format(len(cols), ','.join('"{0}",`{0}`'.format(c) for c in cols)))
    .filter('mismatch is not NULL'))

df_new.show(10)
+-------+-----------+                                                           
|colName|   mismatch|
+-------+-----------+
|   col4|  [ab, abx]|
|   col6|[def, defg]|
|   col6|[def, defg]|
|   col5|[abc, abcd]|
|   col6|[def, defg]|
|   col1|    [, 123]|
|   col2|    [, xyz]|
|   col3|      [, a]|
|   col4|     [, ab]|
|   col5|    [, abc]|
+-------+-----------+
Notes:

(1) The condition used to find the mismatches,

~df1[c].eqNullSafe(df2[c])

is satisfied when either of the following holds:

+ df1[c] != df2[c]
+ df1[c] is NULL or df2[c] is NULL, but not both

(2) A mismatch, if any, is saved as an ArrayType column in which the first item comes from df1 and the second from df2. If there is no mismatch, NULL is returned and the row is then filtered out.

(3) The stack() function generated dynamically by Python's format() is as follows:

stack(8,"col1",`col1`,"col2",`col2`,"col3",`col3`,"col4",`col4`,"col5",`col5`,"col6",`col6`,"col7",`col7`,"col8",`col8`) as (colName, mismatch)
Once we have df_new, we can do a groupby + aggregation:

df_new.groupby('colName') \
    .agg(count('mismatch').alias('NumOfMismatch'), first('mismatch').alias('mismatch')) \
    .selectExpr('colName', 'NumOfMismatch', 'mismatch[0] as misMatchFromdf1', 'mismatch[1] as misMatchFromdf2') \
    .show()
+-------+-------------+---------------+---------------+
|colName|NumOfMismatch|misMatchFromdf1|misMatchFromdf2|
+-------+-------------+---------------+---------------+
|   col8|            2|           null|            uvw|
|   col3|            2|           null|              a|
|   col4|            3|             ab|            abx|
|   col1|            2|           null|            123|
|   col6|            5|            def|           defg|
|   col5|            3|            abc|           abcd|
|   col2|            2|           null|            xyz|
|   col7|            2|           null|            qew|
+-------+-------------+---------------+---------------+