
How can I replace the null values in the output of a left join with 0 in a PySpark DataFrame?

I have a simple PySpark DataFrame, df1 -

df1 = spark.createDataFrame([
    ("u1", 1),
    ("u1", 2),
    ("u2", 3),
    ("u3", 4),

    ],
    ['user_id', 'var1'])

print(df1.printSchema())
df1.show(truncate=False)
Output -

root
 |-- user_id: string (nullable = true)
 |-- var1: long (nullable = true)

None
+-------+----+
|user_id|var1|
+-------+----+
|u1     |1   |
|u1     |2   |
|u2     |3   |
|u3     |4   |
+-------+----+
I have another PySpark DataFrame, df2 -

df2 = spark.createDataFrame([
    (1, 'f1'),
    (2, 'f2'),

    ],
    ['var1', 'var2'])

print(df2.printSchema())
df2.show(truncate=False)
Output -

root
 |-- var1: long (nullable = true)
 |-- var2: string (nullable = true)

None
+----+----+
|var1|var2|
+----+----+
|1   |f1  |
|2   |f2  |
+----+----+
I have to join the two DataFrames mentioned above with a left join -

df1.join(df2, df1.var1==df2.var1, 'left').show()
Output -

+-------+----+----+----+
|user_id|var1|var1|var2|
+-------+----+----+----+
|     u1|   1|   1|  f1|
|     u1|   2|   2|  f2|
|     u2|   3|null|null|
|     u3|   4|null|null|
+-------+----+----+----+
But as you can see, I get null values in the rows where the two tables do not match.
How can I replace all of those null values with 0?

You can use fillna. You need two fillna calls, one for the integer columns and one for the string columns:

df1.join(df2, df1.var1==df2.var1, 'left').fillna(0).fillna("0")
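For reference, fillna is an alias of na.fill, so the same two-step fill can also be written through the na accessor. A minimal sketch, reusing df1 and df2 from above (the variable name joined is only for illustration):

joined = df1.join(df2, df1.var1 == df2.var1, 'left')
# fillna() and na.fill() are aliases of each other:
joined = joined.na.fill(0)    # fill nulls in numeric columns with 0
joined = joined.na.fill("0")  # fill nulls in string columns with "0"
joined.show()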

You can rename the columns after the join (otherwise you end up with duplicate column names) and use a dictionary to specify how the missing values should be filled:

df1.join(df2, df1.var1 == df2.var1, 'left').select(
    *[df1['user_id'], df1['var1'], df2['var1'].alias('df2_var1'), df2['var2'].alias('df2_var2')]
).fillna({'df2_var1': 0, 'df2_var2': '0'}).show()
Output:

+-------+----+--------+--------+
|user_id|var1|df2_var1|df2_var2|
+-------+----+--------+--------+
|     u1|   1|       1|      f1|
|     u2|   3|       0|       0|
|     u1|   2|       2|      f2|
|     u3|   4|       0|       0|
+-------+----+--------+--------+

I have already tried this solution, but it doesn't seem to work for me, and I don't know why. @n0obcoder do the two fillna calls work...? If you have columns that are neither numeric nor string (e.g. array/struct/map), they will not be filled.
df.filter(df['col'].isNull())
You need to use coalesce, because fillna does not support arrays.
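A minimal sketch of the coalesce approach mentioned in that comment, assuming the same df1 and df2 as above (the df2_var1/df2_var2 aliases are only for illustration):

from pyspark.sql import functions as F

# coalesce() returns the first non-null value among its arguments, so it can
# replace nulls column by column, including for types that fillna() does not handle.
df1.join(df2, df1.var1 == df2.var1, 'left').select(
    df1['user_id'],
    df1['var1'],
    F.coalesce(df2['var1'], F.lit(0)).alias('df2_var1'),
    F.coalesce(df2['var2'], F.lit('0')).alias('df2_var2'),
).show()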