Python PySpark: add a new column to a DataFrame based on values from another DataFrame, using a UDF


I have two PySpark DataFrames, and I am trying to add a new column to dataframe_2 (df_2) based on values from dataframe_1. The columns val_1 and val_2 of dataframe_2 give the row and column positions in dataframe_1.

Dataframe_1

df_1 = sqlContext.createDataFrame(
    [(0.78, 0.79, 0.45, 0.67, 0.88),
     (0.77, 0.79, 0.81, 0.82, 0.66),
     (0.99, 0.92, 0.94, 0.95, 0.91),
     (0.75, 0.53, 0.83, 0.73, 0.56),
     (0.77, 0.78, 0.99, 0.34, 0.67)],
    ["col_1", "col_2", "col_3", "col_4", "col_5"])

df_1.show()
+-----+-----+-----+-----+-----+
|col_1|col_2|col_3|col_4|col_5|
+-----+-----+-----+-----+-----+
| 0.78| 0.79| 0.45| 0.67| 0.88|
| 0.77| 0.79| 0.81| 0.82| 0.66|
| 0.99| 0.92| 0.94| 0.95| 0.91|
| 0.75| 0.53| 0.83| 0.73| 0.56|
| 0.77| 0.78| 0.99| 0.34| 0.67|
+-----+-----+-----+-----+-----+

Dataframe_2

df_2 = sqlContext.createDataFrame(
    [(34563, 435353424, 1, 2),
     (23524, 466344656, 2, 1),
     (52452, 263637236, 2, 5),
     (52334, 466633353, 2, 3),
     (66334, 563555578, 5, 4),
     (42552, 123445563, 5, 3),
     (72331, 413555213, 4, 3),
     (82311, 52355563, 2, 2)],
    ["id", "col_A", "val_1", "val_2"])
df_2.show()
+-----+---------+-----+-----+
|   id|    col_A|val_1|val_2|
+-----+---------+-----+-----+
|34563|435353424|    1|    2|
|23524|466344656|    2|    1|
|52452|263637236|    2|    5|
|52334|466633353|    2|    3|
|66334|563555578|    5|    4|
|42552|123445563|    5|    3|
|72331|413555213|    4|    3|
|82311| 52355563|    2|    2|
+-----+---------+-----+-----+

Goal: add a new column value_from_df_1 to df_2, taking the value from df_1 at the row given by val_1 and the column given by val_2. For example, the first row of df_2 has val_1 = 1 and val_2 = 2, so it should get the value in row 1, column 2 of df_1, which is 0.79.

I tried to create a UDF for this (see my code below), but I got an error.

Expected output:

+-----+---------+-----+-----+---------------+
|   id|    col_A|val_1|val_2|value_from_df_1|
+-----+---------+-----+-----+---------------+
|34563|435353424|    1|    2|           0.79|
|23524|466344656|    2|    1|           0.77|
|52452|263637236|    2|    5|           0.66|
|52334|466633353|    2|    3|           0.94|
|66334|563555578|    5|    4|           0.34|
|42552|123445563|    5|    3|           0.99|
|72331|413555213|    4|    3|           0.83|
|82311| 52355563|    2|    2|           0.79|
+-----+---------+-----+-----+---------------+

My code:

from pyspark.sql import functions as F
import pyspark.sql.types as t

def add_data_to_table(table, value_1, value_2):
    return float(table.collect()[value_1 - 1][value_2 - 1])

select_data_from_table = F.udf(add_data_to_table, t.FloatType())
# This call fails: df_1 is a DataFrame, not a column, and a DataFrame cannot be passed as an argument to a UDF
result_df = df_2.withColumn('value_from_df_1', select_data_from_table(df_1, df_2.val_1, df_2.val_2))
result_df.show()

I would really appreciate it if someone could help. Thank you.

Unlike pandas, Spark has no concept of an index, so you need to add one manually. A UDF is also not a good fit here, because a UDF operates row by row rather than on a whole DataFrame, so it cannot look values up in another DataFrame. Instead, reshape df_1 so that every cell becomes a row keyed by its (row, column) position, then join that onto df_2:

from pyspark.sql import functions as F, Window

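# Number the rows of df_1 (1-based) and explode each row into (pos, col) pairs:
# pos is the 0-based column position and col is the cell value.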
df_1_id = df_1.withColumn(
    'row',
    F.row_number().over(Window.orderBy(F.monotonically_increasing_id()))
).select(
    'row',
    F.posexplode(F.array(*df_1.columns))
)

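# Tag df_2 with a temporary rowid to preserve its original order, then join on the
# (row, column) position; pos is 0-based, hence the +1 to line it up with val_2.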
result = df_2.withColumn(
    'rowid',
    F.monotonically_increasing_id()
).join(
    df_1_id,
    (df_1_id.row == df_2.val_1) & (df_1_id.pos + 1 == df_2.val_2)
).orderBy('rowid').drop('rowid', 'row', 'pos')

result.show()
+-----+---------+-----+-----+----+
|   id|    col_A|val_1|val_2| col|
+-----+---------+-----+-----+----+
|34563|435353424|    1|    2|0.79|
|23524|466344656|    2|    1|0.77|
|52452|263637236|    2|    5|0.66|
|52334|466633353|    2|    3|0.81|
|66334|563555578|    5|    4|0.34|
|42552|123445563|    5|    3|0.99|
|72331|413555213|    4|    3|0.83|
|82311| 52355563|    2|    2|0.79|
+-----+---------+-----+-----+----+
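
As an alternative, here is a minimal sketch (not part of the answer above): if df_1 is small enough to collect to the driver, you can turn it into a plain Python list and do the lookup inside an ordinary row-wise UDF, which is closer to what the question attempted. This assumes df_1 preserves its insertion order when collected, which holds for a small locally created DataFrame but is not guaranteed in general; the value_at helper below is hypothetical, not a built-in.

from pyspark.sql import functions as F
import pyspark.sql.types as t

# Materialize the small lookup table on the driver as a list of lists.
lookup = [list(row) for row in df_1.collect()]

@F.udf(t.DoubleType())
def value_at(val_1, val_2):
    # val_1 and val_2 are 1-based row/column positions into df_1.
    return float(lookup[val_1 - 1][val_2 - 1])

result_df = df_2.withColumn('value_from_df_1', value_at('val_1', 'val_2'))
result_df.show()

The lookup list is captured in the UDF's closure and shipped to the executors, so this only makes sense when df_1 is genuinely small; for a large lookup table, the join-based approach above is the better choice.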