Joining with a lookup table in PySpark - Python / Apache Spark / PySpark / Apache Spark SQL / PySpark SQL

Joining with a lookup table in PySpark


I have two tables: table 'A' and table 'Lookup'.

Table A:

ID  Day
A     1
B     1
C     2
D     4

The Lookup table has a percentage value for every ID/Day combination.

Lookup table:

ID     1    2    3    4
A     20   10   50   30
B      0   50    0   50
C     50   10   10   30
D     10   25   25   40

My expected output is an additional field in table 'A', named 'Percent', populated with the values from the lookup table:

ID  Day  Percent

A     1       20
B     1        0
C     2       10
D     4       40

Since both tables are large, I don't want to pivot either of them.

I have written the code in Scala; you can follow the same approach in Python.

    scala> TableA.show()
    +---+---+
    | ID|Day|
    +---+---+
    |  A|  1|
    |  B|  1|
    |  C|  2|
    |  D|  4|
    +---+---+


    scala> lookup.show()
    +---+---+---+---+---+
    | ID|  1|  2|  3|  4|
    +---+---+---+---+---+
    |  A| 20| 10| 50| 30|
    |  B|  0| 50|  0| 50|
    |  C| 50| 10| 10| 30|
    |  D| 10| 25| 25| 40|
    +---+---+---+---+---+

    //Helper function to retrieve a value from the lookup row by column name
    import org.apache.spark.sql.Row
    val lookupUDF = (r: Row, s: String) => r.getAs[Any](s).toString

    //Join over Key column "ID"
    val joindf  = TableA.join(lookup,"ID")

    //final output DataFrame creation
    val final_df = joindf.map(x => (x.getAs[Any]("ID").toString, x.getAs[Any]("Day").toString, lookupUDF(x, x.getAs[Any]("Day").toString))).toDF("ID","Day","Percentage")

     final_df.show()
     +---+---+----------+
     | ID|Day|Percentage|
     +---+---+----------+
     |  A|  1|        20|
     |  B|  1|         0|
     |  C|  2|        10|
     |  D|  4|        40|
     +---+---+----------+
(Posting my answer a day after I posted the question.)

I was able to solve this by converting the tables to pandas dataframes.

from pyspark.sql.types import *

schema = StructType([StructField("id", StringType())\
                   ,StructField("day", StringType())\
                   ,StructField("1", IntegerType())\
                   ,StructField("2", IntegerType())\
                   ,StructField("3", IntegerType())\
                   ,StructField("4", IntegerType())])

# The day field is a string so it can index the lookup columns by name

data = [['A', '1', 20, 10, 50, 30], ['B', '1', 0, 50, 0, 50], ['C', '2', 50, 10, 10, 30], ['D', '4', 10, 25, 25, 40]]
df = spark.createDataFrame(data,schema=schema)
df.show()

# After joining the 2 tables on "id", the tables would look like this:
+---+---+---+---+---+---+
| id|day|  1|  2|  3|  4|
+---+---+---+---+---+---+
|  A|  1| 20| 10| 50| 30|
|  B|  1|  0| 50|  0| 50|
|  C|  2| 50| 10| 10| 30|
|  D|  4| 10| 25| 25| 40|
+---+---+---+---+---+---+

# Converting to a pandas dataframe
pandas_df = df.toPandas()

  id  day   1   2   3   4
   A   1   20  10  50  30
   B   1    0  50   0  50
   C   2   50  10  10  30
   D   4   10  25  25  40

# Row-wise lookup: use the day value to pick the matching column.
# (This is a plain function for pandas.apply, not a Spark UDF.)
def lookup_percent(row):
    return row[row['day']]

pandas_df['percent'] = pandas_df.apply(lookup_percent, axis=1)

# Converting back to a Spark DF:
spark_df = spark.createDataFrame(pandas_df)

+---+---+---+---+---+---+-------+
| id|day|  1|  2|  3|  4|percent|
+---+---+---+---+---+---+-------+
|  A|  1| 20| 10| 50| 30|     20|
|  B|  1|  0| 50|  0| 50|      0|
|  C|  2| 50| 10| 10| 30|     10|
|  D|  4| 10| 25| 25| 40|     40|
+---+---+---+---+---+---+-------+

spark_df.select("id", "day", "percent").show()

+---+---+-------+
| id|day|percent|
+---+---+-------+
|  A|  1|     20|
|  B|  1|      0|
|  C|  2|     10|
|  D|  4|     40|
+---+---+-------+

I'd appreciate it if someone could post an answer in PySpark that doesn't need the conversion.

Join on ID, then iterate over the lookup table's columns, comparing each column name with the day as a string.

Please post the code for what you tried and where it failed…

I have posted my code as an answer. Converting to a pandas dataframe solved the problem, but I'm looking for a more efficient way to do this in PySpark.