Python: Create a tuple out of two columns - PySpark

My question is based on a similar question here; the difference is that I have a list of values in each column rather than a single value per column. For example:

from pyspark.sql import Row

# Two rows; each column holds an array of (string) values
df = sqlContext.createDataFrame([
    Row(v1=[u'2.0', u'1.0', u'9.0'], v2=[u'9.0', u'7.0', u'2.0']),
    Row(v1=[u'4.0', u'8.0', u'9.0'], v2=[u'1.0', u'1.0', u'2.0'])])

+---------------+---------------+
|             v1|             v2|
+---------------+---------------+
|[2.0, 1.0, 9.0]|[9.0, 7.0, 2.0]|
|[4.0, 8.0, 9.0]|[1.0, 1.0, 2.0]|
+---------------+---------------+
What I'd like to get is something like a zip of the two lists within each row, but I can't figure out how to do it in PySpark 1.6:

+---------------+---------------+--------------------+
|             v1|             v2|             v_tuple|
+---------------+---------------+--------------------+
|[2.0, 1.0, 9.0]|[9.0, 7.0, 2.0]|[(2.0,9.0), (1.0,...|
|[4.0, 8.0, 9.0]|[1.0, 1.0, 2.0]|[(4.0,1.0), (8.0,...|
+---------------+---------------+--------------------+

Note: the array sizes may vary from row to row, but within a given row the two columns always have the same size.
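(As an aside: if the length were fixed and known up front, a UDF-free sketch along these lines should work even on 1.6, building the structs position by position. The length n = 3 below is an assumption taken from the example data, not part of the question.)

from pyspark.sql.functions import array, col, struct

n = 3  # assumed fixed array length, taken from the example data
df_fixed = df.withColumn(
    "v_tuple",
    array(*[struct(col("v1").getItem(i).alias("_1"),
                   col("v2").getItem(i).alias("_2"))
            for i in range(n)]))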

If the array sizes vary from row to row, you'll need a UDF. On newer Spark versions you can declare the return type as a DDL string and use udf as a decorator:

from pyspark.sql.functions import udf

@udf("array<struct<_1:double,_2:double>>")
def zip_(xs, ys):
    return list(zip(xs, ys))

df.withColumn("v_tuple", zip_("v1", "v2"))
from pyspark.sql.types import *

zip_ = udf(
    lambda xs, ys: list(zip(xs, ys)),
    ArrayType(StructType([StructField("_1", DoubleType()),
                          StructField("_2", DoubleType())])))
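One caveat worth flagging, with a hedged usage sketch: the example arrays above hold strings (u'2.0', ...), while both UDFs declare double fields, so the struct values may come back null (or fail) unless the elements are cast first. The name zip_cast below is mine, not from the original answer:

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StructType, StructField, DoubleType

# Cast the string elements to float inside the UDF so they match the
# declared DoubleType fields; without the cast the values may be null.
zip_cast = udf(
    lambda xs, ys: [(float(x), float(y)) for x, y in zip(xs, ys)],
    ArrayType(StructType([StructField("_1", DoubleType()),
                          StructField("_2", DoubleType())])))

df.withColumn("v_tuple", zip_cast("v1", "v2")).show(truncate=False)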