
Python: Converting column values of a DataFrame to a list


I have the following source file. It contains a name, "john", which I want to split into ['j', 'o', 'h', 'n']. Please find the details below.

Source file:

id,name,class,start_data,end_date
1,john,xii,20170909,20210909
Code:

from pyspark.sql import SparkSession

def main():
    spark = SparkSession.builder.appName("PersonProcessing").getOrCreate()

    # Read the CSV with a header row
    df = spark.read.csv('person.txt', header=True)
    # Collect the rows and pull out the 'name' column values
    nameList = [x['name'] for x in df.rdd.collect()]
    print(list(nameList))
    df.show()

if __name__ == '__main__':
    main()
Actual output:

[u'john']
Desired output:

['j','o','h','n']

If you want to do this in plain Python:

nameList = [c  for x in df.rdd.collect() for c in x['name']]
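For clarity, here is a minimal standalone sketch of the same flattening idea; the rows list below is a hypothetical stand-in for what df.rdd.collect() returns:

# Stand-in for df.rdd.collect(): a list of row-like mappings
rows = [{'name': 'john'}]

# Flatten each collected name into its individual characters
nameList = [c for x in rows for c in x['name']]
print(nameList)  # ['j', 'o', 'h', 'n']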
Or, if you want to do it in Spark:

from pyspark.sql import functions as F

df.withColumn('name', F.split(F.col('name'), '')).show()
Result:

+---+--------------+-----+----------+--------+
| id|          name|class|start_data|end_date|
+---+--------------+-----+----------+--------+
|  1|[j, o, h, n, ]|  xii|  20170909|20210909|
+---+--------------+-----+----------+--------+
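Note the trailing empty element in the split array. If you want to drop it, one option (assuming Spark 2.4+, where array_remove is available) is:

from pyspark.sql import functions as F

# Remove the empty-string element that split() leaves behind (Spark 2.4+)
df.withColumn('name', F.array_remove(F.split(F.col('name'), ''), '')).show()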
.tolist() converts a pandas Series to a Python list, so you should first create a list from the data and then loop over the created list:

namelist = df['name'].tolist()
for x in namelist:
    print(x)
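As a hedged illustration of that pandas route (the DataFrame below is a hypothetical stand-in for the asker's data, loaded with pandas rather than Spark):

import pandas as pd

# Hypothetical pandas stand-in for the source file
df = pd.DataFrame({'name': ['john']})

# Convert the Series to a list, then split each name into characters
for name in df['name'].tolist():
    print(list(name))  # ['j', 'o', 'h', 'n']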

If you are doing this in Spark Scala (Spark 2.3.1 and Scala 2.11.8), the code below works. It produces an extra record with an empty name, which is then filtered out.

import spark.implicits._

val classDF = spark.sparkContext.parallelize(Seq((1, "John", "Xii", "20170909", "20210909")))
  .toDF("id", "name", "class", "start_date", "end_date")
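For reference, a PySpark sketch of the same explode-and-filter idea the Scala answer describes (assuming the df from the question; the explode and empty-string filter are my reading of that answer, not its verbatim code):

from pyspark.sql import functions as F

# One row per character, then drop the empty-string row produced by split()
(df.withColumn('char', F.explode(F.split(F.col('name'), '')))
   .filter(F.col('char') != '')
   .select('char')
   .show())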


It is not a DataFrame.