
Pyspark: Append column name to column value using Spark

Tags: pyspark, apache-spark-sql, azure-databricks, fpgrowth

I have data in a comma-separated file which I have loaded into a Spark dataframe. The data looks like this:

  A B C
  1 2 3
  4 5 6
  7 8 9
I want to transform the above dataframe in Spark using pyspark into:

   A    B    C
  A_1  B_2  C_3
  A_4  B_5  C_6
And then convert it into a list of lists using pyspark:

[[A_1, B_2, C_3], [A_4, B_5, C_6]]
And then run the FP-Growth algorithm on the above dataset using pyspark.

The code I have tried is as follows:

import pyspark.sql.functions as func
from pyspark.sql.functions import col, size, udf
from pyspark.sql.types import StringType
from pyspark.ml.fpm import FPGrowth
from pyspark.sql import Row, SparkSession, SQLContext
from pyspark import SparkConf, SparkContext

# sc and spark are already defined in a Databricks notebook
sqlContext = SQLContext(sc)
df = spark.read.format("csv").option("header", "true").load("dbfs:/FileStore/tables/data.csv")

names = df.schema.names
Then I thought of doing something inside a for loop:

 for name in names:
      -----
      ------
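A possible body for that loop, as a sketch of my own (it uses withColumn and concat; the answer below achieves the same thing with reduce):

from pyspark.sql.functions import col, concat, lit

# Hypothetical loop body: prefix every value with its column name and "_"
for name in names:
    df = df.withColumn(name, concat(lit(name), lit("_"), col(name)))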
After this, I would use fpgrowth:

df = spark.createDataFrame([
    (0, ["A_1", "B_2", "C_3"]),
    (1, ["A_4", "B_5", "C_6"])
], ["id", "items"])

fpGrowth = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fpGrowth.fit(df)
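For reference, once the model is fitted, it can be inspected with the standard pyspark.ml.fpm API, along these lines:

# Frequent itemsets and association rules found by FP-Growth
model.freqItemsets.show(truncate=False)
model.associationRules.show(truncate=False)

# Add predicted consequents for each input itemset
model.transform(df).show(truncate=False)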

Here are some concepts for those who normally use Scala, showing how to do this with pyspark. It is somewhat different, but I certainly picked up a thing or two about pyspark and zipWithIndex along the way. Anyway:

The first part converts the data into the required format; there are probably more imports than needed, but they are left as is:

from functools import reduce
from pyspark.sql.functions import lower, col, lit, concat, split
from pyspark.sql.types import *
from pyspark.sql import Row
from pyspark.sql import functions as f

source_df = spark.createDataFrame(
    [
        (1, 11, 111),
        (2, 22, 222)
    ],
    ["colA", "colB", "colC"]
)

# Prefix every value with its column name, e.g. 1 -> colA_1
intermediate_df = reduce(
    lambda df, col_name: df.withColumn(col_name, concat(lit(col_name), lit("_"), col(col_name))),
    source_df.columns,
    source_df
)

# Concatenate all columns into one comma-separated string, then split it into an array column
allCols = [x for x in intermediate_df.columns]
result_df = intermediate_df.select(f.concat_ws(',', *allCols).alias('CONCAT_COLS'))

result_df = result_df.select(split(col("CONCAT_COLS"), r",\s*").alias("ARRAY_COLS"))

# Add 0, 1, 2, ... with zipWithIndex. It is appended at the back, but that does not matter; you can move it around.
# New structure: the existing fields (one in this case, but done flexibly) plus the zipWithIndex value.
schema = StructType(result_df.schema.fields[:] + [StructField("index", LongType(), True)])

# This dict-based approach is needed with pyspark; it is different from Scala.
rdd = result_df.rdd.zipWithIndex()
rdd1 = rdd.map(
    lambda row: tuple(row[0].asDict()[c] for c in schema.fieldNames()[:-1]) + (row[1],)
)

final_result_df = spark.createDataFrame(rdd1, schema)
final_result_df.show(truncate=False)
This returns:

 +---------------------------+-----+
 |ARRAY_COLS                 |index|
 +---------------------------+-----+
 |[colA_1, colB_11, colC_111]|0    |
 |[colA_2, colB_22, colC_222]|1    |
 +---------------------------+-----+
The second part is the old zipWithIndex approach, needed if you want the 0, 1, ... indices. It is a pain compared to Scala.

In general, this is easier to solve in Scala.


Not sure about the performance, since it is not a foldLeft, which is interesting. Actually, I think it is fine.
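To tie this back to the question, here is a hedged sketch of how the result above could be fed to FPGrowth (assuming the final_result_df, ARRAY_COLS and index names from the code above):

from pyspark.ml.fpm import FPGrowth

# Rename to the (id, items) layout the question asks for
fp_input_df = final_result_df.selectExpr("index as id", "ARRAY_COLS as items")

# A plain Python list of lists, if that intermediate form is really needed
list_of_lists = [row.items for row in fp_input_df.collect()]

fpGrowth = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fpGrowth.fit(fp_input_df)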

You can use the lit function and then the collect_list function; in Scala it would be:

df.withColumn("a", concat(lit("a"), df("a")))

Pyspark should be something similar. Hard yakka; answer updated. Do you need the 0, 1, 2 indices? This is actually a good question that ties many concepts together. Now how do I convert it into this format: (0, [A_1, B_2, C_3]), (1, [A_4, B_5, C_6]), ["id", "items"]? I was thinking about that but ran into some pyspark issues that I may be able to solve. I am better at Scala, but I wondered how we would do it, which is why I added my last comment. I think I may have it now; I also needed the index.
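For what it is worth, a rough pyspark equivalent of that Scala one-liner (my own sketch; the extra underscore is added so the result matches the A_1 style asked for in the question):

from pyspark.sql.functions import col, concat, lit

# Prefix each value in column "a" with the column name and an underscore
df = df.withColumn("a", concat(lit("a"), lit("_"), col("a")))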