Python: create new PySpark DataFrame columns by splitting values on whitespace

I have a PySpark DataFrame like the input data below. I want to split the values in the productname column on whitespace, then create new columns from the first three values. Sample input and output data are below. Can anyone suggest how to do this with PySpark?

Input data:

+------+-------------------+
|id    |productname        |
+------+-------------------+
|235832|EXTREME BERRY Sweet|
|419736|BLUE CHASER SAUCE  |
|124513|LAAVA C2L5         |
+------+-------------------+
Output:

+------+-------------------+-------------+-------------+-------------+
|id    |productname        |product1     |product2     |product3     |
+------+-------------------+-------------+-------------+-------------+
|235832|EXTREME BERRY Sweet|EXTREME      |BERRY        |Sweet        |
|419736|BLUE CHASER SAUCE  |BLUE         |CHASER       |SAUCE        |
|124513|LAAVA C2L5         |LAAVA        |C2L5         |             |
+------+-------------------+-------------+-------------+-------------+
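
For reference, the sample input can be built with a minimal sketch like the following (assuming an active SparkSession named spark):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(235832, "EXTREME BERRY Sweet"),
     (419736, "BLUE CHASER SAUCE"),
     (124513, "LAAVA C2L5")],
    ["id", "productname"],
)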

Split the productname column, then use element_at() (or .getItem()) on the index values to create the new columns:

df.withColumn("tmp",split(col("productname"),"\s+")).\
withColumn("product1",element_at(col("tmp"),1)).\
withColumn("product2",element_at(col("tmp"),2)).\
withColumn("product3",coalesce(element_at(col("tmp"),3),lit(""))).drop("tmp").show()

# or, equivalently, with 0-based .getItem():

df.withColumn("tmp",split(col("productname"),"\s+")).\
withColumn("product1",col("tmp").getItem(0)).\
withColumn("product2",col("tmp").getItem(1)).\
withColumn("product3",coalesce(col("tmp").getItem(2),lit(""))).drop("tmp").show()
#+------+-------------------+--------+--------+--------+
#|    id|        productname|product1|product2|product3|
#+------+-------------------+--------+--------+--------+
#|235832|EXTREME BERRY Sweet| EXTREME|   BERRY|   Sweet|
#|     4|  BLUE CHASER SAUCE|    BLUE|  CHASER|   SAUCE|
#|     1|         LAAVA C2L5|   LAAVA|    C2L5|        |
#+------+-------------------+--------+--------+--------+
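
Note the indexing difference between the two variants: element_at() is 1-based while .getItem() is 0-based. A quick sketch to illustrate, using a hypothetical one-row DataFrame (assumes the spark session and imports above):

demo = spark.createDataFrame([(["a", "b", "c"],)], ["tmp"])
demo.select(
    element_at(col("tmp"), 1).alias("via_element_at"),  # 1-based -> "a"
    col("tmp").getItem(0).alias("via_getItem"),         # 0-based -> "a"
).show()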

To do it in a more dynamic way:

df.show()
#+------+-------------------+
#|    id|        productname|
#+------+-------------------+
#|235832|EXTREME BERRY Sweet|
#|     4|  BLUE CHASER SAUCE|
#|     1|         LAAVA C2L5|
#+------+-------------------+
from pyspark.sql.functions import desc, size

# calculate the max array size and store it in a variable
arr = int(df.select(size(split(col("productname"), r"\s+")).alias("size")).orderBy(desc("size")).collect()[0][0])

# loop over the indices and add the columns, replacing null with ""
(df.withColumn("temp", split("productname", r"\s+"))
   .select("*", *(coalesce(col("temp").getItem(i), lit("")).alias("product{}".format(i + 1)) for i in range(arr)))
   .drop("temp")
   .show())

#+------+-------------------+--------+--------+--------+
#|    id|        productname|product1|product2|product3|
#+------+-------------------+--------+--------+--------+
#|235832|EXTREME BERRY Sweet| EXTREME|   BERRY|   Sweet|
#|     4|  BLUE CHASER SAUCE|    BLUE|  CHASER|   SAUCE|
#|     1|         LAAVA C2L5|   LAAVA|    C2L5|        |
#+------+-------------------+--------+--------+--------+
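
If the generator expression above is hard to read, the same dynamic logic can be written as an explicit loop of withColumn calls; a sketch, assuming the arr variable and imports from above:

result = df.withColumn("temp", split("productname", r"\s+"))
for i in range(arr):
    # positions beyond the array length come back null, so default to ""
    result = result.withColumn("product{}".format(i + 1),
                               coalesce(col("temp").getItem(i), lit("")))
result.drop("temp").show()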

You can use split, element_at, and a when/otherwise clause with array_union to put empty strings in place:

from pyspark.sql import functions as F

df.withColumn("array", F.split("productname", " "))\
  .withColumn("array", F.when(F.size("array") == 2, F.array_union(F.col("array"), F.array(F.lit(""))))
                        # array_union de-duplicates, so the padding values must be distinct (" " and "")
                        .when(F.size("array") == 1, F.array_union(F.col("array"), F.array(F.lit(" "), F.lit(""))))
                        .otherwise(F.col("array")))\
  .withColumn("product1", F.element_at("array", 1))\
  .withColumn("product2", F.element_at("array", 2))\
  .withColumn("product3", F.element_at("array", 3)).drop("array")\
  .show(truncate=False)

+------+-------------------+--------+--------+--------+
|id    |productname        |product1|product2|product3|
+------+-------------------+--------+--------+--------+
|235832|EXTREME BERRY Sweet|EXTREME |BERRY   |Sweet   |
|419736|BLUE CHASER SAUCE  |BLUE    |CHASER  |SAUCE   |
|124513|LAAVA C2L5         |LAAVA   |C2L5    |        |
|123455|LAVA               |LAVA    |        |        |
+------+-------------------+--------+--------+--------+

Can we assume you only ever need three more columns (1, 2, 3), or could more columns be needed depending on the product name?

Thanks, I like the dynamic version. I did have to edit the orderBy part to .orderBy("size", ascending=False), though. I don't know if that's related to the PySpark version I'm using.

@user3476463 .orderBy(desc("size")) does the same thing, i.e. we're sorting in descending order!
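
For reference, the two orderings discussed in these comments are interchangeable; a minimal sketch, assuming the same df and imports as above:

sizes = df.select(size(split(col("productname"), r"\s+")).alias("size"))
sizes.orderBy(desc("size")).show()             # sort descending via desc()
sizes.orderBy("size", ascending=False).show()  # same order via ascending=False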