Scala: how to populate a new Spark dataframe column using the value from a specific row of another column. Any advice?
My question is:
I have a spark dataframe that looks like this
+-----------+---------------+
|         id|           name|
+-----------+---------------+
|          1|         Total:|
|          2|          Male:|
|          3|  Under 5 years|
|          4|   5 to 9 years|
|          5| 10 to 14 years|
|          6|        Female:|
|          7|  Under 5 years|
|          8|   5 to 9 years|
|          9| 10 to 14 years|
+-----------+---------------+
I want to create a new DF with an added column that will look like this:
+-----------+---------------+----------------------+
|         id|           name|              new_name|
+-----------+---------------+----------------------+
|          1|         Total:|                Total:|
|          2|          Male:|                 Male:|
|          3|  Under 5 years|   Male: Under 5 years|
|          4|   5 to 9 years|    Male: 5 to 9 years|
|          5| 10 to 14 years|  Male: 10 to 14 years|
|          6|        Female:|               Female:|
|          7|  Under 5 years| Female: Under 5 years|
|          8|   5 to 9 years|  Female: 5 to 9 years|
|          9| 10 to 14 years|Female: 10 to 14 years|
+-----------+---------------+----------------------+
I don't have any code worth showing; I'm looking for an approach to the problem. I imagine it would be something like:
val dfB = dfA.withColumn(row => aUDF(row))
I assume the solution needs some kind of UDF. I assume it needs to loop or map over the rows and update a "prefix" val whenever it finds a row whose name field contains ":", but I don't know how to do that. Any ideas would be appreciated.
Spark 2.4.3: you can achieve this using split, last, and a window function.
I think this is what you are trying to achieve. If it solves your problem, please accept the answer.
scala> import org.apache.spark.sql.expressions.Window
scala> import org.apache.spark.sql.functions._
scala> val df = spark.createDataFrame(Seq((1,"Total:"), (2,"Male:"), (3,"Under 5 years"), (4,"5 to 9 years"), (5,"10 to 14 years"), (6,"Female:"), (7,"Under 5 years"), (8,"5 to 9 years"), (9,"10 to 14 years"))).toDF("id","name")
scala> df.show
+---+--------------+
| id| name|
+---+--------------+
| 1| Total:|
| 2| Male:|
| 3| Under 5 years|
| 4| 5 to 9 years|
| 5|10 to 14 years|
| 6| Female:|
| 7| Under 5 years|
| 8| 5 to 9 years|
| 9|10 to 14 years|
+---+--------------+
scala> val win = Window.orderBy(col("id"))  // no partitionBy: all rows move to a single partition
scala> val df2 = df.withColumn("name_1", last(when(split($"name", ":")(1) === "", $"name"), true).over(win))
scala> df2.withColumn("name", when($"name" === $"name_1", $"name").otherwise(concat($"name_1", $"name"))).drop($"name_1").show(false)
+---+---------------------+
|id |name |
+---+---------------------+
|1 |Total: |
|2 |Male: |
|3 |Male:Under 5 years |
|4 |Male:5 to 9 years |
|5 |Male:10 to 14 years |
|6 |Female: |
|7 |Female:Under 5 years |
|8 |Female:5 to 9 years |
|9 |Female:10 to 14 years|
+---+---------------------+
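If you want the exact spacing from the desired output ("Male: Under 5 years"), you can concatenate with a literal space, e.g. concat($"name_1", lit(" "), $"name"). The "carry the most recent header forward" logic itself can also be sanity-checked outside Spark with plain Scala collections. This is an illustrative sketch of the idea only, not the Spark answer above; names like withPrefix are made up for the example:

```scala
// Rows as (id, name); a header is any name ending in ":".
val rows = Seq(
  (1, "Total:"), (2, "Male:"), (3, "Under 5 years"),
  (4, "5 to 9 years"), (5, "10 to 14 years"), (6, "Female:"),
  (7, "Under 5 years"), (8, "5 to 9 years"), (9, "10 to 14 years")
)

// Fold over the rows in id order, carrying the last header seen.
// Headers keep their own name; other rows get "<header> <name>".
val withPrefix = rows
  .foldLeft((Option.empty[String], Vector.empty[(Int, String, String)])) {
    case ((lastHeader, acc), (id, name)) =>
      if (name.endsWith(":"))
        (Some(name), acc :+ ((id, name, name)))
      else {
        val prefixed = lastHeader.map(h => s"$h $name").getOrElse(name)
        (lastHeader, acc :+ ((id, name, prefixed)))
      }
  }
  ._2

withPrefix.foreach(println)
```

Unlike a window function, this runs on the driver in a single pass, so it only demonstrates the transformation; the windowed Spark version above is what scales to a real dataframe.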