
Scala: How do I use withColumn to create a new column that concatenates two numeric columns as a string?


My DataFrame is as follows:

val employees = sc.parallelize(Array[(String, Int, BigInt)](
  ("Rafferty", 31, 222222222),
  ("Jones", 33, 111111111),
  ("Heisenberg", 33, 222222222),
  ("Robinson", 34, 111111111),
  ("Smith", 34, 333333333),
  ("Williams", 15, 222222222)
)).toDF("LastName", "DepartmentID", "Code")

employees.show()

 +----------+------------+---------+
|  LastName|DepartmentID|     Code|
+----------+------------+---------+
|  Rafferty|          31|222222222|
|     Jones|          33|111111111|
|Heisenberg|          33|222222222|
|  Robinson|          34|111111111|
|     Smith|          34|333333333|
|  Williams|          15|222222222|
+----------+------------+---------+
I want to create another column, personal_id, by concatenating DepartmentID and Code. Example: Rafferty => 31222222222

So I wrote the following code:

val anotherdf = employees.withColumn("personal_id", $"DepartmentID".cast("String") + $"Code".cast("String"))


 +----------+------------+---------+------------+
|  LastName|DepartmentID|     Code| personal_id|
+----------+------------+---------+------------+
|  Rafferty|          31|222222222|2.22222253E8|
|     Jones|          33|111111111|1.11111144E8|
|Heisenberg|          33|222222222|2.22222255E8|
|  Robinson|          34|111111111|1.11111145E8|
|     Smith|          34|333333333|3.33333367E8|
|  Williams|          15|222222222|2.22222237E8|
+----------+------------+---------+------------+
But personal_id comes out as a double:

anotherdf.printSchema

root
 |-- LastName: string (nullable = true)
 |-- DepartmentID: integer (nullable = false)
 |-- Code: decimal(38,0) (nullable = true)
 |-- personal_id: double (nullable = true) 
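
The reason is that + on Spark Columns is numeric addition, not string concatenation: Spark's type coercion casts both string operands back to double and adds them (31 + 222222222 = 222222253, shown as 2.22222253E8), which is where the double personal_id comes from. One way to confirm this, assuming the same employees DataFrame and Spark session as above, is to look at the analyzed plan:

import org.apache.spark.sql.functions.col

// In the analyzed plan both cast("string") expressions are cast back to
// double before the add, so the result column is a double sum.
employees
  .withColumn("personal_id", col("DepartmentID").cast("string") + col("Code").cast("string"))
  .explain(true)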

You should use concat:

import org.apache.spark.sql.functions.concat
val anotherdf2 = employees.withColumn("personal_id", concat($"DepartmentID".cast("String"), $"Code".cast("String")))


 +----------+------------+---------+-----------+
|  LastName|DepartmentID|     Code|personal_id|
+----------+------------+---------+-----------+
|  Rafferty|          31|222222222|31222222222|
|     Jones|          33|111111111|33111111111|
|Heisenberg|          33|222222222|33222222222|
|  Robinson|          34|111111111|34111111111|
|     Smith|          34|333333333|34333333333|
|  Williams|          15|222222222|15222222222|
+----------+------------+---------+-----------+
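
As a side note (not part of the original answer), if you ever want a separator between the two parts, or fixed-width zero padding for the Code part, concat_ws and lpad can be combined the same way. A sketch, assuming the same employees DataFrame and that spark.implicits._ is in scope (as in spark-shell):

import org.apache.spark.sql.functions.{concat, concat_ws, lpad}

// Join the two parts with a dash, e.g. "31-222222222"
val withSeparator = employees.withColumn(
  "personal_id",
  concat_ws("-", $"DepartmentID".cast("string"), $"Code".cast("string"))
)

// Zero-pad Code to 9 digits before concatenating, so every id has the same width
val padded = employees.withColumn(
  "personal_id",
  concat($"DepartmentID".cast("string"), lpad($"Code".cast("string"), 9, "0"))
)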