Filter or flag rows based on a Scala array
Is there a way to filter or flag rows based on a Scala array? Keep in mind that in reality the number of rows is much larger.

Sample data:
val clients = List(List("1", "67"), List("2", "77"), List("3", "56"), List("4", "90")).map(x => (x(0), x(1)))
val df = clients.toDF("soc", "ages")
+---+----+
|soc|ages|
+---+----+
| 1| 67|
| 2| 77|
| 3| 56|
| 4| 90|
| ..| ..|
+---+----+
I want to filter on all the ages held in a Scala array, say

var z = Array(90, 56, 67)
df.where($"ages" IN z)

or

df.withColumn("flag", when($"ages" >= 30, 1)
  .otherwise(when($"ages"

One option is a UDF:
scala> val df1 = Seq((1, 67), (2, 77), (3, 56), (4, 90)).toDF("soc", "ages")
df1: org.apache.spark.sql.DataFrame = [soc: int, ages: int]
scala> df1.show
+---+----+
|soc|ages|
+---+----+
| 1| 67|
| 2| 77|
| 3| 56|
| 4| 90|
+---+----+
scala> val scalaAgesArray = Array(90, 56,67)
scalaAgesArray: Array[Int] = Array(90, 56, 67)
scala> val containsAgeUdf = udf((x: Int) => scalaAgesArray.contains(x))
containsAgeUdf: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,BooleanType,Some(List(IntegerType)))
scala> val outputDF = df1.withColumn("flag", containsAgeUdf($"ages"))
outputDF: org.apache.spark.sql.DataFrame = [soc: int, ages: int ... 1 more field]
scala> outputDF.show(false)
+---+----+-----+
|soc|ages|flag |
+---+----+-----+
|1 |67 |true |
|2 |77 |false|
|3 |56 |true |
|4 |90 |true |
+---+----+-----+
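Since the question notes that the real row count is much larger, here is a sketch of a join-based alternative (not from the original answer): the lookup array can be turned into a tiny DataFrame and left-joined, which lets Spark broadcast it and avoids a per-row UDF call. The session setup and the names `zDF` and `flagged` are illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.lit

// Local session just for this sketch; in a real job one already exists.
val spark = SparkSession.builder.master("local[*]").appName("flag-by-join").getOrCreate()
import spark.implicits._

val df1 = Seq((1, 67), (2, 77), (3, 56), (4, 90)).toDF("soc", "ages")

// Turn the lookup array into a small DataFrame carrying a literal true flag.
val zDF = Seq(90, 56, 67).toDF("ages").withColumn("flag", lit(true))

// A left join keeps every row of df1; ages with no match get a null flag,
// which na.fill then replaces with false.
val flagged = df1.join(zDF, Seq("ages"), "left").na.fill(false, Seq("flag"))
flagged.show(false)
```

For a small, fixed array the `isin` approach below is simpler; the join mainly pays off when the lookup set itself is large or comes from another dataset.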
You can also pass every element of the array as arguments using the `: _*` varargs operator, and then write a case expression with `isin`.

Ex:
val df1 = Seq((1, 67), (2, 77), (3, 56), (4, 90)).toDF("soc", "ages")
val z = Array(90, 56,67)
df1.withColumn("flag",
  when('ages.isin(z: _*), "in Z array")
    .otherwise("not in Z array"))
  .show(false)
+---+----+--------------+
|soc|ages|flag |
+---+----+--------------+
|1 |67 |in Z array |
|2 |77 |not in Z array|
|3 |56 |in Z array |
|4 |90 |in Z array |
+---+----+--------------+
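If the goal is to filter rather than flag (as the `df.where` attempt in the question suggests), the same `isin` call works directly as a predicate. A minimal sketch, again assuming an active local session:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.master("local[*]").appName("filter-by-isin").getOrCreate()
import spark.implicits._

val df1 = Seq((1, 67), (2, 77), (3, 56), (4, 90)).toDF("soc", "ages")
val z = Array(90, 56, 67)

// Keep only the rows whose age appears in z ...
df1.where($"ages".isin(z: _*)).show(false)

// ... or the complement, by negating the predicate.
df1.where(!$"ages".isin(z: _*)).show(false)
```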