Scala - Filtering a DataFrame using "endsWith"


Given a DataFrame:

val df = sc.parallelize(List(("Mike","1986","1976"), ("Andre","1980","1966"), ("Pedro","1989","2000")))
  .toDF("info", "year1", "year2")
df.show

 +-----+-----+-----+
 | info|year1|year2|
 +-----+-----+-----+
 | Mike| 1986| 1976|
 |Andre| 1980| 1966|
 |Pedro| 1989| 2000|
 +-----+-----+-----+
I am trying to filter out all rows of df with values ending in 6, but I get an exception. I tried:

  val filtered = df.filter(df.col("*").endsWith("6"))
  org.apache.spark.sql.catalyst.analysis.UnresolvedException: Invalid call to dataType on unresolved object, tree: ResolvedStar(info#20, year1#21, year2#22)
I also tried this:

val filtered = df.select(df.col("*")).filter(_ endsWith("6"))
error: missing parameter type for expanded function ((x$1) => x$1.endsWith("6"))

How can I solve this? Thanks.

I'm not quite sure what you're trying to do, but if I understand correctly:

val df = sc.parallelize(List(("Mike","1986","1976"), ("Andre","1980","1966"), ("Pedro","1989","2000"))).toDF("info", "year1", "year2")
df.show 
// +-----+-----+-----+
// | info|year1|year2|
// +-----+-----+-----+
// | Mike| 1986| 1976|
// |Andre| 1980| 1966|
// |Pedro| 1989| 2000|
// +-----+-----+-----+

val conditions = df.columns.map(df(_).endsWith("6")).reduce(_ or _)
df.withColumn("condition", conditions).filter($"condition" === true).drop("condition").show
// +-----+-----+-----+
// | info|year1|year2|
// +-----+-----+-----+
// |Andre| 1980| 1966|
// | Mike| 1986| 1976|
// +-----+-----+-----+
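As a variant of the same idea (a sketch, assuming the same df as above): since the combined expression is an ordinary Column, it can be passed to filter directly, so the temporary "condition" column is not actually needed.

```scala
// Build a single Column that is true when any column's value ends with "6",
// then pass it straight to filter -- no helper column required.
val condition = df.columns
  .map(c => df(c).endsWith("6"))
  .reduce(_ or _)

df.filter(condition).show
// keeps the Mike and Andre rows; the Pedro row (1989, 2000) is dropped
```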

Thanks for the quick answer. I want to filter the df values (all those ending in 6) and then create an rdd from the df. What does reduce(_ or _) stand for? How does it work?

reduce(_ or _) combines the per-column filters with OR statements. That way you keep any row where one of the values in year1…n ends with 6. I'm still not sure I understand your question, though. Could you give an example by editing your question?
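To illustrate what reduce(_ or _) does, here is the same fold on plain Scala Booleans (a sketch, independent of Spark; with Spark Columns the `or` operator builds one combined SQL predicate such as `c1 OR c2 OR c3` instead of evaluating immediately):

```scala
// reduce pairwise-combines a sequence with the given binary operator.
// The plain-Scala analogue of reduce(_ or _) uses || on Booleans:
val flags = Seq(false, true, false)
val any = flags.reduce(_ || _)   // evaluates false || true || false
println(any)                     // prints "true"
```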