Scala: iterate over each column in Spark and find the maximum length


I am new to Spark Scala, and I have the following situation: I have a table "TEST_table" on the cluster (it can be a Hive table), and I am converting it into a DataFrame as:
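The actual conversion snippet did not make it into the post; a minimal sketch, assuming the table is simply read through the SparkSession (the variable name testDF is taken from the show() call below), might be:

// Hypothetical read of the cluster/Hive table into a DataFrame;
// the exact statement was not included in the question.
val testDF = spark.table("TEST_TABLE")   // or: spark.sql("select * from TEST_TABLE")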

The DF can now be viewed as:

scala> testDF.show()

COL1|COL2|COL3  
----------------
abc|abcd|abcdef 
a|BCBDFG|qddfde 
MN|1234B678|sd
I want an output like the following:

COLUMN_NAME|MAX_LENGTH
       COL1|3
       COL2|8
       COL3|6
Is it possible to do this in Spark Scala?

Plain and simple:

import org.apache.spark.sql.functions._

val df = spark.table("TEST_TABLE")
df.select(df.columns.map(c => max(length(col(c)))): _*)
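This gives a single row with one max(length(...)) value per column. If the vertical COLUMN_NAME|MAX_LENGTH layout from the question is also wanted, one possible sketch (assuming a SparkSession named spark; the aliasing and reshaping are additions, not part of the original answer) is:

import org.apache.spark.sql.functions._
import spark.implicits._

val df = spark.table("TEST_TABLE")
// One pass over the data: compute max(length) for every column at once.
val maxRow = df.select(df.columns.map(c => max(length(col(c))).as(c)): _*).head()
// Reshape the single result row into (column, maxLength) pairs.
val report = df.columns.map(c => (c, maxRow.getAs[Int](c))).toSeq.toDF("COLUMN_NAME", "MAX_LENGTH")
report.show()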

You can try it this way:

import org.apache.spark.sql.functions.{length, max}
import spark.implicits._

val df = Seq(("abc","abcd","abcdef"),
          ("a","BCBDFG","qddfde"),
          ("MN","1234B678","sd"),
          (null,"","sd")).toDF("COL1","COL2","COL3")
df.cache()
val output = df.columns
  .map(c => (c, df.agg(max(length(df(c)))).as[Int].first()))
  .toSeq.toDF("COLUMN_NAME", "MAX_LENGTH")
output.show()
        +-----------+----------+
        |COLUMN_NAME|MAX_LENGTH|
        +-----------+----------+
        |       COL1|         3|
        |       COL2|         8|
        |       COL3|         6|
        +-----------+----------+

I think it is better to cache the input DataFrame df to speed up the computation, since the loop above runs a separate aggregation job over the same data for every column.

Here is another way to get the report with the column names laid out vertically:

scala> import org.apache.spark.sql.functions.{col, length, max}
import org.apache.spark.sql.functions.{col, length, max}

scala> val df = Seq(("abc","abcd","abcdef"),("a","BCBDFG","qddfde"),("MN","1234B678","sd")).toDF("COL1","COL2","COL3")
df: org.apache.spark.sql.DataFrame = [COL1: string, COL2: string ... 1 more field]

scala> df.show(false)
+----+--------+------+
|COL1|COL2    |COL3  |
+----+--------+------+
|abc |abcd    |abcdef|
|a   |BCBDFG  |qddfde|
|MN  |1234B678|sd    |
+----+--------+------+

scala> val columns = df.columns
columns: Array[String] = Array(COL1, COL2, COL3)

scala> val df2 = columns.foldLeft(df) { (acc,x) => acc.withColumn(x,length(col(x))) }
df2: org.apache.spark.sql.DataFrame = [COL1: int, COL2: int ... 1 more field]

scala> val df3 = df2.select( columns.map(x => max(col(x))):_* )
df3: org.apache.spark.sql.DataFrame = [max(COL1): int, max(COL2): int ... 1 more field]

scala> df3.show(false)
+---------+---------+---------+
|max(COL1)|max(COL2)|max(COL3)|
+---------+---------+---------+
|3        |8        |6        |
+---------+---------+---------+


scala> df3.flatMap( r => { (0 until r.length).map( i => (columns(i),r.getInt(i)) ) } ).show(false)
+----+---+
|_1  |_2 |
+----+---+
|COL1|3  |
|COL2|8  |
|COL3|6  |
+----+---+


To get the results into a Scala collection, use Map():


scala> val result = df3.flatMap( r => { (0 until r.length).map( i => (columns(i),r.getInt(i)) ) } ).as[(String,Int)].collect.toMap
result: scala.collection.immutable.Map[String,Int] = Map(COL1 -> 3, COL2 -> 8, COL3 -> 6)

scala> result
res47: scala.collection.immutable.Map[String,Int] = Map(COL1 -> 3, COL2 -> 8, COL3 -> 6)

Comments:

- Nice solution. However, my actual data has nulls and blank spaces, so .as[Int] throws an error, and if I remove .as[Int] it asks for an encoder: java.lang.UnsupportedOperationException: org.apache.spark.sql.Row - field (class: "org.apache.spark.sql.Row", name: "_2") - root class: "scala.Tuple2"
- Thanks, I am trying to reproduce the problem you are facing. I have edited my answer to insert a row with a null and a blank, but it does not show any error. Could you provide a sample row so I can reproduce the issue?
- Hi, I just had to change .as[Int] to .as[String].
- Could you accept the answer then? :)
- Could you convert this to PySpark as well? Thanks.
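Following up on the comment thread: if a column contains only nulls/blanks, max(length(...)) returns null and .as[Int] fails when the result is read back. The commenter worked around it by switching to .as[String]; another hedged variant of the per-column aggregation from the second answer (the coalesce-to-0 default is an addition, not part of the original answer) is:

import org.apache.spark.sql.functions._
import spark.implicits._

// Per-column max length, defaulting to 0 for columns that are entirely null,
// so the aggregate can still be read back as an Int.
val output = df.columns
  .map(c => (c, df.agg(coalesce(max(length(df(c))), lit(0))).as[Int].first()))
  .toSeq.toDF("COLUMN_NAME", "MAX_LENGTH")
output.show()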