Lowercasing CSV column names in Spark Scala


Please see the code below and let me know how to change the column names to lowercase. I tried withColumnRenamed, but with that I have to rename every column and type out each column name. I want to do this across all columns without listing them, because there are too many.

Scala version: 2.11, Spark: 2.2

import org.apache.spark.sql.SparkSession
import org.apache.log4j.{Level, Logger}
import com.datastax


import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import org.apache.spark.sql._

object dataframeset {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setAppName("Sample1").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val rdd1 = sc.cassandraTable("tdata", "map3")
    Logger.getLogger("org").setLevel(Level.ERROR)
    Logger.getLogger("akka").setLevel(Level.ERROR)
    val spark1 = org.apache.spark.sql.SparkSession.builder().master("local").config("spark.cassandra.connection.host","127.0.0.1")
      .appName("Spark SQL basic example").getOrCreate()

    val df = spark1.read.format("csv").option("header","true").option("inferschema", "true").load("/Users/Desktop/del2.csv")
    import spark1.implicits._
    println("\nTop Records are:")
    df.show(1)


    val dfprev1 = df.select("sno", "year", "StateAbbr")

    dfprev1.show(1)
  }
}
Desired output:

| sno | year | stateabbr | statedesc | cityname | geographicalevel
All column names should be lowercase.
Actual output:

Top Records are:
+---+----+---------+---------+--------+ ... (remaining columns truncated) ...
|sno|year|StateAbbr|StateDesc|CityName| ... |
+---+----+---------+---------+--------+ ...
|  1|2014|       US|      ...|     ...| ... |
+---+----+---------+---------+--------+ ...
only showing top 1 row

+---+----+---------+
|sno|year|StateAbbr|
+---+----+---------+
|  1|2014|       US|
+---+----+---------+
only showing top 1 row

Just use toDF:

df.toDF(df.columns map(_.toLowerCase): _*)
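The renaming itself is plain string manipulation; toDF simply receives the transformed names as varargs. A minimal sketch of that step in plain Scala (no Spark session needed), using a few illustrative column names taken from the question's schema:

```scala
// The name transformation that toDF receives (plain Scala, no Spark).
// Column names here are illustrative, taken from the question's schema.
val columns = Array("sno", "year", "StateAbbr", "StateDesc", "CityName")
val lowered = columns.map(_.toLowerCase)
println(lowered.mkString(", "))  // sno, year, stateabbr, statedesc, cityname
// In Spark the array is then splatted as varargs: df.toDF(lowered: _*)
```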

Another way to achieve this is with the foldLeft method:

val myDFcolNames = myDF.columns.toList
val rdoDenormDF = myDFcolNames.foldLeft(myDF)((accDF, c) =>
  accDF.withColumnRenamed(c, c.toLowerCase))
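The foldLeft pattern can be seen in isolation without a Spark session. In this sketch a List[String] stands in for the DataFrame's schema, and a map-based rename stands in for withColumnRenamed; it is an illustration of the accumulator pattern, not the Spark API:

```scala
// Sketch: foldLeft threads an accumulator (here the evolving list of names,
// standing in for the DataFrame) through one rename per original column.
def lowercaseAll(cols: List[String]): List[String] =
  cols.foldLeft(cols)((acc, c) =>
    acc.map(n => if (n == c) n.toLowerCase else n))  // withColumnRenamed stand-in

val before = List("sno", "year", "StateAbbr", "StateDesc")
println(lowercaseAll(before).mkString(", "))  // sno, year, stateabbr, statedesc
```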

Got it. Thank you.