How to handle missing values in a Dataset in Spark/Scala

I have data like this:

8213034705_cst,95,2.927373,jake7870,0,95,117.5,xbox,3
,10,0.18669,parakeet2004,5,1,120,xbox,3
8213060420_gfd,26,0.249757,bluebubbles_1,25,1,120,xbox,3
8213060420_xcv,80,0.59059,sa4741,3,1,120,xbox,3
,75,0.657384,jhnsn2273,51,1,120,xbox,3
I am trying to put "missing value" into the first column for the records where it is missing (or drop those records entirely). I tried the following code, but it gives me an error:

import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.spark.sql._
import org.apache.log4j._
import org.apache.spark.sql.functions
import java.lang.String
import org.apache.spark.sql.functions.udf
//import spark.implicits._


object DocParser2
{

 case class Auction(auctionid:Option[String], bid:Double, bidtime:Double, bidder:String, bidderrate:Integer, openbid:Double, price:Double, item:String, daystolive:Integer)

 def readint(ip:Option[String]):String = ip match
{

  case Some(ip) => ip.split("_")(0)
  case None => "missing value"

}





 def main(args:Array[String]) =
 {

   val spark=SparkSession.builder.appName("DocParser").master("local[*]").getOrCreate()

   import spark.implicits._





   val  intUDF  =   udf(readint _)

   val lines=spark.read.format("csv").option("header","false").option("inferSchema", true).load("data/auction2.csv").toDF("auctionid","bid","bidtime","bidder","bidderrate","openbid","price","item","daystolive")

   val recordsDS=lines.as[Auction]

   recordsDS.printSchema()



   println("splitting auction id into String and Int")

   // recordsDS.withColumn("auctionid_int",java.lang.String.split('auctionid,"_")).show() some error with the split method

   val auctionidcol=recordsDS.col("auctionid")

   recordsDS.withColumn("auctionid_int",intUDF('auctionid)).show() 

   spark.stop()

 }

}
But it fails with the following runtime error:

Cannot cast java.lang.String to scala.Option, in the line val intUDF = udf(readint _)

Can you help me spot the mistake?

Thanks

A UDF never receives an Option as input; you have to pass the actual type instead. For String you can do a null check inside your UDF; for primitive types that cannot be null (Int, Double, etc.) there are other solutions…
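
For the String case described in that answer, here is a minimal sketch of the null-check variant, reusing the names from the question (readint, intUDF, recordsDS) and assuming import spark.implicits._ is in scope as in the original code:

// readint takes a plain String; a missing auctionid arrives as null, not as None
def readint(ip: String): String =
  if (ip == null) "missing value" else ip.split("_")(0)

val intUDF = udf(readint _)

// the UDF now receives the raw (possibly null) column value, so there is no Option cast error
recordsDS.withColumn("auctionid_int", intUDF('auctionid)).show()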


You can use spark.read.csv to read the csv file and na.drop() to drop the records that contain missing values. Tested on Spark 2.0.2:

val df = spark.read.option("header", "false").option("inferSchema", "true").csv("Path to Csv file")

df.show
+--------------+---+--------+-------------+---+---+-----+----+---+
|           _c0|_c1|     _c2|          _c3|_c4|_c5|  _c6| _c7|_c8|
+--------------+---+--------+-------------+---+---+-----+----+---+
|8213034705_cst| 95|2.927373|     jake7870|  0| 95|117.5|xbox|  3|
|          null| 10| 0.18669| parakeet2004|  5|  1|120.0|xbox|  3|
|8213060420_gfd| 26|0.249757|bluebubbles_1| 25|  1|120.0|xbox|  3|
|8213060420_xcv| 80| 0.59059|       sa4741|  3|  1|120.0|xbox|  3|
|          null| 75|0.657384|    jhnsn2273| 51|  1|120.0|xbox|  3|
+--------------+---+--------+-------------+---+---+-----+----+---+

df.na.drop().show
+--------------+---+--------+-------------+---+---+-----+----+---+
|           _c0|_c1|     _c2|          _c3|_c4|_c5|  _c6| _c7|_c8|
+--------------+---+--------+-------------+---+---+-----+----+---+
|8213034705_cst| 95|2.927373|     jake7870|  0| 95|117.5|xbox|  3|
|8213060420_gfd| 26|0.249757|bluebubbles_1| 25|  1|120.0|xbox|  3|
|8213060420_xcv| 80| 0.59059|       sa4741|  3|  1|120.0|xbox|  3|
+--------------+---+--------+-------------+---+---+-----+----+---+
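
The question also mentions writing a placeholder such as "missing value" into the first column instead of dropping the rows; a minimal sketch of that with na.fill on the same DataFrame (the auction id ends up in _c0 here because the file is read without a header):

// replace nulls in the first column only, leaving the other columns untouched
df.na.fill("missing value", Seq("_c0")).show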


Do you know a way to drop the records with missing values? Although I'm not sure about Spark 2.0, in 1.6 I would do this:
recordsDS.where(col("auctionid").isNotNull).withColumn("auctionid_int", intUDF('auctionid)).show()
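
A sketch of the same filtering idea on Spark 2.x using the typed Dataset API instead of the UDF, assuming the Auction case class and recordsDS from the question (and import spark.implicits._):

recordsDS
  .filter(_.auctionid.isDefined)                    // keep only rows whose auctionid is present
  .map(a => (a.auctionid.get.split("_")(0), a.bid)) // split off the numeric part of the id
  .toDF("auctionid_int", "bid")
  .show()
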
Which Spark version are you using?