Strange behavior of Spark 2's CSV parser when the multiLine option is enabled

Tags: apache-spark, apache-spark-sql, spark-csv, apache-spark-2.2

When creating a DataFrame from a CSV file, some of the file's columns are parsed incorrectly if the multiLine option is enabled.

Below is the code execution; I will try to show the strange behavior as the code runs.

First, I load the file into two variables: df_ok loads the file without the multiLine option, while df_ko loads it with multiLine enabled. The sample file I am using has \r\n as EOL, is encoded in UTF-8, and uses a pipe as the column separator.

val df_ok = spark.read.format("csv").option("header", "true").option("delimiter", "|").load("/.../20180423_LSV.csv")
val df_ko = spark.read.format("csv").option("header", "true").option("delimiter", "|").option("multiLine", "true").load("/.../20180423_LSV.csv")

df_ok.printSchema
root
 |-- FILIALE: string (nullable = true)
 |-- IDMDB: string (nullable = true)
 |-- FIMAGIC: string (nullable = true)
 ...
 |-- STATUS: string (nullable = true)
 |-- LSV_TYPE: string (nullable = true)

df_ko.printSchema
root
 |-- FILIALE: string (nullable = true)
 |-- IDMDB: string (nullable = true)
 |-- FIMAGIC: string (nullable = true)
 ...
 |-- STATUS: string (nullable = true)
 : string (nullable = true)

 df_ko.columns
 res0: Array[String] = Array(FILIALE, IDMDB, FIMAGIC, FIVEHMAGIC, SITE, VEHICLE, DESC001, DESC002, LOCN, PROGRESS, CHASSIS, REGN, CUSDATE, INVDATE, STYPE, TARMAGIC, VEHMAGIC, COST, REGDATE, DELDATE, EXEC001, INVOICE, INVTOT, STATUS, "LSV")YPE
The first thing I noticed is that when multiLine is used, the LSV_TYPE column disappears; instead, printSchema shows : string (nullable = true). And df_ko.columns displays something very strange: ..."LSV")YPE. This does not happen when the file uses only \n as the line ending. Setting the quote option does not change anything.
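A trailing carriage return glued to the last header field would explain both artifacts: when a name like LSV_TYPE\r is printed, the \r moves the cursor back to the start of the line, so the tail of the output visually overwrites its head. A minimal plain-Scala sketch of the suspected effect (assuming the parser splits records on \n alone and leaves the \r attached; this is not Spark's actual parsing code):

```scala
// If records are split on "\n" only, the "\r" of a "\r\n" line ending
// stays attached to the last field of the header row.
val headerLine = "FILIALE|IDMDB|LSV_TYPE\r"
val fields = headerLine.split('|')

// The last column name silently carries a carriage return:
assert(fields.last == "LSV_TYPE\r")

// Printed inside e.g. Array(...), the '\r' rewinds the cursor, which is
// why the console can show mangled lines like ..."LSV")YPE.
println(fields.mkString("Array(", ", ", ")"))
```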

The next thing I tested was selecting the column FILIALE:

df_ok.select($"FILIALE")
res1: org.apache.spark.sql.DataFrame = [FILIALE: string]

df_ok.columns.contains("FILIALE")
res2: Boolean = true

df_ko.select($"FILIALE")
org.apache.spark.sql.AnalysisException: cannot resolve '`FILIALE`' given input columns: [INVOICE, DESC001, COST, STYPE, PROGRESS, INVTOT, VEHICLE, REGDATE, TARMAGIC, STATUS, CUSDATE, LOCN, INVDATE, SITE, DELDATE, REGN, EXEC001, VEHMAGIC,, DESC002, FIVEHMAGIC, CHASSIS, FIMAGIC, FILIALE];;
So I figured it was not just about the name of the column:

df_ko.columns.head.toArray
res68: Array[Char] = Array(, F, I, L, I, A, L, E)

df_ko.columns.head.toArray.foreach(c => println(c.toInt))
65279
70
73
76
73
65
76
69
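Character code 65279 is 0xFEFF, the Unicode byte-order mark (BOM): a UTF-8 file may begin with the bytes EF BB BF, and a parser that does not strip them decodes them into a single invisible '\uFEFF' prepended to the first column name. A quick check of that hypothesis in plain Scala:

```scala
// 65279 == 0xFEFF, the byte-order mark. Prepended to a column name it is
// invisible when printed, but it breaks lookups by name.
val bommed = "\uFEFF" + "FILIALE"
assert(bommed.head.toInt == 65279)
assert(bommed != "FILIALE")                       // name-based resolution fails
assert(bommed.stripPrefix("\uFEFF") == "FILIALE") // stripping the BOM restores it
```

This would match the AnalysisException above: a FILIALE-looking name is listed among the input columns, yet `FILIALE` cannot be resolved.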
Then I checked the file in a different way:

spark.read.text("/.../20180423_LSV.csv").collect.head.getString(0).toArray.foreach(c => println(c.toInt))
70
73
76
73
65
76
At that point I was sure the problem came from the CSV parsing. Adding the charset option or using univocity as the value of the parseLib option does not change anything.
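One defensive workaround (a sketch, not an official fix; it requires a running Spark session) is to rename the columns after loading, stripping a leading BOM and any trailing carriage return:

```scala
// Hypothetical cleanup: normalize every column name of df_ko by removing
// a leading U+FEFF BOM and a trailing '\r'.
val df_clean = df_ko.columns.foldLeft(df_ko) { (df, name) =>
  df.withColumnRenamed(name, name.stripPrefix("\uFEFF").stripSuffix("\r"))
}
```

This only repairs the header; it does not help if the BOM or stray \r characters also leak into data rows.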

The file I am using:

FILIALE|IDMDB|FIMAGIC|FIVEHMAGIC|SITE|VEHICLE|DESC001|DESC002|LOCN|PROGRESS|CHASSIS|REGN|CUSDATE|INVDATE|STYPE|TARMAGIC|VEHMAGIC|COST|REGDATE|DELDATE|EXEC001|INVOICE|INVTOT|STATUS|LSV_TYPE
XXXXXXXX|XX696209|XX696209|XX0|73|100284|XXXXXXXXXXXXX45XXXXXXXX|X|73X|X|XXX4503321X2361427|73XX100284||24X10X2005|X|696209|0|9592X7||22X10X2005|XXXX73|500228|10841X24|X|XX
XXXXXXXX|XX1454353|XX1454353|XX959136|73|100212|XXXXXXXXXXXXX45XXXXXXXXXXXXXXXXXX|XXXXXXXXXXXXX45XXXXXXXXXXXXXXXXXX|73X|X|XXX4503321X2096859|73XX100212||08X09X2005|X|1454353|959136|0|||XXXX73|500205|0|X|XX
XXXXXXXX|XX607020|XX607020|XX0|73|100097|XXXXXXXXXXXX50XXXXXXXXXXXXXX|X|73X|X|XXX4540001X0540628|8232XX33||17X02X2005|X|607020|0|10868X34|||XXXX73|500025|11750|X|XX
XXXXXXXX|XX1796002|XX1796002|XX0|73|100091|XXXXXXXXXXXX70XXXXXXXXXXX|X|73X|X|XXX4540011X072541X|73XX100091||21X01X2005|X|1796002|0|12457X44||19X01X2005|XXXX73|500010|13616X9|X|XX
XXXXXXXX|XX728637|XX728637|XX0|73|100046|XXXXXXXXXXXXX55XXXXXXXXXX|X|73X|X|XXX4503331X1326935|4059XX33||25X11X2005|X|728637|0|14425X76|22X02X2005|24X11X2005|XXXX73|500244|17500|X|XX
XXXXXXXX|XX555718|XX555718|XX0|73|100020|XXXXXXXXXXXXXXX45XXX|X|73X|X|XXX4524321X0392633|73XX100020||01X08X2005|X|555718|0|12897||29X07X2005|XXXX73|500173|17446X39|X|XX
XXXXXXXX|XX589182|XX589182|XX0|73|100270|XXXXXXXX1X1XXXXXXXX|X|73X|X|XXX4540301X0461656|73XX100270||19X09X2005|X|589182|0|13112X6||16X09X2005|XXXX73|500201|14998|X|XX
XXXXXXXX|XX1796399|XX1796399|XX0|73|100362|XXXXXXXXXXXXX45XXXXXXXXXXXXXX|XXXXXXXXXXXXX45XXXXXXXXXXXXXX|73X|X|XXX4503321X2775489|73XX100362||22X05X2006|X|1796399|0|10783X11||17X05X2006|XXXX73|500087|11976X24|X|XX
XXXXXXXX|XX1796399|XX1796399|XX0|73|100337|XXXXXXXXXXXXX45XXXXXXXXXXXXXX|XXXXXXXXXXXXX45XXXXXXXXXXXXXX|73X|X|XXX4503321X2654339|73XX100337||22X05X2006|X|1796399|0|10672X11||17X05X2006|XXXX73|500086|11976X24|X|XX
XXXXXXXX|XX340211|XX340211|XX0|73|100383|XXXXXXXX50XXXXXXXXXXX|X|73X|X|XXX4540001X0839774|2724XX33||05X06X2006|X|340211|0|13321|23X05X2006|02X06X2006|XXXX73|500099|10999X99|X|XX
I am executing the code with Spark 2.2.0 on HDP 2.6.4.


Does anyone have a workaround, or an idea of what is going on?

Consider adding

.option("ignoreTrailingWhiteSpace", true)

I faced the same problem and it solved it for me.
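Under that suggestion, the multiLine reader would become (same options as in the question, with the extra flag added; the truncated path is the questioner's):

```scala
val df_fixed = spark.read.format("csv")
  .option("header", "true")
  .option("delimiter", "|")
  .option("multiLine", "true")
  // strips the trailing '\r' left over from the "\r\n" line endings
  .option("ignoreTrailingWhiteSpace", true)
  .load("/.../20180423_LSV.csv")
```

Note this addresses the \r glued onto LSV_TYPE; if a BOM also shows up on the first column name, that may need separate handling.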

I would suggest adding a sample of the content so that others can try it out.