Can't display a CSV file in PySpark (ValueError: Some of types cannot be determined by the first 100 rows, please try again with sampling)

I get an error when displaying a CSV file through PySpark. I have attached the PySpark code I used and the CSV file.

from pyspark.sql import *
spark.conf.set("fs.azure.account.key.xxocxxxxxx", "xxxxx")
time_on_site_tablepath = "wasbs://dwpocblob@dwadfpoc.blob.core.windows.net/time_on_site.csv"
time_on_site = spark.read.format("csv").options(header='true', inferSchema='true').load(time_on_site_tablepath)
display(time_on_site.head(50))
The error is shown below:

ValueError: Some of types cannot be determined by the first 100 rows, please try again with sampling
The CSV file format is attached below:

time_on_site:pyspark.sql.dataframe.DataFrame

next_eventdate:timestamp
barcode:integer
eventdate:timestamp
sno:integer
eventaction:string
next_action:string
next_deviceid:integer
next_device:string
type_flag:string
site:string
location:string
flag_perimeter:integer
deviceid:integer
device:string
tran_text:string
flag:integer
timespent_sec:integer
gg:integer
The CSV file data is attached below:

next_eventdate,barcode,eventdate,sno,eventaction,next_action,next_deviceid,next_device,type_flag,site,location,flag_perimeter,deviceid,device,tran_text,flag,timespent_sec,gg
2018-03-16 05:23:34.000,1998296,2018-03-14 18:50:29.000,1,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,124385,0
2018-03-17 07:22:16.000,1998296,2018-03-16 18:41:09.000,3,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,45667,0
2018-03-19 07:23:55.000,1998296,2018-03-17 18:36:17.000,6,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,1,132458,1
2018-03-21 07:25:04.000,1998296,2018-03-19 18:23:26.000,8,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,133298,0
2018-03-24 07:33:38.000,1998296,2018-03-23 18:39:04.000,10,IN,OUT,2,AGATE-R02-AP-Vehicle_Exit,,NULL,NULL,1,1,AGATE-R01-AP-Vehicle_Entry,Access Granted,0,46474,0

How can I load the CSV file successfully?

There is nothing wrong with the syntax; it works fine. The problem is in the data of the CSV file: the column named type_flag contains only None (null) values, so its data type cannot be inferred.
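
The error message itself comes from PySpark's Python-side schema inference, which display() appears to fall back to when it is handed a list of Row objects from head() instead of a DataFrame. A minimal sketch of the same failure, assuming only a running SparkSession named spark (the data here is invented for illustration):

    # A column that is None in every sampled row has no inferable type.
    rdd = spark.sparkContext.parallelize([("a", None), ("b", None)])
    # With no schema given, PySpark inspects the first 100 rows and raises:
    # ValueError: Some of types cannot be determined by the first 100 rows,
    # please try again with sampling
    df = spark.createDataFrame(rdd)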

So, there are two options here:

  • You can display the data without using head(), like this (see also the sketch after the second option):
    display(time_on_site)

  • If you want to use head(), you need to replace the null values first; here they are replaced with an empty string (""):

    time_on_site = time_on_site.fillna("")
    display(time_on_site.head(50))
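
A minimal sketch of a related alternative (an addition to the two options above): if the goal is just to show the first 50 rows, limit(50) returns a DataFrame rather than a list of Rows, so display() has nothing to re-infer and the all-null type_flag column does no harm:

    # limit(50) keeps the result as a DataFrame, so no Python-side
    # type inference happens on the way into display()
    display(time_on_site.limit(50))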