Null values of date type in a DataFrame are not stored in Cassandra

Tags: date, apache-spark, dataframe, cassandra

I am working with Apache Spark 1.6.0. I have a DataFrame with 280 columns, some of which are of timestamp type. Some values in the timestamp fields are null. When I try to write this DataFrame to Cassandra, I get an IllegalArgumentException.

The column looks like this:

+-------------------------+
|                LoginDate|
+-------------------------+
|                     null|
|     2014-06-25T12:27:...|
|     2014-06-25T12:27:...|
|                     null|
|     2014-06-25T12:27:...|
|     2014-06-25T12:27:...|
|                     null|
|                     null|
|     2014-06-25T12:27:...|
|     2014-06-25T12:27:...|
+-------------------------+
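For context, the write itself is an ordinary DataFrame save to Cassandra. A minimal sketch of what it looks like (the keyspace and table names here are placeholders, not from the original code):

    import org.apache.spark.sql.DataFrame

    // Minimal sketch of the failing write path, assuming the DataFrame writer
    // API of spark-cassandra-connector 1.6; "my_keyspace" and "my_table" are
    // placeholder names.
    def saveToCassandra(df: DataFrame): Unit = {
      df.write
        .format("org.apache.spark.sql.cassandra")
        .options(Map("keyspace" -> "my_keyspace", "table" -> "my_table"))
        .mode("append")
        .save()
    }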
When I try to save the entire DataFrame to Cassandra, the error is:

05:39:22 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 106.0 (TID 5136,): java.lang.IllegalArgumentException: Invalid date: 
    at com.datastax.spark.connector.types.TimestampParser$.parse(TimestampParser.scala:50)
    at com.datastax.spark.connector.types.TypeConverter$DateConverter$$anonfun$convertPF$13.applyOrElse(TypeConverter.scala:323)
    at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
    at com.datastax.spark.connector.types.TypeConverter$DateConverter$.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:313)
    at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
    at com.datastax.spark.connector.types.TypeConverter$DateConverter$.convert(TypeConverter.scala:313)
    at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter$$anonfun$convertPF$31.applyOrElse(TypeConverter.scala:812)
    at com.datastax.spark.connector.types.TypeConverter$class.convert(TypeConverter.scala:43)
    at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.com$datastax$spark$connector$types$NullableTypeConverter$$super$convert(TypeConverter.scala:795)
    at com.datastax.spark.connector.types.NullableTypeConverter$class.convert(TypeConverter.scala:56)
    at com.datastax.spark.connector.types.TypeConverter$OptionToNullConverter.convert(TypeConverter.scala:795)
    at com.datastax.spark.connector.writer.SqlRowWriter$$anonfun$readColumnValues$1.apply$mcVI$sp(SqlRowWriter.scala:26)
    at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
    at com.datastax.spark.connector.writer.SqlRowWriter.readColumnValues(SqlRowWriter.scala:24)
    at com.datastax.spark.connector.writer.SqlRowWriter.readColumnValues(SqlRowWriter.scala:12)
    at com.datastax.spark.connector.writer.BoundStatementBuilder.bind(BoundStatementBuilder.scala:100)
    at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:106)
    at com.datastax.spark.connector.writer.GroupingBatchBuilder.next(GroupingBatchBuilder.scala:31)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at com.datastax.spark.connector.writer.GroupingBatchBuilder.foreach(GroupingBatchBuilder.scala:31)
    at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:157)
    at com.datastax.spark.connector.writer.TableWriter$$anonfun$write$1.apply(TableWriter.scala:134)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:110)
    at com.datastax.spark.connector.cql.CassandraConnector$$anonfun$withSessionDo$1.apply(CassandraConnector.scala:109)
    at com.datastax.spark.connector.cql.CassandraConnector.closeResourceAfterUse(CassandraConnector.scala:139)
    at com.datastax.spark.connector.cql.CassandraConnector.withSessionDo(CassandraConnector.scala:109)
    at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:134)
    at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
    at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:37)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
The corresponding field in Cassandra is of type timestamp.


Can anyone help resolve this issue?

Add the following parameter to the Spark-Cassandra connection settings:

spark.cassandra.output.ignoreNulls=true


It will ignore null values in the input, which also avoids creating the corresponding tombstones for those columns in Cassandra.
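Concretely, a sketch of where the setting can go, again with placeholder keyspace/table names (setting it on the SparkConf or via spark-submit --conf is the standard route; whether it is also honored as a per-write option depends on the connector version):

    import org.apache.spark.SparkConf

    // Option 1: set it globally on the Spark configuration
    // (equivalently: spark-submit --conf spark.cassandra.output.ignoreNulls=true)
    val conf = new SparkConf()
      .set("spark.cassandra.output.ignoreNulls", "true")

    // Option 2: pass it per write, if your connector version reads
    // spark.cassandra.* keys from DataFrame writer options
    df.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map(
        "keyspace" -> "my_keyspace", // placeholder
        "table"    -> "my_table",    // placeholder
        "spark.cassandra.output.ignoreNulls" -> "true"))
      .mode("append")
      .save()

With ignoreNulls enabled, null columns are left unbound in the insert rather than written, so no tombstones are created and any existing values in those columns are not overwritten.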

Nephilim: Is any further clarification needed on the answer provided? If the answer resolved your issue, please accept it (tick mark).