Apache Spark: Spark Structured Streaming with S3 fails

I am running a Structured Streaming job on a Spark 2.2 cluster that runs on AWS, using an S3 bucket in eu-central-1 for checkpointing.
Some commit operations on the workers seem to fail at random with the following error:

17/10/04 13:20:34 WARN TaskSetManager: Lost task 62.0 in stage 19.0 (TID 1946, 0.0.0.0, executor 0): java.lang.IllegalStateException: Error committing version 1 into HDFSStateStore[id=(op=0,part=62),dir=s3a://bucket/job/query/state/0/62]
at org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider$HDFSBackedStateStore.commit(HDFSBackedStateStoreProvider.scala:198)
at org.apache.spark.sql.execution.streaming.StateStoreSaveExec$$anonfun$doExecute$3$$anon$1.hasNext(statefulOperators.scala:230)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:99)
at org.apache.spark.sql.execution.aggregate.HashAggregateExec$$anonfun$doExecute$1$$anonfun$4.apply(HashAggregateExec.scala:97)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:797)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: XXXXXXXXXXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: abcdef==
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
... 3 more
The job is submitted via spark-submit with the following options to allow access to the eu-central-1 bucket:

--packages org.apache.hadoop:hadoop-aws:2.7.4
--conf spark.hadoop.fs.s3a.endpoint=s3.eu-central-1.amazonaws.com
--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
--conf spark.executor.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true
--conf spark.driver.extraJavaOptions=-Dcom.amazonaws.services.s3.enableV4=true
--conf spark.hadoop.fs.s3a.access.key=xxxxx
--conf spark.hadoop.fs.s3a.secret.key=xxxxx
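
For reference, the same S3A settings can also be applied from code; a minimal sketch (assumed SparkSession-based setup; the app name and key values are placeholders, not from the original post):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("structured-streaming-s3a")  // placeholder app name
  .config("spark.hadoop.fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")
  .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
  .config("spark.hadoop.fs.s3a.access.key", "xxxxx")  // placeholder key
  .config("spark.hadoop.fs.s3a.secret.key", "xxxxx")  // placeholder secret
  .getOrCreate()

The -Dcom.amazonaws.services.s3.enableV4=true flag is still best passed via spark.driver.extraJavaOptions and spark.executor.extraJavaOptions, since it has to be set on every JVM before the S3 client is created.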
I have already tried generating an access key without special characters and using instance policies; both have the same effect.

Your log shows:

Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: XXXXXXXXXXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: abcdef==

That error means the credentials are not correct:

val credentials = new com.amazonaws.auth.BasicAWSCredentials(
  "ACCESS_KEY_ID",
  "SECRET_ACCESS_KEY"
)
For debugging purposes, check the following:

1) Check that the access key and secret key are valid (a quick standalone check is sketched after this list)

2) Check that the bucket name is correct

3) Turn on logging in the CLI and compare it with the SDK

4) Enable SDK logging as described here:

You will need to provide the log4j jar and a sample log4j.properties file
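
To check points 1) and 2) outside of Spark, here is a minimal standalone sketch (not from the original answer) using the AWS SDK v1 classes that hadoop-aws 2.7.x already pulls in; the bucket name and keys below are placeholders:

import scala.collection.JavaConverters._
import com.amazonaws.auth.BasicAWSCredentials
import com.amazonaws.services.s3.AmazonS3Client

// Frankfurt (eu-central-1) only accepts V4 signatures, so enable them first.
System.setProperty("com.amazonaws.services.s3.enableV4", "true")

// Placeholder credentials and bucket name.
val credentials = new BasicAWSCredentials("ACCESS_KEY_ID", "SECRET_ACCESS_KEY")
val s3 = new AmazonS3Client(credentials)
s3.setEndpoint("s3.eu-central-1.amazonaws.com")

// Fails with 403 SignatureDoesNotMatch if the key pair is wrong,
// and with NoSuchBucket if the bucket name is wrong.
val listing = s3.listObjects("bucket")
listing.getObjectSummaries.asScala.foreach(summary => println(summary.getKey))

A 403 SignatureDoesNotMatch from this snippet reproduces the problem without Spark; a successful listing means the keys and bucket name themselves are fine.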


This comes up a lot for the Hadoop team.


But as Yuval says: committing straight to S3 is too dangerous, and it only gets slower the more data you create; the risk of listing inconsistency means that data is sometimes lost, at least with the S3A connector in Apache Hadoop 2.6-2.8.

Don't use S3 for checkpointing. Since S3 only provides eventual consistency on read-after-write, there is no guarantee that when the HDFSBackedStateStore lists files or tries to rename a file, the file will actually exist in the S3 bucket, even if it was written just before.

What else can I use then? When using HDFS, the change logs eventually grow so large that HDFS can no longer be started.

Which change log are we talking about, the HDFS edit log on the namenode? Make sure you have an active secondary namenode; it should compact and consolidate the edit log files so that starting the namenode does not take long.

Yes, I have read quite a lot about this, but the problem does not always occur, so my guess was that a folder or file had not shown up yet because of eventual consistency.

No, that is not it: that would surface as a FileNotFoundException. This is authentication, which is hard to track down, especially since, for security reasons, the code does not dare log anything useful such as which specific secret was used. If it only happens against Frankfurt, it may be a V4 API issue.
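
For completeness, a minimal sketch of the workaround discussed above: keep the streaming checkpoint on HDFS (or another consistent filesystem) instead of s3a://. The rate source, console sink and paths are stand-ins, not from the original thread:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.Trigger

val spark = SparkSession.builder().appName("checkpoint-on-hdfs").getOrCreate()
import spark.implicits._

// Stand-in stateful aggregation; the real streaming query goes here.
val counts = spark.readStream
  .format("rate")
  .load()
  .groupBy($"value" % 10)
  .count()

val query = counts.writeStream
  .outputMode("update")
  .format("console")
  // Keep the state store / checkpoint on a consistent filesystem, not S3.
  .option("checkpointLocation", "hdfs:///checkpoints/my-query")
  .trigger(Trigger.ProcessingTime("30 seconds"))
  .start()

query.awaitTermination()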