Apache Spark: Dataproc Spark Streaming Kafka checkpoint warnings on Google Cloud Storage


When using Kafka checkpointing on Google Cloud Storage with Dataproc 1.1 (Spark 2.0.2), I get a lot of warnings like the following:

16/12/11 01:36:02 WARN HttpTransport: exception thrown while executing request
java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
at sun.security.ssl.InputRecord.read(InputRecord.java:503)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:973)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:930)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:704)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:647)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1569)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:480)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:338)
at com.google.api.client.http.javanet.NetHttpResponse.<init>(NetHttpResponse.java:37)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:94)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.listStorageObjectsAndPrefixes(GoogleCloudStorageImpl.java:1069)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.listObjectNames(GoogleCloudStorageImpl.java:1173)
at com.google.cloud.hadoop.gcsio.ForwardingGoogleCloudStorage.listObjectNames(ForwardingGoogleCloudStorage.java:182)
at com.google.cloud.hadoop.gcsio.CacheSupplementedGoogleCloudStorage.listObjectNames(CacheSupplementedGoogleCloudStorage.java:381)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getInferredItemInfo(GoogleCloudStorageFileSystem.java:1286)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getInferredItemInfos(GoogleCloudStorageFileSystem.java:1311)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfos(GoogleCloudStorageFileSystem.java:1212)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.rename(GoogleCloudStorageFileSystem.java:640)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.rename(GoogleHadoopFileSystemBase.java:1091)
at org.apache.spark.streaming.CheckpointWriter$CheckpointWriteHandler.run(Checkpoint.scala:241)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
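
For context, this is roughly the kind of setup that produces the warning. It is a minimal sketch rather than my exact job: the bucket path, brokers, topic, and group id are placeholders, and it assumes the spark-streaming-kafka-0-10 integration. Checkpointing to a gs:// path is what makes CheckpointWriter rename files on GCS, which is where the timeout above occurs:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object CheckpointedStream {
  // Placeholder checkpoint directory on GCS; CheckpointWriter periodically
  // writes and renames files under this path.
  val checkpointDir = "gs://my-bucket/spark/checkpoints"

  def createContext(): StreamingContext = {
    val ssc = new StreamingContext(
      new SparkConf().setAppName("kafka-gcs-checkpoint"), Seconds(10))
    ssc.checkpoint(checkpointDir)

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker-1:9092",            // placeholder
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example-group",                     // placeholder
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("events"), kafkaParams))
    stream.map(r => (r.key, r.value)).print()
    ssc
  }

  def main(args: Array[String]): Unit = {
    // Recover from the GCS checkpoint if one exists, otherwise start fresh.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}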
Has anyone run into the same issue?


Regards,

I recently ran into similar errors when checkpointing to Google Storage. As a temporary workaround, I switched to HDFS checkpointing inside the Dataproc cluster instead of Google Storage.
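
Concretely, the workaround amounts to pointing the checkpoint directory at the cluster's local HDFS instead of a gs:// path (the directory name below is illustrative):

// Temporary workaround sketch: checkpoint to the Dataproc cluster's
// local HDFS instead of GCS. The directory name is illustrative.
ssc.checkpoint("hdfs:///spark/checkpoints")

The trade-off, raised in the comments below, is that the checkpoint now lives and dies with the cluster.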

We are actually doing that as well. But by doing so we lose the benefit of checkpointing to Google Cloud Storage across Dataproc clusters, for example when updating a job or moving it to another cluster. I will contact support as soon as I can; I only posted here in case someone had found a solution for using Google Cloud Storage directly.

Try switching your bucket to regional. Regional buckets perform better than multi-regional ones.

Thanks for the reply. We are already on a regional bucket. I tried contacting Google support; they said the problem lies in the Google Cloud Storage client, but I am just using the client shipped with the Dataproc cluster, with no extra setup, only pointing it at a gs:// bucket.

Did you ever solve this? I am facing exactly the same issue.

I switched to a regional bucket, and since the switch I rarely see this problem.

A related warning from the same setup, when a write-ahead-log write times out:
16/12/10 18:05:23 WARN ReceivedBlockTracker: Exception thrown while writing record: BatchCleanupEvent(ArrayBuffer()) to the WriteAheadLog.
org.apache.spark.SparkException: Exception thrown in awaitResult: 
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:194)
at org.apache.spark.streaming.util.BatchedWriteAheadLog.write(BatchedWriteAheadLog.scala:83)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.writeToLog(ReceivedBlockTracker.scala:234)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.cleanupOldBatches(ReceivedBlockTracker.scala:171)
at org.apache.spark.streaming.scheduler.ReceiverTracker.cleanupOldBlocksAndBatches(ReceiverTracker.scala:226)
at org.apache.spark.streaming.scheduler.JobGenerator.clearCheckpointData(JobGenerator.scala:287)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:187)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [5000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:190)
... 9 more
16/12/10 18:05:23 WARN ReceivedBlockTracker: Failed to acknowledge batch clean up in the Write Ahead Log.
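
A side note on the second trace: the 5000 ms comes from the default of spark.streaming.driver.writeAheadLog.batchingTimeout, which bounds how long BatchedWriteAheadLog.write waits in awaitResult. If slow GCS writes are the cause, raising it may reduce the noise; this is my assumption, not something confirmed in this thread:

import org.apache.spark.SparkConf

// Assumption: allow batched write-ahead-log writes more time before
// awaitResult times out (the default is 5 seconds).
val conf = new SparkConf()
  .setAppName("kafka-gcs-checkpoint")
  .set("spark.streaming.driver.writeAheadLog.batchingTimeout", "30s")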