Hadoop Spark History Server fails to start

Tags: hadoop, apache-spark, yarn, cloudera-cdh, hortonworks-data-platform

After restarting the Spark HistoryServer it fails to come back up. We are running CDH 5.3.1 with Spark 1.2. I checked the Spark HistoryServer log and found the following messages:

 2015-05-21 11:38:03,790 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files. 
2015-05-21 11:38:52,319 INFO org.apache.spark.deploy.history.HistoryServer: Registered signal handlers for [TERM, HUP, INT] 
2015-05-21 11:38:52,328 WARN org.apache.spark.deploy.history.HistoryServerArguments: Setting log directory through the command line is deprecated as of Spark 1.1.0. Please set this through spark.history.fs.logDirectory instead. 
2015-05-21 11:38:52,461 INFO org.apache.spark.SecurityManager: Changing view acls to: spark 
2015-05-21 11:38:52,462 INFO org.apache.spark.SecurityManager: Changing modify acls to: spark 
2015-05-21 11:38:52,463 INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark) 
2015-05-21 11:41:24,893 ERROR org.apache.spark.deploy.history.HistoryServer: RECEIVED SIGNAL 15: SIGTERM 
2015-05-21 11:41:33,439 INFO org.apache.spark.deploy.history.HistoryServer: Registered signal handlers for [TERM, HUP, INT] 
2015-05-21 11:41:33,447 WARN org.apache.spark.deploy.history.HistoryServerArguments: Setting log directory through the command line is deprecated as of Spark 1.1.0. Please set this through spark.history.fs.logDirectory instead. 
2015-05-21 11:41:33,578 INFO org.apache.spark.SecurityManager: Changing view acls to: spark 
2015-05-21 11:41:33,579 INFO org.apache.spark.SecurityManager: Changing modify acls to: spark 
2015-05-21 11:41:33,579 INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark); users with modify permissions: Set(spark) 
2015-05-21 11:44:07,147 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader. 
java.io.EOFException: Premature EOF: no length prefix available 
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109) 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844) 
    at java.io.DataInputStream.read(DataInputStream.java:149) 
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) 
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
    at java.io.InputStreamReader.read(InputStreamReader.java:184) 
    at java.io.BufferedReader.fill(BufferedReader.java:154) 
    at java.io.BufferedReader.readLine(BufferedReader.java:317) 
    at java.io.BufferedReader.readLine(BufferedReader.java:382) 
    at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:67) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:69) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:55) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:55) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:175) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251) 
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172) 
    at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108) 
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184) 
    at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala) 
2015-05-21 11:44:07,151 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.1.1.253:50010 for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available 
java.io.EOFException: Premature EOF: no length prefix available 
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109) 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844) 
    at java.io.DataInputStream.read(DataInputStream.java:149) 
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) 
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
    at java.io.InputStreamReader.read(InputStreamReader.java:184) 
    at java.io.BufferedReader.fill(BufferedReader.java:154) 
    at java.io.BufferedReader.readLine(BufferedReader.java:317) 
    at java.io.BufferedReader.readLine(BufferedReader.java:382) 
    at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:67) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:69) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:55) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:55) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:175) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251) 
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172) 
    at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108) 
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184) 
    at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala) 
2015-05-21 11:44:07,161 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /10.1.1.190:50010 for BP-1877157801-10.1.1.42-1366756660926:blk_1104141398_1099642456200 
2015-05-21 11:44:19,946 WARN org.apache.hadoop.hdfs.BlockReaderFactory: I/O error constructing remote block reader. 
java.io.EOFException: Premature EOF: no length prefix available 
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109) 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844) 
    at java.io.DataInputStream.read(DataInputStream.java:149) 
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) 
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
    at java.io.InputStreamReader.read(InputStreamReader.java:184) 
    at java.io.BufferedReader.fill(BufferedReader.java:154) 
    at java.io.BufferedReader.readLine(BufferedReader.java:317) 
    at java.io.BufferedReader.readLine(BufferedReader.java:382) 
    at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:67) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:69) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:55) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:55) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:175) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251) 
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172) 
    at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108) 
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184) 
    at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala) 
2015-05-21 11:44:19,947 WARN org.apache.hadoop.hdfs.DFSClient: Failed to connect to /10.1.1.253:50010 for block, add to deadNodes and continue. java.io.EOFException: Premature EOF: no length prefix available 
java.io.EOFException: Premature EOF: no length prefix available 
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2109) 
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:785) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:663) 
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:327) 
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:574) 
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:797) 
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:844) 
    at java.io.DataInputStream.read(DataInputStream.java:149) 
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:273) 
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334) 
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) 
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) 
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) 
    at java.io.InputStreamReader.read(InputStreamReader.java:184) 
    at java.io.BufferedReader.fill(BufferedReader.java:154) 
    at java.io.BufferedReader.readLine(BufferedReader.java:317) 
    at java.io.BufferedReader.readLine(BufferedReader.java:382) 
    at scala.io.BufferedSource$BufferedLineIterator.hasNext(BufferedSource.scala:67) 
    at scala.collection.Iterator$class.foreach(Iterator.scala:727) 
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:69) 
    at org.apache.spark.scheduler.ReplayListenerBus$$anonfun$replay$2.apply(ReplayListenerBus.scala:55) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at org.apache.spark.scheduler.ReplayListenerBus.replay(ReplayListenerBus.scala:55) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:175) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251) 
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172) 
    at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108) 
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184) 
    at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala) 
2015-05-21 11:44:19,950 INFO org.apache.hadoop.hdfs.DFSClient: Successfully connected to /10.1.1.35:50010 for BP-1877157801-10.1.1.42-1366756660926:blk_1104192564_1099642507371 
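As an aside, the startup log keeps repeating the deprecation warning about passing the log directory on the command line. If I understand the warning correctly, in Spark 1.x the directory can instead be set through spark.history.fs.logDirectory, e.g. via SPARK_HISTORY_OPTS in spark-env.sh. A minimal sketch, assuming the HDFS path that appears in the errors further down (under CDH this would normally be managed through Cloudera Manager rather than edited by hand):

    # Sketch: set the event log directory through configuration instead of
    # a command-line argument (path assumed from the errors below)
    export SPARK_HISTORY_OPTS="$SPARK_HISTORY_OPTS \
      -Dspark.history.fs.logDirectory=hdfs://my-hadoop/user/spark/applicationHistory"
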
Earlier in the same log, before the block-reader errors above, I also see:

2015-05-21 11:38:03,789 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files. 
2015-05-21 11:38:03,789 WARN org.apache.spark.scheduler.ReplayListenerBus: Log path provided contains no log files. 
2015-05-21 11:38:03,789 ERROR org.apache.spark.deploy.history.FsHistoryProvider: Exception in accessing modification time of hdfs://my-hadoop/user/spark/applicationHistory/application_1431044565372_11706 
java.io.IOException: Filesystem closed 
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:765) 
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1900) 
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1885) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$getModificationTime(FsHistoryProvider.scala:236) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:182) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251) 
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172) 
    at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108) 
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184) 
    at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala) 
2015-05-21 11:38:03,790 ERROR org.apache.spark.scheduler.EventLoggingListener: Exception in parsing logging info from directory hdfs://my-hadoop/user/spark/applicationHistory/application_1431044565372_12048 
java.io.IOException: Filesystem closed 
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:765) 
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1900) 
    at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1885) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:654) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:104) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:716) 
    at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712) 
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:712) 
    at org.apache.spark.scheduler.EventLoggingListener$.parseLoggingInfo(EventLoggingListener.scala:199) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$createReplayBus(FsHistoryProvider.scala:226) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:174) 
    at org.apache.spark.deploy.history.FsHistoryProvider$$anonfun$5.apply(FsHistoryProvider.scala:172) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251) 
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33) 
    at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34) 
    at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251) 
    at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105) 
    at org.apache.spark.deploy.history.FsHistoryProvider.org$apache$spark$deploy$history$FsHistoryProvider$$checkForLogs(FsHistoryProvider.scala:172) 
    at org.apache.spark.deploy.history.FsHistoryProvider.initialize(FsHistoryProvider.scala:108) 
    at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:91) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526) 
    at org.apache.spark.deploy.history.HistoryServer$.main(HistoryServer.scala:184) 
    at org.apache.spark.deploy.history.HistoryServer.main(HistoryServer.scala)
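
The "Log path provided contains no log files" warnings, the "Filesystem closed" errors, and the "Premature EOF" failures all occur while the HistoryServer scans hdfs://my-hadoop/user/spark/applicationHistory at startup. As a diagnostic sketch (paths taken from the log above; these commands only inspect state, they do not fix anything), this should show whether any application directories under that path are empty or half-written, and whether the blocks behind them are healthy:

    # List the application history directories the HistoryServer replays
    hdfs dfs -ls hdfs://my-hadoop/user/spark/applicationHistory

    # Inspect one of the applications named in the errors above
    hdfs dfs -ls hdfs://my-hadoop/user/spark/applicationHistory/application_1431044565372_11706

    # Reads from /10.1.1.253:50010 fail repeatedly; fsck reports whether the
    # blocks under the event log directory are healthy and where they live
    hdfs fsck /user/spark/applicationHistory -files -blocks -locations

I have not yet verified whether the empty-directory warnings and the failure to start are actually related.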