
hadoop: Protocol message was too large. May be malicious. Use CodedInputStream.setSizeLimit() to increase the size limit

Tags: hadoop, hadoop2

I am seeing this in the datanodes' logs. It probably happens because I am copying 5 million files into HDFS:

java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:332)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:310)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:288)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:190)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:507)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:738)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:874)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
at com.google.protobuf.CodedInputStream.readSInt64(CodedInputStream.java:363)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:326)
... 7 more
I am just using hadoop fs -put ... to copy the files into HDFS. Recently I started getting messages like this on the client side:

15/06/30 15:00:58 INFO hdfs.DFSClient: Could not complete /pdf nxml/file1.nxml._COPYING_ retrying...
15/06/30 15:01:05 INFO hdfs.DFSClient: Could not complete /pdf nxml/2014 full/file2.nxml._COPYING_ retrying...

I get a message like the one above about 3 times per minute, but on the datanodes the exception occurs much more frequently.

How can I fix this?

----- EDIT ----
I had to restart Hadoop, but now it does not start up properly; every datanode's log file contains the following:

2015-07-01 06:20:35,748 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Unsuccessfully sent block report 0x2ac82e1cf6e64,  containing 1 storage report(s), of which we sent 0. The reports had 6342936 total blocks and used 0 RPC(s). This took 542 msec to generate and 240 msecs for RPC and NN processing. Got back no commands.
2015-07-01 06:20:35,748 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in BPOfferService for Block pool BP-1043486900-10.0.1.42-1434126972501 (Datanode Uuid d5dcf9a0-c82d-49d8-8162-af5910c3e3fe) service to cruncher02/10.0.1.42:8020
java.lang.IllegalStateException: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:332)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:310)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder.getBlockListAsLongs(BlockListAsLongs.java:288)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.blockReport(DatanodeProtocolClientSideTranslatorPB.java:190)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.blockReport(BPServiceActor.java:507)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:738)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:874)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
at com.google.protobuf.CodedInputStream.readRawByte(CodedInputStream.java:769)
at com.google.protobuf.CodedInputStream.readRawVarint64(CodedInputStream.java:462)
at com.google.protobuf.CodedInputStream.readSInt64(CodedInputStream.java:363)
at org.apache.hadoop.hdfs.protocol.BlockListAsLongs$BufferDecoder$1.next(BlockListAsLongs.java:326)
... 7 more

The answer to this question was provided in the comments:

Did you "Use CodedInputStream.setSizeLimit() to increase the size limit"? I do not know exactly what that involves, but it sounds like it would solve the problem. If you try it, please let us know what happened.

Well, this is the Hadoop log; I am not running any code of my own. It even happens while the Hadoop cluster is starting up.

My Hadoop 2.7.0 cluster did not start. I had to recompile protobuf-2.5.0, changing com.google.protobuf.CodedInputStream#DEFAULT_SIZE_LIMIT. I set DEFAULT_SIZE_LIMIT to Integer.MAX_VALUE and it works now. You can also download an already patched protobuf-java-2.5.0.jar.

Try the following 3 steps; it should work. Worked like a champ for me.

  • Change DEFAULT_SIZE_LIMIT in the class CodedInputStream of protobuf-java-2.5.0.jar. The stock declaration is

    private static final int DEFAULT_SIZE_LIMIT = 64 << 20;  // 64 MB

    and, as noted above, setting it to Integer.MAX_VALUE is reported to work.

  • Recompile protobuf-2.5.0 so that it produces the patched protobuf-java-2.5.0.jar.

  • Replace protobuf-java-2.5.0.jar at every location where Hadoop ships it:
$HADOOP_HOME/share/hadoop/common/lib/protobuf-java-2.5.0.jar
$HADOOP_HOME/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar
$HADOOP_HOME/share/hadoop/httpfs/tomcat/webapps/webhdfs/WEB-INF/lib/protobuf-java-2.5.0.jar
$HADOOP_HOME/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/protobuf-java-2.5.0.jar
$HADOOP_HOME/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar
$HADOOP_HOME/share/hadoop/tools/lib/protobuf-java-2.5.0.jar
$HADOOP_HOME/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar
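
As an aside, the remedy named in the exception text itself, CodedInputStream.setSizeLimit(), only helps when you control the code that constructs the CodedInputStream; here the oversized message is the block report parsed inside the DataNode, which is why the fix above patches the library default instead. Below is a minimal sketch of what that call looks like with protobuf-java 2.5.0; the class name and the file name are placeholders for illustration, not something from the original post.

import com.google.protobuf.CodedInputStream;

import java.io.FileInputStream;
import java.io.IOException;

// Minimal sketch: raising the per-stream protobuf size limit before parsing.
// This mirrors what the exception message suggests; it is NOT the fix applied
// above, because the DataNode code that builds the stream is not under the
// user's control.
public class RaiseSizeLimit {
    public static void main(String[] args) throws IOException {
        // "big-message.bin" is a placeholder for any serialized protobuf message.
        try (FileInputStream in = new FileInputStream("big-message.bin")) {
            CodedInputStream cis = CodedInputStream.newInstance(in);
            // The default limit in protobuf-java 2.5.0 is 64 MB (DEFAULT_SIZE_LIMIT = 64 << 20).
            // Raise it before reading, otherwise parsing a larger message throws
            // InvalidProtocolBufferException ("Protocol message was too large. May be malicious.").
            cis.setSizeLimit(Integer.MAX_VALUE);
            // ... hand cis to the generated parser, e.g. MyMessage.parseFrom(cis) ...
        }
    }
}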