Hadoop: how to set the column family size of an HBase table?

I am trying to import data from a CSV file into an HBase table, but during the import I run into the exception shown below:

Error: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
        at com.google.protobuf.InvalidProtocolBufferException.sizeLimitExceeded(InvalidProtocolBufferException.java:110)
        at com.google.protobuf.CodedInputStream.refillBuffer(CodedInputStream.java:755)
        at com.google.protobuf.CodedInputStream.isAtEnd(CodedInputStream.java:701)
        at com.google.protobuf.CodedInputStream.readTag(CodedInputStream.java:99)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue$QualifierValue.<init>(ClientProtos.java:8599)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue$QualifierValue.<init>(ClientProtos.java:8563)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue$QualifierValue$1.parsePartialFrom(ClientProtos.java:8672)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue$QualifierValue$1.parsePartialFrom(ClientProtos.java:8667)
        at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue.<init>(ClientProtos.java:8462)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue.<init>(ClientProtos.java:8404)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:8498)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$ColumnValue$1.parsePartialFrom(ClientProtos.java:8493)
        at com.google.protobuf.CodedInputStream.readMessage(CodedInputStream.java:309)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto.<init>(ClientProtos.java:7959)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto.<init>(ClientProtos.java:7890)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:8045)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto$1.parsePartialFrom(ClientProtos.java:8040)
        at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:200)
        at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
        at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
        at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
        at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutationProto.parseDelimitedFrom(ClientProtos.java:10468)
        at org.apache.hadoop.hbase.mapreduce.MutationSerialization$MutationDeserializer.deserialize(MutationSerialization.java:60)
        at org.apache.hadoop.hbase.mapreduce.MutationSerialization$MutationDeserializer.deserialize(MutationSerialization.java:50)
        at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:146)
        at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
        at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:302)
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
        at org.apache.hadoop.mapred.Task$NewCombinerRunner.combine(Task.java:1651)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1611)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1462)
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)

I think this is related to the data length exceeding some default size limit. How can I increase the column family size of an HBase table from the terminal? Any help would be appreciated.

Change the column family block size:

alter 'my_table', {NAME => 'my_cf', BLOCKSIZE => '1048576'}

Then run

describe 'my_table'

to view the table metadata and verify that the change took effect.
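If the change took effect, the new BLOCKSIZE should appear among the column family attributes in the describe output. As a rough sketch of what to look for (output abridged; my_table and my_cf are the placeholder names from above):

{NAME => 'my_cf', BLOCKSIZE => '1048576', VERSIONS => '1', ...}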

If it has a single column family, use the command below to change the number of cell versions it keeps:

alter 'table', NAME => 'column family', VERSIONS => number
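For instance, a concrete invocation might look like the following (my_table and my_cf are placeholder names, and 5 is an arbitrary retention count):

alter 'my_table', {NAME => 'my_cf', VERSIONS => 5}
describe 'my_table'

Note that VERSIONS controls how many versions of each cell HBase keeps, not the maximum size of a single cell value, so it may not help with the size-limit exception above.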

Thanks, I can set the block size this way, but I still get the exception shown above. Are there any other sizes I need to change? Any ideas?

@mayooran It seems you are hitting the default 64 MB limit. Here is a question on the same topic which says version 1.0.2 fixed this issue.

I am using 0.98. Guess I will have to upgrade. Thank you! :)