
Unable to export a table from HBase


I am unable to export a table from HBase to HDFS. The error trace is below; it is quite long. Is there any other way to export it?

I am exporting with the command below. I increased the RPC timeout, but the job still fails.

sudo -u hdfs hbase -Dhbase.rpc.timeout=1000000 org.apache.hadoop.hbase.mapreduce.Export My_Table /hdfs_path

15/05/05 08:50:27 INFO mapreduce.Job:  map 0% reduce 0%
15/05/05 08:50:55 INFO mapreduce.Job: Task Id : attempt_1424936551928_0234_m_000001_0, Status : FAILED
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Failed after retry of OutOfOrderScannerNextException: was there a rpc timeout?
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:410)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:230)
        at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:138)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:553)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 229 number_of_rows: 100 close_scanner: false next_call_seq: 0
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
        at java.lang.Thread.run(Thread.java:745)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:304)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:204)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:59)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:114)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:90)
        at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:355)
        ... 13 more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException): org.apache.hadoop.hbase.exceptions.OutOfOrderScannerNextException: Expected nextCallSeq: 1 But the nextCallSeq got from client: 0; request=scanner_id: 229 number_of_rows: 100 close_scanner: false next_call_seq: 0
        at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3198)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29925)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2031)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:116)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:96)
        at java.lang.Thread.run(Thread.java:745)

        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1457)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1661)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1719)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:30328)
        at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:174)
        ... 17 more
I suggest looking at the code and doing the export in stages. If the table is very large, here are a few hints you can try; they come from reading the code of the Export command. You can tune the scanner cache size and apply scan filters.

See the HBase export usage below:

  • before version 1.5
  • from version 2.0 onward
Also see the usage command: it lists more options.
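
For instance, invoking the Export class with too few arguments should make it print the usage text reproduced further below (a hedged illustration; the exact wording of the help output varies between HBase versions):

sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Export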

In my experience, the
cachesize
(not the batch size, which is the number of columns fetched at a time) and/or
a custom filter criteria should work for you. For example: if the keys start with 0, where 0 is a region name, first export those rows by specifying a filter, then the next region's data, and so on. Below is the getExportFilter snippet, from which you can see how it works; an example invocation is sketched after the code.

  private static Filter getExportFilter(String[] args) {
    Filter exportFilter = null;
    String filterCriteria = (args.length > 5) ? args[5]: null;
    if (filterCriteria == null) return null;
    if (filterCriteria.startsWith("^")) {
      String regexPattern = filterCriteria.substring(1, filterCriteria.length());
      exportFilter = new RowFilter(CompareOp.EQUAL, new RegexStringComparator(regexPattern));
    } else {
      exportFilter = new PrefixFilter(Bytes.toBytesBinary(filterCriteria));
    }
    return exportFilter;
  }

  /*
   * @param errorMsg Error message.  Can be null.
   */
  private static void usage(final String errorMsg) {
    if (errorMsg != null && errorMsg.length() > 0) {
      System.err.println("ERROR: " + errorMsg);
    }
    System.err.println("Usage: Export [-D <property=value>]* <tablename> <outputdir> [<versions> " +
      "[<starttime> [<endtime>]] [^[regex pattern] or [Prefix] to filter]]\n");
    System.err.println("  Note: -D properties will be applied to the conf used. ");
    System.err.println("  For example: ");
    System.err.println("   -D mapreduce.output.fileoutputformat.compress=true");
    System.err.println("   -D mapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec");
    System.err.println("   -D mapreduce.output.fileoutputformat.compress.type=BLOCK");
    System.err.println("  Additionally, the following SCAN properties can be specified");
    System.err.println("  to control/limit what is exported..");
    System.err.println("   -D " + TableInputFormat.SCAN_COLUMN_FAMILY + "=<familyName>");
    System.err.println("   -D " + RAW_SCAN + "=true");
    System.err.println("   -D " + TableInputFormat.SCAN_ROW_START + "=<ROWSTART>");
    System.err.println("   -D " + TableInputFormat.SCAN_ROW_STOP + "=<ROWSTOP>");
    System.err.println("   -D " + JOB_NAME_CONF_KEY
        + "=jobName - use the specified mapreduce job name for the export");
    System.err.println("For performance consider the following properties:\n"
        + "   -Dhbase.client.scanner.caching=100\n"
        + "   -Dmapreduce.map.speculative=false\n"
        + "   -Dmapreduce.reduce.speculative=false");
    System.err.println("For tables with very wide rows consider setting the batch size as below:\n"
        + "   -D" + EXPORT_BATCHING + "=10");
  }
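
Putting this together, a staged export from the shell could look roughly like the sketch below. This is only an illustration: My_Table and /hdfs_path come from the question, the key prefixes 0 and 1 and the positional values 1 (versions), 0 (starttime) and 9999999999999 (endtime) are placeholders needed to reach the filter argument, and hbase.export.scanner.batch is what I believe the EXPORT_BATCHING constant resolves to; check the usage output of your own HBase version before relying on any of it.

# Sketch: export only the rows whose keys start with "0", with a modest scanner
# cache so each scanner.next() round-trip stays well under the RPC timeout.
sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Export \
  -D hbase.rpc.timeout=1000000 \
  -D hbase.client.scanner.caching=100 \
  -D hbase.export.scanner.batch=10 \
  -D mapreduce.map.speculative=false \
  -D mapreduce.reduce.speculative=false \
  My_Table /hdfs_path/prefix_0 1 0 9999999999999 0

# Then repeat for the next key prefix (hypothetical prefix "1"), writing to a separate directory.
sudo -u hdfs hbase org.apache.hadoop.hbase.mapreduce.Export \
  -D hbase.client.scanner.caching=100 \
  My_Table /hdfs_path/prefix_1 1 0 9999999999999 1

If the last argument starts with ^, getExportFilter builds a RowFilter with a RegexStringComparator instead of a PrefixFilter, so a regex such as ^0.* can be used in the same position.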

This looks more like a network problem, right? I am having the same issue. Do you know how to reduce the load of the export batches so that they don't time out? I have tried setting mapreduce.job.maps, but the HBase export seems to ignore it.