Hadoop: Why do I need to keep the hbase/lib folder in HDFS?

I have a primary cluster with some data in HBase that I want to replicate. I have created a backup cluster and taken a snapshot of the table to be replicated. I am trying to export the snapshot from the source cluster to the destination cluster, but I am running into some errors. I am executing

./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mySnap -copy-to hdfs://198.58.88.11:9000/hbase
As a result of running it, I get

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/vagrant/hbase/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/vagrant/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-03-05 10:58:43,155 INFO  [main] snapshot.ExportSnapshot: Copy Snapshot Manifest
2015-03-05 10:58:43,596 INFO  [main] Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2015-03-05 10:58:43,597 INFO  [main] jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2015-03-05 10:58:43,890 INFO  [main] mapreduce.JobSubmitter: Cleaning up the staging area file:/home/vagrant/hadoop/hadoop-datastore/mapred/staging/vagrant1489762780/.staging/job_local1489762780_0001
2015-03-05 10:58:43,892 ERROR [main] snapshot.ExportSnapshot: Snapshot export failed
java.io.FileNotFoundException: File does not exist: hdfs://namenode:9000/home/vagrant/hbase/lib/hbase-client-1.0.0.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
    at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.runCopyJob(ExportSnapshot.java:775)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:934)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:1008)
    at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:1012)
So, as far as I understand, it is trying to find hbase-client-1.0.0.jar, but it is looking for it at hdfs://namenode:9000/home/vagrant/hbase/lib/hbase-client-1.0.0.jar rather than in local storage.
Any idea why this is happening?
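One likely reason, offered here only as background: scheme-less jar paths that the job submitter puts on the distributed cache are resolved against fs.defaultFS rather than the local filesystem, which is why /home/vagrant/hbase/lib/hbase-client-1.0.0.jar ends up prefixed with hdfs://namenode:9000. A quick way to confirm which filesystem that is (assuming the standard Hadoop client tools are on the PATH):

hdfs getconf -confKey fs.defaultFS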

In my case, the cause of the problem was a misconfiguration of YARN and mapred. After configuring them correctly, I was able to export the snapshot without any problems.

Make your mapred-site.xml look like this:

<configuration>
   <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
   </property>
   <property>
      <name>mapreduce.jobtracker.address</name>
      <value>cluster2.master:8021</value>
   </property>
</configuration>

cluster2.master should be changed according to your setup.
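Since the cause was described as a misconfiguration of both YARN and mapred, a minimal yarn-site.xml along the following lines may also be needed. This is only a sketch, assuming a standard Hadoop 2.x YARN setup; cluster2.master is the same placeholder as above and should match your ResourceManager host:

<configuration>
   <property>
      <name>yarn.resourcemanager.hostname</name>
      <value>cluster2.master</value>
   </property>
   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>
</configuration>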

I stumbled upon this with HBase 1.0 and HDFS 2.6.0 on Cloudera 5.

The workaround I used was… copying those JARs into HDFS. I know it is ugly, but it works, so it is better than nothing.

First:

export CLUSTER_NAME=<your_hdfs_cluster_name>
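If you are unsure what to use for the cluster name, the authority of fs.defaultFS is a reasonable value to start from (mycluster below is just a hypothetical example, not from the original answer):

hdfs getconf -confKey fs.defaultFS
# e.g. prints hdfs://mycluster, so: export CLUSTER_NAME=mycluster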
Then create the target directories and copy all the JARs with the following:

hdfs dfs -mkdir -p hdfs://$CLUSTER_NAME/usr/lib/hbase/lib/
hdfs dfs -mkdir -p hdfs://$CLUSTER_NAME/usr/lib/zookeeper
hdfs dfs -mkdir -p hdfs://$CLUSTER_NAME/usr/lib/hadoop-mapreduce
hdfs dfs -mkdir -p hdfs://$CLUSTER_NAME/usr/lib/hadoop
hdfs dfs -cp file:///usr/lib/hbase/lib/*.jar hdfs://$CLUSTER_NAME/usr/lib/hbase/lib/
hdfs dfs -cp file:///usr/lib/zookeeper/*.jar hdfs://$CLUSTER_NAME/usr/lib/zookeeper
hdfs dfs -cp file:///usr/lib/hadoop-mapreduce/*.jar hdfs://$CLUSTER_NAME/usr/lib/hadoop-mapreduce
hdfs dfs -cp file:///usr/lib/hadoop/*.jar hdfs://$CLUSTER_NAME/usr/lib/hadoop
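To verify that the copies landed where the job submitter expects them and then retry the export (the snapshot name and target URI below are the ones from the question; adjust to your environment):

hdfs dfs -ls hdfs://$CLUSTER_NAME/usr/lib/hbase/lib/ | grep hbase-client
./hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot mySnap -copy-to hdfs://198.58.88.11:9000/hbase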

Any answers? That would help me a lot. @FabioMoreira, I ran into this problem on Ubuntu 14.04; on Ubuntu 12.04 it worked fine. In my case changing the OS is not an option, but if it is in yours, consider checking that the Hadoop JAR version and the actual Hadoop version are the same. Make sure the classpath contains the path to this JAR.
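One way to check which hbase-client JAR is actually on the client classpath (hbase classpath prints the full classpath that HBase commands run with):

hbase classpath | tr ':' '\n' | grep hbase-client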