Hadoop & HBase installation and configuration issues on a single-node cluster on Windows

Tags: hadoop, hbase, apache-zookeeper

I recently started looking into NoSQL and big data and decided to pursue them further. For the past few days I have been trying to install and configure Hadoop and HBase on my Windows Server 2008 R2 64-bit machine, but unfortunately I have not succeeded; I ran into a different error at every stage of the installation. I followed the tutorials mentioned below:

For Hadoop = For HBase =

First, when I run the jps command in the /usr/local/hadoop directory, I cannot see the DataNode there; only these values show up:

$ jps
3984 NameNode
6864 Jps
5972 JobTracker
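As an aside (this helper is not from the original post, just an illustration), the jps output above can be checked programmatically to see which of the usual single-node Hadoop 1.x daemons are absent:

```python
# Hypothetical check: given `jps` output, report which expected daemons are missing.
# Daemon names assume a Hadoop 1.x single-node setup (MRv1).
EXPECTED = {"NameNode", "DataNode", "SecondaryNameNode", "JobTracker", "TaskTracker"}

def missing_daemons(jps_output):
    """Return the expected daemons that do not appear in the jps listing."""
    running = {
        parts[1]
        for line in jps_output.strip().splitlines()
        if len(parts := line.split(maxsplit=1)) == 2
    }
    return sorted(EXPECTED - running)

jps_output = """3984 NameNode
6864 Jps
5972 JobTracker"""
print(missing_daemons(jps_output))  # ['DataNode', 'SecondaryNameNode', 'TaskTracker']
```

Running it against the listing above confirms that the DataNode (and the TaskTracker, matching the error in its log below) never started.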

However, when I navigate to 127.0.0.1:50070, it works fine. But when I try to run the TestWordCount example job, it gets stuck for a long time at the point shown below and I have to restart the Cygwin terminal:

11/06/13 13:43:01 INFO mapred.JobClient: Running job: job_201005081732_0001
11/06/13 13:43:02 INFO mapred.JobClient:  map 0% reduce 0%

For the moment I just ignored that and moved on to installing and configuring HBase on top of Hadoop. The installation went fine, but now when I run different commands in the HBase shell I get different errors. For example, if I run the 'list' command I get:

org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times

And if I run the scan 'test' command I get:

org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for test,,99999999999999 after 7 tries

I really do not know what to do any more; I have been searching for days but still cannot find the exact solution to my errors.

I would really appreciate the help of you experts in getting Hadoop and HBase configured successfully.

Here is my DataNode log:

2013-06-11 14:21:16,703 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_3811235227329042813_1246 src: /127.0.0.1:51511 dest: /127.0.0.1:50010
2013-06-11 14:21:16,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51511, dest: /127.0.0.1:50010, bytes: 142452, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_3811235227329042813_1246, duration: 8188439
2013-06-11 14:21:16,721 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_3811235227329042813_1246 terminating
2013-06-11 14:21:17,024 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-7864325777801075696_1247 src: /127.0.0.1:51512 dest: /127.0.0.1:50010
2013-06-11 14:21:17,034 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51512, dest: /127.0.0.1:50010, bytes: 368, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_-7864325777801075696_1247, duration: 1775491
2013-06-11 14:21:17,035 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-7864325777801075696_1247 terminating
2013-06-11 14:21:17,135 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_8363548489446884759_1248 src: /127.0.0.1:51513 dest: /127.0.0.1:50010
2013-06-11 14:21:17,145 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51513, dest: /127.0.0.1:50010, bytes: 77, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_8363548489446884759_1248, duration: 1461072
2013-06-11 14:21:17,146 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_8363548489446884759_1248 terminating
2013-06-11 14:21:17,481 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_2254833662532666780_1249 src: /127.0.0.1:51514 dest: /127.0.0.1:50010
2013-06-11 14:21:17,493 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51514, dest: /127.0.0.1:50010, bytes: 20596, op: HDFS_WRITE, cliID: DFSClient_1741700406, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_2254833662532666780_1249, duration: 2206535
2013-06-11 14:21:17,494 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_2254833662532666780_1249 terminating
2013-06-11 14:21:17,861 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51516, bytes: 20760, op: HDFS_READ, cliID: DFSClient_-1869746926, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_2254833662532666780_1249, duration: 3906454
2013-06-11 14:21:18,234 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving block blk_-2949992568769351385_1250 src: /127.0.0.1:51518 dest: /127.0.0.1:50010
2013-06-11 14:21:18,244 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:51518, dest: /127.0.0.1:50010, bytes: 106, op: HDFS_WRITE, cliID: DFSClient_-163790033, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_-2949992568769351385_1250, duration: 1404625
2013-06-11 14:21:18,245 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder 0 for block blk_-2949992568769351385_1250 terminating
2013-06-11 14:21:18,290 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /127.0.0.1:50010, dest: /127.0.0.1:51519, bytes: 81, op: HDFS_READ, cliID: DFSClient_-1869746926, offset: 0, srvID: DS-2012389790-192.168.168.63-50010-1370448134624, blockid: blk_8363548489446884759_1248, duration: 694149
2013-06-11 14:22:00,557 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_3811235227329042813_1246

TaskTracker log:

2013-06-11 12:33:27,223 INFO org.apache.hadoop.mapred.TaskTracker: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting TaskTracker
STARTUP_MSG:   host = WIN-UHHLG0L1912/192.168.168.63
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
2013-06-11 12:33:27,676 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-06-11 12:33:27,812 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2013-06-11 12:33:27,815 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-06-11 12:33:27,815 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: TaskTracker metrics system started
2013-06-11 12:33:28,402 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2013-06-11 12:33:28,411 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-06-11 12:33:28,697 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-06-11 12:33:28,852 INFO org.apache.hadoop.http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-06-11 12:33:28,954 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-06-11 12:33:28,963 INFO org.apache.hadoop.mapred.TaskTracker: Starting tasktracker with owner as cyg_server
2013-06-11 12:33:28,965 INFO org.apache.hadoop.mapred.TaskTracker: Good mapred local directories are: /tmp/hadoop-cyg_server/mapred/local
2013-06-11 12:33:28,982 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-06-11 12:33:28,984 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Failed to set permissions of path: \tmp\hadoop-cyg_server\mapred\local\taskTracker to 0755
    at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:689)
    at org.apache.hadoop.fs.FileUtil.setPermission(FileUtil.java:670)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:509)
    at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:344)
    at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
    at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:723)
    at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1459)
    at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3742)

2013-06-11 12:33:28,986 INFO org.apache.hadoop.mapred.TaskTracker: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down TaskTracker at WIN-UHHLG0L1912/192.168.168.63
************************************************************/
And here are my Hadoop configuration properties:
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
 </property>
<property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/workspace/name_dir</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/workspace/data_dir</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
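Not part of the original question, but for anyone comparing their own setup: the property fragments above can be sanity-checked with a small script. This is a minimal sketch, assuming Hadoop-style XML fragments like the ones shown (a file wraps them in a single `<configuration>` root):

```python
# Parse Hadoop-style <property> fragments and print the resulting name/value map,
# e.g. to confirm the NameNode (9000) and JobTracker (9001) addresses differ.
import xml.etree.ElementTree as ET

def parse_props(xml_fragment):
    """Return {name: value} for every <property> element in the fragment."""
    root = ET.fromstring("<configuration>" + xml_fragment + "</configuration>")
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

fragment = """
<property><name>fs.default.name</name><value>hdfs://localhost:9000</value></property>
<property><name>dfs.replication</name><value>2</value></property>
<property><name>mapred.job.tracker</name><value>localhost:9001</value></property>
"""
props = parse_props(fragment)
print(props)
```

One thing such a check makes visible: on a single-node cluster with only one DataNode, `dfs.replication` set to 2 can never be satisfied, which is a common source of stuck writes.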