Hadoop ConnectException: connect error when trying to connect to '50010' using importtsv on HBase

I have configured the short-circuit read settings in both hdfs-site.xml and hbase-site.xml. I am running importtsv to import data from HDFS into HBase on the cluster. I checked the logs of every DataNode, and all of them contain the ConnectException described in the title (full stack trace below).
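
For context, ImportTsv is launched from the hbase command line; a representative invocation (the table name, column mapping, and input path below are placeholders, since the question does not show the exact command) is:

hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,cf:col1 \
    mytable hdfs:///user/hadoop/input.tsv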

2017-03-31 21:59:01,273 WARN [main] org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: error creating DomainSocket
java.net.ConnectException: connect(2) error: No such file or directory when trying to connect to '50010'
    at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
    at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:250)
    at org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:164)
    at org.apache.hadoop.hdfs.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:753)
    at org.apache.hadoop.hdfs.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:469)
    at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:783)
    at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:717)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:421)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:332)
    at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:617)
    at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:841)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:889)
    at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
    at java.io.DataInputStream.readByte(DataInputStream.java:265)
    at org.apache.hadoop.io.WritableUtils.readVLong(WritableUtils.java:308)
    at org.apache.hadoop.io.WritableUtils.readVIntInRange(WritableUtils.java:348)
    at org.apache.hadoop.io.Text.readString(Text.java:471)
    at org.apache.hadoop.io.Text.readString(Text.java:464)
    at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:358)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:751)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-03-31 21:59:01,277 WARN [main] org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache: ShortCircuitCache(0x34f7234e): failed to load 1073750370_BP-642933002-"IP_ADDRESS"-1490774107737
EDIT

Hadoop 2.6.4, HBase 1.2.3

hdfs-site.xml

<property>
    <name>dfs.namenode.dir</name>
    <value>/home/hadoop/hdfs/nn</value>
</property>
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/home/hadoop/hdfs/snn</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hdfs/dn</value>
</property>
<property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop1:50070</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:50090</value>
</property>
<property>
    <name>dfs.namenode.rpc-address</name>
    <value>hadoop1:8020</value>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>50</value>
</property>
<property>
    <name>dfs.datanode.handler.count</name>
    <value>50</value>
</property>
<property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
</property>
<property>
    <name>dfs.block.local-path-access.user</name>
    <value>hbase</value>
</property>
<property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>775</value>
</property>
<property>
    <name>dfs.domain.socket.path</name>
    <value>_PORT</value>
</property>
<property>
    <name>dfs.client.domain.socket.traffic</name>
    <value>true</value>
</property>

hbase-site.xml

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1/hbase</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
        <value>hadoop1,hadoop2,hadoop3,hadoop4,hadoop5,hadoop6,hadoop7,hadoop8</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
</property>
<property>
    <name>hbase.regionserver.handler.count</name>
    <value>50</value>
</property>
<property>
    <name>hfile.block.cache.size</name>
    <value>0.5</value>
</property>
<property>
    <name>hbase.regionserver.global.memstore.size</name>
    <value>0.3</value>
</property>
<property>
    <name>hbase.regionserver.global.memstore.size.lower.limit</name>
    <value>0.65</value>
</property>
<property>
    <name>dfs.domain.socket.path</name>
    <value>_PORT</value>
</property>

Short-circuit reads make use of a UNIX domain socket. This is a special path in the filesystem that allows the client and the DataNode to communicate. You need to set the path to this socket, not a port. The DataNode should be able to create this path.

The parent directory of the path value (e.g. /var/lib/hadoop-hdfs/) must exist and should be owned by the Hadoop superuser. Also make sure that no user other than the HDFS user or root has access to this path:

mkdir /var/lib/hadoop-hdfs/
chown hdfs_user:hdfs_user /var/lib/hadoop-hdfs/
chmod 750 /var/lib/hadoop-hdfs/
Add this property to hdfs-site.xml on all DataNodes and clients:

<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
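
The RegionServers and the ImportTsv mapper tasks act as HDFS clients here, so the same value also belongs in hbase-site.xml, replacing the literal _PORT shown in the question's configuration. A sketch, reusing the example path from above:

<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>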

Restart the services after making the changes.
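
One way to verify the change after the restart (a suggested check, not part of the original answer) is to print the value the client resolves for the key and confirm that the DataNode actually created the socket file:

# Print the effective value of the property as seen by an HDFS client.
hdfs getconf -confKey dfs.domain.socket.path

# Once the DataNode is running, the socket file should exist at that path.
ls -l /var/lib/hadoop-hdfs/dn_socket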


Note: Paths under /var/run or /var/lib are commonly used.

Comment: Show your hdfs-site.xml.
Comment: @franklinsijo I added the configuration.