
Sockets: Open socket connections on a Hadoop datanode on CentOS


I am running a hadoop example job on a centos 6.2.64 machine for debugging:

hadoop jar hadoop-examples-0.20.2-cdh3u3.jar randomtextwriter o
And after the job has completed, the connections to the datanode still seem to be open:

java       8979 username   51u     IPv6          326596025        0t0       TCP localhost:50010->localhost:56126 (ESTABLISHED)
java       8979 username   54u     IPv6          326621990        0t0       TCP localhost:50010->localhost:56394 (ESTABLISHED)
java       8979 username   59u     IPv6          326578719        0t0       TCP *:50010 (LISTEN)
java       8979 username   75u     IPv6          326596390        0t0       TCP localhost:50010->localhost:56131 (ESTABLISHED)
java       8979 username   84u     IPv6          326621621        0t0       TCP localhost:50010->localhost:56388 (ESTABLISHED)
java       8979 username   85u     IPv6          326622171        0t0       TCP localhost:50010->localhost:56395 (ESTABLISHED)
java       9276 username   77u     IPv6          326621714        0t0       TCP localhost:56388->localhost:50010 (ESTABLISHED)
java       9276 username   78u     IPv6          326596118        0t0       TCP localhost:56126->localhost:50010 (ESTABLISHED)
java       9408 username   75u     IPv6          326596482        0t0       TCP localhost:56131->localhost:50010 (ESTABLISHED)
java       9408 username   76u     IPv6          326622170        0t0       TCP localhost:56394->localhost:50010 (ESTABLISHED)
java       9408 username   77u     IPv6          326622930        0t0       TCP localhost:56395->localhost:50010 (ESTABLISHED)
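The listing above looks like lsof output filtered to the datanode's data-transfer port; assuming the default port 50010, a command along these lines should reproduce it (the exact invocation is an assumption, not part of the original post):

lsof -i TCP:50010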
Eventually, after a while, I see this error in the datanode log:

2012-04-12 15:56:29,151 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-591618896-176.9.25.36-50010-1333654003291, infoPort=50075, ipcPort=50020):DataXceiver
java.io.FileNotFoundException: /tmp/hadoop-serendio/dfs/data/current/subdir4/blk_-4401902756916730461_31251.meta (Too many open files)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.<init>(FileInputStream.java:137)
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.getMetaDataInputStream(FSDataset.java:996)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:125)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:258)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:163)
This causes problems in the production system, namely the datanode running out of xcievers. This behavior does not seem to occur on my Ubuntu development box. We are using cloudera hadoop-0.20.2-cdh3u3.


Any pointers to solving this issue?

Add this in hdfs-site.xml, if not already specified:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
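Note that the datanode typically needs to be restarted for this setting to take effect.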

Also try increasing the ulimit value, as sketched below.
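A minimal way to check and raise the limit on CentOS, assuming the datanode runs as the hdfs user (the user name, the 32768 value, and the pgrep pattern are assumptions, not from the original answer):

# open-file limit the running DataNode process actually has
cat /proc/$(pgrep -f DataNode | head -1)/limits | grep 'open files'

# soft limit for the current shell/user
ulimit -n

# to raise it persistently for the datanode user, add lines like these
# to /etc/security/limits.conf, then restart the datanode:
#   hdfs  soft  nofile  32768
#   hdfs  hard  nofile  32768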
A rough rule of thumb for sizing the xciever count:

# of xcievers = (( # of storefiles + # of regions * 4 + # of regionservers * 2 ) / # of datanodes) + reserves (20%)
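Plugging in hypothetical numbers (all values are illustrative assumptions: 2000 storefiles, 100 regions, 2 regionservers, 4 datanodes):

# integer arithmetic in the shell; the 120/100 factor adds the ~20% reserve
echo $(( ( (2000 + 100*4 + 2*2) / 4 ) * 120 / 100 ))
# prints 721, so the 4096 value set above leaves plenty of headroom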