Hadoop: DataNode goes down while running TeraSort


I have 4 slaves (including the master). When I run TeraSort, one of my slaves throws the error below. The DataNodes were all up before the run, but while the job runs one of them dies, and the computation is completed by the remaining 3 slave nodes:

INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-5677299757617064640_1010 received exception java.io.IOException: Connection reset by peer

2015-03-12 16:42:06,835 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020):DataXceiver

java.io.IOException: Connection reset by peer    (first error; same log, same run)

2015-03-12 16:42:09,809 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.0.115:50010, storageID=DS-518613992-192.168.0.115-50010-1426203432424, infoPort=50075, ipcPort=50020): Exception writing block blk_2791945666924613489_1015 to mirror 192.168.0.112:50010

java.io.IOException: Broken pipe    (second error)

2015-03-12 16:42:09,824 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_2791945666924613489_1015 received exception java.io.EOFException: while trying to read 65557 bytes    (third error, same run)
I'm stuck on this. Any help is appreciated.

TaskTracker log:

 WARN org.apache.hadoop.mapred.TaskTracker: Failed validating JVM
java.io.IOException: JvmValidate Failed. Ignoring request from task: attempt_201503121637_0001_m_000040_0, with JvmId: jvm_201503121637_0001_m_-2136609016
        at org.apache.hadoop.mapred.TaskTracker.validateJVM(TaskTracker.java:3278)
        at org.apache.hadoop.mapred.TaskTracker.statusUpdate(TaskTracker.java:3348)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
2015-03-12 16:43:02,577 WARN org.apache.hadoop.mapred.DefaultTaskController: Exit code from task is : 143
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.DefaultTaskController: Output from DefaultTaskController's launchTask follows:
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.TaskController:
2015-03-12 16:43:02,577 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201503121637_0001_m_1555953113 exited with exit code 143. Number of tasks it ran: 1
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201503121637_0001_m_000054_0 task's state:UNASSIGNED
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Received commit task action for attempt_201503121637_0001_m_000048_0
2015-03-12 16:43:02,599 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201503121637_0001_m_000054_0 which needs 1 slots
2015-03-12 16:43:02,600 INFO org.apache.hadoop.mapred.TaskTracker: TaskLauncher : Waiting for 1 to launch attempt_201503121637_0001_m_000054_0, currently we have 0 free slots
2015-03-12 16:43:03,618 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201503121637_0001_m_1496188144 given task: attempt_201503121637_0001_m_000051_0

The TaskTracker logs are more descriptive. Can you tell us what is in them?

Also check that the server is running on the correct port.
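
As a quick sanity check before digging into ports, `jps` shows which Hadoop daemons are actually up on a node; the output below is only illustrative for a healthy Hadoop 1.x slave (PIDs will differ):

    # Run on the failing slave; both DataNode and TaskTracker
    # should be listed if the daemons are healthy.
    $ jps
    4821 DataNode
    4997 TaskTracker
    5120 Jps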

You can try this: copy the hadoop core jar from a working datanode to the failing datanode, replace it there, and then restart the mapreduce services.
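
A minimal sketch of that step, assuming a Hadoop 1.x layout under $HADOOP_HOME, an example jar version, and a failing host named slave3 (all placeholders):

    # From a working datanode: push its core jar to the failing node.
    scp $HADOOP_HOME/hadoop-core-1.2.1.jar slave3:$HADOOP_HOME/

    # On the failing node: restart the TaskTracker and DataNode daemons.
    $HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
    $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker
    $HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
    $HADOOP_HOME/bin/hadoop-daemon.sh start datanode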

One more thing to check: run netstat on a working datanode to see which port the tasktracker server is listening on, then verify that the tasktracker service on the failing node is listening on the same port.

I guess the default port for the tasktracker is 50060.
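
For example, assuming that stock 50060 default (the -p flag needs root):

    # On a working node and on the failing node: is anything
    # listening on the TaskTracker's HTTP port?
    netstat -tlnp | grep 50060

    # Or list every port the local Java (Hadoop) processes hold:
    netstat -tlnp | grep java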


So, since the ports are fine: "Connection reset by peer" happens when a request from a reduce task is not satisfied or the result gets truncated; it can also occur when the expected file cannot be found (permissions can cause that as well).
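
If missing files or permissions are a suspect, one way to rule out the HDFS side on Hadoop 1.x is to check cluster and block health; both commands below are stock, but interpreting the output is up to you:

    # Live/dead datanode counts and per-node capacity.
    hadoop dfsadmin -report

    # Block-level health of the filesystem (corrupt/missing blocks).
    hadoop fsck /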

I solved the problem. The issue was that I was SSHing into my slaves as root, and the communication between the jobtracker and the tasktrackers is so frequent that it broke down (connection reset by peer). I set up passwordless SSH between the master and the slaves, and now it works fine. (You need to SSH as hduser, or as a user created in the hadoop group.)
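
For completeness, a minimal sketch of that passwordless-SSH setup, assuming the Hadoop user is hduser and a slave named slave1 (both placeholders; repeat for each slave):

    # As hduser on the master: generate a key pair with an empty
    # passphrase and push the public key to the slave.
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
    ssh-copy-id hduser@slave1

    # Verify the login is now password-free.
    ssh hduser@slave1 exit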

Thanks, Sahitya, for your time and help!


-Vinod

It runs fine and I have no clue about the error :( Thanks, that is informative. But when you say it worked after setting up passwordless ssh, hadn't you already done that before installing hadoop?