java.lang.RuntimeException: java.net.ConnectException when running the Hadoop Pi example


I have configured Hadoop on two machines. I can access both machines over SSH without a password, and I have successfully formatted the namenode with the following command:

bin/hadoop namenode -format

Then I tried to run the Pi example that ships with the Hadoop tarball:

sandip@master:~/hadoop-1.0.4$ bin/hadoop jar hadoop-examples-1.0.4.jar pi 5 500
Number of Maps  = 5
Samples per Map = 500
13/04/14 04:13:04 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 0 time(s).
13/04/14 04:13:05 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 1 time(s).
13/04/14 04:13:06 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 2 time(s).
13/04/14 04:13:07 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 3 time(s).
13/04/14 04:13:08 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 4 time(s).
13/04/14 04:13:09 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 5 time(s).
13/04/14 04:13:10 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 6 time(s).
13/04/14 04:13:11 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 7 time(s).
13/04/14 04:13:12 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 8 time(s).
13/04/14 04:13:13 INFO ipc.Client: Retrying connect to server: master/192.168.188.131:9000. Already tried 9 time(s).
java.lang.RuntimeException: java.net.ConnectException: Call to master/192.168.188.131:9000 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:546)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:318)
at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:265)
at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at ...
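A quick way to confirm what the retries are showing, before touching any configuration, is to check on the master whether the NameNode process is up and whether anything is listening on port 9000. This is only a sketch; the host and port are taken from the log above, and jps ships with the JDK:

# List running Java processes on the master; a healthy HDFS master shows a NameNode entry.
jps

# Check whether anything is listening on the NameNode RPC port from the log.
netstat -tlnp | grep 9000

# Probe the port from the client side; "Connection refused" here mirrors the exception above.
telnet 192.168.188.131 9000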

It says: Connection refused. Looking at the log, I can tell that your machine could not connect to the namenode when the cluster was started.

Check the following things (a configuration sketch follows this list):

- The namenode address is correct (specified with fs.default.name in core-site.xml).
- The TaskTracker address specified in mapred-site.xml is correct.
- The hadoop.tmp.dir property is specified in core-site.xml, and that directory exists on your machine after formatting.
- One more thing to verify after formatting: the VERSION file should contain the same namespaceID on the master and slave nodes.
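As a minimal sketch of what those properties could look like for this two-node Hadoop 1.x setup: the host master and port 9000 come from the log above, while the hadoop.tmp.dir path and the JobTracker port 9001 are example values only, so adjust them to your machines.

conf/core-site.xml:

<configuration>
  <!-- NameNode RPC address; clients connect to this host:port -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <!-- Base directory for HDFS data; example path, use one that exists -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/sandip/hadoop-tmp</value>
  </property>
</configuration>

conf/mapred-site.xml:

<configuration>
  <!-- JobTracker address (Hadoop 1.x MapReduce) -->
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>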


Oh my God, I forgot to start the Hadoop cluster. I fixed it by running:

bin/hadoop namenode -format

bin/start-all.sh
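To confirm the daemons actually came up after bin/start-all.sh, a check like this should work (a sketch; the daemon names are the standard Hadoop 1.x master daemons):

sandip@master:~/hadoop-1.0.4$ jps
# Expect entries such as NameNode, SecondaryNameNode and JobTracker
# (plus DataNode and TaskTracker if the master also acts as a worker).

Also note that re-running bin/hadoop namenode -format generates a new namespaceID, which is exactly why the earlier answer suggests comparing the VERSION files on the master and slaves.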


Did you start the cluster?

I did not specify the TaskTracker's address; in mapred-site.xml I only specified the JobTracker's address. I have specified the hadoop.tmp.dir property in core-site.xml, and the directory exists at the specified path on my machine; when I run the start-dfs.sh command, some folders get created in it. Sir, I would like to know how to check the VERSION file you mentioned.
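On that last question, here is a sketch of how the VERSION files can be compared, assuming the default Hadoop 1.x layout under the hadoop.tmp.dir configured in core-site.xml (the example path is hypothetical):

# Replace with the hadoop.tmp.dir value from your core-site.xml
HADOOP_TMP=/home/sandip/hadoop-tmp

# NameNode's VERSION file (on the master)
cat "$HADOOP_TMP/dfs/name/current/VERSION"

# DataNode's VERSION file (on each slave)
cat "$HADOOP_TMP/dfs/data/current/VERSION"

# The namespaceID line must be identical on every node. A mismatch usually
# appears after re-running "bin/hadoop namenode -format" on the master
# without clearing the DataNode directories on the slaves.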