
Unable to get Hadoop job information through the Java client


I am using Hadoop 1.2.1 and trying to print job details through the Java client, but it prints nothing. Here is my Java code:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.JobStatus;

    Configuration configuration = new Configuration();
    configuration.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/hdfs-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml"));
    InetSocketAddress jobtracker = new InetSocketAddress("localhost", 54311);
    JobClient jobClient = new JobClient(jobtracker, configuration);
    jobClient.setConf(configuration);
    JobStatus[] jobs = jobClient.getAllJobs();
    System.out.println(jobs.length); // it is printing 0
    for (int i = 0; i < jobs.length; i++) {
        JobStatus js = jobs[i];
        JobID jobId = js.getJobID();
        System.out.println(jobId);
    }
But in the JobTracker's history I can see three jobs (here is a screenshot). Can anyone tell me where I went wrong? I just want to print the details of all jobs.

Here are my configuration files:

core-site.xml

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system.  A URI whose
scheme and authority determine the FileSystem implementation.  The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class.  The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>

hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.  The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time.
</description>
</property>
</configuration>

mapred-site.xml

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.
</description>
</property>
</configuration>


Try something like this:

jobClient.displayTasks(jobID, "map", "completed");
where the job ID is

JobID jobID = new JobID(jobIdentifier, jobNumber);


I'm not sure whether

    jobClient.getAllJobs()

returns completed jobs.

Thanks @Chaos. Then how do I fetch information about completed jobs?

Having the same problem. Did you ever figure it out?
    TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);
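
Putting the pieces from the question and the answers together, here is a minimal sketch. It assumes a running Hadoop 1.x JobTracker on localhost:54311 (the ports and config paths are the ones from the question), so it cannot be run without a live cluster. Note that in Hadoop 1.x, `getAllJobs()` only returns jobs the JobTracker still tracks; completed jobs are retired after `mapred.jobtracker.retirejob.interval` and then only appear in the job history, which may be why the question's code prints 0.

```java
import java.net.InetSocketAddress;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.TaskReport;

public class ListCompletedJobs {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Paths taken from the question; adjust to your installation.
        conf.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml"));

        JobClient jobClient =
                new JobClient(new InetSocketAddress("localhost", 54311), conf);

        // Only jobs still held by the JobTracker are returned here;
        // retired jobs are visible only through the job history UI.
        for (JobStatus status : jobClient.getAllJobs()) {
            if (status.getRunState() == JobStatus.SUCCEEDED) {
                JobID jobID = status.getJobID();
                System.out.println(jobID);
                // Per-task details for the completed job:
                TaskReport[] mapReports = jobClient.getMapTaskReports(jobID);
                System.out.println("map tasks: " + mapReports.length);
            }
        }
        jobClient.close();
    }
}
```

If the JobTracker has already retired the jobs, lowering the retire interval in mapred-site.xml (or querying the jobs before they finish) should make them visible to `getAllJobs()`.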