python hadoop: MapReduce job not working

Tags: python, hadoop, mapreduce, hdfs, sequencefile

My MapReduce program processes 20 videos, so I uploaded the 20 videos to HDFS. When I start executing the MapReduce code from the terminal, it does not proceed. When I run this command,
pydoop submit --upload-file-to-cache stage1.py stage1 path_directory stage1_output
it gets stuck. The log on the terminal is as follows:

hduser@Barca-FC:/home/uday/Project/final project/algo2$ pydoop submit --upload-file-to-cache twodct.py twodct  path_directory twodct_output
16/05/30 18:19:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/30 18:19:21 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
16/05/30 18:19:22 WARN mapreduce.JobSubmitter: No job jar file set.  User classes may not be found. See Job or Job#setJar(String).
16/05/30 18:19:22 INFO input.FileInputFormat: Total input paths to process : 1
16/05/30 18:19:22 INFO mapreduce.JobSubmitter: number of splits:1
16/05/30 18:19:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1464609268645_0002
16/05/30 18:19:23 INFO mapred.YARNRunner: Job jar is not present. Not adding any jar to the list of resources.
16/05/30 18:19:23 INFO impl.YarnClientImpl: Submitted application application_1464609268645_0002
16/05/30 18:19:23 INFO mapreduce.Job: The url to track the job: http://Barca-FC:8088/proxy/application_1464609268645_0002/
16/05/30 18:19:23 INFO mapreduce.Job: Running job: job_1464609268645_0002
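The session stalls right after the "Running job:" line, with no map or reduce progress ever printed (the NativeCodeLoader warning at the top is harmless and unrelated). One detail worth checking first: the log reports "Total input paths to process : 1" even though 20 videos were uploaded, so the job may not be seeing the directory you expect. A minimal sketch to verify the input from Python, assuming pydoop is installed, HDFS is running, and path_directory is the input path passed to pydoop submit above:

import pydoop.hdfs as hdfs

# List what the job will actually see as input; each video should
# show up as one entry ("path_directory" is the input argument from
# the submit command above).
entries = hdfs.ls("path_directory")
print(len(entries), "input path(s) found")
for e in entries:
    print(e)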
My Hadoop configuration files look like this:

mapred-site.xml:
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
<property>
<name>mapred.reduce.tasks</name>
<value>1</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
hdfs-site.xml:

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
<property>
 <name>dfs.webhdfs.enabled</name>
 <value>true</value>
</property>
</configuration>

core-site.xml:

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
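Note that this core-site.xml sets both fs.default.name (the deprecated pre-Hadoop-2 key, pointing at port 54310) and fs.defaultFS (the current key, pointing at port 9000), so the two properties disagree about the NameNode port. A minimal sketch to check which address the client actually resolves, assuming pydoop is installed and picks up this configuration:

import pydoop.hdfs as hdfs

# host="default" asks the client library to resolve the NameNode
# address from the loaded Hadoop configuration (fs.defaultFS).
fs = hdfs.hdfs(host="default", port=0)
print(fs.host, fs.port)
fs.close()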

Can anyone tell me why my MapReduce job is not proceeding?
Thanks in advance
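For reference, a module handed to pydoop submit (stage1.py / twodct.py above) is normally built around pydoop's MapReduce API: a Mapper class, an optional Reducer class, and a __main__ entry point that the framework invokes. A minimal generic sketch of that shape (a word-count-style skeleton, not the asker's actual code):

import pydoop.mapreduce.api as api
import pydoop.mapreduce.pipes as pipes

class Mapper(api.Mapper):
    def map(self, context):
        # context.value holds the current input record
        for word in context.value.split():
            context.emit(word, 1)

class Reducer(api.Reducer):
    def reduce(self, context):
        # context.values iterates over all values emitted for context.key
        context.emit(context.key, sum(context.values))

def __main__():
    # pydoop submit looks for this entry point in the uploaded module
    pipes.run_task(pipes.Factory(Mapper, reducer_class=Reducer))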

Does the map reduce code use any native libraries for the videos? If it does, the native libraries need to be available on the nodes.
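A quick way to act on that suggestion is to try importing the video library directly on each worker node, since an ImportError inside the mapper would otherwise only surface in the task logs. A minimal sketch, where cv2 (OpenCV) is just a hypothetical stand-in for whatever library the mappers actually use:

# cv2 is a hypothetical example; substitute the library your mapper imports.
try:
    import cv2  # noqa: F401
    print("video library available on this node")
except ImportError as exc:
    print("missing on this node:", exc)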