Java Hadoop: ERROR security.UserGroupInformation: PriviledgedActionException in a MapReduce program


I am trying to run a MapReduce job. When I execute the following command:

hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output
it gives me the following output:

/usr/local/hadoop$ bin/hadoop jar hadoop*examples*.jar wordcount /user/hduser/gutenberg /user/hduser/gutenberg-output
Warning: $HADOOP_HOME is deprecated.
15/03/20 22:03:42 ERROR security.UserGroupInformation: PriviledgedActionException as:suzon cause:org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol
at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol
at org.apache.hadoop.hdfs.server.namenode.NameNode.getProtocolVersion(NameNode.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
at org.apache.hadoop.ipc.Client.call(Client.java:1107)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at org.apache.hadoop.mapred.$Proxy2.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
at org.apache.hadoop.mapred.JobClient.createRPCProxy(JobClient.java:499)
at org.apache.hadoop.mapred.JobClient.init(JobClient.java:490)
at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:473)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:513)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:511)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:499)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:530)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
My jps output:

suzon@Suzon:/usr/local/hadoop$ jps
14944 Jps
14413 SecondaryNameNode
14233 DataNode
14076 NameNode
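One thing the jps listing above shows: only the HDFS daemons (NameNode, DataNode, SecondaryNameNode) are up; there is no JobTracker or TaskTracker. On a Hadoop 1.x single-node setup those MapReduce daemons must be started separately before a job can be submitted. A minimal sketch, assuming the standard Hadoop 1.x control scripts under /usr/local/hadoop (paths taken from the prompt above):

```shell
# Assumption: Hadoop 1.x installed at /usr/local/hadoop, HDFS already running.
cd /usr/local/hadoop

# Start the MapReduce daemons (JobTracker and TaskTracker).
bin/start-mapred.sh

# Re-check: jps should now also list JobTracker and TaskTracker.
jps
```

This alone may not clear the error if the RPC addresses are misconfigured, but a missing JobTracker would also make job submission fail, so it is worth ruling out first.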
Here is my core-site.xml configuration:

<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<!-- Put site-specific property overrides in this file. --> 
<configuration> 
    <property> 
        <name>hadoop.tmp.dir</name> 
        <value>/app/hadoop/tmp</value> 
        <description>A base for other temporary directories.</description> 
    </property> 
    <property> 
        <name>fs.default.name</name> 
        <value>hdfs://localhost:54311</value> 
        <description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a filesystem.</description> 
    </property> 
    <property> 
        <name>mapred.job.tracker</name> 
        <value>localhost:54311</value> 
        <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </description> 
    </property> 
</configuration>
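A note on the error text itself: "Unknown protocol to name node: org.apache.hadoop.mapred.JobSubmissionProtocol" means the job client opened a connection to the NameNode's RPC port but spoke the JobTracker's protocol. In the configuration above, fs.default.name (hdfs://localhost:54311) and mapred.job.tracker (localhost:54311) share the same port, so the JobClient ends up talking to the NameNode. A sketch of the usual separation, not a verified fix for this exact setup; the 54310/54311 split simply follows the common single-node tutorial convention, and any two free, distinct ports would do:

```xml
<!-- core-site.xml: HDFS on its own port -->
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
</property>

<!-- mapred-site.xml: JobTracker on a different port -->
<property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
</property>
```

After changing fs.default.name, the daemons need to be restarted so both the servers and the job client pick up the new addresses.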

My mapred-site.xml configuration:

<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<!-- Put site-specific property overrides in this file. --> 
<configuration> 
    <property> 
        <name>mapred.job.tracker</name> 
        <value>localhost:54311</value> 
        <description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task. </description> 
    </property> 
</configuration>

My hdfs-site.xml configuration:

<?xml version="1.0"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<!-- Put site-specific property overrides in this file. --> 
<configuration> 
    <property> 
        <name>dfs.replication</name> 
        <value>1</value> 
        <description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified in create time. </description> 
    </property> 
</configuration>
