Ubuntu: MapReduce queries not executing on Hive?

I am new to Hadoop and Hive. Whenever I run a Hive query that launches MapReduce, such as SELECT COUNT(*), AVG(), or loading data into HBase, it fails with the error below. I have googled it but found no solution. Other simple queries, such as SELECT *, CREATE, and USE, run fine.

hive> select count(*) from test_table;
Query ID = dev4_20171016095209_43c4e980-efbd-42d3-94d4-1a4b8de3d956
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1508127394848_0001, Tracking URL = http://dev4:8088/proxy/application_1508127394848_0001/
Kill Command = /usr/local/hadoop-2.8.1//bin/hadoop job  -kill job_1508127394848_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2017-10-16 09:52:38,820 Stage-1 map = 0%,  reduce = 0%
Ended Job = job_1508127394848_0001 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
Hadoop yarn-site.xml:

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

Hive hive-site.xml:

<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://localhost/metastore?createDatabaseIfNotExist=true</value>
        <description>metadata is stored in a MySQL server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>MySQL JDBC driver class</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hiveuser</value>
        <description>user name for connecting to mysql server</description>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>harileela</value>
        <description>password for connecting to mysql server</description>
    </property>
    <property>
        <name>hive.aux.jars.path</name>
        <value>file:///usr/local/hive/lib/hive-serde-1.2.2.jar</value>
        <description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
    </property>
    <property>
        <name>hive.exec.reducers.bytes.per.reducer</name>
        <value>1000000</value>
     </property>

</configuration>

Here is my application overview:

User:   dev4
Name:   select count(*) from test_table(Stage-1)
Application Type:   MAPREDUCE
Application Tags:   
Application Priority:   0 (Higher Integer value indicates higher priority)
YarnApplicationState:   FAILED
Queue:  default
FinalStatus Reported by AM:     FAILED
Started:    Mon Oct 16 13:10:37 +0530 2017
Elapsed:    8sec
Tracking URL:   History
Log Aggregation Status:     DISABLED
Diagnostics:    
Application application_1508139045948_0002 failed 2 times due to AM Container for appattempt_1508139045948_0002_000002 exited with exitCode: 127
Failing this attempt.Diagnostics: Exception from container-launch.
Container id: container_1508139045948_0002_02_000001
Exit code: 127
Exception message: /bin/bash: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor_session.sh: No such file or directory
/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor.sh: line 4: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp: No such file or directory
/bin/mv: cannot stat '/home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp': No such file or directory
Stack trace: ExitCodeException exitCode=127: /bin/bash: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor_session.sh: No such file or directory
/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${dev4}/nm-local-dir/usercache/dev4/appcache/application_1508139045948_0002/container_1508139045948_0002_02_000001/default_container_executor.sh: line 4: /home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp: No such file or directory
/bin/mv: cannot stat '/home/dev4/local/hadoop-2.8.1/tmp/hadoop-/nm-local-dir/nmPrivate/application_1508139045948_0002/container_1508139045948_0002_02_000001/container_1508139045948_0002_02_000001.pid.exitcode.tmp': No such file or directory
at org.apache.hadoop.util.Shell.runCommand(Shell.java:972)
at org.apache.hadoop.util.Shell.run(Shell.java:869)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:236)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:305)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:84)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 127
For more detailed output, check the application tracking page: http://dev4:8088/cluster/app/application_1508139045948_0002 Then click on links to logs of each attempt.
. Failing the application.
Unmanaged Application:  false
Application Node Label expression:  <Not set>
AM container Node Label expression:     <DEFAULT_PARTITION> 
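One detail worth noting in the diagnostics above: the paths alternate between `hadoop-${dev4}` and a bare `hadoop-` (nothing after the hyphen), which suggests `hadoop.tmp.dir` was configured with a variable that never resolves, so the NodeManager's local directory does not exist at container launch. A minimal core-site.xml sketch of what a corrected setting might look like, assuming the standard `${user.name}` substitution was what was intended (this is a hypothesis from the log paths, not a confirmed fix):

```xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <!-- ${user.name} is a Java system property that Hadoop resolves
             itself; an undefined variable such as ${dev4} expands to an
             empty string and yields paths like .../tmp/hadoop-/nm-local-dir,
             matching the "No such file or directory" errors above. -->
        <value>/home/dev4/local/hadoop-2.8.1/tmp/hadoop-${user.name}</value>
    </property>
</configuration>
```

After changing this, the NodeManager and ResourceManager would need to be restarted for the new local directories to be created.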
I cannot figure out where the problem is.
Thanks.

Nobody can know where the problem is, because you haven't provided any useful information. There are already hundreds of questions like this on S.O., and the answer is always "check the YARN logs to see why the YARN container returned a non-zero code." — Hey Samson, thanks for the reply; I have added the log details (as the application overview). I checked them, but I still can't work it out. — Check the actual logs so you can find the real error ("/tmp/hive.log").
有关更详细的输出,请查看应用程序跟踪页面:http://dev4:8088/cluster/app/application_1508139045948_0002 然后单击指向每次尝试日志的链接。
Those "logs of each attempt" are the YARN logs I am talking about. Nobody can know where the problem is, because you haven't provided any useful information there.
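As the comments suggest, the real error lives in the container logs, not in the Hive console output. A sketch of how to reach them (note that the overview above shows "Log Aggregation Status: DISABLED", so the aggregated-log command only works once aggregation is turned on; the exact local log path is an assumption based on default Hadoop 2.x layout):

```
# With log aggregation enabled, fetch all container logs for the failed app:
yarn logs -applicationId application_1508139045948_0002

# With aggregation disabled (as here), read the files directly on the
# NodeManager host, typically under:
#   $HADOOP_HOME/logs/userlogs/application_1508139045948_0002/
```

The stderr file of the AM container (the one that exited with code 127) is usually the most informative.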