Hadoop: whenever I run a Hive query, it gets stuck on any operation that runs through MapReduce


My yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

My mapred-site.xml

<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
</property>
</configuration>

Query ID = niraj_20201108170818_6dd4f715-f1b9-4b31-a184-75c60417a080
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1604835040611_0001, Tracking URL = http://niraj-HP-Pavilion-Laptop-15-cs1xxx:8088/proxy/application_1604835040611_0001/
Kill Command = /home/niraj/hadoop-3.3.0/bin/mapred job  -kill job_1604835040611_0001

It gets stuck here.

mapreduce.framework.name is set to local, but you seem to be using ... otherwise. You will have to go to the tracking URL to find out why it is stuck.
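The mapred-site.xml above does set mapreduce.framework.name to local, yet the tracking URL in the log points at a ResourceManager proxy on port 8088, i.e. the job is being handed to YARN. If the intention is to run Hive jobs on YARN, a Hadoop 3.3.0 mapred-site.xml would normally look roughly like the sketch below; the HADOOP_MAPRED_HOME value is an assumption taken from the /home/niraj/hadoop-3.3.0 path in the kill command, so adjust it to the real install location.

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- Hadoop 3.x: tell the MR ApplicationMaster and the map/reduce tasks where the MapReduce jars live -->
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/home/niraj/hadoop-3.3.0</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/home/niraj/hadoop-3.3.0</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/home/niraj/hadoop-3.3.0</value>
  </property>
</configuration>

This is only a sketch of the usual pseudo-distributed setup, not a confirmed fix for the hang; with the framework genuinely running in local mode, Hive would execute the job in-process rather than produce a YARN application id at all, which suggests this file may not even be the one being picked up.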
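When the tracking URL alone does not explain the hang, the standard YARN command-line checks are the quickest way to see why the application is stuck. The application id below is the one from the log above, and the commands assume the Hadoop 3.3.0 bin directory is on the PATH.

# Current state of the application (stuck in ACCEPTED usually means YARN cannot allocate the AM container)
yarn application -status application_1604835040611_0001

# List all NodeManagers; if none are RUNNING, the shuffle service configured above is never loaded
# and every job will wait forever for containers.
yarn node -list -all

# Aggregated container logs, available once the application finishes or is killed
yarn logs -applicationId application_1604835040611_0001

If the application never leaves the ACCEPTED state, checking the ResourceManager web UI on port 8088 (the same host as the tracking URL) for available memory and vcores is the usual next step.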