
Hadoop Oozie job not running - suspended


I am running Hadoop in pseudo-distributed mode with Oozie (I am not using a packaged distribution such as CDH or Hortonworks). My setup is: a Fedora 22 VM running on VirtualBox with 4 GB of RAM allocated, Hadoop 2.7, and Oozie 4.2.

After I submit Oozie's sample MapReduce job, it gets suspended, with the following error in the job log:

2015-10-29 15:44:59,048  WARN ActionStartXCommand:523 - SERVER[hadoop] USER[hadoop] GROUP[-] TOKEN[] APP[map-reduce-wf] JOB[0000000-151029154441128-OOZIE-VB-W] ACTION[0000000-151029154441128-OOZIE-VB-W@mr-node] Error starting action [mr-node]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=2048, maxMemory=1024
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:204)
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:328)
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)]
org.apache.oozie.action.ActionExecutorException: JA009: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=2048, maxMemory=1024
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:204)
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.validateAndCreateResourceRequest(RMAppManager.java:385)
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.createAndPopulateNewRMApp(RMAppManager.java:328)
at org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.submitApplication(RMAppManager.java:281)
at org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.submitApplication(ClientRMService.java:580)
at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.submitApplication(ApplicationClientProtocolPBServiceImpl.java:218)
at org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:419)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

at org.apache.oozie.action.ActionExecutor.convertExceptionHelper(ActionExecutor.java:456)
at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:440)
at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1132)
at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1286)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:250)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:64)
at org.apache.oozie.command.XCommand.call(XCommand.java:286)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:321)
at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:250)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
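
(For reference, the workflow state after submission can be checked with the Oozie client; the job id below is taken from the log above and the URL assumes Oozie's default port 11000 on localhost:)

oozie job -oozie http://localhost:11000/oozie -info 0000000-151029154441128-OOZIE-VB-W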
I think this is related to the memory allocation for the MapReduce job, but I cannot work out the exact numbers behind it. Any help here would be much appreciated.
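
From what I can tell, the requestedMemory=2048 in the error is what the Oozie launcher job asks YARN for, while maxMemory=1024 is the scheduler's per-container ceiling, so the request is rejected before the action can even start. The yarn-site.xml properties below are the ones I believe control that ceiling (the values shown are only illustrative, not copied from my cluster):

  <property>
      <!-- total memory YARN may hand out on this node -->
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>2048</value>
  </property>

  <property>
      <!-- largest single container YARN will grant; larger requests fail
           with InvalidResourceRequestException -->
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>2048</value>
  </property>

  <property>
      <!-- smallest container YARN will grant -->
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>512</value>
  </property>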

mapred-site.xml

  <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
  </property>

  <property>
      <name>mapreduce.map.memory.mb</name>
      <value>512</value>
  </property>

  <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>512</value>
  </property>

  <property>
      <name>mapreduce.jobtracker.address</name>
      <value>http://localhost:50031</value>
  </property>

  <property>
      <name>mapreduce.jobtracker.http.address</name>
      <value>http://localhost:50030</value>
  </property>

  <property>
      <name>mapreduce.jobtracker.jobhistory.location</name>
      <value>/home/osboxes/hadoop/logs/jobhistory</value>
  </property>

  <property>
      <name>mapreduce.jobhistory.address</name>
      <value>http://localhost:10020</value>
  </property>

  <property>
     <name>mapreduce.jobhistory.intermediate-done-dir</name>
     <value>/home/osboxes/hadoop/mr-history/temp</value>
  </property>

  <property>
     <name>mapreduce.jobhistory.done-dir</name>
     <value>/home/osboxes/hadoop/mr-history/done</value>
  </property>

  <property>
     <name>mapreduce.cluster.local.dir</name>
     <value>/home/osboxes/hadoop/dfs/local</value>
  </property>

  <property>
     <name>mapreduce.jobtracker.system.dir</name>
     <value>/home/osboxes/hadoop/dfs/system</value>
  </property>
core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/tmp/hadoop-${user.name}</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
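
In case it is relevant, this is the kind of override I understand can be added to the map-reduce action's <configuration> in workflow.xml to shrink the launcher's request below that ceiling; the oozie.launcher.* property names are my reading of the Oozie documentation and I have not verified them on this setup:

  <action name="mr-node">
      <map-reduce>
          ...
          <configuration>
              <!-- memory for the launcher job's single map task -->
              <property>
                  <name>oozie.launcher.mapreduce.map.memory.mb</name>
                  <value>512</value>
              </property>
              <!-- memory for the launcher job's MapReduce ApplicationMaster -->
              <property>
                  <name>oozie.launcher.yarn.app.mapreduce.am.resource.mb</name>
                  <value>512</value>
              </property>
          </configuration>
      </map-reduce>
      ...
  </action>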
