Hadoop YARN - how to limit the requested memory?


Trying to run the PI example from hadoop-mapreduce-examples-2.2.0.jar, I get the following exception:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1536, maxMemory=512

What is the correct way to determine the map/reduce task memory sizes?

512 is the default value of yarn.scheduler.maximum-allocation-mb in yarn-site.xml, and 1536 is the default value of the yarn.app.mapreduce.am.resource.mb parameter in mapred-site.xml.

Make sure that yarn.scheduler.maximum-allocation-mb is greater than (or equal to) yarn.app.mapreduce.am.resource.mb, and the job should run fine.
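For example, a minimal sketch of the fix (assuming the default 1536 MB ApplicationMaster request is kept, so the scheduler maximum is raised instead; values are illustrative and should match your cluster's actual capacity) would be to set in yarn-site.xml:

```xml
<!-- yarn-site.xml: raise the per-container maximum above the AM's request -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <!-- must be >= yarn.app.mapreduce.am.resource.mb (default 1536) -->
  <value>2048</value>
</property>
```

Alternatively, lower the ApplicationMaster's request in mapred-site.xml to fit under the existing 512 MB cap:

```xml
<!-- mapred-site.xml: shrink the MapReduce ApplicationMaster container -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>512</value>
</property>
```

Either change resolves the InvalidResourceRequestException, since the requested memory (1536) no longer exceeds the configured maximum.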