
Hadoop: setting the queue name in Pig v0.15


I am getting the following exception when trying to execute a pig script through the shell:

JobId   Alias   Feature Message Outputs
job_1520637789949_340250        A,B,D,top_rec   GROUP_BY        Message: java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1520637789949_340250 to YARN : Application rejected by queue placement policy
I understand this is happening because the correct queue name was not set for the MR execution. To find out how to set the queuename for a mapreduce job, I searched through the help, pig --help, which lists the following options:

Apache Pig version 0.15.0-mapr-1611 (rexported)
compiled Dec 06 2016, 05:50:07

USAGE: Pig [options] [-] : Run interactively in grunt shell.
       Pig [options] -e[xecute] cmd [cmd ...] : Run cmd(s).
       Pig [options] [-f[ile]] file : Run cmds found in file.
  options include:
    -4, -log4jconf - Log4j configuration file, overrides log conf
    -b, -brief - Brief logging (no timestamps)
    -c, -check - Syntax check
    -d, -debug - Debug level, INFO is default
    -e, -execute - Commands to execute (within quotes)
    -f, -file - Path to the script to execute
    -g, -embedded - ScriptEngine classname or keyword for the ScriptEngine
    -h, -help - Display this message. You can specify topic to get help for that topic.
        properties is the only topic currently supported: -h properties.
    -i, -version - Display version information
    -l, -logfile - Path to client side log file; default is current working directory.
    -m, -param_file - Path to the parameter file
    -p, -param - Key value pair of the form param=val
    -r, -dryrun - Produces script with substituted parameters. Script is not executed.
    -t, -optimizer_off - Turn optimizations off. The following values are supported:
            ConstantCalculator - Calculate constants at compile time
            SplitFilter - Split filter conditions
            PushUpFilter - Filter as early as possible
            MergeFilter - Merge filter conditions
            PushDownForeachFlatten - Join or explode as late as possible
            LimitOptimizer - Limit as early as possible
            ColumnMapKeyPrune - Remove unused data
            AddForEach - Add ForEach to remove unneeded columns
            MergeForEach - Merge adjacent ForEach
            GroupByConstParallelSetter - Force parallel 1 for "group all" statement
            PartitionFilterOptimizer - Pushdown partition filter conditions to loader implementing LoadMetaData
            PredicatePushdownOptimizer - Pushdown filter predicates to loader implementing LoadPredicatePushDown
            All - Disable all optimizations
        All optimizations listed here are enabled by default. Optimization values are case insensitive.
    -v, -verbose - Print all error messages to screen
    -w, -warning - Turn warning logging on; also turns warning aggregation off
    -x, -exectype - Set execution mode: local|mapreduce|tez, default is mapreduce.
    -F, -stop_on_failure - Aborts execution on the first failed job; default is off
    -M, -no_multiquery - Turn multiquery optimization off; default is on
    -N, -no_fetch - Turn fetch optimization off; default is on
    -P, -propertyFile - Path to property file
    -printCmdDebug - Overrides anything else and prints the actual command used to run Pig, including
                     any environment variables that are set by the pig command.
18/03/30 13:03:05 INFO pig.Main: Pig script completed in 163 milliseconds (163 ms)
I tried pig -p mapreduce.job.queuename=my_queue and was able to log in to grunt without any error.

However, on the very first command itself, it throws the following:

ERROR 2997: Encountered IOException. org.apache.pig.tools.parameters.ParseException: Encountered " <OTHER> ".job.queuename=my_queue "" at line 1, column 10.
Was expecting:
    "=" ...

I am not sure whether I am doing this correctly. What am I missing?

To set the queuename in pig 0.15, there are the following options (they likely apply to other versions as well):

1) pig ships with an option to start the pig session with a given queue name. Simply use the command below:

pig -Dmapreduce.job.queuename=my_queue
2) Another option is to set the same property inside the grunt shell, or in the pig script itself:

set mapreduce.job.queuename my_queue;
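A third route, consistent with the -P / -propertyFile flag shown in the help output above, is to keep the property in a file so it does not have to be repeated on every invocation. The file name and queue name here are hypothetical:

```shell
# my_pig.properties (hypothetical file) would contain the single line:
#   mapreduce.job.queuename=my_queue
pig -P my_pig.properties -f myscript.pig
```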
