
Hadoop Hive error: java.lang.Throwable: Child Error


I am using CDH 5.9, and the following Hive query throws an error when executed. Any idea what the problem is? A plain SELECT query works fine, but more complex queries fail:

hive> select * from table where dt='22-01-2017' and field like '%xyz%' limit 10;
Query ID = hdfs_20170123200303_44a9c423-4bb3-4f80-ade4-b1312971eb63
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201701131637_0067, Tracking URL = http://cdhum03.temp-dsc-updates.bms.bz:50030/jobdetails.jsp?jobid=job_201701131637_0067
Kill Command = /usr/lib/hadoop/bin/hadoop job  -kill job_201701131637_0067
Hadoop job information for Stage-1: number of mappers: 6; number of reducers: 0
2017-01-23 20:05:46,563 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201701131637_0067 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://cdhum03.temp-dsc-updates.bms.bz:50030/jobdetails.jsp?jobid=job_201701131637_0067
Examining task ID: task_201701131637_0067_m_000007 (and more) from job job_201701131637_0067
Examining task ID: task_201701131637_0067_r_000000 (and more) from job job_201701131637_0067

Task with the most failures(4):
-----
Task ID:
  task_201701131637_0067_m_000006

URL:
  http://cdhum03.temp-dsc-updates.bms.bz:50030/taskdetails.jsp?jobid=job_201701131637_0067&tipid=task_201701131637_0067_m_000006
-----
Diagnostic Messages for this Task:
java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250)
Caused by: java.io.IOException: Task process exit with nonzero status of 126.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237)


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: Map: 6   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Thanks.

Check the size of your data: the job appears to need more space than the JVMs on your cluster have available. Either scale out the cluster, or make the query more selective. In select * from table where dt='22-01-2017' and field like '%xyz%' limit 10, the '%xyz%' predicate forces a scan over all of the data, so filtering on a more specific condition is better.
Otherwise, drop the table and create a new partitioned table with the date as the partition column.
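As a rough illustration of that last suggestion, here is a minimal HiveQL sketch of a date-partitioned table; the table names (my_table_part, old_table) and the column list are hypothetical, and only dt and field come from the question:

-- Hypothetical partitioned copy of the table
CREATE TABLE my_table_part (
  field STRING
  -- ... remaining columns ...
)
PARTITIONED BY (dt STRING)
STORED AS ORC;

-- Enable dynamic partitioning before repopulating from the old table
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- The partition column goes last in the SELECT list
INSERT OVERWRITE TABLE my_table_part PARTITION (dt)
SELECT field, dt FROM old_table;

-- Queries that filter on dt now read only the matching partition
-- instead of scanning the whole table
SELECT * FROM my_table_part
WHERE dt = '22-01-2017' AND field LIKE '%xyz%'
LIMIT 10;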

Does the Hive table have any partitions? When Hive reports "Execution Error, return code 2", it means something went wrong inside the YARN job, so check the YARN logs. So why not check those logs? Hint: when Hive prints
Starting Job = job_201701131637_0067
the YARN application ID is actually
application_201701131637_0067
(the "job" prefix is a legacy from pre-YARN days). The command line to fetch the logs is
yarn logs -applicationId application_201701131637_0067 | more
(don't forget the "more", because YARN logs are extremely verbose). Go to the tracking URL and check the detailed error log: