Pig error on Hadoop when trying to dump a table

I am trying to run a very simple Pig script and keep running into a complicated problem.

The script:

log = LOAD 'C:/Users/malanio/Documents/test.log' USING PigStorage(',') AS (user:chararray, some:long, some2:chararray);
DUMP log;
The file I am loading:

ravi,1,1
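
For reference, with that single input line and the declared schema I would expect DUMP to simply print the loaded tuple back, roughly:

(ravi,1,1)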
Instead, I get the following error:

C:\Users\malanio\Documents>pig -x local testrun.pig
2014-06-12 14:46:22,939 [main] INFO  org.apache.pig.Main - Apache Pig version 0.12.1 (r1585011) compiled Apr 05 2014, 01:41:34
2014-06-12 14:46:22,940 [main] INFO  org.apache.pig.Main - Logging error messages to: C:\hadoop-2.4.0\logs\pig_1402598782937.log
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/pig-0.12.1/pig-0.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2014-06-12 14:46:23,616 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file C:\Users\malanio/.pigbootup not found
2014-06-12 14:46:23,702 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
2014-06-12 14:46:23,702 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2014-06-12 14:46:23,704 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: file:///
2014-06-12 14:46:24,275 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: UNKNOWN
2014-06-12 14:46:24,317 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter], RULES_DISABLED=[FilterLogicExpressionSimplifier]}
2014-06-12 14:46:24,470 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2014-06-12 14:46:24,501 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
2014-06-12 14:46:24,501 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
2014-06-12 14:46:24,526 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - session.id is deprecated. Instead, use dfs.metrics.session-id
2014-06-12 14:46:24,527 [main] INFO  org.apache.hadoop.metrics.jvm.JvmMetrics - Initializing JVM Metrics with processName=JobTracker, sessionId=
2014-06-12 14:46:24,551 [main] WARN  org.apache.pig.backend.hadoop20.PigJobControl - falling back to default JobControl (not using hadoop 0.20 ?)
java.lang.NoSuchFieldException: runnerState
    at java.lang.Class.getDeclaredField(Class.java:1948)
    at org.apache.pig.backend.hadoop20.PigJobControl.<clinit>(PigJobControl.java:51)
    at org.apache.pig.backend.hadoop.executionengine.shims.HadoopShims.newJobControl(HadoopShims.java:98)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:289)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:191)
    at org.apache.pig.PigServer.launchPlan(PigServer.java:1324)
    at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1309)
    at org.apache.pig.PigServer.storeEx(PigServer.java:980)
    at org.apache.pig.PigServer.store(PigServer.java:944)
    at org.apache.pig.PigServer.openIterator(PigServer.java:857)
    at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:774)
    at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
    at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
    at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
    at org.apache.pig.Main.run(Main.java:607)
    at org.apache.pig.Main.main(Main.java:156)
2014-06-12 14:46:24,569 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
2014-06-12 14:46:24,579 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
2014-06-12 14:46:24,581 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2014-06-12 14:46:24,584 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
2014-06-12 14:46:24,625 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
2014-06-12 14:46:24,640 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2014-06-12 14:46:24,642 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
2014-06-12 14:46:24,645 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Distributed cache not supported or needed in local mode. Setting key [pig.schematuple.local.dir] with code temp directory: C:\Users\malanio\AppData\Local\Temp\1402598784640-0
2014-06-12 14:46:24,688 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2014-06-12 14:46:24,693 [main] INFO  org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
2014-06-12 14:46:24,704 [JobControl] INFO  org.apache.hadoop.metrics.jvm.JvmMetrics - Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
2014-06-12 14:46:24,714 [JobControl] ERROR org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl - Error while trying to run jobs.
java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:225)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:186)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
    at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
2014-06-12 14:46:24,753 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2014-06-12 14:46:24,764 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2014-06-12 14:46:24,767 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs
2014-06-12 14:46:24,771 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2014-06-12 14:46:24,783 [main] ERROR org.apache.pig.tools.pigstats.SimplePigStats - ERROR 2997: Unable to recreate exception from backend error: Unexpected System Error Occured: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:225)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:186)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
    at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)

2014-06-12 14:46:24,821 [main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
2014-06-12 14:46:24,824 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Detected Local mode. Stats reported below may be incomplete
2014-06-12 14:46:24,831 [main] INFO  org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics:

HadoopVersion   PigVersion      UserId  StartedAt       FinishedAt      Features
2.4.0   0.12.1  malanio 2014-06-12 14:46:24     2014-06-12 14:46:24     UNKNOWN

Failed!

Failed Jobs:
JobId   Alias   Feature Message Outputs
N/A     log     MAP_ONLY        Message: Unexpected System Error Occured: java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.setupUdfEnvAndStores(PigOutputFormat.java:225)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat.checkOutputSpecs(PigOutputFormat.java:186)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:458)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:343)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.JobControl.run(JobControl.java:240)
    at org.apache.pig.backend.hadoop20.PigJobControl.run(PigJobControl.java:121)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:271)
    file:/tmp/temp-590289635/tmp-804647280,

Input(s):
Failed to read data from "C:/Users/malanio/Documents/test.log"

Output(s):
Failed to produce result in "file:/tmp/temp-590289635/tmp-804647280"

Job DAG:
null


2014-06-12 14:46:24,939 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2014-06-12 14:46:24,952 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias log
Details at logfile: C:\hadoop-2.4.0\logs\pig_1402598782937.log