
Hadoop: unable to run a Pig script from Pentaho


I am running Hadoop in distributed mode and want to execute a Pig script on the Hadoop cluster from a remote machine. To achieve this I am using Pentaho (PDI) with the Pig Script Executor job entry. I set all the parameters, for example:

HDFS hostname: the Hadoop master hostname
HDFS port: 8020
Job tracker hostname: another slave node's name
Job tracker port: 8021
Pig script path
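For context, on a Hadoop 2 (YARN) cluster these PDI settings correspond roughly to the client-side Hadoop configuration below. The hostnames are placeholders, and the ports are assumptions: 8032 is the stock Hadoop default for the ResourceManager, while some distributions (e.g. HDP) use 8050 — which matters later, because the error log shows a connection attempt to port 8050.

```xml
<!-- core-site.xml: HDFS endpoint (the "HDFS hostname"/"HDFS port" values above) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop_master:8020</value>
</property>

<!-- yarn-site.xml: on Hadoop 2 the "job tracker" is really the YARN
     ResourceManager; default port 8032, commonly 8050 on HDP -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>job_tracker_host:8032</value>
</property>
```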

I followed this link

But the Pig script fails. Here is the error log:

2015/03/27 16:10:20 – RepositoriesMeta – Reading repositories XML file: C:\Users\vijay.shinde\.kettle\repositories.xml
2015/03/27 16:10:21 – Version checker – OK
2015/03/27 16:10:45 – Spoon – Connected to metastore : pentaho, added to delegating metastore
2015/03/27 16:11:03 – Spoon – Spoon
2015/03/27 16:11:28 – Spoon – Starting job…
2015/03/27 16:11:28 – Job_pig – Start of job execution
2015/03/27 16:11:28 – Job_pig – Starting entry [Pig Script Executor]
2015/03/27 16:11:29 – Pig Script Executor – 2015/03/27 16:11:29 – Connecting to hadoop file system at: hdfs://server_name:8020
2015/03/27 16:11:31 – Pig Script Executor – 2015/03/27 16:11:31 – Connecting to map-reduce job tracker at:job_tracker:8021
2015/03/27 16:11:32 – Pig Script Executor – 2015/03/27 16:11:32 – Pig features used in the script: GROUP_BY,FILTER
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, DuplicateForEachColumnRewrite, FilterLogicExpressionSimplifier, GroupByConstParallelSetter, ImplicitSplitInserter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, NewPartitionFilterOptimizer, PartitionFilterOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – File concatenation threshold: 100 optimistic? false
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Choosing to move algebraic foreach to combiner
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – MR plan size before optimization: 1
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – MR plan size after optimization: 1
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Pig script settings are added to the job
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Reduce phase detected, estimating # of required reducers.
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=110
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – Setting Parallelism to 1
2015/03/27 16:11:33 – Pig Script Executor – 2015/03/27 16:11:33 – creating jar file Job9065727596293143224.jar
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – jar file Job9065727596293143224.jar created
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Setting up single store job
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Key [pig.schematuple] is false, will not generate code.
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Starting process to move generated code to distributed cache
2015/03/27 16:11:38 – Pig Script Executor – 2015/03/27 16:11:38 – Setting key [pig.schematuple.classes] with classes to deserialize []
2015/03/27 16:11:39 – Pig Script Executor – 2015/03/27 16:11:39 – 1 map-reduce job(s) waiting for submission.
2015/03/27 16:37:31 – Pig Script Executor – 2015/03/27 16:37:31 – 0% complete
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – job null has failed! Stop running all dependent jobs
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – 100% complete
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – There is no log file to write to.
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – Backend error message during job submission

N/A filtered_records,grouped_records,max_temp,records   GROUP_BY,COMBINER   Message: java.net.ConnectException: Call From server_name/ip_address to server_name:8050 failed on connection exception: java.net.ConnectException: Connection refused: no further information; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.call(Client.java:1351)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.call(Client.java:1300)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
2015/03/27 16:37:36 – Pig Script Executor – at com.sun.proxy.$Proxy21.getNewApplication(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:167)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.reflect.Method.invoke(Method.java:483)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
2015/03/27 16:37:36 – Pig Script Executor – at com.sun.proxy.$Proxy22.getNewApplication(Unknown Source)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:127)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:135)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:175)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:229)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:355)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
2015/03/27 16:37:36 – Pig Script Executor – at java.security.AccessController.doPrivileged(Native Method)
2015/03/27 16:37:36 – Pig Script Executor – at javax.security.auth.Subject.doAs(Subject.java:422)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2015/03/27 16:37:36 – Pig Script Executor – at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.reflect.Method.invoke(Method.java:483)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:191)
2015/03/27 16:37:36 – Pig Script Executor – at java.lang.Thread.run(Thread.java:745)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:270)
2015/03/27 16:37:36 – Pig Script Executor – Caused by: java.net.ConnectException: Connection refused: no further information
2015/03/27 16:37:36 – Pig Script Executor – at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
2015/03/27 16:37:36 – Pig Script Executor – at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.hadoop.ipc.Client.call(Client.java:1318)
2015/03/27 16:37:36 – Pig Script Executor – … 30 more
2015/03/27 16:37:36 – Pig Script Executor – /data/pig/input/test1,

Input(s):
Failed to read data from "/data/pig/input/pigtest.txt"
Output(s):
Failed to produce result in "/data/pig/input/test1"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
null
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – Failed!
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – ERROR 2244: Job failed, hadoop does not return any error message
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – There is no log file to write to.
2015/03/27 16:37:36 – Pig Script Executor – 2015/03/27 16:37:36 – org.apache.pig.backend.executionengine.ExecException: ERROR 2244: Job failed, hadoop does not return any error message
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:148)
2015/03/27 16:37:36 – Pig Script Executor – at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.hadoop.shim.common.CommonPigShim.executeScript(CommonPigShim.java:105)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.entries.pig.JobEntryPigScriptExecutor.execute(JobEntryPigScriptExecutor.java:492)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.execute(Job.java:678)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.execute(Job.java:815)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.execute(Job.java:500)
2015/03/27 16:37:36 – Pig Script Executor – at org.pentaho.di.job.Job.run(Job.java:407)
2015/03/27 16:37:36 – Pig Script Executor – Num successful jobs: 0 num failed jobs: 1
2015/03/27 16:37:36 – Job_pig – Finished job entry [Pig Script Executor] (result=[false])
2015/03/27 16:37:36 – Job_pig – Job execution finished
2015/03/27 16:37:36 – Spoon – Job has ended.
Here is my Pig script:

records = LOAD '/data/pig/input/pigtest.txt' USING PigStorage(',') AS (year:chararray, temperature:int, quality:int);
filtered_records = FILTER records BY quality == 1;
grouped_records = GROUP filtered_records BY year;
max_temp = FOREACH grouped_records GENERATE group, MAX(filtered_records.temperature);
STORE max_temp INTO '/data/pig/input/test1';
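When a STORE job fails with "hadoop does not return any error message", it can help to probe the relations interactively from the Grunt shell on a cluster node before involving Pentaho at all. A sketch, reusing the relation names from the script above:

```pig
-- Run in the Grunt shell (pig) on a cluster node.
records = LOAD '/data/pig/input/pigtest.txt' USING PigStorage(',')
          AS (year:chararray, temperature:int, quality:int);
DESCRIBE records;   -- confirm the schema parsed as intended
DUMP records;       -- verify the input file is readable from HDFS
```

If DUMP succeeds on the cluster but the PDI job does not, the problem is almost certainly in the client-side connection settings rather than in the script or the data.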

Thanks

Did you set the host configuration for HDFS and the JobTracker/YARN correctly? Yes. If the HDFS and YARN settings were incorrect, the map-reduce job would not have started at all. This error, "Failed to read data from /data/pig/input/pigtest.txt", makes me think the data is not at that path. The file does exist; the same Pig Latin script works when I run it from the Pig console on a Hadoop cluster node.
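Note that the stack trace shows the client connecting to server_name:8050 (a common YARN ResourceManager port) and being refused, even though the job entry was configured with port 8021, which suggests an endpoint mismatch between the PDI shim configuration and the cluster. A small hedged sketch for checking which endpoints actually accept TCP connections — the hostnames and ports below are placeholders taken from the log, not verified values:

```python
import socket


def parse_endpoint(endpoint, default_port=8032):
    """Split a 'host:port' string into (host, port); fall back to default_port."""
    host, _, port = endpoint.partition(":")
    return host, int(port) if port else default_port


def is_reachable(endpoint, timeout=3.0):
    """Return True if a TCP connection to the endpoint succeeds."""
    host, port = parse_endpoint(endpoint)
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholder endpoints from the question's log; substitute your real hosts.
    for ep in ("server_name:8020", "job_tracker:8021", "server_name:8050"):
        status = "reachable" if is_reachable(ep) else "refused/unreachable"
        print(f"{ep}: {status}")
```

Whichever of these is refused points at the service (NameNode vs. ResourceManager) whose address or port needs correcting in the PDI connection settings.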