Streaming Command Failed! Error using Elastic MapReduce / S3 and R


I am following the example here, hoping to get something running successfully using EC2/S3/EMR and R.

The job fails at the streaming step. Here are the error logs:

Controller:

2011-07-21T19:14:27.711Z INFO Fetching jar file.
2011-07-21T19:14:30.380Z INFO Working dir /mnt/var/lib/hadoop/steps/1
2011-07-21T19:14:30.380Z INFO Executing /usr/lib/jvm/java-6-sun/bin/java -cp /home/hadoop/conf:  /usr/lib/jvm/java-6-sun/lib/tools.jar:/home/hadoop:/home/hadoop/hadoop-0.20-core.jar:/home/hadoop/hadoop-0.20-tools.jar:/home/hadoop/lib/*:/home/hadoop/lib/jetty-ext/* -Xmx1000m -Dhadoop.log.dir=/mnt/var/log/hadoop/steps/1 -Dhadoop.log.file=syslog -Dhadoop.home.dir=/home/hadoop -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,DRFA -Djava.io.tmpdir=/mnt/var/lib/hadoop/steps/1/tmp -Djava.library.path=/home/hadoop/lib/native/Linux-i386-32 org.apache.hadoop.util.RunJar /home/hadoop/contrib/streaming/hadoop-streaming.jar -cacheFile s3n://emrexample21/calculatePiFunction.R#calculatePiFunction.R -input s3n://emrexample21/numberList.txt -output s3n://emrout/ -mapper s3n://emrexample21/mapper.R -reducer s3n://emrexample21/reducer.R
2011-07-21T19:16:12.057Z INFO Execution ended with ret val 1
2011-07-21T19:16:12.057Z WARN Step failed with bad retval
2011-07-21T19:16:14.185Z INFO Step created jobs: job_201107211913_0001
Stderr:

Streaming Command Failed!
Stdout:

packageJobJar: [/mnt/var/lib/hadoop/tmp/hadoop-unjar2368654264051498521/] [] /mnt/var/lib/hadoop/steps/2/tmp/streamjob1658200878131882888.jar tmpDir=null
Syslog:

2011-07-21 19:50:29,539 INFO org.apache.hadoop.mapred.JobClient (main): Default number of map tasks: 2
2011-07-21 19:50:29,539 INFO org.apache.hadoop.mapred.JobClient (main): Default number of reduce tasks: 15
2011-07-21 19:50:31,988 INFO com.hadoop.compression.lzo.GPLNativeCodeLoader (main): Loaded native gpl library
2011-07-21 19:50:31,999 INFO com.hadoop.compression.lzo.LzoCodec (main): Successfully loaded & initialized native-lzo library [hadoop-lzo rev 2334756312e0012cac793f12f4151bdaa1b4b1bb]
2011-07-21 19:50:33,040 INFO org.apache.hadoop.mapred.FileInputFormat (main): Total input paths to process : 1
2011-07-21 19:50:35,375 INFO org.apache.hadoop.streaming.StreamJob (main): getLocalDirs(): [/mnt/var/lib/hadoop/mapred]
2011-07-21 19:50:35,375 INFO org.apache.hadoop.streaming.StreamJob (main): Running job: job_201107211948_0001
2011-07-21 19:50:35,375 INFO org.apache.hadoop.streaming.StreamJob (main): To kill this job, run:
2011-07-21 19:50:35,375 INFO org.apache.hadoop.streaming.StreamJob (main): UNDEF/bin/hadoop job  -Dmapred.job.tracker=ip-10-203-50-161.ec2.internal:9001 -kill job_201107211948_0001
2011-07-21 19:50:35,376 INFO org.apache.hadoop.streaming.StreamJob (main): Tracking URL: http://ip-10-203-50-161.ec2.internal:9100/jobdetails.jsp?jobid=job_201107211948_0001
2011-07-21 19:50:36,566 INFO org.apache.hadoop.streaming.StreamJob (main):  map 0%  reduce 0%
2011-07-21 19:50:57,778 INFO org.apache.hadoop.streaming.StreamJob (main):  map 50%  reduce 0%
2011-07-21 19:51:09,839 INFO org.apache.hadoop.streaming.StreamJob (main):  map 100%  reduce 0%
2011-07-21 19:51:12,852 INFO org.apache.hadoop.streaming.StreamJob (main):  map 100%  reduce 1%
2011-07-21 19:51:15,864 INFO org.apache.hadoop.streaming.StreamJob (main):  map 100%  reduce 3%
2011-07-21 19:51:18,875 INFO org.apache.hadoop.streaming.StreamJob (main):  map 100%  reduce 0%
2011-07-21 19:52:12,454 INFO org.apache.hadoop.streaming.StreamJob (main):  map 100%  reduce 100%
2011-07-21 19:52:12,455 INFO org.apache.hadoop.streaming.StreamJob (main): To kill this job, run:
2011-07-21 19:52:12,455 INFO org.apache.hadoop.streaming.StreamJob (main): UNDEF/bin/hadoop job  -Dmapred.job.tracker=ip-10-203-50-161.ec2.internal:9001 -kill job_201107211948_0001
2011-07-21 19:52:12,456 INFO org.apache.hadoop.streaming.StreamJob (main): Tracking URL: http://ip-10-203-50-161.ec2.internal:9100/jobdetails.jsp?jobid=job_201107211948_0001
2011-07-21 19:52:12,456 ERROR org.apache.hadoop.streaming.StreamJob (main): Job not Successful!
2011-07-21 19:52:12,456 INFO org.apache.hadoop.streaming.StreamJob (main): killJob...

I am the author of the code you are trying to run. It was written as a proof of concept for R and EMR, and it is hard to produce genuinely useful code with that approach. Submitting R code to EMR, along with all the manual steps required to make the approach work, is tedious work.

To avoid that tedium, I later wrote the Segue package, which abstracts away loading all the bits into S3 as well as updating the version of R on the Hadoop nodes. Jeffrey Breen wrote a post about using Segue; take a look and see whether you find it easier to use.
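For context, the Segue workflow looks roughly like the sketch below. This is from memory, with placeholder credentials and a made-up toy function, so verify the exact function signatures against the package documentation.

library(segue)

# Placeholder AWS credentials -- substitute your own keys.
setCredentials("YOUR_ACCESS_KEY", "YOUR_SECRET_KEY")

# Start a small EMR cluster; Segue handles pushing the needed bits
# to S3 and updating R on the Hadoop nodes.
myCluster <- createCluster(numInstances = 2)

# Apply a toy function over a list on the cluster, lapply-style.
results <- emrlapply(myCluster, as.list(1:10), function(x) x^2)

# Shut the cluster down so it stops accruing EC2 charges.
stopCluster(myCluster)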

Edit:

I should at least offer some tips for debugging R code on EMR/Hadoop streaming:

1) Debugging R code from the Hadoop logs is next to impossible. In my experience I really have to set up an EMR cluster, log in to it, and run the code by hand from within EMR. That requires starting the cluster with a key pair defined. I usually debug on a single-node cluster with a very small data set; running multiple nodes just for debugging makes no sense.

2) Running a job interactively in R on an EMR node requires any input files to be in the /home/hadoop/ directory on the Hadoop node. The easiest way to do that is to scp all the needed files to the cluster.

3) Before doing 1 and 2, test the code locally using the same method.

4) Once you think the R code works, you should be able to run this on your Hadoop machine:

cat numberList.txt | ./mapper.R | sort | ./reducer.R

and it should run. If you are not using a mapper or a reducer, you can substitute cat for it. I use numberList.txt in this example because that is the input file name in the code on GitHub. A generic sketch of the shape those two scripts take follows below.
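What follows is only a generic sketch of the stdin/stdout contract that a streaming mapper and reducer written in R must honor, not the actual code from the example; both files also need a Rscript shebang and execute permission (chmod +x) for the pipeline above to work.

#!/usr/bin/env Rscript
# mapper.R (sketch): read one line at a time from stdin and emit
# tab-separated key/value pairs on stdout.
input <- file("stdin", open = "r")
while (length(line <- readLines(input, n = 1)) > 0) {
  cat("sum", line, sep = "\t")  # a constant key sends every value to one reducer
  cat("\n")
}
close(input)

#!/usr/bin/env Rscript
# reducer.R (sketch): accumulate the values arriving on stdin and
# print the total once the input is exhausted.
input <- file("stdin", open = "r")
total <- 0
while (length(line <- readLines(input, n = 1)) > 0) {
  fields <- strsplit(line, "\t", fixed = TRUE)[[1]]
  total <- total + as.numeric(fields[2])
}
cat(total, "\n")
close(input)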

Hey, the code you linked looks great too. There can't possibly be bugs in it! :) Thanks a lot! I am going to look at Segue and the blog post. The debugging tips will definitely help as well.