
Hadoop Oozie: using S3 as the job folder


Oozie fails with the error below when workflow.xml is served from S3, but the same workflow.xml works when served from HDFS. This also worked on earlier Oozie versions; did something change in Oozie 4.3?

Environment:

HDP 3.1.0
Oozie 4.3.1
oozie.service.HadoopAccessorService.supported.filesystems=*

It works with this change in job.properties:

basepath=hdfs://ambari-master-1a.xdata.com:8020/test/oozie

The job.properties, workflow.xml, and the resulting error are included at the end of this post.
This is caused by the way Oozie protects itself. If you remove it, this action will run for you. If you don't want to recompile, you can find the class file in the distribution and delete it. Be aware that your Oozie server will then be vulnerable to CVE-2017-15712.
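For reference, the class in question is the stub that the stack trace below points at, org.apache.hadoop.fs.RawLocalFileSystem. Inside the Oozie webapp it sits under the usual package directory layout (a path sketch only; the exact location of the webapp directory varies by distribution):

WEB-INF/classes/org/apache/hadoop/fs/RawLocalFileSystem.class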

您可以看到以下问题:,从Oozie WEB-INF/classes中删除RawLocalFilesystem.class并重新启动Oozie是一个临时解决方案,您应该将Oozie版本升级到5.2.0。 顺便问一下,你能展示一下关于S3的配置吗?如何配置S3端点和AK/SK以让oozie访问S3?我正面临这个问题。

您可以看到这个问题:,从oozie WEB-INF/classes中删除RawLocalFilesystem.class并重新启动oozie是一个临时解决方案,您应该将oozie版本升级到5.2.0。
顺便问一下,你能展示一下关于S3的配置吗?如何配置S3端点和AK/SK以让oozie访问S3?我正面临这个问题。
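For anyone with the same question: S3A access is configured through standard Hadoop properties rather than anything Oozie-specific. A minimal core-site.xml sketch follows, assuming the stock Hadoop S3A connector; the endpoint, credentials, and buffer directory values are placeholders (note the actual Hadoop property for the local staging directory is fs.s3a.buffer.dir):

<property>
    <name>fs.s3a.endpoint</name>
    <value>s3.us-east-1.amazonaws.com</value> <!-- placeholder endpoint -->
</property>
<property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value> <!-- placeholder access key -->
</property>
<property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value> <!-- placeholder secret key -->
</property>
<property>
    <name>fs.s3a.buffer.dir</name>
    <value>/tmp/s3a</value> <!-- local staging dir used when uploading blocks -->
</property>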

I fixed this by deleting RawLocalFilesystem.class from the Oozie WEB-INF/classes directory and restarting Oozie. For a filesystem like S3, which only needs fs.s3.buffer.dir, blocking the entire local filesystem instead of doing an explicit check for the CVE looks more like a bug. Thanks for your answer.

job.properties:
nameNode=hdfs://ambari-master-1a.xdata.com:8020
jobTracker=ambari-master-2a.xdata.com:8050
queue=default
#OOZIE job details
basepath=s3a://mybucket/test/oozie
oozie.use.system.libpath=true
oozie.wf.application.path=${basepath}/jobs/test-hive
workflow.xml:

<workflow-app xmlns="uri:oozie:workflow:0.5" name="test-hive">
    <start to="hive-query"/>
    <action name="hive-query" retry-max="2" retry-interval="10">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>test_hive.sql</script>
        </hive>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>job failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

Error:

org.apache.oozie.action.ActionExecutorException: UnsupportedOperationException: Accessing local file system is not allowed
    at org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:446)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1100)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:1214)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:1502)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:241)
    at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:68)
    at org.apache.oozie.command.XCommand.call(XCommand.java:287)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:332)
    at org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:261)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:179)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.UnsupportedOperationException: Accessing local file system is not allowed
    at org.apache.hadoop.fs.RawLocalFileSystem.initialize(RawLocalFileSystem.java:48)
    at org.apache.hadoop.fs.LocalFileSystem.initialize(LocalFileSystem.java:47)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:435)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:301)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:378)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:461)
    at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:200)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.createTmpFileForWrite(S3AFileSystem.java:572)
    at org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory.create(S3ADataBlocks.java:811)
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.createBlockIfNeeded(S3ABlockOutputStream.java:190)
    at org.apache.hadoop.fs.s3a.S3ABlockOutputStream.<init>(S3ABlockOutputStream.java:168)
    at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:778)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1118)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1098)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:987)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:975)
    at org.apache.oozie.action.hadoop.LauncherMapperHelper.setupLauncherInfo(LauncherMapperHelper.java:156)
    at org.apache.oozie.action.hadoop.JavaActionExecutor.createLauncherConf(JavaActionExecutor.java:1040)
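The trace makes the failure chain visible: writing to s3a:// goes through S3ABlockOutputStream, which stages blocks on local disk via LocalDirAllocator, and resolving that local directory calls FileSystem.getLocal(), which lands in the RawLocalFileSystem stub that Oozie ships for CVE-2017-15712. Below is a minimal standalone sketch of the same S3A write path outside Oozie; it assumes hadoop-common and hadoop-aws are on the classpath and S3A credentials are configured, and the bucket and paths are placeholders:

import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3aWriteProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Local staging directory for S3A block uploads; resolving it is what
        // triggers the FileSystem.getLocal() call seen in the stack trace.
        conf.set("fs.s3a.buffer.dir", "/tmp/s3a");
        FileSystem fs = FileSystem.get(new URI("s3a://mybucket/"), conf);
        // fs.create() returns a stream backed by S3ABlockOutputStream.
        try (OutputStream out = fs.create(new Path("s3a://mybucket/test/oozie/probe.txt"))) {
            out.write("hello".getBytes());
        }
        fs.close();
    }
}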