.net AWS, dotnet spark and redshift not working

Tags: .net, amazon-web-services, apache-spark, amazon-s3, spark-redshift

Hello, I am having trouble getting Redshift and dotnet spark to work.

Here is the configuration I am using to get it working in debug mode:

C:\bin\spark-2.4.1-bin-hadoop2.7\bin\spark-submit.cmd `
            --jars spark-redshift_2.11-2.0.1.jar,aws-java-sdk-1.11.869.jar,aws-java-sdk-s3-1.11.832.jar,hadoop-aws-3.3.0.jar `
            --class org.apache.spark.deploy.dotnet.DotnetRunner `
            --master local `
            --packages org.apache.spark:spark-avro_2.11:2.4.0 `
            microsoft-spark-2.4.x-0.12.1.jar `
            debug
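
(A note on the jars: this is an assumption on my part, but spark-2.4.1-bin-hadoop2.7 bundles Hadoop 2.7 while hadoop-aws-3.3.0.jar targets Hadoop 3.3, so the versions may be mismatched. A sketch of a variant that pairs hadoop-aws with the bundled Hadoop line and binds the bare s3 scheme to the S3A implementation; the jar versions here are illustrative, not a confirmed fix:)

C:\bin\spark-2.4.1-bin-hadoop2.7\bin\spark-submit.cmd `
            --jars spark-redshift_2.11-2.0.1.jar,aws-java-sdk-1.7.4.jar,hadoop-aws-2.7.7.jar `
            --conf spark.hadoop.fs.s3.impl=org.apache.hadoop.fs.s3a.S3AFileSystem `
            --class org.apache.spark.deploy.dotnet.DotnetRunner `
            --master local `
            --packages org.apache.spark:spark-avro_2.11:2.4.0 `
            microsoft-spark-2.4.x-0.12.1.jar `
            debug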
Here is an excerpt of my code:

    // Create a Spark session.
    SparkSession spark = SparkSession
        .Builder()
        .AppName("DataScraping ETL into Redshift")
        //.Config("spark.hadoop.fs.s3.access.key", awsAccessKey)
        //.Config("spark.hadoop.fs.s3.awsSecretAccessKey", awsSecretKey)
        .GetOrCreate();

    var productState = spark.Read()
        //.Format("com.databricks.spark.redshift")
        //.Jdbc(url, table, new Dictionary<string, string>
        //{
        //    ["temporary_aws_access_key_id"] = awsAccessKey,
        //    ["temporary_aws_secret_access_key"] = awsSecretKey,
        //    ["aws_iam_role"] = iam_role,
        //    ["tempdir"] = s3Bucket,
        //    ["jdbcdriver"] = "com.amazon.redshift.jdbc42.Driver"
        //})
        //.Count();
        //.Format("jdbc")
        .Format("com.databricks.spark.redshift")
        //.Format("com.amazon.redshift.jdbc42.Driver")
        .Option("temporary_aws_access_key_id", awsAccessKey)
        .Option("temporary_aws_secret_access_key", awsSecretKey)
        .Option("url", url)
        .Option("dbtable", "product_track")
        .Option("aws_iam_role", iam_role)
        .Option("tempdir", s3Bucket)
        .Option("spark.hadoop.fs.s3.access.key", awsAccessKey)
        .Option("spark.hadoop.fs.s3.awsSecretAccessKey", awsSecretKey)
        .Option("jdbcdriver", "com.amazon.redshift.jdbc42.Driver")
        .Load();

    productState.Show(10);
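
(The commented-out .Config lines above only take effect if they are set before the session is created. For completeness, here is a minimal sketch of that attempt; as far as I understand, the s3a-prefixed key names are the ones hadoop-aws actually reads, while fs.s3.awsSecretAccessKey belongs to the legacy s3 block filesystem. awsAccessKey and awsSecretKey are the same variables as above.)

    // Sketch: configure the S3A filesystem at session level, before Read().
    SparkSession spark = SparkSession
        .Builder()
        .AppName("DataScraping ETL into Redshift")
        // Bind both the bare "s3" scheme and "s3a" to the S3A implementation.
        .Config("spark.hadoop.fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
        .Config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
        .Config("spark.hadoop.fs.s3a.access.key", awsAccessKey)
        .Config("spark.hadoop.fs.s3a.secret.key", awsSecretKey)
        .GetOrCreate();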
Note that, as shown in this post, I have already tried many configurations:

I cannot get it to work. The exception is always the same:

java.io.IOException: No FileSystem for scheme: s3
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at com.databricks.spark.redshift.Utils$.assertThatFileSystemIsNotS3BlockFileSystem(Utils.scala:156)
    at com.databricks.spark.redshift.RedshiftRelation.<init>(RedshiftRelation.scala:52)
    at com.databricks.spark.redshift.DefaultSource.createRelation(DefaultSource.scala:49)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.api.dotnet.DotnetBackendHandler.handleMethodCall(DotnetBackendHandler.scala:145)
    at org.apache.spark.api.dotnet.DotnetBackendHandler.handleBackendRequest(DotnetBackendHandler.scala:85)
    at org.apache.spark.api.dotnet.DotnetBackendHandler.channelRead0(DotnetBackendHandler.scala:28)
    at org.apache.spark.api.dotnet.DotnetBackendHandler.channelRead0(DotnetBackendHandler.scala:23)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:138)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:497)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
    at java.lang.Thread.run(Thread.java:748)
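
If I read the trace correctly, the failure comes from FileSystem.getFileSystemClass while the connector checks the tempdir filesystem: nothing is registered for the bare s3 scheme (hadoop-aws registers s3a instead). A hypothetical tweak along those lines, assuming s3Bucket currently starts with s3://, would be to hand the connector an s3a:// tempdir:

    // Hypothetical: use the s3a scheme for tempdir instead of bare s3,
    // e.g. "s3a://my-bucket/temp"; assumes s3Bucket starts with "s3://".
    .Option("tempdir", s3Bucket.Replace("s3://", "s3a://"))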