SQL Server java.lang.NoSuchMethodError: com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions.setAllowEncryptedValueModifications(Z)V


I am trying to use the bulkCopyToSqlDB function from the SQL Spark connector found here with the Microsoft SQL Server JDBC driver.

This is the syntax used to start the spark shell:

spark-shell --jars /developer/sqljdbc_6.4/enu/mssql-jdbc-6.4.0.jre8.jar,/developer/azure-sqldb-spark-master/azure-sqldb-spark-master/target/azure-sqldb-spark-1.0.0-jar-with-dependencies.jar
After creating the bulkCopyConfig and running the preceding code in the spark shell, the error is produced when the following command is run:

df.bulkCopyToSqlDB(bulkCopyConfig)
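
For reference, the bulkCopyConfig was built along these lines (a minimal sketch based on the connector's documented Config API; the server, credentials, and table name are placeholders, not the actual values used):

    import com.microsoft.azure.sqldb.spark.config.Config
    import com.microsoft.azure.sqldb.spark.connect._ // adds bulkCopyToSqlDB to DataFrame

    // Placeholder connection settings -- substitute real values.
    val bulkCopyConfig = Config(Map(
      "url"               -> "myserver.database.windows.net",
      "databaseName"      -> "MyDatabase",
      "user"              -> "username",
      "password"          -> "**********",
      "dbTable"           -> "dbo.MyTable",
      "bulkCopyBatchSize" -> "2500",
      "bulkCopyTableLock" -> "true",
      "bulkCopyTimeout"   -> "600"
    ))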
The full error message is:

  Caused by: java.lang.NoSuchMethodError: com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions.setAllowEncryptedValueModifications(Z)V
  at com.microsoft.azure.sqldb.spark.bulk.BulkCopyUtils$.getBulkCopyOptions(BulkCopyUtils.scala:109)
  at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions.com$microsoft$azure$sqldb$spark$connect$DataFrameFunctions$$bulkCopy(DataFrameFunctions.scala:126)
  at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
  at com.microsoft.azure.sqldb.spark.connect.DataFrameFunctions$$anonfun$bulkCopyToSqlDB$1.apply(DataFrameFunctions.scala:72)
  at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
  at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:109)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)

This is caused by an old version of the mssql-jdbc jar on the classpath: setAllowEncryptedValueModifications does not exist on SQLServerBulkCopyOptions in older drivers, so the connector, which was compiled against it, fails at runtime with a NoSuchMethodError. Make sure to update the driver to 6.4.0 or later. If you are using Maven, the dependency looks like this:

    <dependency>
        <groupId>com.microsoft.sqlserver</groupId>
        <artifactId>mssql-jdbc</artifactId>
        <version>6.4.0.jre8</version>
    </dependency>
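
If the error persists after updating, it usually means an older driver jar is still shadowing the new one. One way to confirm which jar the class is actually loaded from (a generic JVM technique, not part of the original answer) is to inspect the class's code source in the spark-shell:

    // Prints the jar that supplied SQLServerBulkCopyOptions; if this points at an
    // older sqljdbc/mssql-jdbc jar, that jar is taking precedence over 6.4.0.
    println(
      classOf[com.microsoft.sqlserver.jdbc.SQLServerBulkCopyOptions]
        .getProtectionDomain.getCodeSource.getLocation
    )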


Note that in the pom.xml file of the SQL Spark connector found here, the artifactId used to be called sqljdbc42 instead of mssql-jdbc.

With that rename, the Maven dependency looks equivalent to the one written above.
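
For completeness, since the question uses the Scala spark-shell, the same coordinate in an sbt build would be the following (an illustrative sbt line, not from the original answer):

    // build.sbt -- pulls in the 6.4.0 driver under the renamed artifactId
    libraryDependencies += "com.microsoft.sqlserver" % "mssql-jdbc" % "6.4.0.jre8"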