Apache Spark: loading a DataFrame from PySpark

I am trying to connect from PySpark to an MS SQL database using spark.read.jdbc:

import os
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark import SparkContext
from pyspark.sql.session import SparkSession
sc = SparkContext.getOrCreate()
spark = SparkSession(sc)

df = spark.read \
     .format('jdbc') \
     .option('url', 'jdbc:sqlserver://local:1433') \
     .option('user', 'sa') \
     .option('password', '12345') \
     .option('dbtable', '(select COL1, COL2 from tbl1 WHERE COL1 = 2)')
Then I call df.load() and it returns an error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\spark\spark\python\pyspark\sql\readwriter.py", line 172, in load
    return self._df(self._jreader.load())
  File "C:\spark\spark\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1256, in __call__
  File "C:\spark\spark\python\pyspark\sql\utils.py", line 63, in deco
    return f(*a, **kw)
  File "C:\spark\spark\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o42.load.
: java.sql.SQLException: No suitable driver
        at java.sql.DriverManager.getDriver(Unknown Source)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions$$anonfun$6.apply(JDBCOptions.scala:105)
        at scala.Option.getOrElse(Option.scala:121)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:104)
        at org.apache.spark.sql.execution.datasources.jdbc.JDBCOptions.<init>(JDBCOptions.scala:35)
        at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:32)

What is wrong?

You need to download the JDBC driver and put it in the spark/jars folder. The "No suitable driver" error means the JVM could not find any registered JDBC driver that accepts the jdbc:sqlserver:// URL, which happens when the SQL Server driver jar is not on Spark's classpath.
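With the jar in place, a read along the lines of the question's code should work. A minimal sketch, not a verified setup: the databaseName value is a placeholder, the host is changed to localhost (the question used 'local'), and the explicit driver option and the AS t alias on the subquery are additions:

# Naming the driver class explicitly avoids relying on JDBC auto-discovery.
df = spark.read \
     .format('jdbc') \
     .option('url', 'jdbc:sqlserver://localhost:1433;databaseName=mydb') \
     .option('driver', 'com.microsoft.sqlserver.jdbc.SQLServerDriver') \
     .option('user', 'sa') \
     .option('password', '12345') \
     .option('dbtable', '(select COL1, COL2 from tbl1 WHERE COL1 = 2) AS t') \
     .load()

Note the AS t alias: even after the driver error is fixed, SQL Server rejects an unaliased derived table, so the subquery form of dbtable needs one.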

For SQL Server, you can download the Microsoft JDBC Driver for SQL Server (mssql-jdbc) from Microsoft's download site.
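An alternative to copying the jar into spark/jars is to point Spark at it through the spark.jars config when the session is created. A sketch; the jar path and version below are hypothetical:

from pyspark.sql import SparkSession

# spark.jars is only applied when the JVM starts, so this must run
# before any SparkContext/SparkSession exists in the process.
spark = SparkSession.builder \
    .appName('mssql-read') \
    .config('spark.jars', 'C:/drivers/mssql-jdbc-7.4.1.jre8.jar') \
    .getOrCreate()

This has no effect on an already-running SparkContext, which is why simply re-running spark.read after setting the config in an existing session still fails.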