Retrieving data from Azure HDInsight with PySpark

Tags: azure, apache-spark, pyspark, azure-sql-database, azure-dsvm

I have the credentials and the URL for accessing an Azure database.

I want to read the data with PySpark, but I don't know how to do it.

Is there a specific syntax for connecting to an Azure database?

Edit: After running the shared code, I get the error below. Any suggestions?

In an example I saw on my machine they use an ODBC driver; maybe that is involved.
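For reference, the JDBC read that produces the traceback below presumably looked something like this (a sketch reconstructed from the stack trace; the server, database, table, credentials, and helper names are all placeholders of mine):

```python
# Sketch of a Spark JDBC read against an Azure SQL Database.
# All names (server, database, table, credentials) are placeholders.

def azure_sql_jdbc_url(server, database):
    """Build the JDBC URL for an Azure SQL Database logical server."""
    return ("jdbc:sqlserver://{0}.database.windows.net:1433;"
            "database={1};encrypt=true;loginTimeout=30;").format(server, database)

def read_table(spark, server, database, table, user, password):
    """Return a Spark DataFrame backed by `table`, loaded over JDBC."""
    return (spark.read
                 .format("jdbc")
                 .option("url", azure_sql_jdbc_url(server, database))
                 .option("dbtable", table)
                 .option("user", user)
                 .option("password", password)
                 .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
                 .load())
```

Note that pointing this driver at servername.azurehdinsight.net on port 443 (as the log shows) may itself explain the prelogin failure: that host is the cluster's HTTPS gateway, not a SQL Server endpoint speaking the TDS protocol.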

2018-07-14 11:22:00 WARN  SQLServerConnection:2141 - ConnectionID:1 ClientConnectionId: 7561d3ba-71ac-43b3-a35f-26ababef90cc Prelogin error: host servername.azurehdinsight.net port 443 Error reading prelogin response: An existing connection was forcibly closed by the remote host ClientConnectionId:7561d3ba-71ac-43b3-a35f-26ababef90cc

Traceback (most recent call last):
  File "C:/Users/team2/PycharmProjects/Bridgestone/spark_driver_style.py", line 46, in <module>
    .option("password", "**********")\
  File "C:\dsvm\tools\spark-2.3.0-bin-hadoop2.7\python\pyspark\sql\readwriter.py", line 172, in load
    return self._df(self._jreader.load())
  File "C:\Users\team2\PycharmProjects\Bridgestone\venv\lib\site-packages\py4j\java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "C:\dsvm\tools\spark-2.3.0-bin-hadoop2.7\python\pyspark\sql\utils.py", line 63, in deco
    return f(*a, **kw)
  File "C:\Users\team2\PycharmProjects\Bridgestone\venv\lib\site-packages\py4j\protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o29.load.
: com.microsoft.sqlserver.jdbc.SQLServerException: An existing connection was forcibly closed by the remote host ClientConnectionId:7561d3ba-71ac-43b3-a35f-26ababef90cc
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:2400)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:2384)
    at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:1884)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.Prelogin(SQLServerConnection.java:2137)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1973)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:1628)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectInternal(SQLServerConnection.java:1459)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:773)
    at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:1168)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:63)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$createConnectionFactory$1.apply(JdbcUtils.scala:54)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:56)
    at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:115)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:52)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:340)
    at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:164)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)

If you want to access the HDInsight cluster from a PySpark notebook on the Data Science Virtual Machine, you can follow the procedure described in step 7.

Import the required packages:

# Import required packages
import pyodbc
import time
import json
import os
import urllib
import warnings
import re
import pandas as pd
Set up the Hive connection (requires the user and password from the cluster):
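Roughly, the connection is built with pyodbc like this (a sketch: the cluster name and credentials are placeholders, and the driver name assumes the Microsoft Hive ODBC Driver that ships with the DSVM):

```python
# Sketch of the pyodbc connection to the cluster's Hive endpoint.
# Placeholders: cluster name, user, password. Assumes the
# "Microsoft Hive ODBC Driver" is installed on the machine.

def hive_connection_string(cluster, user, password):
    """Build an ODBC connection string for HiveServer2 on an
    HDInsight cluster, reached over HTTPS on port 443."""
    return (
        "DRIVER={Microsoft Hive ODBC Driver};"
        "Host=" + cluster + ".azurehdinsight.net;"
        "Port=443;"
        "HiveServerType=2;"      # HiveServer2
        "AuthMech=3;"            # username/password authentication
        "ThriftTransport=2;"     # HTTP transport
        "SSL=1;"
        "HTTPPath=/hive2;"
        "UID=" + user + ";"
        "PWD=" + password + ";"
    )

# connection = pyodbc.connect(
#     hive_connection_string("mycluster", "admin", "password"),
#     autocommit=True)
```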

Query the data:

queryString = """
    show tables in default;
"""
pd.read_sql(queryString, connection)

I edited the question; the problem is that it now gives me the error you see at the top (the first line is repeated 10 times with different IDs). Any other suggestions for solving the problem?

To add more context to your question and make sure I understand: are you running PySpark on HDInsight and trying to access an Azure SQL DB, or are you running PySpark on a Data Science VM and trying to use HDInsight Spark as the compute context?

Sorry, I forgot to describe the context. I am running PySpark on a Data Science VM (D4s v3) and trying to use HDInsight because my tables contain a large amount of data, and I cannot use a Jupyter notebook on the HDInsight Spark cluster.

I updated my answer. This is from a Jupyter notebook in the DSVM; hopefully it solves your problem.