
Apache Spark: Error when querying a Hive table from PySpark

Tags: apache-spark, pyspark, hive

I want to query a Hive table using PySpark (currently running locally, but this will be moving to Databricks), and I keep running into errors. Armed with Java knowledge I don't possess, I've spent the better part of today trying various solutions from the web, and nothing seems to work.

Things I've tried:

  • Querying the table through DBeaver with the same credentials
  • Specifying the schema with StructType and StructField, but I get the same error
  • Connecting with PyHive and impyla, without success; I keep getting "TSocket read 0 bytes" errors (see the sketch below)
  • Registering it as a temp table and querying it with SQL, but the same error occurs
Any guidance is appreciated! Thanks
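
For reference, the PyHive attempt mentioned above would look roughly like the sketch below; the host, port, and credentials are placeholders, and the auth mode is an assumption. The "TSocket read 0 bytes" error usually points to a mismatch between the client's auth/transport settings and what HiveServer2 expects:

from pyhive import hive

# Placeholder connection details, not values from the question
conn = hive.Connection(
    host="hive-host.example.com",
    port=10000,
    username="user",
    password="secret",
    auth="LDAP",        # passing a password requires auth="LDAP" or "CUSTOM"
    database="default",
)
cursor = conn.cursor()
cursor.execute("SELECT * FROM ga_union LIMIT 10")
print(cursor.fetchall())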

from pyspark.sql import SparkSession

# Initialize the Spark session
spark = SparkSession.builder.appName('test').getOrCreate()

# Connection details (url, table, username, and password are defined elsewhere)
driver = "org.apache.hive.jdbc.HiveDriver"
remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", table) \
    .option("user", username) \
    .option("password", password) \
    .load() \
    .limit(100)

# Print the schema
remote_table.printSchema()

Output:


root
 |-- ga_union.calendar_date: string (nullable = true)
 |-- ga_union.profile_view: string (nullable = true)
 |-- ga_union.channel_grouping: string (nullable = true)
 |-- ga_union.device_category: string (nullable = true)
 |-- ga_union.ga_source: string (nullable = true)
 |-- ga_union.ga_medium: string (nullable = true)
 |-- ga_union.sessions: double (nullable = true)
 |-- ga_union.bounces: double (nullable = true)
 |-- ga_union.pageviews: double (nullable = true)
 |-- ga_union.users: double (nullable = true)
 |-- ga_union.total_time_on_site: double (nullable = true)
 |-- ga_union.newsletter_signup: double (nullable = true)
 |-- ga_union.configuration_starts: double (nullable = true)
 |-- ga_union.configuration_complete: double (nullable = true)
 |-- ga_union.goal15_completions: double (nullable = true)

# Show the first 10 rows
remote_table.select("*").show(10)

Output:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-29-58d5fd3b71ec> in <module>
----> 1 remote_table.select("*").show(10)

~/opt/anaconda3/lib/python3.7/site-packages/pyspark/sql/dataframe.py in show(self, n, truncate, vertical)
    438         """
    439         if isinstance(truncate, bool) and truncate:
--> 440             print(self._jdf.showString(n, 20, vertical))
    441         else:
    442             print(self._jdf.showString(n, int(truncate), vertical))

~/opt/anaconda3/lib/python3.7/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1303         answer = self.gateway_client.send_command(command)
   1304         return_value = get_return_value(
-> 1305             answer, self.gateway_client, self.target_id, self.name)
   1306 
   1307         for temp_arg in temp_args:

~/opt/anaconda3/lib/python3.7/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
    126     def deco(*a, **kw):
    127         try:
--> 128             return f(*a, **kw)
    129         except py4j.protocol.Py4JJavaError as e:
    130             converted = convert_exception(e.java_exception)

~/opt/anaconda3/lib/python3.7/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o158.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 4, us-c02sc3d2gvc1.fios-router.home, executor driver): java.sql.SQLException: Cannot convert column 7 to double: java.lang.NumberFormatException: For input string: "ga_union.sessions"
    at org.apache.hive.jdbc.HiveBaseResultSet.getDouble(HiveBaseResultSet.java:298)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$5(JdbcUtils.scala:417)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$5$adapted(JdbcUtils.scala:416)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:361)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:343)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NumberFormatException: For input string: "ga_union.sessions"
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
    at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
    at java.lang.Double.parseDouble(Double.java:538)
    at org.apache.hive.jdbc.HiveBaseResultSet.getDouble(HiveBaseResultSet.java:293)
    ... 22 more

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)
    at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
    at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)
    at scala.Option.foreach(Option.scala:407)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2120)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2139)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:467)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:420)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:47)
    at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3627)
    at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:2697)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2697)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2904)
    at org.apache.spark.sql.Dataset.getRows(Dataset.scala:300)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:337)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot convert column 7 to double: java.lang.NumberFormatException: For input string: "ga_union.sessions"
    at org.apache.hive.jdbc.HiveBaseResultSet.getDouble(HiveBaseResultSet.java:298)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$5(JdbcUtils.scala:417)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.$anonfun$makeGetter$5$adapted(JdbcUtils.scala:416)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:361)
    at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anon$1.getNext(JdbcUtils.scala:343)
    at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:31)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:729)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
    at org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:59)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:52)
    at org.apache.spark.scheduler.Task.run(Task.scala:127)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
Caused by: java.lang.NumberFormatException: For input string: "ga_union.sessions"
    at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
    at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110)
    at java.lang.Double.parseDouble(Double.java:538)
    at org.apache.hive.jdbc.HiveBaseResultSet.getDouble(HiveBaseResultSet.java:293)
    ... 22 more
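
Note what the exception is actually choking on: it is trying to parse the literal string "ga_union.sessions" (the name of column 7, the first non-string column in the schema above) as a double. That suggests HiveServer2 is returning the column names themselves as row values, a known quirk when Spark's generic JDBC source talks to Hive: Spark quotes column names with double quotes, which HiveQL interprets as string literals. A common suggestion is to skip the JDBC reader entirely and let Spark talk to Hive natively by enabling Hive support on the session (warehouse_location must point at your Hive warehouse directory):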
from pyspark.sql import SparkSession

# warehouse_location points to the default location of managed databases and tables
spark = SparkSession \
    .builder \
    .appName("Python Spark SQL Hive integration example") \
    .config("spark.sql.warehouse.dir", warehouse_location) \
    .enableHiveSupport() \
    .getOrCreate()
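
With Hive support enabled, the table can then be read through Spark SQL rather than JDBC; a short usage sketch, reusing the table name from the schema output above:

# Query the Hive table directly via Spark SQL (no JDBC round-trip)
remote_table = spark.sql("SELECT * FROM ga_union LIMIT 100")
remote_table.printSchema()
remote_table.show(10)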