PySpark: writing a DataFrame to a Cassandra table does not work
Reading data from a Cassandra table through PySpark works fine. However, when I try to write a DataFrame to a Cassandra table, it throws java.lang.NoClassDefFoundError, even though the same Spark Cassandra connector package is used. Version details: Cassandra:
Connected to Test Cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.0.18 | CQL spec 3.4.0 | Native protocol v4]
Use HELP for help.
Spark:
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/__ / .__/\_,_/_/ /_/\_\ version 2.4.3
/_/
Using Python version 2.7.5 (default, Jun 11 2019 14:33:56)
Spark Cassandra connector:
bin/pyspark --packages datastax:spark-cassandra-connector:2.4.0-s_2.11
Code:
>>> from pyspark import SparkContext, SparkConf
>>> from pyspark.sql import SQLContext, SparkSession
>>> from pyspark.sql.types import *
>>> import os
>>> spark = SparkSession.builder \
... .appName('SparkCassandraApp') \
... .config('spark.cassandra.connection.host', '127.0.0.1') \
... .config('spark.cassandra.connection.port', '9042') \
... .config('spark.cassandra.output.consistency.level','ONE') \
... .master('local[2]') \
... .getOrCreate()
>>> df = spark.read.format("org.apache.spark.sql.cassandra").options(table="emp",keyspace="tutorialspoint").load()
>>> df.show()
+------+---------+--------+----------+-------+
|emp_id| emp_city|emp_name| emp_phone|emp_sal|
+------+---------+--------+----------+-------+
| 2|Hyderabad| robin|9848022339| 40000|
| 1|Hyderabad| ram|9848022338| 50000|
| 3| Chennai| rahman|9848022330| 45000|
+------+---------+--------+----------+-------+
In the same terminal, trying to write to the Cassandra table:
>>> df.write\
... .format("org.apache.spark.sql.cassandra")\
... .mode('append')\
... .options(table="emp", keyspace="tutorialspoint")\
... .save()
19/09/26 15:34:15 ERROR Executor: Exception in task 6.0 in stage 3.0 (TID 25)
java.lang.NoClassDefFoundError: com/twitter/jsr166e/LongAdder
at org.apache.spark.metrics.OutputMetricsUpdater$TaskMetricsSupport$class.$init$(OutputMetricsUpdater.scala:107)
at org.apache.spark.metrics.OutputMetricsUpdater$TaskMetricsUpdater.<init>(OutputMetricsUpdater.scala:153)
at org.apache.spark.metrics.OutputMetricsUpdater$.apply(OutputMetricsUpdater.scala:75)
at com.datastax.spark.connector.writer.TableWriter.writeInternal(TableWriter.scala:209)
at com.datastax.spark.connector.writer.TableWriter.insert(TableWriter.scala:197)
at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:183)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
19/09/26 15:34:15 ERROR Executor: Exception in task 7.0 in stage 3.0 (TID 26)
java.lang.NoClassDefFoundError: com/twitter/jsr166e/LongAdder
at org.apache.spark.metrics.OutputMetricsUpdater$TaskMetricsSupport$class.$init$(OutputMetricsUpdater.scala:107)
at org.apache.spark.metrics.OutputMetricsUpdater$TaskMetricsUpdater.<init>(OutputMetricsUpdater.scala:153)
at org.apache.spark.metrics.OutputMetricsUpdater$.apply(OutputMetricsUpdater.scala:75)
at com.datastax.spark.connector.writer.TableWriter.writeInternal(TableWriter.scala:209)
at com.datastax.spark.connector.writer.TableWriter.insert(TableWriter.scala:197)
at com.datastax.spark.connector.writer.TableWriter.write(TableWriter.scala:183)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at com.datastax.spark.connector.RDDFunctions$$anonfun$saveToCassandra$1.apply(RDDFunctions.scala:36)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
You need a different Cassandra connector. The Datastax connector is for Scala/Java; you need its Python counterpart, pyspark-cassandra, which is a Python port of the Datastax Spark connector. For example:
import pyspark_cassandra
from pyspark_cassandra import CassandraSparkContext
from pyspark import SparkConf
from datetime import datetime, timedelta
from random import random

conf = SparkConf() \
    .setAppName("PySpark Cassandra Test") \
    .setMaster("spark://spark-master:7077") \
    .set("spark.cassandra.connection.host", "cas-1")
sc = CassandraSparkContext(conf=conf)

rdd = sc.parallelize([{
    "key": k,
    "stamp": datetime.now(),
    "val": random() * 10,
    "tags": ["a", "b", "c"],
    "options": {
        "foo": "bar",
        "baz": "qux",
    }
} for k in ["x", "y", "z"]])

rdd.saveToCassandra(
    "keyspace",
    "table",
    ttl=timedelta(hours=1),
)
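For the example above to run, the pyspark-cassandra package has to be on the classpath when pyspark starts. A possible invocation (the exact package coordinates and version are assumptions; check the project's README for the release matching your Spark version):

```shell
# Launch pyspark with the pyspark-cassandra package on the classpath.
# Coordinates are assumed -- verify against the pyspark-cassandra project.
bin/pyspark --packages anguenot:pyspark-cassandra:2.4.0
```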
This also happened to me; something seems to be wrong with the standard connector package. The jsr166e LongAdder jar is missing and/or replaced by another jar that pyspark fails to recognize somewhere. If you download the missing jar and include it as an additional package, it works. Solution derived from the following answer.
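As a sketch of that workaround, the jar providing com.twitter.jsr166e.LongAdder can be pulled in alongside the connector at launch (the Maven coordinates and version below are assumptions to verify on Maven Central):

```shell
# Add the jsr166e jar (provides com.twitter.jsr166e.LongAdder) as an extra
# package next to the connector; coordinates/version are assumed.
bin/pyspark \
  --packages datastax:spark-cassandra-connector:2.4.0-s_2.11,com.twitter:jsr166e:1.1.0
```

Alternatively, a locally downloaded copy of the jar can be passed with `--jars` instead of `--packages`.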
The Datastax connector should work fine with Python. — @AlexOtt I recently had an experience where it did not work as expected. Have you tried it? If that answer works, I will try to update mine. Did my solution help?