Apache Spark: is it possible to pass the Cassandra cluster name (defined in configuration) to save() via the Spark Cassandra Connector?
So, per the documentation, reading works fine:
val cql = new org.apache.spark.sql.cassandra.CassandraSQLContext(sc)
// Cluster-level settings are namespaced as "<cluster-name>/<property>"
cql.setConf("cluster-src/spark.cassandra.connection.host", "1.1.1.1")
cql.setConf("cluster-dst/spark.cassandra.connection.host", "2.2.2.2")
...
val df = cql.read.format("org.apache.spark.sql.cassandra")
  .option("table", "my_table")
  .option("keyspace", "my_keyspace")
  .option("cluster", "cluster-src")  // selects the source cluster's settings
  .load()
But it is not clear how to pass the destination cluster name to the save() counterpart. This obviously doesn't work; it just tries to connect to the local Spark host:
df.write
  .format("org.apache.spark.sql.cassandra")
  .option("table", "my_table")
  .option("keyspace", "my_keyspace")
  .option("cluster", "cluster-dst")
  .save()
Update:
Found a workaround, though it's a bit ugly. Instead of:
.option("cluster", "cluster-dst")
use:
.option("spark_cassandra_connection_host", cql.getConf("cluster-dst/spark.cassandra.connection.host"))
Looks like a bug. Filed a JIRA here: