Apache Spark: Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive)

I'm running the Spark Cassandra Connector and have hit a strange issue. I launch the Spark shell as follows:

bin/spark-shell --packages datastax:spark-cassandra-connector:2.0.0-M2-s_2.1
Then I run the following commands:

import com.datastax.spark.connector._
val rdd = sc.cassandraTable("test_spark", "test")
println(rdd.first)
# CassandraRow{id: 2, name: john, age: 29}
The problem is that the following gives an error:

rdd.take(1).foreach(println)
# CassandraRow{id: 2, name: john, age: 29}
rdd.take(2).foreach(println)
# Caused by: com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_ONE (1 required but only 0 alive)
# at com.datastax.driver.core.exceptions.UnavailableException.copy(UnavailableException.java:128)
# at com.datastax.driver.core.Responses$Error.asException(Responses.java:114)
# at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:467)
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1012)
# at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:935)
# at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
And the following command hangs:

println(rdd.count)
My Cassandra keyspace seems to have the correct replication factor:

describe test_spark;
CREATE KEYSPACE test_spark WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}  AND durable_writes = true;

How can I fix these two errors?

I assume you've run into an issue with SimpleStrategy and multiple DCs while using the LOCAL_ONE consistency level (the Spark connector default). The driver looks for a node in the local DC to issue the request to, but it's possible that all the replicas live in a different DC, so the requirement cannot be satisfied.

If you change the consistency level (set input.consistency.level to ONE), I think the problem will be resolved. You should also consider using NetworkTopologyStrategy.
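
For reference, here is a minimal sketch of both fixes as a standalone Scala app (in spark-shell you can instead pass --conf spark.cassandra.input.consistency.level=ONE at launch). The 127.0.0.1 host and the dc1 datacenter name are placeholder assumptions; substitute the values from your own cluster (e.g. from nodetool status):

import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._
import com.datastax.spark.connector.cql.CassandraConnector

// Override the connector's default LOCAL_ONE read consistency with ONE,
// so a replica in any DC can satisfy the read.
val conf = new SparkConf()
  .setMaster("local[*]") // for a local test; drop when submitting to a cluster
  .setAppName("cassandra-read")
  .set("spark.cassandra.connection.host", "127.0.0.1") // assumption: local node
  .set("spark.cassandra.input.consistency.level", "ONE")
val sc = new SparkContext(conf)

// Optionally switch the keyspace to NetworkTopologyStrategy so replica
// placement is DC-aware. "dc1" is a placeholder datacenter name.
CassandraConnector(sc.getConf).withSessionDo { session =>
  session.execute(
    "ALTER KEYSPACE test_spark WITH replication = " +
      "{'class': 'NetworkTopologyStrategy', 'dc1': 3}")
}

val rdd = sc.cassandraTable("test_spark", "test")
rdd.take(2).foreach(println) // previously threw UnavailableException
println(rdd.count)           // previously hung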