
Spring Boot reactive Cassandra WriteTimeoutException issue

Tags: spring-boot, reactive, spring-data-cassandra

I am trying to load data into Cassandra in batches of 200,000 using Spring reactive Cassandra. Most of the time it works fine, but sometimes I see the error below, which causes data loss when saving to Cassandra:

{"thread":"cluster1-nio-worker-0","level":"ERROR","loggerName":"com.google.common.util.concurrent.AbstractFuture","message":"RuntimeException while executing runnable org.springframework.data.cassandra.core.cql.session.DefaultBridgedReactiveSession$$Lambda$1041/1304979740@2b095cbc with executor org.springframework.data.cassandra.core.cql.session.DefaultBridgedReactiveSession$$Lambda$1042/1415755843@74ca8623","thrown":{"commonElementCount":0,"localizedMessage":"org.springframework.data.cassandra.CassandraWriteTimeoutException: ReactiveSessionCallback; CQL [INSERT INTO*******]
Cassandra timeout during SIMPLE write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write); nested exception is com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during SIMPLE write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)","name":"org.springframework.data.cassandra.CassandraWriteTimeoutException","cause":{"commonElementCount":0,"localizedMessage":"Cassandra timeout during SIMPLE write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)","message":"Cassandra timeout during SIMPLE write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)","name":"com.datastax.driver.core.exceptions.WriteTimeoutException","cause":{"commonElementCount":0,"localizedMessage":"Cassandra timeout during SIMPLE write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)

Is there any way to configure the write timeout in Spring Boot, or some other workaround for this problem? Please help. (I know the batch size is huge and that using such large batches is not ideal, but I have to complete this data load within a limited time.)
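For context (an assumption based on the `com.datastax.driver.core` package in the stack trace, i.e. the 3.x Java driver): the `WriteTimeoutException` above is raised by the coordinator node when replicas fail to acknowledge the write in time, so it is governed by the server-side `write_request_timeout_in_ms` setting in `cassandra.yaml`, not by a Spring Boot property. A minimal sketch of the server-side setting:

```
# cassandra.yaml (server side) -- governs the WriteTimeoutException seen above
write_request_timeout_in_ms: 2000    # default is 2000 ms; raise cautiously for bulk loads
```

The 3.x driver additionally enforces its own client-side socket timeout via `SocketOptions.setReadTimeoutMillis(...)` (default 12000 ms) when building the `Cluster`; raising only the client timeout will not prevent the coordinator-side write timeout shown in the log.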

"We have to finish the work quickly, so the answer is to throw more at the system." Depending on your situation, you may be hurting your performance rather than helping it. I don't know your setup, but if you have multiple nodes in your cluster and your batches span multiple partitions/nodes, that is exactly what you are doing: hurting performance, not helping it. If you are inserting into a single partition, that's fine. But if you keep getting errors and losing data, you are not meeting your goal and probably want to reduce the batch size.

Have you tried asynchronous inserts? They are very fast.

Yes, I am using async inserts.

Combining batches with async is probably not a good idea (never done it personally), and if you are spanning multiple partitions I think it would be disastrous. I would simply use async inserts without batching; that alone should be able to generate significant load. If you still have problems, I would check the nodes to make sure you are not overloading them (check for dropped mutations, etc.). If there are too many, you may have to back off. If the nodes are overloaded and cannot meet the SLA even with a good scaling design, you may need to add nodes to scale out.
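The advice above (drop the large batches and issue bounded async inserts instead) can be sketched in a driver-agnostic way. This is a minimal sketch, not Spring Data Cassandra API: `saveAsync` is a hypothetical stand-in for whatever asynchronous save your driver or template exposes, and a `Semaphore` caps the number of in-flight writes so the cluster is never flooded:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Function;

public class ThrottledInserts {

    // Issue async inserts, but never allow more than maxInFlight writes
    // to be outstanding at once; acquire() blocks the producer when full.
    static <T> List<CompletableFuture<T>> insertAll(List<T> rows,
            Function<T, CompletableFuture<T>> saveAsync,
            int maxInFlight) throws InterruptedException {
        Semaphore permits = new Semaphore(maxInFlight);
        List<CompletableFuture<T>> futures = new ArrayList<>();
        for (T row : rows) {
            permits.acquire();                          // wait for a free slot
            futures.add(saveAsync.apply(row)
                    .whenComplete((r, e) -> permits.release()));
        }
        return futures;
    }

    public static void main(String[] args) throws Exception {
        // Stub save: pretend each insert takes ~5 ms on another thread.
        Function<Integer, CompletableFuture<Integer>> saveAsync = row ->
                CompletableFuture.supplyAsync(() -> {
                    try { Thread.sleep(5); } catch (InterruptedException ignored) {}
                    return row;
                });

        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 100; i++) rows.add(i);

        List<CompletableFuture<Integer>> futures = insertAll(rows, saveAsync, 8);
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0])).join();
        System.out.println("inserted " + futures.size() + " rows");
    }
}
```

If writes still time out at a given limit, lowering `maxInFlight` is the "back off" the comments describe; dropped mutations on the nodes (visible via `nodetool tpstats`) are the signal that the cluster, not the client, is the bottleneck.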