Apache Ignite: Getting error "Getting affinity for topology version earlier than affinity is calculated"

I am running an Apache Ignite.NET 2.7 cluster in a Linux environment in Kubernetes. The Ignite cluster consists of 5 nodes running 3 microservices (2 instances of the first service, 2 of the second, and 1 of the third). Two of the microservices deploy Ignite services that call each other.
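
For reference, discovery was configured with the Kubernetes IP finder (as mentioned in the comments below). A minimal Java sketch of that configuration, with a hypothetical namespace and Kubernetes service name (our actual cluster is Ignite.NET, but the underlying discovery SPI is the same):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class KubernetesDiscoverySketch {
    public static void main(String[] args) {
        // IP finder that resolves peer addresses through the Kubernetes API.
        // The namespace and service name below are hypothetical placeholders.
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("ignite");
        ipFinder.setServiceName("ignite-nodes");

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);

        Ignite ignite = Ignition.start(cfg);
        System.out.println("Joined topology version: " + ignite.cluster().topologyVersion());
    }
}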

The cluster starts up successfully, discovery works correctly, and all nodes join the cluster. But suddenly both instances of one of the services (2 nodes) fail with the following error:

java.lang.IllegalStateException: Getting affinity for topology version earlier than affinity is calculated [locNode=TcpDiscoveryNode [id=76308a3b-221a-4307-b181-bd4e66d82683, addrs=[10.0.0.62, 127.0.0.1], sockAddrs=[/127.0.0.1:47500, product-service-deployment-7dd5496d58-l426m/10.0.0.62:47500], discPort=47500, order=8, intOrder=6, lastExchangeTime=1560283011887, loc=true, ver=2.7.0#20181130-sha1:256ae401, isClient=false], grp=ignite-sys-cache, topVer=AffinityTopologyVersion [topVer=17, minorTopVer=0], head=AffinityTopologyVersion [topVer=18, minorTopVer=0], history=[AffinityTopologyVersion [topVer=9, minorTopVer=0], AffinityTopologyVersion [topVer=11, minorTopVer=0], AffinityTopologyVersion [topVer=11, minorTopVer=1], AffinityTopologyVersion [topVer=12, minorTopVer=0], AffinityTopologyVersion [topVer=14, minorTopVer=0], AffinityTopologyVersion [topVer=16, minorTopVer=0], AffinityTopologyVersion [topVer=18, minorTopVer=0]]]
    at org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.cachedAffinity(GridAffinityAssignmentCache.java:712)
    at org.apache.ignite.internal.processors.affinity.GridAffinityAssignmentCache.nodes(GridAffinityAssignmentCache.java:612)
    at org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.nodesByPartition(GridCacheAffinityManager.java:226)
    at org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByPartition(GridCacheAffinityManager.java:266)
    at org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByKey(GridCacheAffinityManager.java:257)
    at org.apache.ignite.internal.processors.cache.GridCacheAffinityManager.primaryByKey(GridCacheAffinityManager.java:281)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor$TopologyListener$1.run0(GridServiceProcessor.java:1877)
    at org.apache.ignite.internal.processors.service.GridServiceProcessor$DepRunnable.run(GridServiceProcessor.java:2064)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
This causes the other service to fail as well, since it depends on the first one:

Unhandled Exception: Apache.Ignite.Core.Services.ServiceInvocationException: Proxy method invocation failed with an exception. Examine InnerException for details. ---> Apache.Ignite.Core.Common.IgniteException: Failed to find deployed service: ProductService ---> Apache.Ignite.Core.Common.JavaException: class org.apache.ignite.IgniteException: Failed to find deployed service: ProductService
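
For context, the dependent service reaches ProductService through an Ignite service proxy. A minimal Java sketch of the deploy-and-invoke pattern involved (the ProductService interface and implementation below are hypothetical stand-ins; our actual services are written against Ignite.NET):

import org.apache.ignite.Ignite;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class ServiceProxySketch {
    // Hypothetical service API; the real ProductService contract is not shown here.
    public interface ProductService {
        String productInfo(int id);
    }

    public static class ProductServiceImpl implements ProductService, Service {
        @Override public void init(ServiceContext ctx) { /* allocate resources */ }
        @Override public void execute(ServiceContext ctx) { /* service loop, if any */ }
        @Override public void cancel(ServiceContext ctx) { /* release resources */ }
        @Override public String productInfo(int id) { return "product-" + id; }
    }

    public static void deployAndCall(Ignite ignite) {
        // Deploy a single instance of the service across the cluster.
        ignite.services().deployClusterSingleton("ProductService", new ProductServiceImpl());

        // Obtain a proxy and call through it. The call throws
        // "Failed to find deployed service" if the deployment was lost,
        // e.g. after the topology churn shown in the logs below.
        ProductService svc = ignite.services().serviceProxy("ProductService", ProductService.class, false);
        System.out.println(svc.productInfo(42));
    }
}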
Since Kubernetes keeps restarting the second service, the first service reports constant topology changes:

[19:57:14] Topology snapshot [ver=20, locNode=76308a3b, servers=4, clients=0, state=ACTIVE, CPUs=4, offheap=6.2GB, heap=2.0GB]
[19:57:15] Topology snapshot [ver=21, locNode=76308a3b, servers=5, clients=0, state=ACTIVE, CPUs=5, offheap=7.8GB, heap=2.5GB]
[19:57:17] Topology snapshot [ver=22, locNode=76308a3b, servers=4, clients=0, state=ACTIVE, CPUs=4, offheap=6.2GB, heap=2.0GB]
[19:57:49] Topology snapshot [ver=23, locNode=76308a3b, servers=5, clients=0, state=ACTIVE, CPUs=5, offheap=7.8GB, heap=2.5GB]
[19:57:50] Topology snapshot [ver=24, locNode=76308a3b, servers=4, clients=0, state=ACTIVE, CPUs=4, offheap=6.2GB, heap=2.0GB]
[19:57:56] Topology snapshot [ver=25, locNode=76308a3b, servers=5, clients=0, state=ACTIVE, CPUs=5, offheap=7.8GB, heap=2.5GB]
[19:57:58] Topology snapshot [ver=26, locNode=76308a3b, servers=4, clients=0, state=ACTIVE, CPUs=4, offheap=6.2GB, heap=2.0GB]
[19:58:41] Topology snapshot [ver=27, locNode=76308a3b, servers=5, clients=0, state=ACTIVE, CPUs=5, offheap=7.8GB, heap=2.5GB]
Shortly before I noticed the problem, I performed a minor reconfiguration of the Kubernetes cluster that did not cause any pod restarts. I am not sure whether this is the cause.

Is this a known issue with a solution? What should I check (particularly in the logs) to shed light on this situation?


Thank you all!

The "Getting affinity for topology version earlier than affinity is calculated" error is caused by a known issue. Here is the JIRA ticket:

No negative effects from this issue have been observed so far, so the pod failures were probably caused by something else.


This problem won't occur in Ignite 2.8, since the service processor implementation has been completely reworked. Here is the related IEP:

Thanks for the reply. To make the cluster run more reliably in the Kubernetes environment, I abandoned the Kubernetes IP finder and deployed Zookeeper instead. Occasional issues still occur, but overall the nodes report far fewer errors. When is v2.8 expected to be officially released?

@AlexAvrutin There is no exact plan for the 2.8 release yet. New versions are usually released every 3 months. The previous release, 2.7.5, came out only a week ago, so I suppose the next one will be 2.8. You can try one of them, but I wouldn't recommend sticking with it beyond development.
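
For anyone following the same route, here is a minimal Java sketch of the Zookeeper-based discovery mentioned above, assuming hypothetical ZooKeeper hosts (requires the ignite-zookeeper module on the classpath):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;

public class ZookeeperDiscoverySketch {
    public static void main(String[] args) {
        // ZooKeeper-based discovery, which tends to cope better with the
        // frequent topology changes caused by Kubernetes pod restarts.
        ZookeeperDiscoverySpi zkSpi = new ZookeeperDiscoverySpi();
        zkSpi.setZkConnectionString("zk-0:2181,zk-1:2181,zk-2:2181"); // hypothetical hosts
        zkSpi.setSessionTimeout(30_000);
        zkSpi.setJoinTimeout(10_000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(zkSpi);

        Ignite ignite = Ignition.start(cfg);
        System.out.println("Joined topology version: " + ignite.cluster().topologyVersion());
    }
}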