What happens internally when the akka.conf file is read?
I am using OpenDaylight and trying to replace the default distributed database with Apache Ignite. I am using a jar built from the source code linked here, and I deployed it in the OpenDaylight Karaf container. Below is the part of the akka.conf file I use in OpenDaylight, which replaces the LevelDB journal with Apache Ignite:
odl-cluster-data {
  akka {
    loglevel = DEBUG
    actor {
      provider = "akka.cluster.ClusterActorRefProvider"
      default-dispatcher {
        # Configuration for the fork join pool
        fork-join-executor {
          # Min number of threads to cap factor-based parallelism number to
          parallelism-min = 2
          # Parallelism (threads) ... ceil(available processors * factor)
          parallelism-factor = 2.0
          # Max number of threads to cap factor-based parallelism number to
          parallelism-max = 10
        }
        # Throughput defines the maximum number of messages to be
        # processed per actor before the thread jumps to the next actor.
        # Set to 1 for as fair as possible.
        throughput = 10
      }
    }
    remote {
      log-remote-lifecycle-events = off
      netty.tcp {
        hostname = "10.145.59.44"
        port = 2551
      }
    }
    cluster {
      seed-nodes = [
        "akka.tcp://test@127.0.0.1:2551"
      ]
      min-nr-of-members = 1
      auto-down-unreachable-after = 30s
    }
    # Disable legacy metrics in akka-cluster.
    akka.cluster.metrics.enabled = off
    akka.persistence.journal.plugin = "akka.persistence.journal.ignite"
    akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot.ignite"
    extensions = ["akka.persistence.ignite.extension.IgniteExtensionProvider"]
    persistence {
      # Ignite journal plugin
      journal {
        ignite {
          # Class name of the plugin
          class = "akka.persistence.ignite.journal.IgniteWriteJournal"
          plugin-dispatcher = "ignite-dispatcher"
          cache-prefix = "akka-journal"
          // Should be chosen based on the data grid topology
          cache-backups = 1
          // if Ignite is already started in a separate standalone grid where the journal cache is already created
          cachesAlreadyCreated = false
        }
      }
      # Ignite snapshot plugin
      snapshot {
        ignite {
          # Class name of the plugin
          class = "akka.persistence.ignite.snapshot.IgniteSnapshotStore"
          plugin-dispatcher = "ignite-dispatcher"
          cache-prefix = "akka-snapshot"
          // Should be chosen based on the data grid topology
          cache-backups = 1
          // if Ignite is already started in a separate standalone grid where the snapshot cache is already created
          cachesAlreadyCreated = false
        }
      }
    }
  }
}
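For context on how such a file is consumed: when the ActorSystem starts, Akka parses the HOCON with the Typesafe Config library, reads `akka.persistence.journal.plugin` to get the config path of the journal plugin, and then reads the `class` entry under that path to find the class to instantiate. A dependency-free sketch of that two-step lookup, using a plain `Map` as a stand-in for the Typesafe Config tree (the keys and values mirror the config above):

```java
import java.util.Map;

public class PluginLookupSketch {
    public static void main(String[] args) {
        // Flattened key/value stand-in for the parsed akka.conf entries
        Map<String, String> conf = Map.of(
            "akka.persistence.journal.plugin", "akka.persistence.journal.ignite",
            "akka.persistence.journal.ignite.class",
            "akka.persistence.ignite.journal.IgniteWriteJournal");

        // Step 1: the configured plugin id is itself a config path...
        String pluginPath = conf.get("akka.persistence.journal.plugin");
        // Step 2: ...under which the fully-qualified class name is read
        String className = conf.get(pluginPath + ".class");
        System.out.println(className);
    }
}
```

This indirection is why both lines matter: the `plugin` key must point at a path that actually exists in the merged config tree, otherwise the plugin is never resolved.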
However, the IgniteWriteJournal class does not seem to be loaded. I checked this by putting some print statements in its constructor, as shown below:
public IgniteWriteJournal(Config config) throws NotSerializableException {
    System.out.println("!@#$% inside IgniteWriteJournal constructor\n");
    ActorSystem actorSystem = context().system();
    serializer = SerializationExtension.get(actorSystem).serializerFor(PersistentRepr.class);
    storage = new Store<>(actorSystem);
    JournalCaches journalCaches = journalCacheProvider.apply(config, actorSystem);
    sequenceNumberTrack = journalCaches.getSequenceCache();
    cache = journalCaches.getJournalCache();
}
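For what it's worth, Akka creates the journal actor reflectively: it resolves the configured class name with `Class.forName` and invokes the constructor that accepts the plugin's `Config` sub-tree, which is why the constructor above takes a `Config` argument. A dependency-free sketch of that pattern, using a hypothetical stand-in class whose constructor takes a `String` instead of a Typesafe `Config`:

```java
public class ReflectiveLoadSketch {
    // Hypothetical stand-in for a journal plugin; the real plugin's
    // constructor takes a com.typesafe.config.Config instead of a String
    public static class FakeJournal {
        final String config;
        public FakeJournal(String config) { this.config = config; }
    }

    public static void main(String[] args) throws Exception {
        String className = ReflectiveLoadSketch.class.getName() + "$FakeJournal";
        // Akka-style reflective construction: resolve the class by name,
        // then invoke the matching one-argument constructor
        Class<?> cls = Class.forName(className);
        Object journal = cls.getConstructor(String.class).newInstance("cache-prefix = akka-journal");
        System.out.println(journal.getClass().getSimpleName());
    }
}
```

So yes, if the plugin is resolved at all, its constructor is invoked; if the class name cannot be resolved or the expected constructor is missing, plugin startup fails instead.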
So what exactly happens to the class referenced under the akka.persistence.journal.ignite setting? Does that class's constructor ever get invoked? What happens behind the scenes when the akka.conf file is read?

Comment: Where are you looking for the print output, in data/log/karaf.log? System.out.println will not show up there; use an org.slf4j.Logger instead.
Comment: How did you rebuild the IgniteWriteJournal source and deploy the new artifact? Are you sure the changes were actually deployed? The print statements would show up in the Apache Karaf console.

Reply: Besides those print lines, the source code also has log lines such as log.debug("doAsyncReplayMessages with params persistenceId: '{}' :fromSequenceNr {} :toSequenceNr {} :max {}", persistenceId, fromSequenceNr, toSequenceNr, max); and these log lines do not appear in Karaf either. I rebuilt the artifact with a Maven build and hot-deployed it in Karaf. I am sure the changes took effect, because the print statement in the extension class does get printed in the Karaf console.
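The advice in the comments above — log through a logging framework so the output is routed into data/log/karaf.log rather than lost — can be sketched as follows. This uses the JDK's java.util.logging only to keep the example dependency-free; in the actual plugin you would use org.slf4j.Logger as suggested, and Karaf's Pax Logging routes both into karaf.log:

```java
import java.util.logging.Logger;

public class JournalLoggingSketch {
    // In the real plugin this would be org.slf4j.LoggerFactory.getLogger(...);
    // java.util.logging is used here only so the sketch is self-contained
    private static final Logger LOG = Logger.getLogger(JournalLoggingSketch.class.getName());

    public JournalLoggingSketch() {
        // Unlike System.out.println, this goes through the logging framework,
        // whose output the container captures in its log file
        LOG.info("inside IgniteWriteJournal constructor");
    }

    public static void main(String[] args) {
        new JournalLoggingSketch();
    }
}
```

With logging in place, a constructor that is never reached produces no log line at all, which is itself diagnostic: it distinguishes "plugin not resolved" from "output written somewhere you are not looking".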