Cluster computing: Artemis cluster message redistribution doesn't seem to work

Tags: cluster-computing, message-queue, activemq-artemis

I'm using Artemis 2.6.3. I created two nodes in a symmetric topology.
Producers and consumers are handled through a Spring broker relay. It writes to the address "/topic/notification/username/lual" (multicast), and the queue that gets created is non-durable. A consumer only receives messages when it is connected to node 1 (where the messages are produced). I can connect one consumer to node 1, which receives messages, and another to node 2, which receives nothing. If both consumers are on node 2, neither receives anything. I believe message redistribution is not working, but I can't figure out why. I followed the examples and the available documentation. I've included the full configuration below.

The resulting cluster diagram: (image not preserved)

broker.xml

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>$HOSTNAME</name>


      <persistence-enabled>true</persistence-enabled>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>ASYNCIO</journal-type>

      <paging-directory>data/paging</paging-directory>

      <bindings-directory>data/bindings</bindings-directory>

      <journal-directory>data/journal</journal-directory>

      <large-messages-directory>data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>10</journal-pool-files>

      <journal-file-size>10M</journal-file-size>

      <!--
       This value was determined through a calculation.
       Your system could perform 0.47 writes per millisecond
       on the current journal configuration.
       That translates as a sync write every 2148000 nanoseconds.

       Note: If you specify 0 the system will perform writes directly to the disk.
             We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
      -->
      <journal-buffer-timeout>2148000</journal-buffer-timeout>


      <!--
        When using ASYNCIO, this will determine the writing queue depth for libaio.
       -->
      <journal-max-io>1</journal-max-io>
      <!--
        You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
         <network-check-NIC>theNicName</network-check-NIC>
        -->

      <!--
        Use this to use an HTTP server to validate the network
         <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->

      <!-- <network-check-period>10000</network-check-period> -->
      <!-- <network-check-timeout>1000</network-check-timeout> -->

      <!-- this is a comma separated list, no spaces, just DNS or IPs
           it should accept IPV6

           Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                    Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                    You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
      <!-- <network-check-list>10.0.0.1</network-check-list> -->

      <!-- use this to customize the ping used for ipv4 addresses -->
      <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->

      <!-- use this to customize the ping used for ipv6 addresses -->
      <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->



    <connectors>
        <!-- Connector used to be announced through cluster connections and notifications -->
        <connector name="artemis">tcp://$HOSTNAME:61616</connector>
    </connectors>



      <!-- how often we are looking for how many bytes are being used on the disk in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>

      <critical-analyzer-timeout>120000</critical-analyzer-timeout>

      <critical-analyzer-check-period>60000</critical-analyzer-check-period>

      <critical-analyzer-policy>HALT</critical-analyzer-policy>

      <!-- the system will enter into page mode once you hit this limit.
           This is an estimate in bytes of how much the messages are using in memory

            The system will use half of the available memory (-Xmx) by default for the global-max-size.
            You may specify a different value here if you need to customize it to your needs.

            <global-max-size>100Mb</global-max-size>

      -->

      <acceptors>

         <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->

         <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                    "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                    See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->

         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://$HOSTNAME:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>

         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://$HOSTNAME:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://$HOSTNAME:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://$HOSTNAME:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://$HOSTNAME:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors>


      <cluster-user>adminCluster</cluster-user>
      <cluster-password>adminCluster</cluster-password>

      <broadcast-groups>
        <broadcast-group name="artemis-broadcast-group">
            <jgroups-file>jgroups-stacks.xml</jgroups-file>
            <jgroups-channel>artemis_broadcast_channel</jgroups-channel>
            <!--<broadcast-period>5000</broadcast-period>-->
            <connector-ref>artemis</connector-ref>
        </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
        <discovery-group name="artemis-discovery-group">
          <jgroups-file>jgroups-stacks.xml</jgroups-file>
          <jgroups-channel>artemis_broadcast_channel</jgroups-channel>
          <refresh-timeout>10000</refresh-timeout>
        </discovery-group>
      </discovery-groups>

      <cluster-connections>
        <cluster-connection name="artemis-cluster">
          <address>#</address>
          <connector-ref>artemis</connector-ref>
          <check-period>1000</check-period>
          <connection-ttl>5000</connection-ttl>
          <min-large-message-size>50000</min-large-message-size>
          <call-timeout>5000</call-timeout>
          <retry-interval>500</retry-interval>
          <retry-interval-multiplier>2.0</retry-interval-multiplier>
          <max-retry-interval>5000</max-retry-interval>
          <initial-connect-attempts>-1</initial-connect-attempts>
          <reconnect-attempts>-1</reconnect-attempts>
          <use-duplicate-detection>true</use-duplicate-detection>
          <forward-when-no-consumers>false</forward-when-no-consumers>
          <max-hops>1</max-hops>
          <confirmation-window-size>32000</confirmation-window-size>
          <call-failover-timeout>30000</call-failover-timeout>
          <notification-interval>1000</notification-interval>
          <notification-attempts>2</notification-attempts>
          <discovery-group-ref discovery-group-name="artemis-discovery-group"/>
        </cluster-connection>
      </cluster-connections>


      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
            <redistribution-delay>0</redistribution-delay>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>


I believe your problem lies in the value of address in your cluster-connection, i.e.:

<address>#</address>

The documentation is clear on this point (emphasis mine):

    Each cluster connection only applies to addresses that match the specified address field. An address is matched on the cluster connection when it begins with the string specified in this field. The address field on a cluster connection also supports comma-separated lists and an exclude syntax. To prevent an address from being matched on this cluster connection, prepend the cluster connection address string with "!".
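
For illustration, a few hypothetical values showing how that matching and exclude syntax behaves (the "eu" prefixes are made up for the example):

    <address>eu</address>           <!-- matches every address beginning with "eu" -->
    <address>eu.uk,eu.de</address>  <!-- comma-separated list: matches addresses beginning with "eu.uk" or "eu.de" -->
    <address>eu,!eu.uk</address>    <!-- matches addresses beginning with "eu", except those beginning with "eu.uk" -->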

Therefore, by using # as the value of address, you are saying that only addresses which begin with # should be clustered, which is probably not what you want. My guess is that you want all addresses to be clustered, in which case you can simply leave address empty. The example in the documentation is empty, and the documentation states:

    In the case shown above the cluster connection will load balance messages sent to all addresses (since it's empty).
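
So the fix is, in essence, just emptying the filter. A minimal sketch, with the remaining elements abbreviated from the configuration posted in the question:

    <cluster-connection name="artemis-cluster">
       <address></address> <!-- empty: match, and therefore cluster, all addresses -->
       <connector-ref>artemis</connector-ref>
       <use-duplicate-detection>true</use-duplicate-detection>
       <max-hops>1</max-hops>
       <discovery-group-ref discovery-group-name="artemis-discovery-group"/>
    </cluster-connection>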


Comments:

"Have you confirmed the cluster is forming correctly? Do you see logging indicating the cluster bridges have connected successfully? Also, why are you using JGroups for clustering? Why not just use the default multicast configuration?"

"I've attached a log. From the diagram the cluster appears to be formed. In the log I also see AMQ221027: Bridge ClusterConnectionBridge twice. As for JGroups: just because I want to learn. At the moment I'm not using multicast but UDP broadcast (less efficient), which I guess is less trouble in Docker."

"FWIW, when I say 'multicast' I mean 'UDP multicast', which as far as I can tell is the same thing as 'UDP broadcast'."

"I can't tell what's happening based on the information you've provided. Have you tried simplifying the use case, e.g. taking the Spring broker relay out of the equation, or running two brokers on the same machine with a static cluster (see the sketch after this thread)? Can you work up a reproducible test case?"

"Well... multicast works on a specific address range, whereas broadcast on a /24 CIDR would use an address ending in .255 (but that's not important for this topic). If you're comfortable with Docker, I think I can share the Apache ActiveMQ Artemis part on GitHub. I suppose you have an easy way to create a STOMP producer and consumer for testing, don't you?"

"As far as I know, redistribution works over the core bridge connections, and as far as I can see those appear to be established. I think the cluster discovery process is working fine."

"Now it works as expected :) I was pretty sure it had to be a n00b mistake. Thank you very much for finding it."
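
For reference, a rough sketch of the static two-broker setup suggested in the comments, assuming both brokers run on one machine with hypothetical ports 61616 and 61617; JGroups/UDP discovery is replaced by an explicit connector list:

    <connectors>
       <!-- this broker's own connector, announced to the cluster -->
       <connector name="artemis">tcp://localhost:61616</connector>
       <!-- the other broker in the pair -->
       <connector name="other-node">tcp://localhost:61617</connector>
    </connectors>

    <cluster-connections>
       <cluster-connection name="artemis-cluster">
          <address></address>
          <connector-ref>artemis</connector-ref>
          <use-duplicate-detection>true</use-duplicate-detection>
          <message-load-balancing>ON_DEMAND</message-load-balancing>
          <max-hops>1</max-hops>
          <static-connectors>
             <connector-ref>other-node</connector-ref>
          </static-connectors>
       </cluster-connection>
    </cluster-connections>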