Adding single-token nodes to an existing DataStax Cassandra cluster: data transfer not working


Adding new single-token nodes (one token per node) to our existing DataStax cluster is not working; the data does not transfer. The procedure we followed is described below. Please let me know if the procedure I followed is wrong. Thanks.

We have 3 single-token-range DataStax nodes in our AWS EC2 datacenter, with Search and Graph enabled. We plan to add 3 more nodes to the datacenter. We are currently using DseSimpleSnitch and a simple (SimpleStrategy) keyspace topology, and our current replication factor is 2.

Node 1: 10.10.1.36
Node 2: 10.10.1.46
Node 3: 10.10.1.56

 cat /etc/default/dse | grep -E 'GRAPH_ENABLED=|SOLR_ENABLED='
   GRAPH_ENABLED=1  
   SOLR_ENABLED=1  
Datacenter: SearchGraph

Address     Rack          Status   State    Load      Owns Token               
10.10.1.46  rack1       Up     Normal  760.14 MiB  ? -9223372036854775808                  
10.10.1.36  rack1       Up     Normal  737.69 MiB  ? -3074457345618258603                   
10.10.1.56  rack1       Up     Normal  752.25 MiB  ? 3074457345618258602                   
Step (1): To add the 3 new nodes to the datacenter, we first changed the keyspace topology and the snitch to be datacenter/rack aware.

1) Changed the snitch:

 cat /etc/dse/cassandra/cassandra.yaml | grep endpoint_snitch:
   endpoint_snitch: GossipingPropertyFileSnitch

cat /etc/dse/cassandra/cassandra-rackdc.properties |grep -E 'dc=|rack='
  dc=SearchGraph
  rack=rack1
2) (a) Shut down all nodes and then restarted them.

(b) Ran a sequential repair and nodetool cleanup on each node, as sketched below.
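
A minimal sketch of step 2(b), assuming the stock nodetool on each node (the -seq flag requests a sequential repair on recent Cassandra/DSE versions):

     # Run on each node, one node at a time.
     nodetool repair -seq      # sequential repair of all keyspaces on this node
     nodetool cleanup          # drop data this node no longer owns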

3) Changed the keyspace topology:

ALTER KEYSPACE tech_app1 WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'SearchGraph' : 2};
ALTER KEYSPACE tech_app2 WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'SearchGraph' : 2};
ALTER KEYSPACE tech_chat WITH REPLICATION = {'class' : 'NetworkTopologyStrategy', 'SearchGraph' : 2};
Reference:
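
To confirm the ALTER statements took effect, the replication settings can be read back from the schema tables; a quick check, assuming cqlsh is pointed at one of the existing nodes (10.10.1.36 here):

     cqlsh 10.10.1.36 -e "SELECT keyspace_name, replication FROM system_schema.keyspaces WHERE keyspace_name IN ('tech_app1', 'tech_app2', 'tech_chat');"

Each row should now show NetworkTopologyStrategy with 'SearchGraph': '2' in the replication map.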

Step (2): To update the token ranges and set up the new Cassandra nodes, we followed the procedure below.

1) Recalculated the token ranges:

root@ip-10-10-1-36:~# token-generator
DC#1:
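
The token-generator output is truncated above; for reference, evenly spaced Murmur3 tokens for a 6-node single-token ring can be reproduced with a one-liner like this (assumes a Python interpreter is available on the host):

     python -c 'n = 6; print([str((2**64 // n) * i - 2**63) for i in range(n)])'
     # ['-9223372036854775808', '-6148914691236517206', '-3074457345618258604',
     #  '-2', '3074457345618258600', '6148914691236517202']

The three existing nodes already sit at (approximately) every other value in this list, so the three new nodes should take the alternating values that bisect the existing ranges, which is exactly what the update at the end of this post arrives at (-6148914691236517206, -2 and 6148914691236517202).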

2) Installed the same version of DataStax Enterprise on the new nodes.

3) Stopped the node service and cleared the data.

4) (a) Assigned token ranges to the new nodes as follows:

Node 4: 10.10.2.96     Range: -2 
Node 5: 10.10.2.97     Range: 3074457345618258600
Node 6: 10.10.2.86     Range: 6148914691236517202
4) (b) Configured cassandra.yaml on each new node:

Node 4:

cluster_name: 'SearchGraph' 
num_tokens: 1
initial_token: -2  
parameters: 
- seeds: "10.10.1.46, 10.10.1.56" 
listen_address: 10.10.2.96 
rpc_address: 10.10.2.96 
endpoint_snitch: GossipingPropertyFileSnitch
Node 5:

cluster_name: 'SearchGraph' 
num_tokens: 1
initial_token: 3074457345618258600  
parameters: 
- seeds: "10.10.1.46, 10.10.1.56" 
listen_address: 10.10.2.97 
rpc_address: 10.10.2.97
endpoint_snitch: GossipingPropertyFileSnitch
Node 6:

cluster_name: 'SearchGraph' 
num_tokens: 1
initial_token: 6148914691236517202   
parameters: 
- seeds: "10.10.1.46, 10.10.1.56" 
listen_address: 10.10.2.86 
rpc_address: 10.10.2.86 
endpoint_snitch: GossipingPropertyFileSnitch
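
Before starting the new nodes, a quick way to double-check the three cassandra.yaml files is to grep the relevant keys on each host; a convenience sketch, assuming SSH access and the package-install path used above:

     for ip in 10.10.2.96 10.10.2.97 10.10.2.86; do
       echo "== $ip =="
       ssh "$ip" "grep -E '^(cluster_name|num_tokens|initial_token|listen_address|rpc_address|endpoint_snitch)' /etc/dse/cassandra/cassandra.yaml"
     done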
5) Changed the snitch:

cat /etc/dse/cassandra/cassandra.yaml | grep endpoint_snitch:
endpoint_snitch: GossipingPropertyFileSnitch

cat /etc/dse/cassandra/cassandra-rackdc.properties |grep -E 'dc=|rack='
dc=SearchGraph
rack=rack1
6) Started DataStax Enterprise on each new node, two minutes apart, with consistent range movement disabled:

JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"
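
For reference, on a package install this JVM option is typically appended to cassandra-env.sh (the exact file can vary by DSE version and install method); a sketch of preparing and starting one new node:

     echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"' | sudo tee -a /etc/dse/cassandra/cassandra-env.sh
     sudo service dse start    # then wait ~2 minutes before starting the next node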
7) After the new nodes were fully bootstrapped, assigned the recalculated initial_token values from Step 4(a) to the existing nodes using nodetool move, running it on one node at a time:

On  Node 1(10.10.1.36)  :  nodetool move -3074457345618258603
On  Node 2(10.10.1.46)  :  nodetool move -9223372036854775808
On  Node 3(10.10.1.56)  :  nodetool move  3074457345618258602
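
Each nodetool move streams data, so it is worth waiting for one move to finish before starting the next; a sketch of how progress can be checked:

     nodetool netstats    # run on the node being moved: shows Mode: MOVING and active streams
     nodetool status      # cluster-wide view: wait until the node is back to UN (Up/Normal)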
Datacenter: SearchGraph

Address     Rack        Status State   Load        Owns Token
10.10.1.46  rack1       Up     Normal  852.93 MiB  ?    -9223372036854775808
10.10.1.36  rack1       Up     Moving  900.12 MiB  ?    -3074457345618258603
10.10.2.96  rack1       Up     Normal  465.02 KiB  ?    -2
10.10.2.97  rack1       Up     Normal  109.16 MiB  ?    3074457345618258600
10.10.1.56  rack1       Up     Moving  594.49 MiB  ?    3074457345618258602
10.10.2.86  rack1       Up     Normal  663.94 MiB  ?    6148914691236517202
However, we encountered the following errors while the nodes were joining:

AbstractSolrSecondaryIndex.java:1884 - Cannot find core chat.chat_history
AbstractSolrSecondaryIndex.java:1884 - Cannot find core chat.history
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.business_units
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.feeds
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.feeds_2
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.knowledegmodule
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.userdetails
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.userdetails_2
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.vault_details
AbstractSolrSecondaryIndex.java:1884 - Cannot find core search.workgroup
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.feeds
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.knowledgemodule
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.organizations
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.userdetails
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.vaults
AbstractSolrSecondaryIndex.java:1884 - Cannot find core cloud.workgroup
The node join failed with the following error:

ERROR [main] 2017-08-10 04:22:08,449  DseDaemon.java:488 - Unable to start DSE server.
com.datastax.bdp.plugin.PluginManager$PluginActivationException: Unable to activate plugin com.datastax.bdp.plugin.SolrContainerPlugin


Caused by: java.lang.IllegalStateException: Cannot find secondary index for core ekamsearch.userdetails_2, did you create it? 
If yes, please consider increasing the value of the dse.yaml option load_max_time_per_core, current value in minutes is: 10

ERROR [main] 2017-08-10 04:22:08,450  CassandraDaemon.java:705 - Exception encountered during startup
java.lang.RuntimeException: com.datastax.bdp.plugin.PluginManager$PluginActivationException: Unable to activate plugin

Has anyone encountered these errors or warnings before?

Update to post:

Token assignment issue:

1) I had wrongly assigned the token ranges in Step 4(a). Assign tokens that
   bisect or trisect the values generated by "token-generator":

         Node 4: 10.10.2.96     Range: -6148914691236517206
         Node 5: 10.10.2.97     Range: -2
         Node 6: 10.10.2.86     Range: 6148914691236517202

Note: We do not need to change the token ranges of the existing nodes in the
      datacenter, and there is no need to follow the procedure in Step 7
      mentioned above.
Solved issue: Cannot find core:

Increased the load_max_time_per_core value in the dse.yaml configuration file,
but was still receiving the error. Finally solved the issue with the following
method (a shell sketch of this sequence follows below):

     1) Started the new nodes as non-Solr nodes and waited for all Cassandra
        data to migrate to the joining nodes.
     2) Added the auto_bootstrap: false directive to the cassandra.yaml file.
     3) Restarted the same nodes after enabling Solr (changed the parameter
        SOLR_ENABLED=1 in /etc/default/dse).
     4) Re-indexed on all newly joined nodes. I had to reload all required
        cores with the reindex=true and distributed=false parameters on the
        new nodes.
        Ref: http://docs.datastax.com/en/archived/datastax_enterprise/4.0/datastax_enterprise/srch/srchReldCore.html
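
A rough shell sketch of that sequence, run on each new node. The sed/service commands and paths are assumptions about a typical DSE package install, the dsetool reload_core syntax follows the DataStax document referenced above, and search.userdetails is used only as an example core name taken from the log lines:

     # 1) Join as a non-Solr node first so bootstrap streaming is not blocked by Solr core loading.
     sudo sed -i 's/^SOLR_ENABLED=.*/SOLR_ENABLED=0/' /etc/default/dse
     sudo service dse start                       # wait for the node to finish joining

     # 2) Prevent a second bootstrap on the next restart.
     echo 'auto_bootstrap: false' | sudo tee -a /etc/dse/cassandra/cassandra.yaml

     # 3) Re-enable Solr and restart the node.
     sudo service dse stop
     sudo sed -i 's/^SOLR_ENABLED=.*/SOLR_ENABLED=1/' /etc/default/dse
     sudo service dse start

     # 4) Reload and re-index each required core locally (repeat per core).
     dsetool reload_core search.userdetails reindex=true distributed=false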

Comments:

Commenter: Any particular reason for assigning the tokens manually? You can set num_tokens=1 in cassandra.yaml and let Cassandra handle it for you.

OP: I have configured num_tokens: 1 and the initial_token ranges as per the recalculation mentioned in Step 2(1) above. We wanted to assign the initial_token ranges manually rather than have Cassandra handle them, because I think that if we change this and rebalance with OpsCenter, Solr on the current cluster will not work; please clarify if I am wrong. Are the steps we followed above correct?

Commenter: In my opinion, manually managing tokens while scaling Cassandra nodes is very tedious. num_tokens: 1 on its own will manage the token at the Cassandra level, and Solr will index the data as it is rebalanced to the new nodes. When data moves to a new node, the corresponding records are removed from the old node when you run nodetool cleanup, and as the records disappear from the old node, the corresponding index entries disappear from Solr as well. From the Solr core you will be able to see the number of records indexed and can verify it after adding the nodes. I would avoid distributing tokens manually.

OP: So we can start the 3 new nodes with num_tokens: 1, but what about the existing 3 nodes in the cluster that already have initial_token set?

Commenter: The easiest way is to decommission one node at a time as the data moves to the newly joined nodes. You can then add them back using replace_address, without an initial_token.
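
To illustrate the suggestion in the last comment (a sketch only, not a verified procedure for this cluster; service names and data paths assume a DSE package install):

     # Retire one old single-token node at a time, letting its data stream to the remaining nodes.
     nodetool decommission                 # run on the node being removed

     # To bring the same host back without a manually assigned token:
     sudo service dse stop
     # clear the node's data, commitlog and saved_caches directories,
     # remove initial_token from cassandra.yaml and keep num_tokens,
     # then start it again so it joins as a fresh node.
     sudo service dse start

     # cassandra.replace_address, mentioned in the comment, is the startup flag used when
     # standing in for a dead node at the same address, e.g.:
     #   JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.10.1.36"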