
elasticsearch - Removed the second Elasticsearch node and added another node, now getting unassigned shards


As a beginner with Elasticsearch (I had only been using it for two weeks), I did something foolish.

My Elasticsearch cluster had two nodes: a master data node running version 1.4.2 and a non-data node running version 1.1.1. Because of the conflicting versions, I decided to shut down and remove the non-data node, then install another data node running version 1.4.2 (see my picture to make this easier to visualize; node3 was then renamed node2).

Then I checked the cluster health:

{ 
    "cluster_name":"elasticsearch",
    "status":"yellow",
    "timed_out":false,
    "number_of_nodes":2,
    "number_of_data_nodes":2,
    "active_primary_shards":725,
    "active_shards":1175,
    "relocating_shards":0,
    "initializing_shards":0,
    "unassigned_shards":273
}
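From the health JSON above you can see 273 shards are unassigned. As a quick sanity check, the shard counts can be pulled out of the response and turned into a single summary line. This is a minimal sketch using grep on a saved copy of the response; in practice you would pipe `curl -s localhost:9200/_cluster/health` into the same extraction instead of the here-document:

```shell
# A trimmed copy of the health response above, saved inline for illustration.
cat > health.json <<'EOF'
{"cluster_name":"elasticsearch","status":"yellow","active_shards":1175,"unassigned_shards":273}
EOF

# Extract the shard counts and report how many shards are still unassigned.
active=$(grep -o '"active_shards":[0-9]*' health.json | cut -d: -f2)
unassigned=$(grep -o '"unassigned_shards":[0-9]*' health.json | cut -d: -f2)
echo "$unassigned of $((active + unassigned)) shards unassigned"
```

A yellow status with a large unassigned count like this usually means replicas have nowhere to go, which matches a cluster that just lost a node.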
And checked the cluster state:

curl -XGET http://localhost:9200/_cat/shards


    logstash-2015.03.25 2 p STARTED       3031  621.1kb 10.146.134.94 node1        
    logstash-2015.03.25 2 r UNASSIGNED
    logstash-2015.03.25 0 p STARTED       3084  596.4kb 10.146.134.94 node1        
    logstash-2015.03.25 0 r UNASSIGNED                                                     
    logstash-2015.03.25 3 p STARTED       3177  608.4kb 10.146.134.94 node1        
    logstash-2015.03.25 3 r UNASSIGNED                                                     
    logstash-2015.03.25 1 p STARTED       3099  577.3kb 10.146.134.94 node1       
    logstash-2015.03.25 1 r UNASSIGNED                      
    logstash-2014.12.30 4 r STARTED                     10.146.134.94 node2 
    logstash-2014.12.30 4 p STARTED         94  114.3kb 10.146.134.94 node1        
    logstash-2014.12.30 0 r STARTED        111  195.8kb 10.146.134.94 node2 
    logstash-2014.12.30 0 p STARTED        111  195.8kb 10.146.134.94 node1       
    logstash-2014.12.30 3 r STARTED        110    144kb 10.146.134.94 node2 
    logstash-2014.12.30 3 p STARTED        110    144kb 10.146.134.94 node1
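To see at a glance which indices the unassigned shards belong to, you can filter the `_cat/shards` output on its state column. This sketch inlines a subset of the lines above as sample data; normally you would pipe `curl -s localhost:9200/_cat/shards` straight into the awk command:

```shell
# Column 4 of the _cat/shards output is the shard state; count the
# UNASSIGNED shards per index.
result=$(awk '$4 == "UNASSIGNED" { count[$1]++ } END { for (idx in count) print idx, count[idx] }' <<'EOF'
logstash-2015.03.25 2 p STARTED 3031 621.1kb 10.146.134.94 node1
logstash-2015.03.25 2 r UNASSIGNED
logstash-2015.03.25 0 p STARTED 3084 596.4kb 10.146.134.94 node1
logstash-2015.03.25 0 r UNASSIGNED
logstash-2014.12.30 0 r STARTED 111 195.8kb 10.146.134.94 node2
EOF
)
echo "$result"
```

On the sample lines this prints `logstash-2015.03.25 2`: only the replicas (`r`) are unassigned, while every primary (`p`) is STARTED, so no data has been lost yet.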
I have already read the related question and tried to understand it, but with no luck. I also described my mistake in a comment on the answer there.

This is the result I got.

I followed the answer's instructions.

But still no luck.

What should I do? Thanks in advance.

Update: when I check the pending tasks, it shows:

{"tasks":[{"insert_order":88401,"priority":"HIGH","source":"shard-failed 
    ([logstash-2015.01.19][3], node[PVkS47JyQQq6G-lstUW04w], [R], s[INITIALIZING]),
    reason [Failed to start shard, message [RecoveryFailedException[[logstash-2015.01.19][3]: Recovery failed from [node1][_72bJJX0RuW7AyM86WUgtQ]
    [localhost][inet[/localhost:9300]]{master=true} into [node2][PVkS47JyQQq6G-lstUW04w]
    [localhost][inet[/localhost:9302]]{master=false}]; 
    nested: RemoteTransportException[[node1][inet[/localhost:9300]]
    [internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[[logstash-2015.01.19][3] Phase[2] Execution failed];
    nested: RemoteTransportException[[node2][inet[/localhost:9302]][internal:index/shard/recovery/prepare_translog]];
    nested: EngineCreationFailureException[[logstash-2015.01.19][3] failed to create engine]; 
    nested: FileSystemException[data/elasticsearch/nodes/0/indices/logstash-2015.01.19/3/index/_0.si: Too many open files]; ]]","executing":true,"time_in_queue_millis":53,"time_in_queue":"53ms"}]}
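The innermost nested exception, `Too many open files`, points at the operating system's per-process file-descriptor limit rather than at Elasticsearch itself. A sketch of how to inspect and raise it (the `65536` values below are commonly suggested settings, not something taken from the question):

```shell
# Show the open-file limit of the current shell; Elasticsearch inherits
# the limit of whichever user/shell starts it.
ulimit -n

# On Linux systems using PAM limits, a common fix is to raise the limit
# for the elasticsearch user in /etc/security/limits.conf and then
# restart the node:
#   elasticsearch soft nofile 65536
#   elasticsearch hard nofile 65536
```

With 725 primary shards on a single node, each shard holding multiple Lucene segment files, a default limit such as 1024 descriptors is easily exhausted during recovery.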
If you have two nodes, like:

Node-1 - ES 1.4.2
Node-2 - ES 1.1.1

Now follow these steps to debug:

1. Stop all Elasticsearch instances on Node-2.
2. Install Elasticsearch 1.4.2 on the new node, and change elasticsearch.yml to match the master node's configuration, especially these three settings:

 cluster.name: <Same as master node>
 node.name: < Node name for Node-2>
 discovery.zen.ping.unicast.hosts: <Master Node IP>
3. Restart Elasticsearch on Node-2.
4. Verify the Node-1 logs.
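Filled in with concrete values for this question's setup (the cluster name comes from the health JSON and the IP from the `_cat/shards` output above; treat them as an illustrative sketch, not a verified config), the three settings might look like:

```
cluster.name: elasticsearch
node.name: node2
discovery.zen.ping.unicast.hosts: ["10.146.134.94"]
```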

This is probably your problem, right at the bottom of the log: Too many open files. You have to increase the number of files Elasticsearch is allowed to open; how to do that depends on your operating system.

@MagnusBäck: Maybe. When I installed the new node, I pointed it at the same directory for storing index data that the removed v1.1.1 node had used. So how do I fix this? Thanks for the quick reply.

You can see the master node on localhost:9300, ES 1.1.1 on localhost:9301, and the new install on localhost:9302. I have since removed Elasticsearch 1.1.1, installed ES 1.4.2, and changed cluster.name and node.name on node2 to match the master node. Both ES instances are installed on the same server, so do I still need to change discovery.zen.ping.unicast.hosts? When I check the nodes on this server, it shows information for both of them.

Also, when I check the node1 log, it loops forever with errors such as: [logstash-2015.01.19][0]: Recovery failed from [node1][_72bJJX0RuW7AyM86WUgtQ][localhost][inet[/localhost:9300]]{master=true} into [node2][PVkS47JyQQq6G-lstUW04w][localhost][inet[/localhost:9302]]{master=false}]; ... EngineCreationFailureException[[logstash-2015.01.19][0] failed to create engine]; nested: FileSystemException[data/elasticsearch/nodes/0/indices/logstash-2015.01.19/0/index/_b.si: Too many open files]; ...
Also make sure shard allocation has not been disabled:

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "all"
    }
}'