Cluster computing / ceph: 370 PGs stuck inactive for more than 300 seconds


I have a small Ceph cluster. It was set up as described here:

After rebooting the deployment node (which also hosts the NTP server), I get:


The nodes are up and reachable via ssh. Is there a way to bring the system back to a healthy state?

Apparently the OSD daemons were down (even on the nodes reported as "up"). After running

I=0; for ID in {02..10} {12..14} {16..23}; do ceph-deploy osd activate node${ID}:/var/local/osd${I}; I=$((I+1)); done

the cluster is now healthy again.
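For reference, the one-liner pairs node names with OSD numbers via bash brace expansion: the three ranges expand to 20 node IDs (02-10, 12-14, 16-23), and the counter I walks osd0 through osd19 in step, matching the `ceph osd tree` output below. A minimal sketch of just that mapping, with the `ceph-deploy` call replaced by an `echo` so it can be run anywhere:

```shell
# Sketch of the node-ID -> OSD-number mapping used by the activation loop.
# The real command would be: ceph-deploy osd activate node${ID}:/var/local/osd${I}
I=0
for ID in {02..10} {12..14} {16..23}; do
    echo "node${ID} -> /var/local/osd${I}"
    I=$((I+1))
done
```

The leading zeros in `{02..10}` are preserved by bash, so the hostnames come out as node02, node03, ... while the OSD directory index stays unpadded.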

Many thanks to the ceph IRC channel.

ceph health; ceph osd tree
HEALTH_ERR 370 pgs are stuck inactive for more than 300 seconds; 370 pgs stale; 370 pgs stuck stale; too many PGs per OSD (307 > max 300)
ID  WEIGHT   TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 10.88989 root default                                      
 -2  0.54449     host node02                                  
  0  0.54449         osd.0      down        0          1.00000
 -3  0.54449     host node03                                  
  1  0.54449         osd.1      down        0          1.00000
 -4  0.54449     host node04                                  
  2  0.54449         osd.2      down        0          1.00000
 -5  0.54449     host node05                                  
  3  0.54449         osd.3      down        0          1.00000
 -6  0.54449     host node06                                  
  4  0.54449         osd.4      down        0          1.00000
 -7  0.54449     host node07                                  
  5  0.54449         osd.5      down        0          1.00000
 -8  0.54449     host node08                                  
  6  0.54449         osd.6      down        0          1.00000
 -9  0.54449     host node09                                  
  7  0.54449         osd.7      down        0          1.00000
-10  0.54449     host node10                                  
  8  0.54449         osd.8      down        0          1.00000
-11  0.54449     host node12                                  
  9  0.54449         osd.9      down        0          1.00000
-12  0.54449     host node13                                  
 10  0.54449         osd.10     down        0          1.00000
-13  0.54449     host node14                                  
 11  0.54449         osd.11     down        0          1.00000
-14  0.54449     host node16                                  
 12  0.54449         osd.12     down        0          1.00000
-15  0.54449     host node17                                  
 13  0.54449         osd.13     down        0          1.00000
-16  0.54449     host node18                                  
 14  0.54449         osd.14     down        0          1.00000
-17  0.54449     host node19                                  
 15  0.54449         osd.15       up  1.00000          1.00000
-18  0.54449     host node20                                  
 16  0.54449         osd.16       up  1.00000          1.00000
-19  0.54449     host node21                                  
 17  0.54449         osd.17       up  1.00000          1.00000
-20  0.54449     host node22                                  
 18  0.54449         osd.18       up  1.00000          1.00000
-21  0.54449     host node23                                  
 19  0.54449         osd.19       up  1.00000          1.00000