Ubuntu CEPH HEALTH_WARN Degraded data redundancy: pgs undersized after reweight
We have a CEPH setup with 3 servers and 15 OSDs. Two weeks ago we got a warning that 2 OSDs were nearly full. We reweighted the OSDs with the command below and restarted the two OSDs:
ceph osd reweight-by-utilization
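(For reference: reweight-by-utilization lowers the override weight, the REWEIGHT column in ceph osd df, of OSDs whose utilization is above the cluster average, so that CRUSH moves some PGs off them. Mimic also ships a dry-run variant that only prints what would change without touching anything:

# ceph osd test-reweight-by-utilization
)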
Since the restart, the cluster has been in a warning state for the past two weeks:
# ceph health detail
HEALTH_WARN Degraded data redundancy: 7 pgs undersized
PG_DEGRADED Degraded data redundancy: 7 pgs undersized
pg 39.7 is stuck undersized for 1398599.590587, current state active+undersized+remapped, last acting [10,1]
pg 39.1e is stuck undersized for 1398600.838131, current state active+undersized, last acting [1,10]
pg 39.2d is stuck undersized for 1398600.848232, current state active+undersized, last acting [10,1]
pg 39.58 is stuck undersized for 1398600.850871, current state active+undersized, last acting [10,1]
pg 39.5f is stuck undersized for 1398600.836724, current state active+undersized, last acting [1,10]
pg 39.79 is stuck undersized for 1398600.848756, current state active+undersized, last acting [10,1]
pg 54.d is stuck undersized for 1398599.590531, current state active+undersized+remapped, last acting [10,1]
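(For reference: pools 39 and 54 are replicated pools with size 3, but the acting sets above list only two OSDs, [10,1] or [1,10], so one replica is missing. The peering details of a single PG can be inspected with a standard query, e.g.:

# ceph pg 39.7 query

and checking the "up" and "acting" sets in the JSON output.)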
# ceph -w
  cluster:
    id:     2e7201e4-9cdc-41db-a995-4844eb07c255
    health: HEALTH_WARN
            Degraded data redundancy: 7 pgs undersized

  services:
    mon: 3 daemons, quorum CEPH001,CEPH002,CEPH003
    mgr: CEPH001(active), standbys: CEPH003
    osd: 15 osds: 15 up, 15 in; 10 remapped pgs

  data:
    pools:   11 pools, 1238 pgs
    objects: 292.7 k objects, 1.3 TiB
    usage:   4.0 TiB used, 41 TiB / 45 TiB avail
    pgs:     1223 active+clean
             8    active+clean+remapped
             5    active+undersized
             2    active+undersized+remapped

  io:
    client: 21 KiB/s rd, 1.1 MiB/s wr, 55 op/s rd, 100 op/s wr
I am new to CEPH. Is this expected behavior, and if not, how can I fix it?
# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
1 ssd 1.00000 0.95001 447 GiB 5.8 GiB 441 GiB 1.29 0.14 166
5 ssd 1.00000 0.95001 447 GiB 5.8 GiB 441 GiB 1.29 0.14 159
10 ssd 1.00000 0.95001 447 GiB 71 GiB 376 GiB 15.96 1.78 166
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
1 ssd 1.00000 0.95001 447 GiB 5.8 GiB 441 GiB 1.29 0.14 166
5 ssd 1.00000 0.95001 447 GiB 5.8 GiB 441 GiB 1.29 0.14 159
10 ssd 1.00000 0.95001 447 GiB 71 GiB 376 GiB 15.96 1.78 166
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
0 hdd 1.00000 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 1.00000 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 1.00000 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 1.00000 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
6 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 1.00000 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 1.00000 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 1.00000 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
11 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 1.00000 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 1.00000 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 1.00000 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
0 hdd 3.63899 1.00000 3.6 TiB 347 GiB 3.3 TiB 9.31 1.04 287
2 hdd 3.63899 1.00000 3.6 TiB 350 GiB 3.3 TiB 9.39 1.05 266
3 hdd 3.63899 1.00000 3.6 TiB 307 GiB 3.3 TiB 8.25 0.92 255
4 hdd 3.63899 1.00000 3.6 TiB 363 GiB 3.3 TiB 9.75 1.09 286
1 ssd 1.00000 0.95001 447 GiB 5.8 GiB 441 GiB 1.29 0.14 166
6 hdd 3.63899 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.61 1.07 274
7 hdd 3.63899 1.00000 3.6 TiB 369 GiB 3.3 TiB 9.91 1.11 270
8 hdd 3.63899 1.00000 3.6 TiB 317 GiB 3.3 TiB 8.51 0.95 242
9 hdd 3.63899 1.00000 3.6 TiB 358 GiB 3.3 TiB 9.62 1.07 254
5 ssd 1.00000 0.95001 447 GiB 5.8 GiB 441 GiB 1.29 0.14 159
11 hdd 3.63899 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.88 0.99 277
12 hdd 3.63899 1.00000 3.6 TiB 331 GiB 3.3 TiB 8.89 0.99 269
13 hdd 3.63899 1.00000 3.6 TiB 279 GiB 3.4 TiB 7.49 0.84 260
14 hdd 3.63899 1.00000 3.6 TiB 330 GiB 3.3 TiB 8.85 0.99 276
10 ssd 1.00000 0.95001 447 GiB 71 GiB 376 GiB 15.96 1.78 166
TOTAL 45 TiB 4.0 TiB 41 TiB 8.96
MIN/MAX VAR: 0.14/1.78 STDDEV: 2.03
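(The SSD rows are the notable ones: all three carry an override REWEIGHT of 0.95001 while their CRUSH weight is 1.00000, and osd.10 is far fuller than osd.1 and osd.5. To pull out just those rows, a plain shell filter works:

# ceph osd df | grep ssd
)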
# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-43 12.00000 pool demognocchi
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-40 12.00000 pool demobackup
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-37 3.00000 pool demossd
1 ssd 1.00000 osd.1 up 0.95001 1.00000
5 ssd 1.00000 osd.5 up 0.95001 1.00000
10 ssd 1.00000 osd.10 up 0.95001 1.00000
-34 12.00000 pool demosata
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-31 12.00000 pool demoglance
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-18 12.00000 pool defaultbackup
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-17 12.00000 pool backup
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-16 12.00000 pool gnocchi
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-15 3.00000 pool ssdvolume01
1 ssd 1.00000 osd.1 up 0.95001 1.00000
5 ssd 1.00000 osd.5 up 0.95001 1.00000
10 ssd 1.00000 osd.10 up 0.95001 1.00000
-14 12.00000 pool defaultsata01
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-13 12.00000 pool defaultglance01
0 hdd 1.00000 osd.0 up 1.00000 1.00000
2 hdd 1.00000 osd.2 up 1.00000 1.00000
3 hdd 1.00000 osd.3 up 1.00000 1.00000
4 hdd 1.00000 osd.4 up 1.00000 1.00000
6 hdd 1.00000 osd.6 up 1.00000 1.00000
7 hdd 1.00000 osd.7 up 1.00000 1.00000
8 hdd 1.00000 osd.8 up 1.00000 1.00000
9 hdd 1.00000 osd.9 up 1.00000 1.00000
11 hdd 1.00000 osd.11 up 1.00000 1.00000
12 hdd 1.00000 osd.12 up 1.00000 1.00000
13 hdd 1.00000 osd.13 up 1.00000 1.00000
14 hdd 1.00000 osd.14 up 1.00000 1.00000
-1 46.66498 root default
-3 15.55499 host CEPH001
0 hdd 3.63899 osd.0 up 1.00000 1.00000
2 hdd 3.63899 osd.2 up 1.00000 1.00000
3 hdd 3.63899 osd.3 up 1.00000 1.00000
4 hdd 3.63899 osd.4 up 1.00000 1.00000
1 ssd 1.00000 osd.1 up 0.95001 1.00000
-7 15.55499 host CEPH002
6 hdd 3.63899 osd.6 up 1.00000 1.00000
7 hdd 3.63899 osd.7 up 1.00000 1.00000
8 hdd 3.63899 osd.8 up 1.00000 1.00000
9 hdd 3.63899 osd.9 up 1.00000 1.00000
5 ssd 1.00000 osd.5 up 0.95001 1.00000
-10 15.55499 host CEPH003
11 hdd 3.63899 osd.11 up 1.00000 1.00000
12 hdd 3.63899 osd.12 up 1.00000 1.00000
13 hdd 3.63899 osd.13 up 1.00000 1.00000
14 hdd 3.63899 osd.14 up 1.00000 1.00000
10 ssd 1.00000 osd.10 up 0.95001 1.00000
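(The per-pool buckets above come from a hand-edited CRUSH map that places the same OSDs into one bucket per pool. The map can be dumped and decompiled with the standard tools to verify this; the file names here are arbitrary:

# ceph osd getcrushmap -o crushmap.bin
# crushtool -d crushmap.bin -o crushmap.txt
)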
# ceph osd pool ls detail
pool 37 'defaultglance01' replicated size 3 min_size 1 crush_rule 4 object_hash rjenkins pg_num 128 pgp_num 128 last_change 2993 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps [1~3,7~2e,37~4,41~a,50~4,56~2,5a~2,5d~3,61~1]
pool 38 'defaultsata01' replicated size 3 min_size 1 crush_rule 3 object_hash rjenkins pg_num 200 pgp_num 200 last_change 2971 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps [1~59]
pool 39 'ssdvolume01' replicated size 3 min_size 1 crush_rule 1 object_hash rjenkins pg_num 150 pgp_num 150 last_change 3005 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps [1~3]
pool 40 'gnocchi' replicated size 3 min_size 1 crush_rule 5 object_hash rjenkins pg_num 256 pgp_num 256 last_change 1821 flags hashpspool stripe_width 0 application rbd
pool 45 'backup' erasure size 3 min_size 2 crush_rule 6 object_hash rjenkins pg_num 256 pgp_num 256 last_change 2392 flags hashpspool,ec_overwrites,selfmanaged_snaps stripe_width 8192 application rbd
removed_snaps [1~b]
pool 46 'defaultbackup' replicated size 3 min_size 1 crush_rule 7 object_hash rjenkins pg_num 56 pgp_num 56 last_change 2098 flags hashpspool stripe_width 0 application rbd
pool 52 'demoglance' replicated size 3 min_size 1 crush_rule 8 object_hash rjenkins pg_num 16 pgp_num 16 last_change 2848 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps [1~3,7~1,9~1,b~2]
pool 53 'demosata' replicated size 3 min_size 1 crush_rule 9 object_hash rjenkins pg_num 128 pgp_num 128 last_change 2974 lfor 0/2620 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps [1~6,8~f,1d~2,20~2]
pool 54 'demossd' replicated size 3 min_size 1 crush_rule 10 object_hash rjenkins pg_num 16 pgp_num 16 last_change 3005 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
removed_snaps [1~3]
pool 55 'demobackup' replicated size 3 min_size 1 crush_rule 11 object_hash rjenkins pg_num 16 pgp_num 16 last_change 2596 flags hashpspool stripe_width 0 application rbd
pool 56 'demognocchi' replicated size 3 min_size 1 crush_rule 12 object_hash rjenkins pg_num 16 pgp_num 16 last_change 2597 flags hashpspool stripe_width 0 application rbd
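(Pools 39 'ssdvolume01' and 54 'demossd' are the ones with the stuck PGs; both are replicated with size 3 and use crush_rule 1 and 10 respectively, which appear to target the three SSD OSDs. The rules can be printed with:

# ceph osd crush rule dump
)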
We are running ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable). We are using a custom crush map, and the weight of the OSDs in the crush map pools is 1, so we reweighted the SSD OSDs back to 1. With only three SSDs behind a size-3 pool, a reweight below 1 leaves CRUSH with no alternative OSD when it passes one over, which is presumably why those PGs could never get their third replica:
# ceph osd reweight <OSD Number> 1
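For this cluster the SSD OSDs are 1, 5 and 10 (see the osd tree above), so spelled out that is:

# ceph osd reweight 1 1
# ceph osd reweight 5 1
# ceph osd reweight 10 1

After that, the override REWEIGHT column in ceph osd df should read 1.00000 for all three SSDs.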
Thanks @eblock for the help.

Comments from @eblock:

Could you edit your question and add the ceph osd df output? I assume there was not enough free space for the reweight to complete successfully. Your only options are to expand the cluster with more storage, or to delete unused data to clear these warnings. Please also add ceph osd tree to the question.

Can you explain how the SSDs are used? I didn't even know the osd tree had a per-pool output, how did you do that? Which ceph version are you running? Which pools are 39 and 54 (ceph osd pool ls detail)?

I still don't get the whole picture. Which ceph version is this? I wonder why the crush weights differ between the per-pool output and the regular osd tree output. Anyway, I would try reweighting the SSDs back to 1; with only 3 SSDs it makes no sense to reduce all of their reweights evenly. What happens if you run ceph osd crush reweight osd.1 1 and repeat that for the other two SSDs? Reweighting does not wipe any data, it only reshuffles it, so nothing will happen to your data if you execute the command I suggested.
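(Once the SSDs are reweighted back to 1, the undersized PGs should peer again and the warning clear. Progress can be checked with:

# ceph pg dump_stuck undersized

which should eventually return no entries.)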