Rook Ceph shows 0 OSDs after installation on a Kubernetes cluster

Tags: kubernetes, ceph, rook-storage, kubernetes-rook

I set up a three-node Kubernetes cluster on 3 VPS instances and installed rook/ceph.

When I run

kubectl exec -it rook-ceph-tools-78cdfd976c-6fdct -n rook-ceph -- bash
ceph status
I get the following result:

osd: 0 osds: 0 up, 0 in
I tried

ceph device ls
and the result is

DEVICE  HOST:DEV  DAEMONS  LIFE EXPECTANCY
ceph osd status
returns nothing at all.

This is the yaml file I used:

https://github.com/rook/rook/blob/master/cluster/examples/kubernetes/ceph/cluster.yaml
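For reference, the part of that file that controls which disks become OSDs is the `storage` section of the CephCluster resource. A minimal sketch with the upstream example's defaults (field names as in the Rook CephCluster CRD) looks like:

```yaml
# storage section of the CephCluster resource (defaults from the example cluster.yaml)
storage:
  useAllNodes: true      # run an OSD-prepare job on every eligible node
  useAllDevices: true    # consume any raw, empty device Rook discovers
  # devices:             # alternatively, list specific devices per node
  #   - name: "sdb"
```

With `useAllDevices: true`, Rook only claims devices that carry no filesystem or existing partition data, which is why the prepare job below skips sda1 and sda2.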
When I run the command below

sudo kubectl -n rook-ceph logs rook-ceph-osd-prepare-node1-4xddh provision
the result is

2021-05-10 05:45:09.440650 I | cephosd: skipping device "sda1" because it contains a filesystem "ext4"
2021-05-10 05:45:09.440653 I | cephosd: skipping device "sda2" because it contains a filesystem "ext4"
2021-05-10 05:45:09.475841 I | cephosd: configuring osd devices: {"Entries":{}}
2021-05-10 05:45:09.475875 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2021-05-10 05:45:09.476221 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list  --format json
2021-05-10 05:45:10.057411 D | cephosd: {}
2021-05-10 05:45:10.057469 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2021-05-10 05:45:10.057501 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2021-05-10 05:45:10.541968 D | cephosd: {}
2021-05-10 05:45:10.551033 I | cephosd: 0 ceph-volume raw osd devices configured on this node
2021-05-10 05:45:10.551274 W | cephosd: skipping OSD configuration as no devices matched the storage settings for this node "node1"
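The prepare log already explains the empty result: both partitions carry an ext4 filesystem, so device discovery rejects them and no device matches the storage settings. A quick way to see what Rook would consider eligible is to check filesystem signatures on the node (`/dev/sdb` below is a hypothetical spare disk, not from the original post):

```shell
# List block devices with their filesystem signatures; Rook only
# consumes disks/partitions whose FSTYPE column is empty.
lsblk -f

# A device is also rejected if it still has a partition table or
# leftover Ceph metadata; wipefs without options just lists signatures.
sudo wipefs /dev/sdb
```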
My disk partitioning:

root@node1: lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   400G  0 disk 
├─sda1   8:1    0   953M  0 part /boot
└─sda2   8:2    0 399.1G  0 part /

What am I doing wrong?

I think that for rook ceph to work properly I should attach a raw volume to each node, because it does not allow using a directory mounted on the main disk.

Right now it looks like this:

root@node1:~/marketing-automation-agency# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   400G  0 disk 
├─sda1   8:1    0   953M  0 part /boot
└─sda2   8:2    0 399.1G  0 part /

I had a similar problem: OSDs did not show up in
ceph status
after I had installed and torn down the cluster several times for testing.

I fixed it by running

dd if=/dev/zero of=/dev/sdX bs=1M status=progress
to completely wipe any information from such raw block disks.
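A fuller teardown, in the spirit of the cleanup steps described in the Rook documentation, would also zap the partition table and remove Rook's on-host state (`/dev/sdX` remains a placeholder for your OSD disk; verify the device name before running anything destructive):

```shell
DISK=/dev/sdX   # placeholder: the disk Rook previously used

# Zap the GPT/MBR partition table and wipe the start of the disk
sudo sgdisk --zap-all "$DISK"
sudo dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync

# Erase any remaining filesystem/LVM signatures left by ceph-volume
sudo wipefs --all "$DISK"

# Clear Rook's on-host data directory so the next install starts clean
sudo rm -rf /var/lib/rook
```

After wiping, deleting the rook-ceph-osd-prepare pod (or reapplying the CephCluster) makes Rook re-run discovery and pick up the now-empty disk.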