Kubernetes did not find a persistent volume to bind when trying to allocate local storage on a Pi
It works on my Mac k8s instance, but not on my Raspberry Pi instance. Essentially, I'm trying to set up a k8s deployment of Pi-hole so that I can monitor it and keep it containerized, rather than running it outside the scope of the application. Ideally I containerize everything to keep things clean. I'm running a 2-node cluster of Raspberry Pi 4s, 4 GB each. When I run the file below on my Mac it builds correctly, but on the Pi named master-pi it fails:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 44m default-scheduler 0/2 nodes are available: 1 node(s) didn't find available persistent volumes to bind, 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
The YAML I'm applying looks pretty straightforward:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-etc-volume
  labels:
    directory: etc
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /home/pi/Documents/pihole/etc # Location where it will live.
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master-pi # docker-desktop # Host where it lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-etc-claim
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # Possibly update to 2Gi later.
  selector:
    matchLabels:
      directory: etc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-dnsmasq-volume
  labels:
    directory: dnsmasq.d
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /home/pi/Documents/pihole/dnsmasq # Location where it will live.
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - master-pi # docker-desktop # Host where it lives.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-local-dnsmasq-claim
spec:
  storageClassName: local
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  selector:
    matchLabels:
      directory: dnsmasq.d
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pihole
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        app: pihole
        name: pihole
    spec:
      containers:
        - name: pihole
          image: pihole/pihole:latest
          imagePullPolicy: Always
          env:
            - name: TZ
              value: "America/New_York"
            - name: WEBPASSWORD
              value: "secret"
          volumeMounts:
            - name: pihole-local-etc-volume
              mountPath: "/etc/pihole"
            - name: pihole-local-dnsmasq-volume
              mountPath: "/etc/dnsmasq.d"
      volumes:
        - name: pihole-local-etc-volume
          persistentVolumeClaim:
            claimName: pihole-local-etc-claim
        - name: pihole-local-dnsmasq-volume
          persistentVolumeClaim:
            claimName: pihole-local-dnsmasq-claim
---
apiVersion: v1
kind: Service
metadata:
  name: pihole
spec:
  selector:
    app: pihole
  ports:
    - port: 8000
      targetPort: 80
      name: pihole-admin
    - port: 53
      targetPort: 53
      protocol: TCP
      name: dns-tcp
    - port: 53
      targetPort: 53
      protocol: UDP
      name: dns-udp
  externalIPs:
    - 192.168.10.75 # Static IP I need to assign for the network.
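One detail worth noting (my observation, not from the original post): with `volumeBindingMode: WaitForFirstConsumer`, binding is delayed until a pod is scheduled, so the PV's `nodeAffinity` must point at a node the pod is actually allowed to land on. A sketch of the same PV pinned to the worker node instead; `worker-pi` is an assumed hostname, check the real one with `kubectl get nodes -o wide`:

```yaml
# Hypothetical sketch: the etc PV pinned to the schedulable worker node
# instead of the tainted master. "worker-pi" is an assumed hostname.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-local-etc-volume
  labels:
    directory: etc
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local
  local:
    path: /home/pi/Documents/pihole/etc # must already exist on that node
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-pi # assumed worker hostname
```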
Other notes:
I made sure I created these folders beforehand, and they are all chmod 777.
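For reference, that pre-creation step looks roughly like this (a sketch; the paths match the PVs above, and it has to be run on every node the PV's nodeAffinity can select):

```shell
# Sketch: pre-create the backing directories for the local PVs.
mkdir -p /home/pi/Documents/pihole/etc
mkdir -p /home/pi/Documents/pihole/dnsmasq
chmod 777 /home/pi/Documents/pihole/etc /home/pi/Documents/pihole/dnsmasq
ls -ld /home/pi/Documents/pihole/etc /home/pi/Documents/pihole/dnsmasq
```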
Running df yields:
pi@master-pi:~/Documents/pihole$ df
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 383100 5772 377328 2% /run
/dev/mmcblk0p2 30450144 14283040 14832268 50% /
tmpfs 1915492 0 1915492 0% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 4096 0 4096 0% /sys/fs/cgroup
/dev/mmcblk0p1 258095 147696 110399 58% /boot/firmware
tmpfs 383096 116 382980 1% /run/user/1000
So I believe the required size at that location (/home/pi/Documents/pihole/etc) is only 1Gi, and the root filesystem looks about half full, so roughly 14 GB is available.
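A quick sanity check of that arithmetic, using the Available figure from the `df` output above (df reports 1K-blocks):

```shell
# Convert the Available column for / (1K-blocks) to whole GiB.
avail_kib=14832268
echo $((avail_kib / 1024 / 1024))  # prints 14
```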
I can supply more information, but I just didn't understand why this was failing. It turned out there were two things to learn here.

When you set a path such as local: path: /hello/world, it will not automatically be generated on the host, which is actually annoying, because if you have N pods you need every node to have that path in case the pod gets scheduled somewhere different. The master node determines where things happen, so if it hands the pod to a node that can't handle the path, you get a back-off error. It's best to put the path on all nodes.
It discusses how to define the storage class and how to request a 30Gi storage size; that is used together with the claim. It's late now, but I'll try to write up a similar example for the underlying problem. Possibly you are creating the PV on the master node, where pods are not allowed to be scheduled. Try creating the PV on the same node where the pod runs. You can also check the PV for errors with kubectl get pv.

I didn't know master nodes don't get scheduled. Is there a way to change that? If pods are going to be distributed across nodes at random, how would I assign the volume to whichever node the pod lands on? Can I set that? Should I?

By default, for security reasons, your cluster will not schedule pods on the control-plane node. If you want to be able to schedule pods on the control-plane node, run:

kubectl taint nodes --all node-role.kubernetes.io/master-

Could you try it and let me know whether it works?

@Jakub it wasn't a big deal, I just didn't know that was a thing. I also noticed that if the path doesn't exist, setting local: path: /hello/world will not create it, so that became a problem too, but I just updated all the nodes to make the path available. Initially it wasn't set up because the pods didn't know about the path, etc. Once I manually went through all the nodes and created the dirs, it worked regardless of which node it was scheduled on. I wish k8s would handle the dir creation.

@Fallerenreaper glad it works now. Could you add this as an answer and accept it, so anyone with the same problem will find the answer here?
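As a footnote (my suggestion, not from the thread): an alternative to untainting the whole control-plane node is to tolerate the taint for just this workload. A sketch of what the stanza could look like, placed under spec.template.spec in the Deployment above:

```yaml
# Sketch: let only the pihole pod schedule onto the tainted
# control-plane node, instead of removing the taint cluster-wide.
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
```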