SQL Server fails to deploy on a Kubernetes cluster


I am trying to deploy SQL Server Always On to an on-premises Kubernetes cluster. The base OS is Ubuntu Server 19 with the latest patches.

Here is the setup:

kubeadm v1.16.0

Docker 18.09.7

Nodes
NAME          STATUS   ROLES    AGE     VERSION
master-node   Ready    master   6d19h   v1.16.0
slave-node1   Ready    <none>   6d18h   v1.16.0
slave-node2   Ready    <none>   6d19h   v1.16.0
PV and PVC

kind: PersistentVolume
apiVersion: v1
metadata:
  name: ag1-pv-volume-node1
  labels:
    type: local
spec:
  storageClassName: default
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: "/var/opt/mssql"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - slave-node1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mssql-data1-claim
  namespace: ag1
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  selector:
    matchLabels:
      name: ag1-pv-volume-node1
Operator file

The one from Microsoft.

SqlServer deployment

apiVersion: mssql.microsoft.com/v1
kind: SqlServer
metadata:
  labels: {name: mssql1, type: sqlservr}
  name: mssql1
  namespace: ag1
spec:
  acceptEula: true
  agentsContainerImage: mcr.microsoft.com/mssql/ha:2019-CTP2.1-ubuntu
  availabilityGroups: [ag1]
  instanceRootVolumeClaimTemplate:
    accessModes: [ReadWriteOnce]
    resources:
      requests: {storage: 3Gi}
    storageClass: default
  saPassword:
    secretKeyRef: {key: sapassword, name: sql-secrets}
  sqlServerContainer: {image: 'mcr.microsoft.com/mssql/server:2019-CTP2.1-ubuntu'}
  volumes:
    - name: sql-server-storage1
      persistentVolumeClaim:
        claimName: mssql-data1-claim
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - slave-node1
---
apiVersion: v1
kind: Service
metadata: {name: mssql1, namespace: ag1}
spec:
  ports:
  - {name: tds, port: 1433}
  selector: {name: mssql1, type: sqlservr}
  type: NodePort
Below is the output of kubectl get pods -A:

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
ag1                    mssql-operator-5c85589dfb-j2t6f              1/1     Running   0          3d23h
kube-system            coredns-5644d7b6d9-dh9fg                     1/1     Running   2          6d20h
kube-system            coredns-5644d7b6d9-p84nl                     1/1     Running   2          6d20h
kube-system            etcd-master-node                             1/1     Running   2          6d19h
kube-system            kube-apiserver-master-node                   1/1     Running   3          6d19h
kube-system            kube-controller-manager-master-node          1/1     Running   5          6d19h
kube-system            kube-flannel-ds-amd64-cpsf9                  1/1     Running   1          6d19h
kube-system            kube-flannel-ds-amd64-d5sj4                  1/1     Running   2          6d18h
kube-system            kube-flannel-ds-amd64-jg6pd                  1/1     Running   2          6d19h
kube-system            kube-proxy-2cq5m                             1/1     Running   2          6d20h
kube-system            kube-proxy-8rc4m                             1/1     Running   1          6d19h
kube-system            kube-proxy-rh27f                             1/1     Running   1          6d18h
kube-system            kube-scheduler-master-node                   1/1     Running   4          6d19h
kubernetes-dashboard   dashboard-metrics-scraper-566cddb686-dmns8   1/1     Running   1          6d18h
kubernetes-dashboard   kubernetes-dashboard-7b5bf5d559-6nqsm        1/1     Running   7          6d18h

Welcome to StackOverflow, @Ricardo.

Based on my own experience, to get HA MSSQL Server working with the "MSSQL operator" on Linux workers without making significant changes to the "deploy-ag.py" script, there are a few things you need to sort out first.

I assume you are running "./deploy-ag.py deploy" in --dry-run mode, so that you have a chance to adjust the manifests before applying them with kubectl.
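
For reference, the flow described above could look roughly like this (a sketch only; the exact flag spelling and the generated file name are assumptions, so check the script's own help for your version):

./deploy-ag.py deploy --dry-run          # render the manifests without applying them
# edit the generated templates (e.g. storageClassName) as needed, then apply manually:
kubectl apply -f <adjusted-manifest>.yaml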

  • Make sure the PVC can be bound to the previously created PV of type "local".

    • In particular, make sure the storageClassName defined in the PVC matches the storageClassName of the PV (I set it manually in "Kubernetes/sample deployment script/templates/PVC.yaml" before running the script), for example (see also the binding sketch right after this list):
  • PV_1.yaml

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      labels: {storage: ag1}
      name: ag1-mssql1-pv
    spec:
      accessModes: [ReadWriteOnce]
      capacity: {storage: 2Gi}
      local:
        path: "/mnt/data"
      storageClassName: gp2
      nodeAffinity:
        required:
          nodeSelectorTerms:
          - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
              - node-1.region.compute.internal
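
A PVC that binds to the PV above has to carry the same storageClassName. A minimal sketch (the claim name below is illustrative; only the storageClassName and the label selector matter for the match):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ag1-mssql1-claim          # hypothetical name for illustration
  namespace: ag1
spec:
  storageClassName: gp2           # must match the PV's storageClassName
  accessModes: [ReadWriteOnce]
  resources:
    requests: {storage: 2Gi}
  selector:
    matchLabels: {storage: ag1}   # matches the label set on the PV above

After applying both, "kubectl get pv" and "kubectl get pvc -n ag1" should show STATUS "Bound"; a claim stuck in "Pending" usually means the storageClassName or the selector does not match any available PV.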

Comments:

  • Is that a copy mistake, or is it really like that in your yaml file? Did you change it after creating it? Did you create the StorageClass before creating the PV and the PVC?
  • Hi, thanks for your comment. I changed the storage class to match the one I created. Should it always be "default", even if there is no default SC on the cluster? Yes, these are the YAML files I am using.
  • Please remove this from your PVC and PV yaml files: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: local-storage provisioner: kubernetes.io/aws-ebs reclaimPolicy: Retain allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer (administrador@master-node:~/YAML$ more 2_Define_PV1.YAML). That will explain everything about how storage classes work. Please also add the kubectl get pods -A output so we can actually see whether something is Pending instead of Running.
  • Hi, my files and some screens for the questions you asked are attached in the link. Thanks.
  • Please add those examples and screens as an edit to your post.
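
On the default StorageClass question raised in the comments, a quick way to see what the cluster actually offers (a generic check, nothing here is specific to this cluster):

kubectl get storageclass                        # the default class, if any, is marked "(default)"
kubectl get pvc -n ag1
kubectl describe pvc mssql-data1-claim -n ag1   # the Events section explains why a claim stays Pending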