Docker Compose to K8s: how to troubleshoot "didn't match Pod's node affinity"?


This is a continuation of the question I raised here.

After I ran `kompose convert` on my docker-compose files, I obtained exactly the same files as those listed in the answer I accepted. I then ran the following commands in order:

$ kubectl apply -f dev-orderer1-pod.yaml
$ kubectl apply -f dev-orderer1-service.yaml
$ kubectl apply -f dev-peer1-pod.yaml
$ kubectl apply -f dev-peer1-service.yaml
$ kubectl apply -f dev-couchdb1-pod.yaml
$ kubectl apply -f dev-couchdb1-service.yaml
$ kubectl apply -f ar2bc-networkpolicy.yaml
When I try to view my pods, I see:

$ kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
dev-couchdb1   0/1     Pending   0          7m20s
dev-orderer1   0/1     Pending   0          8m25s
dev-peer1      0/1     Pending   0          7m39s
When I try to describe any one of the three pods, I see:

$ kubectl describe pod dev-orderer1
Name:         dev-orderer1
Namespace:    default
Priority:     0
Node:         <none>
Labels:       io.kompose.network/ar2bc=true
              io.kompose.service=dev-orderer1
Annotations:  kompose.cmd: kompose convert -f docker-compose-orderer1.yaml -f docker-compose-peer1.yaml --volumes hostPath
              kompose.version: 1.22.0 (955b78124)
Status:       Pending
IP:
IPs:          <none>
Containers:
  dev-orderer1:
    Image:      hyperledger/fabric-orderer:latest
    Port:       7050/TCP
    Host Port:  0/TCP
    Args:
      orderer
    Environment:
      ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE:  /var/hyperledger/orderer/tls/server.crt
      ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY:   /var/hyperledger/orderer/tls/server.key
      ORDERER_GENERAL_CLUSTER_ROOTCAS:            [/var/hyperledger/orderer/tls/ca.crt]
      ORDERER_GENERAL_GENESISFILE:                /var/hyperledger/orderer/orderer.genesis.block
      ORDERER_GENERAL_GENESISMETHOD:              file
      ORDERER_GENERAL_LISTENADDRESS:              0.0.0.0
      ORDERER_GENERAL_LOCALMSPDIR:                /var/hyperledger/orderer/msp
      ORDERER_GENERAL_LOCALMSPID:                 OrdererMSP
      ORDERER_GENERAL_LOGLEVEL:                   INFO
      ORDERER_GENERAL_TLS_CERTIFICATE:            /var/hyperledger/orderer/tls/server.crt
      ORDERER_GENERAL_TLS_ENABLED:                true
      ORDERER_GENERAL_TLS_PRIVATEKEY:             /var/hyperledger/orderer/tls/server.key
      ORDERER_GENERAL_TLS_ROOTCAS:                [/var/hyperledger/orderer/tls/ca.crt]
    Mounts:
      /var/hyperledger/orderer/msp from dev-orderer1-hostpath1 (rw)
      /var/hyperledger/orderer/orderer.genesis.block from dev-orderer1-hostpath0 (rw)
      /var/hyperledger/orderer/tls from dev-orderer1-hostpath2 (rw)
      /var/hyperledger/production/orderer from orderer1 (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-44lfq (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  dev-orderer1-hostpath0:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/channel-artifacts/genesis.block
    HostPathType:
  dev-orderer1-hostpath1:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/msp
    HostPathType:
  dev-orderer1-hostpath2:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf/crypto-config/ordererOrganizations/ar2dev.accessreal.com/orderers/orderer1.ar2dev.accessreal.com/tls
    HostPathType:
  orderer1:
    Type:          HostPath (bare host directory volume)
    Path:          /home/isprintsg/hlf
    HostPathType:
  default-token-44lfq:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-44lfq
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/hostname=isprintdev
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  51s (x27 over 27m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.
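The FailedScheduling event together with the `Node-Selectors:` line above narrows things down: the pod carries a `nodeSelector` of `kubernetes.io/hostname=isprintdev`, and a plain `nodeSelector` is compiled into node affinity whose key=value pairs must each equal a label on the node exactly. A small shell sketch illustrates the exact-match rule (the label list below is a made-up example, not output from this cluster; on a real cluster it would come from `kubectl get nodes --show-labels`):

```shell
# Sketch of the exact-match rule behind "didn't match Pod's node affinity".
# Every key=value pair in the nodeSelector must equal a node label exactly.
selector="kubernetes.io/hostname=isprintdev"
# Hypothetical node labels; substitute the real --show-labels output here.
node_labels="beta.kubernetes.io/arch=amd64,kubernetes.io/hostname=isprintdev.example.com"

# Split the comma-separated label list into lines and look for a whole-line match.
if printf '%s\n' "$node_labels" | tr ',' '\n' | grep -qx "$selector"; then
  echo "selector matches -> pod can schedule"
else
  echo "selector does not match -> FailedScheduling"
  # prints: selector does not match -> FailedScheduling
fi
```

Note that `isprintdev` vs. `isprintdev.example.com` already fails the comparison: there is no substring or prefix matching for `nodeSelector`.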
The last error message is common to all three pods. I tried googling the message but, surprisingly, got no direct hits. What does this message mean, and how should I go about resolving it? In case you are wondering, I am still new to Kubernetes.


EDIT

  • dev-orderer1-pod.yaml -
  • dev-orderer1-service.yaml -
  • dev-peer1-pod.yaml-
  • dev-peer1-service.yaml-
  • dev-couchdb1-pod.yaml-
  • dev-couchdb1-service.yaml-
  • ar2bc-networkpolicy.yaml-

"Something in the pod YAML says the pod can only run on a specific node, but no node matches that rule." Seeing the Kubernetes YAML here would be very useful. (The `kubectl describe` output also suggests several `hostPath:`-type volumes, which are not a reliable way to store data, so you will need to edit those as well. The question overall suggests to me that Kompose can serve as a starting point for generating Kubernetes YAML, but does not really produce a runnable artifact without manual editing.) – David Maze

@DavidMaze I will paste the YAMLs and update my question shortly.
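Following up on that comment: the `describe` output shows `Node-Selectors: kubernetes.io/hostname=isprintdev`, so reconciling the selector with the node's actual label is one plausible way forward. A sketch only (the file name is the one kompose generated; `isprintdev` must equal the value shown by `kubectl get nodes -L kubernetes.io/hostname`, a label the kubelet sets automatically):

```yaml
# dev-orderer1-pod.yaml (fragment) -- the selector kompose emitted.
# Either make sure the value names a real node, or delete the whole
# nodeSelector block so the scheduler may place the pod on any node:
spec:
  nodeSelector:
    kubernetes.io/hostname: isprintdev   # must match the node's label exactly
```

After editing, re-apply the pod manifests with `kubectl apply -f` and watch whether the Pending state clears.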