Kubernetes: cannot connect producer/consumer to the Kafka cloud broker after deploying to k8s with Strimzi


I cannot get a producer to write to a Kafka topic. My Kafka service has been deployed on k8s via Strimzi. My k8s cluster has 2 nodes on Google Cloud Platform.

As far as I can see from the k8s configuration, all services are up:

kafka-cluster-kafka-external-bootstrap is the service (nodeport) used to communicate with the Kafka brokers. Basically, it forwards requests from the external node to the internal broker service. Here are some details:

Following the guide (which uses minikube as the example cluster), I extracted the nodes' IPs:

kubectl get nodes --output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
35.xxx.xxx.xxx
34.xxx.xxx.xxx
(The main difference from the guide here is that I use "ExternalIP" instead of "InternalIP", since I am doing everything remotely.)

Then I looked up the port exposed by the service:

kubectl get service kafka-cluster-kafka-external-bootstrap -n xxxx-kafka -o=jsonpath='{.spec.ports[0].nodePort}{"\n"}'
30680
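Putting the two lookups together, the address the producer should target is simply the node's ExternalIP plus the service's nodePort. A minimal sketch (the IP and port are hard-coded to the values retrieved above; in practice they come from the two kubectl commands):

```shell
# Compose the external bootstrap address from the node's ExternalIP and the
# external bootstrap service's nodePort (values from the queries above).
NODE_IP="35.xxx.xxx.xxx"
NODE_PORT="30680"
BOOTSTRAP_SERVER="${NODE_IP}:${NODE_PORT}"
echo "${BOOTSTRAP_SERVER}"   # → 35.xxx.xxx.xxx:30680
```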
But when I try to start my producer with the local Apache Kafka bin scripts, I get this:

sh kafka-console-producer.sh --broker-list 35.xxx.xxx.xxx:30680 --topic test
>[2020-02-11 16:37:18,388] WARN [Producer clientId=console-producer] Connection to node -1 (/35.xxx.xxx.xxx:30680) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
So I tried pinging the IP to see whether it is reachable:

ping 35.xxx.xxx.xxx
PING 35.xxx.xxx.xxx (35.xxx.xxx.xxx) 56(84) bytes of data.
64 bytes from 35.xxx.xxx.xxx: icmp_seq=1 ttl=54 time=63.4 ms
64 bytes from 35.xxx.xxx.xxx: icmp_seq=2 ttl=54 time=51.4 ms
The host is reachable, but the port is not:

telnet 35.xxx.xxx.xxx 30680
Trying 35.xxx.xxx.xxx...
telnet: Unable to connect to remote host: Connection timed out
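A common cause of exactly this symptom on GCP (host answers ping, but the NodePort times out) is that the VPC firewall does not allow inbound traffic to the NodePort range. As a troubleshooting sketch, a rule like the following would open the port; the rule name, network, and source range below are placeholders, not taken from the question:

```shell
# Hypothetical firewall rule opening the Kafka external nodePort (30680)
# to the machine running the producer. Adjust name, network, and
# source range to your project's setup.
gcloud compute firewall-rules create allow-kafka-nodeport \
  --network default \
  --allow tcp:30680 \
  --source-ranges <your-client-ip>/32
```

Note that if the listener advertises per-broker nodePorts as well (as seen later in the question), those ports need to be reachable too, not just the bootstrap port.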
Here is my yaml configuration for the Kafka cluster:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  creationTimestamp: "2020-02-10T15:04:38Z"
  generation: 1
  name: kafka-cluster
  namespace: xxxx-kafka
  resourceVersion: "5868409"
  selfLink: /apis/kafka.strimzi.io/v1beta1/namespaces/xxxx-kafka/kafkas/kafka-cluster
  uid: 93d0d9b6-7c88-4e01-af9c-49f9fcaac1d1
spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.min.isr: 1
      transaction.state.log.replication.factor: 1
    listeners:
      external:
        tls: false
        type: nodeport
      plain: {}
      tls: {}
    replicas: 1
    storage:
      type: jbod
      volumes:
      - deleteClaim: false
        id: 0
        size: 100Gi
        type: persistent-claim
  zookeeper:
    replicas: 1
    storage:
      deleteClaim: false
      size: 100Gi
      type: persistent-claim
status:
  conditions:
  - lastTransitionTime: 2020-02-11T15:33:50+0000
    status: "True"
    type: Ready
  listeners:
  - addresses:
    - host: kafka-cluster-kafka-bootstrap.xxxx-kafka.svc
      port: 9092
    type: plain
  - addresses:
    - host: kafka-cluster-kafka-bootstrap.xxxx-kafka.svc
      port: 9093
    type: tls
  - addresses:
    - host: <AnyNodeAddress>
      port: 30680
    type: external
  observedGeneration: 1
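The status above shows the external listener advertised as `<AnyNodeAddress>:30680`, i.e. clients may be redirected to whichever node address a broker advertises. If predictable, fixed ports are wanted (for example to write firewall rules once), the Strimzi v1beta1 external listener supports nodePort overrides. A hedged sketch of that part of the spec; the port numbers here are arbitrary examples, not from the question:

```yaml
# Sketch: pinning the bootstrap and per-broker nodePorts on the external
# listener (Strimzi kafka.strimzi.io/v1beta1 API, as in the resource above).
listeners:
  external:
    type: nodeport
    tls: false
    overrides:
      bootstrap:
        nodePort: 32100
      brokers:
      - broker: 0
        nodePort: 32000
```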
I also tried connecting internally from the GCP nodes, but ran into the same problem. I extracted the internal IPs:

kubectl get nodes --output=jsonpath='{range .items[*]}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'
10.132.0.4
10.132.0.5
Then I tried running the producer from a GCP node, and this is what I got:

sh kafka-console-producer.sh --broker-list 10.132.0.4:30680 --topic test_topic
>hello
[2020-02-12 15:31:15,033] ERROR Error when sending message to topic test_topic with key: null, value: 4 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Topic test_topic not present in metadata after 60000 ms.
>[2020-02-12 15:32:24,629] WARN [Producer clientId=console-producer] Connection to node 0 (/34.xxx.xxx.xxx:30350) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Note that before getting the connection error, I also got this:

[2020-02-12 15:31:15,033] ERROR Error when sending message to topic test_topic with key: null, value: 4 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
    org.apache.kafka.common.errors.TimeoutException: Topic test_topic not present in metadata after 60000 ms.
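The pair of errors above fits the NodePort pattern: the client first reaches the bootstrap service (10.132.0.4:30680), then the cluster hands back the broker's advertised address (node 0 at 34.xxx.xxx.xxx:30350, a different node and port), and if that advertised address is unreachable the metadata fetch times out. To see exactly what the brokers advertise, a metadata dump is useful; for example with kcat (formerly kafkacat) — an assumption here, it is not part of the question's setup:

```shell
# Diagnostic only: dump the cluster metadata (broker ids, advertised
# host:port pairs, topics) as returned through the external listener.
kcat -L -b 10.132.0.4:30680
```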
And here is the kafka-cluster-kafka-brokers Service yaml:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  creationTimestamp: "2020-02-11T15:31:57Z"
  labels:
    app.kubernetes.io/instance: kafka-cluster
    app.kubernetes.io/managed-by: strimzi-cluster-operator
    app.kubernetes.io/name: strimzi
    strimzi.io/cluster: kafka-cluster
    strimzi.io/kind: Kafka
    strimzi.io/name: kafka-cluster-kafka-brokers
  name: kafka-cluster-kafka-brokers
  namespace: xxxx-kafka
  ownerReferences:
  - apiVersion: kafka.strimzi.io/v1beta1
    blockOwnerDeletion: false
    controller: false
    kind: Kafka
    name: kafka-cluster
    uid: 93d0d9b6-7c88-4e01-af9c-49f9fcaac1d1
  resourceVersion: "5867905"
  selfLink: /api/v1/namespaces/xxxx-kafka/services/kafka-cluster-kafka-brokers
  uid: 42d7fa08-cf52-47e1-9746-89a45d65351b
spec:
  clusterIP: None
  ports:
  - name: replication
    port: 9091
    protocol: TCP
    targetPort: 9091
  - name: clients
    port: 9092
    protocol: TCP
    targetPort: 9092
  - name: clientstls
    port: 9093
    protocol: TCP
    targetPort: 9093
  publishNotReadyAddresses: true
  selector:
    strimzi.io/cluster: kafka-cluster
    strimzi.io/kind: Kafka
    strimzi.io/name: kafka-cluster-kafka
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Have you tried with the second node's IP? I also think it would be easier to use a service of type ClusterIP.

@Crou yes, I have the same problem with the second node too.

If you only plan to use a plaintext listener for Kafka, it is best kept internal, and you could also use HTTP with a normal ingress/load balancer. Everything I have listed here is Kafka, except for PubSub.