Apache Spark Kafka NodePort service not accessible from outside the cluster
I have been trying to deploy Kafka, so I defined a NodePort service for the Kafka pods. I checked a console Kafka producer and consumer against the same host and port, and they work fine. But when I create a Spark application as the data consumer, with Kafka as the producer, it fails to connect to the Kafka service. For the host I use the minikube IP (instead of a node IP), and for the port the service's node port. In the Spark logs, though, I can see that the NodePort service resolves endpoints and the brokers are discovered as pod addresses and ports:
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Discovered group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null)
INFO ConsumerCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Revoking previously assigned partitions []
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] (Re-)joining group
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2147483645 (/172.17.0.20:9092) could not be established. Broker may not be available.
INFO AbstractCoordinator: [Consumer clientId=consumer-1, groupId=avro_data] Group coordinator 172.17.0.20:9092 (id: 2147483645 rack: null) is unavailable or invalid, will attempt rediscovery
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 2 (/172.17.0.20:9092) could not be established. Broker may not be available.
WARN NetworkClient: [Consumer clientId=consumer-1, groupId=avro_data] Connection to node 0 (/172.17.0.12:9092) could not be established. Broker may not be available.
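The pattern in these logs, where the initial bootstrap connection succeeds but later connections target pod IPs, is characteristic of Kafka's metadata protocol: a client contacts bootstrap.servers only for the first metadata request, then reconnects to whatever addresses the brokers advertise. A minimal sketch of that two-step flow (hypothetical addresses, simplified logic, not real client code):

```python
# Simplified model of a Kafka client's connection flow. The bootstrap
# address is only used for the first metadata request; all subsequent
# connections go to the addresses the brokers *advertise*.

# Hypothetical cluster state: brokers advertise their pod-internal IPs.
ADVERTISED = {0: "172.17.0.12:9092", 2: "172.17.0.20:9092"}

# Addresses reachable from outside the cluster (only the NodePort).
REACHABLE_FROM_OUTSIDE = {"192.168.99.100:32400"}

def fetch_metadata(bootstrap):
    """Step 1: the metadata request via the bootstrap address succeeds."""
    assert bootstrap in REACHABLE_FROM_OUTSIDE
    return ADVERTISED  # brokers report their advertised.listeners

def connect_to_broker(broker_id, metadata):
    """Step 2: real traffic goes to the advertised address, not bootstrap."""
    return metadata[broker_id] in REACHABLE_FROM_OUTSIDE

metadata = fetch_metadata("192.168.99.100:32400")
print(connect_to_broker(0, metadata))  # False: pod IP is unreachable outside
```

This is why the service appears to "resolve" in the logs yet every broker connection times out: the fix has to change what the brokers advertise, not just expose a port.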
How can I change this behavior?
The NodePort service definition looks like this:
kind: Service
apiVersion: v1
metadata:
  name: kafka-service
spec:
  selector:
    app: cp-kafka
    release: my-confluent-oss
  ports:
    - protocol: TCP
      targetPort: 9092
      port: 32400
      nodePort: 32400
  type: NodePort
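One structural issue with this service, independent of the advertised-listener problem: it selects all cp-kafka pods behind a single NodePort, so a connection to port 32400 lands on an arbitrary broker, while Kafka clients need to reach each broker individually by its advertised address. A hedged sketch of the usual alternative, one NodePort per broker, keyed on the pod-name label Kubernetes sets on StatefulSet pods (service name and ports here are illustrative, and this only works together with a matching EXTERNAL advertised listener per broker):

```yaml
# Hypothetical per-broker external service (one per broker pod).
kind: Service
apiVersion: v1
metadata:
  name: kafka-0-external
spec:
  type: NodePort
  selector:
    # Label added automatically to StatefulSet pods.
    statefulset.kubernetes.io/pod-name: my-confluent-oss-cp-kafka-0
  ports:
    - protocol: TCP
      port: 19092
      targetPort: 9092
      nodePort: 31090
```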
Spark consumer configuration:
def kafkaParams() = Map[String, Object](
  "bootstrap.servers" -> "192.168.99.100:32400",
  "schema.registry.url" -> "http://192.168.99.100:8081",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[KafkaAvroDeserializer],
  "group.id" -> "avro_data",
  "auto.offset.reset" -> "earliest",
  "enable.auto.commit" -> (false: java.lang.Boolean)
)
Kafka producer configuration:
props.put("bootstrap.servers", "192.168.99.100:32400")
props.put("client.id", "avro_data")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer")
props.put("schema.registry.url", "http://192.168.99.100:32500")
All Kubernetes services related to Kafka:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kafka-service NodePort 10.99.113.234 <none> 32400:32400/TCP 6m34s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27d
my-confluent-oss-cp-kafka ClusterIP 10.100.156.108 <none> 9092/TCP 102m
my-confluent-oss-cp-kafka-connect ClusterIP 10.99.78.89 <none> 8083/TCP 102m
my-confluent-oss-cp-kafka-headless ClusterIP None <none> 9092/TCP 102m
my-confluent-oss-cp-kafka-rest ClusterIP 10.100.152.109 <none> 8082/TCP 102m
my-confluent-oss-cp-ksql-server ClusterIP 10.96.249.202 <none> 8088/TCP 102m
my-confluent-oss-cp-schema-registry ClusterIP 10.109.27.45 <none> 8081/TCP 102m
my-confluent-oss-cp-zookeeper ClusterIP 10.102.182.90 <none> 2181/TCP 102m
my-confluent-oss-cp-zookeeper-headless ClusterIP None <none> 2888/TCP,3888/TCP 102m
schema-registry-service NodePort 10.103.100.64 <none> 32500:32500/TCP 33m
zookeeper-np NodePort 10.98.180.130 <none> 32181:32181/TCP 53m
I faced a similar problem when I tried to access a Kafka broker running in minikube from outside the cluster. Here is how I solved it.

Before installing with helm from the local repository, edit this file:

1. Search for nodeport: and change its enabled field to true:

   nodeport:
     enabled: true

2. Uncomment these two lines by removing the #:

   "advertised.listeners": |-
     EXTERNAL://${HOST_IP}:$((31090 + ${KAFKA_BROKER_ID}))

3. Replace ${HOST_IP} with the minikube IP (run minikube ip in a terminal to retrieve the k8s host IP, e.g. 196.169.99.100).

4. Replace ${KAFKA_BROKER_ID} with the broker id (if only one broker is running, it is 0 by default).

After these replacements it looks like this:

   "advertised.listeners": |-
     EXTERNAL://196.169.99.100:31090

Now you can access the Kafka broker running inside the k8s cluster from outside by pointing bootstrap.servers to 196.169.99.100:31090. You need to point clients at the NodePort address so they can connect to it correctly.
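The listener template in step 2 derives each broker's external port from a base port plus the broker id, so each broker gets its own NodePort. Assuming the base port of 31090 shown above, the arithmetic works out as:

```python
# External port per broker, following the listener template
# EXTERNAL://${HOST_IP}:$((31090 + ${KAFKA_BROKER_ID})).
FIRST_LISTENER_PORT = 31090  # base port from the answer above

def external_port(broker_id):
    return FIRST_LISTENER_PORT + broker_id

print([external_port(b) for b in range(3)])  # [31090, 31091, 31092]
```

So a single-broker setup (broker id 0) is reachable at 31090, which matches the bootstrap.servers value in the answer.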