gRPC networking from macOS to Kubernetes running Istio


I cannot get gRPC connecting to Istio on macOS running Kubernetes via Docker for Desktop.

Update: this does not work on Google Kubernetes Engine either (or it is the same problem).

Update: if you have this gRPC setup (any gRPC sample) working on GKE, please let me know.

Latest news: this works out of the box with Ambassador:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: auth-service-grpc
  name: auth-service-grpc
  namespace: default
  annotations:
    sidecar.istio.io/inject: "false"
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: grpc_mapping
      grpc: true
      prefix: /main.Greeter/
      rewrite: /main.Greeter/
spec:
  type: LoadBalancer
  ports:
  - port: 3000
    name: grpc-greet
    targetPort: grpc-auth
  selector:
    app: auth-deployment-grpc
grpc_cli ls localhost:3000 -l:

D0827 12:45:51.753655000 140736110936960 ev_posix.cc:142]              Using polling engine: poll
D0827 12:45:51.756976000 140736110936960 dns_resolver.cc:331]          Using native dns resolver
I0827 12:45:51.763898000 140736110936960 subchannel.cc:608]            New connected subchannel at 0x7f83ebd05ea0 for subchannel 0x7f83ebd045b0
filename: grpc_reflection_v1alpha/reflection.proto
package: grpc.reflection.v1alpha;
service ServerReflection {
  rpc ServerReflectionInfo(stream grpc.reflection.v1alpha.ServerReflectionRequest) returns (stream grpc.reflection.v1alpha.ServerReflectionResponse) {}
}

filename: auth.proto
package: pb;
service Auth {
  rpc TheMethod(pb.TheRequest) returns (pb.TheReply) {}
}
But not with Istio.

For starters, grpc_cli ls localhost:3000 -l shows me:

export GRPC_VERBOSITY=DEBUG
grpc_cli ls localhost:3000 -l                                            
D0826 14:42:35.175041000 140736110936960 ev_posix.cc:142]              Using polling engine: poll
D0826 14:42:35.176359000 140736110936960 dns_resolver.cc:331]          Using native dns resolver
I0826 14:42:35.180535000 140736110936960 subchannel.cc:646]            Connect failed: {"created":"@1535287355.180495000","description":"Failed to connect to remote host: OS Error","errno":61,"file":"src/core/lib/iomgr/tcp_client_posix.cc","file_line":201,"os_error":"Connection refused","syscall":"connect","target_address":"ipv6:[::1]:3000"}
I0826 14:42:35.180675000 140736110936960 subchannel.cc:646]            Connect failed: {"created":"@1535287355.180647000","description":"Failed to connect to remote host: OS Error","errno":61,"file":"src/core/lib/iomgr/tcp_client_posix.cc","file_line":201,"os_error":"Connection refused","syscall":"connect","target_address":"ipv4:127.0.0.1:3000"}
I0826 14:42:35.180691000 140736110936960 subchannel.cc:470]            Subchannel 0x7fb149503470: Retry in 1000 milliseconds
Received an error when querying services endpoint.
I0826 14:42:35.180827000 140736110936960 proto_reflection_descriptor_database.cc:51] ServerReflectionInfo rpc failed. Error code: 14, details: Connect Failed
grpc_cli ls localhost:8060 -l:

D0827 12:13:36.558654000 140736110936960 ev_posix.cc:142]              Using polling engine: poll
D0827 12:13:36.559903000 140736110936960 dns_resolver.cc:331]          Using native dns resolver
I0827 12:13:36.565146000 140736110936960 subchannel.cc:608]            New connected subchannel at 0x7fa28c5068c0 for subchannel 0x7fa28c504eb0
D0827 12:13:36.567485000 140736110936960 dns_resolver.cc:247]          In cooldown from last resolution (from 7 ms ago). Will resolve again in 993 ms
Received an error when querying services endpoint.
I0827 12:13:36.568479000 140736110936960 proto_reflection_descriptor_database.cc:51] ServerReflectionInfo rpc failed. Error code: 14, details: Socket closed
Port 3000 is not open; if I run a port scan against 127.0.0.1, I get the following:

Port Scanning host: 127.0.0.1

     Open TCP Port:     80          http
     Open TCP Port:     443         https
     Open TCP Port:     631         ipp
     Open TCP Port:     6443        sun-sr-https
     Open TCP Port:     8060
     Open TCP Port:     15011
     Open TCP Port:     15030
     Open TCP Port:     15031
     Open TCP Port:     31400
     Open TCP Port:     65190
Port Scan has completed…
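The same check the port scanner performs can be reproduced without a scanner; a minimal Python sketch of the TCP probe (host and port taken from the scan above, and purely illustrative):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection performs the same connect() the scanner does
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers ConnectionRefusedError and timeouts
        return False

# port_open("127.0.0.1", 3000) returns False here, matching the scan
```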
Note that I have turned off the firewall on macOS.

istio-system:

svc/istio-ingressgateway       LoadBalancer   10.105.30.214    localhost     80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31361/TCP,8060:30030/TCP,15030:32411/TCP,3000:31399/TCP
The gateway setup is shown further below just for reference; its values are being updated via Helm.

kubectl cluster-info shows the following:

kubectl cluster-info
Kubernetes master is running at https://localhost:6443
KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I have installed Istio with Helm and added the port to the Istio Helm install values.yaml:

  gateways:
    enabled: true

    istio-ingressgateway:
      enabled: true
      labels:
        app: istio-ingressgateway
        istio: ingressgateway
      replicaCount: 1
      autoscaleMin: 1
      autoscaleMax: 5
      resources: {}
        # limits:
        #  cpu: 100m
        #  memory: 128Mi
        #requests:
        #  cpu: 1800m
        #  memory: 256Mi

      loadBalancerIP: ""
      serviceAnnotations: {}
      type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be

      ports:
        ## You can add custom gateway ports
      - port: 80
        targetPort: 80
        name: http
        nodePort: 31380
      - port: 443
        name: https
        nodePort: 31390
      - port: 31400
        name: tcp
        nodePort: 31400
      # Pilot and Citadel MTLS ports are enabled in gateway - but will only redirect
      # to pilot/citadel if global.meshExpansion settings are enabled.
      - port: 15011
        targetPort: 15011
        name: tcp-pilot-grpc-tls
      - port: 8060
        targetPort: 8060
        name: tcp-citadel-grpc-tls
      # Telemetry-related ports are enabled in gateway - but will only redirect if
      # the gateway configuration for the various components are enabled.
      - port: 15030
        targetPort: 15030
        name: http2-prometheus
      - port: 15031
        targetPort: 15031
        name: http2-grafana
      # awear-grpc
      - port: 3000
        targetPort: 3000
        name: grpc
        nodePort: 31399
I also tried creating a custom kind: Gateway:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
  labels:
    awear: my-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 3000
      name: grpc-my
      protocol: GRPC
    hosts:
    - "my-service-grpc.default.svc.cluster.local"
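
Worth noting: in Istio a Gateway only opens the port on the ingress gateway; traffic is routed to a backend only once a VirtualService binds to that Gateway. A minimal sketch, where the destination host and port are assumptions based on the config above:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-grpc
spec:
  hosts:
  - "my-service-grpc.default.svc.cluster.local"
  gateways:
  - my-gateway            # the Gateway defined above
  http:                   # gRPC rides on HTTP/2, so an http route applies
  - route:
    - destination:
        host: my-service-grpc.default.svc.cluster.local
        port:
          number: 3000
```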

Any ideas?

Try grpc_cli ls localhost:8060 -l and post the result here.

Thanks Nick, I just updated the question with the grpc_cli ls localhost:8060 -l output.

As you can see, you have connectivity and everything works.

Not really, but do tell :-) Where do you think it goes wrong if I want to use gRPC over port 3000? In other words, how would you debug this?