Azure Functions: communicating between an Azure Function and an AKS cluster over gRPC

Tags: azure-functions, hyperledger-fabric, grpc, azure-aks

I cannot get an Azure Function to communicate with Kubernetes over gRPC. A Hyperledger Fabric client runs from the function and connects to a blockchain deployed in AKS.

This is the error I get:

[Query]: evaluate: Query ID "[object Object]" of peer "peer1.xxxxxx.eastus.aksapp.io:443" failed: message=14 UNAVAILABLE: failed to connect to all addresses, stack=Error: 14 UNAVAILABLE: failed to connect to all addresses
at Object.exports.createStatusError (/home/site/wwwroot/node_modules/grpc/src/common.js:91:15)
at Object.onReceiveStatus (/home/site/wwwroot/node_modules/grpc/src/client_interceptors.js:1209:28)
at InterceptingListener._callNext (/home/site/wwwroot/node_modules/grpc/src/client_interceptors.js:568:42)
at InterceptingListener.onReceiveStatus (/home/site/wwwroot/node_modules/grpc/src/client_interceptors.js:618:8)
at callback (/home/site/wwwroot/node_modules/grpc/src/client_interceptors.js:847:24), code=14, , flags=0, details=failed to connect to all addresses
The Azure Function runs on Node.js 10 LTS on an App Service plan, inside a private VNet with a firewall, and communication between the Azure Function and AKS is enabled.
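
For context, this is roughly the kind of call that fails, sketched under the assumption that the function uses the fabric-network 1.x SDK (which relies on the native grpc module shown in the stack trace). The profile path, wallet location, identity label, channel and chaincode names below are placeholders, not the real ones:

const { Gateway, FileSystemWallet } = require('fabric-network');

async function queryChaincode() {
  // Assumed wallet location and identity label.
  const wallet = new FileSystemWallet('./wallet');
  const gateway = new Gateway();

  // ccp.json is the connection profile that points at peer1.xxxxxx.eastus.aksapp.io:443.
  const ccp = require('./ccp.json');
  await gateway.connect(ccp, {
    wallet,
    identity: 'appUser',
    discovery: { enabled: false, asLocalhost: false },
  });

  // Assumed channel and chaincode names.
  const network = await gateway.getNetwork('mychannel');
  const contract = network.getContract('mychaincode');

  // evaluateTransaction() is where "14 UNAVAILABLE: failed to connect to all
  // addresses" surfaces when the gRPC connection to the peer cannot be established.
  const result = await contract.evaluateTransaction('query', 'key1');
  gateway.disconnect();
  return result.toString();
}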

The strange part is:

  • Running the Azure Function from my local machine over the VPN with npm start, I can execute chaincode as usual
  • Running the Azure Function from a Consumption plan in a less secured setup against AKS, I can execute chaincode as usual
  • Calling endpoints over HTTPS from the secure Azure Function to AKS also works; that is how I call the CA, which does not use gRPC from the Azure Function
  • Using SSH from the Azure Function, I can successfully run telnet peer1.xxxxxx.eastus.aksapp.io 443
I'm a bit lost with this issue. It doesn't seem to be the firewall; the people there can't see anything being blocked, yet I can't get it to work.
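
Since telnet only proves that the TCP path is open, a low-level probe with the same grpc package the SDK uses could help separate plain reachability from the TLS and HTTP/2 setup gRPC needs. This is only a sketch under that assumption; the TLS CA certificate path is a placeholder:

const fs = require('fs');
const grpc = require('grpc');

// Assumed path to the peer's TLS CA certificate.
const peerTlsCa = fs.readFileSync('./peer-tlsca.pem');

const client = new grpc.Client(
  'peer1.xxxxxx.eastus.aksapp.io:443',
  grpc.credentials.createSsl(peerTlsCa)
);

// waitForReady() fails with the same symptom as the SDK if the channel
// never reaches the READY state (TLS handshake or HTTP/2 setup failing),
// even when a raw TCP connection such as telnet succeeds.
client.waitForReady(Date.now() + 5000, (err) => {
  if (err) {
    console.error('gRPC channel not ready:', err.message);
  } else {
    console.log('gRPC channel is READY');
  }
  client.close();
});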

Do you know what I'm missing? What else could I try?

Thanks

UPDATE

Here is a snippet of my tcpdump:

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:03:10.148836 IP (tos 0x0, ttl 64, id 38696, offset 0, flags [DF], proto TCP (6), length 52)
    peer1-769d849b7f-5x782.7052 > 10.250.35.4.60612: Flags [.], cksum 0x5c26 (incorrect -> 0x8949), ack 1783377787, win 501, options [nop,nop,TS val 608661542 ecr 2929700376], length 0
15:03:10.148923 IP (tos 0x0, ttl 63, id 59430, offset 0, flags [DF], proto TCP (6), length 52)
    10.250.35.4.60612 > peer1-769d849b7f-5x782.7052: Flags [.], cksum 0x5c26 (incorrect -> 0x7511), ack 1, win 501, options [nop,nop,TS val 2929715480 ecr 608586078], length 0
15:03:10.149285 ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.250.35.35 is-at 12:34:56:78:9a:bc (oui Unknown), length 28
15:03:10.149535 IP (tos 0x0, ttl 64, id 50400, offset 0, flags [DF], proto UDP (17), length 70)
    peer1-769d849b7f-5x782.40006 > kube-dns.kube-system.svc.cluster.local.53: 46740+ PTR? 4.35.250.10.in-addr.arpa. (42)
15:03:10.161544 IP (tos 0x0, ttl 64, id 61799, offset 0, flags [DF], proto UDP (17), length 175)
    kube-dns.kube-system.svc.cluster.local.53 > peer1-769d849b7f-5x782.40006: 46740 NXDomain* 0/1/0 (147)
15:03:10.161990 IP (tos 0x0, ttl 64, id 50402, offset 0, flags [DF], proto UDP (17), length 71)
    peer1-769d849b7f-5x782.36622 > kube-dns.kube-system.svc.cluster.local.53: 53155+ PTR? 35.35.250.10.in-addr.arpa. (43)
15:03:10.162032 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 10-250-35-47.kube-dns.kube-system.svc.cluster.local tell 10.250.35.4, length 28
15:03:10.162348 ARP, Ethernet (len 6), IPv4 (len 4), Reply 10-250-35-47.kube-dns.kube-system.svc.cluster.local is-at 12:34:56:78:9a:bc (oui Unknown), length 28
15:03:10.249331 IP (tos 0x0, ttl 63, id 53694, offset 0, flags [DF], proto UDP (17), length 176)
    kube-dns.kube-system.svc.cluster.local.53 > peer1-769d849b7f-5x782.36622: 53155 NXDomain* 0/1/0 (148)
15:03:10.249594 IP (tos 0x0, ttl 64, id 50418, offset 0, flags [DF], proto UDP (17), length 71)
    peer1-769d849b7f-5x782.45127 > kube-dns.kube-system.svc.cluster.local.53: 49955+ PTR? 10.45.250.10.in-addr.arpa. (43)
15:03:10.254082 IP (tos 0x0, ttl 63, id 53695, offset 0, flags [DF], proto UDP (17), length 148)
    kube-dns.kube-system.svc.cluster.local.53 > peer1-769d849b7f-5x782.45127: 49955*- 1/0/0 10.45.250.10.in-addr.arpa. PTR kube-dns.kube-system.svc.cluster.local. (120)
15:03:10.254322 IP (tos 0x0, ttl 64, id 50419, offset 0, flags [DF], proto UDP (17), length 71)
    peer1-769d849b7f-5x782.45289 > kube-dns.kube-system.svc.cluster.local.53: 13200+ PTR? 47.35.250.10.in-addr.arpa. (43)
15:03:10.254932 IP (tos 0x0, ttl 64, id 61808, offset 0, flags [DF], proto UDP (17), length 161)
...
422 packets captured
422 packets received by filter
0 packets dropped by kernel

Comments:

  • Can you capture the network traffic with tcpdump (on the peer host) and post it here? This might be a TLS issue where your application is not configured with the correct TLS CA certificate.
  • Hi @yacovm, I added a snippet of my dump; we can see it gets lost when replying to the answer. It looks like it is trying to do a reverse DNS lookup.
  • Can you use the IP address instead of the hostname in your application and try whether it works?
  • It works!! Thanks @yacovm
  • Hi, it worked for a while but then stopped working and started producing handshake errors, any clue? Thanks
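
Regarding the handshake errors after switching to the IP address: when a Fabric peer is dialed by IP over grpcs, the hostname in its TLS certificate no longer matches the address, so connection profiles usually need an ssl-target-name-override entry under grpcOptions. A hedged sketch of what that fragment could look like follows; the IP address, certificate path and peer name are placeholders:

const fs = require('fs');

// Connection-profile fragment (expressed as a JS object) for a peer addressed by IP.
const peers = {
  'peer1.xxxxxx.eastus.aksapp.io': {
    // Assumed cluster/ingress IP; the real one will differ.
    url: 'grpcs://10.0.0.10:443',
    tlsCACerts: { pem: fs.readFileSync('./peer-tlsca.pem').toString() },
    grpcOptions: {
      // Verify the TLS certificate against the original hostname
      // rather than the IP used to dial the peer.
      'ssl-target-name-override': 'peer1.xxxxxx.eastus.aksapp.io',
    },
  },
};

The override has to match whatever name the presented certificate actually carries, so if TLS is terminated at an ingress or the certificate was reissued, the value may need to change accordingly.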