Is goroutine leakage in the AWS SDK for Go normal?


I have the following code snippet using the current aws-sdk-go version 1.7.9:

sess, _ := session.NewSession()
s3client := s3.New(sess)
location, err := s3client.GetBucketLocation(&s3.GetBucketLocationInput{Bucket: &bucket})
I record the goroutine stacks before and after the call to GetBucketLocation(). I can see that the total number of goroutines increases by two, and these two extra goroutines are running afterwards:

goroutine 45 [IO wait]:
net.runtime_pollWait(0x2029008, 0x72, 0x8)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/runtime/netpoll.go:160 +0x59
net.(*pollDesc).wait(0xc420262610, 0x72, 0xc42003e6f0, 0xc4200121b0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/fd_poll_runtime.go:73 +0x38
net.(*pollDesc).waitRead(0xc420262610, 0xbcb200, 0xc4200121b0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/fd_poll_runtime.go:78 +0x34
net.(*netFD).Read(0xc4202625b0, 0xc42022fc00, 0x400, 0x400, 0x0, 0xbcb200, 0xc4200121b0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/fd_unix.go:243 +0x1a1
net.(*conn).Read(0xc42023c068, 0xc42022fc00, 0x400, 0x400, 0x0, 0x0, 0x0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/net.go:173 +0x70
crypto/tls.(*block).readFromUntil(0xc42017c060, 0x2029248, 0xc42023c068, 0x5, 0xc42023c068, 0xc400000000)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/crypto/tls/conn.go:476 +0x91
crypto/tls.(*Conn).readRecord(0xc42029a000, 0x840917, 0xc42029a108, 0xc420116ea0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/crypto/tls/conn.go:578 +0xc4
crypto/tls.(*Conn).Read(0xc42029a000, 0xc420196000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/crypto/tls/conn.go:1113 +0x116
net/http.(*persistConn).Read(0xc42000ba00, 0xc420196000, 0x1000, 0x1000, 0x23d3b0, 0xc42003eb58, 0x7a8d)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1261 +0x154
bufio.(*Reader).fill(0xc42000cba0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/bufio/bufio.go:97 +0x10c
bufio.(*Reader).Peek(0xc42000cba0, 0x1, 0xc42003ebbd, 0x1, 0x0, 0xc42000cc00, 0x0)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/bufio/bufio.go:129 +0x62
net/http.(*persistConn).readLoop(0xc42000ba00)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1418 +0x1a1
created by net/http.(*Transport).dialConn
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1062 +0x4e9

goroutine 46 [select]:
net/http.(*persistConn).writeLoop(0xc42000ba00)
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1646 +0x3bd
created by net/http.(*Transport).dialConn
        /usr/local/Cellar/go/1.7.4_2/libexec/src/net/http/transport.go:1063 +0x50e
These goroutines do not go away over time, and they keep accumulating as more calls to GetBucketLocation() are made.

Am I doing something wrong (e.g. neglecting to close some resource), or is this a goroutine leak in aws-sdk-go?


Note: the same behavior was also observed with the s3manager.Downloader::Download() function.

It turns out I was wrong in saying that the goroutines never go away over time. If I add a 10-second sleep after the call to GetBucketLocation, before printing out the goroutine stacks, the extra goroutines do disappear.

I believe this is because Go's net/http package maintains a pool of connections that can be reused. See the following discussion:


Waiting long enough seems to eventually close the connections and stop the goroutines.


Active connections are normal for an HTTP client, and you may have more than one connection to each remote host you contact. Do these actually grow without bound?