Go gzip header forces file download

I am trying to gzip all responses. In the main Go file I have:

mux := mux.NewRouter()
mux.Use(middlewareHeaders)
mux.Use(gzipHandler)
Then I have the middleware:

func gzipHandler(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        gz := gzip.NewWriter(w)
        defer gz.Close()
        gzr := gzipResponseWriter{Writer: gz, ResponseWriter: w}
        next.ServeHTTP(gzr, r)
    })
}

func middlewareHeaders(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Cache-Control", "max-age=2592000") // 30 days
        w.Header().Set("Content-Encoding", "gzip")
        w.Header().Set("Strict-Transport-Security", "max-age=63072000; includeSubDomains; preload")
        w.Header().Set("Access-Control-Allow-Headers", "Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token")
        w.Header().Set("Access-Control-Allow-Methods", "POST")
        w.Header().Set("Access-Control-Allow-Origin", "origin")
        w.Header().Set("Access-Control-Allow-Credentials", "true")
        w.Header().Set("Access-Control-Expose-Headers", "AMP-Access-Control-Allow-Source-Origin")
        w.Header().Set("AMP-Access-Control-Allow-Source-Origin", os.Getenv("DOMAIN"))
        next.ServeHTTP(w, r)
    })
}
When I curl the site, I get:

curl -v https://example.com
*   Trying 44.234.222.27:443...
* TCP_NODELAY set
* Connected to example.com (XX.XXX.XXX.XX) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=example.com
*  start date: Mar 16 00:00:00 2021 GMT
*  expire date: Apr 16 23:59:59 2022 GMT
*  subjectAltName: host "example.com" matched cert's "example.com"
*  issuer: C=GB; ST=Greater Manchester; L=Salford; O=Sectigo Limited; CN=Sectigo RSA Domain Validation Secure Server CA
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55cadcebfe10)
> GET / HTTP/2
> Host: example.com
> user-agent: curl/7.68.0
> accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200 
< date: Mon, 07 Jun 2021 20:13:19 GMT
< access-control-allow-credentials: true
< access-control-allow-headers: Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token
< access-control-allow-methods: POST
< access-control-allow-origin: origin
< access-control-expose-headers: AMP-Access-Control-Allow-Source-Origin
< amp-access-control-allow-source-origin: example.com
< cache-control: max-age=2592000
< content-encoding: gzip
< strict-transport-security: max-age=63072000; includeSubDomains; preload
< vary: Accept-Encoding
< 
Warning: Binary output can mess up your terminal. Use "--output -" to tell 
Warning: curl to output it to your terminal anyway, or consider "--output 
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 3506)
* stopped the pause stream!
* Connection #0 to host example.com left intact
With the gzip handler and the gzip header enabled, the browser wants to download a file.


Can anyone spot my mistake?

1. You should only gzip the response when the client asks for it. Accept-Encoding: gzip was never requested, yet you gzip the response anyway, so curl hands it back to you as-is.

2. Given the browser behaviour, this sounds like double compression. Perhaps you have an HTTP reverse proxy in front that already handles compression towards the browser but does not compress backend traffic, in which case you may not need to gzip in the backend at all. Try curl --compressed to confirm this.

3. You should filter Content-Length out of the response. Content-Length is the final size of the compressed HTTP response, so its value changes during compression.

4. You should not blindly apply compression to every URI. Some handlers already gzip their output (for example Prometheus /metrics), and for others compression is pointless (for example .png, .zip, .gz). At the very least, strip Accept-Encoding: gzip from the request before passing it down the handler chain, to avoid double gzip (see the sketch right after this list).

5. Transparent gzip in Go has been implemented before. A quick search turns up an example (adjusted per point 4 above); it is reproduced in the code block at the end of this post.

Note: that example does not support chunked encoding and trailers, so there is still room for improvement.
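To illustrate point 4, here is a minimal sketch of how such a skip list might look. The GzipSelective name, the /metrics path, and the extension list are only placeholder assumptions; the sketch delegates the actual compression to a Gzip middleware like the one reproduced at the end of this post.

package main

import (
    "net/http"
    "path"
)

// GzipSelective is a hypothetical wrapper that applies gzip only where it
// makes sense. The skip rules below are examples, not a definitive list.
func GzipSelective(next http.Handler) http.Handler {
    gzipped := Gzip(next) // Gzip is the middleware reproduced at the end of this post
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Handlers that already compress their own output (for example a
        // Prometheus /metrics endpoint) are left alone.
        if r.URL.Path == "/metrics" {
            next.ServeHTTP(w, r)
            return
        }
        // Already-compressed content gains nothing from another gzip pass.
        switch path.Ext(r.URL.Path) {
        case ".png", ".zip", ".gz":
            next.ServeHTTP(w, r)
            return
        }
        gzipped.ServeHTTP(w, r)
    })
}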


Your gzip handler does not check the client's Accept-Encoding header, and by default curl neither requests compressed content nor decompresses it. I would suggest setting an explicit Content-Type header so that the client does not have to guess the content type; then the problem may well go away.

Below is the sample transparent-gzip middleware referred to in point 5 of the answer above:
package main

import (
    "compress/gzip"
    "io"
    "io/ioutil"
    "net/http"
    "strings"
    "sync"
)

// Pool of gzip writers, reused across requests so the compressor state is
// not re-allocated for every response.
var gzPool = sync.Pool{
    New: func() interface{} {
        w := gzip.NewWriter(ioutil.Discard)
        return w
    },
}

// gzipResponseWriter sends the body through the gzip writer while headers
// and the status code still go to the original ResponseWriter.
type gzipResponseWriter struct {
    io.Writer
    http.ResponseWriter
}

func (w *gzipResponseWriter) WriteHeader(status int) {
    // Content-Length would describe the uncompressed body, so it must not
    // be sent once the body is gzipped.
    w.Header().Del("Content-Length")
    w.ResponseWriter.WriteHeader(status)
}

func (w *gzipResponseWriter) Write(b []byte) (int, error) {
    return w.Writer.Write(b)
}

func Gzip(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Only compress when the client actually asked for it.
        if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
            next.ServeHTTP(w, r)
            return
        }

        w.Header().Set("Content-Encoding", "gzip")

        gz := gzPool.Get().(*gzip.Writer)
        defer gzPool.Put(gz)

        gz.Reset(w)
        defer gz.Close()

        // Strip Accept-Encoding so downstream handlers do not gzip again.
        r.Header.Del("Accept-Encoding")
        next.ServeHTTP(&gzipResponseWriter{ResponseWriter: w, Writer: gz}, r)
    })
}
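Under the same assumptions as the question (a gorilla/mux router, the middlewareHeaders middleware, and the Gzip middleware above), a minimal sketch of how it could be wired together might look like this. The route, the response body, and the listen address are placeholders, and middlewareHeaders is assumed to no longer set Content-Encoding, since the Gzip middleware now sets that header only when it actually compresses. The explicit Content-Type follows the second answer's suggestion.

package main

import (
    "log"
    "net/http"

    "github.com/gorilla/mux"
)

func main() {
    router := mux.NewRouter()

    // Header middleware as in the question, except it must not set
    // Content-Encoding unconditionally; the Gzip middleware owns that header.
    router.Use(middlewareHeaders)
    router.Use(Gzip)

    router.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // An explicit Content-Type stops the client from sniffing the bytes
        // (which would look like a gzip file) and offering a download.
        w.Header().Set("Content-Type", "text/html; charset=utf-8")
        w.Write([]byte("<html><body>hello</body></html>"))
    })

    log.Fatal(http.ListenAndServe(":8080", router))
}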