Caching Nginx: in which order are rate limiting and caching applied?

Tags: caching, nginx, rate-limiting, nginx-reverse-proxy

I want to use nginx for both rate limiting and caching.

In which order does nginx apply them? In other words, does it limit only the requests that go to the upstream server, or all requests (including cache hits)?


How can this order be changed? I think it could be changed by using two server contexts. For example, do the caching in one server context, which uses a second server context as its upstream; that second context would then rate-limit requests to the "real" upstream. But this is probably not the most efficient approach…
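A rough sketch of the two-server idea described above, with illustrative ports, zone names, and a hypothetical my_upstream group (not a tested configuration):

proxy_cache_path /var/cache/nginx keys_zone=front_cache:10m;
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    # front server: caching only
    listen 8080;
    location / {
        proxy_cache front_cache;
        proxy_pass http://127.0.0.1:8081;   # the second server context below
    }
}

server {
    # internal server: rate limiting only, then the "real" upstream
    listen 8081;
    location / {
        limit_req zone=mylimit burst=20;
        proxy_pass http://my_upstream;
    }
}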

If you have a network load balancer, then you have to implement rate limiting and caching on both servers.

Rate limiting looks like this:

location /login/ {
    limit_req zone=mylimit burst=20;
 
    proxy_pass http://my_upstream;
}
The burst parameter defines how many requests a client can make in excess of the rate specified by the zone (for our example mylimit zone, the rate limit is 10 requests per second, or 1 request every 100 milliseconds). A request that arrives sooner than 100 milliseconds after the previous one is put in a queue, and here we set the queue size to 20.
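The mylimit zone referenced in this example would be declared in the http context, roughly like this (the 10m shared-memory size is only illustrative):

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;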

This means that if 21 requests arrive from a given IP address simultaneously, NGINX forwards the first one to the upstream server group immediately and puts the remaining 20 in the queue. It then forwards one queued request every 100 milliseconds, and returns 503 to the client only if an incoming request would push the number of queued requests over 20. Now, if you do the rate limiting on only one server, you will have a problem with two subsequent requests that get sent to different servers. All the cache state also needs to be kept in sync; you would need Redis or some other persistent store for the cache.

Request processing

Nginx request processing goes through a number of different phases, and each phase has one or more handlers. Modules can register to run at a particular phase.

Rate limiting

Rate limiting is applied by ngx_http_limit_req_module in the pre-access phase. From:

Caching

I think caching is done in the content phase.

I wasn't able to work this out completely from the code or the documentation, but I was able to demonstrate it with a debug build. My configuration had a rate limit of 1 request per second and caching enabled. See the following excerpts from my logs.
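Roughly, that test configuration would have looked like the following sketch (the zone name, location, listen port, and backend address match what appears in the log entries; the cache path, zone sizes, and proxy_cache_valid setting are guesses):

proxy_cache_path cache keys_zone=my_cache:10m;
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=1r/s;

server {
    listen 8080;
    location = / {
        limit_req zone=mylimit;
        proxy_cache my_cache;
        proxy_cache_valid 200 10m;
        proxy_pass http://127.0.0.1:9000;
    }
}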

Cached request

...
2020/08/01 11:11:07 [debug] 17498#0: *7 http header done
2020/08/01 11:11:07 [debug] 17498#0: *7 rewrite phase: 0
2020/08/01 11:11:07 [debug] 17498#0: *7 test location: "/"
2020/08/01 11:11:07 [debug] 17498#0: *7 using configuration "=/"
2020/08/01 11:11:07 [debug] 17498#0: *7 http cl:-1 max:1048576
2020/08/01 11:11:07 [debug] 17498#0: *7 rewrite phase: 2
2020/08/01 11:11:07 [debug] 17498#0: *7 post rewrite phase: 3
2020/08/01 11:11:07 [debug] 17498#0: *7 generic phase: 4
2020/08/01 11:11:07 [debug] 17498#0: *7 http script var: ....
2020/08/01 11:11:07 [debug] 17498#0: shmtx lock
2020/08/01 11:11:07 [debug] 17498#0: shmtx unlock
2020/08/01 11:11:07 [debug] 17498#0: *7 limit_req[0]: 0 0.000
2020/08/01 11:11:07 [debug] 17498#0: *7 generic phase: 5
2020/08/01 11:11:07 [debug] 17498#0: *7 access phase: 6
2020/08/01 11:11:07 [debug] 17498#0: *7 access phase: 7
2020/08/01 11:11:07 [debug] 17498#0: *7 post access phase: 8
2020/08/01 11:11:07 [debug] 17498#0: *7 generic phase: 9
2020/08/01 11:11:07 [debug] 17498#0: *7 generic phase: 10
2020/08/01 11:11:07 [debug] 17498#0: *7 http init upstream, client timer: 0
2020/08/01 11:11:07 [debug] 17498#0: *7 http cache key: "http://127.0.0.1:9000"
2020/08/01 11:11:07 [debug] 17498#0: *7 http cache key: "/"
2020/08/01 11:11:07 [debug] 17498#0: *7 add cleanup: 00005609F7C51578
2020/08/01 11:11:07 [debug] 17498#0: shmtx lock
2020/08/01 11:11:07 [debug] 17498#0: shmtx unlock
2020/08/01 11:11:07 [debug] 17498#0: *7 http file cache exists: 0 e:1
2020/08/01 11:11:07 [debug] 17498#0: *7 cache file: "/home/poida/src/nginx-1.15.6/objs/cache/157d4d91f488c05ff417723d74d65b36"
2020/08/01 11:11:07 [debug] 17498#0: *7 add cleanup: 00005609F7C46810
2020/08/01 11:11:07 [debug] 17498#0: *7 http file cache fd: 12
2020/08/01 11:11:07 [debug] 17498#0: *7 read: 12, 00005609F7C46890, 519, 0
2020/08/01 11:11:07 [debug] 17498#0: *7 http upstream cache: 0
2020/08/01 11:11:07 [debug] 17498#0: *7 http proxy status 200 "200 OK"
2020/08/01 11:11:07 [debug] 17498#0: *7 http proxy header: "Server: SimpleHTTP/0.6 Python/3.8.5"
2020/08/01 11:11:07 [debug] 17498#0: *7 http proxy header: "Date: Sat, 01 Aug 2020 01:11:03 GMT"
2020/08/01 11:11:07 [debug] 17498#0: *7 http proxy header: "Content-type: text/html; charset=utf-8"
2020/08/01 11:11:07 [debug] 17498#0: *7 http proxy header: "Content-Length: 340"
2020/08/01 11:11:07 [debug] 17498#0: *7 http proxy header done
2020/08/01 11:11:07 [debug] 17498#0: *7 http file cache send: /home/poida/src/nginx-1.15.6/objs/cache/157d4d91f488c05ff417723d74d65b36
2020/08/01 11:11:07 [debug] 17498#0: *7 posix_memalign: 00005609F7C46DC0:4096 @16
2020/08/01 11:11:07 [debug] 17498#0: *7 HTTP/1.1 200 OK
...
Rate limited request

...
2020/08/01 11:17:04 [debug] 17498#0: *10 http header done
2020/08/01 11:17:04 [debug] 17498#0: *10 rewrite phase: 0
2020/08/01 11:17:04 [debug] 17498#0: *10 test location: "/"
2020/08/01 11:17:04 [debug] 17498#0: *10 using configuration "=/"
2020/08/01 11:17:04 [debug] 17498#0: *10 http cl:-1 max:1048576
2020/08/01 11:17:04 [debug] 17498#0: *10 rewrite phase: 2
2020/08/01 11:17:04 [debug] 17498#0: *10 post rewrite phase: 3
2020/08/01 11:17:04 [debug] 17498#0: *10 generic phase: 4
2020/08/01 11:17:04 [debug] 17498#0: *10 http script var: ....
2020/08/01 11:17:04 [debug] 17498#0: shmtx lock
2020/08/01 11:17:04 [debug] 17498#0: shmtx unlock
2020/08/01 11:17:04 [debug] 17498#0: *10 limit_req[0]: -3 0.707
2020/08/01 11:17:04 [error] 17498#0: *10 limiting requests, excess: 0.707 by zone "mylimit", client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost:8080"
2020/08/01 11:17:04 [debug] 17498#0: *10 http finalize request: 503, "/?" a:1, c:1
2020/08/01 11:17:04 [debug] 17498#0: *10 http special response: 503, "/?"
2020/08/01 11:17:04 [debug] 17498#0: *10 http set discard body
2020/08/01 11:17:04 [debug] 17498#0: *10 HTTP/1.1 503 Service Temporarily Unavailable
...
For rate-limited requests, processing stops before the server tries to generate content or check the cache.

TL;DR: Rate limiting is applied first, before caching.

NGINX

NGINX rate limiting uses the leaky bucket algorithm, which is widely used in telecommunications and packet-switched computer networks to deal with burstiness when bandwidth is limited. The analogy is with a bucket where water is poured in at the top and leaks from the bottom; if the rate at which water is poured in exceeds the rate at which it leaks, the bucket overflows. In terms of request processing, the water represents requests from clients, and the bucket represents a queue where requests wait to be processed according to a first-in, first-out (FIFO) scheduling algorithm. The leaking water represents requests exiting the buffer for processing by the server, and the overflow represents requests that are discarded and never serviced.
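In configuration terms, the size of the bucket corresponds to the burst parameter, and adding nodelay forwards burst requests immediately instead of spacing them out at the zone rate. A hedged example reusing the mylimit zone from earlier:

location /login/ {
    limit_req zone=mylimit burst=20 nodelay;
    proxy_pass http://my_upstream;
}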

Adding a configuration snapshot from one of my servers:

ngx_http_limit_conn_module is used to limit the number of connections per defined key, in particular the number of connections from a single IP address.

Not all connections are counted; a connection is counted only if it has a request being processed by the server and the whole request header has already been read.

So basically you can set up two kinds of limits, for the real server and separately for all the other virtual servers: limiting per IP and limiting connections.

There may be several limit_conn directives. For example, the following configuration will limit the number of connections to the server per client IP and, at the same time, the total number of connections to the virtual server:

Here is that example:

limit_conn_zone $binary_remote_addr zone=perip:10m;   # key: client IP address
limit_conn_zone $server_name zone=perserver:10m;      # key: virtual server name

server {
    ...
    limit_conn perip 10;        # at most 10 connections per client IP
    limit_conn perserver 100;   # at most 100 connections to this virtual server
}
Let's keep moving forward together.

Dapan


How do you do the caching and the rate limiting?