Amazon Web Services: AWS EC2 with Nginx & PHP-FPM - can't push CPU above 50%


I am trying to test AWS auto scaling, and for that I need to push an EC2 instance past a trigger point (say, CPU above 80% for a few minutes) that will cause another instance to launch.

The problem I have found is that CPU usage will not go above 50%.

I am using Nginx, and I have raised worker_connections from 1024 to a much larger number. I have set worker_processes to auto. The fastcgi parameters are set as follows:

fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 128k;
fastcgi_buffers 256 16k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_max_temp_file_size 0;
fastcgi_intercept_errors off;
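For reference, the worker settings described above would look roughly like this in nginx.conf (a sketch; the raised connection count is illustrative, not the asker's exact value):

```nginx
# sketch of the settings described above
worker_processes auto;         # one worker per CPU core

events {
    worker_connections 4096;   # raised from the default 1024
}
```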
I have set up PHP-FPM with pm = dynamic as shown below, though I have also tried tuning these numbers further without any real difference:

pm.max_children = 50
pm.start_servers = 3
pm.min_spare_servers = 3
pm.max_spare_servers = 50
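One thing worth checking with pm = dynamic is whether the pool actually hits pm.max_children under load: if all 50 children are busy, further requests queue instead of burning more CPU. A sketch, assuming a standard pool file location; the path is an example:

```ini
; in the pool config (e.g. /etc/php-fpm.d/www.conf)
; exposes a status endpoint showing "active processes" and
; the "max children reached" counter
pm.status_path = /status
```

Nginx then needs a location block passing /status to the FPM socket; after a reload, a `curl localhost/status` during a load test shows whether the pool is saturated.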
I am running siege, and I can sustain 1000 concurrent connections for 30 seconds, getting roughly 3500 responses at 100% availability (no drops) and no errors reported. I have also run siege from 3 EC2 instances at once, each with 1000 concurrent connections; that sometimes produces some socket errors, but the CPU still never spikes above 50%. Usually the result is still about 3500 responses spread across the 3 load generators (so fewer responses each).

My PHP slow log (threshold 10 seconds) showed a few queries, so I moved to a larger database (an AWS RDS instance with the maximum possible IOPS, just for testing); that made no difference. I also tried a larger EC2 instance to see what would happen, and the CPU still would not go above 50%.

Finally, here is my /etc/sysctl.conf:

# Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Controls IP packet forwarding
net.ipv4.ip_forward = 0

# Controls source route verification
net.ipv4.conf.default.rp_filter = 1

# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0

# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0

# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1

# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1

# Disable netfilter on bridges.
# net.bridge.bridge-nf-call-ip6tables = 0
# net.bridge.bridge-nf-call-iptables = 0
# net.bridge.bridge-nf-call-arptables = 0

# Controls the default maximum size of a message queue
kernel.msgmnb = 65536

# Controls the maximum size of a message, in bytes
kernel.msgmax = 65536

# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296

# Adam Added Below
kernel.pid_max = 262144
net.ipv4.tcp_window_scaling = 1
vm.max_map_count = 262144


# Do less swapping
fs.file-max = 2097152
vm.swappiness = 10
vm.dirty_ratio = 60
vm.dirty_background_ratio = 2


### GENERAL NETWORK SECURITY OPTIONS ###

# Number of SYN-ACK retries for passive TCP connections
net.ipv4.tcp_synack_retries = 2

# Allowed local port range
#net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.ip_local_port_range = 2000 65535

# Protect Against TCP Time-Wait
net.ipv4.tcp_rfc1337 = 1

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# Decrease the time default value for connections to keep alive
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15


### TUNING NETWORK PERFORMANCE ###

# Default Socket Receive Buffer
net.core.rmem_default = 31457280

# Maximum Socket Receive Buffer
net.core.rmem_max = 12582912

# Default Socket Send Buffer
net.core.wmem_default = 31457280

# Maximum Socket Send Buffer
net.core.wmem_max = 12582912

# Maximum Number of Packets
net.core.netdev_max_backlog = 30000

# Increase the maximum total buffer-space allocatable
# This is measured in units of pages (4096 bytes)
net.ipv4.tcp_mem = 65536 131072 262144
net.ipv4.udp_mem = 65536 131072 262144

# Increase the read-buffer space allocatable
net.ipv4.tcp_rmem = 8192 87380 16777216
net.ipv4.udp_rmem_min = 16384
net.core.rmem_default = 131072
net.core.rmem_max = 16777216

# Increase the write-buffer-space allocatable
net.ipv4.tcp_wmem = 8192 65536 16777216
net.ipv4.udp_wmem_min = 16384
net.core.wmem_default = 131072
net.core.wmem_max = 16777216

# Increase number of incoming connections
net.core.somaxconn = 32768

# Increase number of incoming connections backlog
net.core.netdev_max_backlog = 65536

# Increase the maximum amount of option memory buffers
net.core.optmem_max = 25165824

# Increase the tcp-time-wait buckets pool size to prevent simple DOS attacks
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
Is it possible that the server is being limited to around 50% CPU? (Memory usage sits at only about 25%.) I have never gotten the CPU past 55%.
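One explanation worth ruling out (my suggestion, not from the question): an aggregate CPU figure can hide a single-core bottleneck. On a 2-vCPU instance, one fully saturated core reads as "50%". A sketch that samples /proc/stat twice and prints approximate per-core busy percentages (steal time counts as busy here, since the guest cannot use it):

```shell
# Sample per-CPU counters from /proc/stat one second apart and
# compute busy% per core as 1 - (delta idle+iowait)/(delta total).
t0=$(mktemp); t1=$(mktemp)
grep '^cpu[0-9]' /proc/stat > "$t0"
sleep 1
grep '^cpu[0-9]' /proc/stat > "$t1"
awk 'NR==FNR { idle0[$1] = $5 + $6
               tot0[$1]  = $2+$3+$4+$5+$6+$7+$8+$9; next }
     { di = $5 + $6 - idle0[$1]
       dt = $2+$3+$4+$5+$6+$7+$8+$9 - tot0[$1]
       if (dt > 0) printf "%s: %.0f%% busy\n", $1, 100 * (1 - di/dt) }' "$t0" "$t1"
rm -f "$t0" "$t1"
```

If this shows one core pegged while the other idles during a siege run, the fix is parallelism (more PHP-FPM children actually doing work), not kernel tuning.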

I want to be able to push a server past 90% and then launch another EC2 instance; that way I get my money's worth out of each server.

Any suggestions on why I might be hitting this limit, and what I could try?


Thanks

If you just want to increase the server load to trigger auto scaling, run this:

loadGen() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; loadGen; read; killall dd
To generate more load (on a multi-core machine), just add more expressions between the pipes:

loadGen() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; loadGen; read; killall dd
It will get the job done.
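A parameterized take on the same trick (my sketch, not part of the original answer): spawn one dd worker per core instead of hand-chaining pipes, so the load matches the instance's vCPU count.

```shell
# Start N CPU-bound dd workers in the background (default: one per core).
loadGen() {
  n=${1:-$(nproc)}
  i=0
  while [ "$i" -lt "$n" ]; do
    dd if=/dev/zero of=/dev/null &
    i=$((i + 1))
  done
}

# Stop all dd workers (pkill used instead of killall for portability).
stopLoad() { pkill -x dd; }
```

Usage is the same idea as above: run `loadGen`, watch the CPU climb, then `stopLoad` when the scaling trigger has fired.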

[]s

Auro

Use the stress command:

stress -c 1

The -c flag sets the number of CPU-bound workers to spawn (match it to the instance's vCPU count to load all cores).

If you do not use many AWS services, can run your own database (instead of RDS), and can organize your own scaling, it can be more cost-effective to avoid Amazon entirely.

I just read up on that... it's a bit ugly... lol. I hear you on the cost-effectiveness... I'll think about it... thanks.

I have looked into dedicated servers for the "base" load with EC2 for scaling, and that works out noticeably better financially. I have not yet found any scaling service (e.g. an API) that handles integrating dedicated servers with EC2 (or other cloud services), so I have been writing my own scaling code and external monitoring.

If you want to see your application use more CPU for a given workload, you need to test on a smaller instance, not a larger one... But what value are you using to arrive at the 50% figure you report (how did you obtain it)?