As we all know, HTTP/1.1 supports keepalive, which cuts down on connection setup and so improves performance.

So I ran a quick test on a single machine.

First, a configuration where upstream keepalive is not actually in effect. Note that the `keepalive 32` directive is present in the upstream block, but without `proxy_http_version 1.1` and a cleared `Connection` header in the proxy location it does nothing:
```nginx
upstream test_server {
    server 127.0.0.1:8999;
    server 127.0.0.1:8997;
    keepalive 32;
}

server {
    listen 8998;
    server_name _;
    access_log /opt/server/nginx/log/8998-access.log main;
    error_log /opt/server/nginx/log/8998-error.log;

    location / {
        proxy_pass http://test_server;
    }
}

server {
    listen 8999;
    server_name _;
    access_log /opt/server/nginx/log/8999-access.log main;
    error_log /opt/server/nginx/log/8999-error.log;

    location / {
        return 200 '{"status":"OK","entities":[]}';
    }
}

server {
    listen 8997;
    server_name _;
    access_log /opt/server/nginx/log/8997-access.log main;
    error_log /opt/server/nginx/log/8997-error.log;

    location / {
        return 200 '{"status":"OK","entities":[]}';
    }
}
```
The test results:
```
$ wrk -t1 -c32 -d60s http://127.0.0.1:8998/
Running 1m test @ http://127.0.0.1:8998/
  1 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.61ms    2.14ms  76.87ms   99.52%
    Req/Sec    21.31k     2.22k   26.73k    69.00%
  1272221 requests in 1.00m, 260.85MB read
Requests/sec:  21202.84
Transfer/sec:      4.35MB
```
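To make the connection churn concrete, here is a minimal stdlib-Python sketch of what "no keepalive" means on the wire: every request opens and tears down its own TCP connection. The server, handler, and port 18999 are my own illustration, not part of the nginx setup above.

```python
# Sketch: without keepalive, every request pays for a brand-new TCP connection.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    connections = 0  # how many TCP connections the server accepted

    def setup(self):
        type(self).connections += 1
        super().setup()

    def do_GET(self):
        body = b'{"status":"OK","entities":[]}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 18999), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

for _ in range(5):
    conn = http.client.HTTPConnection("127.0.0.1", 18999)
    conn.request("GET", "/", headers={"Connection": "close"})
    conn.getresponse().read()
    conn.close()  # next iteration: another 3-way handshake

server.shutdown()
print(Handler.connections)  # 5 requests -> 5 separate TCP connections
```

This is the pattern the first config produces between nginx and its upstreams, just without the handshake latency being visible on loopback.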
Now let's turn keepalive on by adding `proxy_http_version 1.1;` and `proxy_set_header Connection "";` to the proxy location — both are required for the upstream `keepalive` directive to take effect:
```nginx
upstream test_server {
    server 127.0.0.1:8999;
    server 127.0.0.1:8997;
    keepalive 32;
}

server {
    listen 8998;
    server_name _;
    access_log /opt/server/nginx/log/8998-access.log main;
    error_log /opt/server/nginx/log/8998-error.log;

    location / {
        proxy_pass http://test_server;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

server {
    listen 8999;
    server_name _;
    access_log /opt/server/nginx/log/8999-access.log main;
    error_log /opt/server/nginx/log/8999-error.log;

    location / {
        return 200 '{"status":"OK","entities":[]}';
    }
}

server {
    listen 8997;
    server_name _;
    access_log /opt/server/nginx/log/8997-access.log main;
    error_log /opt/server/nginx/log/8997-error.log;

    location / {
        return 200 '{"status":"OK","entities":[]}';
    }
}
```
This time the results are:
```
$ wrk -t1 -c32 -d60s http://127.0.0.1:8998/
Running 1m test @ http://127.0.0.1:8998/
  1 threads and 32 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   817.00us    3.10ms 107.27ms   99.47%
    Req/Sec    49.73k     6.16k   74.11k    75.50%
  2968158 requests in 1.00m, 608.58MB read
Requests/sec:  49468.57
Transfer/sec:     10.14MB
```
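The keepalive case looks like this at the socket level: one TCP connection carries a long run of sequential GETs. Again a stdlib-Python sketch with an arbitrary port (18998 here), not the actual nginx traffic:

```python
# Sketch: with HTTP/1.1 keepalive, many GETs ride on a single TCP connection.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep the connection open between requests
    connections = 0                # how many TCP connections were accepted

    def setup(self):
        type(self).connections += 1
        super().setup()

    def do_GET(self):
        body = b'{"status":"OK","entities":[]}'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # needed for 1.1 reuse
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 18998), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", 18998)  # one connection...
for _ in range(100):
    conn.request("GET", "/")
    conn.getresponse().read()                           # ...carrying 100 GETs
conn.close()
server.shutdown()
print(Handler.connections)  # 1 connection for all 100 requests
```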
So one run does about 21,000 requests per second and the other about 49,000 — roughly a 2.3x difference.

The reason is easy to see: without keepalive, a lot of time goes into TCP three-way handshakes and four-way teardowns. On localhost these steps are very cheap, but over a real network the cost is far from negligible.
With keepalive enabled, we can also see clearly how many GET requests each TCP connection carries.

Counting them gives 100 GETs per connection, which matches the default `keepalive_requests` limit of 100 in my nginx 1.16.1. In newer releases (1.19.10 and later) the default was raised to 1000, and the lifetime of each upstream connection can also be capped separately with `keepalive_time`.
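The behavior behind that count can be sketched as well: the server closes each keepalive connection after a fixed number of requests, forcing the client to reconnect — which is what a `keepalive_requests`-style cap does. This is my own stdlib-Python imitation (port 18997 and the cap logic are illustrative), not nginx's implementation:

```python
# Sketch: mimicking a keepalive_requests-style cap - the server closes each
# connection after MAX_REQS requests, so the client has to reconnect.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

MAX_REQS = 100  # like the old nginx default of keepalive_requests 100

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"
    connections = 0

    def setup(self):
        type(self).connections += 1
        self.handled = 0  # requests served on this connection
        super().setup()

    def do_GET(self):
        self.handled += 1
        if self.handled >= MAX_REQS:
            self.close_connection = True  # hit the per-connection cap
        body = b'{"status":"OK","entities":[]}'
        self.send_response(200)
        if self.close_connection:
            self.send_header("Connection", "close")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 18997), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = None
for _ in range(250):
    if conn is None:
        conn = http.client.HTTPConnection("127.0.0.1", 18997)
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()
    if resp.getheader("Connection") == "close":
        conn.close()
        conn = None  # reconnect on the next request
if conn:
    conn.close()
server.shutdown()
print(Handler.connections)  # 250 requests / 100 per connection -> 3 connections
```

Counting requests per connection in the access log (e.g. via the `$connection_requests` variable in the log format) is how you would verify the same thing against a real nginx.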