Concurrent web server, Rust vs Go: why is Rust slower here?

Tags: concurrency, rust, webserver

I tried some of the benchmarks from the Rust book, and for comparison I built something similar in Go and ran a benchmark with ApacheBench. While this is just a simple example, the difference is too large: the Go web server is 10x faster at doing the same thing. Since I expected Rust to be faster, or at least on par, I tried several revisions using futures and smol (although my goal was to compare implementations that use only the standard library), but the results were almost identical. Can anyone here suggest changes to the Rust implementation that would make it faster without using a huge number of threads?

Here is the code I used:

The tokio http version was the slowest; the other 3 Rust versions gave almost identical results.

Here are the benchmark results:

Rust with 8 threads (with 100 threads the numbers are closer):

Go:


I only compared your rustws and the Go version. In Go you have unbounded goroutines, even if you limit them all to a single CPU core, whereas in rustws you create a thread pool with 8 threads.
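As a rough sketch of that difference using only the standard library (this is illustrative code, not the poster's rustws): spawning one OS thread per accepted connection removes the pool-size cap, much like Go's per-connection goroutines, at the cost of OS threads being heavier than goroutines.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Accept up to `connections` connections, handling each on its own
// OS thread instead of a fixed 8-thread pool. The accept loop no
// longer caps concurrency at the pool size.
fn serve(listener: TcpListener, connections: usize) {
    for stream in listener.incoming().take(connections) {
        let mut stream = stream.unwrap();
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            let _ = stream.read(&mut buf); // read the request (contents ignored here)
            let _ = stream.write_all(b"HTTP/1.1 200 OK\r\n\r\nhello");
        });
    }
}

fn main() {
    // bind to an ephemeral port so the sketch runs anywhere
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let server = thread::spawn(move || serve(listener, 1));

    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(b"GET / HTTP/1.1\r\n\r\n").unwrap();
    let mut response = String::new();
    client.read_to_string(&mut response).unwrap();
    server.join().unwrap();
    println!("{}", response.contains("200 OK")); // prints true
}
```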


Since your request handler sleeps for 2 seconds on every 10th request, you limit the rustws version to 80/2 = 40 requests per second, which is exactly what you see in the ab results. Go is not subject to this artificial bottleneck, so it shows the maximum it can handle on a single CPU core.
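The arithmetic can be sketched as a back-of-the-envelope check (the helper function here is illustrative, not from the original code): each of the 8 threads completes at most 10 requests per 2-second sleep window, since the 9 fast requests take negligible time next to the sleep.

```rust
// Upper bound on requests/second for a pool where every `batch`-th
// request sleeps `sleep_secs` seconds and the rest are near-instant:
// each thread does `batch` requests per `sleep_secs` window.
fn max_throughput(threads: u64, batch: u64, sleep_secs: u64) -> u64 {
    threads * batch / sleep_secs
}

fn main() {
    // matches the answer's 80/2 = 40 requests per second
    println!("{}", max_throughput(8, 10, 2)); // prints 40
}
```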

I was finally able to get comparable results in Rust using the async_std lib.

❯ ab -c 100 -n 1000 http://localhost:8080/
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        176 bytes

Concurrency Level:      100
Time taken for tests:   2.094 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      195000 bytes
HTML transferred:       176000 bytes
Requests per second:    477.47 [#/sec] (mean)
Time per request:       209.439 [ms] (mean)
Time per request:       2.094 [ms] (mean, across all concurrent requests)
Transfer rate:          90.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   1.7      2       7
Processing:     0  202 599.7      2    2002
Waiting:        0  201 600.1      1    2002
Total:          0  205 599.7      5    2007

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      6
  75%      9
  80%      9
  90%   2000
  95%   2003
  98%   2004
  99%   2006
 100%   2007 (longest request)
Here is the implementation:

use async_std::net::TcpListener;
use async_std::net::TcpStream;
use async_std::prelude::*;
use async_std::task;
use std::fs;
use std::time::Duration;

#[async_std::main]
async fn main() {
    let mut count: i64 = 0;

    let listener = TcpListener::bind("127.0.0.1:8080").await.unwrap(); // set listen port

    loop {
        count += 1;
        let (stream, _) = listener.accept().await.unwrap();
        // spawn a new task to handle the connection (i64 is Copy, no Box needed)
        task::spawn(handle_connection(stream, count));
    }
}

async fn handle_connection(mut stream: TcpStream, count: i64) {
    // Read the first 1024 bytes of data from the stream
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).await.unwrap();

    // add a 2 second delay to every 10th request
    if count % 10 == 0 {
        println!("Adding delay. Count: {}", count);
        task::sleep(Duration::from_secs(2)).await;
    }

    let contents = fs::read_to_string("hello.html").unwrap(); // read html file

    let response = format!("HTTP/1.1 200 OK\r\n\r\n{}", contents);
    stream.write_all(response.as_bytes()).await.unwrap(); // write the whole response, not just a prefix
    stream.flush().await.unwrap();
}
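One further change that helps under load, sketched here with plain std (assuming hello.html never changes while the server runs; the fallback string is a stand-in so the sketch runs without the file): read the file once at first use instead of calling fs::read_to_string on every request, which is a blocking syscall inside an async task.

```rust
use std::sync::OnceLock;

// Cache the page body: the file is read at most once, then every
// request gets a &'static str with no further filesystem access.
static PAGE: OnceLock<String> = OnceLock::new();

fn page() -> &'static str {
    PAGE.get_or_init(|| {
        std::fs::read_to_string("hello.html")
            .unwrap_or_else(|_| "<h1>Hello</h1>".to_string()) // stand-in body
    })
}

fn main() {
    // the response is now built without touching the filesystem again
    let response = format!("HTTP/1.1 200 OK\r\n\r\n{}", page());
    println!("{}", response.starts_with("HTTP/1.1 200 OK")); // prints true
}
```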

tokio_minihttp is just a proof of concept and is not maintained. The multi-threaded web server chapter in the book is only an example. Go's standard library has a well-tested and optimized http implementation. To get an equal comparison, I would try Hyper or Actix-Web.

The post is fine, but the benchmark is not. It is worth updating, or closing/deleting.

@mh cbon what do you mean? Can you elaborate, please?

Because even though some will say ab is not the right tool for benchmarking, at least this post contains something clear and reproducible. Though, since the Rust version is not as good as it should be, it is wrong: an apples-to-pears comparison. I saw your answer below; it improves on this.

Come on, that is not the point. This is a question, not a statement. If the Rust version were already optimal, I would not have needed to ask the question in the first place.

Here are the ways I can think of to address the bottleneck: 1. async code, 2. threads, 3. processes. You could increase the thread limit to a large value like 1000 and see how it runs. Threads are less efficient than goroutines or async code.

In fact, following a comment on Reddit, I was able to get the same performance from the rustws_async example, and when running 10k requests the Rust async implementation was overall a few seconds faster than the Go version.

This Rust version should be faster than the Go version because it is not actually doing the same thing. It only accepts TCP connections and writes the file back; it does not represent a real web server the way the Go version does. The comparison is unfair.

So, would it be fairer if I used an HTTP library, such as hyper, in Rust? Or can you suggest something better for Go?

I wrote a TCP version in Go to make it fairer. There may be some small improvements possible. I will have to run the ab benchmark a few more times to confirm; the http version is about 0.1 seconds faster than the plain TCP version, and for 10,000 requests with 100 concurrent requests it is still about 0.1 seconds slower than Rust.