How can I efficiently send large files over a single TCP connection?


I am sending large objects over the network, and I noticed that sending over a single network connection is much slower than spreading the data across multiple connections.

Server code:

use async_std::{
    io::{BufWriter, Write},
    net::TcpListener,
    prelude::*,
    task,
};
use bench_utils::{end_timer, start_timer};
use futures::stream::{FuturesOrdered, StreamExt};

async fn send(buf: &[u8], writer: &mut (impl Write + Unpin)) {
    // Send the message length
    writer.write_all(&(buf.len() as u64).to_le_bytes()).await.unwrap();
    // Send the rest of the message
    writer.write_all(buf).await.unwrap();
    writer.flush().await.unwrap();
}

fn main() {
    task::block_on(async move {
        let listener = TcpListener::bind("0.0.0.0:8000").await.unwrap();
        let mut incoming = listener.incoming();
        let mut writers = Vec::with_capacity(16);
        for _ in 0..16 {
            let stream = incoming.next().await.unwrap().unwrap();
            writers.push(BufWriter::new(stream))
        }

        // 1 GiB payload
        let buf = vec![0u8; 1 << 30];
        
        let send_time = start_timer!(|| "Sending buffer across 1 connection");
        send(&buf, &mut writers[0]).await;
        end_timer!(send_time);

        let send_time = start_timer!(|| "Sending buffer across 16 connections");
        writers
            .iter_mut()
            .zip(buf.chunks(buf.len() / 16))
            // Send one 64 MiB chunk down each connection, all concurrently
            .map(|(w, chunk)| send(chunk, w))
            .collect::<FuturesOrdered<_>>()
            .collect::<Vec<_>>()
            .await;
        end_timer!(send_time);
    });
}
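
The client code isn't shown in the question, but given the framing in send, its read side presumably looks something like this minimal sketch (the recv name and the unwrap-based error handling are my own):

use async_std::{io::Read, prelude::*};

// Hypothetical counterpart to `send`: reads one length-prefixed message.
async fn recv(reader: &mut (impl Read + Unpin)) -> Vec<u8> {
    // Read the 8-byte little-endian length prefix
    let mut len_bytes = [0u8; 8];
    reader.read_exact(&mut len_bytes).await.unwrap();
    let len = u64::from_le_bytes(len_bytes) as usize;
    // Read exactly `len` payload bytes
    let mut buf = vec![0u8; len];
    reader.read_exact(&mut buf).await.unwrap();
    buf
}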
Server results:

Start:   Sending buffer across 1 connection
End:     Sending buffer across 1 connection....................................55.134s
Start:   Sending buffer across 16 connections
End:     Sending buffer across 16 connections..................................4.19s

Client results:

Start:   Reading buffer from 1 connection
End:     Reading buffer from 1 connection......................................55.396s
Start:   Reading buffer from 16 connections
End:     Reading buffer from 16 connections....................................3.914s
I assume the difference is because the sending connection has to wait for ACKs whenever the TCP buffer fills up (TCP window scaling is enabled on both machines)? Rust doesn't seem to provide an API for changing the size of these buffers.


Is there a way to achieve similar throughput over a single connection? Since all of this goes over a single network interface anyway, having to shuttle the data across multiple streams seems annoying.
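
For what it's worth, the low-level socket2 crate does expose these knobs; a rough sketch of setting the buffers before connecting (assuming socket2 0.5; the sizes are illustrative and the OS may clamp them):

use socket2::{Domain, Socket, Type};
use std::net::SocketAddr;

fn connect_with_big_buffers(addr: SocketAddr) -> std::io::Result<async_std::net::TcpStream> {
    let socket = Socket::new(Domain::IPV4, Type::STREAM, None)?;
    // Set the kernel buffers before connecting, so the window scale
    // negotiated during the handshake can account for them
    socket.set_recv_buffer_size(8 << 20)?;
    socket.set_send_buffer_size(8 << 20)?;
    socket.connect(&addr.into())?;
    // async-std can wrap an already-connected std TcpStream
    Ok(async_std::net::TcpStream::from(std::net::TcpStream::from(socket)))
}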

Another answer suggests that one stream should be enough given a large enough window size, and that the OS should scale the window accordingly. (A single connection's throughput is capped at roughly window size / round-trip time, so a window smaller than the link's bandwidth-delay product limits throughput no matter how fast the link is.) You could try manually setting a bigger size on the receiving side, but you would have to use tokio's implementation, since I don't believe async-std supports that option. This makes sense: considering that no major networking crate supports setting the buffer sizes, I suspect there may be another way (tokio appears to have removed its buffer-size setter in version 0.3.0; the TcpSocket::set_recv_buffer_size method was not added until later). A comparable method may be missing from similar crates simply because it is generally unnecessary, but it may be worth investigating. It would also be interesting to look at the traffic with Wireshark.
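
A minimal sketch of the tokio approach mentioned above, on the receiving side with tokio 1.x (the address and buffer size are illustrative, and the OS may clamp the requested size):

use tokio::net::TcpSocket;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let socket = TcpSocket::new_v4()?;
    // Request a larger kernel receive buffer before connecting, so the
    // window scale negotiated during the handshake can reflect it
    socket.set_recv_buffer_size(8 << 20)?;
    let stream = socket.connect("127.0.0.1:8000".parse().unwrap()).await?;
    // ... read the length-prefixed messages from `stream` as before ...
    Ok(())
}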