Windows TCP Window Scaling hitting a plateau too early

Scenario: We have a number of Windows clients regularly uploading large files (FTP / SVN / HTTP PUT / SCP) to Linux servers that are roughly 100-160ms away. We have 1Gbit/s synchronous bandwidth at the office and the servers are either AWS instances or physically hosted in US DCs.

The initial report was that uploads to a new server instance were much slower than they could be. This bore out in testing and from multiple locations; clients were seeing a stable 2-5Mbit/s to the host from their Windows systems.

I broke out iperf -s on an AWS instance and then ran the following from a Windows client in the office:

iperf -c 1.2.3.4

[  5] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 55185
[  5]  0.0-10.0 sec  6.55 MBytes  5.48 Mbits/sec

iperf -w1M -c 1.2.3.4

[  4] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 55239
[  4]  0.0-18.3 sec   196 MBytes  89.6 Mbits/sec

The latter figure can vary significantly on subsequent tests (vagaries of AWS), but is usually between 70 and 130Mbit/s, which is more than enough for our needs. Wiresharking the sessions, I can see the following (a tshark one-liner for pulling these fields out of a capture is sketched after the list):

> iperf -c: Windows SYN - Window 64kb, Scale 1 - Linux SYN, ACK: Window 14kb, Scale: 9 (*512)

> iperf -c -w1M: Windows SYN - Window 64kb, Scale: 9
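
(For anyone checking their own captures: one way to pull the advertised window and scale factor out of the SYN packets is a tshark one-liner along these lines; the field names assume a reasonably recent Wireshark build, and capture.pcap is a placeholder name of mine.)

tshark -r capture.pcap -Y "tcp.flags.syn==1" -T fields -e ip.src -e tcp.window_size_value -e tcp.options.wscale.shift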

Clearly, the link can sustain this high throughput, but I have to explicitly set the window size to make any use of it, which most real-world applications won't let me do. The TCP handshakes use the same starting points in each case, but the forced one scales.
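
(A quick bit of arithmetic of my own, not taken from the traces: at ~100Mbit/s and ~100ms RTT the bandwidth-delay product is roughly 100,000,000 bit/s x 0.1 s / 8 = 1.25MB, so an unscaled 64kB window can never fill this path, while the forced 1MB window is at least in the right ballpark.)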

By contrast, from a Linux client on the same network a straight iperf -c (using the system default 85kb) gives me:

[  5] local 10.169.40.14 port 5001 connected with 1.2.3.4 port 33263
[  5]  0.0-10.8 sec   142 MBytes   110 Mbits/sec

Without any forcing, it scales as expected. This can't be something in the intervening hops or our local switches/routers, and it seems to affect Windows 7 and 8 clients alike. I've read lots of guides on auto-tuning, but these are typically about disabling scaling altogether to work around terrible home networking kit.

Can anyone tell me what's happening here and give me a way of fixing it? (Preferably something I can push into the registry via GPO.)
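
(For anyone wanting to reproduce the Windows-side checks, the state I've been comparing against can be dumped with the standard netsh commands from an elevated prompt, and the receive-window auto-tuning level forced back to its default if it has been changed:)

netsh interface tcp show global
netsh interface tcp show heuristics
netsh interface tcp set global autotuninglevel=normal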

Notes

The AWS Linux instance in question has the following kernel settings applied in sysctl.conf:

net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.rmem_default = 1048576
net.core.wmem_default = 1048576
net.ipv4.tcp_rmem = 4096 1048576 16777216
net.ipv4.tcp_wmem = 4096 1048576 16777216
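
(Not part of the original config dump, but for completeness: these only take effect once reloaded, and the values actually in force can be confirmed with, for example:)

sysctl -p
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem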

I've used dd if=/dev/zero | nc redirecting to /dev/null at the server end to rule out iperf itself and remove any other possible bottlenecks, but the results are much the same. Tests with ncftp (Cygwin, native Windows, Linux) scale in much the same way as the above iperf tests on their respective platforms.
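
A minimal sketch of that dd/nc test, with placeholder host, port and size (the exact values I used aren't recorded here); note that some netcat variants want -l -p rather than plain -l:

server$ nc -l 5001 > /dev/null
client$ dd if=/dev/zero bs=1M count=500 | nc 1.2.3.4 5001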

Edit

One other consistent thing I've spotted here that might be relevant:

This is the first second of the 1MB capture, zoomed in [time-sequence graph not reproduced here]. You can see Slow Start in action as the window scales up and the buffer gets bigger. There's then a tiny plateau of ~0.2s, exactly at the point where the default-window iperf test flattens out forever. This one of course scales to far dizzier heights, but it's curious that there's this pause in the scaling (the values are 1022 bytes * 512 = 523264) before it does so.

Update - 30th June.

Following up on the various responses:

> Enabling CTCP (the exact commands are sketched after this list) - this makes no difference; window scaling is identical. (If I understand this correctly, the setting increases the rate at which the congestion window is enlarged rather than the maximum size it can reach.)
> Enabling TCP timestamps - no change here either.
> Nagle's algorithm - that makes sense, and at least it means I can probably ignore that particular blip in the graph as any indication of the problem.
> pcap files: zip file available here: https://www.dropbox.com/s/104qdysmk01lnf6/iperf-pcaps-10s-Win%2BLinux-2014-06-30.zip (anonymised with bittwiste, extracts to ~150MB as there's one from each OS client for comparison)
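
For reference, the CTCP and timestamp toggles above were the standard netsh knobs (Windows 7 syntax, elevated prompt); I'm reconstructing the commands here rather than quoting my shell history:

netsh interface tcp set global congestionprovider=ctcp
netsh interface tcp set global timestamps=enabled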

Update 2 - 30th June

OK, so following up on Kyle's suggestion, I've enabled CTCP and disabled chimney offloading:

TCP Global Parameters

----------------------------------------------
Receive-Side Scaling State          : enabled
Chimney Offload State               : disabled
NetDMA State                        : enabled
Direct Cache Acess (DCA)            : disabled
Receive Window Auto-Tuning Level    : normal
Add-On Congestion Control Provider  : ctcp
ECN Capability                      : disabled
RFC 1323 Timestamps                 : enabled
Initial RTO                         : 3000
Non Sack Rtt Resiliency             : disabled

But sadly, no change in throughput.

I do have a cause/effect question here, though: the graphs are of the RWIN value set in the server's ACKs to the client. With Windows clients, am I right in thinking that Linux isn't scaling this value beyond that low point because the client's limited CWIN prevents even that buffer from being filled? Could there be some other reason that Linux is artificially limiting the RWIN?

Note: I've tried turning ECN on for the hell of it; no change there.

Update 3 - 31st June.

No change following disabling heuristics and RWIN auto-tuning. Have updated the Intel network drivers to the latest (12.10.28.0) with software that exposes the function tweaks via Device Manager tabs. The card is an 82579V chipset on-board NIC - (I'm going to do some more testing from clients with Realtek or other vendors).
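
(To be explicit about what "disabling heuristics and RWIN auto-tuning" meant in practice, these were the usual netsh toggles, restored afterwards with heuristics enabled / autotuninglevel=normal:)

netsh interface tcp set heuristics disabled
netsh interface tcp set global autotuninglevel=disabled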

Focusing on the NIC for a moment, I've tried the following (mostly just ruling out unlikely culprits):

> Increased receive buffers from 256 to 2k and transmit buffers from 512 to 2k (both now at their maximum) - no change
> Disabled all IP/TCP/UDP checksum offloading - no change.
> Disabled Large Send Offload - nada.
> Turned off IPv6 and QoS scheduling - nowt. (A scripted equivalent of these NIC tweaks is sketched after this list.)
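
For anyone who would rather script these NIC tweaks than click through Device Manager, a rough PowerShell equivalent on Windows 8 / Server 2012 or later looks something like the following; the adapter name "Ethernet" and the Receive/Transmit Buffers display names are driver-dependent assumptions on my part:

Get-NetAdapterAdvancedProperty -Name "Ethernet"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Receive Buffers" -DisplayValue 2048
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Transmit Buffers" -DisplayValue 2048
Disable-NetAdapterChecksumOffload -Name "Ethernet"
Disable-NetAdapterLso -Name "Ethernet"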

Update 3 - 3rd July

In an attempt to eliminate the Linux server end, I started up a Server 2012R2 instance and repeated the tests using iperf (Cygwin binary) and NTttcp.

With iperf, I had to explicitly specify -w1m on both ends before the connection would scale beyond ~5Mbit/s. (Incidentally, this can be checked: the BDP of ~5Mbit/s at 91ms latency is almost precisely 64kb. Spot the limit...)
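
Spelling that arithmetic out (my own working): a 64kB window that never scales allows one window per round trip, i.e. 65536 bytes x 8 / 0.091 s ≈ 5.8Mbit/s, which is almost exactly where the un-forced tests plateau.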

The ntttcp binaries did not show this limitation. Using ntttcpr -m 1,1.2.3.5 on the server and ntttcp -s -m 1,1.2.3.5 -t 10 on the client, I can see much better throughput:

Copyright Version 5.28
Network activity progressing...


Thread  Time(s) Throughput(KB/s) Avg B / Compl
======  ======= ================ =============
     0    9.990         8155.355     65536.000

#####  Totals:  #####

   Bytes(MEG)    realtime(s) Avg Frame Size Throughput(MB/s)
================ =========== ============== ================
       79.562500      10.001       1442.556            7.955

Throughput(Buffers/s) Cycles/Byte       Buffers
===================== =========== =============
              127.287     308.256      1273.000

DPCs(count/s) Pkts(num/DPC)   Intr(count/s) Pkts(num/intr)
============= ============= =============== ==============
     1868.713         0.785        9336.366          0.157

Packets Sent Packets Received Retransmits Errors Avg. cpu %
============ ================ =========== ====== ==========
       57833            14664           0      0      9.476

8MB/s puts it up at the levels I was getting with explicitly large windows in iperf. Oddly though, 80MB in 1273 buffers = a 64kB buffer again. A further Wireshark shows a good, variable RWIN coming back from the server (scale factor 256) that the client seems to fulfil; so perhaps ntttcp is misreporting the send window.
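
(Quick sanity check on that figure, my own arithmetic: 79.56MB / 1273 buffers ≈ 0.0625MB = 64kB per buffer, and 8155KB/s works out to roughly 65Mbit/s.)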

Update 4 - 3rd July

At @karyhead's request, I've done some more testing and generated a few more captures, here:
https://www.dropbox.com/s/dtlvy1vi46x75it/iperf%2Bntttcp%2Bftp-pcaps-2014-07-03.zip

> Two more iperfs, both from Windows to the same Linux server as before (1.2.3.4): one with a 128k socket size and default 64k window (restricted to ~5Mbit/s again) and one with a 1MB send window and default 8kb socket size (scales higher).
> One ntttcp trace from the same Windows client to a Server 2012R2 EC2 instance (1.2.3.5). Here the throughput scales well. Note: NTttcp does something odd on port 6001 before it opens the test connection; not sure what's happening there.
> One FTP data trace, uploading 20MB of /dev/urandom to a near-identical Linux host (1.2.3.6) using Cygwin ncftp. Again the limit is there, and the pattern is much the same using Windows Filezilla. (A sketch of how this upload was driven follows this list.)
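
A rough reconstruction of that FTP upload (file name, credentials and remote path here are placeholders of mine, not the actual ones used):

dd if=/dev/urandom of=test20M.bin bs=1M count=20
ncftpput -u someuser -p somepass 1.2.3.6 /upload/ test20M.bin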

Changing the iperf buffer length does make the expected difference to the time-sequence graph (much more vertical sections), but actual throughput is unchanged.

Have you tried enabling Compound TCP (CTCP) on your Windows 7/8 clients?

Please read:

Increasing the sender-side performance for high-BDP transfers

http://technet.microsoft.com/en-us/magazine/2007.01.cableguy.aspx

These algorithms work well for small BDPs and smaller receive window sizes. However, when you have a TCP connection with a large receive window size and a large BDP, such as replicating data between two servers located across a high-speed WAN link with a 100ms round-trip time, these algorithms do not increase the send window fast enough to fully utilize the bandwidth of the connection.

To better utilize the bandwidth of TCP connections in these situations, the Next Generation TCP/IP stack includes Compound TCP (CTCP). CTCP more aggressively increases the send window for connections with large receive window sizes and BDPs. CTCP attempts to maximize throughput on these types of connections by monitoring delay variations and losses. In addition, CTCP ensures that its behavior does not negatively impact other TCP connections.

CTCP is enabled by default in computers running Windows Server 2008 and disabled by default in computers running Windows Vista. You can enable CTCP with the netsh interface tcp set global congestionprovider=ctcp command. You can disable CTCP with the netsh interface tcp set global congestionprovider=none command.
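
(A caveat of mine, not from the quoted article: on Windows 8 / Server 2012 and later, netsh reports the global congestionprovider option as deprecated and the congestion provider is configured per TCP setting template instead; if I have the PowerShell parameters right for that OS generation, it's something along the lines of:)

Set-NetTCPSetting -SettingName InternetCustom -CongestionProvider CTCP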

Edit 6/30/2014

To see whether CTCP is really "on":

> netsh int tcp show global

The OP said:

If I understand this correctly, this setting increases the rate at which the congestion window is enlarged rather than the maximum size it can reach

CTCP aggressively increases the send window

http://technet.microsoft.com/en-us/library/bb878127.aspx

Compound TCP

The existing algorithms that prevent a sending TCP peer from overwhelming the network are known as slow start and congestion avoidance. These algorithms increase the amount of segments that the sender can send, known as the send window, when initially sending data on the connection and when recovering from a lost segment. Slow start increases the send window by one full TCP segment for either each acknowledgement segment received (for TCP in Windows XP and Windows Server 2003) or for each segment acknowledged (for TCP in Windows Vista and Windows Server 2008). Congestion avoidance increases the send window by one full TCP segment for each full window of data that is acknowledged.

These algorithms work well for LAN media speeds and smaller TCP window sizes. However, when you have a TCP connection with a large receive window size and a large bandwidth-delay product (high bandwidth and high delay), such as replicating data between two servers located across a high-speed WAN link with a 100 ms round trip time, these algorithms do not increase the send window fast enough to fully utilize the bandwidth of the connection. For example, on a 1 Gigabit per second (Gbps) WAN link with a 100 ms round trip time (RTT), it can take up to an hour for the send window to initially increase to the large window size being advertised by the receiver and to recover when there are lost segments.

To better utilize the bandwidth of TCP connections in these situations, the Next Generation TCP/IP stack includes Compound TCP (CTCP). CTCP more aggressively increases the send window for connections with large receive window sizes and large bandwidth-delay products. CTCP attempts to maximize throughput on these types of connections by monitoring delay variations and losses. CTCP also ensures that its behavior does not negatively impact other TCP connections.

In testing performed internally at Microsoft, large file backup times were reduced by almost half for a 1 Gbps connection with a 50ms RTT. Connections with a larger bandwidth-delay product can have even better performance. CTCP and Receive Window Auto-Tuning work together for increased link utilization and can result in substantial performance gains for large bandwidth-delay product connections.

