
Forums before death by AOL, social media and spammers... "We can't have nice things"

   comp.protocols.tcp-ip      TCP and IP network protocols.      14,669 messages   

[   << oldest   |   < older   |   list   |   newer >   |   newest >>   ]

   Message 13,271 of 14,669   
   Rick Jones to chris   
   Re: TCP slow restart?   
   14 Dec 09 23:03:24   
   
   24e12c4b   
   From: rick.jones2@hp.com   
      
   chris  wrote:   
   > When the application is actually sending, things go well: data rates
   > are acceptable, and I can clearly see TCP expanding the transmit
   > window one segment at a time.  The transmission rate and the gap
   > between segments are quite smooth and steady during these intervals.
   > In-flight data tends to peak around 100 KB.  Packets are not
   > frequently lost during steady-state operation.
      
   What is the precise definition of "not frequently lost" in this
   situation?
      
   > The transmission then stops abruptly.  After some time, the
   > transmission restarts, just as abruptly as it stopped.  This is
   > where things get ugly.  Instead of these segments being spaced out
   > nicely, the whole congestion window is dumped onto the wire at the
   > server's line rate (100Mb/s).
      
   > These bursts tend to be 16 segments (just under 24KB) long.  ...But   
   > they're enough to overflow the buffer allocated for them on a   
   > downstream device.   
      
   "Allocated for them" - do you mean to imply that there are flow-specific   
   buffers in this device?   
      
   > Usually the last few segments are delivered, but the 12 or so
   > preceding segments get lost.
      
   I take it the network on the "other side" of the downstream device is   
   not 100 Mb/s?  Or that the downstream device is incapable of keeping   
   up with a 100 Mb/s data flow?   
      
   > The loss has a devastating effect on the (previously optimistic)
   > congestion window.  Throughput after one of these events takes a
   > long time to recover because the round-trip latency is around 300ms.
      
   > A change to the queue sizes and drop scheme of the downstream   
   > routers is an obvious strategy, but the devices aren't under my   
   > control.  Further, the circuit is always busy, so we have to assume   
   > that other TCP streams have filled the void created when this   
   > application ceased transmission.  When it starts back up without   
   > slow-start, something is going to have to give.   
      
   Even if it did start back up with slow start, the fact that the burst
   after the idle is large enough to overflow the downstream device
   indicates that the sending TCP had already calculated a cwnd larger
   than the downstream device's buffering... so while following slow
   start after idle might lessen the effects, TCP is still going to be
   trying to operate along the ragged edge of what your downstream
   device can support.
      
   > I'm looking for suggestions on how to smooth the behavior of the
   > sending TCP when the application resumes sending after a quiet
   > period.  Windows-specific or general TCP tuning suggestions would
   > both be useful.
      
   > I'm also looking for a pointer about what's supposed to happen to the
   > cwnd after inactivity like this.  Is Windows doing the right thing
   > here?
      
   "Right" is subjective, and I think someone else has already   
   pointed-out the "SHOULD" for what TCP uses as a congestion window   
   after an idle.  Of course, if during these gaps the sending system   
   does not think itself idle...   
      
   One thing to consider is that a sending TCP will never send more data   
   at one time than the minimum of the cwnd and the receiver's advertised   
   window.  So, if the receiver advertised no more window than the size   
   of the buffer of this downstream device, the sending TCP would not be   
   able to send more data at one time than could fit in the buffers.   
      
   Now, what that means for performance over a 300 millisecond round-trip   
   path will depend on just how slow that "other side" of the downstream   
   device happens to be...   
      
   Throughput <= EffectiveWindow/RoundTripTime   
      
   where EffectiveWindow will be the minimum of:   
      
   a) receiver's advertised window (the window field in the TCP header)   
   b) the sending TCP's congestion window   
   c) the sending side's SO_SNDBUF size   
   d) the quantity of data the sending application will send before   
      waiting for a response from the remote application.   
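   That bound can be written out directly as a quick sanity check (the
   function name and parameter names are mine, not part of anything
   standard):

   ```python
   def throughput_bound(rwnd, cwnd, sndbuf, app_chunk, rtt):
       """Upper bound on one-way TCP throughput:
       Throughput <= EffectiveWindow / RTT, where EffectiveWindow is
       the minimum of the four limits (a) through (d) above."""
       effective_window = min(rwnd, cwnd, sndbuf, app_chunk)
       return effective_window / rtt

   # The 100 KB in-flight peak over the 300 ms RTT from the original
   # post, assuming none of the other three limits binds first:
   print(throughput_bound(100 * 1024, 100 * 1024, 100 * 1024,
                          100 * 1024, 0.3))  # bytes per second
   ```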
      
   I cannot say with certainty it would help the large drops, but I will   
   ask if Selective ACKnowledgement (SACK) is enabled on these   
   connections?   
      
   rick jones   
      
   Perhaps overly simplified, but if there is only 24 KB of buffering   
   allocated to a TCP connection operating over a 300 ms path, that   
   suggests one is not expecting more than:   
      
   Tput < 24 KB / 0.3 s

   or 80 KB/s - is that the expected/desired transfer rate?  If that is
   known a priori, one might also consider rate-limiting the sending
   application, especially if there is no control over the receiver's
   advertised window.
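   (That is 24 * 1024 / 0.3 = 81,920 bytes per second.)  One crude way
   to rate-limit at the application layer is to pace writes instead of
   handing the stack one large burst after a quiet period.  A sketch
   only - the helper name is mine, socket setup is omitted:

   ```python
   import time

   def paced_send(send, data, rate_bytes_per_s, chunk=8 * 1024):
       """Feed `data` to `send` in `chunk`-sized pieces, sleeping after
       each piece so the long-run rate stays near rate_bytes_per_s.
       This keeps the application from presenting TCP with a full
       window's worth of data all at once after an idle period."""
       for off in range(0, len(data), chunk):
           piece = data[off:off + chunk]
           send(piece)
           time.sleep(len(piece) / rate_bytes_per_s)
   ```

   e.g. `paced_send(sock.sendall, payload, 80 * 1024)` for the 80 KB/s
   figure above.  This only smooths what the application offers; the
   stack's cwnd behavior after idle is still whatever it is.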
      
   --   
   firebug n, the idiot who tosses a lit cigarette out his car window   
   these opinions are mine, all mine; HP might not want them anyway... :)   
   feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...   
      
   --- SoupGate-Win32 v1.05   
    * Origin: you cannot sedate... all the things you hate (1:229/2)   



(c) 1994,  bbs@darkrealms.ca