I'm going through some revision and I've been stumped by a TCP question. Maybe someone can give me a quick hint or push in the right direction, just so I can get past this section.
"Why does the sending entity in TCP need to consider the size of the congestion window when determining the sliding window size? "
"Why does the sending entity in TCP need to consider the size of the congestion window when determining the sliding window size? "
This is because the congestion window represents the sender's estimate of how much congestion there is in the network, i.e. how much data the path can currently absorb. Congestion control is one of TCP's key features. The receiver's advertised window only tells the sender what the receiver can buffer, so the sender takes its sliding (send) window as the minimum of the two; otherwise it could overload the network even when the receiver has plenty of room. The congestion window itself is updated in three stages.
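To make that concrete, here is a minimal sketch in plain Python of how the usable send window is picked (the names cwnd, rwnd and bytes_in_flight are just illustrative, not taken from any particular TCP stack):

```python
def effective_send_window(cwnd, rwnd, bytes_in_flight):
    """Return how many more bytes the sender may put on the wire.

    cwnd -- congestion window: sender's estimate of what the network can absorb
    rwnd -- receive window advertised by the peer: what the receiver can buffer
    """
    window = min(cwnd, rwnd)            # limited by BOTH the network and the receiver
    return max(0, window - bytes_in_flight)
```

If the sender looked only at rwnd, it could flood the network even though the receiver still has buffer space, which is exactly what the congestion window is there to prevent.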
In the first stage, when a TCP connection starts, the sender sets the congestion window to 1 MSS (maximum segment size) and then ramps it up; this is known as the slow-start phase. The sender starts that small because it is still "estimating" how much data the network path can carry. Btw, even though it is called slow start, the growth is fast: each ACK received increases the congestion window by one MSS, which roughly doubles the window every round-trip time.
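A rough sketch of that per-ACK rule (illustrative Python, assuming an MSS of 1460 bytes; real stacks track this in bytes or segments with more bookkeeping):

```python
MSS = 1460  # assumed maximum segment size in bytes

def on_ack_slow_start(cwnd):
    """Slow start: grow cwnd by one MSS for every ACK received.

    Since roughly cwnd/MSS segments are ACKed in one round-trip time,
    this works out to doubling the congestion window every RTT.
    """
    return cwnd + MSS
```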
In the second stage, once the congestion window reaches the slow-start threshold, ssthresh (yep, there is one!), the sender grows its congestion window additively, by about one MSS per round-trip time -- this is the congestion-avoidance phase. Here, the sender becomes more cautious. Once again, the increase happens upon reception of ACKs.
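The additive increase is commonly approximated per ACK like this (again just a sketch under the same assumed MSS, not any specific implementation):

```python
MSS = 1460  # same assumed segment size as above

def on_ack_congestion_avoidance(cwnd):
    """Additive increase: roughly one MSS per round-trip time.

    Each ACK adds MSS*MSS/cwnd bytes; over the ~cwnd/MSS ACKs that
    arrive in one RTT, the increments sum to about one MSS.
    """
    return cwnd + (MSS * MSS) // cwnd
```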
In the third stage, when a packet is lost (one sign of this is a retransmission timeout), TCP cuts its congestion window back to 1 MSS and starts growing it again, typically after remembering half of the current window as the new slow-start threshold. This is done because the loss is taken as a sign of congestion, so cutting back the congestion window should help relieve the congestion along the path. Unlike in the other stages, the decrease is triggered by the absence of ACKs rather than by their arrival.
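A sketch of the timeout reaction, assuming classic Tahoe-style behaviour (the halving of ssthresh and the 2*MSS floor are my assumptions for illustration; variants like Reno react differently to duplicate ACKs):

```python
MSS = 1460  # same assumed segment size as above

def on_timeout(cwnd):
    """Retransmission timeout: treat it as a sign of congestion.

    Remember half the current window as the new slow-start threshold,
    then drop cwnd back to one MSS so the connection re-enters slow start.
    """
    new_ssthresh = max(cwnd // 2, 2 * MSS)
    new_cwnd = MSS
    return new_cwnd, new_ssthresh
```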