Computer Networking: Transport Layer (PDF)

Summary

This document covers the transport layer in computer networking: transport-layer services, connectionless transport with UDP, the principles of reliable data transfer, connection-oriented transport with TCP, TCP flow control, and the principles and mechanisms of congestion control.

Full Transcript

Chapter 3: Transport Layer. From Computer Networking: A Top-Down Approach, 8th edition, Jim Kurose and Keith Ross, Pearson, 2020. Transport Layer: 3-1

Chapter 3: roadmap
 Transport-layer services
 Connectionless transport: UDP
 Principles of reliable data transfer
 Connection-oriented transport: TCP
 Principles of congestion control
 TCP congestion control
Transport Layer: 3-2

Transport services and protocols
 provide logical communication between application processes running on different hosts
 transport protocol actions in end systems:
- sender: breaks application messages into segments and passes them to the network layer. This is done by (possibly) breaking the application messages into smaller chunks and adding a transport-layer header to each chunk to create the transport-layer segment.
- receiver: reassembles segments into messages and passes them to the application layer
 two transport protocols are available to Internet applications: TCP and UDP
** Transport-layer protocols are implemented in the end systems but not in network routers: routers do not examine the fields of the transport-layer segment encapsulated within the network-layer datagram.
Transport Layer: 3-3

Transport vs. network layer services and protocols, a household analogy: 12 kids in Ann's house sending letters to 12 kids in Bill's house:
 hosts (also called end systems) = houses
 processes = kids
 app messages = letters in envelopes
 transport protocol = Ann and Bill, who demux to in-house siblings
 network-layer protocol = postal service
Transport Layer: 3-4
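The demultiplexing in the analogy (Ann handing each letter to the right sibling) amounts to a lookup keyed by destination port number. A minimal sketch; the port numbers and process names here are made up for illustration:

```python
# Hypothetical transport-layer demultiplexing table: destination port -> process.
sockets = {53: "dns_process", 161: "snmp_process", 8080: "web_process"}

def demultiplex(dest_port, payload):
    """Deliver a segment's payload to the process bound to dest_port."""
    process = sockets.get(dest_port)
    if process is None:
        return None  # no socket bound to this port: segment is discarded
    return (process, payload)

print(demultiplex(53, b"query"))   # ('dns_process', b'query')
print(demultiplex(9999, b"data"))  # None
```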
Transport vs. network layer services and protocols
 network layer: logical communication between hosts
 transport layer: logical communication between processes; relies on, and enhances, network-layer services
** In the household analogy, the postal service provides logical communication between the two houses: the postal service moves mail from house to house, not from person to person. Ann and Bill, on the other hand, provide logical communication between the kids themselves.
Transport Layer: 3-5

Transport Layer Actions, sender side:
 is passed an application-layer message
 determines segment header field values
 creates the segment (transport header Th + app. msg)
 passes the segment to IP
Transport Layer: 3-6

Transport Layer Actions, receiver side:
 receives the segment from IP
 checks header values
 extracts the application-layer message
 demultiplexes the message up to the application via the socket
Transport Layer: 3-7
Two principal Internet transport protocols
 TCP: Transmission Control Protocol
- reliable, in-order delivery
- congestion control (prevents an excessive amount of traffic)
- flow control
- connection setup
 UDP: User Datagram Protocol
- unreliable, unordered delivery
- no-frills extension of "best-effort" IP
 services not available in either: delay guarantees, bandwidth guarantees
** When designing a network application, the application developer must specify one of these two transport protocols.
Transport Layer: 3-8

Chapter 3: roadmap (repeated; next: Connectionless transport: UDP). Transport Layer: 3-9

Connectionless transport: UDP, the User Datagram Protocol
 "no frills," "bare bones" Internet transport protocol
 "best effort" service: UDP segments may be lost, or delivered out of order to the app
 connectionless: no handshaking between UDP sender and receiver; each UDP segment is handled independently of the others
 Why is there a UDP?
- no connection establishment (which can add RTT delay)
- simple: no connection state at sender or receiver
- small header size (only 8 bytes of UDP overhead)
- no congestion control: UDP can blast away as fast as desired!
Transport Layer: 3-10

UDP use:
 streaming multimedia apps (loss tolerant, rate sensitive); note that the lack of congestion control in UDP can result in high loss rates between a UDP sender and receiver, and in the crowding out of TCP sessions
 DNS (DNS runs over UDP, thereby avoiding TCP's connection-establishment delays)
 SNMP (UDP is used to carry network management data)
 HTTP/3 (in Google's Chrome browser; uses UDP as its underlying transport protocol and implements reliability in an application-layer protocol on top of UDP)
Transport Layer: 3-11
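UDP's connectionless operation is easy to see with Python's socket API: the sender simply addresses a datagram and sends it, with no handshake and no ACK. A loopback sketch; the port is chosen by the OS and the message is illustrative:

```python
# Minimal UDP exchange over loopback: connectionless, no handshaking.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))               # OS picks an ephemeral port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port))  # no connection setup, no ACK

data, addr = receiver.recvfrom(2048)          # each segment handled independently
print(data)  # b'hello'
sender.close()
receiver.close()
```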
UDP: User Datagram Protocol [RFC 768]. Transport Layer: 3-12

UDP: Transport Layer Actions (example: an SNMP client and SNMP server communicating over UDP). Transport Layer: 3-13

UDP sender actions:
 is passed an application-layer message (here, an SNMP msg)
 determines UDP segment header field values
 creates the UDP segment (UDP header + SNMP msg)
 passes the segment to IP
Transport Layer: 3-14

UDP receiver actions:
 receives the segment from IP
 checks the UDP checksum header value
 extracts the application-layer message
 demultiplexes the message up to the application
Transport Layer: 3-15

UDP segment header (32-bit rows): source port # and dest port #; length and checksum; then application data (payload). The length field gives the length, in bytes, of the UDP segment, including the header. Data flows to/from the application layer.
Transport Layer: 3-16

Chapter 3: roadmap (repeated; next: Principles of reliable data transfer). Transport Layer: 3-17

Principles of reliable data transfer
** With a reliable channel, no transferred data bits are corrupted (flipped from 0 to 1, or vice versa) or lost, and all are delivered in the order in which they were sent.
(reliable service abstraction: sending process, reliable channel, receiving process)
** It is the responsibility of a reliable data transfer protocol to implement this service abstraction. Data flows through the reliable channel in just one direction: reliably, from sender to receiver.
Transport Layer: 3-18

Principles of reliable data transfer, implementation view: the sender side and receiver side of the reliable data transfer protocol sit above an unreliable channel.
** Communication over the unreliable channel is TWO-way: sender and receiver will exchange messages back and forth to IMPLEMENT one-way reliable data transfer.
Transport Layer: 3-19

 Complexity of the reliable data transfer protocol will depend (strongly) on the characteristics of the unreliable channel: does it lose, corrupt, or reorder data?
Transport Layer: 3-20

 Sender and receiver do not know the "state" of each other (e.g., was a message received?) unless it is communicated via a message
Transport Layer: 3-21

 The key point here is that one side does NOT know what is going on at the other side; it's as if there's a curtain between them. Everything they know about the other can ONLY be learned by sending/receiving messages.
 The sender process wants to make sure a segment got through, but it cannot just somehow magically look through the curtain to see if the receiver got it. It is up to the receiver to let the sender KNOW that it (the receiver) has correctly received the segment.
 How the sender and receiver do that: that's the PROTOCOL.
Transport Layer: 3-22

Chapter 3: roadmap (repeated; next: Connection-oriented transport: TCP). Transport Layer: 3-23

Connection-oriented transport: TCP
 point-to-point: one sender, one receiver
 reliable, in-order byte stream: no "message boundaries"
 full duplex data: bi-directional data flow in the same connection; MSS: maximum segment size
 cumulative ACKs
 pipelining: TCP congestion and flow control set the window size
 connection-oriented: handshaking (an exchange of control messages) initializes sender and receiver state before data exchange
 flow controlled: sender will not overwhelm receiver
Transport Layer: 3-24
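The reliable-transfer principle above, that each side learns about the other only through messages, shows up in its simplest form as stop-and-wait: send one packet, then wait for its ACK before sending the next. A toy sketch (not the textbook's rdt FSM); the loss pattern is made up, and a lost packet or lost ACK both look the same to the sender (a timeout):

```python
def stop_and_wait(data, losses):
    """Deliver each item reliably over a channel that drops the transmissions
    whose attempt index is in `losses`; returns (delivered, total_transmissions)."""
    delivered, attempt = [], 0
    for item in data:
        while True:
            attempt += 1
            if attempt in losses:      # packet (or its ACK) lost: no ACK arrives
                continue               # timeout -> retransmit the same item
            delivered.append(item)     # ACK received: move on to the next item
            break
    return delivered, attempt

# Attempts 2 and 3 are lost, so item 'b' is transmitted three times.
print(stop_and_wait(["a", "b", "c"], losses={2, 3}))  # (['a', 'b', 'c'], 5)
```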
TCP segment structure (32-bit rows):
 source port #, dest port #
 sequence number: segment seq # counts bytes of data into the bytestream (not segments!)
 acknowledgement number: seq # of the next expected byte; the A bit indicates this is an ACK
 header length (of the TCP header), flags C E U A P R S F, receive window (flow control: # bytes the receiver is willing to accept)
 Internet checksum, urgent data pointer
 options (variable length); C, E: congestion notification; RST, SYN, FIN: connection management (used for connection setup and teardown)
 application data (variable length), sent by the application into the TCP socket
Transport Layer: 3-25

TCP sequence numbers, ACKs
 sequence number: byte-stream "number" of the first byte in the segment's data
 acknowledgement number: seq # of the next byte expected from the other side; cumulative ACK
 sender sequence number space (window size N): sent and ACKed | sent, not-yet ACKed ("in-flight") | usable but not yet sent | not usable
 Q: how does a receiver handle out-of-order segments? A: the TCP spec doesn't say; it's up to the implementor
Transport Layer: 3-26

TCP sequence numbers, ACKs: a simple telnet scenario. The user at Host A types 'C':
 A to B: Seq=42, ACK=79, data='C'
 B ACKs receipt of 'C' and echoes it back: Seq=79, ACK=43, data='C'
 A ACKs receipt of the echoed 'C': Seq=43, ACK=80
The key thing to note here is that the ACK number (43) on the B-to-A segment is one more than the sequence number (42) on the A-to-B segment that triggered that ACK. Similarly, the ACK number (80) on the last A-to-B segment is one more than the sequence number (79) of the segment carrying the echoed byte.
Transport Layer: 3-27
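The telnet exchange above in code: TCP numbers bytes, not segments, so an in-order ACK carries the sequence number of the segment plus its byte count, i.e., the next byte expected:

```python
def next_ack(seq, data):
    """ACK number generated for an in-order segment (seq, data):
    the sequence number of the next byte expected from the other side."""
    return seq + len(data)

# Host A sends 'C' with Seq=42; B's echo carries ACK=43 (and B's own Seq=79).
assert next_ack(42, b"C") == 43
# A ACKs B's echoed 'C' (Seq=79, 1 byte) with ACK=80.
assert next_ack(79, b"C") == 80
print("ACK numbers:", next_ack(42, b"C"), next_ack(79, b"C"))  # ACK numbers: 43 80
```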
TCP round trip time, timeout
** TCP uses a timeout/retransmit mechanism to recover from lost segments.
** Clearly, the timeout should be larger than the connection's round-trip time (RTT), that is, the time from when a segment is sent until it is acknowledged.
 Q: how to set the TCP timeout value?
- longer than the RTT, but the RTT varies!
- too short: premature timeout, unnecessary retransmissions
- too long: slow reaction to segment loss
 Q: how to estimate the RTT?
- SampleRTT: measured time from segment transmission until ACK receipt; ignore retransmissions
- SampleRTT will vary, so we want the estimated RTT "smoother": average several recent measurements, not just the current SampleRTT
Transport Layer: 3-28

TCP Sender (simplified)
 event: data received from the application
- create a segment with seq #; the seq # is the byte-stream number of the first data byte in the segment
- start the timer if not already running (think of the timer as being for the oldest unACKed segment; expiration interval: TimeOutInterval)
 event: timeout
- retransmit the segment that caused the timeout
- restart the timer
 event: ACK received
- if the ACK acknowledges previously unACKed segments: update what is known to be ACKed; start the timer if there are still unACKed segments
Transport Layer: 3-29

TCP Receiver: ACK generation [RFC 5681]
 arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed -> delayed ACK: wait up to 500 ms for the next segment; if no next segment, send ACK
 arrival of in-order segment with expected seq #; one other segment has an ACK pending -> immediately send a single cumulative ACK, ACKing both in-order segments
 arrival of out-of-order segment with higher-than-expected seq # (gap detected) -> immediately send a duplicate ACK, indicating the seq # of the next expected byte
 arrival of a segment that partially or completely fills the gap -> immediately send an ACK, provided that the segment starts at the lower end of the gap
Transport Layer: 3-30

Rather than immediately ACKnowledging an in-order segment, many TCP implementations will wait up to half a second for another in-order segment to arrive, and then generate a single cumulative ACK for both segments, thus decreasing the amount of ACK traffic. The arrival of this second in-order segment, and the cumulative ACK generation that covers both segments, is the second row in the table above.
Transport Layer: 3-31
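The "smoother" RTT average called for above is, in standard TCP (RFC 6298), an exponentially weighted moving average of SampleRTT, with the timeout set a safety margin of four deviations above it. A sketch using the textbook's gains, with one common update ordering; the starting values and samples (in ms) are made up:

```python
ALPHA, BETA = 0.125, 0.25   # recommended EWMA gains

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    """One update of EstimatedRTT and DevRTT, and the resulting TimeoutInterval."""
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    timeout = estimated_rtt + 4 * dev_rtt   # safety margin of 4 * DevRTT
    return estimated_rtt, dev_rtt, timeout

est, dev = 100.0, 5.0            # assumed starting values, in ms
for sample in (110, 90, 130):    # made-up SampleRTT measurements
    est, dev, timeout = update_rtt(est, dev, sample)
    print(f"SampleRTT={sample}  EstimatedRTT={est:.2f}  Timeout={timeout:.2f}")
```

Note how a single large sample nudges EstimatedRTT only slightly but widens DevRTT, and therefore the timeout, considerably.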
TCP: retransmission scenarios
 lost ACK scenario: A sends Seq=92 (8 bytes of data); the ACK=100 is lost; A times out and resends Seq=92; B re-ACKs and A sets SendBase=100
 premature timeout: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the timer for Seq=92 expires before ACK=100 arrives, so A resends Seq=92; B then sends a cumulative ACK=120, and A sets SendBase=120
Transport Layer: 3-32

 cumulative ACK covers for an earlier lost ACK: two segments are transmitted (Seq=92, 8 bytes; Seq=100, 20 bytes); the first ACK (ACK=100) is lost, but the second, cumulative ACK (ACK=120) arrives at the sender, which can then transmit a third segment (Seq=120, 15 bytes), knowing that the first two have arrived even though the ACK for the first segment was lost
Transport Layer: 3-33

TCP fast retransmit
 if the sender receives 3 additional ACKs for the same data ("triple duplicate ACKs"), resend the unACKed segment with the smallest seq #
 it is likely that the unACKed segment was lost, so don't wait for the timeout
 Receipt of three duplicate ACKs indicates that 3 segments were received after a missing segment; a lost segment is likely. So retransmit!
Transport Layer: 3-34

TCP flow control (see also: https://www.youtube.com/watch?v=E4I6t0mI_is)
 Q: What happens if the network layer delivers data faster than the application layer removes data from the socket buffers?
** The hosts on each side of a TCP connection set aside a receive buffer for the connection. When the TCP connection receives bytes that are correct and in sequence, it places the data in the receive buffer.
** The associated application process will read data from this buffer, but not necessarily at the instant the data arrives.
Transport Layer: 3-35, 3-36

** TCP provides a flow-control service to its applications to eliminate the possibility of the sender overflowing the receiver's buffer.
** Flow control is thus a speed-matching service: matching the rate at which the sender is sending against the rate at which the receiving application is reading.
Transport Layer: 3-37

 flow control: the receiver controls the sender, so the sender won't overflow the receiver's buffer by transmitting too much, too fast
Transport Layer: 3-38

** TCP provides flow control by having the sender maintain a variable called the receive window. Informally, the receive window gives the sender an idea of how much free buffer space is available at the receiver.
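The sender-side check this implies can be sketched as: keep the unacknowledged ("in-flight") bytes, plus the next segment, within the advertised receive window. The byte counts below are illustrative:

```python
def can_send(last_byte_sent, last_byte_acked, rwnd, segment_len):
    """Flow-control check: would sending segment_len more bytes exceed rwnd?"""
    in_flight = last_byte_sent - last_byte_acked   # unACKed bytes in the network
    return in_flight + segment_len <= rwnd

# rwnd = 4096 bytes advertised by the receiver; 3000 bytes already in flight.
print(can_send(5000, 2000, 4096, 1000))  # True:  3000 + 1000 <= 4096
print(can_send(5000, 2000, 4096, 1200))  # False: would overflow the receive buffer
```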
 TCP receiver "advertises" free buffer space in the rwnd field of the TCP header
- RcvBuffer size is set via socket options (typical default is 4096 bytes)
- many operating systems autoadjust RcvBuffer
 sender limits the amount of unACKed ("in-flight") data to the received rwnd
 this guarantees the receive buffer will not overflow
** The receiver allocates a receive buffer to the connection; denote its size by RcvBuffer. The receive window, denoted rwnd, is set to the amount of spare room in the buffer.
Transport Layer: 3-39

** By keeping the amount of unacknowledged data less than the value of rwnd, the sender is assured that it is not overflowing the receive buffer.
Transport Layer: 3-40

TCP connection management: before exchanging data, sender and receiver "handshake":
 agree to establish the connection (each knowing the other is willing to establish the connection)
 agree on connection parameters (e.g., starting seq #s)
 resulting connection state: ESTAB; connection variables: seq #s (client-to-server and server-to-client), rcvBuffer size at server and client
Transport Layer: 3-41

TCP 3-way handshake. Server side: serverSocket = socket(AF_INET,SOCK_STREAM); serverSocket.bind(('',serverPort)); serverSocket.listen(1); connectionSocket, addr = serverSocket.accept(). Client side: clientSocket = socket(AF_INET, SOCK_STREAM); clientSocket.connect((serverName,serverPort)).
 1. The client-side TCP first sends a special TCP segment (SYN) to the server: it randomly chooses an initial sequence number x and sends a segment with SYNbit=1, Seq=x, containing no application-layer data. Client state: LISTEN to SYNSENT.
 2. The TCP SYN segment arrives at the server; the server allocates the TCP buffers and variables for the connection, chooses its own initial sequence number y, and sends a connection-granted segment (SYNACK) acking the SYN: SYNbit=1, Seq=y, ACKbit=1, ACKnum=x+1, with no application-layer data. This segment carries three important pieces of information: the SYN bit set to 1, the acknowledgment field x+1, and the server's initial sequence number. The received SYNACK(x) indicates the server is live. Server state: LISTEN to SYN RCVD.
 3. The client receives the SYNACK segment and allocates buffers and variables for the connection. It then sends the server a last segment acknowledging the server's connection-granted segment: ACKbit=1, ACKnum=y+1, with the SYN bit set to zero; this segment may contain client-to-server data. The received ACK(y) indicates the client is live. Both sides: ESTAB.
Transport Layer: 3-42

Closing a TCP connection
 client and server each close their side of the connection: send a TCP segment with the FIN bit = 1
 respond to a received FIN with an ACK; on receiving a FIN, the ACK can be combined with one's own FIN
 simultaneous FIN exchanges can be handled
Transport Layer: 3-43
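The slide's Python socket calls, completed into a runnable loopback example. connect() is where the SYN/SYNACK/ACK exchange happens, and accept() returns once the handshake completes; the port is chosen by the OS and the message is illustrative:

```python
import socket
import threading

serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serverSocket.bind(("127.0.0.1", 0))      # OS picks a free port
serverSocket.listen(1)
serverPort = serverSocket.getsockname()[1]

def serve():
    # accept() completes when the final ACK of the 3-way handshake arrives.
    connectionSocket, addr = serverSocket.accept()
    connectionSocket.sendall(b"connection granted")
    connectionSocket.close()             # sends FIN; client ACKs

t = threading.Thread(target=serve)
t.start()

clientSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
clientSocket.connect(("127.0.0.1", serverPort))  # SYN / SYNACK / ACK happen here
reply = clientSocket.recv(1024)
print(reply)  # b'connection granted'
clientSocket.close()
t.join()
serverSocket.close()
```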
Chapter 3: roadmap (repeated; next: Principles of congestion control). Transport Layer: 3-44

Principles of congestion control
 congestion, informally: "too many sources sending too much data too fast for the network to handle"
 manifestations: long delays (queueing in router buffers) and packet loss (buffer overflow at routers)
 different from flow control! congestion control: too many senders, sending too fast; flow control: one sender too fast for one receiver
Transport Layer: 3-45

Approaches towards congestion control. End-end congestion control:
 no explicit feedback from the network layer
 congestion is inferred from observed loss and delay
 this is the approach taken by TCP
** The sender infers congestion from lost-packet indications, namely timeouts or triple duplicate ACKs, or from the measured RTT.
Transport Layer: 3-46

Approaches towards congestion control. Network-assisted congestion control:
 routers provide direct feedback to sending/receiving hosts with flows passing through the congested router
 may indicate the congestion level, or explicitly set the sending rate
 used in TCP ECN, ATM, and DECbit protocols
 1. Direct feedback may be sent from a network router to the sender; this form of notification typically takes the form of a choke packet (essentially saying, "I'm congested!").
 2. The second and more common form of notification occurs when a router marks/updates a field in a packet flowing from sender to receiver to indicate congestion. Upon receipt of a marked packet, the receiver then notifies the sender of the congestion.
Transport Layer: 3-47
Chapter 3: roadmap (repeated; next: TCP congestion control). Transport Layer: 3-48

TCP congestion control: AIMD
 approach: senders can increase their sending rate until packet loss (congestion) occurs, then decrease the sending rate on the loss event
 Additive Increase: increase the sending rate by 1 maximum segment size (MSS) every RTT until loss is detected
 Multiplicative Decrease: cut the sending rate in half at each loss event
 the resulting AIMD sawtooth behavior: probing for bandwidth
** TCP's congestion control consists of a linear (additive) increase in cwnd of 1 MSS per RTT, and then a halving (multiplicative decrease) of cwnd on a triple duplicate-ACK event.
Transport Layer: 3-49

TCP AIMD, more detail. Multiplicative decrease: sending rate is
 cut in half on loss detected by triple duplicate ACK (TCP Reno)
 cut to 1 MSS (maximum segment size) when loss is detected by timeout (TCP Tahoe)
Why AIMD? AIMD, a distributed, asynchronous algorithm, has been shown to optimize congested flow rates network-wide and to have desirable stability properties.
Transport Layer: 3-50

TCP congestion control: details
 sender sequence number space: last byte ACKed | sent, not-yet ACKed ("in-flight") | last byte sent; the TCP sender limits transmission: LastByteSent - LastByteAcked < cwnd
 cwnd is dynamically adjusted in response to observed network congestion (implementing TCP congestion control)
 TCP sending behavior: roughly, send cwnd bytes, wait an RTT for ACKs, then send more bytes; TCP rate ~ cwnd/RTT bytes/sec
Transport Layer: 3-51

TCP congestion control: details (continued)
 Congestion detection, using the occurrence of two events: 1. timeout (RTO) with no ACK; 2. receiving three duplicate ACKs
 Congestion policy algorithms: 1. slow start and 2. congestion avoidance (Tahoe and Reno); 3. fast recovery (Reno)
 TCP versions: 1. Tahoe TCP (treats both events similarly); 2. Reno TCP (treats the two events differently)
Transport Layer: 3-52

TCP slow start
 when a connection begins, increase the rate exponentially until the first loss event: initially cwnd = 1 MSS; double cwnd every RTT (done by incrementing cwnd for every ACK received)
 summary: the initial rate is slow, but it ramps up exponentially fast
Transport Layer: 3-53

TCP: from slow start to congestion avoidance
 Q: when should the exponential increase switch to linear? A: when cwnd gets to 1/2 of its value before the timeout.
 Implementation: a variable ssthresh; on a loss event, ssthresh is set to 1/2 of cwnd just before the loss event
** ssthresh is half the value of cwnd when congestion was last detected.
* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
Transport Layer: 3-54
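The slow-start/congestion-avoidance switch can be sketched per RTT, with cwnd in units of MSS. The loss timing below is made up, and the loss reaction shown is Tahoe's timeout behavior (cwnd back to 1 MSS, ssthresh halved):

```python
MSS = 1  # work in units of MSS

def next_cwnd(cwnd, ssthresh, loss):
    """One RTT step of Tahoe-style congestion control (units of MSS)."""
    if loss:                        # timeout: ssthresh = cwnd/2, restart slow start
        return MSS, max(cwnd // 2, 2 * MSS)
    if cwnd < ssthresh:             # slow start: exponential growth
        return cwnd * 2, ssthresh
    return cwnd + MSS, ssthresh     # congestion avoidance: linear growth

cwnd, ssthresh = 1, 8               # assumed initial values
trace = []
for rtt in range(8):
    trace.append(cwnd)
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss=(rtt == 5))
print(trace)  # [1, 2, 4, 8, 9, 10, 1, 2]
```

The trace shows doubling up to ssthresh, +1 MSS per RTT afterwards, then the collapse and restart after the loss.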
Summary: TCP congestion control (FSM)
 initial state: slow start, with cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0
 slow start, on new ACK: cwnd = cwnd + MSS, dupACKcount = 0, transmit new segment(s) as allowed; when cwnd > ssthresh, enter congestion avoidance
 congestion avoidance, on new ACK: cwnd = cwnd + MSS * (MSS/cwnd), dupACKcount = 0, transmit new segment(s) as allowed
 slow start or congestion avoidance, on duplicate ACK: dupACKcount++; when dupACKcount == 3: ssthresh = cwnd/2, cwnd = ssthresh + 3, retransmit the missing segment, enter fast recovery
 any state, on timeout: ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit the missing segment, enter slow start
 fast recovery, on duplicate ACK: cwnd = cwnd + MSS, transmit new segment(s) as allowed; on new ACK: cwnd = ssthresh, dupACKcount = 0, enter congestion avoidance
Transport Layer: 3-55

TCP and the congested "bottleneck link"
 TCP (classic) increases TCP's sending rate until packet loss occurs at some router's output: the bottleneck link
** Let's look at where this loss occurs: at a congested router somewhere on the source-to-destination path; we will refer to this congested link as the bottleneck link. As flows' packets pass through other routers and links things are generally fine, but it's here at the bottleneck link that they get lost; and remember, sometimes there are actually multiple packet flows experiencing loss at this bottleneck link.
 at the bottleneck link (almost always busy), the packet queue is almost never empty and sometimes overflows (loss)
Transport Layer: 3-56

 understanding congestion: it is useful to focus on the congested bottleneck link
 insight: increasing the TCP sending rate will not increase end-end throughput with a congested bottleneck
 insight: increasing the TCP sending rate will increase the measured RTT
 Goal: "keep the end-end pipe just full, but no fuller"
Transport Layer: 3-57

Delay-based TCP congestion control. Keeping the sender-to-receiver pipe "just full enough, but no fuller": keep the bottleneck link busy transmitting, but avoid high delays/buffering.
 measured throughput = (# bytes sent in the last RTT interval) / RTTmeasured
 delay-based approach:
- RTTmin: minimum observed RTT (uncongested path)
- uncongested throughput with congestion window cwnd is cwnd/RTTmin
- if measured throughput is "very close" to the uncongested throughput: increase cwnd linearly
- else, if measured throughput is "far below" the uncongested throughput: decrease cwnd linearly
Transport Layer: 3-58

Delay-based TCP congestion control (continued)
 congestion control without inducing/forcing loss
 maximizing throughput ("keeping the pipe just full...") while keeping delay low ("...but not fuller")
 a number of deployed TCPs take a delay-based approach, e.g., BBR (Bottleneck Bandwidth and RTT), deployed on Google's (internal) backbone network
Transport Layer: 3-59

Explicit congestion notification (ECN). TCP deployments often implement network-assisted congestion control:
 two bits in the IP header (ToS field) are marked by a network router to indicate congestion; the policy for when to mark is chosen by the network operator
 the congestion indication is carried to the destination
 the destination sets the ECE bit on an ACK segment to notify the sender of the congestion
 involves both IP (IP-header ECN bit marking) and TCP (TCP-header C, E bit marking)
(Diagram: the router marks ECN=11 in the IP datagram flowing from source to destination; the destination returns a TCP ACK segment with ECE=1 to the source.)
Transport Layer: 3-60
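A toy end-to-end sketch of the ECN path above: the router marks a congested packet, the destination echoes the mark as ECE in its ACK, and the sender reacts. The field names follow the slide; the queue threshold is made up, and the halving reaction mirrors TCP's usual response to a congestion signal:

```python
def router(packet, queue_len, threshold=10):
    """Mark ECN = 11 ('congestion experienced') when the queue is long."""
    if queue_len > threshold:        # marking policy chosen by the operator
        packet["ecn"] = 0b11
    return packet

def destination(packet):
    """Echo a congestion mark back to the sender via the ECE bit in the ACK."""
    return {"ece": 1 if packet.get("ecn") == 0b11 else 0}

def sender_react(cwnd, ack):
    """Sender halves cwnd (in MSS) on an ECE-marked ACK."""
    return max(cwnd // 2, 1) if ack["ece"] else cwnd

cwnd = 20
pkt = router({"ecn": 0b10}, queue_len=15)  # sender set ECN=10; router remarks to 11
ack = destination(pkt)
cwnd = sender_react(cwnd, ack)
print(pkt["ecn"], ack["ece"], cwnd)  # 3 1 10
```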
