Lecture 07: Transport Layer
Document Details
Alexandria University
2024
Dr. Sahar M. Ghanem
Summary
This document is a lecture on the transport layer in computer networks, focusing on the principles of congestion control and TCP congestion control. It works through congestion scenarios with infinite and finite router buffers, covers slow start, congestion avoidance, fast recovery, TCP CUBIC, and fairness, and closes with newer transport developments including QUIC.
Full Transcript
Computer Networks, Lecture 07: Transport Layer
Prof. Dr. Sahar M. Ghanem, Associate Professor
Computer and Systems Engineering Department, Faculty of Engineering, Alexandria University

Outline
Introduction and Transport-Layer Services
Multiplexing and Demultiplexing
Connectionless Transport: UDP
Principles of Reliable Data Transfer
Connection-Oriented Transport: TCP
Principles of Congestion Control
TCP Congestion Control
Evolution of Transport-Layer Functionality

Principles of Congestion Control

The Causes and the Costs of Congestion
Packet retransmission treats a symptom of network congestion (the loss of a segment) but does not treat its cause: too many sources attempting to send data at too high a rate.
Q1: Why does congestion occur, and what is its cost?
Q2: How should senders react to, or avoid, congestion?

Scenario 1: Two Senders and a Router with Infinite Buffers
Assume Host A sends data at an average rate of λ_in bytes/sec and Host B operates in a similar manner. A router has a shared outgoing link of capacity R and an infinite amount of buffer space.
For a sending rate between 0 and R/2, the throughput at the receiver equals the sender's sending rate. When the sending rate rises above R/2, however, the receiver throughput is capped at R/2 (because the two hosts share the link). As the sending rate approaches R/2, the average delay grows without bound.
Lesson (cost of congestion): large queuing delays are experienced as the packet-arrival rate nears the link capacity.

Scenario 2: Two Senders and a Router with Finite Buffers
Packets are dropped when they arrive at an already full buffer, and the sender retransmits them. Let the rate at which the application sends original data be λ_in bytes/sec. Because of retransmissions, the rate at which the transport layer sends segments into the network, λ'_in bytes/sec, is larger; it is referred to as the offered load. Three cases illustrate the costs (the arithmetic is reproduced in the sketch at the end of this scenario):
1. If Host A could somehow determine whether a buffer is free in the router, and so sent a packet only when a buffer is free, no loss would occur and throughput = λ_in = λ'_in (with a maximum of R/2).
2. If the sender retransmits only when a packet is known for certain to be lost (using a long timer), then at an offered load λ'_in = R/2 the rate at which data are delivered to the receiver application is R/3 (0.333R is original data and 0.166R is retransmitted data).
3. The sender may also time out prematurely and retransmit a packet that has been delayed in the queue but not yet lost. If each packet is forwarded twice, the receiver throughput has an asymptotic value of R/4 as the offered load approaches R/2.
Lesson (cost of congestion): the sender must perform retransmissions to compensate for packets dropped (lost) due to buffer overflow.
Lesson (cost of congestion): unneeded retransmissions by the sender in the face of large delays cause a router to use its link bandwidth to forward unneeded copies of a packet.
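The goodput figures in cases 2 and 3 follow from simple bookkeeping on the offered load. Below is a minimal numerical sketch, assuming (as the scenario does) that the offered load is pinned at R/2 and that a fixed fraction of transmissions are retransmissions; the fractions are the illustrative values from the scenario, not a general loss model.

```python
# Scenario 2 goodput bookkeeping (illustrative; values from the lecture's cases).

R = 1.0                  # normalized link capacity, bytes/sec
offered_load = R / 2     # lambda'_in: original data plus retransmissions

# Case 2: retransmit only on certain loss. Per the scenario, one third of the
# offered load (0.166R of 0.5R) is retransmitted data.
goodput_case2 = offered_load * (1 - 1 / 3)
print(goodput_case2)     # 0.333... = R/3

# Case 3: premature timeouts make every packet traverse the link twice, so
# half of the offered load is duplicate copies.
goodput_case3 = offered_load * (1 - 1 / 2)
print(goodput_case3)     # 0.25 = R/4
```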
Scenario 3: Four Senders, Routers with Finite Buffers, and Multihop Paths
All hosts have the same value of λ_in, and all router links have capacity R bytes/sec. The A–C connection shares router R1 with the D–B connection and shares router R2 with the B–D connection.
For small values of λ_in, an increase in λ_in results in an increase in λ_out. What happens when λ_in (and hence λ'_in) is extremely large? Because the A–C and B–D traffic must compete at router R2 for the limited amount of buffer space, the amount of A–C traffic that successfully gets through R2 becomes smaller and smaller as the offered load from B–D gets larger and larger.
In the limit, as the offered load approaches infinity, an empty buffer at R2 is immediately filled by a B–D packet, and the throughput of the A–C connection at R2 goes to zero: the work done by the network upstream of R2 is wasted.
Lesson (cost of congestion): when a packet is dropped along a path, the transmission capacity used at each upstream link to forward that packet to the point at which it is dropped ends up having been wasted.

Approaches to Congestion Control
End-to-end congestion control: the network provides no explicit feedback; TCP takes this end-to-end approach.
Network-assisted congestion control: routers provide explicit feedback to the sender and/or receiver regarding the congestion state of the network. Congestion information is typically fed back from the network to the sender in one of two ways: direct feedback from a router, or a router marks/updates a field in a packet flowing from sender to receiver.

TCP Congestion Control

Outline
Classic TCP Congestion Control (Slow Start, Congestion Avoidance, Fast Recovery)
TCP Cubic
TCP Reno Throughput
Network-Assisted Congestion Control
Fairness

Classic TCP Congestion Control
Classic TCP uses end-to-end congestion control rather than network-assisted congestion control. Each sender limits the rate at which it sends traffic into its connection as a function of perceived network congestion.
Questions:
1. How does a TCP sender limit its sending rate?
2. How does a TCP sender perceive that there is congestion?
3. What algorithm should the sender use to change its send rate?

TCP rate limit
The TCP congestion-control mechanism operating at the sender keeps track of an additional variable, the congestion window (cwnd). The amount of unacknowledged data at a sender may not exceed the minimum of cwnd and rwnd, that is:
LastByteSent – LastByteAcked ≤ min{cwnd, rwnd}
Assume the amount of unacknowledged data at the sender is limited solely by cwnd (i.e., the TCP receive buffer is very large), and that the sender always has data to send. The sender's send rate is then roughly cwnd/RTT bytes/sec. By adjusting the value of cwnd, the sender can adjust the rate at which it sends data into its connection.
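As a minimal sketch of the window constraint and rate estimate above (the function names and example numbers are illustrative, not from any real TCP stack):

```python
# Sender-side window bookkeeping: LastByteSent - LastByteAcked <= min(cwnd, rwnd).
# Illustrative sketch only; not a real TCP implementation.

def usable_window(last_byte_sent: int, last_byte_acked: int,
                  cwnd: int, rwnd: int) -> int:
    """Bytes the sender may still put into the network right now."""
    in_flight = last_byte_sent - last_byte_acked
    return max(0, min(cwnd, rwnd) - in_flight)

def approx_send_rate(cwnd: int, rtt: float) -> float:
    """Rough send rate in bytes/sec when cwnd is the binding limit."""
    return cwnd / rtt

# Example: cwnd = 20 segments of 1460 bytes, 10 of them already in flight.
print(usable_window(100_000, 85_400, 20 * 1460, 65_535))  # 14600 bytes usable
print(approx_send_rate(20 * 1460, 0.1))                   # 292000.0 bytes/sec
```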
TCP congestion detection
A dropped datagram results in a loss event at the sender (either a timeout or the receipt of three duplicate ACKs), which the sender takes as an indication of congestion on the sender-to-receiver path.
Conversely, TCP takes the arrival of acknowledgments as an indication that all is well and increases its congestion window size accordingly; because acknowledgments clock the window growth, TCP is said to be self-clocking.
The TCP sender increases its transmission rate to probe for the rate at which congestion onset begins, backs off from that rate, and then begins probing again to see whether the congestion onset rate has changed.

TCP congestion control
If TCP senders collectively send too fast, they can congest the network, leading to congestion collapse. If TCP senders are too cautious and send too slowly, they underutilize the bandwidth in the network. The algorithm has three major components: (1) slow start (mandatory), (2) congestion avoidance (mandatory), and (3) fast recovery (recommended). A simplified sketch combining all three appears at the end of this section.

Slow Start
In the slow-start state, the value of cwnd begins at 1 MSS and increases by 1 MSS every time a transmitted segment is first acknowledged. This results in a doubling of the sending rate every RTT: the TCP send rate starts slow but grows exponentially during slow start. When should this exponential growth end?

End of slow start
1. If there is a loss event indicated by a timeout, the TCP sender sets ssthresh = cwnd/2, sets cwnd to 1 MSS, and begins the slow-start process anew.
2. When the value of cwnd equals ssthresh, slow start ends and TCP transitions into congestion-avoidance mode.
3. If three duplicate ACKs are detected, TCP performs a fast retransmit and enters the fast-recovery state.

Congestion Avoidance
On entry to the congestion-avoidance state, the value of cwnd is approximately half its value when congestion was last encountered. Rather than doubling cwnd every RTT, TCP increases cwnd by just a single MSS every RTT. For example, if MSS is 1,460 bytes and cwnd is 14,600 bytes, then 10 segments are sent within an RTT; each arriving ACK increases cwnd by 1/10 MSS, so cwnd will have increased by one full MSS once ACKs for all 10 segments have been received.

End of congestion avoidance
When a timeout occurs, ssthresh is updated to half the value of cwnd, cwnd is set to 1 MSS, and the slow-start state is entered.
When three duplicate ACKs are received, TCP records ssthresh as half the value of cwnd, halves cwnd (adding in 3 MSS to account for the three duplicate ACKs), and enters the fast-recovery state.

Fast Recovery
Fast recovery is a recommended, but not required, component of TCP. An early version of TCP, known as TCP Tahoe, unconditionally cut its congestion window to 1 MSS and entered slow start after either a timeout-indicated or triple-duplicate-ACK-indicated loss event. The newer version, TCP Reno, incorporated fast recovery.
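Putting slow start, congestion avoidance, and fast recovery together, the sketch below traces the cwnd updates described above for classic TCP Reno. It is a simplified, illustrative model (windows in units of MSS, no real packet handling or timers), not an actual TCP implementation; in particular, the per-duplicate-ACK window inflation that a real Reno sender performs during fast recovery is omitted for brevity.

```python
# Simplified TCP Reno congestion-window logic, in units of MSS.
# Illustrative model of the rules described above; not a real TCP stack.

MSS = 1  # work in units of one maximum segment size

class RenoSender:
    def __init__(self) -> None:
        self.cwnd = 1 * MSS        # slow start begins at 1 MSS
        self.ssthresh = 64 * MSS   # initial threshold (implementation-chosen)
        self.state = "slow_start"

    def on_new_ack(self) -> None:
        if self.state == "fast_recovery":
            # A new ACK ends fast recovery: deflate to ssthresh, resume CA.
            self.cwnd = self.ssthresh
            self.state = "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd += MSS                     # +1 MSS per ACK: doubles each RTT
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        else:                                    # congestion avoidance
            self.cwnd += MSS * MSS / self.cwnd   # ~ +1 MSS per RTT in total

    def on_triple_dup_ack(self) -> None:
        # Fast retransmit of the missing segment is assumed to happen here too.
        self.ssthresh = self.cwnd / 2            # multiplicative decrease
        self.cwnd = self.ssthresh + 3 * MSS      # plus 3 MSS for the three dup ACKs
        self.state = "fast_recovery"

    def on_timeout(self) -> None:
        # Timeout: record ssthresh, collapse cwnd, restart slow start.
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1 * MSS
        self.state = "slow_start"
```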
TCP Congestion Control: Retrospective
TCP congestion control is often referred to as an additive-increase, multiplicative-decrease (AIMD) form of congestion control. After TCP's development, theoretical analyses showed that its congestion-control algorithm serves as a distributed asynchronous-optimization algorithm, one that simultaneously optimizes several important aspects of user and network performance.

TCP Cubic
TCP CUBIC differs only slightly from TCP Reno: the congestion window is still increased only on ACK receipt, and the slow-start and fast-recovery phases remain the same. CUBIC changes only the congestion-avoidance phase, and attempts to keep the flow for as long as possible just below the (unknown to the sender) congestion threshold (see the CUBIC sketch at the end of this section).

TCP Reno Throughput
What might the average throughput of a long-lived TCP Reno connection be? When the window size is w bytes and the current round-trip time is RTT seconds, TCP's transmission rate is roughly w/RTT. Denote by W the value of w when a loss event occurs. The transmission rate then ranges from W/(2·RTT) to W/RTT. Because the throughput increases linearly between these two extremes, its time-average is their midpoint:
average throughput of a connection = 0.75 × W/RTT

Network-Assisted Congestion Control
Explicit Congestion Notification (ECN): at the network layer, two bits in the Type of Service field of the IP datagram header are used for ECN. A router can set the congestion-indication bit to signal congestion onset to senders before full buffers cause packets to be dropped at that router.
TCP Vegas takes a delay-based approach, proactively detecting congestion onset before packet loss occurs. The BBR congestion-control protocol builds on ideas in TCP Vegas and incorporates mechanisms that allow it to compete fairly with non-BBR TCP senders.

Fairness
Consider K TCP connections, each with a different end-to-end path, but all passing through a bottleneck link with transmission rate R bps. Suppose each connection is transferring a large file and no UDP traffic passes through the bottleneck link. A congestion-control mechanism is said to be fair if the average transmission rate of each connection is approximately R/K.
Assume two connections have the same MSS and RTT, and that both operate in congestion-avoidance (AIMD) mode at all times. The bandwidth realized by the two connections then fluctuates along, and converges toward, the equal-bandwidth-share line (see the AIMD simulation sketch at the end of this section).
Although a number of idealized assumptions lie behind this scenario, it still provides an intuitive feel for why TCP results in an equal sharing of bandwidth among connections. It has also been shown that when multiple connections share a common bottleneck, sessions with a smaller RTT are able to grab the available bandwidth at that link more quickly as it becomes free, and thus enjoy higher throughput than connections with larger RTTs.
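The slides do not give CUBIC's window-growth function; for concreteness, the sketch below uses the cubic curve from RFC 8312, W(t) = C·(t − K)^3 + W_max with K = ((W_max·(1 − β))/C)^(1/3), C = 0.4 and β = 0.7. Treat the function and constants as background from the RFC rather than lecture material.

```python
# CUBIC congestion-avoidance window growth, per RFC 8312 (not from the slides).
# After a loss at window W_max the window ramps quickly, flattens out near
# W_max (staying just below the last-known congestion point), then probes
# beyond it: concave, then convex, in the elapsed time t since the loss.

C = 0.4      # scaling constant (RFC 8312 default)
BETA = 0.7   # multiplicative-decrease factor (RFC 8312 default)

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (in MSS) t seconds after the last loss event."""
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)   # time to climb back to w_max
    return C * (t - k) ** 3 + w_max

# Example: loss at w_max = 100 MSS. The window starts at BETA * w_max = 70,
# re-approaches 100 slowly (t near k, about 4.2 s), then grows past it.
for t in (0.0, 2.0, 4.0, 6.0, 8.0):
    print(f"t={t:.0f}s  cwnd={cubic_window(t, 100.0):6.1f} MSS")
```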
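The convergence toward the equal-share line can be seen in a toy AIMD model: two flows additively increase their rates until their combined rate exceeds the bottleneck capacity, at which point both halve. A minimal sketch, assuming synchronized losses and identical RTTs as in the idealized scenario above:

```python
# Toy AIMD fairness model: two synchronized flows over one bottleneck of rate R.
# Halving shrinks the gap between the flows while additive increase preserves
# it, so flows starting from unequal rates converge to equal shares of the link.

R = 100.0
x1, x2 = 10.0, 70.0            # deliberately unequal starting rates

for _ in range(200):
    if x1 + x2 > R:            # bottleneck overloaded: both detect loss and halve
        x1, x2 = x1 / 2, x2 / 2
    else:                      # additive increase: +1 unit per round each
        x1, x2 = x1 + 1, x2 + 1

print(round(x1, 2), round(x2, 2), round(x2 - x1, 2))  # gap: 60 -> ~0.2
```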
Fairness and UDP
When running over UDP, applications can pump their audio and video into the network at a constant rate and occasionally lose packets, rather than reduce their rates to "fair" levels at times of congestion and lose no packets. From the perspective of TCP, multimedia applications running over UDP are not being fair, and it is possible for UDP sources to crowd out TCP traffic.

Fairness and Parallel TCP Connections
Web browsers often use multiple parallel TCP connections to transfer the multiple objects within a Web page. When an application uses multiple parallel connections, it gets a larger fraction of the bandwidth of a congested link.

Evolution of Transport-Layer Functionality

TCP Versions
The classic versions of TCP are TCP Tahoe and TCP Reno. There are several newer versions, including TCP CUBIC, DCTCP, CTCP, BBR, and more. CUBIC and CTCP are more widely deployed on Web servers than classic Reno, and BBR is being deployed in Google's internal B4 network. There are also versions of TCP designed specifically for use over wireless links, over high-bandwidth paths with large RTTs, for paths with packet reordering, and for short paths strictly within data centers.

QUIC: Quick UDP Internet Connections
QUIC is a new application-layer protocol designed from the ground up to improve the performance of transport-layer services for secure HTTP. More than 7% of Internet traffic today is QUIC. Google has deployed QUIC on many of its public-facing Web servers, in its mobile YouTube video-streaming app, in its Chrome browser, and in Android's Google Search app.
QUIC is an application-layer protocol that uses UDP as its underlying transport-layer protocol and is designed to interface above to a simplified but evolved version of HTTP/2. Changes can be made to QUIC at "application-update timescales," that is, much faster than TCP or UDP update timescales.

QUIC's features
Connection-oriented and secure: QUIC combines the handshakes needed to establish connection state with those needed for authentication and encryption.
Streams: QUIC allows several different application-level "streams" to be multiplexed through a single QUIC connection.
Reliable, TCP-friendly, congestion-controlled data transfer (based on TCP NewReno).

Summary
Introduction and Transport-Layer Services
Multiplexing and Demultiplexing
Connectionless Transport: UDP
Principles of Reliable Data Transfer
Connection-Oriented Transport: TCP
Principles of Congestion Control
TCP Congestion Control
Evolution of Transport-Layer Functionality