Questions and Answers
What is the primary role of the transport layer in network communication?
- To manage physical connections between devices.
- To define the hardware specifications for network devices.
- To provide end-to-end connection between applications on different hosts. (correct)
- To route data packets across different networks.
Why is an additional layer needed between the application and network layers?
- To manage the encryption of data for secure transmission.
- To optimize routing paths for faster data transfer.
- To handle physical addressing of devices on the network.
- To ensure guaranteed delivery and data integrity, which the network layer does not provide. (correct)
Which of the following is a key functionality provided by the transport layer?
- Managing network security protocols.
- Enabling a host to run multiple applications simultaneously over the network. (correct)
- Defining the physical characteristics of network cables.
- Converting domain names to IP addresses.
How do the transport layer protocols differentiate between multiple processes sharing the same IP address?
What is the main difference between connection-oriented and connectionless multiplexing?
Which task is performed by demultiplexing at the receiving host?
For UDP sockets, what constitutes the identifier used for connectionless multiplexing and demultiplexing?
What additional information is included in the identifier for a TCP socket compared to a UDP socket?
Why do UDP and TCP use port numbers instead of process IDs to identify applications?
What are the primary advantages of using UDP over TCP in certain applications?
Which of the following is NOT a field in the UDP packet header?
What is the purpose of the checksum field in the UDP packet header?
In the TCP three-way handshake, what is the purpose of the SYN bit?
What is the main purpose of the TCP three-way handshake?
What is the purpose of the FIN bit in TCP?
Why does a client send an ACK after receiving a FIN segment in the TCP connection teardown process?
What functionality does TCP provide to ensure in-order delivery of application-layer data without any loss or corruption?
What is the Automatic Repeat Request (ARQ) mechanism?
What problem does 'Stop and Wait ARQ' have?
How does TCP use selective acknowledgements (ACKing) to improve reliability?
What is 'fast retransmit' in TCP?
What is the primary purpose of flow control in TCP?
What does the 'receive window' (rwnd) variable represent in TCP flow control?
What is the main goal of congestion control?
What are the desirable properties of a congestion control algorithm?
Flashcards
Transport Layer
An end-to-end connection between two applications running on different hosts, providing a logical connection regardless of the network.
Multiplexing
The process of running multiple applications on a host using the network simultaneously.
Ports
Additional identifiers used by the transport layer to distinguish between different processes sharing the same IP address.
Demultiplexing
Multiplexing (sending host)
Socket Identifiers
UDP Socket Identifier
TCP Socket Identifier
User Datagram Protocol (UDP)
Reliable Transmission (TCP)
Automatic Repeat Request (ARQ)
Window Size
Selective Acknowledging (TCP)
Fast Retransmit (TCP)
Flow Control
Congestion Control
Fairness (congestion control)
Network-Assisted Congestion Control
End-to-End Congestion Control
Congestion Window
Additive Increase/Multiplicative Decrease (AIMD)
Multiplicative Decrease
Congestion Window Sawtooth Pattern
Slow Start
TCP CUBIC
Study Notes
- The transport layer provides an end-to-end connection between applications on different hosts.
- The transport layer provides a logical connection regardless of the network.
How the Transport Layer Works
- The transport layer on the sender host receives a message from the application layer and adds its header.
- This combined message becomes a segment.
- The transport layer segment is sent to the network layer, which encapsulates it with its header information.
- The network layer sends it to the receiving host via routers, bridges, switches, etc.
Transport Layer Functionality
- The network layer uses a best-effort delivery service model, without guaranteed delivery or data integrity
- The transport layer provides functionalities for application programmers, allowing applications to run over diverse networks.
Transport Layer Protocols
- User Datagram Protocol (UDP)
- Transmission Control Protocol (TCP)
- These protocols differ based on the functionality they offer to application developers.
- UDP provides basic functionality, relying on the application layer for the rest.
- TCP offers strong primitives for reliable and cost-effective end-to-end communication.
- TCP has become ubiquitous and is used for most applications.
Multiplexing
- The transport layer enables a host to run multiple applications using the network simultaneously.
- The network layer uses only the IP address, so an additional addressing mechanism is needed to distinguish processes on the same host.
- The transport layer uses ports as additional identifiers for multiplexing.
- Applications bind to a unique port number, opening sockets and listening for remote data.
Types of Multiplexing
- Connectionless multiplexing
- Connection-oriented multiplexing
- Multiplexing depends on whether a connection is established.
Demultiplexing
- The receiving host forwards an incoming transport-layer segment to the appropriate socket.
- The appropriate socket is identified by examining fields in the segment.
- Demultiplexing is the delivery of the data in a transport-layer segment to the correct socket, based on the fields in the segment.
- Multiplexing, at the sending host, gathers data from different sockets, encapsulates it with the header information later used for demultiplexing, and passes the resulting segments to the network layer.
Socket Identifiers
- Sockets are identified by fields such as the source port number and destination port number.
Connectionless Multiplexing/Demultiplexing Identifier
- A two-tuple consisting of the destination IP address and the destination port number.
- Host A sends data to Host B
- Transport layer in Host A creates a segment with source/destination ports.
- The network layer encapsulates the segment and sends it to Host B.
- Transport layer at Host B identifies the correct socket by looking at the destination port field.
- In case Host B runs multiple processes, each has its own UDP socket and associated port number
- Host B uses this information to demultiplex data to the correct socket.
- Host B forwards UDP segments with the same destination port number to the same process, even if they come from different source hosts or port numbers (see the sketch below).
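A minimal sketch of this connectionless demultiplexing using Python's standard socket API; the port number (9876) and the printed message are illustrative choices, not values from the notes.

```python
# Host B side: one UDP socket bound to one port. All datagrams addressed to this
# destination port are demultiplexed to this socket, regardless of the sender.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP socket
sock.bind(("", 9876))                                      # example port, chosen arbitrarily

while True:
    data, (src_ip, src_port) = sock.recvfrom(2048)
    # Different source hosts/ports can appear here, but they all reach this
    # one socket because the destination port (9876) is the same.
    print(f"got {len(data)} bytes from {src_ip}:{src_port}")
```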
Connection-Oriented Multiplexing and Demultiplexing
- The identifier for a Transmission Control Protocol (TCP) socket includes:
- Source IP
- Source Port
- Destination IP
- Destination Port
- A TCP server has a listening socket for connection requests from TCP clients.
- A TCP client creates a socket, sending a connection request with a source port number, a destination port number of 12000, and a connection-establishment bit.
- The TCP server creates a socket identified by the four-tuple: source IP, source port, destination IP, and destination port.
- The server uses the socket identifier to demultiplex incoming data and forward it to the socket.
- TCP connection is established for client/server communication.
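A minimal server-side sketch of connection-oriented demultiplexing using Python's standard socket API, reusing port 12000 from the example above; the echo behavior is illustrative.

```python
# Server side: one listening socket on port 12000. accept() returns a new socket
# per connection, identified by the four-tuple
# (source IP, source port, destination IP, destination port).
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP socket
listener.bind(("", 12000))
listener.listen()

while True:
    conn, (src_ip, src_port) = listener.accept()   # new socket for this client
    data = conn.recv(2048)
    # Segments from this particular (src_ip, src_port) are demultiplexed to `conn`;
    # other clients, even on the same destination port, get their own sockets.
    conn.sendall(data)
    conn.close()
```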
Web Servers and Persistent HTTP
- A webserver listens on port 80 for connection requests.
- Clients send requests and data with destination port 80.
- The webserver demultiplexes data based on unique source IP addresses and source port numbers.
- In persistent HTTP, the client and server exchange messages via a server socket.
- In non-persistent HTTP, a new Transmission Control Protocol (TCP) connection and socket are created and closed for each request/response.
- Busy webservers may experience severe performance impact.
Port Numbers vs. Process IDs
- UDP and Transmission Control Protocol (TCP) use port numbers to identify sending and destination applications
- Process IDs would make the protocol operating system dependent.
- A single process can have multiple channels of communication, so process IDs alone could not demultiplex data to the correct channel.
- Processes listening on "well-known ports" (like 80 for HTTP) is an important convention.
User Datagram Protocol (UDP)
- The lecture is primarily focused on Transmission Control Protocol (TCP).
- UDP is an unreliable and connectionless protocol
- UDP doesn't require connection establishment before sending packets.
User Datagram Protocol (UDP) Advantages
- UDP offers fewer delays and better control over sending data because of:
- No congestion control or similar mechanisms: UDP encapsulates data and sends it to the network layer immediately, without "intervening," whereas TCP mechanisms (congestion control, retransmissions) cause delays.
- No connection management overhead: UDP does not require connection establishment or tracking of connection state.
- Real-time applications sensitive to delays may prefer UDP, despite higher potential losses (e.g., DNS).
User Datagram Protocol (UDP) Packet Structure
- The header has 64 bits and the following fields:
- Source and destination ports
- Length of the UDP segment (header and data)
- Checksum for error checking
- The UDP sender adds the source port, destination port, and packet length as 16-bit words, wrapping any overflow around.
- It then takes the 1's complement of the sum (all 0s become 1s and all 1s become 0s) to produce the checksum.
- The receiver adds the four 16-bit words (including the checksum).
- The result should be all 1s unless there has been an error.
Checksums
- UDP and Transmission Control Protocol (TCP) use 1's complement
- The receiver adds the four words; if the sum contains a zero bit, there has been an error.
- UDP does not detect all errors.
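A minimal sketch of the 16-bit 1's-complement checksum described above; the field values are made-up examples.

```python
# Sum 16-bit words with end-around carry, then take the 1's complement.
def ones_complement_checksum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # wrap overflow around
    return ~total & 0xFFFF                         # flip every bit (0s become 1s, 1s become 0s)

# Sender side: checksum over source port, destination port, and length.
fields = [0x1F90, 0x0050, 0x001C]        # example: src port 8080, dst port 80, length 28
checksum = ones_complement_checksum(fields)

# Receiver side: adding all words plus the checksum should give all 1s (0xFFFF).
total = 0
for w in fields + [checksum]:
    total += w
    total = (total & 0xFFFF) + (total >> 16)
assert total == 0xFFFF                    # any zero bit in the result indicates an error
```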
Transmission Control Protocol (TCP) Three-Way Handshake
- Step 1: The TCP client sends a special segment with the SYN bit set to 1 (containing no data). The client includes an initial sequence number (client_isn) in the TCP SYN segment.
- Step 2: The server allocates resources and sends back a 'connection-granted' segment called SYNACK. This segment has the SYN bit set to 1, the acknowledgement field set to client_isn+1, and the server's own randomly chosen initial sequence number (server_isn).
- Step 3: The client receives the SYNACK segment, allocates its buffer, and sends an acknowledgment (with the SYN bit set to 0) acknowledging server_isn+1.
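A toy illustration of the sequence numbers exchanged in the three steps (no real packets are sent; the dictionaries simply stand in for TCP segments).

```python
import random

client_isn = random.randint(0, 2**32 - 1)
syn = {"SYN": 1, "seq": client_isn}                               # Step 1: client -> server

server_isn = random.randint(0, 2**32 - 1)
synack = {"SYN": 1, "seq": server_isn, "ack": syn["seq"] + 1}     # Step 2: server -> client

ack = {"SYN": 0, "seq": client_isn + 1, "ack": synack["seq"] + 1} # Step 3: client -> server

assert synack["ack"] == client_isn + 1 and ack["ack"] == server_isn + 1
print("connection established")
```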
Connection Teardown
- Step 1: The client sends a segment with the FIN bit set to 1
- Step 2: The server acknowledges that it has received the close request.
- Step 3: The server sends a segment with FIN bit set to 1, indicating the connection is closed
- Step 4: The client sends an ACK to the server and then waits for a period of time, so it can resend the acknowledgment in case the first ACK segment is lost.
Reliable Transmission
- Recall that the network layer is unreliable and can cause lost or out-of-order packets
- Downloading a corrupted file over the Internet is an undesirable outcome.
Reliability Options
- Let application developers handle network losses themselves, as with UDP.
- Implement reliability in the transport layer.
- TCP implementation guarantees in-order delivery of the application-layer data without loss or corruption.
How TCP Implements Reliability
- Sender needs to know which segments were received and which were lost by remote host.
- One method includes the receiver sending acknowledgements after it receives a specific segment
- If the sender does not receive an acknowledgement, it assumes packet loss and resends it
- The process of using acknowledgements and timeouts is known as Automatic Repeat Request or ARQ
Stop and Wait ARQ
- Sender sends a packet and waits for acknowledgement
- The algorithm has to determine how long to wait before resending a packet.
- A timeout that is too small leads to unnecessary retransmissions.
- A timeout that is too large leads to unnecessary delays.
- The timeout value is a function of the estimated round trip time of the connection
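A self-contained sketch of the stop-and-wait retransmission loop over a simulated lossy channel; the 30% loss rate is an illustrative assumption and the timeout is modeled implicitly.

```python
import random

def lossy_deliver(seq, loss_rate=0.3):
    """Simulate the network: return the ACK for seq, or None if packet/ACK is lost."""
    return None if random.random() < loss_rate else seq

def stop_and_wait_send(packets):
    retransmissions = 0
    for seq, _data in enumerate(packets):
        while True:
            ack = lossy_deliver(seq)   # send one packet, then wait for its ACK
            if ack == seq:
                break                  # acknowledged: move on to the next packet
            retransmissions += 1       # timeout with no ACK: assume loss and resend
    return retransmissions

random.seed(0)
print(stop_and_wait_send(["a", "b", "c"]), "retransmissions were needed")
```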
Packet Sending
- Sender can send multiple packets without acknowledgements
- The sender can have up to N unacknowledged packets outstanding; N is known as the window size.
- When it receives acknowledgements from the receiver, it sends more packets, keeping the number of unacknowledged packets within the window size.
- In implementing this, key considerations are:
- Receiver has to identify and notify the sender of a missing packet.
- Each packet is tagged with a byte sequence number; for subsequent packets in the flow, the sequence number increases by the size of the previous packet.
- Sender and receiver would need to buffer packets.
- Sender buffers packets that have been transmitted but not acknowledged.
- Receiver buffers packets due to differing arrival and consumption rates.
Missing Segment Notification
- Receiver has to send an ACK for the most recently received in-order packet.
- The sender then resends all packets starting from the first unacknowledged one (the packet after the most recently received in-order packet).
- Receiver discards out-of-order received packets.
- In the Go-back-N method, if packet 7 is lost, the receiver discards the subsequent packets
- The sender sends all packets starting from 7 again.
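A self-contained sketch of the Go-Back-N behavior described above, using the packet-7 example from the notes; the receiver logic is folded into the same function for brevity.

```python
# Receiver discards out-of-order packets; on a loss the sender resends everything
# from the oldest unacknowledged packet. Packet 7 is "lost" once, as in the notes.
def go_back_n(num_packets, window_size):
    lost = {7}             # packet 7 is lost on its first transmission
    base, next_seq = 0, 0  # sender state: oldest unACKed packet, next packet to send
    expected = 0           # receiver state: next in-order sequence number
    while base < num_packets:
        while next_seq < min(base + window_size, num_packets):  # fill the window
            if next_seq == expected and next_seq not in lost:
                expected += 1          # arrived in order; packets after a gap are discarded
            next_seq += 1
        base = expected                # cumulative ACK: everything before `expected` is ACKed
        if base < num_packets:
            lost.discard(base)         # the retransmission gets through this time
            next_seq = base            # go back: resend all packets starting from `base`
    return expected

print(go_back_n(12, 4), "packets delivered in order")
```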
Selective ACKing
- TCP retransmits packets that it suspects were received in error.
- Receiver acknowledges a correctly received packet even out-of-order.
- The out-of-order packets are buffered until missing packets have been received
- The batch of the packets can be delivered to the application layer.
- TCP still needs timeouts, since ACKs can get lost.
Detecting Loss
- Timeout
- Duplicate acknowledgements.
- A duplicate ACK re-acknowledges a segment for which the sender has already received an acknowledgement.
- When the sender receives 3 duplicate ACKs for a packet, it retransmits without waiting for the timeout.
- This implementation is known as fast retransmit (sketched below).
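A sketch of the duplicate-ACK counting behind fast retransmit; the ACK stream is a made-up example in which segment 5 is lost, so the receiver keeps re-acknowledging segment 4.

```python
def detect_fast_retransmit(ack_stream):
    retransmitted = []
    last_ack, dup_count = None, 0
    for ack in ack_stream:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:                  # third duplicate ACK for the same segment
                retransmitted.append(ack + 1)   # resend the missing segment right away
        else:
            last_ack, dup_count = ack, 0        # new cumulative ACK: reset the counter
    return retransmitted

print(detect_fast_retransmit([1, 2, 3, 4, 4, 4, 4, 9]))   # -> [5]
```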
Transmission Rate Mechanisms
- The transport-layer has mechanisms to control the transmission rate.
Transmission Rate Needs
- A wants to transfer a 1 Gb file to B
- Link has a 100 Mbps rate.
- Transfer is capped at 100 Mbps
- Other users using the same link affect the transfer.
- B can also be receiving files from other users
Transmission Control Function
- Transmission control is a function needed by most applications; implemented in the transport layer, it also handles fairness in network usage.
- TCP provides these mechanisms: flow control and congestion control.
Flow Control: Receiver Protection
- Flow control protects the receiver's buffer from overflowing.
- The receiver buffers packets that have not yet been delivered to the application.
- The receiver may be busy with multiple other processes and cannot read data instantly, so data accumulates and the buffer can overflow.
Transmission Control Protocol (TCP) Solution
- Sender maintains "receive window" variable
- Provides sender with knowledge of how much data receiver can handle.
- Hosts A and B communicate over TCP.
- A wants to send a file to Host B.
- Host B allocates RcvBuffer to the connection
- Receiving host has two variables:
- LastByteRead: the number of the last byte read from the buffer
- LastByteRcvd: the number of the last byte that has arrived from the sender and been placed in the buffer
- TCP makes sure of the following so the buffer doesn't overflow:
- LastByteRcvd – LastByteRead <= RcvBuffer
- The extra space term is called the receive window.
- rwnd = RcvBuffer - [LastByteRcvd – LastByteRead]
- The receiver advertises the value of rwnd in every segment/ACK it sends to the sender
- The sender has two variables:
- LastByteSent
- LastByteAcked
- Unacknowledged Data Sent = LastByteSent - LastByteAcked
Preventing Overflow of Data:
- Sender needs to ensure maximum unacknowledged bytes are not more than rwnd
- Thus we need:
- LastByteSent - LastByteAcked <= rwnd
- When rwnd = 0, the sender stops sending data.
- The receiver's buffer may later empty, but the sender is never informed that there is room again, so it could remain blocked forever.
- TCP resolves this problem by having sender send 1 byte segment even when rwnd = 0
- When the receiver acknowledges these segments, it includes the current rwnd value, so the sender learns when there is room again (see the sketch below).
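A small numeric sketch of the receive-window bookkeeping above; the byte counts and buffer size are made-up examples.

```python
RcvBuffer = 4096

# Receiver side
LastByteRcvd = 3000      # last byte that has arrived from the sender
LastByteRead = 1000      # last byte the application has read from the buffer
assert LastByteRcvd - LastByteRead <= RcvBuffer      # buffer has not overflowed
rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)     # free space advertised to the sender
print("advertised rwnd:", rwnd, "bytes")

# Sender side: unacknowledged data must fit inside the advertised window
LastByteSent = 3500
LastByteAcked = 3000
assert LastByteSent - LastByteAcked <= rwnd
```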
Congestion Control Introduction
- Congestion control adjusts the transmission rate in order to protect the network from congestion.
Reasons to Avoid Congestion:
- A set of senders shares a single link with capacity C.
- The other links along the paths are assumed to have enough capacity.
- If the combined sending rates exceed C, queues grow longer and packets are dropped.
Network Dynamics
- Networks change dynamically as users join and leave, so congestion control mechanisms must also be dynamic.
Desirable properties of a congestion control algorithm:
- Efficiency: the network should be used as close to its maximum throughput as possible.
- Fairness: flows should share the network bandwidth; in this context, every flow through a bottleneck link is assumed to deserve the same share of bandwidth.
- Low delay: continually pushing packets into the network builds up queues, so the algorithm should keep queuing delay low.
- Fast convergence: a flow should converge quickly to its fair share of the bandwidth.
Congestion Control Flavors
- End-to-end
- Network-assisted
Network-Assisted Congestion Control
- Relies on the network layer to provide the sender with feedback about network congestion.
- For instance, routers might use ICMP source quench messages to tell the source that the network is congested; however, under heavy congestion these ICMP packets can themselves be lost.
End-to-End Congestion Control:
- No explicit network feedback.
- Hosts detect congestion from network behavior and adjust the transmission rate.
- TCP uses an end-to-end approach, implementing congestion control in the transport layer rather than in the network layer, which controls the routers.
- Routers can also provide explicit feedback using the ECN and QCN protocols.
Signs of Congestion
- Commonly, there are two signals of congestion
- Packet delay: congested router queues lead to packet delays, which the sender can observe through its estimate of the round-trip time.
- Packet delays in a network fluctuate, however, and this variance makes inferring congestion from delay hard.
- Packet loss: routers start dropping packets when the network is congested. Note that packets can also be lost for other reasons, such as hardware failures or routing errors.
- The earliest Transmission Control Protocol (TCP) implementations use packet loss as the signal of congestion; losses are detected and handled by the sender.
Transmission Rate Limit
- TCP needs to determine the available network capacity, i.e., how many packets it can keep in flight without causing congestion.
- The source uses ACKs as a pacing mechanism: an ACK tells the sender that a previously sent packet was delivered, so it can release more packets into the network.
- Congestion window (cwnd): the maximum amount of unacknowledged data a host can have in transit.
Transmission Control Protocol: Probing
- TCP adapts the congestion window
- Increases the window under normal conditions
- If congestion is detected, the congestion window is decreased.
Limiting Unacknowledged Data
- LastByteSent – LastByteAcked <= min{cwnd, rwnd}
- The TCP sender cannot send faster than the slowest component, which is either the network (cwnd) or the receiving host (rwnd).
Congestion Control: AIMD
- TCP decreases its sending rate when congestion goes up and increases it when congestion levels decrease.
- These combined mechanisms are known as Additive Increase/Multiplicative Decrease (AIMD).
AIMD - Additive Increase
- A connection starts with a small window, typically two packets, and additively increases it.
- The window grows by 1 packet every round-trip time (RTT): once the sending host has sent cwnd packets and received their acknowledgements, it adds one packet to the congestion window. This is the additive increase part of congestion control.
Incremental Increase:
- In practice, the increase happens incrementally: TCP does not wait for a whole window of ACKs, but grows the window a little as each ACK arrives, in increments that are portions of the MSS.
- Increment = MSS × (MSS / CongestionWindow)
- CongestionWindow += Increment
TCP Reno (Congestion Control Algorithm)
- Sets the congestion window (cwnd) to half its previous value when a loss event occurs.
- Decreasing the congestion window on each timeout is the "multiplicative decrease" part of AIMD.
- This reduces the rate at which the sender transmits when a loss event occurs.
- A loss event occurs when packets are dropped because the network is congested.
- cwnd cannot be reduced below 1 packet.
- For example, if a timeout is detected when the window is 32 packets, cwnd is set to 16 packets.
- Further losses reduce cwnd to 8 packets, then 4, and so on (the AIMD update is sketched below).
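A sketch of the AIMD update rules described above; the MSS value and the starting window are illustrative assumptions.

```python
MSS = 1460               # maximum segment size in bytes (assumed value)
cwnd = 2 * MSS           # congestion window, starting at two packets

def on_ack():
    """Additive increase: roughly one MSS per round trip, applied per ACK."""
    global cwnd
    cwnd += MSS * (MSS / cwnd)      # Increment = MSS * (MSS / CongestionWindow)

def on_loss():
    """Multiplicative decrease: halve the window on a loss event."""
    global cwnd
    cwnd = max(cwnd / 2, MSS)       # never shrink below one packet

for _ in range(100):
    on_ack()
on_loss()
print(round(cwnd / MSS, 1), "packets")   # sawtooth: grew additively, then was halved
```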
Signals of Congestion
- TCP Reno uses two signals of packet loss:
- Duplicate ACKs: indicate mild congestion; the congestion window is reduced by half.
- Timeout: no ACK arrives within the timeout interval, indicating more severe congestion.
Sawtooth Pattern
- The congestion window continuously increases and decreases during the lifetime of the connection; a graph of cwnd over time follows a sawtooth pattern.
- The pattern differs from the exponential increase seen at the start of a connection.
- Slow start, described below, is used before the host is operating close to the network capacity.
Additive Increase/Multiplicative Decrease (AIMD)
- The AIMD approach that we saw in the previous topic is useful when the sending host is operating very close to the network capacity.
- The approach decreases the congestion window far more aggressively than it increases it, because when operating near capacity any further increase quickly leads to loss.
Slow Start
- Slow start is a congestion-control algorithm for Transmission Control Protocol (TCP) that is used to avoid causing new congestion on the network when starting a connection.
- For this reason, TCP Reno has a slow start phase in which the congestion window increases exponentially rather than linearly as in AIMD.
- The system starts by setting cwnd to 1 packet and doubles it every RTT.
- This initial phase is called 'slow start'.
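A sketch of slow start handing over to additive increase; the ssthresh threshold is an assumed value used for illustration, not something given in the notes.

```python
MSS = 1460
cwnd = 1 * MSS          # slow start begins with a window of one packet
ssthresh = 64 * MSS     # assumed threshold where exponential growth stops

def on_ack():
    global cwnd
    if cwnd < ssthresh:
        cwnd += MSS                   # slow start: +1 MSS per ACK, so cwnd doubles every RTT
    else:
        cwnd += MSS * (MSS / cwnd)    # afterwards: AIMD-style additive increase

for _ in range(70):
    on_ack()
print(round(cwnd / MSS, 1), "packets")
```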
Fairness
- Fairness is one of the desirable goals in congestion control: if k connections share a bottleneck link of capacity R bps, each should get an average throughput of R/k.
AIMD Policy
- AIMD helps achieve fairness: connections that additively increase and multiplicatively decrease their windows converge toward an equal share of the bottleneck bandwidth.
Cautions
- There are cases where TCP is not fair, e.g., connections with different RTTs.
- Because TCP Reno adapts its window as ACKs arrive, connections with smaller RTTs grow their windows faster and capture more bandwidth.
- An application can also open multiple parallel TCP connections to obtain a larger share of the bandwidth.
Congestion Control in High-Speed Networks
- Networks have improved vastly; in particular, link speeds have increased.
- With classic TCP, the window can take a long time to grow large enough to fully utilize these faster links.
- To make TCP efficient in such networks, improved congestion control algorithms have been proposed.
- The goal is to reach, and then stay near, an approximately optimal window size.
TCP CUBIC
- Like Reno, CUBIC decreases the window multiplicatively when congestion is experienced.
- The optimal window size is assumed to lie somewhere in between; CUBIC therefore grows the window quickly at first and then slows the increase as it approaches Wmax, the window size at the time of the last loss.
Window Growth Function
- TCP CUBIC approximates this behavior with a cubic function of time: W(t) = C(t - K)^3 + Wmax
- C is a scaling constant and t is the time elapsed since the last window reduction.
- K is the time the function takes to increase the window back to Wmax when there is no further loss; it is determined from Wmax (see the sketch below).
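A small sketch of this growth function; C = 0.4 and beta = 0.7 are commonly used CUBIC parameters assumed here for illustration.

```python
def cubic_window(t, w_max, C=0.4, beta=0.7):
    """W(t) = C * (t - K)^3 + Wmax, with K the time needed to climb back to Wmax."""
    K = (w_max * (1 - beta) / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max

w_max = 100.0                                  # window size (in packets) at the last loss
print(round(cubic_window(0.0, w_max), 1))      # just after the loss: beta * Wmax = 70.0
print(round(cubic_window(4.22, w_max), 1))     # near t = K (about 4.2 here) the window is back at Wmax
```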
The TCP Protocol - Throughput
- The congestion window follows the sawtooth pattern: the window increases by a constant amount each RTT until a loss occurs, at which point the maximum value is cut in half.
Factors
- We want to predict the throughput of a TCP connection.
- To make the model more realistic, losses are described probabilistically.
- The model assumes that roughly 1/p packets are delivered before a loss occurs, where p is the loss probability.
- The value of p can be found using measurement data.
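As a rough sketch of this model, the widely used square-root relation (assumed here; the notes do not state the formula explicitly) estimates throughput from the MSS, the RTT, and the loss probability p.

```python
import math

def estimated_throughput(mss_bytes, rtt_seconds, loss_probability):
    """Approximate TCP throughput in bytes/second: ~ (C * MSS) / (RTT * sqrt(p))."""
    C = math.sqrt(3.0 / 2.0)   # constant from the idealized sawtooth model
    return (C * mss_bytes) / (rtt_seconds * math.sqrt(loss_probability))

# Example: MSS = 1460 bytes, RTT = 100 ms, loss probability p = 0.01
print(estimated_throughput(1460, 0.1, 0.01) * 8 / 1e6, "Mbps")  # roughly 1.4 Mbps
```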
TCP Throughput and Windows in Practice
- In practice, windows are often small and the "constant" in the throughput formula is usually less than 1.
- Data centers (DC) are a popular example where both loss and delay gradients are used to adjust the window.