Transport Layer Lecture PDF

Summary

This document provides a deep dive into the transport layer, focusing on TCP and UDP. It outlines their functionalities, features, and differences in network communication, and introduces the leaky bucket and token bucket traffic-shaping algorithms used for quality of service.

Full Transcript

Transport Layer in Depth

Transport Layer Recap: Transport Layer Protocols

The two primary protocols of this layer are TCP and UDP:
TCP (Transmission Control Protocol)
UDP (User Datagram Protocol)

Transport layer protocols receive data from applications and segment it. They also provide transmission error detection and correction mechanisms.

UDP vs TCP: User Datagram Protocol

Connectionless protocol.
Single-datagram transfer (does not segment data the way TCP does).
Detects errors (via its checksum) but provides no error correction mechanism.
No congestion control mechanism.
No retransmission delay, which makes it useful for real-time applications such as VoIP and online games.
Commonly used for broadcast.

Why do we need UDP?

Fast, with low latency: there is no connection establishment, hence no setup delay.
Simple: no connection state is kept at the sender or receiver.
Small segment header (8 bytes): less overhead per packet.

Example protocols that use UDP: DHCP, RIP, DNS, SNMP.

UDP Segment Format

The fields in a UDP header are:
Source port – The port of the device sending the data.
Destination port – The port of the device receiving the data. UDP port numbers can be between 0 and 65,535.
Length – Specifies the number of bytes comprising the UDP header and the UDP payload. The minimum value of the length field is 8 bytes (a header with no payload).
Checksum – Used to verify the correctness of the message. It is optional in IPv4 but was made mandatory in IPv6.

TCP

The Transmission Control Protocol (TCP) is the internet standard ensuring the successful exchange of data packets between devices over a network. TCP is the underlying communication protocol for a wide variety of applications, including web servers and websites, email applications, FTP and peer-to-peer apps.

Features of TCP

Connection-oriented protocol.
Reliable point-to-point data transfer.
Full duplex.
Flow control (the receiver can control the sending rate).
Congestion control (stops the sender from overwhelming the network).

Three-Way Handshake Protocol: Connection Establishment

1. The client sends the server a SYN packet – a connection request from its source port to the server's destination port.
2. The server responds with a SYN/ACK packet, acknowledging receipt of the connection request.
3. The client receives the SYN/ACK packet and responds with an ACK packet of its own.

After the connection is established, TCP works by breaking the transmitted data into segments, each of which is packaged into a datagram and sent to its destination.

TCP Header

Source port – The sending device's port.
Destination port – The receiving device's port.
Sequence number – A device initiating a TCP connection chooses a random initial sequence number, which is then incremented according to the number of bytes transmitted.
Acknowledgment number – The receiving device maintains an acknowledgment number indicating the next byte it expects to receive; it increments this number according to the number of bytes received.
TCP data offset – Specifies the size of the TCP header, expressed in 32-bit words. One word represents four bytes.
Reserved data – The reserved field is always set to zero.
Control flags – TCP uses nine control flags to manage data flow in specific situations, such as initiating a reset.
Window size – Specifies the amount of data the destination is willing to accept.
TCP checksum – The sender generates a checksum and transmits it in every packet header. The receiving device can use the checksum to check for errors in the received header and payload.
Urgent pointer – Indicates data that is to be delivered as quickly as possible (used together with the URG flag).
TCP optional data – Optional fields for setting the maximum segment size, selective acknowledgments, and window scaling for more efficient use of high-bandwidth networks.
Padding – Ensures that the TCP header ends, and the data begins, on a 32-bit boundary. The padding is composed of zeros.
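To make the header layouts above concrete, here is a minimal Python sketch that unpacks the fixed fields of an 8-byte UDP header and a 20-byte TCP header from raw bytes. The sample bytes at the bottom are fabricated purely for illustration; real segments would come from a packet capture.

```python
import struct

def parse_udp_header(data: bytes) -> dict:
    # UDP header: source port, destination port, length, checksum (2 bytes each = 8 bytes)
    src, dst, length, checksum = struct.unpack("!HHHH", data[:8])
    return {"src_port": src, "dst_port": dst, "length": length, "checksum": checksum}

def parse_tcp_header(data: bytes) -> dict:
    # Fixed 20-byte TCP header; options and padding, if any, follow it
    src, dst, seq, ack, off_flags, window, checksum, urg = struct.unpack(
        "!HHIIHHHH", data[:20]
    )
    data_offset = (off_flags >> 12) & 0xF   # header length in 32-bit words
    flags = off_flags & 0x1FF               # the nine control flag bits
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_bytes": data_offset * 4,    # one word = four bytes
        "flags": flags, "window": window,
        "checksum": checksum, "urgent_ptr": urg,
    }

# Fabricated example: a UDP header for a DNS query (ports 53000 -> 53, length 33, checksum 0)
udp = struct.pack("!HHHH", 53000, 53, 33, 0)
print(parse_udp_header(udp))
```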
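The three-way handshake described above is carried out by the operating system when an application calls connect(); UDP performs no such exchange. The sketch below contrasts the two using Python's standard socket module; the loopback address and OS-chosen port are placeholders for the example, not part of the lecture.

```python
import socket

# TCP: connect() triggers the SYN / SYN-ACK / ACK exchange before any data moves.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))           # three-way handshake happens here
conn, _ = server.accept()
client.sendall(b"hello over TCP")      # data flows only after the handshake
print(conn.recv(1024))
client.close()
conn.close()
server.close()

# UDP: no handshake and no connection state; sendto() just emits one datagram.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over UDP", (host, port))   # fire-and-forget; delivery not guaranteed
udp.close()
```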
Summary: TCP vs UDP

                     TCP                                  UDP
Reliability          Reliable                             Best-effort
Connection type      Connection-oriented                  Connectionless
Sequencing           Yes                                  No
Typical uses         Email, file sharing, downloading     Voice streaming, video streaming, gaming

Quality of Service: Traffic Shaping – Leaky Bucket Algorithm

The leaky bucket algorithm is a traffic-shaping algorithm used to convert bursty traffic into smooth traffic by averaging the data rate sent into the network. It is a method of congestion control in which packets are stored temporarily and then sent to the network at a constant rate agreed between the sender and the network. This algorithm is used to implement congestion control through traffic shaping in data networks.

Consider that each network interface has a leaky bucket. When the sender wants to transmit packets, the packets are thrown into the bucket, where they accumulate. If the bucket is full, arriving packets are discarded and lost. The bucket leaks at a constant rate, meaning that packets are transmitted to the network at a constant rate, known as the leak rate or average rate. In this way, bursty traffic is converted into smooth, fixed-rate traffic. Queuing the packets and releasing them at regular intervals helps reduce network congestion and increase overall performance. (A minimal sketch of this behaviour appears after the token bucket comparison below.)

Quality of Service: Traffic Shaping – Token Bucket Algorithm

The leaky bucket algorithm enforces the output pattern at the average rate, no matter how bursty the traffic is. To handle heavier traffic without losing data, a more flexible algorithm is needed: the token bucket algorithm.

Step 1 – At regular intervals, tokens are thrown into the bucket.
Step 2 – The bucket has a maximum capacity.
Step 3 – If a packet is ready, a token is removed from the bucket and the packet is sent.
Step 4 – If there is no token in the bucket, the packet cannot be sent.

In the lecture figure (a), the bucket holds two tokens and three packets are waiting to be sent out of the interface. In figure (b), two packets have been sent out by consuming two tokens, and one packet is still waiting. Compared to the leaky bucket, the token bucket algorithm is less restrictive, meaning it allows more traffic; the size of a burst is limited by the number of tokens available in the bucket at a particular instant.

Advantages of the Token Bucket over the Leaky Bucket

If the bucket is full, tokens are discarded, not packets; in the leaky bucket, packets are discarded.
The token bucket can send large bursts at a faster rate, while the leaky bucket always sends packets at a constant rate.
The token bucket is easier to implement than the leaky bucket.
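As a rough illustration of the leaky bucket behaviour described above, the following Python sketch queues arriving packets in a bounded bucket and drains them at a fixed leak rate. The bucket capacity and leak rate are arbitrary values chosen for the example, not figures from the lecture.

```python
from collections import deque

class LeakyBucket:
    """Queue packets and release them at a constant rate; drop arrivals when full."""

    def __init__(self, capacity_pkts: int, leak_rate_pkts_per_tick: int):
        self.capacity = capacity_pkts
        self.leak_rate = leak_rate_pkts_per_tick
        self.queue = deque()

    def arrive(self, packet) -> bool:
        # A full bucket discards (loses) the arriving packet.
        if len(self.queue) >= self.capacity:
            return False
        self.queue.append(packet)
        return True

    def tick(self) -> list:
        # Each tick, at most `leak_rate` packets leak out onto the network.
        sent = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent

# A burst of 6 packets arrives at a bucket of capacity 4 that leaks 1 packet per tick.
bucket = LeakyBucket(capacity_pkts=4, leak_rate_pkts_per_tick=1)
accepted = [bucket.arrive(f"pkt{i}") for i in range(6)]
print("accepted:", accepted)                   # the last two packets are dropped
for t in range(5):
    print(f"tick {t}: sent {bucket.tick()}")   # smooth, one-per-tick output
```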
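The token bucket steps can be sketched the same way: tokens accumulate at a fixed rate up to a maximum, and a packet is sent only if a token can be removed. The fill rate and capacity below are illustrative values, again not from the lecture.

```python
class TokenBucket:
    """Allow bursts up to the number of tokens currently in the bucket."""

    def __init__(self, capacity_tokens: int, fill_rate_per_tick: int):
        self.capacity = capacity_tokens
        self.fill_rate = fill_rate_per_tick
        self.tokens = capacity_tokens          # start with a full bucket

    def tick(self):
        # Steps 1-2: tokens are added at regular intervals, up to the capacity.
        # When the bucket is full, tokens (not packets) are discarded.
        self.tokens = min(self.capacity, self.tokens + self.fill_rate)

    def try_send(self) -> bool:
        # Steps 3-4: a packet is sent only if a token can be consumed.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

# With 2 tokens available, a burst of 3 packets: 2 go out, 1 must wait (figures a and b).
bucket = TokenBucket(capacity_tokens=2, fill_rate_per_tick=1)
print([bucket.try_send() for _ in range(3)])   # [True, True, False]
bucket.tick()                                  # a new token arrives at the next interval
print(bucket.try_send())                       # the waiting packet can now be sent
```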
