Summary

This document introduces the data link layer in computer networks and explains error detection and correction techniques, including forward and backward error correction and the Hamming code. It also covers switched local area networks, link virtualization, data center networking, the first steps in the life of a web page request (DHCP over UDP, IP, and Ethernet), and multiple access protocols, including random access, controlled access, and channelization protocols.

Full Transcript


1. Write about introduction to Link Layer.

Data Link Layer
o In the OSI model, the data link layer is the 4th layer from the top and the 2nd layer from the bottom.
o The communication channels that connect adjacent nodes are known as links, and in order to move a datagram from the source to the destination, the datagram must be moved across each individual link in the path.
o The main responsibility of the data link layer is to transfer the datagram across an individual link.
o The data link layer protocol defines the format of the packet exchanged between the nodes, as well as actions such as error detection, retransmission, flow control, and random access.
o The data link layer protocols include Ethernet, token ring, FDDI, and PPP.
o An important characteristic of the data link layer is that a datagram can be handled by different link-layer protocols on different links in a path. For example, a datagram may be handled by Ethernet on the first link and by PPP on the second link.

2. Explain about error detection and correction techniques.

Error correction codes are used to detect and correct errors when data is transmitted from the sender to the receiver. Error correction can be handled in two ways:
o Backward error correction: once an error is discovered, the receiver requests the sender to retransmit the entire data unit.
o Forward error correction: the receiver uses an error-correcting code that automatically corrects the errors.

A single additional bit can detect an error, but it cannot correct it. To correct an error, one has to know its exact position. For example, to correct a single-bit error in a 7-bit unit, the error correction code must determine which of the seven bits is in error. To achieve this, we add redundant bits. Suppose r is the number of redundant bits and d is the number of data bits. The number of redundant bits r is the smallest value satisfying:

2^r >= d + r + 1

For example, if d is 4, the smallest value of r that satisfies this relation is 3 (since 2^3 = 8 >= 4 + 3 + 1 = 8).

To determine the position of the bit in error, R. W. Hamming developed the Hamming code, which can be applied to a data unit of any length and uses the relationship between data bits and redundant bits.

Hamming Code
Parity bit: a bit appended to the original binary data so that the total number of 1s becomes even or odd.
Even parity: if the total number of 1s in the data is even, the parity bit is 0; if the total number of 1s is odd, the parity bit is 1.
Odd parity: if the total number of 1s in the data is even, the parity bit is 1; if the total number of 1s is odd, the parity bit is 0.
A short code sketch of the redundant-bit formula and a Hamming-style code follows the list below.

Error Detecting Techniques: the most popular error detecting techniques are:
o Single parity check
o Two-dimensional parity check
o Checksum
o Cyclic redundancy check
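The following Python sketch illustrates the two ideas above under simple assumptions: the formula 2^r >= d + r + 1, and a Hamming(7,4)-style code with even parity (parity bits at positions 1, 2, and 4). The function names and the 4-bit example are illustrative only and are not part of the original notes.

```python
# Minimal sketch of the redundant-bit formula and a Hamming(7,4)-style code
# with even parity. Names and example data are illustrative.

def redundant_bits(d):
    """Smallest r satisfying 2^r >= d + r + 1."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r

def hamming_encode(data):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7,
    parity bits at positions 1, 2 and 4, even parity)."""
    d1, d2, d3, d4 = data                 # data bits go to positions 3, 5, 6, 7
    p1 = (d1 + d2 + d4) % 2               # covers positions 3, 5, 7
    p2 = (d1 + d3 + d4) % 2               # covers positions 3, 6, 7
    p4 = (d2 + d3 + d4) % 2               # covers positions 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def error_position(code):
    """Return 0 if no single-bit error is detected, otherwise the
    1-based position of the bit in error."""
    b = [None] + code                     # switch to 1-based indexing
    c1 = (b[1] + b[3] + b[5] + b[7]) % 2
    c2 = (b[2] + b[3] + b[6] + b[7]) % 2
    c4 = (b[4] + b[5] + b[6] + b[7]) % 2
    return c4 * 4 + c2 * 2 + c1

print(redundant_bits(4))                  # 3, as in the worked example above
code = hamming_encode([1, 0, 1, 1])
code[5] ^= 1                              # flip position 6 to simulate a single-bit error
print(error_position(code))               # 6
```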
3. What is switched local area network? Explain.

LAN switching is a form of packet switching used in local area networks (LANs). Switching technologies are crucial to network design, as they allow traffic to be sent only where it is needed in most cases, using fast, hardware-based methods. LAN switching uses different kinds of network switches.

3.B) Discuss about Link Virtualization.

Link Virtualization
Link virtualization is a powerful technology that enhances the flexibility, scalability, and efficiency of link-layer operations. However, understanding its benefits and limitations is crucial for making informed decisions and implementing it effectively in your network infrastructure.

4. What is Data Center Networking? Explain.

Data center networking is the integration of a constellation of networking resources (switching, routing, load balancing, analytics, etc.) to facilitate the storage and processing of applications and data. The core layer connects all the distribution layers, while the distribution layer connects to the access layer. This structure enables better management, scalability, and fault tolerance by segregating traffic and minimizing network congestion.

4.b) Explain about the day in the life of a web page request.

Day in the Life of a Web Page Request
Getting Started: DHCP, UDP, IP, and Ethernet

Let's suppose that Bob boots up his laptop and then connects it to an Ethernet cable connected to the school's Ethernet switch, which in turn is connected to the school's router, as shown in Figure 5.32. The school's router is connected to an ISP, in this example, comcast.net. In this example, comcast.net is providing the DNS service for the school; thus, the DNS server resides in the Comcast network rather than the school network. We'll assume that the DHCP server is running within the router, as is often the case.

When Bob first connects his laptop to the network, he can't do anything (e.g., download a Web page) without an IP address. Thus, the first network-related action taken by Bob's laptop is to run the DHCP protocol to obtain an IP address, as well as other information, from the local DHCP server:

1. The operating system on Bob's laptop creates a DHCP request message (Section 4.4.2) and puts this message within a UDP segment (Section 3.3) with destination port 67 (DHCP server) and source port 68 (DHCP client). The UDP segment is then placed within an IP datagram (Section 4.4.1) with a broadcast IP destination address (255.255.255.255) and a source IP address of 0.0.0.0, since Bob's laptop doesn't yet have an IP address.
2. The IP datagram containing the DHCP request message is then placed within an Ethernet frame (Section 5.4.2). The Ethernet frame has a destination MAC address of FF:FF:FF:FF:FF:FF so that the frame will be broadcast to all devices connected to the switch (hopefully including a DHCP server); the frame's source MAC address is that of Bob's laptop, 00:16:D3:23:68:8A.
3. The broadcast Ethernet frame containing the DHCP request is the first frame sent by Bob's laptop to the Ethernet switch. The switch broadcasts the incoming frame on all outgoing ports, including the port connected to the router.
4. The router receives the broadcast Ethernet frame containing the DHCP request on its interface with MAC address 00:22:6B:45:1F:1B, and the IP datagram is extracted from the Ethernet frame. The datagram's broadcast IP destination address indicates that this IP datagram should be processed by upper-layer protocols at this node, so the datagram's payload (a UDP segment) is demultiplexed (Section 3.2) up to UDP, and the DHCP request message is extracted from the UDP segment. The DHCP server now has the DHCP request message.
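As a rough illustration of the addressing in step 1, the Python sketch below sends a UDP datagram from client port 68 to server port 67 on the limited-broadcast address 255.255.255.255. It is not a working DHCP client: the payload is a placeholder rather than a structured DHCP message, binding to port 68 normally requires administrator privileges, and the Ethernet-level broadcast in step 2 is handled by the operating system and link layer, not by this code.

```python
# Sketch of the UDP/IP addressing used by a DHCP request (step 1 above).
# Placeholder payload only; not a real DHCP client.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)   # allow 255.255.255.255
sock.bind(("0.0.0.0", 68))             # DHCP client port; source IP still unknown
placeholder = b"DHCPDISCOVER"          # a real request is a structured BOOTP/DHCP message
sock.sendto(placeholder, ("255.255.255.255", 67))   # broadcast to the DHCP server port
sock.close()
```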
5. Discuss about multiple access links and protocols.

Multiple Access Protocols in Computer Networks
Multiple access protocols are methods used in computer networks to control how data is transmitted when multiple devices try to communicate over the same network. These protocols ensure that data packets are sent and received efficiently, without collisions or interference, and they manage network traffic so that all devices can share the communication channel smoothly and effectively.

Who is responsible for the transmission of data? The data link layer is responsible for the transmission of data between two nodes. Its main functions are:
o Data link control
o Multiple access control

Data Link Control
Data link control is responsible for the reliable transmission of messages over transmission channels, using techniques such as framing, error control, and flow control. For data link control, refer to Stop-and-Wait ARQ.

Multiple Access Control
If there is a dedicated link between the sender and the receiver, then the data link control layer is sufficient. However, if there is no dedicated link, multiple stations can access the channel simultaneously, so multiple access protocols are required to reduce collisions and avoid crosstalk. For example, in a classroom full of students, when a teacher asks a question and all the students (stations) start answering simultaneously (sending data at the same time), a lot of chaos is created (data overlaps or is lost); it is then the job of the teacher (the multiple access protocol) to manage the students and make them answer one at a time. Thus, protocols are required for sharing data on non-dedicated channels. Multiple access protocols can be further subdivided into random access, controlled access, and channelization protocols.

6. Explain about random access control protocols.

Random Access Protocol
In random access protocols, all stations have the same priority; no station has more priority than another. Any station can send data depending on the medium's state (idle or busy). Random access has two features:
o There is no fixed time for sending data.
o There is no fixed sequence of stations sending data.

The random access protocols are further subdivided; the first of these is ALOHA.

ALOHA
ALOHA was designed for wireless LANs but is also applicable to any shared medium. In ALOHA, multiple stations can transmit data at the same time, which can lead to collisions and garbled data.
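As a rough illustration of the collision behaviour just described, here is a toy simulation in the style of slotted ALOHA (a simplified variant in which time is divided into slots). The parameters (10 stations, transmission probability 0.1, 100000 slots) are arbitrary choices for illustration and are not taken from the notes.

```python
# Toy slotted-ALOHA-style simulation: in each slot every station transmits
# independently with probability p; a slot succeeds only when exactly one
# station transmits, and two or more simultaneous transmissions collide.
import random

stations, p, slots = 10, 0.1, 100_000
success = collision = 0
for _ in range(slots):
    transmitters = sum(random.random() < p for _ in range(stations))
    if transmitters == 1:
        success += 1
    elif transmitters > 1:
        collision += 1          # frames overlap and are garbled

print(f"throughput ~ {success / slots:.3f} frames/slot, "
      f"collision slots ~ {collision / slots:.3f}")
```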
7. Discuss about controlled access protocols.

Controlled Access Protocols
Controlled access is a method of reducing data frame collisions on a shared channel. In the controlled access method, the stations consult one another, and a station sends a data frame only after it has been approved by the other stations; a single station cannot send a data frame unless all other stations approve. In controlled access, the stations seek information from one another to find out which station has the right to send, and only one node is allowed to send at a time, to avoid collision of messages on the shared medium. The three controlled-access methods are:
1. Reservation
2. Polling
3. Token Passing

Reservation
In the reservation method, a station needs to make a reservation before sending data. The timeline has two kinds of periods:
1. A reservation interval of fixed length
2. A data transmission period of variable-length frames
If there are M stations, the reservation interval is divided into M slots, and each station has one slot.

Polling
The polling process is similar to the roll call performed in a class: just like the teacher, a controller sends a message to each node in turn. One node acts as the primary station (controller) and the others are secondary stations, and all data exchanges are made through the controller. The message sent by the controller contains the address of the node selected for granting access. Although all nodes receive the message, only the addressed node responds and sends data, if it has any; if there is no data, a "poll reject" (NAK) message is usually sent back. Problems include the high overhead of the polling messages and the high dependence on the reliability of the controller.

Token Passing
In the token passing scheme, the stations are logically connected to each other in the form of a ring, and access is governed by a token. A token is a special bit pattern or a small message that circulates from one station to the next in a predefined order. In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in a token bus, each station uses the bus to send the token to the next station in the predefined order. In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before passing the token to the next station; if it has no queued frame, it simply passes the token on. After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbours and for the other N - 1 stations to send a frame, if they have one. Problems such as token duplication, token loss, and the insertion or removal of a station must be handled for correct and reliable operation of this scheme. (A small round-robin token-passing sketch appears at the end of this section.)

8. Write about channelization protocols in detail.

Channelization protocols allow numerous stations to access the same channel at the same time by sharing the link's available bandwidth in time, frequency, or code. The three types of channelization are:
o Frequency Division Multiple Access (FDMA)
o Time Division Multiple Access (TDMA)
o Code Division Multiple Access (CDMA)
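The sketch below illustrates the CDMA idea from question 8 under simple assumptions: each station is assigned an orthogonal 4-chip code (Walsh-style sequences), sends its data bit as +code or -code, the shared channel adds the signals chip by chip, and a receiver recovers one station's bit by correlating the channel with that station's code. The station names, codes, and data bits are made-up illustration values.

```python
# Toy CDMA example: orthogonal chip sequences, chip-wise channel sum,
# and decoding by inner product. Illustration data only.
codes = {                      # mutually orthogonal 4-chip sequences
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
    "D": [+1, -1, -1, +1],
}
bits = {"A": 1, "B": 0, "C": 1, "D": 1}   # 1 -> +code, 0 -> -code

# All stations transmit at once; the channel carries the chip-wise sum.
channel = [0, 0, 0, 0]
for station, code in codes.items():
    sign = 1 if bits[station] == 1 else -1
    channel = [c + sign * chip for c, chip in zip(channel, code)]

# To recover station B's bit, correlate the channel with B's code.
inner = sum(c * chip for c, chip in zip(channel, codes["B"]))
print("B sent", 1 if inner > 0 else 0)    # prints: B sent 0
```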

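Finally, as referenced under Token Passing in question 7, here is a toy round-robin token-passing loop: the token visits stations in a fixed ring order, and a station transmits one queued frame only while it holds the token. The station names and frame queues are made-up illustration data; lost or duplicated tokens are not modelled.

```python
# Toy token-passing ring: the token circulates in a fixed order and a
# station sends one queued frame per token visit. Illustration data only.
from collections import deque

ring = ["S1", "S2", "S3", "S4"]                       # logical ring order
queues = {"S1": deque(["f1"]), "S2": deque(),
          "S3": deque(["f2", "f3"]), "S4": deque()}

for rotation in range(2):                             # two full token rotations
    for station in ring:                              # token arrives at `station`
        if queues[station]:
            frame = queues[station].popleft()
            print(f"rotation {rotation}: {station} sends {frame}")
        # otherwise the station simply passes the token on
```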