Data Transmission on LAN

Summary

This document covers data transmission on Local Area Networks (LANs). It discusses topics like data link control, framing, flow control, error control, and Ethernet wiring practices. The document also includes information on connecting devices.

Full Transcript

CHAPTER 3 - DATA TRANSMISSION ON LAN

3.0 INTRODUCTION

In this chapter, the functions associated with data transmission on a LAN are discussed. The following topics are covered:

- Data Link Control: framing, line discipline, flow control and error control
- Media Access Control, Ethernet categories, Ethernet wiring practices and Power over Ethernet
- Connecting devices

3.1 DATA LINK CONTROL

3.1.1 FRAMING

The data link layer needs to pack bits into frames, so that each frame is distinguishable from another. A frame in a character-oriented protocol is shown in fig 3.1(a) and a frame in a bit-oriented protocol is shown in fig 3.1(b).

Fig. 3.1(a) Character-oriented protocol frame
Fig. 3.1(b) Bit-oriented protocol frame

The most important functions of the data link layer are line discipline, flow control and error control.

3.1.2 LINE DISCIPLINE

Whatever the system, no device in it should be allowed to transmit until it has evidence that the intended receiver is able to receive and is prepared to accept the transmission. What if the receiving device does not expect a transmission, is busy, or is out of commission? With no way to determine the status of the intended receiver, the transmitting device may waste its time sending data to a non-functioning receiver, or may interfere with signals already on the link. The line discipline functions of the data link layer oversee the establishment of links and the right of a particular device to transmit at a given time.

Line discipline can be done in two ways: enquiry/acknowledgment (ENQ/ACK) and poll/select. The first method is used in peer-to-peer communication; the second is used in primary-secondary communication.

3.1.3 FLOW CONTROL

The second aspect of data link control, following line discipline, is flow control. In most protocols, flow control is a set of procedures that tell the sender how much data it can transmit before it must wait for an acknowledgment from the receiver. Two issues are at stake:

- The flow of data must not be allowed to overwhelm the receiver. Any receiving device has a limited speed at which it can process incoming data and a limited amount of memory in which to store incoming data. The receiving device must be able to inform the sending device before those limits are reached and to request that the transmitting device send fewer frames or stop temporarily.
- Incoming data must be checked and processed before they can be used. The rate of such processing is often slower than the rate of transmission. For this reason, each receiving device has a block of memory, called a buffer, reserved for storing incoming data until they are processed. If the buffer begins to fill up, the receiver must be able to tell the sender to halt transmission until it is once again able to receive.

As frames come in, they are acknowledged, either frame by frame or several frames at a time. If a frame arrives damaged, the receiver sends an error message (a NAK frame). Flow control therefore refers to a set of procedures used to restrict the amount of data the sender can send before waiting for acknowledgment. Two methods have been developed to control the flow of data across communications links: Stop-and-Wait and Sliding Window.

For efficient data packet transmission, the transmitter must not be forced to stop sending for an unnecessarily long time. This will happen if the receiving computer sends an acknowledgment signal to stop and does not send another signal to begin transmitting when its buffer has available space or is empty. Other considerations for efficient data packet transmission include round-trip delay time, end-to-end delay and bandwidth delay.
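The Stop-and-Wait behaviour described above can be sketched in a few lines of Python. This is only an illustrative model, not part of the course text: the function name, the queue-based "channel" and the timeout value are assumptions, and a real data link driver would operate on hardware buffers rather than Python objects.

```python
import queue

TIMEOUT = 0.5  # assumed ACK wait time in seconds (illustrative only)

def stop_and_wait_send(frames, channel, ack_queue):
    """Send one frame at a time; wait for its ACK or resend on timeout."""
    seq = 0
    for payload in frames:
        while True:
            channel.put((seq, payload))            # transmit frame with a 1-bit sequence number
            try:
                acked = ack_queue.get(timeout=TIMEOUT)
            except queue.Empty:
                continue                           # no ACK in time: retransmit the same frame
            if acked == seq:                       # receiver confirmed this frame
                seq ^= 1                           # alternate 0/1 and move on to the next frame
                break
```

A matching receiver would deliver each frame whose sequence number it expects and reply with that number as the acknowledgment; the alternating bit is what lets Stop-and-Wait tolerate lost or duplicated frames.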
3.1.4 ERROR DETECTION & CORRECTION

Data can be corrupted during transmission. Some applications require that errors be detected and corrected. Let us first discuss some issues related, directly or indirectly, to error detection and correction.

In a single-bit error, only 1 bit in the data unit has changed, as shown in fig 3.2(a).

Fig 3.2(a) Single-bit error

A burst error, as shown in fig 3.2(b), means that 2 or more bits in the data unit have changed.

Fig 3.2(b) Burst errors

To detect or correct errors, we need to send extra (redundant) bits with the data, as shown in fig 3.3 below.

Fig. 3.3 Sending redundant bits with data

Different error detection methods (codings) are available. The popular methods are parity checking, block coding, CRC and checksum.

Block coding: In block coding, we divide the message into blocks, each of k bits, called data words. We add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called code words, as shown in fig 3.4 below.

Fig 3.4 Block coding (code words)

The 4B/5B coding is a good example of block coding. In this scheme, k = 4 and n = 5, so we have 2^k = 16 data words and 2^n = 32 code words. Only 16 of the 32 code words are used for message transfer; the rest are either used for other purposes or unused. The table below shows the list of data words and code words. Later, we will see how to derive a code word from a data word.

Assume that k = 2 and n = 3. The sender encodes the data word 01 as 011 and sends it to the receiver. Consider the following cases, summarized in Table 3.1:

1. The receiver receives 011. It is a valid code word, and the receiver extracts the data word 01 from it.
2. The code word is corrupted during transmission and 111 is received. This is not a valid code word and is discarded.
3. The code word is corrupted during transmission and 000 is received. This is a valid code word, so the receiver incorrectly extracts the data word 00. Two corrupted bits have made the error undetectable.

Table 3.1 Code for error detection

An error-detecting code can detect only the types of errors for which it is designed; other types of errors may remain undetected. A block diagram is shown in fig. 3.5.

Fig. 3.5 Error-detecting coding
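A minimal sketch of the detection logic in this k = 2, n = 3 example is shown below in Python. Table 3.1 is not reproduced in this transcript, so the full codeword table here is an assumption: one consistent choice (an even-parity bit appended to the 2-bit data word) that agrees with the two code words 000 and 011 quoted above.

```python
# Assumed k=2, n=3 code table: dataword + one even-parity bit.
# Consistent with the codewords 000 and 011 cited in the example above.
CODEWORDS = {"000": "00", "011": "01", "101": "10", "110": "11"}

def decode(received):
    """Return the dataword if the received codeword is valid, else None (frame discarded)."""
    return CODEWORDS.get(received)

# The three cases discussed above:
print(decode("011"))  # '01'  - valid, dataword extracted
print(decode("111"))  # None  - invalid, discarded (error detected)
print(decode("000"))  # '00'  - valid but wrong: a 2-bit error goes undetected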
To see whether the receiver can also correct an error without knowing what was actually sent, we add more redundant bits to the above example: 3 redundant bits are added to the 2-bit data word to make 5-bit code words. The table below (Table 3.2) shows the data words and code words. Assume the data word is 01. The sender creates the code word 01011. The code word is corrupted during transmission and 01001 is received. First, the receiver finds that the received code word is not in the table, which means an error has occurred. Assuming that only 1 bit is corrupted, the receiver uses the following strategy to guess the correct data word:

1. Comparing the received code word with the first code word in the table (01001 versus 00000), the receiver decides that the first code word is not the one that was sent, because the two differ in two bits.
2. By the same reasoning, the original code word cannot be the third or fourth one in the table.
3. The original code word must be the second one in the table, because this is the only one that differs from the received code word by 1 bit. The receiver replaces 01001 with 01011 and consults the table to find the data word 01.

Table 3.2 Code for error correction (adding more redundant bits)

Cyclic codes are special linear block codes with one extra property: in a cyclic code, if a code word is cyclically shifted (rotated), the result is another code word.

3.1.5 CYCLIC REDUNDANCY CHECK (CRC)

The divisor in a cyclic code is normally called the generator polynomial, or simply the generator, as shown in Table 3.3.

Table 3.3 Cyclic Redundancy Check (CRC) / polynomial

The block diagram of the encoder and decoder for the cyclic redundant bits is shown in fig 3.6 below.

Fig. 3.6 Encoder and decoder for cyclic redundant bits

An example of CRC is shown in fig 3.7. Let the four-bit data word be 1001. If the divisor is 1011 (four bits), then three 0s are appended to the data word and the result is divided by the divisor using modulo-2 arithmetic. We obtain a quotient and a remainder; the remainder is appended to the data word to form the code word. This method is widely used in practice.

Fig. 3.7 Example of Cyclic Redundancy Check
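The long division in fig 3.7 can be reproduced with a short Python sketch using XOR (modulo-2) arithmetic. The function below is only an illustration of the textbook procedure; the bit-string representation and the function name are choices made for this note, not anything defined in the course text.

```python
def crc_remainder(dataword, divisor):
    """Modulo-2 division: append len(divisor)-1 zeros and return the remainder bits."""
    r = len(divisor) - 1
    bits = list(dataword + "0" * r)          # augmented dataword
    for i in range(len(dataword)):
        if bits[i] == "1":                   # only divide when the leading bit is 1
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))   # XOR = modulo-2 subtraction
    return "".join(bits[-r:])                # the last r bits are the remainder (the CRC)

# Worked example from fig 3.7: dataword 1001, divisor 1011
rem = crc_remainder("1001", "1011")
print(rem)               # '110'
print("1001" + rem)      # codeword placed on the line: '1001110'
```

At the receiver, dividing the received code word 1001110 by the same divisor gives a zero remainder when no error has occurred.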
3.1.6 CHECKSUM

The last error detection method discussed here is the checksum, illustrated in fig 3.8. The checksum is used in the Internet by several protocols, although not at the data link layer. However, we briefly discuss it here to complete the discussion of error checking.

Suppose our data is a list of five 4-bit numbers that we want to send to a destination. In addition to sending these numbers, we send the sum of the numbers. For example, if the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36), where 36 is the sum of the original numbers. The receiver adds the five numbers and compares the result with the sum. If the two are the same, the receiver assumes no error, accepts the five numbers, and discards the sum. Otherwise, there is an error somewhere and the data are not accepted.

We can make the job of the receiver easier if we send the negative (complement) of the sum, called the checksum. In this case, we send (7, 11, 12, 0, 6, −36). The receiver adds all the numbers received, including the checksum. If the result is 0, it assumes no error; otherwise, there is an error.

How can we represent the number 21 in one's complement arithmetic using only four bits? The number 21 in binary is 10101 (it needs five bits). We wrap the leftmost bit around and add it to the four rightmost bits: 0101 + 1 = 0110, or 6.

How can we represent the number −6 in one's complement arithmetic using only four bits? In one's complement arithmetic, the negative or complement of a number is found by inverting all its bits. Positive 6 is 0110; negative 6 is 1001. Considered as an unsigned number this is 9; in other words, the complement of 6 is 9. Another way to find the complement of a number in one's complement arithmetic is to subtract the number from 2^n − 1 (16 − 1 in this case).

Fig. 3.8 Example of checksum

Sender site:
- The message is divided into 16-bit words.
- The value of the checksum word is set to 0.
- All words, including the checksum, are added using one's complement addition.
- The sum is complemented and becomes the checksum.
- The checksum is sent with the data.

Receiver site:
- The message (including the checksum) is divided into 16-bit words.
- All words are added using one's complement addition.
- The sum is complemented and becomes the new checksum.
- If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
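The sender and receiver procedures above translate almost directly into code. The Python sketch below is an illustrative rendering of the 16-bit one's complement checksum described in the text; the function name and the example words are assumptions, not values taken from fig 3.8.

```python
def ones_complement_checksum(words):
    """16-bit one's complement sum of the given words, then complemented."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any carry back into the low 16 bits
    return ~total & 0xFFFF                          # complement of the sum = checksum

# Sender: append the checksum to the message words (example words are arbitrary).
message = [0x4500, 0x0073, 0x0000, 0x4000]
cks = ones_complement_checksum(message)

# Receiver: the one's complement sum of all words including the checksum is 0xFFFF,
# whose complement is 0, so the message is accepted.
assert ones_complement_checksum(message + [cks]) == 0
```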
3.1.7 ERROR CONTROL

Three protocols in this section use error control:
- Stop-and-Wait Automatic Repeat Request (ARQ)
- Go-Back-N Automatic Repeat Request
- Selective Repeat Automatic Repeat Request

3.2 LAN PROTOCOLS

For data transmission on a LAN, protocols are used to form frames. Basically, two types of protocols are used:
- Character-oriented protocols
- Bit-oriented protocols

Character-oriented protocols: In character-oriented protocols, the data to be carried are 8-bit characters from a coding system such as ASCII. The header, which normally carries the source and destination addresses and other control information, and the trailer, which carries error detection or error correction redundant bits, are also multiples of 8 bits. To separate one frame from the next, an 8-bit flag is added at the beginning and end of the frame. Character-oriented protocols are inefficient, because a whole character is used to convey each control meaning; as the number of meanings increases, so does the overhead.

Bit-oriented protocols: In bit-oriented protocols, each bit has significance. The position and value of each bit in the data stream determines its function, so a single 8-bit field can take on 256 different meanings. This reduces the overhead needed to convey control information and increases the efficiency of the protocol. Examples of these protocols are:
- X.25 - CCITT standard for packet data transmission
- HDLC - High-level Data Link Control (adopted by ISO in the 1970s)
- SDLC - Synchronous Data Link Control (developed by IBM)

Links between sender and receiver can be half duplex or full duplex. Information can be sent across the network in two different ways: travelling different routes to the receiver (datagram), or travelling the same route (virtual circuit).

Information is packaged into an envelope called a frame. Each frame has a similar format: a header containing routing and control information, a body containing the actual data to be transmitted to the destination, and a tail containing checksum data. Frames are responsible for transporting the data to the next point.

Consider data that is to be sent from a source to a destination through several intermediate points (called stations). The data is placed into a frame and sent to the next station, where the frame is checked for validity and, if valid, the data extracted. The data is then repackaged into a new frame and sent by that station to the next station, and the process repeats until the data arrives at the destination. When a station transmits a frame, it keeps a copy of the frame contents until the frame is acknowledged as correctly received by the next station. When a station receives a frame, it is temporarily stored in a buffer and checked for errors. If the frame has errors, the station asks the previous station to resend the frame. Frames that are received without errors are also acknowledged, at which point the sending station can erase its copy of the frame. A receiving station has a limited amount of buffer space to store incoming frames; when it runs out of buffer space, it signals other stations that it cannot receive any more frames.

Data is placed into frames for sending across a transmission link. The frame allows intelligent control of the transmission link, as well as supporting multiple stations, error recovery, intelligent (adaptive) routing and other important functions. For the purposes of sending data on a link, there are two types of stations:
- Primary station (issues commands)
- Secondary station (responds to commands)

Primary station: The primary station is responsible for controlling the data link, initiating error recovery procedures, and handling the flow of data transmitted to and from the primary. In a conversation there is one primary and one or more secondary stations.

Secondary station: A secondary station responds to requests from a primary station but may, under certain modes of operation, initiate transmission of its own. An example is when it runs out of buffer space, at which point it sends RNR (receiver not ready) to the primary station. When the buffer space is cleared, it sends RR (receiver ready) to the primary station, informing the primary that it is now ready to receive frames again.

Because frames are numbered, it is possible for a primary station to transmit a number of frames without receiving an acknowledgement for each frame. The secondary can store the incoming frames and reply using a supervisory frame with the sequence number bits in the control field set so as to acknowledge a group of received frames. If the secondary runs out of buffer space to store incoming Information frames, it can transmit a supervisory frame informing the primary station of its status. The primary station then keeps its Information frames and waits until the secondary is again able to process them. When a secondary cannot process Information frames, it must still be able to process incoming supervisory and unnumbered frames (because of status requests). At any one time, a number of Information frames can be unacknowledged by a secondary station; this number is called the sliding window value, which defaults to 2 but can be negotiated when a call is first established.
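The sliding-window behaviour just described (several numbered frames outstanding, acknowledged as a group) can be sketched as follows. This simplified, Go-Back-N-style model is an illustration only; the class name, the way acknowledgments arrive and the list used as a "link" are assumptions rather than anything specified in the HDLC-style procedures above.

```python
from collections import deque

class SlidingWindowSender:
    """Keep up to `window` numbered frames outstanding until they are acknowledged."""

    def __init__(self, window=2, modulo=8):
        self.window = window        # maximum unacknowledged frames (the text's default is 2)
        self.modulo = modulo        # sequence numbers wrap, e.g. 3-bit numbering
        self.next_seq = 0
        self.outstanding = deque()  # copies of sent frames, kept until acknowledged

    def can_send(self):
        return len(self.outstanding) < self.window

    def send(self, payload, link):
        if not self.can_send():
            raise RuntimeError("window full: wait for an acknowledgment (RR)")
        frame = (self.next_seq, payload)
        self.outstanding.append(frame)   # keep a copy in case retransmission is needed
        link.append(frame)               # "transmit" the frame
        self.next_seq = (self.next_seq + 1) % self.modulo

    def acknowledge(self, n_r):
        """A supervisory frame with N(R) = n_r acknowledges every frame numbered before n_r."""
        while self.outstanding and self.outstanding[0][0] != n_r:
            self.outstanding.popleft()   # acknowledged: the stored copy can be erased

link, sender = [], SlidingWindowSender(window=2)
sender.send("frame 0 data", link)
sender.send("frame 1 data", link)
print(sender.can_send())   # False: the window of 2 is full, the sender must wait
sender.acknowledge(2)      # RR with N(R) = 2 acknowledges frames 0 and 1
print(sender.can_send())   # True: the window has slid forward
```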
3.3 MEDIA ACCESS

The classification of media access (multiple-access) protocols is shown in fig 3.9.

Fig. 3.9 Classification of multiple-access protocols

In random access methods, no station is superior to another and none is assigned control over another. No station permits, or denies permission to, another station to send. At each instance, a station that has data to send uses a procedure defined by the protocol to decide whether or not to send.

In controlled access, the stations consult one another to find which station has the right to send. A station cannot send unless it has been authorized by other stations.

Channelization is a multiple-access method in which the available bandwidth of a link is shared in time, in frequency, or through code, between different stations.

3.3.1 MAC - MEDIA ACCESS CONTROL

The IEEE 802.3 Media Access Control layer, shown in fig 3.10, is physically located in the firmware (ROM) of the Network Interface Card. It is the link between the Data Link Layer and the Physical Layer of the OSI model and logically resides in the lower portion of the Data Link Layer. There is only one MAC layer for all IEEE 802.3 versions: 802.3, 802.3a, 802.3b, 802.3i, etc.

Fig. 3.10 IEEE 802.3 MAC layer versions (Data Link Layer: 802.2 LLC above the 802.3 MAC using CSMA/CD; Physical Layer variants: 802.3 10Base5 thick coax, 802.3a 10Base2 thin coax, 802.3b 10Broad36 broadband, 802.3e 1Base5 StarLAN, 802.3i 10BaseT twisted pair)

The most popular IEEE 802.3 Media Access Control protocol is the Ethernet protocol, which uses CSMA/CD (Carrier Sense Multiple Access/Collision Detect) for bus arbitration. The MAC layer is concerned with the order of the bits and with converting the datagram from the Network Layer into packets/frames. Fig 3.11 shows the MAC layer / Ethernet frame format:

Preamble | SFD | DA | SA | Length | Information Field (Data) / Pad | FCS

Fig 3.11 MAC layer / Ethernet frame format

Preamble: The Preamble is used to synchronize the receiving station's clock. It consists of 7 bytes of 10101010.

Start Frame Delimiter (SFD): The Start Frame Delimiter indicates the start of the frame. It consists of 1 byte of 10101011, an identical bit pattern to the preamble except for the last bit.

Destination Address (DA): Indicates the destination (receiving station) of the frame. It is 6 octets (48 bits) long. The DA field, shown in fig 3.12, consists of:

I/G | U/L | 46 address bits

Fig 3.12 DA field

I/G: Stands for Individual/Group. It indicates whether the destination is an individual station or a group (multicast/broadcast). It is one bit long: 0 = Individual, 1 = Group. A group address can be directed to everyone (broadcast) or to a specific group (multicast). For a broadcast to all stations, the Destination Address is FFFFFFFFFFFF h (h = hexadecimal notation). To multicast to a specific group, the Network Administrator must assign unique addresses to each station.

U/L: Stands for Universal/Local. It allows for unique addresses and indicates whether a locally administered naming convention is used (administered by the Network Administrator; not recommended, as it is an enormous amount of work) or the burnt-in ROM address is used (recommended).

46-bit address: The address of the destination NIC, either burnt into the firmware (ROM) of the card or the unique name assigned to the card during its initialization by the Network Administrator.

Source Address (SA): The Source Address indicates the source or transmitting station of the frame. It is identical in format to the Destination Address but always has the I/G bit = 0 (Individual).

Length (L): The Length field indicates the length of the Information Field and allows for variable-length frames. The minimum Information Field size is 46 octets and the maximum is 1500 octets; when the Information Field is shorter than 46 octets, the Pad field is used. Because the 802.3 MAC frame has a Length field, there is no end delimiter: the length of the field is known, and the receiving station simply counts the octets.

Information Field (Data): The Information Field contains the data from the next upper layer, the Logical Link Control layer, and is commonly referred to as the LLC data. The minimum size is 46 octets and the maximum is 1500 octets.

Pad: The Pad adds octets to bring the Information Field up to the minimum size of 46 octets if it is shorter than the minimum.

Frame Check Sequence (FCS): The Frame Check Sequence is used for error checking at the bit level. It is a 32-bit CRC (Cyclic Redundancy Check) and consists of 4 octets (4 x 8 = 32 bits). The FCS is calculated over the contents of the DA, SA, Length, Data and Pad fields.

The total breakup of the MAC frame length is shown in Table 3.4:

Field                   Min size (octets)   Max size (octets)
Preamble                7                   7
Start Frame Delimiter   1                   1
Destination Address     6                   6
Source Address          6                   6
Length                  2                   2
Information Field       46                  1500
Frame Check Sequence    4                   4
TOTAL                   72                  1526

Table 3.4 MAC frame length breakup

The MAC / Ethernet frame structure is shown in fig 3.13.

Fig 3.13 MAC / Ethernet frame structure
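As a quick illustration of the field layout in Table 3.4, the sketch below unpacks the DA, SA and Length fields of a raw 802.3 frame (preamble and SFD are normally stripped by the hardware, so they are not included). This is an assumed, simplified parser written for this note, not code from the course text.

```python
import struct

def parse_802_3_header(frame):
    """Split an 802.3 frame (without preamble/SFD) into DA, SA, length and payload."""
    da, sa, length = struct.unpack("!6s6sH", frame[:14])   # 6 + 6 + 2 octet header
    is_group = bool(da[0] & 0x01)       # I/G bit: least significant bit of the first DA octet
    is_local = bool(da[0] & 0x02)       # U/L bit: locally administered address if set
    payload = frame[14:14 + length]     # Information Field; pad and FCS would follow
    return {
        "destination": da.hex(":"),
        "source": sa.hex(":"),
        "group_address": is_group,
        "locally_administered": is_local,
        "length": length,
        "llc_data": payload,
    }

# Example: broadcast DA (all FF), an arbitrary SA, length 46 (minimum Information Field)
frame = bytes.fromhex("ffffffffffff" + "02004c4f4f50" + "002e") + bytes(46) + bytes(4)
print(parse_802_3_header(frame)["group_address"])   # True: the I/G bit is set in FF:FF:...
```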
3.3.2 CSMA/CD (CARRIER SENSE MULTIPLE ACCESS / COLLISION DETECT)

Bus arbitration is performed on all versions of Ethernet using the CSMA/CD (Carrier Sense Multiple Access/Collision Detect) protocol. Bus arbitration is another way of saying how to control who is allowed to talk on the medium and when; put simply, it determines whose turn it is to talk.

In CSMA/CD, all stations on the same segment of cable sense for the carrier signal. If the carrier is sensed, the segment is treated as NOT free for communication; in the absence of carrier, the segment is treated as free. This is the Carrier Sense portion of CSMA/CD: the sender keeps trying to access the medium by sensing the presence or absence of carrier. All stations share the same segment of cable and can talk on it, similar to a party line; this is the Multiple Access portion of CSMA/CD. If two stations attempt to talk at the same time, a collision is detected and both stations back off for a random amount of time before trying again.

3.3.3 CSMA/CA (CARRIER SENSE MULTIPLE ACCESS / COLLISION AVOIDANCE)

CSMA/CA stands for Carrier Sense Multiple Access with Collision Avoidance. In CSMA/CA, a station wishing to transmit first listens to the channel for a predetermined amount of time to check for any activity. If the channel is sensed idle, the station is permitted to transmit; if the channel is sensed busy, the station must defer its transmission. This much is the essence of both CSMA/CA and CSMA/CD.

In CSMA/CA (as used by LocalTalk), once the channel is clear a station sends a signal telling all other stations not to transmit, and then sends its packet. Collision avoidance improves the performance of CSMA by being less "greedy" on the channel: if the channel is sensed busy before transmission, the transmission is deferred for a random interval, which reduces the probability of collisions.

CSMA/CA is used where CSMA/CD cannot be implemented because of the nature of the channel. It is used in 802.11-based wireless LANs, where a station cannot listen while sending and collision detection is therefore not possible.
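The sense-transmit-backoff cycle of CSMA/CD can be modelled in a few lines. The sketch below is a simplified illustration under assumed names: the `medium` object and its methods are inventions for this example, while the truncated binary exponential backoff (waiting a random number of slot times that grows with each collision, giving up after 16 attempts) follows the scheme Ethernet MACs actually use.

```python
import random

SLOT_TIME = 1          # one contention slot (abstract time unit in this model)
MAX_ATTEMPTS = 16      # 802.3 MACs give up after 16 collisions on the same frame

def csma_cd_send(medium, frame):
    """Carrier sense, transmit, and back off exponentially on collision."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium.carrier_sensed():            # Carrier Sense: wait until the cable is idle
            medium.wait(SLOT_TIME)
        collided = medium.transmit(frame)         # Multiple Access: everyone shares the cable
        if not collided:                          # Collision Detect result
            return True
        k = min(attempt, 10)                      # truncated binary exponential backoff
        medium.wait(random.randint(0, 2**k - 1) * SLOT_TIME)
    return False                                  # excessive collisions: give up
```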
3.4 ETHERNET

IEEE 802.3 supports a LAN standard originally developed by Xerox and later extended by a joint venture between Digital Equipment Corporation, Intel Corporation and Xerox. This was called Ethernet. The evolution of Ethernet is shown in fig 3.14.

Fig 3.14 Ethernet evolution

IEEE divides the standard Ethernet implementation into four different standards, as shown in fig 3.15: 10Base5, 10Base2, 10Base-T and 10Base-F. The first number (10) indicates the data rate in Mbps. The last number or letter (5, 2, T or F) indicates the maximum cable length or the type of cable. However, the maximum cable length restriction can be relaxed by using networking devices such as repeaters or bridges.

Fig 3.15 Implementation of Ethernet

The medium of wired Ethernet is coaxial cable in 10Base5 and 10Base2, UTP in 10Base-T, and optical fibre in 10Base-F.

Electrical specification for Ethernet:
- Signalling: the baseband systems use Manchester digital encoding.
- Data rate: Ethernet LANs can support data rates between 1 and 100 Mbps.
- Frame format: IEEE 802.3 specifies one type of frame, shown in fig 3.13, containing seven fields: preamble, SFD, DA, SA, length/type of PDU, the 802.2 frame, and the CRC.

3.4.1 IEEE 802.3 ETHERNET MEDIA VARIANTS

Table 3.5 lists the principal IEEE 802.3 media variants:

IEEE Std      Name        Cabling              Transfer rate          Methodology   Distance limit
IEEE 802.3    10Base5     Thick coax           10 Mbps                Baseband      500 m
IEEE 802.3a   10Base2     Thin coax            10 Mbps                Baseband      185 m
IEEE 802.3b   10Broad36   Broadband coax       10 Mbps                Broadband     3600 m
IEEE 802.3e   1Base5      StarLAN              1 Mbps                 Baseband      500 m
IEEE 802.3i   10BaseT     Cat5 twisted pair    10 Mbps                Baseband      100 m
IEEE 802.3u   100BaseT    Cat5 twisted pair    100 Mbps full duplex   Baseband      100 m
IEEE 802.3z   1GBaseT     Cat5e twisted pair   1 Gbps full duplex     Baseband      100 m

Table 3.5 Ethernet media variants

Baseband: only a single stream of data is transmitted, e.g. a television station broadcasting one channel from its transmitter.

Broadband: multiple streams of data are transmitted, e.g. a cable company broadcasting many television channels on its cable system.

IEEE 802.3 - 10Base5 (thick coax or "thicknet") was the original Ethernet configuration. It has not been used since the early 1990s, when it was replaced by thin coax.

IEEE 802.3a - 10Base2 (thin coax or "thinnet") was commonly used for new installations in the 1990s and was replaced by 10BaseT in the mid 1990s.

IEEE 802.3b - 10Broad36 is rarely used; it combined analog and digital signals, broadband meaning that a mixture of signals can be sent on the same medium. 10Broad36 installations are rarely, if ever, encountered in practice.

IEEE 802.3e - 1Base5 (StarLAN) was a slow 1 Mbps standard used briefly in the 1980s.

IEEE 802.3i - 10BaseT was commonly used to connect workstations to network hubs from the mid 1990s until the early 2000s. The network uses Cat5 (twisted pair) cabling to connect to other hubs.

IEEE 802.3u - 100BaseT (Fast Ethernet) is commonly used to connect workstations to network hubs or switches and became common in the early 2000s. The network uses Cat5 (twisted pair) cabling to connect to other hubs or switches.

IEEE 802.3z - 1000BaseT or 1GBaseT (Gigabit Ethernet) is commonly used to connect servers to high-speed backbone networks over Cat5e (twisted pair) cabling. The standard defines auto-negotiation of speed between 10, 100 and 1000 Mbit/s, so the speed falls back to the maximum supported by both ends, ensuring inter-working with existing installations. Gigabit Ethernet over twisted pair uses all 4 pairs (8 conductors); the transmission scheme is radically different (a PAM-5 amplitude modulation scheme is used) and each pair carries send and receive simultaneously.
3.4.2 CABLES USED FOR ETHERNET AND THEIR WIRING PRACTICES

Cables used for Ethernet on a Local Area Network (LAN) are generically called twisted pair cables. There are two types: UTP (Unshielded Twisted Pair) and STP (Shielded Twisted Pair). UTP is predominantly used indoors, whereas STP is used outdoors and in special areas. UTP cables are identified with a category rating.

UTP comes in two forms, SOLID or STRANDED. Solid means that each internal conductor is made up of a single (solid) wire; stranded means that each conductor is made up of multiple smaller wires. The only obvious benefit of stranded cable (which is typically more expensive) is its smaller bend radius: the cable can be taken round tighter corners with lower loss, and it tolerates frequent plugging and unplugging. All other things being equal, the performance of both types of cable is the same. In general, solid cable is used for backbone wiring and stranded cable for PC-to-wall-plug patch leads. Refer to fig 3.16 for UTP cables.

- Used for 10BaseT and 100BaseT networks.
- Available in different categories, from 1 to 6.
- Total length of a segment is 100 meters.
- Commonly implemented in a star topology.
- Cat 3 cable supports 10 Mbps.
- Cat 5 cable supports 100 Mbps, and Cat 6 above 100 Mbps.

Fig. 3.16 UTP cable (4 pair)

Fig. 3.17 Use of crossed and straight cables (straight cables shown in blue, crossed cables in red)

As shown in fig 3.17, straight cables are used for interconnecting dissimilar devices and crossed cables are used between similar devices. To avoid the need for crossed cables, many vendors provide UPLINK ports on hubs or switches; these are specially designed to allow the use of a straight cable when connecting hubs or switches back to back.

Category 5(e) UTP color coding: Fig. 3.18(a) shows the normal color coding for category 5 cables (4 pair) based on the two standards, 568A and 568B, supported by TIA/EIA.

Fig. 3.18(a) TIA/EIA 568 standard for straight wiring

The crossed cable is described by the wiring at both ends (male RJ45 connectors). Fig. 3.18(b) shows crossing of all 4 pairs; crossing of pairs 4, 5 and 7, 8 is optional.

Fig. 3.18(b) TIA/EIA 568 standard for crossed wiring
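Since fig 3.18 is not reproduced in this transcript, the sketch below records the published TIA/EIA-568A and 568B pin-to-color assignments and derives a crossover mapping from them (one end wired to 568A, the other to 568B). The dictionaries and the helper function are written for this note as a convenience summary, not as a substitute for the standard.

```python
# TIA/EIA-568 pin-to-color assignments for an RJ45 plug (pins 1-8).
T568A = {1: "white/green",  2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",   6: "orange", 7: "white/brown",  8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green",  4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown",  8: "brown"}

def crossover_map():
    """Pin mapping of a crossover cable: one end 568A, the other 568B.
    Only pins 1, 2, 3 and 6 actually swap; pairs 4-5 and 7-8 stay straight,
    which is why the text calls crossing those pairs optional."""
    return {a_pin: next(b for b, color in T568B.items() if color == T568A[a_pin])
            for a_pin in T568A}

print(crossover_map())   # {1: 3, 2: 6, 3: 1, 4: 4, 5: 5, 6: 2, 7: 7, 8: 8}
```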
3.5 PoE (POWER OVER ETHERNET)

Power over Ethernet is a technology that allows a single cable to provide both the data connection and electrical power to a device, as shown in fig 3.19. It is not necessary to use two individual lines for data and power supply; one Ethernet line is sufficient. The technology is applicable to a wide range of network products such as access points, routers, IP cameras, modems, switches and embedded computers.

Power over Ethernet is defined by standard IEEE 802.3af (15.4 W), extended by the newer standard IEEE 802.3at (25.5 W). Power over Ethernet products using these standards consist of two individual active pieces, an injector and a splitter; each active piece includes the electrical circuit that makes the solution work. Supply is guaranteed up to 100 m / 328 ft under these standards. The PoE power classes are shown in Table 3.6.

Table 3.6 PoE classes

Power over Ethernet is thus a simple way of connecting the cables so that data and power supply travel along the same Ethernet cable at the same time. An Ethernet cable contains 8 wires: 4 wires (1, 2, 3, 6) are used for data transmission and the remaining 4 (4, 5, 7, 8) are used for supplying power.

Fig 3.19 Power over Ethernet (PoE)

3.6 CONNECTING DEVICES

Connecting devices are broadly classified into categories based on the layer at which they operate in a network: HUBs, SWITCHES, ROUTERS and GATEWAYS. (Routers and gateways are discussed in chapter 4.)

3.6.1 HUBs

Hubs, also called wiring concentrators, provide a central attachment point for network cabling; see fig 3.20.

Fig 3.20 HUB

Hubs come in three types:
- Passive
- Active
- Switching

The following sections describe each of these types in more detail.

Passive hubs: Passive hubs do not contain any electronic components and do not process the data signal in any way. The only purpose of a passive hub is to combine the signals from several network cable segments. All devices attached to a passive hub receive all the packets that pass through the hub. Because the hub does not clean up or amplify the signals (in fact, the hub absorbs a small part of the signal), the distance between a computer and the hub can be no more than half the maximum permissible distance between two computers on the network. For example, if the network design limits the distance between two computers to 200 meters, the maximum distance between a computer and the hub is 100 meters. As you might guess, the limited functionality of passive hubs makes them inexpensive and easy to configure.

Active hubs: Active hubs incorporate electronic components that can amplify and clean up the electronic signals that flow between devices on the network. This process of cleaning up the signals is called signal regeneration. Signal regeneration has the following benefits:
- The network is more robust (less sensitive to errors).
- Distances between devices can be increased.
These advantages generally outweigh the fact that active hubs cost considerably more than passive hubs. Because active hubs function in part as repeaters (devices that amplify and regenerate network signals), they are occasionally called multiport repeaters.

Intelligent hubs: Intelligent hubs are enhanced active hubs. Several functions can add intelligence to a hub, for example hub management: hubs now support network management protocols that enable the hub to send packets to a central network console. These protocols also enable the console to control the hub; for example, a network administrator can order the hub to shut down a connection that is generating network errors.

Switching hubs: The latest development in hubs is the switching hub, which includes circuitry that very quickly routes signals between ports on the hub. Rather than repeating a packet to every port, a switching hub repeats a packet only to the port that connects to the destination computer for that packet. Many switching hubs can switch packets to the fastest of several alternative paths. Switching hubs are replacing bridges and routers on many networks; in essence, a switching hub acts like a very fast bridge, which leads to the switches described in the next section.

3.6.2 SWITCHES

A network switch is a computer networking device that connects network segments. The network switch plays an integral part in most Ethernet LANs. Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end).
A device that operates simultaneously at more than one of these layers is called a multilayer switch. Switches provide many features not offered by older devices such as hubs and bridges. In particular, switches provide the following benefits:

- A switch port connected to a single device micro-segments the LAN, providing dedicated bandwidth to that device.
- Input and output buffers, together with the switching matrix on a high-speed backplane, enable switching at Ethernet speeds and support micro-segmentation (refer to figure 3.21).
- Switches allow multiple simultaneous conversations between devices on different ports.
- Switch ports connected to a single device support full duplex, in effect doubling the amount of bandwidth available to the device.
- Switches support rate adaptation, which means that devices using different Ethernet speeds can communicate through the switch; hubs cannot do this.

Refer to figure 3.21 for the difference between a HUB and a SWITCH.

Fig 3.21 HUB and SWITCH

A switch creates a separate collision domain per switch port. If you have four computers A/B/C/D on four switch ports, then A and B can transfer data between themselves at the same time as C and D, and the two conversations never interfere. With a hub they would all have to share the bandwidth, run in half duplex, and there would be collisions and retransmissions.

Switches use Layer 2 logic to examine the Ethernet data-link header and choose how to process frames. In particular, switches decide whether to forward or filter frames, learn MAC addresses, and use STP (Spanning Tree Protocol) to avoid loops, in the following manner (a short sketch of these forwarding and learning steps appears at the end of this subsection).

Steps for data forwarding:

Step 1: Switches forward frames based on the destination address:
a) If the destination address is a broadcast, multicast or unknown destination unicast address, the switch floods the frame.
b) If the destination address is a known unicast address (a unicast address found in the MAC table):
   i) If the outgoing interface listed in the MAC address table is different from the interface on which the frame was received, the switch forwards the frame to the outgoing interface.
   ii) If the outgoing interface is the same as the interface on which the frame was received, the switch filters the frame, meaning that it simply ignores the frame and does not forward it.

Step 2: Switches use the following logic to learn MAC address table entries:
a) For each received frame, examine the source MAC address and note the interface from which the frame was received.
b) If they are not already in the table, add the address and interface, setting the inactivity timer to 0.
c) If the address is already in the table, reset the inactivity timer for the entry to 0.

Step 3: Switches use STP to prevent loops by causing some interfaces to block, meaning that they do not send or receive frames. Once a switch has learned the topology through a spanning tree protocol, it forwards data link layer frames using a layer 2 forwarding method.

Switch ports almost always operate in full duplex by default, unless there is a requirement for interoperability with devices that are strictly half duplex. Switches use micro-segmentation to prevent collisions among devices connected to Ethernets: micro-segmentation provides dedicated bandwidth on a point-to-point connection to every computer, which can therefore run in full duplex with no collisions.
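The sketch below models the flood/forward/filter and address-learning steps described above. It is an illustrative model only: the class name and port representation are assumptions, and ageing timers, multicast handling and STP are omitted for brevity.

```python
class LearningSwitch:
    """Toy model of Layer 2 forwarding: learn source addresses, then forward, filter or flood."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                     # MAC address -> port it was learned on

    def receive(self, in_port, src_mac, dst_mac):
        # Step 2: learn (or refresh) the source address against the incoming port.
        self.mac_table[src_mac] = in_port

        # Step 1: forwarding decision based on the destination address.
        if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in self.mac_table:
            return self.ports - {in_port}       # flood broadcast/unknown (multicast omitted)
        out_port = self.mac_table[dst_mac]
        if out_port == in_port:
            return set()                        # filter: destination is on the same segment
        return {out_port}                       # forward to the single known port

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))  # {2, 3, 4}: flooded, dst unknown
print(sw.receive(2, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # {1}: forwarded, dst was learned
```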
Methods of forwarding that can be used on a switch (a small sketch comparing these modes appears at the end of this subsection):

- Store and forward: the switch buffers the whole frame and, typically, verifies its checksum before forwarding it.
- Cut through: the switch reads only up to the frame's hardware (destination) address before starting to forward it. There is no error checking with this method.
- Fragment free: a method that attempts to retain the benefits of both store and forward and cut through. Fragment free checks the first 64 bytes of the frame, where the addressing information is stored. According to the Ethernet specification, collisions should be detected during the first 64 bytes of the frame, so frames that are in error because of a collision are not forwarded. Error checking of the actual data in the packet is left to the end device, typically a router.
- Adaptive switching: a method of automatically switching between the other three modes.

TYPES OF SWITCHES

i. Unmanaged switches: these switches have no configuration interface or options. They are plug and play and are typically the least expensive switches.

ii. Managed switches: these switches have one or more methods to modify the operation of the switch. Common management methods include a serial console or command line interface accessed via telnet or Secure Shell, an embedded Simple Network Management Protocol (SNMP) agent allowing management from a remote console or management station, or a web interface for management from a web browser.

Smart (or intelligent) switches: these are managed switches with a limited set of management features. Likewise called "web-managed" switches, they provide a web interface (and usually no CLI access) and allow configuration of basic settings such as VLANs, port speed and duplex.

Enterprise managed (or fully managed) switches: these have a full set of management features, including a command line interface, SNMP agent and web interface. They may have additional features to manipulate configurations, such as the ability to display, modify, back up and restore configurations.

iii. Layer-3 switches: Layer-3 switches are needed because LAN traffic is no longer purely local; with converged networks it crosses into WANs. LAN speeds are much higher, and L-3 switches can provide routing at switching speeds; a conventional router fast enough for this role would be very expensive. Features of an L-3 switch:

- Operates at both layer 2 (data link layer) and layer 3 (network layer).
- Can perform both MAC switching and IP routing.
- A combination of switch and router, but much faster and easier to configure than a router.
- Performs the multi-port VLAN and data pipelining functions of a standard L-2 switch.
- Can perform basic routing functions between virtual LANs.
- Provides routing at switching speeds.

The difference between an L-2 and an L-3 switch is shown in fig 3.22.

Fig 3.22 Difference between L-2 and L-3 switch
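One way to see the trade-off between the forwarding modes listed above is to look at how much of a frame must arrive before forwarding can begin. The figures in this sketch follow the 802.3 frame layout given earlier (6-octet destination address, 64-octet minimum frame); the function itself is an illustration written for this note, not part of the course material.

```python
def bytes_before_forwarding(mode, frame_length):
    """How many octets of a frame the switch must receive before it can start forwarding."""
    if mode == "cut-through":
        return 6                      # only the destination address is needed
    if mode == "fragment-free":
        return min(64, frame_length)  # wait out the collision window (minimum frame size)
    if mode == "store-and-forward":
        return frame_length           # buffer the whole frame so the FCS can be verified
    raise ValueError("unknown forwarding mode")

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_before_forwarding(mode, frame_length=1518))
# cut-through 6, fragment-free 64, store-and-forward 1518
```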
3.7 VLAN (VIRTUAL LAN)

A virtual LAN, commonly known as a VLAN, is a group of hosts with a common set of requirements that communicate as if they were attached to the same broadcast domain, regardless of their physical location. A VLAN has the same attributes as a physical LAN, but it allows end stations to be grouped together even if they are not located on the same network switch. Network reconfiguration can then be done through software instead of physically relocating devices.

IEEE 802.1Q is the networking standard that supports Virtual LANs (VLANs) on an Ethernet network. The standard defines a system of VLAN tagging for Ethernet frames and the accompanying procedures to be used by switches in handling such frames.

Portions of the network which are VLAN-aware (i.e., IEEE 802.1Q conformant) can include VLAN tags. Traffic on a VLAN-unaware (i.e., IEEE 802.1D conformant) portion of the network will not contain VLAN tags. When a frame enters the VLAN-aware portion of the network, a tag is added to represent the VLAN membership of the frame's port, or of the port/protocol combination, depending on whether port-based or port-and-protocol-based VLAN classification is being used. Each frame must be distinguishable as being within exactly one VLAN. A frame in the VLAN-aware portion of the network that does not contain a VLAN tag is assumed to be flowing on the native (or default) VLAN.
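For reference, an 802.1Q tag is 4 octets inserted between the source address and the length/type field: a 16-bit TPID of 0x8100 followed by a 16-bit TCI holding a 3-bit priority, a 1-bit drop-eligible flag and a 12-bit VLAN ID. The sketch below builds such a tag; it is written for this note, with the field packing taken from the published 802.1Q layout rather than from the course text.

```python
import struct

def dot1q_tag(vlan_id, priority=0, dei=0):
    """Build the 4-octet 802.1Q tag: TPID 0x8100 + TCI (PCP | DEI | 12-bit VID)."""
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be in the range 1..4094")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# Tag for VLAN 10 at default priority; it would be inserted after the 12 address octets.
print(dot1q_tag(10).hex())   # '8100000a'
```

Inserting this tag ahead of the length/type field is what lets a frame carry its VLAN membership across the trunk links described in the next subsection.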
VLANs are created to provide the segmentation services traditionally provided by routers in LAN configurations. VLANs address issues such as scalability, security and network management. Routers in VLAN topologies provide broadcast filtering, security, address summarization and traffic flow management. By definition, switches may not bridge IP traffic between VLANs, as this would violate the integrity of the VLAN broadcast domain.

This is also useful when someone wants to create multiple Layer 3 networks on the same Layer 2 switch. For example, if a DHCP server (which broadcasts its presence) were plugged into a switch, it would serve any host on that switch configured to use it. By using VLANs, you can easily split the network up so that some hosts will not use that server and will obtain link-local addresses instead.

Virtual LANs are essentially Layer 2 constructs, whereas IP subnets are Layer 3 constructs. In an environment employing VLANs, a one-to-one relationship often exists between VLANs and IP subnets, although it is possible to have multiple subnets on one VLAN or one subnet spread across multiple VLANs. Virtual LANs and IP subnets thus provide independent Layer 2 and Layer 3 constructs that map to one another, and this correspondence is useful during the network design process. By using VLANs, one can control traffic patterns and react quickly to relocations. VLANs provide the flexibility to adapt to changes in network requirements and allow for simplified administration.

The creation of VLANs is shown in figures 3.23 and 3.24.

Fig 3.23 Virtual LAN

By default, all the ports of a switch are in VLAN 1; hence VLAN 1 is known as the administrative (or management) VLAN. VLANs can be created from 2 to 1001.

3.7.1 VLANs CREATION

By creating VLANs, an administrator can limit the number of users in each broadcast domain. This minimizes bandwidth contention, which effectively increases the bandwidth available to users. Routers also maintain broadcast domain isolation by blocking broadcast frames. Therefore, traffic can pass from one VLAN to another only through a router or a layer 3 switch (which has routing capabilities). Typically, each subnet belongs to a different VLAN; therefore, a network with many subnets will probably have many VLANs.

Switches and VLANs enable a network administrator to assign users to broadcast domains based on the users' job needs. This provides a high level of deployment flexibility for a network administrator. Advantages of VLANs include the following:

- Segmentation of broadcast domains effectively increases available bandwidth.
- Enhanced security through isolation of user communities.
- Deployment flexibility based upon job function rather than physical placement.

3.7.2 VLAN SWITCH PORT MODES

VLAN switch ports run in either access or trunk mode. In access mode, the interface belongs to one and only one VLAN. Normally a switch port in access mode attaches to an end-user device or a server. An access link connects a VLAN-unaware device to the port of a VLAN-aware bridge.

Trunks, on the other hand, multiplex traffic for multiple VLANs over the same physical link. Trunk links usually interconnect switches, and the devices connected to a trunk link must be VLAN-aware. Trunk protocols may be proprietary or may be based on the IEEE 802.1Q standard. Without trunk links, multiple access links would have to be installed to support multiple VLANs between switches, which is impractical, costly and an administrative burden. Trunks are therefore the preferred option for interconnecting switches in most cases.

Fig 3.24 shows two VLANs (VLAN 2 and VLAN 3) in switch no. 1 and switch no. 2 interconnected using a trunk port.

Fig 3.24 VLAN creation and trunking (interconnection of VLANs using a trunk port)
