Data Communication (3rd Sem) PDF
Summary
This document provides an overview of data communication concepts, including message, sender, receiver, transmission, and protocol. It explains different communication models, types of networks (LAN, MAN, WAN), circuit switching, packet switching, and frame relay. It also details the layered architecture of protocols, like OSI and TCP/IP, and their key features.
DATA COMMUNICATION
By: Akshith Shetty, Keshav Nayak

Unit 1

Data communication: The transfer of data from one device to another via some form of transmission medium.
- Message: the information (data) to be communicated.
- Sender: the device that sends the message.
- Receiver: the device that receives the message.
- Transmission medium: the physical path by which a message travels from sender to receiver.
- Protocol: the set of rules that governs the data communication.

Communication model: (figure)

Data communication model:
A framework that defines how data is transmitted and processed between devices in a communication network. Example: communication between a workstation and a server over a public telephone network.
- Source: the device that generates the data to be transmitted.
- Transmitter: transforms and encodes the information generated by the source so that it can be sent through some sort of transmission system.
- Transmission system: can be a single transmission line or a complex network connecting source and destination.
- Receiver: accepts the signal from the transmission system and converts it into a form that can be handled by the destination device.
- Destination: takes the incoming data from the receiver.

Difference between LAN, MAN and WAN:
- Definition: a LAN interconnects computers within a limited area such as an office or school; a MAN interconnects users with computer resources in a geographical area larger than a LAN but smaller than a WAN; a WAN extends over a larger geographical area.
- Coverage: a LAN covers roughly 1 km to 10 km; a MAN covers up to about 100 km; a WAN covers areas beyond 100 km.
- Data transfer speed: high in a LAN, moderate in a MAN, low in a WAN.
- Propagation delay: short in a LAN, moderate in a MAN, long in a WAN.
- Typical use: LANs in schools, colleges and offices; MANs in towns and cities; WANs across states and countries.

LAN (some extra points):
- LANs come in different configurations; the most common are switched LANs and wireless LANs.
- Most common switched LAN: switched Ethernet.
- Most common wireless LAN: Wi-Fi LANs.

WAN (some extra points):
- Traditionally WANs have been implemented using one of two technologies: circuit switching and packet switching.
- Subsequently, frame relay and ATM networks assumed major roles.
- While ATM and, to some extent, frame relay are still widely used, they are gradually being supplanted by services based on Gigabit Ethernet and Internet Protocol technologies.

->Circuit switching:-
1. In circuit switching a dedicated path is established between two stations through the nodes of the network.
2. That path is a connected sequence of physical links between nodes.
3. Data is transmitted along the dedicated path.
4. At each node, incoming data is switched to the appropriate outgoing channel without delay.
Example: the telephone network.

->Packet switching:-
1. In packet switching there is no dedicated path.
2. Data is divided into a sequence of chunks called packets.
3. Each packet is transmitted through the network from node to node along some path (a small sketch of splitting and reassembling data follows below).
4. At the destination the packets are reassembled into the original data, just as they were divided at the source.
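A minimal sketch, not any real protocol, of the packet-switching idea just described: data is split into numbered packets and reassembled at the destination even if the packets arrive out of order. The payload size and sequence numbering are illustrative assumptions.

```python
# Illustrative sketch only: splitting data into numbered packets and
# reassembling them at the destination, as in packet switching.

def packetize(data: bytes, payload_size: int):
    """Split data into (sequence_number, payload) packets."""
    return [(seq, data[i:i + payload_size])
            for seq, i in enumerate(range(0, len(data), payload_size))]

def reassemble(packets):
    """Rebuild the original data; packets may arrive out of order."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Data communication transfers data between devices."
packets = packetize(message, payload_size=8)
packets.reverse()                      # simulate out-of-order arrival
assert reassemble(packets) == message
```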
->Frame Relay:-
1. Frame Relay is a packet-switched data communication service that operates at the data link layer (Layer 2) of the OSI model.
2. It is designed for efficient and cost-effective data transmission over Wide Area Networks (WANs).
3. Frame Relay uses frames to encapsulate data and relies on a network of switches (Frame Relay switches or routers) to forward these frames to their destination.

->ATM (Asynchronous Transfer Mode):-
1. ATM operates at both the data link layer (Layer 2) and the network layer (Layer 3) of the OSI model.
2. It was designed to handle a wide range of multimedia data types and provide a high level of service quality for various applications.
3. ATM is a high-speed, cell-switched networking technology, sometimes known as cell relay.

->Need of protocol architecture:-
1. Complex tasks can be divided into simpler tasks.
2. Peer layers communicate using a protocol.
3. Complexity is kept at the end systems.
4. Provides modularity.

->Key features of a protocol:-
1. Syntax:
   - Definition: syntax refers to the structure and format of the data.
   - Importance: syntax ensures that devices understand the structure of the data being exchanged.
2. Semantics:
   - Definition: semantics defines the meaning of each section of data.
   - Importance: semantics ensures that the information being exchanged is understood in the correct context.
3. Timing:
   - Definition: timing refers to when data is sent and how long devices should wait for a response.
   - Importance: timing is crucial for coordinating communication between devices.

->Protocol architecture in a computer network:-
The communication task is organised into three independent layers.
1. Application layer:
   - Contains the logic to support various user applications.
   - Each application on a computer has a unique address, which allows the transport layer to support multiple applications on each computer.
   - These addresses are known as service access points.
2. Transport layer:
   - Contains mechanisms that ensure data is delivered in the same order in which it was sent.
3. Network access layer:
   - Concerned with the exchange of data between a computer and the network to which it is attached.

Protocol data unit (PDU): the unit of information handled at each layer (a small encapsulation sketch follows below).
- Application layer (user data): the PDU is the "message" or "data". This layer adds control information relevant to the application, such as application-specific headers and metadata.
- Transport layer (TCP segment): the PDU is a "segment" in the case of TCP. The transport layer appends a TCP header to the user data; this header includes information such as the destination port, a sequence number for reordering, and a checksum for error detection.
- Network layer (IP datagram): the PDU is a "datagram", specifically an IP datagram in the TCP/IP architecture. The network layer adds an IP header containing information crucial for routing, such as the destination host address.
- Data link layer (frame): the PDU is typically a "frame" (or packet). This layer adds a header containing information needed by the specific subnetwork, including addressing details for the next hop.
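A minimal sketch of the layered encapsulation just described. The field names below are illustrative only, not real TCP/IP header layouts; each layer simply wraps the PDU from the layer above with its own control information.

```python
# Illustrative encapsulation: each layer wraps the PDU from the layer above.

def add_tcp_header(message, dst_port, seq):
    return {"tcp": {"dst_port": dst_port, "seq": seq}, "payload": message}

def add_ip_header(segment, dst_addr):
    return {"ip": {"dst_addr": dst_addr}, "payload": segment}

def add_link_header(datagram, next_hop):
    return {"link": {"next_hop": next_hop}, "payload": datagram}

message = "user data"                                     # application-layer PDU
segment = add_tcp_header(message, dst_port=80, seq=1)     # transport-layer PDU
datagram = add_ip_header(segment, dst_addr="192.0.2.7")   # network-layer PDU
frame = add_link_header(datagram, next_hop="aa:bb:cc:dd:ee:ff")  # link-layer PDU

# Decapsulation at the receiver peels the headers off in reverse order.
assert frame["payload"]["payload"]["payload"] == message
```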
OSI model (Open Systems Interconnection):
OSI is a reference model with a 7-layer architecture, developed by ISO.

7. Application layer
- Serves as a window through which application processes access network services.
- Provides users with access to the network environment.

6. Presentation layer
- Data is manipulated into the format required for transmission over the network.
- Performs encryption/decryption and compression of data.

5. Session layer
- Establishes a session for communication between two devices.
- Terminates the session once the communication is completed.

4. Transport layer
- In the transport layer the data unit is called a segment.
- Source and destination port addresses are added to its header, and the segment is forwarded to the network layer.

3. Network layer
- In the network layer the data unit is called a packet.
- The sender's and receiver's IP addresses are added to the segments received from the transport layer.

2. Data link layer
- In the data link layer the data unit is called a frame.
- The sender's and receiver's MAC addresses are added to the packets received from the network layer.

1. Physical layer
- In the physical layer data is in the form of a bit stream.
- This layer specifies the transmission medium, signal encoding technique and bandwidth.

Advantages of the layered model of data communication:
- Simplified design and implementation: each layer has a specific set of functions, making the design and implementation of communication systems more manageable. Developers can focus on one layer at a time, which reduces the overall complexity of the system.
- Modularity and ease of maintenance: layers are modular, with each layer focusing on a specific aspect of communication. This makes it easier to understand, implement and maintain each layer independently without affecting the others.
- Flexibility and scalability: the layered model allows different protocols to be chosen for each layer based on specific requirements. It also facilitates scalability by enabling the addition of new layers or the modification of existing ones without disrupting the entire system.
- Educational and documentation benefits: the layered model provides a conceptual framework that aids education, training and documentation. It helps students, developers and network administrators understand the functionality and interaction of different components in a systematic manner.

TCP/IP protocol suite: (figure)
- The session and presentation layers are combined with the application layer.
- The rest of the layers perform the same functions described in the OSI model.

Difference between OSI and TCP/IP:
- OSI has 7 layers; TCP/IP has 5 layers.
- OSI is a reference model; TCP/IP is an implementation of the OSI model.
- OSI has strict boundaries for protocols; in TCP/IP the protocol boundaries are not strictly defined.
- The OSI model was developed before its protocols; in TCP/IP the protocols were developed first and the model was derived from them.
- OSI is a protocol-independent standard; TCP/IP is a protocol-dependent standard.

Internet terminologies:
- Central office (CO): the physical location where telecommunication equipment is housed and interconnected to provide various communication services.
- Customer premises equipment (CPE): devices located at the customer's premises used to connect to and interact with the service provider.
- Internet service provider (ISP): a company or organisation that offers individuals and businesses access to Internet services.
- Network service provider (NSP): a company or organisation that operates and manages a large-scale network infrastructure.
- Network access point (NAP): a physical location where multiple ISPs and networks connect to exchange Internet traffic.
- Point of presence (POP): a physical location where an ISP has a presence within a larger network infrastructure.

Definitions:
- Data communication: the transfer of data from one device to another through some form of transmission medium.
- Computer network: a collection of interconnected computers and devices that can share data among themselves.
- Internet: a global network of computers, commonly accessed through the World Wide Web.
- Protocol: a set of rules that must be followed for efficient exchange of data over a network.
- Frequency: the rate at which a signal repeats, i.e. the number of cycles per second.
- Phase: phase (or phase shift) describes the position of the waveform relative to time 0.
- Wavelength: the distance occupied by a single cycle.
- Bandwidth: the maximum amount of data that can be transmitted over a network in a given amount of time.
- Amplitude: the range between the highest and lowest voltage levels of a signal.

Most significant transmission impairments:
- Attenuation
- Delay distortion
- Noise

Attenuation:
- The gradual loss in strength of a signal, resulting in a reduction of its amplitude.
- This can lead to weaker and distorted signals at the receiving end.
- Techniques such as signal boosting (amplification) and error correction are employed to mitigate the effects of attenuation.

Delay distortion:
- Different frequency components of a signal experience varying delays as they traverse the medium.
- Delay distortion may lead to intersymbol interference (transmitted signal elements overlap with each other).
- Techniques such as equalisation can be used to mitigate the effects of delay distortion.

Noise:
- Unwanted interference introduced while transmitting a signal.
- Noise can corrupt the original data being sent.
- Techniques such as shielding and signal-processing algorithms are used to mitigate the effect of noise.

Types of noise:
- Thermal noise:
  - Due to the thermal agitation of electrons.
  - A function of temperature.
  - Uniformly distributed across the bandwidths used in communication channels, and hence referred to as white noise.
- Intermodulation noise:
  - When signals at different frequencies share the same transmission medium, the result may be intermodulation noise.
  - Intermodulation noise produces signals at frequencies that are the sum or difference of the two original frequencies, or multiples of those frequencies.
- Crosstalk:
  - Occurs when the signal from one channel interferes with the signal of an adjacent channel.
  - Crosstalk has the same order of magnitude as thermal noise, or less.
- Impulse noise:
  - Also known as spike noise; a sudden disturbance in the communication system.
  - Consists of short-duration, high-amplitude voltage/current spikes that can corrupt the original data signal.

Some definitions:
- Analog data: information that is continuous.
- Digital data: information that is discrete.
- Analog signal: has many levels of intensity over a period of time.
- Digital signal: can have only a limited number of defined values.

Key data transmission terms:
- Signal element: represents a discrete value or state that carries information. Signal elements can have binary or multiple levels.

Digital signal encoding formats:

Non-return to zero level (NRZ-L):
- NRZ-L is a digital signal encoding format that represents binary data using two different voltage levels, typically high and low.
- In NRZ-L: a high voltage represents '0' and a low voltage represents '1'.
- Example: (waveform figure)
Non-return to zero inverted (NRZ-I):
- NRZ-I is a digital signal encoding format that represents binary data using two different voltage levels, typically high and low.
- In NRZ-I: when a '0' is encountered there is no transition at the beginning of the bit interval; when a '1' is encountered there is a transition at the beginning of the bit interval.
- Example (waveform figure) for the bit pattern 0 1 0 0 1 1 0 0 0 1 0.

NRZ-L and NRZ-I:
- Advantages: make efficient use of bandwidth; noise is relatively easy to identify.
- Disadvantages: presence of a DC component; lack of synchronisation capability.

Multi-level binary:
- These schemes use more than two signal levels to represent digital data.

Bipolar alternate mark inversion (AMI):
- Bipolar AMI is a digital signal encoding format that represents binary data using three different voltage levels.
- In bipolar AMI: when a '0' is encountered there is no signal on the line (zero voltage); when a '1' is encountered, successive 1s alternate between positive and negative voltage.
- Example (waveform figure) for the bit pattern 0 1 0 0 1 1 0 0 0 1 0.

Pseudoternary:
- Pseudoternary is a digital signal encoding format that represents binary data using three different voltage levels; the representation is the opposite of AMI.
- In pseudoternary: when a '1' is encountered there is no signal on the line; when a '0' is encountered, successive 0s alternate between positive and negative voltage.
- Example (waveform figure) for the bit pattern 0 1 0 0 1 1 0 0 0 1 0.

Bipolar AMI and pseudoternary:
- Advantages: no loss of synchronisation for 1s in AMI and 0s in pseudoternary; no DC component for 1s in AMI and 0s in pseudoternary.
- Disadvantages: loss of synchronisation for long runs of 0s in AMI and 1s in pseudoternary; a DC component is present for 0s in AMI and 1s in pseudoternary.

Biphase:
- Biphase is a digital signal encoding scheme in which each bit is represented by a transition in the middle of the bit period.

Manchester:
- When a '0' is encountered, the signal transitions from high to low in the middle of the interval.
- When a '1' is encountered, the signal transitions from low to high in the middle of the interval.
- Example (waveform figure) for the bit pattern 0 1 0 0 1 1 0 0 0 1 0.

Differential Manchester:
- When a '0' is encountered, there is a transition at the beginning of the interval.
- When a '1' is encountered, there is no transition at the beginning of the interval.
- In addition, there is always a transition in the middle of the interval.
- Example (waveform figure) for the bit pattern 0 1 0 0 1 1 0 0 0 1 0.

Manchester and differential Manchester:
- Advantages:
  - Synchronisation: because there is a predictable transition during each bit time, the receiver can synchronise on that transition.
  - No DC component: DC components are completely eliminated in biphase encoding.
- Disadvantages:
  - Notable susceptibility to noise and distortion.
  - Biphase encoding may require more complex circuitry and processing.
(A short sketch of several of these encodings appears below, after the scrambling notes.)

Modulation rate:
- The modulation rate is the rate at which signal elements are generated.

Scrambling technique:
- Idea behind this approach: a sequence that would result in a constant voltage level is replaced by a filling sequence that provides sufficient transitions for the receiver's clock to maintain synchronisation.
- Design goals:
  - No long sequences of zero-level line signals.
  - No DC component.
  - Error-detection capability.
  - No reduction in data rate.
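A minimal sketch of a few of the line-coding rules described above (NRZ-L, NRZ-I, bipolar AMI and Manchester), under simplified, assumed conventions: voltages are represented as +1, -1 and 0, and NRZ-I starts from a +1 level.

```python
# Illustrative line coding: one level per bit (two per bit for Manchester).

def nrz_l(bits):
    # NRZ-L as defined above: '0' -> high (+1), '1' -> low (-1).
    return [+1 if b == "0" else -1 for b in bits]

def nrz_i(bits, level=+1):
    # NRZ-I: invert the level at the start of the interval for every '1'.
    out = []
    for b in bits:
        if b == "1":
            level = -level
        out.append(level)
    return out

def bipolar_ami(bits):
    # AMI: '0' -> no signal (0); successive '1's alternate between +1 and -1.
    out, last_one = [], -1
    for b in bits:
        if b == "0":
            out.append(0)
        else:
            last_one = -last_one
            out.append(last_one)
    return out

def manchester(bits):
    # Manchester: '0' -> high-to-low mid-bit, '1' -> low-to-high mid-bit.
    out = []
    for b in bits:
        out += [+1, -1] if b == "0" else [-1, +1]
    return out

pattern = "01001100010"
print(nrz_l(pattern))
print(nrz_i(pattern))
print(bipolar_ami(pattern))
print(manchester(pattern))
```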
Bipolar with 8-zeros substitution (B8ZS):
- A drawback of bipolar AMI is that long strings of zeros may result in loss of synchronisation.
- To overcome this, the encoding is amended with the following rules:
  - If an octet of all zeros occurs and the last voltage pulse preceding this octet was positive, the eight zeros of the octet are encoded as 000+-0-+.
  - If an octet of all zeros occurs and the last voltage pulse preceding this octet was negative, the eight zeros of the octet are encoded as 000-+0+-.
- Example: (waveform figure)

High density bipolar - 3 zeros (HDB3):
- In this scheme strings of four zeros are replaced by sequences containing one or two pulses.
- Example (waveform figure) for the bit pattern 1 1 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 1 0.

B8ZS and HDB3: (comparison figure)

Types of errors:
- In data communication, an error occurs when bits are altered between transmission and reception.
- Two general types of errors are:
  - Single-bit error: a single bit (0 or 1) in a data unit (byte or packet) is altered.
  - Burst error: multiple consecutive bits within a short sequence of data are altered.

Some terminologies:
- Dataword: also known as the "message"; the original data that needs to be transmitted or processed.
- Codeword: the result of applying an encoding scheme to the original dataword.
- Redundant bit: also known as a "parity bit" or "check bit"; an additional bit that is added to the original dataword during encoding.

Error detection:
- The sender creates codewords out of datawords using a generator that applies certain rules and procedures.
- Each codeword sent to the receiver may change during transmission.
- If the received codeword is the same as one of the valid codewords, the word is accepted.
- If the received codeword is not valid, it is discarded.

Parity check:
Parity check is a basic error-detection technique that involves adding a single parity bit to the original dataword. Here is how it works:
- Even parity:
  - The total number of 1s in the dataword and the parity bit combined should be even.
  - The parity bit is set to 0 or 1 so that the total number of 1s becomes even.
  - If an even number of bits are inverted due to error, the error goes undetected; an odd number of inverted bits is detected.
- Odd parity:
  - The total number of 1s in the dataword and the parity bit combined should be odd.
  - The parity bit is set to 0 or 1 so that the total number of 1s becomes odd.
  - If an even number of bits are inverted due to error, the error goes undetected.

Two-dimensional parity check:
- Data is organised into a matrix (grid) and additional parity bits are calculated for both the rows and the columns of the matrix.
- If two bits in one data unit are altered and two bits in exactly the same positions of another data unit are also altered, the error goes undetected.

Internet checksum: (figure)
- Note: to perform the one's complement operation on a set of binary digits, replace 0 digits with 1 digits and 1 digits with 0 digits.
- The one's-complement addition of two binary integers of equal bit length is performed as follows:
  1. The two numbers are treated as unsigned binary integers and added.
  2. If there is a carry out of the leftmost bit, add 1 to the sum. This is called an end-around carry.
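A minimal sketch of an Internet-style checksum built from the 16-bit one's-complement addition with end-around carry described above. Real IP/TCP checksums follow this idea; the grouping of the data into 16-bit words below is an assumed simplification.

```python
# Illustrative 16-bit one's-complement checksum.

def ones_complement_sum16(words):
    total = 0
    for w in words:
        total += w
        if total > 0xFFFF:                # carry out of the leftmost bit:
            total = (total & 0xFFFF) + 1  # end-around carry
    return total

def checksum(words):
    # The checksum is the one's complement of the one's-complement sum.
    return ~ones_complement_sum16(words) & 0xFFFF

data = [0x4500, 0x0030, 0x4422, 0x4000, 0x8006]
cks = checksum(data)

# Receiver check: summing the data plus the checksum gives all 1s (0xFFFF).
assert ones_complement_sum16(data + [cks]) == 0xFFFF
```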
Hamming distance:
- Definition: the Hamming distance d(v1, v2) between two n-bit binary sequences v1 and v2 is the number of bit positions in which v1 and v2 disagree.
- For example, if v1 = 011011 and v2 = 110001, then d(v1, v2) = 3.
- Role in error detection: the Hamming distance is used to detect whether errors have occurred during data transmission.
  - The received codeword is compared to the expected (original) codeword.
  - If the Hamming distance between the received codeword and the original codeword is nonzero, errors have occurred.

Note: Now consider the block code technique for error correction. Suppose we wish to transmit blocks of data of length k bits. Instead of transmitting each block as k bits, we map each k-bit sequence into a unique n-bit codeword, for example:

  Data block   Codeword
  00           00000
  01           00111
  10           11001
  11           11110

Note: in a Hamming code, the positions of the parity bits are 2^n, where n = 0, 1, 2, ...

Minimum Hamming distance:
- Definition: the minimum Hamming distance is the smallest Hamming distance among all possible pairs of distinct codewords.
- Role in error detection and correction:
  - Error detection: if the minimum Hamming distance is greater than a threshold value, errors of a certain magnitude can be detected.
  - Error correction: if the minimum Hamming distance is greater than a threshold value, errors of a certain magnitude can be corrected.
  - Example: a binary Hamming code with a minimum Hamming distance of 3 can correct single-bit errors and detect double-bit errors.
- Note: to detect s errors, the minimum Hamming distance should be dmin = s + 1. It can be shown that to correct t errors, we need dmin = 2t + 1.

Cyclic redundancy check (CRC):
- For a given k-bit block of data, the transmitter generates an (n - k)-bit sequence known as the frame check sequence (FCS).
- The resulting frame, consisting of n bits, is exactly divisible by some predetermined number.
- The receiver divides the incoming frame by that number and, if there is no remainder, assumes there was no error.
- Some important points (just for reference):
  - T = n-bit frame to be transmitted
  - D = k-bit block of data (the message), the first k bits of T
  - F = (n - k)-bit FCS, the last (n - k) bits of T
  - P = pattern of n - k + 1 bits; this is the predetermined divisor
- Example: (worked figure)

CRC division using polynomials:
- In the polynomial representation, the divisor is normally referred to as the generator polynomial g(x).
  - Dataword: d(x)
  - Codeword: c(x)
  - Generator: g(x)
  - Syndrome: s(x)
  - Error: e(x)
- If s(x) is not zero, one or more bits are corrupted. If s(x) is zero, either no bit is corrupted or the decoder fails to detect any errors.
- codeword = d(x) * x^(n-k) + r(x), where r(x) is the remainder and n - k is the number of bits in the divisor minus 1.
- Note: in a cyclic code, errors e(x) that are divisible by g(x) are not caught.
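A minimal sketch of CRC generation and checking by modulo-2 (XOR) long division over bit strings, following the D, P, F notation above. The generator P = 1101 (x^3 + x^2 + 1) and the dataword are example values only.

```python
# Illustrative CRC: FCS = remainder of d(x) * x^(n-k) divided by g(x) (mod 2).

def mod2_div_remainder(dividend: str, divisor: str) -> str:
    """Remainder of modulo-2 (XOR) long division; length = len(divisor) - 1."""
    work = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if work[i] == "1":
            for j in range(len(divisor)):
                work[i + j] = str(int(work[i + j]) ^ int(divisor[j]))
    return "".join(work[-(len(divisor) - 1):])

D = "10110011"                                        # k-bit dataword d(x)
P = "1101"                                            # divisor (generator), n - k + 1 bits
F = mod2_div_remainder(D + "0" * (len(P) - 1), P)     # frame check sequence
T = D + F                                             # transmitted frame: D followed by F

# Receiver: the syndrome s(x) is the remainder of dividing the received frame by P;
# a nonzero syndrome means one or more bits were corrupted.
assert mod2_div_remainder(T, P) == "0" * (len(P) - 1)
print("FCS:", F, "frame:", T)
```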
Points to be noted (about the generator polynomial):
1. If the generator has more than one term and the coefficient of x^0 is 1, all single-bit errors can be caught.
2. If the generator cannot divide x^t + 1 (for t between 2 and n - 1), then all isolated double-bit errors can be detected.
3. A generator that contains a factor of x + 1 can detect all odd-numbered errors.
4. All burst errors with L <= r will be detected.
5. All burst errors with L = r + 1 will be detected with probability 1 - (1/2)^(r-1).
6. All burst errors with L > r + 1 will be detected with probability 1 - (1/2)^r.
Here r is the highest power (degree) of the generator polynomial and L is the length of the error burst.

Characteristics of a good generator polynomial:
1. It should have at least two terms.
2. The coefficient of the term x^0 should be 1.
3. It should not divide x^t + 1, for t between 2 and n - 1.
4. It should have the factor x + 1.

Error correction can be handled in two ways:
- Backward error correction (retransmission): once an error is detected, the receiver requests the sender to retransmit the entire data unit.
- Forward error correction: the receiver uses an error-correcting code that automatically corrects the errors.

Forward error correction (FEC):
- On the transmission end, each k-bit dataword is mapped to an n-bit codeword using an FEC encoder, and the codeword is transmitted.
- During transmission the signal is subjected to impairments and errors may occur.
- The received codeword is passed through an FEC decoder at the receiver's end, with one of four possible outcomes:
  - No errors: the transmitted and received codewords are identical.
  - Detectable, correctable errors: for certain error patterns, the decoder can detect and correct the errors.
  - Detectable, not correctable errors: for certain error patterns, the decoder can detect but not correct the errors; in this case it simply reports an uncorrectable error.
  - Undetectable errors: for rare error patterns, the decoder does not detect the error and maps the incoming n-bit codeword to a k-bit dataword that differs from the original.

DIGITAL-TO-ANALOG CONVERSION:
The process of changing one of the characteristics of an analog signal based on the digital data.
- Carrier signal: in analog transmission, the sending device produces a high-frequency signal that acts as a base for the information signal. This signal is called the carrier signal or carrier frequency.
- Modulation: the process of encoding digital or analog data onto a carrier signal for transmission over a communication channel.

Amplitude shift keying (ASK):
- In amplitude shift keying, the amplitude of the carrier signal is varied to represent the data; in ASK both frequency and phase remain constant.

Binary amplitude shift keying (BASK):
- ASK is normally implemented using only two levels. This is referred to as binary amplitude shift keying or on-off keying (OOK).
- The peak amplitude of one signal level is 0; the other is the same as the amplitude of the carrier signal.

Implementation of BASK:
- If the digital data are presented as a unipolar NRZ digital signal with a high voltage of 1 V and a low voltage of 0 V, the implementation can be achieved by multiplying the NRZ digital signal by the carrier signal coming from an oscillator.
- When the amplitude of the NRZ signal is 1, the amplitude of the carrier remains the same; when the amplitude of the NRZ signal is 0, the amplitude of the carrier is zero. (A small sketch follows below.)

Multilevel ASK:
- Multilevel ASK has more than two levels. We can use 4, 8, 16 or more different amplitudes and modulate the data using 2, 3, 4 or more bits at a time.
- Although this is not implemented with pure ASK, it is implemented with QAM.
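A minimal sketch of the BASK (on-off keying) implementation described above: a unipolar NRZ bit stream multiplied by a sinusoidal carrier. The sample rate and carrier frequency are arbitrary illustrative choices.

```python
# Illustrative BASK/OOK: carrier present for '1', silence for '0'.
import math

def bask_waveform(bits, carrier_freq=4.0, samples_per_bit=40):
    """Return amplitude samples: the NRZ level (0 or 1) times the carrier."""
    samples = []
    for i, b in enumerate(bits):
        for k in range(samples_per_bit):
            t = i + k / samples_per_bit          # time measured in bit periods
            carrier = math.sin(2 * math.pi * carrier_freq * t)
            samples.append(carrier if b == "1" else 0.0)
    return samples

wave = bask_waveform("1011")
print(len(wave), round(max(wave), 3), round(min(wave), 3))
```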
Frequency shift keying (FSK):
- In frequency shift keying, the frequency of the carrier signal is varied to represent the data; in FSK both peak amplitude and phase remain constant.

Binary frequency shift keying (BFSK):
- In BFSK we consider two carrier frequencies, f1 and f2. We use the first carrier if the data element is 0 and the second if the data element is 1.

Implementation of BFSK:
There are two implementations of BFSK:
- Noncoherent BFSK:
  - There may be a discontinuity in the phase when one signal element ends and the next begins.
  - Implemented by treating BFSK as two ASK modulations using two carrier frequencies.
- Coherent BFSK:
  - The phase continues through the boundary of two signal elements.
  - Implemented using one voltage-controlled oscillator (VCO) that changes its frequency according to the input voltage.

Phase shift keying (PSK):
- In phase shift keying, the phase of the carrier is varied to represent the data; in PSK both peak amplitude and frequency remain constant.

Binary phase shift keying (BPSK):
- In BPSK we have two signal elements. When the data bit is 1 (positive voltage in polar NRZ), the carrier signal element has a phase of 0°; when the bit is 0 (negative voltage), the carrier signal element has a phase of 180°.
- Implementation: the polar NRZ signal is multiplied by the carrier frequency; the 1 bit (positive voltage) is represented by a phase starting at 0°, and the 0 bit (negative voltage) is represented by a phase starting at 180°.

Quadrature phase shift keying (QPSK):
- QPSK is a digital modulation scheme used to transmit data by varying the phase of a carrier signal, two bits at a time.
- Implementation:
  - QPSK uses two separate BPSK modulations. The incoming bits are first passed through a serial-to-parallel converter that sends one bit to one modulator and the next bit to the other modulator.
  - If the duration of each bit in the incoming signal is T, the duration of each bit sent to the corresponding BPSK modulator is 2T.
  - The two composite signals created by each multiplier are sine waves with the same frequency but different phases. When they are added, the result is another sine wave with one of four possible phases: 45°, -45°, 135° and -135°.
  - There are four kinds of signal elements in the output signal (L = 4), so we can send 2 bits per signal element (r = 2).

Constellation diagram:
- A constellation diagram is a graphical representation used to display and visualise the various symbols or signal states that represent digital data.
- For each point on the diagram, four pieces of information can be deduced:
  1. The projection of the point on the X axis defines the peak amplitude of the in-phase component.
  2. The projection of the point on the Y axis defines the peak amplitude of the quadrature component.
  3. The length of the line (vector) that connects the point to the origin is the peak amplitude of the signal element.
  4. The angle the line makes with the X axis is the phase of the signal element.
- For ASK we use only an in-phase carrier, so the two points lie on the X axis: binary 0 has an amplitude of 0 V and binary 1 has an amplitude of 1 V (for example); the points are located at the origin and at 1 unit.
- BPSK also uses only an in-phase carrier, but a polar NRZ signal is used for modulation. It creates two types of signal elements, one with amplitude 1 and the other with amplitude -1. In other words, BPSK creates two different signal elements, one with amplitude 1 V and in phase and the other with amplitude 1 V and 180° out of phase.
- All signal elements in QPSK have the same amplitude (the vector length √2 when the in-phase and quadrature components each have amplitude 1), but their phases are different (45°, 135°, -135° and -45°). (A small mapping sketch follows below.)
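A minimal sketch of the QPSK bit-pair to phase mapping and the resulting in-phase/quadrature constellation points. The particular pairing of dibits to phases below is an illustrative assumption (a Gray-style mapping), not a mandated standard.

```python
# Illustrative QPSK constellation mapping.
import math

PHASES = {"00": -135.0, "01": 135.0, "11": 45.0, "10": -45.0}   # degrees (assumed mapping)

def qpsk_symbols(bits):
    """Map a bit string (even length) to (phase, I, Q) constellation points."""
    symbols = []
    for i in range(0, len(bits), 2):
        phase = PHASES[bits[i:i + 2]]
        amplitude = math.sqrt(2)                              # vector length from the origin
        i_comp = amplitude * math.cos(math.radians(phase))    # in-phase component
        q_comp = amplitude * math.sin(math.radians(phase))    # quadrature component
        symbols.append((phase, round(i_comp, 3), round(q_comp, 3)))
    return symbols

print(qpsk_symbols("00011110"))
# -> four symbols, one per dibit, each with amplitude sqrt(2) and phase +/-45 or +/-135 degrees
```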
Quadrature amplitude modulation (QAM):
QAM is a modulation scheme used to transmit data over a carrier signal by varying both the amplitude and the phase of the carrier wave. Here is how QAM works:
- Carrier signal: the carrier is typically a sinusoidal wave whose frequency is much higher than that of the data signal to be transmitted.
- Amplitude variation: the amplitude of the carrier signal is varied to represent different combinations of digital bits; each amplitude represents a unique symbol or constellation point.
- Phase variation: in addition to amplitude variation, QAM also varies the phase of the carrier signal to represent different symbols. The phase changes can be in multiples of 90° (π/2 radians), which allows multiple phase states.

Analog-to-digital conversion:

Pulse code modulation (PCM):
Pulse Code Modulation is a technique used to convert analog signals, such as audio or video, into a digital format. PCM is widely used in applications like audio recording, voice communication and data transmission.
- Sampling:
  - The first step in PCM is sampling, where the continuous analog signal is sampled at regular intervals.
  - These samples represent the amplitude of the analog signal at specific points in time.
  - The Nyquist-Shannon sampling theorem dictates that the sampling rate must be at least twice the highest frequency component of the analog signal in order to reconstruct it accurately later.
- Quantization:
  - After sampling, each sampled value is quantized, which means it is mapped to a discrete set of values.
  - This process involves assigning a digital code (typically binary) to each sampled value.
- Encoding:
  - The quantized values are then encoded into a digital bitstream.
  - Each binary code represents one sample and is transmitted or stored as a sequence of bits.
- Example: (figure)

Delta modulation (DM):
Delta modulation is a technique for converting analog signals into digital format. It operates by quantizing the difference (delta) between the current sample of an analog signal and the previous quantized sample.
- Sampling:
  - As in PCM, delta modulation begins with sampling of the analog signal at regular intervals.
  - These samples represent the amplitude of the analog signal at specific points in time.
- Delta calculation:
  - Instead of quantizing the sampled value directly as in PCM, the system quantizes the difference (delta) between the current sample and the previous quantized sample.
  - At each sampling instance, the system only considers whether the signal has increased or decreased since the last quantized value.
- Comparison:
  - The delta (difference) is compared with a predefined step size parameter (Δ).
  - This step size determines the granularity of the quantization and essentially sets the resolution of the delta modulation system.
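A minimal sketch of the PCM quantization and encoding steps described above, applied to values that have already been sampled. The signal range, number of levels and example samples are illustrative assumptions.

```python
# Illustrative PCM: quantize each sample to 2**bits_per_sample levels, encode in binary.

def pcm_encode(samples, v_min=-1.0, v_max=1.0, bits_per_sample=3):
    levels = 2 ** bits_per_sample
    step = (v_max - v_min) / levels
    codes = []
    for s in samples:
        level = int((s - v_min) / step)          # quantization: map to a discrete level
        level = min(max(level, 0), levels - 1)   # clamp to the valid range
        codes.append(format(level, f"0{bits_per_sample}b"))
    return codes

# Samples taken at regular intervals (the sampling step is assumed already done).
samples = [0.0, 0.38, 0.71, 0.92, 1.0, 0.92, 0.71, 0.38]
print(pcm_encode(samples))   # -> ['100', '101', '110', '111', '111', '111', '110', '101']
```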
Unit 2

Multiplexing:
Multiplexing in data communication is a technique used to combine multiple signals onto a single transmission medium.

Basic format of a multiplexed system: (figure)
- The lines on the left direct their transmission streams to a multiplexer (MUX).
- The MUX combines them into a single stream (many-to-one).
- At the receiving end, that stream is fed into a demultiplexer (DEMUX).
- The DEMUX separates the stream back into its component transmissions (one-to-many) and directs them to their corresponding lines.
- Note: in the figure, the word link refers to the physical path; the word channel refers to the portion of a link that carries a transmission between a given pair of lines. One link can have many (n) channels.

Different categories of multiplexing:

Frequency division multiplexing (FDM):
FDM combines multiple analog signals onto a single transmission medium (communication channel) by allocating each signal a specific range of frequencies within the available bandwidth of the channel.
- Multiplexing process: (figure)
  - Each source generates a signal of a similar frequency range.
  - Inside the multiplexer, these similar signals modulate different carrier frequencies (f1, f2 and f3).
  - The resulting modulated signals are then combined into a single composite signal that is sent out over a link with enough bandwidth to accommodate it.
- Demultiplexing process: (figure)
  - The demultiplexer uses a series of filters to decompose the multiplexed signal into its constituent component signals.
  - The individual signals are then passed to demodulators that separate them from their carriers and pass them to the output lines.

Q) Assume that a voice channel occupies a bandwidth of 4 kHz. We need to combine three voice channels onto a link with a bandwidth of 12 kHz, from 20 to 32 kHz. Show the configuration using the frequency domain. Assume there are no guard bands.
Solution: We shift (modulate) each of the three voice channels to a different band, as shown in Figure 6.6. We use the 20-24 kHz band for the first channel, the 24-28 kHz band for the second, and the 28-32 kHz band for the third, and then combine them. At the receiver, each channel receives the entire signal and uses a filter to separate out its own signal: the first channel uses a filter that passes frequencies between 20 and 24 kHz and discards all others, the second uses a filter that passes 24-28 kHz, and the third uses a filter that passes 28-32 kHz. Each channel then shifts its frequency band to start from zero.

Q) Five channels, each with a 100-kHz bandwidth, are to be multiplexed together. What is the minimum bandwidth of the link if a guard band of 10 kHz is needed between channels to prevent interference?
Solution: For five channels we need at least four guard bands. The required bandwidth is therefore at least 5 × 100 + 4 × 10 = 540 kHz, as shown in Figure 6.7. (A small calculation sketch follows below.)

Wavelength division multiplexing (WDM):
- WDM is a technique used in optical fibre communication systems to transmit multiple data signals simultaneously over a single optical fibre.
- Very narrow bands of light from different sources are combined to make a wider band of light; at the receiver, the signals are separated by the demultiplexer.
- Although WDM technology is very complex, the basic idea is simple:
  a. We want to combine multiple light sources into one single light beam at the multiplexer and do the reverse at the demultiplexer.
  b. The combining and splitting of light sources are easily handled by a prism.
- One application of WDM is the SONET network.
- A newer method, called dense WDM (DWDM), can multiplex a very large number of channels by spacing the channels very close to one another, achieving even greater efficiency.
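A minimal sketch of the FDM link-bandwidth calculation used in the guard-band example above: n channels need (n - 1) guard bands between them.

```python
# Illustrative FDM bandwidth calculation.

def fdm_link_bandwidth_khz(num_channels, channel_bw_khz, guard_band_khz):
    """Minimum link bandwidth with guard bands between adjacent channels."""
    return num_channels * channel_bw_khz + (num_channels - 1) * guard_band_khz

# The worked example: five 100-kHz channels with 10-kHz guard bands.
assert fdm_link_bandwidth_khz(5, 100, 10) == 540
print(fdm_link_bandwidth_khz(5, 100, 10), "kHz")
```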
Time division multiplexing (TDM):
- In TDM, the available time on the channel is divided into discrete time slots, and each time slot is allocated to a specific data source or signal.
- TDM is a digital process that can be applied when the data rate capacity of the transmission medium is greater than the data rate required by the sending and receiving devices.

Synchronous and statistical time division multiplexing:
- Slot allocation: in synchronous TDM one slot is allocated to each input line, and an input line is given its slot in the output frame even if it has no data to send; in statistical TDM the slots are allocated dynamically, and an input line is given a slot in the output frame only if it has data to send.
- Number of slots: in synchronous TDM the number of slots in each frame equals the number of input lines; in statistical TDM it is less than the number of input lines.
- Addressing: in synchronous TDM the slots carry data only and no addressing is needed; in statistical TDM each slot contains both data and the address of the destination.
- Synchronisation bits: added to each frame in synchronous TDM; not added in statistical TDM.
- Bandwidth: wastage of bandwidth can occur in synchronous TDM; there is no wastage of bandwidth in statistical TDM.
- Buffering: in synchronous TDM no buffering is done and the frame is sent after a specific interval of time whether or not it has data to send; in statistical TDM buffering is done, and only those inputs whose buffers contain data are given slots in the output frame.

Buffer: in TDM, a buffer is a temporary storage area (memory space) used to hold data as it is being transmitted or received within the TDM system.

Q) In Figure 6.13, the data rate of each input connection is 1 kbps. If 1 bit at a time is multiplexed (a unit is 1 bit), what is the duration of (1) each input slot, (2) each output slot, and (3) each frame?
Solution:
1. The data rate of each input connection is 1 kbps, so the bit duration is 1/1000 s, or 1 ms. The duration of the input time slot is 1 ms (the same as the bit duration).
2. The duration of each output time slot is one-third of the input time slot, i.e. 1/3 ms.
3. Each frame carries three output time slots, so the duration of a frame is 3 × 1/3 ms = 1 ms. The duration of a frame is the same as the duration of an input unit.

Q) Figure 6.14 shows synchronous TDM with a data stream for each input and one data stream for the output. The unit of data is 1 bit. Find (1) the input bit duration, (2) the output bit duration, (3) the output bit rate, and (4) the output frame rate.
Solution:
1. The input bit duration is the inverse of the input bit rate: 1 / 1 Mbps = 1 μs.
2. The output bit duration is one-fourth of the input bit duration, or 1/4 μs.
3. The output bit rate is the inverse of the output bit duration: 1 / (1/4 μs) = 4 Mbps. This can also be deduced from the fact that the output rate is 4 times any input rate, so the output rate = 4 × 1 Mbps = 4 Mbps.
4. The frame rate is always the same as any input rate, so the frame rate is 1,000,000 frames per second. Because we are sending 4 bits in each frame, we can verify the result of the previous question by multiplying the frame rate by the number of bits per frame.

Q) Four 10-kbps connections are multiplexed together. A unit is 1 bit. Find (i) the duration of 1 bit before multiplexing, (ii) the transmission rate of the link, (iii) the duration of a time slot, and (iv) the duration of a frame.
Solution:
(i) Duration of 1 bit before multiplexing: the inverse of the bit rate, 1 / 10 kbps = 0.1 ms.
(ii) Transmission rate of the link: the sum of the bit rates of all multiplexed channels, 4 × 10 kbps = 40 kbps.
(iii) Duration of a time slot: the time each channel is allocated to transmit one unit. Duration of a time slot = duration of 1 bit before multiplexing / number of channels (equivalently, 1 / total transmission rate) = 0.1 ms / 4 = 0.025 ms.
(iv) Duration of a frame: the total time required for all channels to transmit one unit each in one cycle, i.e. 4 × 0.025 ms = 0.1 ms.
Therefore: (i) 0.1 ms, (ii) 40 kbps, (iii) 0.025 ms, (iv) 0.1 ms.

Data rate management:
- Data rate management is the process of controlling and optimising (making the best use of) the flow of data in a computer or communication system.
- One problem with TDM is how to handle a disparity in the input data rates. If the data rates are not the same, three strategies, or a combination of them, can be used: multilevel multiplexing, multiple-slot allocation and pulse stuffing.
- Multilevel multiplexing: a technique used when the data rate of an input line is a multiple of the others. For example, in Figure 6.19 there are two inputs of 20 kbps and three inputs of 40 kbps. The first two input lines can be multiplexed together to provide a data rate equal to the last three, and a second level of multiplexing can then create an output of 160 kbps.
- Multiple-slot allocation: sometimes it is more efficient to allot more than one slot in a frame to a single input line. In Figure 6.20, the input line with a 50-kbps data rate is given two slots in the output; a demultiplexer is inserted in the line to make two inputs out of one.
- Pulse stuffing: sometimes the bit rates of the sources are not integer multiples of each other. One solution is to make the highest input data rate the dominant rate and add dummy bits to the input lines with lower rates, increasing their rates. This technique is called pulse stuffing, bit padding, or bit stuffing.

Frame synchronisation:
- Synchronisation between the multiplexer and the demultiplexer is a major issue. If they are not synchronised, a bit belonging to one channel may be received by the wrong channel.
- For this reason, one or more synchronisation bits are usually added to the beginning of each frame. These bits, called framing bits, follow a particular pattern that allows the demultiplexer to synchronise with the incoming stream. (A small interleaving sketch follows below.)
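A minimal sketch of synchronous TDM framing as just described: one unit is taken from each input line per frame and a framing bit is prepended. The alternating 0/1 framing-bit pattern is an illustrative assumption.

```python
# Illustrative synchronous TDM: interleave equal-rate inputs, one unit per line per frame.

def tdm_frames(inputs, unit_size=1):
    frames = []
    units_per_line = len(inputs[0]) // unit_size
    for i in range(units_per_line):
        framing_bit = str(i % 2)                       # assumed alternating framing pattern
        payload = "".join(line[i * unit_size:(i + 1) * unit_size] for line in inputs)
        frames.append(framing_bit + payload)
    return frames

lines = ["1010", "1111", "0000", "0110"]               # four synchronised input streams
print(tdm_frames(lines))
# -> ['01100', '10101', '01101', '10100'] : framing bit + one bit from each line
```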
Example 3: Consider four sources, each creating 250 eight-bit characters per second. If the interleaved unit is a character and 1 synchronising bit is added to each frame, find (a) the data rate of each source, (b) the duration of each character in each source, (c) the frame rate, (d) the duration of each frame, (e) the number of bits in each frame, and (f) the data rate of the link.
Solution:
a. The data rate of each source is 250 × 8 = 2000 bps = 2 kbps.
b. Each source sends 250 characters per second, so the duration of a character is 1/250 s, or 4 ms.
c. Each frame has one character from each source, which means the link needs to send 250 frames per second to keep up with the transmission rate of each source.
d. The duration of each frame is 1/250 s, or 4 ms. Note that the duration of each frame is the same as the duration of each character coming from each source.
e. Each frame carries 4 characters and 1 extra synchronising bit, so each frame is 4 × 8 + 1 = 33 bits.
f. The data rate of the link is therefore 250 × 33 = 8250 bps.

Digital signal service:
Telephone companies implement TDM through a hierarchy of digital signals, called digital signal (DS) service or the digital hierarchy.
- DS-0 is a single digital channel of 64 kbps.
- DS-1 is a 1.544-Mbps service. It can be used as a single service for 1.544-Mbps transmissions, or it can multiplex 24 DS-0 channels, or carry any other desired combination of these service types.
- DS-2 is a 6.312-Mbps service. It can be used as a single service for 6.312-Mbps transmissions, or it can multiplex 4 DS-1 channels, 96 DS-0 channels, or a combination of these service types.
- DS-3 is a 44.376-Mbps service. It can be used as a single service for 44.376-Mbps transmissions, or it can multiplex 7 DS-2 channels, 28 DS-1 channels, 672 DS-0 channels, or a combination of these service types.
- DS-4 is a 274.176-Mbps service. It can multiplex 6 DS-3 channels, 42 DS-2 channels, 168 DS-1 channels, 4032 DS-0 channels, or a combination of these service types.

Spread spectrum:
- In spread spectrum (SS) we combine signals from different sources to fit into a larger bandwidth, but the goals are to prevent eavesdropping and jamming.
- Spread spectrum achieves its goals through two principles:
  - The bandwidth allocated to each station needs to be larger than what is strictly needed.
  - The spreading process occurs after the signal is created by the source.

Frequency hopping spread spectrum (FHSS):
- The FHSS technique uses M different carrier frequencies that are modulated by the source signal. At one moment the signal modulates one carrier frequency; at the next moment it modulates another carrier frequency.
- Although the modulation is done using one carrier frequency at a time, M frequencies are used in the long run.
- The bandwidth occupied by a source after spreading is B_FHSS >> B.
- Bandwidth spreading: a pseudorandom code generator, called pseudorandom noise (PN), creates a k-bit pattern for every hopping period T_h. The frequency table uses the pattern to find the frequency to be used for this hopping period and passes it to the frequency synthesizer. The frequency synthesizer creates a carrier signal of that frequency, and the source signal modulates the carrier signal.

Direct sequence spread spectrum (DSSS):
- The DSSS technique also expands the bandwidth of the original signal, but the process is different.
- Each data bit is replaced by a code of n bits, called chips, where the chip rate is n times the data bit rate.
- DSSS example: consider the sequence used in a wireless LAN, the famous Barker sequence, where n is 11. The spreading code is 11 chips with the pattern 10110111000 (in this case). Each original data bit is combined with (multiplied by) the chips to obtain the spread signal.
- If the original signal rate is N, the rate of the spread signal is 11N. This means that the required bandwidth for the spread signal is 11 times larger than the bandwidth of the original signal. (A small spreading sketch follows below.)
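A minimal sketch of DSSS spreading with the 11-chip Barker sequence mentioned above. Here the "multiplication" of a data bit by the chips is modelled as XOR on 0/1 values, which is the equivalent operation in that representation; this convention and the despreading rule are simplifying assumptions.

```python
# Illustrative DSSS spreading/despreading with an 11-chip Barker code.

BARKER_11 = "10110111000"

def dsss_spread(data_bits, chips=BARKER_11):
    """Replace every data bit by 11 chips; the output rate is 11x the data rate."""
    spread = []
    for bit in data_bits:
        for chip in chips:
            spread.append(str(int(bit) ^ int(chip)))   # bit "multiplied" by chip
    return "".join(spread)

def dsss_despread(spread_bits, chips=BARKER_11):
    """Recover each data bit by correlating each 11-chip block with the code."""
    n, out = len(chips), []
    for i in range(0, len(spread_bits), n):
        block = spread_bits[i:i + n]
        matches = sum(b == c for b, c in zip(block, chips))
        out.append("0" if matches == n else "1")       # unchanged block -> original bit 0
    return "".join(out)

data = "101"
spread = dsss_spread(data)
assert len(spread) == 11 * len(data) and dsss_despread(spread) == data
```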
Flow control:
- Flow control is a technique for ensuring that a transmitting entity does not overwhelm a receiving entity with data.
- The receiving entity typically allocates a data buffer of some maximum length for a transfer. In the absence of flow control, the receiver's buffer may fill up and overflow while it is still processing old data.

Stop-and-wait flow control:
- The simplest form of flow control is known as stop-and-wait flow control.
- A source entity transmits a frame. After the destination entity receives the frame, it sends back an acknowledgment for the frame just received.
- The source must wait until it receives the acknowledgment before sending the next frame.

Breaking up larger frames:
The source will often break a large block of data into smaller blocks and transmit the data in many frames. This is done for the following reasons:
- The buffer size of the receiver may be limited.
- With smaller frames, errors are detected sooner and a smaller amount of data needs to be retransmitted.
- On a shared medium, such as a LAN, it is undesirable to permit one station to occupy the medium for an extended period.

Sliding-window flow control:
- Suppose two stations, A and B, are connected via a full-duplex link. Station B allocates buffer space for W frames; thus A is allowed to send W frames without waiting for any acknowledgments.
- B acknowledges a frame by sending an acknowledgment that includes the sequence number of the next frame expected. This acknowledgment also implicitly announces that B is prepared to receive the next W frames, beginning with the number specified.
- Each time a frame is sent, the sender's window shrinks; each time an acknowledgment is received, the window grows.
- A maintains a list of sequence numbers that it is allowed to send, and B maintains a list of sequence numbers that it is prepared to receive. Each of these lists can be thought of as a window of frames; the operation is referred to as sliding-window flow control.

Error control:
- Error control refers to mechanisms to detect and correct errors that occur in the transmission of frames.

Stop-and-wait ARQ:
- Stop-and-wait ARQ is based on the stop-and-wait flow control technique.
- The source station transmits a single frame and then must await an acknowledgment (ACK). No other data frames can be sent until the acknowledgment is received.
- Two sorts of errors can occur:
  - First, the frame that arrives at the destination could be damaged. After a frame is transmitted, the source station waits for an acknowledgment; if no acknowledgment is received by the time the timer expires, the same frame is sent again.
  - The second sort of error is a damaged acknowledgment. If the ACK is damaged it will not be recognised by A, which will therefore time out and resend the same frame. This duplicate frame arrives and is accepted by B. To avoid this problem, frames are alternately labelled 0 or 1, and positive acknowledgments are of the form ACK0 and ACK1. In keeping with the sliding-window convention, an ACK0 acknowledges receipt of a frame numbered 1 and indicates that the receiver is ready for a frame numbered 0. (A small sender sketch follows below.)
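A minimal sketch of a stop-and-wait ARQ sender following the description above: send one frame, wait for the matching alternating ACK, and retransmit on a missing or wrong acknowledgment. The channel function is a stand-in for a real link, and the loss probability is an illustrative assumption.

```python
# Illustrative stop-and-wait ARQ sender with alternating 0/1 frame numbers.
import random

random.seed(0)   # deterministic demo

def send_stop_and_wait(frames, channel, max_tries=10):
    seq = 0                                      # frames alternately labelled 0 / 1
    for payload in frames:
        for _ in range(max_tries):
            ack = channel(seq, payload)          # transmit the frame and wait for an ACK
            if ack == f"ACK{1 - seq}":           # ACK1 acknowledges frame 0, and vice versa
                break                            # acknowledged: move on to the next frame
            # damaged/missing ACK or timeout: retransmit the same frame
        else:
            raise RuntimeError("link failed: no acknowledgment")
        seq = 1 - seq

def lossy_channel(seq, payload):
    """Stand-in link: loses the acknowledgment 30% of the time."""
    return None if random.random() < 0.3 else f"ACK{1 - seq}"

send_stop_and_wait(["frame A", "frame B", "frame C"], lossy_channel)
print("all frames acknowledged")
```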
Go-back-N ARQ:
- A form of error control based on sliding-window flow control.
- In Go-back-N we can send several frames before receiving acknowledgments, but the receiver can buffer only one frame (the receiver window size is 1).
- If there is no error, the destination sends a positive acknowledgment (RR = receive ready).
- If the destination detects an error in a frame, it may send a negative acknowledgment (REJ = reject) for that frame. The destination then discards that frame and all future incoming frames until the frame in error is correctly received.
- The source station, when it receives a REJ, must retransmit the frame in error plus all succeeding frames that were transmitted in the interim (meantime).
- Go-back-N ARQ example: (figure)
- Note: because of the propagation delay on the line, by the time an acknowledgment (positive or negative) arrives back at the sending station, it has already sent at least one additional frame beyond the one being acknowledged.

Selective-reject ARQ:
- With selective-reject ARQ, the only frames retransmitted are those that receive a negative acknowledgment, in this case called SREJ, or those that time out.
- Selective reject is more efficient than Go-back-N because it minimises the amount of retransmission.
- On the other hand, the receiver must maintain a buffer large enough to hold frames received out of order, and the transmitter requires more complex logic to be able to send a frame out of sequence.
- Because of such complications, selective-reject ARQ is much less widely used than Go-back-N ARQ.
- Note: for a k-bit sequence number field, which provides a sequence number range of 2^k, the maximum window size is limited to 2^k - 1.

Difference between Go-back-N and selective repeat protocols:
- Window size: the receiver window size of the Go-back-N protocol is 1; in selective repeat the maximum window size is 2^k - 1 for a k-bit sequence number field.
- Acknowledgment: in Go-back-N, acknowledgments are cumulative; in selective repeat, acknowledgments are individual.
- Retransmission: in Go-back-N, if a corrupt frame is received, the entire window is retransmitted; in selective repeat, the receiver immediately sends a negative acknowledgment and only the selected frame is retransmitted.
- Complexity: the Go-back-N protocol is less complex; the selective repeat protocol is more complex.
(A Go-back-N sender sketch follows below.)
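A minimal sketch of the Go-back-N retransmission rule described above, under a simplified model: the sender transmits a window of frames, the receiver accepts them in order until one is damaged and discards the rest, and the sender then goes back and resends from the damaged frame onward.

```python
# Illustrative Go-back-N: cumulative acknowledgments and go-back-on-reject.

def go_back_n(frames, window, frame_ok):
    """frame_ok(seq) -> True if frame `seq` arrives undamaged on this transmission."""
    base, sent_log = 0, []
    while base < len(frames):
        outstanding = list(range(base, min(base + window, len(frames))))
        sent_log.extend(outstanding)                 # transmit the whole window
        for seq in outstanding:
            if frame_ok(seq):
                base = seq + 1                       # cumulative acknowledgment (RR)
            else:
                break                                # REJ: sender goes back to frame `base`
    return sent_log

failures = {1}                                       # frame 1 is damaged on its first try only
def frame_ok(seq):
    if seq in failures:
        failures.remove(seq)
        return False
    return True

print(go_back_n(["f0", "f1", "f2", "f3", "f4"], window=3, frame_ok=frame_ok))
# -> [0, 1, 2, 1, 2, 3, 4]: frames 1 and 2 are retransmitted after the REJ for frame 1
```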
HDLC protocol:
High-level Data Link Control (HDLC) is a bit-oriented protocol for communication over point-to-point and multipoint links.
- Note:
  - Point-to-point link: a communication link established between two devices or nodes, creating a direct, dedicated connection between them.
  - Multipoint link: also known as a broadcast or shared medium; a communication link that connects multiple devices to a common communication medium or channel.

Basic characteristics:
The three station types are:
- Primary station:
  - Responsible for controlling the operation of the link.
  - Frames issued by the primary are called commands.
- Secondary station:
  - Operates under the control of the primary station.
  - Frames issued by a secondary are called responses.
- Combined station:
  - A station that can operate both as a primary station and as a secondary station within the same communication link.

The two link configurations are:
- Unbalanced configuration: consists of one primary and one or more secondary stations and supports both full-duplex and half-duplex transmission.
- Balanced configuration: consists of two combined stations and supports both full-duplex and half-duplex transmission.
- Note:
  - Half-duplex: data can flow in only one direction at a time.
  - Full-duplex: data can flow in both directions simultaneously.

The three data transfer modes are:
- Normal response mode (NRM): defines how a primary station and secondary stations interact and exchange data; the secondary stations transmit only in response to the primary.
- Asynchronous balanced mode (ABM): used with a balanced configuration. Either of the combined stations may initiate transmission without receiving permission from the other combined station.
- Asynchronous response mode (ARM): the configuration is unbalanced, just as in NRM, but the secondary station may initiate transmission independently of the primary station.

Frames in HDLC:
1. Information frames (I-frames):
   - Used to carry user data between the communicating stations.
   - Their primary purpose is transferring data from the sender (transmitter) to the receiver.
2. Supervisory frames (S-frames):
   - Used to manage the flow of information frames and control various aspects of communication.
   - Play a key role in link supervision and management.
3. Unnumbered frames (U-frames):
   - Used for various control purposes, including link establishment, initialisation and termination.
   - Also manage administrative functions.

Frame format of the HDLC protocol (fields):
- Flag fields: delimit the frame at both ends with the unique pattern 01111110. A single flag may be used as the closing flag for one frame and the opening flag for the next.
- Address field: identifies the secondary station that is to receive the frame. This field is not needed for point-to-point links but is always included for the sake of uniformity.
- Control field: HDLC defines three types of frames, each with a different control field format:
  - Information frames: carry user data between the communicating stations.
  - Supervisory frames: manage the flow of information frames and control various aspects of communication.
  - Unnumbered frames: used for various control purposes, including link establishment, initialisation and termination.
- Information field: present only in I-frames and some U-frames. The field can contain any sequence of bits but must consist of an integral number of octets.
- Frame check sequence field: the frame check sequence (FCS) is an error-detecting code calculated from the remaining bits of the frame, exclusive of the flags.

Note (just for reference) on the P/F bit:
- In command frames sent by the primary station, the bit acts as the Poll (P) bit: when set to 1, the primary is essentially "polling" (asking) the secondary stations for a response.
- In response frames sent by a secondary station, the same bit acts as the Final (F) bit: the secondary sets it in the frame that conveys its final response or data to the primary's poll. This final response may include the requested data or an acknowledgment.
HDLC operations:
Link setup and disconnect:
Set asynchronous balanced/extended mode (SABM, SABME): (Command) Set mode; extended = 7-bit sequence numbers.
Unnumbered Acknowledgment (UA): (Response) Acknowledge acceptance of one of the set-mode commands.
Disconnect (DISC): (Command) Terminate the logical link connection.
Two-way data exchange:
The N(S) and N(R) fields of the I-frame are sequence numbers that support flow control and error control.
N(S) is the sequence number of the frame being sent.
N(R) is the acknowledgment for I-frames received.
The receive ready (RR) frame acknowledges the last I-frame received by indicating the next I-frame expected.
Busy condition:
A issues an RNR, which requires B to halt transmission of I-frames.
The station receiving the RNR will usually poll the busy station at some periodic interval by sending an RR with the P bit set.
This requires the other side to respond with either an RR or an RNR.
When the busy condition has cleared, A returns an RR, and I-frame transmission from B can resume.
Reject recovery:
A transmits I-frames numbered 3, 4, and 5. Number 4 suffers an error and is lost.
When B receives I-frame number 5, it discards this frame because it is out of order and sends a REJ with an N(R) of 4.
This causes A to initiate retransmission of I-frames previously sent, beginning with frame 4. A may continue to send additional frames after the retransmitted frames.
Timeout recovery:
A transmits I-frame number 3 as the last in a sequence of I-frames. The frame suffers an error.
A, however, would have started a timer as the frame was transmitted. This timer has a duration long enough to span the expected response time.
When the timer expires, A polls B with an RR frame with the P bit set; B's response indicates the next frame it expects (N(R) = 3), and A retransmits I-frame 3.
Code-division multiple access:
In CDMA, one channel carries all transmissions simultaneously.
Let us assume we have four stations, 1, 2, 3, and 4, connected to the same channel. The data from station 1 are d1, from station 2 are d2, and so on.
The code assigned to the first station is c1, to the second is c2, and so on.
We assume that the assigned codes have two properties:
○ If we multiply each code by another, we get 0.
○ If we multiply each code by itself, we get 4 (the number of stations).
Station 1 multiplies its data by its code to get d1 ⋅ c1. Station 2 multiplies its data by its code to get d2 ⋅ c2, and so on.
The data that go on the channel are the sum of all these terms, as shown in the box.
Suppose station 2 wants to receive the data of station 1. It multiplies the data on the channel by c1, the code of station 1.
Because c1 ⋅ c1 is 4 while c2 ⋅ c1, c3 ⋅ c1, and c4 ⋅ c1 are all 0, the only term that survives is 4 ⋅ d1, so station 2 divides the result by 4 to get the data from station 1.
Chips: the sequence of code assigned to a station.
CDMA illustration with an example:
(Explain: multiplication of the code with the data, addition of the individual terms, and how a station recovers the data of another station — a small numerical sketch follows the Walsh table below.)
Walsh table:
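Here is the promised numerical sketch of the CDMA scheme, in Python. The four chip sequences are the usual 4-chip Walsh codes and the data bits are invented for illustration; the two code properties stated above (one code times another sums to 0, a code times itself sums to 4) are exactly what make the decoding step work.

# Minimal CDMA sketch for four stations using 4-chip Walsh codes.
c = [
    [+1, +1, +1, +1],   # c1
    [+1, -1, +1, -1],   # c2
    [+1, +1, -1, -1],   # c3
    [+1, -1, -1, +1],   # c4
]
d = [+1, -1, +1, -1]    # example data bits d1..d4 (illustrative values)

# Each station multiplies its data bit by its code; the channel carries the sum.
channel = [sum(d[i] * c[i][chip] for i in range(4)) for chip in range(4)]

# To receive station 1's data, multiply the channel by c1 and divide by 4.
d1 = sum(channel[chip] * c[0][chip] for chip in range(4)) / 4
print(channel, d1)      # [0, 4, 0, 0] 1.0  -> recovers d1 = +1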
Circuit switching:
A circuit-switched network is made of a set of switches connected by physical links, in which each link is divided into n channels.
In circuit switching, the resources need to be reserved during the setup phase.
The resources remain dedicated for the entire duration of data transfer until the teardown phase.
Data transferred between the two stations are not packetized.
Illustration:
When end system A needs to communicate with end system M, system A needs to request a connection to M that must be accepted by all switches as well as by M itself. This is called the setup phase.
A circuit (channel) is reserved on each link, and the combination of circuits or channels defines the dedicated path.
After the dedicated path is established, the data-transfer phase can take place.
After all data has been transferred, the circuits are torn down.
Advantages:
Dedicated paths
Fixed bandwidth
Fixed data rate
Suitable for long continuous communication
Disadvantages:
High setup time
Inefficient resource utilisation
Circuit switching is not easily scalable
Not ideal for data transmission
Three phases of circuit switching:
Setup phase:
Resource reservation: During this setup phase, a channel is reserved on each link.
Path establishment: The combination of these reserved channels creates a dedicated path between the source and the destination.
Data transfer phase:
Uninterrupted data flow: With the established and dedicated path, the actual data transfer takes place without interruptions.
Continuous connection: Throughout the data transfer phase, the resources are dedicated to this communication session, hence ensuring continuous transmission.
Teardown phase:
Circuit release: The circuits that were reserved for this particular communication session are released.
Return of resources: After teardown, the resources that were previously dedicated to this communication session become available for other potential communication sessions.
Circuit switching concepts:
Digital switch: The function of the digital switch is routing and managing circuit-switched voice and data traffic.
Network interface: In a circuit-switched node, a "network interface" refers to the hardware or components that facilitate the connection between the node and the circuit-switched network.
Control unit: It performs three general tasks:
○ It establishes connections. This is generally done on demand.
○ Maintain the connection.
○ Tear down the connection.
Space division switching:
Key features:
1. Dedicated communication paths: Space division switching creates dedicated communication paths between input and output ports.
2. Independence of channels: Each communication path or channel operates independently.
3. Non-blocking architecture: Any input can be connected to any output without conflicts.
4. Crossbar switches: Common implementations of space division switching use crossbar switches.
Disadvantages:
The number of crosspoints grows with the square of the number of attached stations. This is costly for a large switch.
The loss of a crosspoint prevents connection between the two devices whose lines intersect at that crosspoint.
The crosspoints are inefficiently utilised; even when all of the attached devices are active, only a small fraction of the crosspoints are engaged.
To overcome these limitations, multiple-stage switches are employed. A multiple-stage switch has two advantages:
○ The number of crosspoints is reduced, increasing crossbar utilisation.
○ There is more than one path through the network to connect two endpoints, increasing reliability.
Note: A consideration with a multistage space division switch is that it may be blocking.
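A small back-of-the-envelope computation (my own illustration, with made-up switch sizes) shows the two crossbar drawbacks listed above: the crosspoint count grows with the square of the number of stations, while at most N/2 crosspoints can be in use even when every station is engaged in a connection.

# Crosspoint count and best-case utilisation of a single-stage N x N crossbar.
for n in (16, 128, 1024):
    crosspoints = n * n              # grows with the square of the stations
    in_use = n // 2                  # at most N/2 simultaneous pairwise connections
    print(n, crosspoints, f"{in_use / crosspoints:.4%}")
# 16   ->     256 crosspoints, 3.1250% engaged at best
# 128  ->   16384 crosspoints, 0.3906% engaged at best
# 1024 -> 1048576 crosspoints, 0.0488% engaged at best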
Time division switching:
Time division switching works by dividing the available transmission time into discrete time slots.
Each time slot is allocated to a specific user for a short duration.
Time Slot Interchange (TSI) is a crucial element in time division switching.
The primary purpose of Time Slot Interchange is to enable the dynamic and flexible routing of time slots.
Time slot interchange:
Let us assume the data from input lines I and J are a and b, respectively.
These are fed to the Time Slot Interchange system.
The routing logic can rearrange the time slots based on specific routing instructions.
After the time slots have been rearranged, the TSI system sends them to the output as a new TDM signal.
Softswitch architecture:
In any telephone network switch, the most complex element is the software that controls call processing.
Typically, this software runs on a proprietary (custom-designed) processor that is integrated with the physical circuit-switching hardware.
A more flexible approach is to physically separate the call-processing function from the hardware-switching function.
In softswitch terminology, the physical-switching function is performed by a media gateway (MG) and the call-processing logic resides in a media gateway controller (MGC).
Packet switching principles:
Brief summary of packet switching operations:
Data are transmitted in short packets. A typical upper bound on packet length is 1000 octets (bytes).
If a source has a longer message to send, the message is broken up into a series of packets.
Each packet contains a portion of the user's data plus some control information.
At each node, the packet is received, stored briefly, and passed on to the next node.
Depiction of a simple packet switching network:
Consider a packet to be sent from station A to station E.
The packet includes control information that indicates that the intended destination is E.
The packet is sent from A to node 4. Node 4 stores the packet, determines the next leg of the route (say 5), and queues the packet to go out on that link (the 4-5 link).
When the link is available, the packet is transmitted to node 5, which forwards the packet to node 6, and finally to E.
Advantages of packet switching:
Efficient bandwidth utilisation: This approach allows for more efficient use of available bandwidth because packets can be transmitted as soon as they are ready.
Flexibility: Packet switching is highly adaptable and versatile. It can handle various data types, including voice, video, and text, with differing bandwidth requirements.
Scalability: As network traffic grows, additional network nodes and links can be added without significant disruption.
Cost-effective: Packet switching is typically more cost-effective than circuit switching because it makes efficient use of network resources.
Datagram networks:
Packets in this approach are referred to as datagrams.
Each packet is treated independently of all others.
In this example, all four packets (or datagrams) belong to the same message, but may travel different paths to reach their destination.
This is so because the links may be involved in carrying packets from other sources and do not have the necessary bandwidth available to carry all the packets from A to X.
This approach can cause the datagrams of a transmission to arrive at their destination out of order, with different delays between the packets.
Packets may also be lost or dropped because of a lack of resources.
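As a small illustration of the datagram behaviour just described, the sketch below sends four datagrams of one message from A to X over different paths; the intermediate nodes and delay values are invented for illustration. The packets arrive out of order, and the destination puts them back in order using their sequence numbers.

# Datagrams of one message may follow different paths and arrive out of order.
packets = [
    {"seq": 1, "path": "A-4-5-6-X", "delay_ms": 30},
    {"seq": 2, "path": "A-4-6-X",   "delay_ms": 18},   # shorter path, arrives first
    {"seq": 3, "path": "A-4-5-6-X", "delay_ms": 35},
    {"seq": 4, "path": "A-4-6-X",   "delay_ms": 22},
]

arrival_order = sorted(packets, key=lambda p: p["delay_ms"])
print([p["seq"] for p in arrival_order])          # [2, 4, 1, 3] -- out of order

# The destination reassembles the message by sequence number.
reassembled = sorted(arrival_order, key=lambda p: p["seq"])
print([p["seq"] for p in reassembled])            # [1, 2, 3, 4]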
Note: Datagram networks are sometimes referred to as connectionless networks. The term connectionless here means that the switch (packet switch) does not keep information about the connection state.
There are no setup or teardown phases.
Each packet is treated the same by a switch regardless of its source or destination.
Advantages of the datagram approach:
Flexibility: Each packet can take its own route to reach the destination. This allows for better load balancing and redundancy in case of network issues.
Scalability: It easily scales to accommodate varying network conditions.
Efficiency: Datagram communication allows for efficient use of network resources.
Note: In networking, overhead refers to the additional data or resources required to support the communication process beyond the actual user data being transmitted.
Virtual-circuit networks:
A virtual-circuit network is a cross between a circuit-switched network and a datagram network. It has some characteristics of both.
As in a circuit-switched network, there are setup and teardown phases in addition to the data-transfer phase.
As in a circuit-switched network, all packets follow the same path established during the connection.
Resources can be allocated during the setup phase, as in a circuit-switched network, or on demand, as in a datagram network.
As in a datagram network, data is packetized and each packet carries an address in the header.
Note: A virtual-circuit network is normally implemented in the data-link layer, while a circuit-switched network is implemented in the physical layer and a datagram network in the network layer. But this may change in the future.
Advantages of a virtual-circuit network:
Resource reservation: Resources are reserved during the call setup phase. This enables the network to allocate bandwidth, ensuring a certain level of service.
Reliable data delivery: Similar to circuit-switched networks, VCNs guarantee in-sequence, error-free packet delivery, which is particularly beneficial for applications such as voice and video calls.
Reduced overhead: The establishment of a virtual circuit minimises the need for extensive routing information in each packet.
Difference between circuit switching and packet switching:
Circuit switching | Packet switching
Requires a dedicated path before sending the data from source to destination | Does not require a dedicated path before sending the data from source to destination
Call setup is required | No call setup is required
Entire data follows the same route | Packets can follow any route
No store-and-forward transmission | Supports store-and-forward transmission
Wastage of bandwidth | No wastage of bandwidth
Delay in a circuit-switched network:
The total delay is due to the time needed to create the connection, transfer data, and disconnect the circuit.
The call request signal suffers both propagation delay and processing delay.
The call accept signal suffers only propagation delay, as the connection is already set up.
Data is transferred as an entire block without processing delay at each node.
The acknowledgement signal does not suffer a processing delay.
Delay in packet switching (datagram network):
Datagram packet switching does not require a call setup. Thus, for short messages, it will be faster than virtual-circuit packet switching and perhaps circuit switching.
However, because each individual datagram is routed independently, the processing for each datagram at each node may be longer than for virtual-circuit packets.
Thus, for long messages, the virtual-circuit technique may be superior.
Delay in packet switching (virtual-circuit network):
A virtual circuit is requested using a Call Request packet, which incurs a delay at each node.
The virtual circuit is accepted with a Call Accept packet.
In contrast to the circuit-switching call accept signal, the Call Accept packet suffers a processing delay at each node.
The reason is that this packet is queued at each node and must wait its turn for transmission.
Once the virtual circuit is established, the message is transmitted in packets.
In contrast to circuit switching, the acknowledgement packet also suffers a processing delay at each node.
Comparison of communication switching techniques:
Circuit switching | Datagram network | Virtual circuit network
Implemented in the physical layer | Implemented in the network layer | Implemented in the data-link layer
Delay during the call setup phase but no delay during data transmission | No call setup phase, but there is a delay during data transmission | There is a delay during both call setup and data transmission
Busy signal if the called party is busy | Sender may be notified if the packet is not delivered | Sender is notified of connection denial
A predefined path is set up between the sender and receiver before data transmission, and this path is dedicated entirely to the connection until the session ends | No predefined path is set up between the sender and receiver before data transmission | A predefined path is set up between the sender and receiver before data transmission. However, this path is virtual and may not be dedicated entirely to the connection as in the case of circuit switching.
Packet size and transmission time relationship:
In this example, it is assumed that there is a virtual circuit from station X through nodes a and b to station Y.
The message to be sent comprises 40 octets, and each packet contains 3 octets of control information, which is placed at the beginning of each packet and is referred to as a header.
○ 1-packet transmission: If the entire message is sent as a single packet of 43 octets (3 octets of header plus 40 octets of data), then the packet is first transmitted from station X to node a. When the entire packet is received, it can then be transmitted from a to b. When the entire packet is received at node b, it is then transferred to station Y. Ignoring switching time, the total transmission time is 129 octet-times (43 octets × 3 packet transmissions).
○ 2-packet transmission: Suppose now that we break up the message into two packets, each containing 20 octets of the message and, of course, 3 octets each of header, or control information. In this case, node a can begin transmitting the first packet as soon as it has arrived from X, without waiting for the second packet. Because of this overlap in transmission, the total transmission time drops to 92 octet-times.
○ 5-packet transmission: By breaking the message up into five packets, each intermediate node can begin transmission even sooner and the saving in time is greater, with a total of 77 octet-times for transmission.
○ 10-packet transmission: However, this process of using more and smaller packets eventually results in increased, rather than reduced, delay, as illustrated in Figure 9.13d. This is because each packet contains a fixed amount of header, and more packets mean more of these headers. (The computation below reproduces these numbers.)
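The octet-time figures above follow from a simple store-and-forward pipeline: with H hops and P equal-sized packets, the last packet arrives after (P + H − 1) packet-transmission times. A minimal Python restatement of that arithmetic (my own, matching the example's 40-octet message, 3-octet header, and 3 hops):

# Total transmission time (in octet-times) for a 40-octet message sent over
# 3 hops (X -> a -> b -> Y), with a 3-octet header on every packet.
MESSAGE, HEADER, HOPS = 40, 3, 3

def total_time(num_packets):
    data_per_packet = -(-MESSAGE // num_packets)        # ceiling division
    packet_len = data_per_packet + HEADER
    # Store-and-forward pipelining: the last packet arrives after
    # (num_packets + HOPS - 1) packet-transmission times.
    return (num_packets + HOPS - 1) * packet_len

for p in (1, 2, 5, 10):
    print(p, total_time(p))      # 1 -> 129, 2 -> 92, 5 -> 77, 10 -> 84 (delay rises again)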
Routing in packet-switching networks:
Routing: Determining the best path to carry the packet from source to destination.
Requirements for routing functions:
Correctness: The routing mechanism must ensure that packets are delivered accurately to the intended destination.
Simplicity: The routing system should be easy to implement and manage.
Robustness: It should be capable of finding alternative routes in case of failures without losing data or breaking connections.
Stability: While reacting to failures, the network should maintain stability, avoiding oscillations or drastic fluctuations.
Fairness: The routing strategy should be fair in its treatment of different data flows.
Optimality: The routing approach aims to optimize (make the best use of) network performance.
Efficiency: Routing should be performed with minimal overhead.
Routing strategies:
A routing strategy refers to the set of rules used to determine how data packets are forwarded from their source to their destination.
1. Fixed routing:
In fixed routing, a predetermined, unchanging path is established for each source-destination pair of nodes in a network.
This configuration is set either statically or changes only when there is a modification in the network topology (network topology is the physical or logical arrangement of the nodes and connections in a network).
This approach employs a central routing matrix that stores, for every source-destination pair of nodes, the identity of the next node on the route.
Routing tables are derived from the central matrix and stored at each node.
Each node stores a single column of the central routing directory, which indicates the next node to take for each destination.
(Explain the diagram if asked.)
2. Flooding:
Key points in the flooding process (a small simulation sketch follows the random-routing description below):
○ Initial transmission: The source node transmits the packet to all of its neighbours.
○ Propagation: Each receiving node further transmits the packet to its neighbours except the one from which it received the packet.
○ Duplicates and counters: To prevent infinite packet circulation, some mechanisms are necessary. Two common approaches are used:
a. Duplicate checking: Nodes keep track of the packets they have already retransmitted. If a duplicate packet arrives, it is discarded.
b. Hop count field: Each packet contains a field representing the maximum number of hops the packet can traverse. Each time a node forwards the packet, it decrements the hop count by one. When the hop count reaches zero, the packet is discarded.
(Just for understanding)
1. The label on each packet in the figure indicates the current value of the hop count field in that packet.
2. A packet is to be sent from node 1 to node 6 and is assigned a hop count of 3.
3. On the first hop, three copies of the packet are created, and the hop count is decremented to 2.
4. For the second hop of all these copies, a total of nine copies are created.
5. One of these copies reaches node 6, which recognizes that it is the intended destination and does not retransmit.
6. However, the other nodes generate a total of 22 new copies for their third and final hop. Each packet now has a hop count of 1.
7. Note that if a node is not keeping track of packet identifiers, it may generate multiple copies at this third stage.
8. All packets received from the third hop are discarded, because the hop count is exhausted.
3. Random routing:
In random routing, a node selects only one outgoing path for retransmission of an incoming packet.
The outgoing link is chosen at random, excluding the link on which the packet arrived.
If all links are equally likely to be chosen, then a node may simply utilise outgoing links in a round-robin fashion.
A refinement of this technique is to assign a probability to each outgoing link and to select the link based on that probability.
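Returning to the flooding strategy: below is a minimal Python sketch of hop-count flooding with duplicate checking. The four-node topology and the hop limit are made up for illustration, but the mechanics (forward to every neighbour except the arrival link, never retransmit a packet already seen, stop when the hop count is exhausted) follow the description above.

# Hop-count flooding with duplicate checking on a small made-up graph.
from collections import deque

graph = {1: [2, 3], 2: [1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}   # illustrative topology

def flood(src, dst, max_hops):
    copies = 0
    seen = set()                          # duplicate checking: nodes that already retransmitted
    queue = deque([(src, None, max_hops)])
    while queue:
        node, came_from, hops = queue.popleft()
        if node == dst or hops == 0 or node in seen:
            continue                      # destination reached, hop count exhausted, or duplicate
        seen.add(node)
        for nbr in graph[node]:
            if nbr != came_from:          # never send back on the link of arrival
                copies += 1
                queue.append((nbr, node, hops - 1))
    return copies

print(flood(1, 4, max_hops=3))            # number of packet copies transmitted (6 here)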
Adaptive routing:
In virtually all packet-switching networks, some sort of adaptive routing technique is used. That is, the routing decisions are changed as conditions on the network change.
The principal conditions that influence routing decisions are:
○ Failure: When a node or link fails, it can no longer be used as part of a route.
○ Congestion: When a particular portion of the network is heavily congested, it is desirable to take other routes.
Advantages:
○ An adaptive routing strategy can improve performance.
○ An adaptive routing strategy can aid in congestion control.
Disadvantages:
○ The routing decision is more complex.
○ It consumes more bandwidth.
○ Dynamic routing requires more resources, such as CPU and RAM.
Dijkstra's algorithm:
Define:
N: the set of nodes in the network.
s: the source node.
T: the set of nodes incorporated so far by the algorithm.
w(i, j): the link cost from node i to node j.
L(n): the cost of the least-cost path from node s to node n.
The algorithm has three steps; steps 2 and 3 are repeated until T = N.
1. Initialization:
a. Start with only the source node in the set of incorporated nodes: T = {s}.
b. Set the initial path cost to each neighbouring node as its direct link cost from the source node: L(n) = w(s, n).
c. If a direct link does not exist, then L(n) = ∞.
2. Get next node:
a. Choose the neighbouring node not in T with the least-cost path from the source node.
b. Add this node to T and include the edge contributing to the least-cost path from the current set T to this newly added node.
3. Update least-cost paths:
a. For each node not yet in T, consider whether the path to that node via the newly added node provides a lower cost than the currently known path from the source node.
b. If a lower-cost path is found, update the cost along with the path for that node to this new minimum.
Example:
Bellman–Ford algorithm:
Define:
s: the source node.
w(i, j): the link cost from node i to node j in the graph.
h: the maximum number of links in a path at the current stage of the algorithm.
L_h(n): the cost of the least-cost path from node s to node n under the constraint of no more than h links.
Initialization:
L_0(n) = ∞ for all n ≠ s.
○ Initially, the path costs from the source node to all other nodes are set to infinity.
L_h(s) = 0 for all h.
○ The cost from the source node to itself is set to 0 for any allowed number of links.
Update:
For each successive h ≥ 0, and for each n ≠ s, compute
L_{h+1}(n) = min over j of [ L_h(j) + w(j, n) ]
Example:
L_{h+1}(n) is calculated by selecting the minimum of L_h(j) + w(j, n) over all nodes j that have a link to n.
Unit - 3
Congestion Control in Data Networks:
Congestion: Congestion refers to a situation where the demand for network resources exceeds the available capacity.
Effects of congestion:
The scenario describes a node with multiple I/O ports, each connected to other nodes or end systems.
Each port has two buffers: one for incoming packets and on