Summary

This document appears to be lecture notes or study material on noise, information theory, and data compression. It covers a range of concepts, formulas, and questions with corresponding answers, suitable for undergraduate-level study.

Full Transcript

Ans=a. Two resistors with resistances R1 and R2 are connected in parallel and have physical temperatures T1 and T2 respectively. The effective noise temperature Te of an equivalent resistor with resistance equal to the parallel combination of R1 and R2 is ……….
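The answer choices for this question are not reproduced in the transcript. As a rough check, the result can be obtained by adding the thermal-noise current spectral densities (4kT/R) of the two resistors; the short Python sketch below is my own illustration, not part of the original notes.

```python
# Hedged sketch (not from the notes): effective noise temperature of two
# resistors in parallel, obtained by summing their thermal-noise current PSDs.
# Each resistor contributes i_n^2 = 4*k*T_i/R_i; the parallel combination
# Rp = R1*R2/(R1+R2) must produce the same total, which gives
# Te = (T1*R2 + T2*R1) / (R1 + R2).

def parallel_noise_temperature(R1, T1, R2, T2):
    """Effective noise temperature (kelvin) of R1 at T1 in parallel with R2 at T2."""
    return (T1 * R2 + T2 * R1) / (R1 + R2)

# Example: with equal resistances, Te is simply the average of the two temperatures.
print(parallel_noise_temperature(50.0, 290.0, 50.0, 580.0))  # 435.0 K
```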

Ans=b Ans=c Ans=a

The collection of all the sample functions is known as: A) Ensemble B) Assemble C) Average D) Set

If the future value of a sample function can be predicted based on its past values, the process is known as: A) Deterministic process B) Non-Deterministic process C) Independent process D) Statistical process

In signal processing, white noise is a random signal having equal intensity at different frequencies, giving it a constant power spectral density.

In mathematics and in signal processing, the Hilbert transform is a specific linear operator that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). This linear operator is given by convolution with the function 1/(πt). In any event, "in-phase" and "quadrature" refer to two sinusoids that have the same frequency and are 90° out of phase.

Slide 2 – properties of narrow-band noise

Autocorrelation is a function which matches: A. Two same signals B. Two different signals C. One signal with its delayed version D. One to many

The concept of information entropy was introduced by Claude Shannon in his 1948 paper "A Mathematical Theory of Communication", and is also referred to as Shannon entropy. Shannon's theory defines a data communication system composed of three elements: a source of data, a communication channel, and a receiver. In information theory, the entropy of a random variable is the average level of "information", "surprise", or "uncertainty" inherent to the variable's possible outcomes. Given a discrete random variable X, which takes values in the alphabet 𝒳 and is distributed according to p : 𝒳 → [0, 1], the entropy is H(X) = − Σ_{x∈𝒳} p(x) log p(x).

The "fundamental problem of communication" – as expressed by Shannon – is for the receiver to be able to identify what data was generated by the source, based on the signal it receives through the channel. Shannon considered various ways to encode, compress, and transmit messages from a data source, and proved in his famous source coding theorem that the entropy represents an absolute mathematical limit on how well data from the source can be losslessly compressed onto a perfectly noiseless channel. Shannon strengthened this result considerably for noisy channels in his noisy-channel coding theorem.

The core idea of information theory is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative.
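As a concrete illustration of the entropy formula quoted above, here is a minimal Python sketch (my own, not part of the notes) that computes H(X) in bits for a discrete distribution.

```python
# Hedged sketch (not from the notes): Shannon entropy of a discrete
# distribution, H(X) = -sum(p * log2(p)), measured in bits per symbol.
import math

def shannon_entropy(probs):
    """Entropy in bits of a distribution given as a list of probabilities p(x)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit of surprise per toss; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # ~0.469
```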
For instance, the knowledge that some particular number will not be the winning number of a lottery provides very little information, because any particular chosen number will almost certainly not win. However, knowledge that a particular number will win a lottery has high informational value because it communicates the outcome of a very low probability event.

The information content, also called the surprisal or self-information, of an event E is a function which increases as the probability p(E) of the event decreases. When p(E) is close to 1, the surprisal of the event is low, but if p(E) is close to 0, the surprisal of the event is high. This relationship is described by the function log(1/p(E)), where log is the logarithm, which gives 0 surprise when the probability of the event is 1. Hence, we can define the information, or surprisal, of an event E by I(E) = − log(p(E)) = log(1/p(E)).

Huffman coding is a simple and systematic way to design good variable-length codes given the probabilities of the symbols. The resulting code is both uniquely decodable and instantaneous (prefix-free). Huffman coding is used in many applications. For example, it is a component of standard image compression (JPEG), video compression (MPEG), and the codes used in fax machines.

DATA COMPRESSION

Data compression, also known as source coding, is the process of encoding or converting data in such a way that it consumes less memory space. Data compression reduces the number of resources required to store and transmit data. It can be done in two ways: lossless compression and lossy compression. Lossy compression reduces the size of data by removing unnecessary information, while there is no data loss in lossless compression.

WHAT IS SHANNON FANO CODING?

The Shannon-Fano algorithm is an entropy encoding technique for lossless data compression of multimedia. Named after Claude Shannon and Robert Fano, it assigns a code to each symbol based on its probability of occurrence. It is a variable-length encoding scheme, that is, the codes assigned to the symbols are of varying lengths. https://www.geeksforgeeks.org/shannon-fano-algorithm-for-data-compression/

1. Create a list of probabilities or frequency counts for the given set of symbols so that the relative frequency of occurrence of each symbol is known.
2. Sort the list of symbols in decreasing order of probability, the most probable ones to the left and the least probable ones to the right.
3. Split the list into two parts, with the total probability of both parts being as close to each other as possible.
4. Assign the value 0 to the left part and 1 to the right part.
5. Repeat steps 3 and 4 for each part until every symbol has been split into an individual subgroup.

The Shannon codes are considered accurate if the code of each symbol is unique.

EXAMPLE: The given task is to construct Shannon codes for the given set of symbols A–E, with probabilities 0.22, 0.28, 0.15, 0.30, and 0.05 respectively, using the Shannon-Fano lossless compression technique. Solution:

1. Upon arranging the symbols in decreasing order of probability: P(D) + P(B) = 0.30 + 0.28 = 0.58 and P(A) + P(C) + P(E) = 0.22 + 0.15 + 0.05 = 0.42. Since these two parts split the total probability almost equally, the table is divided into {D, B} and {A, C, E}, which are assigned the values 0 and 1 respectively.
2. Now, in the {D, B} group, P(D) = 0.30 and P(B) = 0.28, which means that P(D) ≈ P(B), so divide {D, B} into {D} and {B} and assign 0 to D and 1 to B.
3. In the {A, C, E} group, P(A) = 0.22 and P(C) + P(E) = 0.20, so the group is divided into {A} and {C, E}, and they are assigned the values 0 and 1 respectively.
4. In the {C, E} group, P(C) = 0.15 and P(E) = 0.05, so divide them into {C} and {E} and assign 0 to {C} and 1 to {E}.

Note: The splitting now stops, as each symbol has been separated.
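The five steps above can be implemented directly with a recursive split. The following Python sketch is my own illustration (not code from the notes), applied to the example probabilities:

```python
# Hedged sketch (my own illustration, following the steps listed above):
# recursive Shannon-Fano coding applied to the example symbols.

def shannon_fano(symbols, prefix=""):
    """symbols: list of (symbol, probability) sorted by decreasing probability."""
    if len(symbols) == 1:
        return {symbols[0][0]: prefix or "0"}
    total = sum(p for _, p in symbols)
    # Find the split point where the two parts' totals are as close as possible.
    running, best_split, best_diff = 0.0, 1, float("inf")
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        diff = abs(running - (total - running))
        if diff < best_diff:
            best_diff, best_split = diff, i
    codes = {}
    codes.update(shannon_fano(symbols[:best_split], prefix + "0"))  # left part -> 0
    codes.update(shannon_fano(symbols[best_split:], prefix + "1"))  # right part -> 1
    return codes

table = [("D", 0.30), ("B", 0.28), ("A", 0.22), ("C", 0.15), ("E", 0.05)]
print(shannon_fano(table))
# {'D': '00', 'B': '01', 'A': '10', 'C': '110', 'E': '111'}
```

Because each split only ever appends bits to a group's existing prefix, no code word can be a prefix of another, which is why the resulting code is instantaneously decodable.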
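For comparison with the Huffman coding paragraph earlier in the transcript, here is an equally short, hedged sketch of the Huffman construction (again my own illustration, not from the notes), run on the same probabilities:

```python
# Hedged sketch (not from the notes): Huffman coding for the same symbol
# probabilities used in the Shannon-Fano example above.
import heapq
import itertools

def huffman(prob_table):
    """prob_table: dict of symbol -> probability; returns dict of symbol -> code."""
    counter = itertools.count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(counter), {s: ""}) for s, p in prob_table.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)  # the two least probable subtrees
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

print(huffman({"D": 0.30, "B": 0.28, "A": 0.22, "C": 0.15, "E": 0.05}))
# e.g. {'E': '000', 'C': '001', 'A': '01', 'B': '10', 'D': '11'}
```

For this particular distribution, Huffman happens to produce the same code lengths as the Shannon-Fano example (2, 2, 2, 3, 3 bits), an average of 2.2 bits per symbol against an entropy of roughly 2.14 bits, consistent with the source coding theorem quoted above.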
The channel capacity (C) is defined to be the maximum rate at which information can be transmitted through a channel. The fundamental theorem of information theory says that at any rate below channel capacity, an error control code can be designed whose probability of error is arbitrarily small. An application of the channel capacity concept to an additive white Gaussian noise (AWGN) channel with B Hz bandwidth and signal-to-noise ratio S/N is the Shannon–Hartley theorem: C = B log2(1 + S/N), where C is measured in bits per second, B is in hertz, and the signal and noise powers S and N are expressed in a linear power unit (like watts or volts²). In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise.

If two-port networks are cascaded, then the corresponding equivalent (effective) noise temperature of the cascade is obtained as Te = ………., where Te1 and Te2 are the effective noise temperatures of the two two-port networks respectively and ga1 is the available gain of the first network. a) Te1 + Te2 b) Te1 + (Te2 / ga1) c) Te2 + (Te1 ga1) d) Te1 − Te2. Ans=b
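As a numerical illustration of the Shannon–Hartley formula above, the following short Python sketch (my own, not part of the notes) computes the capacity of a hypothetical telephone-grade channel:

```python
# Hedged sketch (not from the notes): Shannon-Hartley capacity of an AWGN
# channel, C = B * log2(1 + S/N), with B in hertz and S, N in linear power units.
import math

def channel_capacity(bandwidth_hz, signal_power, noise_power):
    """Maximum error-free information rate in bits per second."""
    return bandwidth_hz * math.log2(1 + signal_power / noise_power)

# Example: a 3.1 kHz channel with a 30 dB signal-to-noise ratio.
snr_linear = 10 ** (30 / 10)                 # 30 dB -> 1000 in linear terms
print(channel_capacity(3100, snr_linear, 1))  # ~30.9 kbit/s
```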
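The answer to the cascade question, Te = Te1 + Te2/ga1, is the two-stage case of Friis' formula for noise temperature; the sketch below (my own illustration, not from the notes) generalizes it to any number of stages:

```python
# Hedged sketch (not from the notes): effective noise temperature of a cascade
# of two-port networks (Friis' formula). The transcript's answer (b) is the
# two-stage case: Te = Te1 + Te2 / ga1.

def cascade_noise_temperature(stages):
    """stages: list of (Te_i, ga_i) pairs in cascade order; returns overall Te in kelvin."""
    total_te, gain_so_far = 0.0, 1.0
    for te, ga in stages:
        total_te += te / gain_so_far   # each stage is referred back through the preceding gain
        gain_so_far *= ga
    return total_te

# Two stages: a low-noise amplifier (Te1 = 50 K, gain 100) followed by a
# noisier stage (Te2 = 500 K); the second stage adds only 500/100 = 5 K.
print(cascade_noise_temperature([(50.0, 100.0), (500.0, 10.0)]))  # 55.0 K
```

This is why the first stage of a receiver chain dominates the overall noise temperature: a high available gain in the first network suppresses the noise contribution of everything after it.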
