Module 6: Optimum Reception of Digital Signal
Summary
These notes detail the optimum reception of digital signals, focusing on baseband signal receivers, signal-to-noise ratio (SNR) calculations, and Gaussian noise considerations. The material covers topics like integrator design and analysis in the presence of noise.
# Module 6: Optimum Reception of Digital Signal

## Baseband Signal Receiver

- Consider a baseband signal where logic 1 is represented by +V for time T<sub>b</sub> and logic 0 is represented by -V for time T<sub>b</sub>.
- At the receiver, we are not interested in preserving the complete waveform; we only want to know whether the transmitted level was +V or -V.
- At a baseband receiver, the received signal is sampled once during each T<sub>b</sub> period, and the value of that sample decides whether the bit is '0' or '1'.
- For example, suppose the transmitted bit is '1' (level +V), but at the sampling instant the noise drives the received signal negative.
- As the sampled value is negative, the bit is detected as '0'; the received bit is in error. The stronger the noise relative to the signal, the higher the probability of such errors.
- Therefore, the signal strength should be high enough to overcome these errors, i.e. the (S/N) ratio should be high.

## S/N Ratio of Integrator

- Consider an input signal with additive white Gaussian noise n(t) applied to an integrator.
- Why Gaussian noise? Other noise sources can be avoided by taking proper precautions, but thermal noise, caused by the thermal motion of electrons, cannot be eliminated. Thermal noise has a Gaussian probability density function.
- The integrator integrates the received signal over the bit period T, and the integrated value is then sampled at the receiver.
- The decision-making device makes its decision on this integrated sample, which decreases the probability of error.
- The integrating process increases the S/N ratio and thus decreases the probability of error.
- An optimum receiver is one that maximizes the S/N ratio and minimizes the probability of error (Pe).
- The input to the integrator is s(t) + n(t).
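As a rough numerical illustration (not part of the original notes), the sketch below simulates the integrate-and-dump idea: deciding each bit from a single noisy sample versus deciding from the average over the whole bit period. The level V, the number of samples per bit, and the noise strength are illustrative assumptions.

```python
# Sketch: integrate-and-dump vs. single-sample detection of bipolar bits.
# V, N (samples per bit), sigma, and n_bits are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
V, N, n_bits = 1.0, 100, 20000
sigma = 2.0                                   # per-sample noise std (strong noise)

bits = rng.integers(0, 2, n_bits)             # random 0/1 data
levels = np.where(bits == 1, V, -V)           # logic 1 -> +V, logic 0 -> -V
rx = levels[:, None] + sigma * rng.standard_normal((n_bits, N))

single = rx[:, N // 2] > 0                    # decide from one mid-bit sample
integrated = rx.mean(axis=1) > 0              # integrate (average) over T, then threshold

err_single = np.mean(single != (bits == 1))
err_integr = np.mean(integrated != (bits == 1))
print(err_single, err_integr)                 # integration gives far fewer errors
```

Averaging N independent noise samples shrinks the effective noise standard deviation by √N, which is the discrete counterpart of the SNR growing with the integration time T in the derivation that follows.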
- Noise n(t) is considered white Gaussian noise with constant PSD $\frac{n^2}{2}$ (in the notation of these notes, $n^2$ is the parameter characterizing the noise level).
- The output of the integrator is
  - $v_o(T)=\frac{1}{\tau}\int_{0}^{T}[s(t)+n(t)]\,dt$, where τ = RC is the integrator time constant.
- $v_o(T)=s_o(T)+n_o(T)$, with
  - $s_o(T)=\frac{1}{\tau}\int_{0}^{T}s(t)\,dt$
  - $n_o(T)=\frac{1}{\tau}\int_{0}^{T}n(t)\,dt$
- For a constant input level s(t) = V over the bit interval, the signal output is $s_o(T)=\frac{VT}{\tau}$, so the signal power is $\left(\frac{VT}{\tau}\right)^2$.
- For the noise we consider the variance of $n_o(T)$, which represents the noise power. For white noise of PSD $\frac{n^2}{2}$:
  - Noise power $=\sigma^2=\frac{1}{\tau^2}\int_{0}^{T}\frac{n^2}{2}\,dt=\frac{n^2T}{2\tau^2}$
- SNR $=\frac{\text{Signal power}}{\text{Noise power}}=\frac{(VT/\tau)^2}{n^2T/(2\tau^2)}=\frac{2V^2T}{n^2}$
- The SNR grows linearly with the integration time T, which is why integrating before sampling helps.

## Probability of Error

- Gaussian density (mean μ, variance σ²): $f(x)=\frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
- For zero-mean noise, μ = 0:
  - $f(x)=\frac{1}{\sqrt{2\pi \sigma^2}}e^{-\frac{x^2}{2\sigma^2}}$
- The integrated Gaussian noise sample therefore has density
  - $f(n_o(T)) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{[n_o(T)]^2}{2\sigma^2}}$
- An error occurs when the noise sample exceeds the signal output level $V_T=s_o(T)=\frac{VT}{\tau}$ in the wrong direction, so
  - Pe $=\int_{V_T}^{\infty}f(n_o(T))\,dn_o(T)=\frac{1}{\sqrt{2\pi}\,\sigma}\int_{V_T}^{\infty}e^{-\frac{[n_o(T)]^2}{2\sigma^2}}\,dn_o(T)$
- Substitute $x = \frac{n_o(T)}{\sqrt{2}\sigma}$, so $dx = \frac{dn_o(T)}{\sqrt{2}\sigma}$:
  - Pe $=\frac{1}{\sqrt{\pi}}\int_{x_0}^{\infty}e^{-x^2}\,dx$
- When $n_o(T)=\infty$, $x=\infty$.
- When $n_o(T)=V_T$, the lower limit becomes (using $\sigma=\frac{n}{\tau}\sqrt{\frac{T}{2}}$):
  - $x_0=\frac{V_T}{\sqrt{2}\sigma}=\frac{VT/\tau}{\sqrt{2}\,\frac{n}{\tau}\sqrt{T/2}}=\frac{V\sqrt{T}}{n}$
- Pe $=\frac{1}{\sqrt{\pi}}\int_{V\sqrt{T}/n}^{\infty}e^{-x^2}\,dx$
- By the definition of the complementary error function, $\frac{1}{\sqrt{\pi}}\int_{x_0}^{\infty}e^{-x^2}dx=\frac{1}{2}\,\mathrm{erfc}(x_0)$, so
  - Pe $=\frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V\sqrt{T}}{n}\right)=\frac{1}{2}\,\mathrm{erfc}\!\left(\frac{E_s}{n^2}\right)^{1/2}$, where $E_s = V^2T$ is the bit energy.
- Since erfc(x) ≤ 1 for x ≥ 0, the maximum value of the probability of error is one half. Even when the signal is swamped by noise, the receiver does no worse than a random guess.

## Optimum Filter

- The optimum filter is the filter which provides the best result on:
  - minimum probability of error,
  - better SNR.
- Let s<sub>1</sub>(t) and s<sub>2</sub>(t) be the two possible input signals, each corrupted by Gaussian noise having power spectral density $\frac{n^2}{2}$.
- The filter output (signal plus noise) is either
  - v<sub>o</sub>(t) = s<sub>o1</sub>(t) + n<sub>o</sub>(t), or
  - v<sub>o</sub>(t) = s<sub>o2</sub>(t) + n<sub>o</sub>(t).
- Whether the transmitted signal was s<sub>1</sub>(t) or s<sub>2</sub>(t) is decided by comparing the sampled output against a threshold voltage.
- The threshold is v(T) = $\frac{s_{o1}(T)+s_{o2}(T)}{2}$.
- At the sampling time, if s<sub>2</sub>(t) was sent and the noise n<sub>o</sub>(T) is positive and larger than the distance to the threshold, an error occurs,
- i.e. an error occurs if n<sub>o</sub>(T) > v(T) - s<sub>o2</sub>(T).
- n<sub>o</sub>(T) > $\frac{s_{o1}(T)+s_{o2}(T)}{2} - s_{o2}(T)$
- n<sub>o</sub>(T) > $\frac{s_{o1}(T)-s_{o2}(T)}{2}$
- Hence
  - Pe = $\int_{\frac{s_{o1}(T)-s_{o2}(T)}{2}}^{\infty}\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{[n_o(T)]^2}{2\sigma^2}}\,dn_o(T)$
- Let $x = \frac{n_o(T)}{\sqrt{2}\sigma}$, so $dx = \frac{dn_o(T)}{\sqrt{2}\sigma}$.
- When $n_o(T)=\frac{s_{o1}(T)-s_{o2}(T)}{2}$, the lower limit becomes $x_0=\frac{s_{o1}(T)-s_{o2}(T)}{2\sqrt{2}\sigma}$.
- Pe = $\frac{1}{\sqrt{\pi}}\int_{x_0}^{\infty}e^{-x^2}\,dx = \frac{1}{2}\,\mathrm{erfc}(x_0)$
- Pe = $\frac{1}{2}\,\mathrm{erfc}\!\left[\frac{s_{o1}(T)-s_{o2}(T)}{2\sqrt{2}\sigma}\right]$
- From this equation, the probability of error depends on the difference between the two output levels (i.e. s<sub>o1</sub>(T) - s<sub>o2</sub>(T)) relative to the noise.
- So, for an optimum filter, we try to maximize this difference and minimize the noise.

## Transfer Function of the Optimum Filter

- The optimum filter is the one having minimum probability of error (Pe).
- To reduce Pe, we maximize the output difference at the sampling instant, p<sub>o</sub>(T) = s<sub>o1</sub>(T) - s<sub>o2</sub>(T), relative to the noise.
- Let p(t) = s<sub>1</sub>(t) - s<sub>2</sub>(t) be the corresponding input difference.
- Let p(f) and p<sub>o</sub>(f) be the Fourier transforms of p(t) and p<sub>o</sub>(t) respectively, and let H(f) be the transfer function of the optimum filter. Then
  - p<sub>o</sub>(f) = H(f) p(f)
- Taking the inverse Fourier transform and evaluating at the sampling instant t = T:
  - $p_o(T)=\int_{-\infty}^{\infty}p_o(f)e^{j2\pi fT}\,df=\int_{-\infty}^{\infty}H(f)p(f)e^{j2\pi fT}\,df$
- If the input noise n(t) has PSD G<sub>n</sub>(f), the output noise PSD G<sub>n0</sub>(f) is related to it by
  - G<sub>n0</sub>(f) = |H(f)|<sup>2</sup> G<sub>n</sub>(f)
- The noise power is given by the variance:
  - $\sigma^2 = \int_{-\infty}^{\infty} G_{n0}(f)\,df$
- $\sigma^2 = \int_{-\infty}^{\infty}|H(f)|^2 G_{n}(f)\,df$
- The quantity to be maximized is therefore
  - $\frac{[p_o(T)]^2}{\sigma^2}=\frac{\left|\int_{-\infty}^{\infty} H(f)p(f)e^{j2\pi fT}\,df\right|^2}{\int_{-\infty}^{\infty} |H(f)|^2G_n(f)\,df}$
- To maximize this ratio we use the Schwarz inequality: if x(f) and y(f) are complex functions of a common variable, then
  - $\left|\int_{-\infty}^{\infty} x(f)y(f)\,df\right|^2 \le \int_{-\infty}^{\infty}|x(f)|^2\,df\int_{-\infty}^{\infty}|y(f)|^2\,df$
- equivalently,
  - $\frac{\left|\int_{-\infty}^{\infty} x(f)y(f)\,df\right|^2}{\int_{-\infty}^{\infty}|x(f)|^2\,df} \le \int_{-\infty}^{\infty}|y(f)|^2\,df$
- Choose
  - $x(f) = H(f)\sqrt{G_n(f)}$
  - $y(f) = \frac{p(f)}{\sqrt{G_n(f)}}\,e^{j2\pi fT}$
- so that $x(f)y(f)=H(f)p(f)e^{j2\pi fT}$ and $|x(f)|^2=|H(f)|^2G_n(f)$. Since $|e^{j2\pi fT}|=1$, we also have $|y(f)|^2=\frac{|p(f)|^2}{G_n(f)}$, and the inequality reads
  - $\frac{[p_o(T)]^2}{\sigma^2} \le \int_{-\infty}^{\infty}\frac{|p(f)|^2}{G_n(f)}\,df$
- The maximum value of the left-hand side equals the right-hand side:
  - $\left[\frac{[p_o(T)]^2}{\sigma^2}\right]_{max} = \int_{-\infty}^{\infty}\frac{|p(f)|^2}{G_n(f)}\,df$
- According to the Schwarz inequality, the equals (=) sign applies if and only if
  - x(f) = k y*(f), where k is an arbitrary constant and y*(f) is the complex conjugate of y(f).
- Applying this equality condition:
  - $H(f)\sqrt{G_n(f)} = k\left(\frac{p(f)}{\sqrt{G_n(f)}}e^{j2\pi fT}\right)^{*} = k\,\frac{p^{*}(f)}{\sqrt{G_n(f)}}\,e^{-j2\pi fT}$
  - $H(f) = k\,\frac{p^{*}(f)e^{-j2\pi fT}}{G_n(f)}$
- This is the transfer function of the optimum filter.

## Matched Filter

- When the input noise is 'white', the optimum filter is called a 'matched filter'. It gives the maximum (S/N) ratio.
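The Schwarz-inequality argument has a simple discrete-time analogue that can be checked numerically. In this sketch (an assumption for illustration, not from the notes), the vector `g` plays the role of the time-reversed difference signal p(T-t): for white noise the ratio (h·g)²/‖h‖² is capped at ‖g‖², with equality only when h is a scalar multiple of g.

```python
# Sketch: discrete Cauchy-Schwarz analogue of the optimum-filter bound.
# p (the difference signal samples) is an arbitrary illustrative choice.
import numpy as np

p = np.array([1.0, 2.0, -1.0, 0.5, -0.5])    # p[k] = s1[k] - s2[k]
g = p[::-1]                                   # time-reversed template p(T - t)

def snr_ratio(h, g):
    # (filter output at the sampling instant)^2 / noise variance,
    # up to the constant white-noise PSD factor
    return np.dot(h, g) ** 2 / np.dot(h, h)

bound = np.dot(g, g)                          # Schwarz upper bound (signal energy)
matched = snr_ratio(3.0 * g, g)               # any scalar multiple of g hits the bound
mismatched = snr_ratio(np.ones_like(g), g)    # e.g. a plain integrator falls short

print(matched, bound, mismatched)
```

The arbitrary constant k in H(f) appears here as the scalar 3.0: scaling the filter scales signal and noise together, so the ratio is unchanged.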
- For the optimum filter, the transfer function is given as:
  - $H(f) = k\,\frac{p^{*}(f)e^{-j2\pi fT}}{G_n(f)}$
- When we find the PSD of (thermal) Gaussian noise, we get a constant in the frequency domain; by analogy with white light, it is called 'white noise'.
- For white noise, $G_n(f)=\frac{n^2}{2}$.
- Transfer function of the matched filter:
  - $H(f) = k\,\frac{p^{*}(f)e^{-j2\pi fT}}{n^2/2} = \frac{2k}{n^2}\,p^{*}(f)e^{-j2\pi fT}$
- The impulse response of the matched filter is obtained by the inverse Fourier transform:
  - h(t) = IFT [H(f)]
  - $h(t) = \frac{2k}{n^2}\int_{-\infty}^{\infty} p^{*}(f)e^{-j2\pi fT}\, e^{j2\pi ft}\, df = \frac{2k}{n^2}\int_{-\infty}^{\infty} p^{*}(f)\, e^{-j2\pi f(T-t)}\, df$
- A physically realizable filter has an impulse response which is real, not complex:
  - h(t) = h*(t)
- Thus, we can replace the right-hand side by its complex conjugate:
  - $h(t) = \frac{2k}{n^2}\int_{-\infty}^{\infty} p(f)\, e^{j2\pi f(T-t)}\, df$
- The integral is just the inverse Fourier transform of p(f) evaluated at time T - t, so
  - $h(t) = \frac{2k}{n^2}\,p(T-t) = \frac{2k}{n^2}\left[s_1(T-t)-s_2(T-t)\right]$
  - [since p(t) = s<sub>1</sub>(t) - s<sub>2</sub>(t)]
- In other words, the impulse response is a time-reversed and delayed (by T) copy of the difference signal p(t). (The original notes illustrate this with sketches of p(t) and the shifted, reflected p(T-t), omitted here.)
- From the sketch it is clear that:
  - p(T-t) = 0 for t < 0, which implies h(t) = 0 for t < 0.
- This is exactly the condition required for a 'causal' filter, i.e. a filter which is realizable.

## Probability of Error for the Matched Filter (Pe)

- The matched filter is an optimum filter which gives the maximum (S/N) ratio.
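Because h(t) is a time-reversed copy of p(t), passing p(t) through the matched filter amounts to correlating the input with its own template, and the output peaks exactly at the sampling instant t = T. A minimal sketch (the pulse shape is an illustrative assumption):

```python
# Sketch: matched filtering as correlation; the noiseless output peaks at t = T.
import numpy as np

p = np.array([0.2, 0.6, 1.0, 0.5, 0.1])   # illustrative p(t) = s1(t) - s2(t)
h = p[::-1]                                # matched impulse response p(T - t)

y = np.convolve(p, h)                      # filter output (full convolution)
T = len(p) - 1                             # sampling instant, in samples

# The peak lands at index T, and its value is the energy of p (sum of p^2),
# which is what the erfc argument in the Pe derivation is built from.
print(int(np.argmax(y)), float(y[T]))
```

By the Cauchy-Schwarz inequality, no other sampling instant (and no other filter of the same norm) can produce a larger noiseless output than this zero-lag correlation value.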
- Hence, the probability of error is minimized by maximizing $\frac{[p_o(T)]^2}{\sigma^2}$.
- From the optimum filter analysis, we have:
  - $\left[\frac{[p_o(T)]^2}{\sigma^2}\right]_{max} = \int_{-\infty}^{\infty}\frac{|p(f)|^2}{G_n(f)}\,df$
- For a matched filter, $G_n(f)=\frac{n^2}{2}$, so
  - $\left[\frac{[p_o(T)]^2}{\sigma^2}\right]_{max} = \frac{2}{n^2}\int_{-\infty}^{\infty}|p(f)|^2\,df$
- From Parseval's theorem (and since p(t) is confined to the bit interval):
  - $= \frac{2}{n^2}\int_{-\infty}^{\infty}p^2(t)\,dt = \frac{2}{n^2}\int_{0}^{T}p^2(t)\,dt = \frac{2}{n^2}\int_{0}^{T}[s_1(t)-s_2(t)]^2\,dt$
  - $= \frac{2}{n^2}\left[\int_{0}^{T} s_1^2(t)\,dt +\int_{0}^{T}s_2^2(t)\,dt-2\int_{0}^{T} s_1(t)s_2(t)\,dt\right]$
  - $= \frac{2}{n^2}\left[E_{s1}+E_{s2}-2E_{s12}\right]$
- where E<sub>s1</sub>, E<sub>s2</sub> are the energies of s<sub>1</sub>(t) and s<sub>2</sub>(t), and E<sub>s12</sub> = $\int_0^T s_1(t)s_2(t)\,dt$ is the correlation energy between them.
- For antipodal signals, s<sub>1</sub>(t) = -s<sub>2</sub>(t), so both carry the same energy E<sub>s</sub> and the correlation term is $E_{s12}=-E_s$:
  - $\left[\frac{[p_o(T)]^2}{\sigma^2}\right]_{max} = \frac{2}{n^2}\left[E_s+E_s+2E_s\right] = \frac{8E_s}{n^2}$
- For the optimum filter:
  - Pe = $\frac{1}{2}\,\mathrm{erfc}\!\left[\frac{p_o(T)}{2\sqrt{2}\,\sigma}\right]$
- With $\frac{p_o(T)}{\sigma} = \sqrt{\frac{8E_s}{n^2}} = \frac{2\sqrt{2}\,\sqrt{E_s}}{n}$:
  - Pe = $\frac{1}{2}\,\mathrm{erfc}\!\left[\frac{2\sqrt{2}\,\sqrt{E_s}}{2\sqrt{2}\,n}\right] = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\sqrt{E_s}}{n}\right)$
- Thus the probability of error is at most one half, and it decreases as the bit energy E<sub>s</sub> grows relative to the noise level n²; it depends only on the signal energy, not on the detailed waveform shape.
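The final result can be checked by Monte Carlo simulation. The sketch below assumes a discrete model (N samples per bit, per-sample noise standard deviation sigma_s), under which the argument √E<sub>s</sub>/n of the erfc maps to V√N/(√2·sigma_s); all numeric values are illustrative assumptions.

```python
# Monte Carlo sketch of Pe = (1/2) erfc(sqrt(Es)/n) for antipodal signalling
# with a matched (here: summation) receiver. Discrete-model assumption:
# N samples per bit, per-sample noise std sigma_s, so sqrt(Es)/n maps to
# V*sqrt(N) / (sqrt(2)*sigma_s).
import math
import numpy as np

rng = np.random.default_rng(1)
V, N, sigma_s, n_bits = 1.0, 16, 4.0, 200000   # illustrative values

bits = rng.integers(0, 2, n_bits)
levels = np.where(bits == 1, V, -V)             # s1 = +V, s2 = -V (antipodal)
rx = levels[:, None] + sigma_s * rng.standard_normal((n_bits, N))

decisions = rx.sum(axis=1) > 0                  # matched filter output at t = T
pe_sim = np.mean(decisions != (bits == 1))

pe_theory = 0.5 * math.erfc(V * math.sqrt(N) / (math.sqrt(2) * sigma_s))
print(pe_sim, pe_theory)                        # the two should agree closely
```

With the values above the theoretical Pe is well below one half, and the simulated error rate should match it to within Monte Carlo fluctuation, supporting the derivation's claim that only the energy-to-noise ratio matters.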