Nipissing University
# Lecture 18

## Channel Capacity

### Information Channels

- Noiseless Binary Channel
- Noisy Channel With Nonoverlapping Outputs
- Noisy Typewriter
- Binary Symmetric Channel (BSC)
- Binary Erasure Channel (BEC)

### Shannon Capacity

**Definition:** The **Shannon capacity** of a channel is defined as the maximum of the mutual information over all possible input distributions:

$C = \max_{p(x)} I(X; Y)$

- $C$ is measured in bits per channel use.
- It quantifies the maximum rate at which information can be transmitted reliably over a communication channel.

### Capacity of the Noiseless Binary Channel

- Channel: $X \longrightarrow Y$ with $Y = X$.
- Channel matrix:

| | X=0 | X=1 |
|-----------|-----|-----|
| **Y=0** | 1 | 0 |
| **Y=1** | 0 | 1 |

- Since $Y = X$, every bit that is sent is received without error.
- Hence, we should be able to send 1 bit per channel use.
- $C = \max I(X; Y) = \max \left[ H(X) - H(X|Y) \right] = \max H(X)$, since $H(X|Y) = 0$.
- $H(X)$ is maximized when $p(X = 0) = p(X = 1) = \frac{1}{2}$.
- Hence, $C = 1$ bit.

### Capacity of the Noisy Channel With Nonoverlapping Outputs

- Channel: $X \longrightarrow Y$.
- Channel matrix:

| | X=1 | X=2 | X=3 |
|-----------|-----|-----|-----|
| **Y=1** | 1 | 0 | 0 |
| **Y=2** | 0 | 1 | 0 |
| **Y=3** | 0 | 0 | 1 |
| **Y=4** | 0 | 0 | 0 |

- The outputs produced by different inputs do not overlap, so the receiver can always determine which symbol was sent.
- We can transmit at a rate of $\log 3$ bits per channel use by using only inputs 1, 2, and 3.
- $C = \max I(X; Y) = \max \left[ H(Y) - H(Y|X) \right] = \max H(Y)$, since the channel is deterministic and $H(Y|X) = 0$.
- $H(Y)$ is maximized when $p(X = 1) = p(X = 2) = p(X = 3) = \frac{1}{3}$.
- Hence, $C = \log 3$ bits.

### Capacity of the Noisy Typewriter Channel

- Channel: $X \longrightarrow Y$.
- Input alphabet: $\{a, b, c, \ldots, x, y, z\}$; output alphabet: $\{a, b, c, \ldots, x, y, z\}$.
- With probability $\frac{1}{2}$ each letter remains the same, and with probability $\frac{1}{2}$ it is transformed into the next letter of the alphabet ($z$ wraps around to $a$).
- Channel matrix:

| | X=a | X=b | X=c | ... | X=x | X=y | X=z |
|-----------|-----|-----|-----|-----|-----|-----|-----|
| **Y=a** | 1/2 | 0 | 0 | ... | 0 | 0 | 1/2 |
| **Y=b** | 1/2 | 1/2 | 0 | ... | 0 | 0 | 0 |
| **Y=c** | 0 | 1/2 | 1/2 | ... | 0 | 0 | 0 |
| **...** | ... | ... | ... | ... | ... | ... | ... |
| **Y=y** | 0 | 0 | 0 | ... | 1/2 | 1/2 | 0 |
| **Y=z** | 0 | 0 | 0 | ... | 0 | 1/2 | 1/2 |

- $C = \max I(X; Y) = \max \left[ H(Y) - H(Y|X) \right]$
- Writing $Y = X + Z$ (addition mod 26), where the noise $Z$ is independent of $X$:
  $H(Y|X) = H(X + Z \mid X) = H(Z \mid X) = H(Z)$
- $Z = \begin{cases} 0 & \text{with probability } \frac{1}{2} \\ 1 & \text{with probability } \frac{1}{2} \end{cases}$
- $H(Z) = 1$ bit, so $C = \max H(Y) - 1$.
- $H(Y)$ is maximized when $Y$ is uniform, i.e., $p(Y = a) = p(Y = b) = \cdots = p(Y = z) = \frac{1}{26}$, which is achieved by a uniform input distribution; then $H(Y) = \log 26$.
- Hence, $C = \log 26 - 1 = \log 13$ bits.

### Capacity of the Binary Symmetric Channel (BSC)

- Channel: $X \longrightarrow Y$.
- Channel matrix:

| | X=0 | X=1 |
|-----------|-----|-----|
| **Y=0** | $1-p$ | $p$ |
| **Y=1** | $p$ | $1-p$ |

- $p$ is the probability of error (the crossover probability).
- $C = \max I(X; Y) = \max \left[ H(Y) - H(Y|X) \right]$
- $H(Y|X) = \sum_{x} p(x) H(Y|X = x)$
- $H(Y|X = x) = H(p)$ for $x = 0$ or $1$, where $H(p)$ is the binary entropy function; hence $H(Y|X) = H(p)$.
- Therefore, $C = \max H(Y) - H(p)$.
- Note that $H(p)$ is constant with respect to the input distribution $p(x)$.
- We can maximize $H(Y)$ by choosing $p(X = 0) = p(X = 1) = \frac{1}{2}$; then $p(Y = 0) = p(Y = 1) = \frac{1}{2}$ and $H(Y) = 1$.
- $C = 1 - H(p)$ bits.

### Capacity of the Binary Erasure Channel (BEC)

- Channel: $X \longrightarrow Y$.
- Channel matrix:

| | X=0 | X=1 |
|-----------|-----|-----|
| **Y=0** | $1-\alpha$ | 0 |
| **Y=e** | $\alpha$ | $\alpha$ |
| **Y=1** | 0 | $1-\alpha$ |

- $\alpha$ is the probability of erasure.
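The erasure channel just described is easy to simulate directly from its channel matrix. A minimal sketch (the function name `bec` and the erasure marker `'e'` are choices of this sketch, not part of the lecture):

```python
import random

def bec(bit, alpha=0.25, rng=random):
    """Binary erasure channel: pass the input bit through unchanged
    with probability 1 - alpha, or emit the erasure symbol 'e'
    with probability alpha."""
    return 'e' if rng.random() < alpha else bit

random.seed(0)
sent = [random.randint(0, 1) for _ in range(10000)]
received = [bec(b) for b in sent]

# Every non-erased symbol equals what was sent; about alpha of the
# symbols are erased, and the receiver knows exactly which ones.
assert all(r == s for r, s in zip(received, sent) if r != 'e')
print(f"erasure rate ~ {received.count('e') / len(received):.3f}")
```

The key property visible here is that errors are never silent: the output is either correct or explicitly marked as erased, which is what makes the fraction $1 - \alpha$ of surviving symbols fully informative.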
- $C = \max I(X; Y) = \max \left[ H(Y) - H(Y|X) \right]$
- $H(Y|X) = \sum_{x} p(x) H(Y|X = x)$
- $H(Y|X = 0) = H(Y|X = 1) = H(\alpha)$, so $H(Y|X) = H(\alpha)$.
- $I(X; Y) = H(Y) - H(\alpha)$
- Since $Y$ takes three values, $H(Y)$ is awkward to maximize directly; instead, expand the mutual information the other way: $I(X; Y) = H(X) - H(X|Y)$.
- $H(X|Y) = \sum_{y} p(y) H(X|Y = y)$
- $H(X|Y = 0) = H(X|Y = 1) = 0$ (a non-erased output reveals the input exactly).
- $H(X|Y = e) = H(X)$ (an erasure reveals nothing about the input).
- $H(X|Y) = p(Y = e) H(X) = \alpha H(X)$
- $I(X; Y) = H(X) - \alpha H(X) = (1 - \alpha) H(X)$
- $C = \max I(X; Y) = (1 - \alpha) \max H(X)$
- $H(X)$ is maximized when $p(X = 0) = p(X = 1) = \frac{1}{2}$.
- Hence, $C = 1 - \alpha$ bits.
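All of the closed-form capacities above can be checked numerically: the Blahut–Arimoto algorithm iteratively maximizes $I(X; Y)$ over input distributions for an arbitrary channel matrix. Below is a minimal pure-Python sketch (the function name and iteration count are choices of this sketch). Note that here the matrix is indexed as $W[x][y] = p(y \mid x)$, i.e., rows are inputs, the transpose of the tables above:

```python
import math

def blahut_arimoto(W, iters=200):
    """Numerically estimate C = max_{p(x)} I(X; Y) in bits.

    W[x][y] = p(y | x): each row of W is the output distribution
    for one input symbol (rows must sum to 1).
    """
    n, m = len(W), len(W[0])
    p = [1.0 / n] * n                      # start from a uniform input
    for _ in range(iters):
        # output distribution q(y) induced by the current input p(x)
        q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
        # d[x] = D(W(.|x) || q), relative entropy in bits (0 log 0 = 0)
        d = [sum(W[x][y] * math.log2(W[x][y] / q[y])
                 for y in range(m) if W[x][y] > 0)
             for x in range(n)]
        # multiplicative update of p(x), then renormalize
        p = [p[x] * 2.0 ** d[x] for x in range(n)]
        s = sum(p)
        p = [v / s for v in p]
    # at the optimum, I(X; Y) = sum_x p(x) D(W(.|x) || q)
    q = [sum(p[x] * W[x][y] for x in range(n)) for y in range(m)]
    return sum(p[x] * W[x][y] * math.log2(W[x][y] / q[y])
               for x in range(n) for y in range(m) if W[x][y] > 0)

# Noiseless binary channel: expect C = 1 bit
print(blahut_arimoto([[1, 0], [0, 1]]))
# Nonoverlapping outputs: expect C = log2(3) ~ 1.585 bits
print(blahut_arimoto([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]))
# BSC with p = 0.1: expect C = 1 - H(0.1) ~ 0.531 bits
print(blahut_arimoto([[0.9, 0.1], [0.1, 0.9]]))
# BEC with alpha = 0.25 (outputs ordered 0, e, 1): expect C = 0.75 bits
print(blahut_arimoto([[0.75, 0.25, 0.0], [0.0, 0.25, 0.75]]))
```

For the symmetric channels in this lecture the uniform input is already optimal, so the iteration is at a fixed point from the start; the algorithm earns its keep on channels without such symmetry, where no closed form is available.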