Document Details


Uploaded by StylishSpessartine

University of Science and Technology

Tags

entropy, information theory, random variables, statistics

Full Transcript


Information Theory: The Entropy Function

Entropy
Entropy is a measure of the uncertainty of a random variable. Let X be a discrete random variable with alphabet \mathcal{X} and probability mass function P_X(x) = \Pr\{X = x\}, x \in \mathcal{X}. P_X(x) will be denoted by P(x).

Entropy cont'd
The entropy H(X) of a discrete random variable X is defined by:

H(X) = \sum_{x \in \mathcal{X}} P(x) \log_b \frac{1}{P(x)}

where b is the base of the logarithm.

Entropy cont'd
Entropy indicates the average information contained in X. Entropy is measured in bits, Hartleys, or nats according to the base of the logarithm (b = 2, b = 10, or b = e, respectively).

Note
The entropy is a function of the distribution of X: it does not depend on the actual values taken by the random variable, but only on their probabilities. Also, H(X) ≥ 0.

Example
Let X = 0 with probability 1 - p and X = 1 with probability p. Show that the entropy of X is:

H(X) = -p \log p - (1 - p) \log (1 - p)

Sometimes this entropy is denoted by H(p, 1 - p). Note that the entropy is maximized for p = 0.5 and is zero for p = 1 or p = 0. When p = 0 or p = 1 there is no uncertainty about the random variable X and no information in revealing its outcome.

Example
Suppose X can take on K values. Show that the entropy is maximized when X is uniformly distributed on these K values, and that in this case H(X) = log K.

Sol.
Writing out H(X) gives:

H(X) = \sum_{x \in \mathcal{X}} P(x) \log \frac{1}{P(x)}

Using Jensen's inequality for a concave function f, stated below:

\sum_i \lambda_i f(x_i) \le f\left( \sum_i \lambda_i x_i \right)

Sol. cont'd
Applying this with the concave function f = \log and weights \lambda_i = P(x), it is clear that:

H(X) = \sum_{x \in \mathcal{X}} P(x) \log \frac{1}{P(x)} \le \log \left( \sum_{x} P(x) \cdot \frac{1}{P(x)} \right) = \log K

So H(X) \le \log K, which means the maximum possible value of H(X) is log K.

Sol. cont'd
Choosing P(x) = 1/K for every x, we obtain H(X) = log K. So the uniform distribution over the K values maximizes the entropy, and this maximum entropy is log K.

Example
Balls of different colors are drawn from a hat. The following table shows the possible colors x \in \mathcal{X} and the probability mass function P(x):

x    | green | red | black | yellow
P(x) | 1/6   | 1/6 | 1/3   | 1/3

Compute the uncertainty H(X) in the color.

Sol.
H(X) = \sum_{x \in \mathcal{X}} P(x) \log \frac{1}{P(x)}
     = \frac{1}{3} \log_{10} 3 + \frac{1}{3} \log_{10} 3 + \frac{1}{6} \log_{10} 6 + \frac{1}{6} \log_{10} 6
     \approx 0.577 Hartleys
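The entropy definition and the binary-entropy example above can be checked numerically. The following is a minimal Python sketch, not part of the original slides; the helper name `entropy` and the sample values of p are assumptions made for illustration.

```python
import math

def entropy(probs, base=2.0):
    """H(X) = sum over x of P(x) * log_b(1 / P(x)).

    base=2 gives bits, base=10 Hartleys, base=math.e nats.
    Zero-probability terms are skipped (the 0 * log(1/0) = 0 convention).
    """
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

# Binary entropy H(p, 1 - p) = -p log p - (1 - p) log(1 - p), in bits:
for p in (0.0, 0.1, 0.3, 0.5, 0.7, 1.0):
    print(f"p = {p:.1f}  H = {entropy([p, 1.0 - p]):.4f} bits")
# The printed values are 0 at p = 0 and p = 1, and peak at 1 bit for p = 0.5,
# matching the claim that the uncertainty is largest for a fair coin.
```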
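The uniform-distribution bound H(X) ≤ log K and the ball-drawing example can be verified the same way. Again this is only a sketch: the `entropy` helper is repeated so the snippet stands alone, and the choice K = 8 is an arbitrary assumption.

```python
import math

def entropy(probs, base=2.0):
    """H(X) = sum over x of P(x) * log_b(1 / P(x)); zero-probability terms skipped."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

# A uniform distribution on K values attains the upper bound H(X) = log K.
K = 8
uniform = [1.0 / K] * K
print(entropy(uniform, base=2), math.log(K, 2))   # both 3.0 bits

# Ball-drawing example: two colors with probability 1/6 and two with 1/3.
balls = [1/6, 1/6, 1/3, 1/3]
print(round(entropy(balls, base=10), 3))   # ~0.577 Hartleys (base-10 log)
print(round(entropy(balls, base=2), 3))    # ~1.918 bits (same distribution, base 2)
```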
