Full Transcript


Introduction

Digital signal processing is an area of science and engineering that has developed rapidly over the past 30 years. This rapid development is a result of the significant advances in digital computer technology and integrated-circuit fabrication. The digital computers and associated digital hardware of three decades ago were relatively large and expensive and, as a consequence, their use was limited to general-purpose non-real-time (off-line) scientific computations and business applications. The rapid developments in integrated-circuit technology, starting with medium-scale integration (MSI) and progressing to large-scale integration (LSI), and now very-large-scale integration (VLSI) of electronic circuits, have spurred the development of powerful, smaller, faster, and cheaper digital computers and special-purpose digital hardware. These inexpensive and relatively fast digital circuits have made it possible to construct highly sophisticated digital systems capable of performing complex digital signal processing functions and tasks, which are usually too difficult and/or too expensive to be performed by analog circuitry or analog signal processing systems. Hence many of the signal processing tasks that were conventionally performed by analog means are realized today by less expensive and often more reliable digital hardware. We do not wish to imply that digital signal processing is the proper solution for all signal processing problems. Indeed, for many signals with extremely wide bandwidths, real-time processing is a requirement. For such signals, analog or, perhaps, optical signal processing is the only possible solution. However, where digital circuits are available and have sufficient speed to perform the signal processing, they are usually preferable. Not only do digital circuits yield cheaper and more reliable systems for signal processing; they have other advantages as well.
In particular, digital processing hardware allows programmable operations. Through software, one can more easily modify the signal processing functions to be performed by the hardware. Thus digital hardware and associated software provide a greater degree of flexibility in system design. Also, there is often a higher order of precision achievable with digital hardware and software compared with analog circuits and analog signal processing systems. For all these reasons, there has been an explosive growth in digital signal processing theory and applications over the past three decades. In this book our objective is to present an introduction to the basic analysis tools and techniques for digital processing of signals. We begin by introducing some of the necessary terminology and by describing the important operations associated with the process of converting an analog signal to a digital form suitable for digital processing. As we shall see, digital processing of analog signals has some drawbacks. First and foremost, conversion of an analog signal to digital form, accomplished by sampling the signal and quantizing the samples, results in a distortion that prevents us from reconstructing the original analog signal from the quantized samples. Control of the amount of this distortion is achieved by proper choice of the sampling rate and the precision in the quantization process. Second, there are finite-precision effects that must be considered in the digital processing of the quantized samples. While these important issues are considered in some detail in this book, the emphasis is on the analysis and design of digital signal processing systems and computational techniques.

1.1 SIGNALS, SYSTEMS, AND SIGNAL PROCESSING

A signal is defined as any physical quantity that varies with time, space, or any other independent variable or variables. Mathematically, we describe a signal as a function of one or more independent variables. For example,
the functions

    s1(t) = 5t
    s2(t) = 20t^2                                        (1.1.1)

describe two signals, one that varies linearly with the independent variable t (time) and a second that varies quadratically with t. As another example, consider the function

    s(x, y) = 3x + 2xy + 10y^2                           (1.1.2)

This function describes a signal of two independent variables x and y that could represent the two spatial coordinates in a plane. The signals described by (1.1.1) and (1.1.2) belong to a class of signals that are precisely defined by specifying the functional dependence on the independent variable. However, there are cases where such a functional relationship is unknown or too highly complicated to be of any practical use. For example, a speech signal (see Fig. 1.1) cannot be described functionally by expressions such as (1.1.1). In general, a segment of speech may be represented to a high degree of accuracy as a sum of several sinusoids of different amplitudes and frequencies, that is, as

    Σ_{i=1}^{N} A_i(t) sin[2π F_i(t) t + θ_i(t)]         (1.1.3)

where {A_i(t)}, {F_i(t)}, and {θ_i(t)} are the sets of (possibly time-varying) amplitudes, frequencies, and phases, respectively, of the sinusoids. In fact, one way to interpret the information content or message conveyed by any short time segment of the speech signal is to measure the amplitudes, frequencies, and phases contained in the short time segment of the signal.

[Figure 1.1: Example of a speech signal.]

Another example of a natural signal is an electrocardiogram (ECG). Such a signal provides a doctor with information about the condition of the patient's heart. Similarly, an electroencephalogram (EEG) signal provides information about the activity of the brain. Speech, electrocardiogram, and electroencephalogram signals are examples of information-bearing signals that evolve as functions of a single independent variable, namely, time. An example of a signal that is a function of two independent variables is an image signal.
The independent variables in this case are the spatial coordinates. These are but a few examples of the countless number of natural signals encountered in practice. Associated with natural signals are the means by which such signals are generated. For example, speech signals are generated by forcing air through the vocal cords. Images are obtained by exposing a photographic film to a scene or an object. Thus signal generation is usually associated with a system that responds to a stimulus or force. In a speech signal, the system consists of the vocal cords and the vocal tract, also called the vocal cavity. The stimulus in combination with the system is called a signal source. Thus we have speech sources, image sources, and various other types of signal sources. A system may also be defined as a physical device that performs an operation on a signal. For example, a filter used to reduce the noise and interference corrupting a desired information-bearing signal is called a system. In this case the filter performs some operation(s) on the signal, which has the effect of reducing (filtering) the noise and interference from the desired information-bearing signal. When we pass a signal through a system, as in filtering, we say that we have processed the signal. In this case the processing of the signal involves filtering the noise and interference from the desired signal. In general, the system is characterized by the type of operation that it performs on the signal. For example, if the operation is linear, the system is called linear. If the operation on the signal is nonlinear, the system is said to be nonlinear, and so forth. Such operations are usually referred to as signal processing. For our purposes, it is convenient to broaden the definition of a system to include not only physical devices, but also software realizations of operations on a signal. In digital processing of signals on a digital computer,
the operations performed on a signal consist of a number of mathematical operations as specified by a software program. In this case, the program represents an implementation of the system in software. Thus we have a system that is realized on a digital computer by means of a sequence of mathematical operations; that is, we have a digital signal processing system realized in software. For example, a digital computer can be programmed to perform digital filtering. Alternatively, the digital processing on the signal may be performed by digital hardware (logic circuits) configured to perform the desired specified operations. In such a realization, we have a physical device that performs the specified operations. In a broader sense, a digital system can be implemented as a combination of digital hardware and software, each of which performs its own set of specified operations. This book deals with the processing of signals by digital means, either in software or in hardware. Since many of the signals encountered in practice are analog, we will also consider the problem of converting an analog signal into a digital signal for processing. Thus we will be dealing primarily with digital systems. The operations performed by such a system can usually be specified mathematically. The method or set of rules for implementing the system by a program that performs the corresponding mathematical operations is called an algorithm. Usually, there are many ways or algorithms by which a system can be implemented, either in software or in hardware, to perform the desired operations and computations. In practice, we have an interest in devising algorithms that are computationally efficient, fast, and easily implemented. Thus a major topic in our study of digital signal processing is the discussion of efficient algorithms for performing such operations as filtering, correlation, and spectral analysis.
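As a minimal illustration of a system realized in software, the sketch below implements one such algorithm, a three-point moving-average filter for noise reduction. The function name and the window length are illustrative choices, not anything prescribed by the text.

```python
# A system realized in software: an m-point moving-average filter,
# a simple algorithm for smoothing (noise reduction) of a signal x(n).
def moving_average(x, m=3):
    """Return y(n) = average of the current sample and up to m-1 previous samples."""
    y = []
    for n in range(len(x)):
        window = x[max(0, n - m + 1): n + 1]  # samples available so far
        y.append(sum(window) / len(window))
    return y

noisy = [1.0, 1.2, 0.9, 1.1, 1.0]
print(moving_average(noisy))
```

Passing a signal through this function is precisely "processing the signal" in the sense defined above: the system is the set of mathematical operations, and the program is its implementation.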
1.1.1 Basic Elements of a Digital Signal Processing System

Most of the signals encountered in science and engineering are analog in nature. That is, the signals are functions of a continuous variable, such as time or space, and usually take on values in a continuous range. Such signals may be processed directly by appropriate analog systems (such as filters, frequency analyzers, or frequency multipliers) for the purpose of changing their characteristics or extracting some desired information. In such a case we say that the signal has been processed directly in its analog form, as illustrated in Fig. 1.2. Both the input signal and the output signal are in analog form.

[Figure 1.2: Analog signal processing. An analog input signal passes through an analog signal processor, producing an analog output signal.]

[Figure 1.3: Block diagram of a digital signal processing system. The analog input signal passes through an A/D converter, a digital signal processor, and a D/A converter, producing an analog output signal.]

Digital signal processing provides an alternative method for processing the analog signal, as illustrated in Fig. 1.3. To perform the processing digitally, there is a need for an interface between the analog signal and the digital processor. This interface is called an analog-to-digital (A/D) converter. The output of the A/D converter is a digital signal that is appropriate as an input to the digital processor. The digital signal processor may be a large programmable digital computer or a small microprocessor programmed to perform the desired operations on the input signal. It may also be a hardwired digital processor configured to perform a specified set of operations on the input signal. Programmable machines provide the flexibility to change the signal processing operations through a change in the software, whereas hardwired machines are difficult to reconfigure.
Consequently, programmable signal processors are in very common use. On the other hand, when signal processing operations are well defined, a hardwired implementation of the operations can be optimized, resulting in a cheaper signal processor and, usually, one that runs faster than its programmable counterpart. In applications where the digital output from the digital signal processor is to be given to the user in analog form, such as in speech communications, we must provide another interface from the digital domain to the analog domain. Such an interface is called a digital-to-analog (D/A) converter. Thus the signal is provided to the user in analog form, as illustrated in the block diagram of Fig. 1.3. However, there are other practical applications involving signal analysis, where the desired information is conveyed in digital form and no D/A converter is required. For example, in the digital processing of radar signals, the information extracted from the radar signal, such as the position of the aircraft and its speed, may simply be printed on paper. There is no need for a D/A converter in this case.

1.1.2 Advantages of Digital over Analog Signal Processing

There are many reasons why digital signal processing of an analog signal may be preferable to processing the signal directly in the analog domain, as mentioned briefly earlier. First, a digital programmable system allows flexibility in reconfiguring the digital signal processing operations simply by changing the program. Reconfiguration of an analog system usually implies a redesign of the hardware followed by testing and verification to see that it operates properly. Accuracy considerations also play an important role in determining the form of the signal processor. Tolerances in analog circuit components make it extremely difficult for the system designer to control the accuracy of an analog signal processing system. On the other hand, a digital system provides much better control of accuracy requirements.
Such requirements, in turn, result in specifying the accuracy requirements in the A/D converter and the digital signal processor, in terms of word length, floating-point versus fixed-point arithmetic, and similar factors. Digital signals are easily stored on magnetic media (tape or disk) without deterioration or loss of signal fidelity beyond that introduced in the A/D conversion. As a consequence, the signals become transportable and can be processed off-line in a remote laboratory. The digital signal processing method also allows for the implementation of more sophisticated signal processing algorithms. It is usually very difficult to perform precise mathematical operations on signals in analog form, but these same operations can be routinely implemented on a digital computer using software. In some cases a digital implementation of the signal processing system is cheaper than its analog counterpart. The lower cost may be due to the fact that the digital hardware is cheaper, or perhaps it is a result of the flexibility for modifications provided by the digital implementation. As a consequence of these advantages, digital signal processing has been applied in practical systems covering a broad range of disciplines. We cite, for example, the application of digital signal processing techniques in speech processing and signal transmission on telephone channels, in image processing and transmission, in seismology and geophysics, in oil exploration, in the detection of nuclear explosions, in the processing of signals received from outer space, and in a vast variety of other applications. Some of these applications are cited in subsequent chapters. As already indicated, however, digital implementation has its limitations. One practical limitation is the speed of operation of A/D converters and digital signal processors.
We shall see that signals having extremely wide bandwidths require fast-sampling-rate A/D converters and fast digital signal processors. Hence there are analog signals with large bandwidths for which a digital processing approach is beyond the state of the art of digital hardware.

1.2 CLASSIFICATION OF SIGNALS

The methods we use in processing a signal or in analyzing the response of a system to a signal depend heavily on the characteristic attributes of the specific signal. There are techniques that apply only to specific families of signals. Consequently, any investigation in signal processing should start with a classification of the signals involved in the specific application.

1.2.1 Multichannel and Multidimensional Signals

As explained in Section 1.1, a signal is described by a function of one or more independent variables. The value of the function (i.e., the dependent variable) can be a real-valued scalar quantity, a complex-valued quantity, or perhaps a vector. For example, the signal

    s1(t) = A sin 3πt

is a real-valued signal. However, the signal

    s2(t) = A cos 3πt + jA sin 3πt

is complex valued. In some applications, signals are generated by multiple sources or multiple sensors. Such signals, in turn, can be represented in vector form. Figure 1.4 shows the three components of a vector signal that represents the ground acceleration due to an earthquake. This acceleration is the result of three basic types of elastic waves. The primary (P) waves and the secondary (S) waves propagate within the body of rock and are longitudinal and transversal, respectively. The third type of elastic wave is called the surface wave, because it propagates near the ground surface. If s_k(t), k = 1, 2, 3, denotes the electrical signal from the kth sensor as a function of time, the set of p = 3 signals can be represented by a vector S_3(t), where

    S_3(t) = [s_1(t)  s_2(t)  s_3(t)]^T

We refer to such a vector of signals as a multichannel signal.
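In software, a multichannel signal of this kind can be represented as a vector of per-channel values at each time instant. In the sketch below the three component waveforms are made-up stand-ins (the text does not specify them); only the vector structure mirrors S_3(t).

```python
import math

# A p = 3 channel signal: three sensor waveforms collected into a vector
# S3(t) = [s1(t), s2(t), s3(t)]^T.  The component functions are hypothetical
# placeholders for three real sensor outputs.
def s1(t): return math.sin(2 * math.pi * 1.0 * t)
def s2(t): return 0.5 * math.sin(2 * math.pi * 2.0 * t)
def s3(t): return 0.2 * math.cos(2 * math.pi * 0.5 * t)

def S3(t):
    """Vector-valued (multichannel) signal at time t."""
    return [s1(t), s2(t), s3(t)]

print(S3(0.25))  # one three-component sample of the multichannel signal
```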
In electrocardiography, for example, 3-lead and 12-lead electrocardiograms (ECG) are often used in practice, which result in 3-channel and 12-channel signals. Let us now turn our attention to the independent variable(s). If the signal is a function of a single independent variable, the signal is called a one-dimensional signal. On the other hand, a signal is called M-dimensional if its value is a function of M independent variables. The picture shown in Fig. 1.5 is an example of a two-dimensional signal, since the intensity or brightness I(x, y) at each point is a function of two independent variables. On the other hand, a black-and-white television picture may be represented as I(x, y, t) since the brightness is a function of time. Hence the TV picture may be treated as a three-dimensional signal. In contrast, a color TV picture may be described by three intensity functions of the form I_r(x, y, t), I_g(x, y, t), and I_b(x, y, t), corresponding to the brightness of the three principal colors (red, green, blue) as functions of time. Hence the color TV picture is a three-channel, three-dimensional signal, which can be represented by the vector

    I(x, y, t) = [I_r(x, y, t)  I_g(x, y, t)  I_b(x, y, t)]^T

[Figure 1.4: Three components of ground acceleration measured a few kilometers from the epicenter of an earthquake. (From Earthquakes, by B. A. Bolt. ©1988 by W. H. Freeman and Company. Reprinted with permission of the publisher.)]

In this book we deal mainly with single-channel, one-dimensional real- or complex-valued signals and we refer to them simply as signals. In mathematical terms these signals are described by a function of a single independent variable. Although the independent variable need not be time, it is common practice to use t as the independent variable. In many cases the signal processing operations and algorithms developed in this text for one-dimensional,
single-channel signals can be extended to multichannel and multidimensional signals.

1.2.2 Continuous-Time Versus Discrete-Time Signals

Signals can be further classified into four different categories depending on the characteristics of the time (independent) variable and the values they take. Continuous-time signals or analog signals are defined for every value of time and they take on values in the continuous interval (a, b), where a can be −∞ and b can be ∞. Mathematically, these signals can be described by functions of a continuous variable. The speech waveform in Fig. 1.1 and the signals

    x_1(t) = cos πt,   x_2(t) = e^{−|t|},   −∞ < t < ∞

are examples of analog signals.

[Figure 1.5: Example of a two-dimensional signal.]

Discrete-time signals are defined only at certain specific values of time. These time instants need not be equidistant, but in practice they are usually taken at equally spaced intervals for computational convenience and mathematical tractability. The signal x(t_n) = e^{−|t_n|}, n = 0, ±1, ±2, ..., provides an example of a discrete-time signal. If we use the index n of the discrete-time instants as the independent variable, the signal value becomes a function of an integer variable (i.e., a sequence of numbers). Thus a discrete-time signal can be represented mathematically by a sequence of real or complex numbers. To emphasize the discrete-time nature of a signal, we shall denote such a signal as x(n) instead of x(t). If the time instants t_n are equally spaced (i.e., t_n = nT), the notation x(nT) is also used. For example, the sequence

    x(n) = { 0.8^n,  if n ≥ 0
           { 0,      otherwise                           (1.2.1)

is a discrete-time signal, which is represented graphically as in Fig. 1.6. In applications, discrete-time signals may arise in two ways:

1. By selecting values of an analog signal at discrete-time instants. This process is called sampling and is discussed in more detail in Section 1.4. All measuring instruments that take measurements at a regular interval of time provide discrete-time signals.
For example, the signal x(n) in Fig. 1.6 can be obtained by sampling the analog signal x(t) = 0.8^t, t ≥ 0 (and x(t) = 0, t < 0) once every second.

[Figure 1.6: Graphical representation of the discrete-time signal x(n) = 0.8^n for n ≥ 0 and x(n) = 0 for n < 0.]

2. By accumulating a variable over a period of time. For example, counting the number of cars using a given street every hour, or recording the value of gold every day, results in discrete-time signals. Figure 1.7 shows a graph of the Wölfer sunspot numbers. Each sample of this discrete-time signal provides the number of sunspots observed during an interval of 1 year.

[Figure 1.7: Wölfer annual sunspot numbers (1770–1869).]

1.2.3 Continuous-Valued Versus Discrete-Valued Signals

The values of a continuous-time or discrete-time signal can be continuous or discrete. If a signal takes on all possible values on a finite or an infinite range, it is said to be a continuous-valued signal. Alternatively, if the signal takes on values from a finite set of possible values, it is said to be a discrete-valued signal. Usually, these values are equidistant and hence can be expressed as an integer multiple of the distance between two successive values. A discrete-time signal having a set of discrete values is called a digital signal. Figure 1.8 shows a digital signal that takes on one of four possible values. In order for a signal to be processed digitally, it must be discrete in time and its values must be discrete (i.e., it must be a digital signal). If the signal to be processed is in analog form, it is converted to a digital signal by sampling the analog signal at discrete instants in time, obtaining a discrete-time signal, and then by quantizing its values to a set of discrete values, as described later in the chapter. The process of converting a continuous-valued signal into a discrete-valued signal, called quantization,
is basically an approximation process. It may be accomplished simply by rounding or truncation. For example, if the allowable signal values in the digital signal are integers, say 0 through 15, the continuous-valued signal is quantized into these integer values. Thus the signal value 8.58 will be approximated by the value 8 if the quantization process is performed by truncation, or by 9 if the quantization process is performed by rounding to the nearest integer. An explanation of the analog-to-digital conversion process is given later in the chapter.

[Figure 1.8: Digital signal with four different amplitude values.]

1.2.4 Deterministic Versus Random Signals

The mathematical analysis and processing of signals requires the availability of a mathematical description for the signal itself. This mathematical description, often referred to as the signal model, leads to another important classification of signals. Any signal that can be uniquely described by an explicit mathematical expression, a table of data, or a well-defined rule is called deterministic. This term is used to emphasize the fact that all past, present, and future values of the signal are known precisely, without any uncertainty. In many practical applications, however, there are signals that either cannot be described to any reasonable degree of accuracy by explicit mathematical formulas, or such a description is too complicated to be of any practical use. The lack of such a relationship implies that such signals evolve in time in an unpredictable manner. We refer to these signals as random. The output of a noise generator, the seismic signal of Fig. 1.4, and the speech signal in Fig. 1.1 are examples of random signals. Figure 1.9 shows two signals obtained from the same noise generator and their associated histograms. Although the two signals do not resemble each other visually, their histograms reveal some similarities. This provides motivation for
the analysis and description of random signals using statistical techniques instead of explicit formulas.

[Figure 1.9: Two random signals from the same signal generator and their histograms.]

The mathematical framework for the theoretical analysis of random signals is provided by the theory of probability and stochastic processes. Some basic elements of this approach, adapted to the needs of this book, are presented in Appendix A. It should be emphasized at this point that the classification of a real-world signal as deterministic or random is not always clear. Sometimes, both approaches lead to meaningful results that provide more insight into signal behavior. At other times, the wrong classification may lead to erroneous results, since some mathematical tools may apply only to deterministic signals while others may apply only to random signals. This will become clearer as we examine specific mathematical tools.

1.3 THE CONCEPT OF FREQUENCY IN CONTINUOUS-TIME AND DISCRETE-TIME SIGNALS

The concept of frequency is familiar to students in engineering and the sciences. This concept is basic in, for example, the design of a radio receiver, a high-fidelity system, or a spectral filter for color photography. From physics we know that frequency is closely related to a specific type of periodic motion called harmonic oscillation, which is described by sinusoidal functions. The concept of frequency is directly related to the concept of time. Actually, it has the dimension of inverse time. Thus we should expect that the nature of time (continuous or discrete) would affect the nature of the frequency accordingly.

1.3.1 Continuous-Time Sinusoidal Signals

A simple harmonic oscillation is mathematically described by the following continuous-time sinusoidal signal:

    x_a(t) = A cos(Ωt + θ),   −∞ < t < ∞                 (1.3.1)

shown in Fig. 1.10.
The subscript a used with x(t) denotes an analog signal. This signal is completely characterized by three parameters: A is the amplitude of the sinusoid, Ω is the frequency in radians per second (rad/s), and θ is the phase in radians. Instead of Ω, we often use the frequency F in cycles per second or hertz (Hz), where

    Ω = 2πF                                              (1.3.2)

In terms of F, (1.3.1) can be written as

    x_a(t) = A cos(2πFt + θ),   −∞ < t < ∞               (1.3.3)

We will use both forms, (1.3.1) and (1.3.3), in representing sinusoidal signals.

[Figure 1.10: Example of an analog sinusoidal signal, x_a(t) = A cos(2πFt + θ).]

The analog sinusoidal signal in (1.3.3) is characterized by the following properties:

A1. For every fixed value of the frequency F, x_a(t) is periodic. Indeed, it can easily be shown, using elementary trigonometry, that x_a(t + T_p) = x_a(t), where T_p = 1/F is the fundamental period of the sinusoidal signal.

A2. Continuous-time sinusoidal signals with distinct (different) frequencies are themselves distinct.

A3. Increasing the frequency F results in an increase in the rate of oscillation of the signal, in the sense that more periods are included in a given time interval.

We observe that for F = 0, the value T_p = ∞ is consistent with the fundamental relation F = 1/T_p. Due to the continuity of the time variable t, we can increase the frequency F without limit, with a corresponding increase in the rate of oscillation. The relationships we have described for sinusoidal signals carry over to the class of complex exponential signals

    x_a(t) = Ae^{j(Ωt + θ)}                              (1.3.4)

This can easily be seen by expressing these signals in terms of sinusoids using the Euler identity

    e^{±jφ} = cos φ ± j sin φ                            (1.3.5)

By definition, frequency is an inherently positive physical quantity. This is obvious if we interpret frequency as the number of cycles per unit time in a periodic signal. However, in many cases, purely for mathematical convenience, we need to introduce negative frequencies.
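The Euler identity (1.3.5), and the fact that a real cosine is the sum of two complex-conjugate phasors rotating at ±Ω, can be checked numerically; the particular values of A, F, and θ below are arbitrary example choices.

```python
import cmath
import math

# Numerical check: A*cos(Omega*t + theta) equals the sum of two
# complex-conjugate phasors (A/2)*exp(+j(...)) + (A/2)*exp(-j(...)).
A, Omega, theta = 2.0, 2 * math.pi * 5.0, math.pi / 3  # arbitrary values

for t in [0.0, 0.01, 0.02, 0.03]:
    cosine = A * math.cos(Omega * t + theta)
    phasors = ((A / 2) * cmath.exp(1j * (Omega * t + theta))
               + (A / 2) * cmath.exp(-1j * (Omega * t + theta)))
    # The imaginary parts cancel; the real parts sum to the cosine.
    assert abs(cosine - phasors.real) < 1e-12
    assert abs(phasors.imag) < 1e-12

print("cosine matches the sum of complex-conjugate phasors")
```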
To see this we recall that the sinusoidal signal (1.3.1) may be expressed as 20 Introduction Chap. 1 Xa (t) A COS(Qt O) e(1.3.6) 2 which follows from (1.3.5). Note that a sinusoidal signal can be obtained by adding two equal-amplitude complex-conjugate exponential signals, sometimes called phasors, illustrated in Fig. 1.11. As time progresses the phasors rotate in opposite directions with angular frequencies ±Q radians per second. Since a positive frequency corresponds to counterclockwise uniform angular motion, a negative frequency simply corresponds to clockwise angular motion. For mathematical convenience, we use both negative and positive frequencies throughout this book. Hence the frequency range for analog sinusoids is —oo < 1m Re Figure 1.11 Representation of a cosine function by a pair of complex-conjugate exponentials (phasors). 1 ,3.2 Discrete-Time Sinusoidal Signals A discrete-time sinusoidal signal may be expressed as x (n) = A COS(wn + 9). —oc < n <.37) where n is an integer variable. called the sample number. A is the amplitude of the sinusoid, co is the frequency in radians per sample. and 6 is the phase in radians If instead of we use the frequencv variable f defined by (1.3.8) the relation (1.3.7) becomes x (n) = A cos(2Tfn + 6). —x < n < oc (1.3.9) The frequency f has dimensions of cycles per sample. In Section 1.4. where we consider the sampiing of analog sinusoids, we relate the frequencv variable f of a discrete-time sinusoid to the frequency F in cycles per second for the analog sinusoid. For the moment we consider the discrete-time sinusoid in (1.3.7) independently of the continuous-time sinusoid given in Sec. 1.3 Frequency Concepts in Continuous-Discrete-Time Signals 21 (1.3.1). Figure 1.12 shows a sinusoid with frequency = r /6 radians per sample (f = cycles per sample) and phase 9 = n /3. x(n) = A cos(wn + 8) Figure 1.12 Example of a discrete-time sinusoidal signal (w = 7/6 and 8 = m /3). In contrast to continuous-time sinusoids. 
the discrete-time sinusoids are characterized by the following properties:

B1. A discrete-time sinusoid is periodic only if its frequency f is a rational number.

By definition, a discrete-time signal x(n) is periodic with period N (N > 0) if and only if

    x(n + N) = x(n)   for all n                          (1.3.10)

The smallest value of N for which (1.3.10) is true is called the fundamental period. The proof of the periodicity property is simple. For a sinusoid with frequency f_0 to be periodic, we should have

    cos[2πf_0(N + n) + θ] = cos(2πf_0 n + θ)

This relation is true if and only if there exists an integer k such that

    2πf_0 N = 2kπ

or, equivalently,

    f_0 = k/N                                            (1.3.11)

According to (1.3.11), a discrete-time sinusoidal signal is periodic only if its frequency f_0 can be expressed as the ratio of two integers (i.e., f_0 is rational). To determine the fundamental period N of a periodic sinusoid, we express its frequency f_0 as in (1.3.11) and cancel common factors so that k and N are relatively prime. Then the fundamental period of the sinusoid is equal to N. Observe that a small change in frequency can result in a large change in the period. For example, note that f_1 = 31/60 implies that N_1 = 60, whereas f_2 = 30/60 results in N_2 = 2.

B2. Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2π are identical.

To prove this assertion, let us consider the sinusoid cos(ω_0 n + θ). It easily follows that

    cos[(ω_0 + 2π)n + θ] = cos(ω_0 n + 2πn + θ) = cos(ω_0 n + θ)             (1.3.12)

As a result, all sinusoidal sequences

    x_k(n) = A cos(ω_k n + θ),   k = 0, 1, 2, ...        (1.3.13)

where ω_k = ω_0 + 2kπ, −π ≤ ω_0 ≤ π, are indistinguishable (i.e., identical). On the other hand, the sequences of any two sinusoids with frequencies in the range −π ≤ ω ≤ π, or −1/2 ≤ f ≤ 1/2, are distinct. Consequently, discrete-time sinusoidal signals with frequencies |ω| ≤ π, or |f| ≤ 1/2, are unique. Any sequence resulting from a sinusoid with a frequency |ω| > π, or |f| > 1/2, is identical to a sequence obtained from a sinusoidal signal with frequency |ω| < π.
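Property B1 and the 31/60 versus 30/60 example lend themselves to a quick computation: reducing f_0 = k/N to lowest terms and reading off the denominator gives the fundamental period.

```python
from fractions import Fraction

# Fundamental period of a periodic discrete-time sinusoid with rational
# frequency f0 = k/N: reduce k/N so k and N are relatively prime; the
# denominator of the reduced fraction is the fundamental period (property B1).
def fundamental_period(f0):
    return Fraction(f0).limit_denominator(10**6).denominator

print(fundamental_period(Fraction(31, 60)))  # 60
print(fundamental_period(Fraction(30, 60)))  # 2: 30/60 reduces to 1/2
```

The two nearby frequencies 31/60 and 30/60 indeed give the very different periods 60 and 2, as noted above.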
Because of this similarity, we call the sinusoid having the frequency |ω| > π an alias of a corresponding sinusoid with frequency |ω| < π. Thus we regard frequencies in the range |ω| ≤ π, or |f| ≤ 1/2, as unique and all frequencies |ω| > π, or |f| > 1/2, as aliases. The reader should notice the difference between discrete-time sinusoids and continuous-time sinusoids, where the latter result in distinct signals for Ω or F in the entire range −∞ < Ω < ∞ or −∞ < F < ∞.

B3. The highest rate of oscillation in a discrete-time sinusoid is attained when ω = π (or ω = −π) or, equivalently, f = 1/2 (or f = −1/2).

To illustrate this property, let us investigate the characteristics of the sinusoidal signal sequence

x(n) = cos ω0n

when the frequency varies from 0 to π. To simplify the argument, we take values of ω0 = 0, π/8, π/4, π/2, π, corresponding to f = 0, 1/16, 1/8, 1/4, 1/2, which result in periodic sequences having periods N = ∞, 16, 8, 4, 2, as depicted in Fig. 1.13. We note that the period of the sinusoid decreases as the frequency increases. In fact, we can see that the rate of oscillation increases as the frequency increases.

[Figure 1.13: Signal x(n) = cos ω0n for various values of the frequency ω0.]

To see what happens for π ≤ ω0 ≤ 2π, we consider the sinusoids with frequencies ω1 = ω0 and ω2 = 2π − ω0. Note that as ω1 varies from π to 2π, ω2 varies from π to 0. It can be easily seen that

x1(n) = A cos ω1n = A cos ω0n
x2(n) = A cos ω2n = A cos(2π − ω0)n = A cos(−ω0n) = A cos ω0n = x1(n)   (1.3.14)

Hence ω2 is an alias of ω1. If we had used a sine function instead of a cosine function, the result would basically be the same, except for a 180° phase difference between the sinusoids x1(n) and x2(n). In any case, as we increase the relative frequency of a discrete-time sinusoid from π to 2π, its rate of oscillation decreases. For ω0 = 2π the result is a constant signal, as in the case for ω0 = 0. Obviously, for ω0 = π (or f = 1/2) we have the highest rate of oscillation.
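Properties B1 and B2 are easy to check numerically. The following sketch (plain Python; the helper name and test values are ours, not the text's) reduces f0 = k/N to lowest terms to find the fundamental period, and confirms that frequencies separated by 2π yield identical sequences.

```python
import math

def fundamental_period(k, N):
    """Fundamental period of x(n) = cos(2*pi*(k/N)*n).

    Per property B1, write f0 = k/N in lowest terms; the fundamental
    period is the reduced denominator.
    """
    return N // math.gcd(k, N)

# A small change in frequency can change the period drastically:
print(fundamental_period(31, 60))   # f1 = 31/60 gives N1 = 60
print(fundamental_period(30, 60))   # f2 = 30/60 gives N2 = 2

# Property B2: omega0 and omega0 + 2*pi produce identical samples.
omega0 = math.pi / 6
x1 = [math.cos(omega0 * n) for n in range(20)]
x2 = [math.cos((omega0 + 2 * math.pi) * n) for n in range(20)]
print(all(abs(a - b) < 1e-9 for a, b in zip(x1, x2)))   # True
```

The gcd reduction is exactly the "cancel common factors so that k and N are relatively prime" step described above.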
As in the case of continuous-time signals, negative frequencies can be introduced as well for discrete-time signals. For this purpose we use the identity

x(n) = A cos(ωn + θ) = (A/2)e^{j(ωn+θ)} + (A/2)e^{−j(ωn+θ)}   (1.3.15)

Since discrete-time sinusoidal signals with frequencies that are separated by an integer multiple of 2π are identical, it follows that the frequencies in any interval ω1 ≤ ω ≤ ω1 + 2π constitute all the existing discrete-time sinusoids or complex exponentials. Hence the frequency range for discrete-time sinusoids is finite with duration 2π. Usually, we choose the range 0 ≤ ω ≤ 2π or −π ≤ ω ≤ π (−1/2 ≤ f ≤ 1/2), which we call the fundamental range.

1.3.3 Harmonically Related Complex Exponentials

Sinusoidal signals and complex exponentials play a major role in the analysis of signals and systems. In some cases we deal with sets of harmonically related complex exponentials (or sinusoids). These are sets of periodic complex exponentials with fundamental frequencies that are multiples of a single positive frequency. Although we confine our discussion to complex exponentials, the same properties clearly hold for sinusoidal signals. We consider harmonically related complex exponentials in both continuous time and discrete time.

Continuous-time exponentials. The basic signals for continuous-time, harmonically related exponentials are

sk(t) = e^{jkΩ0t} = e^{j2πkF0t},   k = 0, ±1, ±2, ...   (1.3.16)

We note that for each value of k, sk(t) is periodic with fundamental period 1/(kF0) = Tp/k or fundamental frequency kF0. Since a signal that is periodic with period Tp/k is also periodic with period k(Tp/k) = Tp for any positive integer k, we see that all of the sk(t) have a common period of Tp. Furthermore, according to Section 1.3.1, F0 is allowed to take any value and all members of the set are distinct, in the sense that if k1 ≠ k2, then sk1(t) ≠ sk2(t).
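As a small numerical illustration of the common period (a sketch in plain Python; the fundamental frequency F0 = 5 Hz and the test instant are our own choices), each harmonic sk(t) = e^{j2πkF0t} satisfies sk(t + Tp) = sk(t) with Tp = 1/F0:

```python
import cmath

F0 = 5.0          # illustrative fundamental frequency in Hz
Tp = 1.0 / F0     # common period of the whole harmonic family

def s(k, t):
    """kth harmonic s_k(t) = exp(j*2*pi*k*F0*t)."""
    return cmath.exp(2j * cmath.pi * k * F0 * t)

# Every member of the set repeats with the common period Tp:
t = 0.137                         # arbitrary test instant
for k in (1, 2, 3, 7):
    assert abs(s(k, t + Tp) - s(k, t)) < 1e-9
print("all harmonics share the common period Tp = 1/F0")
```

The kth harmonic completes exactly k cycles during Tp, which is why shifting t by Tp multiplies sk(t) by e^{j2πk} = 1.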
From the basic signals in (1.3.16) we can construct a linear combination of harmonically related complex exponentials of the form

xa(t) = Σ_{k=−∞}^{∞} ck sk(t) = Σ_{k=−∞}^{∞} ck e^{jkΩ0t}   (1.3.17)

where ck, k = 0, ±1, ±2, ..., are arbitrary complex constants. The signal xa(t) is periodic with fundamental period Tp = 1/F0, and its representation in terms of (1.3.17) is called the Fourier series expansion for xa(t). The complex-valued constants ck are the Fourier series coefficients and the signal sk(t) is called the kth harmonic of xa(t).

Discrete-time exponentials. Since a discrete-time complex exponential is periodic if its relative frequency is a rational number, we choose f0 = 1/N and we define the sets of harmonically related complex exponentials by

sk(n) = e^{j2πkn/N},   k = 0, ±1, ±2, ...   (1.3.18)

In contrast to the continuous-time case, we note that

s_{k+N}(n) = e^{j2πn(k+N)/N} = e^{j2πn} sk(n) = sk(n)

This means that, consistent with (1.3.10), there are only N distinct periodic complex exponentials in the set described by (1.3.18). Furthermore, all members of the set have a common period of N samples. Clearly, we can choose any N consecutive complex exponentials, say from k = n0 to k = n0 + N − 1, to form a harmonically related set with fundamental frequency f0 = 1/N. Most often, for convenience, we choose the set that corresponds to n0 = 0, that is, the set

sk(n) = e^{j2πkn/N},   k = 0, 1, 2, ..., N − 1   (1.3.19)

As in the case of continuous-time signals, it is obvious that the linear combination

x(n) = Σ_{k=0}^{N−1} ck e^{j2πkn/N}   (1.3.20)

results in a periodic signal with fundamental period N. As we shall see later, this is the Fourier series representation for a periodic discrete-time sequence with Fourier coefficients {ck}. The sequence sk(n) is called the kth harmonic of x(n).

Example 1.3.1

Stored in the memory of a digital signal processor is one cycle of the sinusoidal signal

x(n) = sin(2πn/N + θ)

where θ = 2πq/N, and q and N are integers.
(a) Determine how this table of values can be used to obtain values of harmonically related sinusoids having the same phase.

(b) Determine how this table can be used to obtain sinusoids of the same frequency but different phase.

Solution

(a) Let xk(n) denote the sinusoidal signal sequence

xk(n) = sin(2πnk/N + θ)

This is a sinusoid with frequency fk = k/N, which is harmonically related to x(n). But xk(n) may be expressed as

xk(n) = sin(2π(kn)/N + θ) = x(kn)

Thus we observe that xk(0) = x(0), xk(1) = x(k), xk(2) = x(2k), and so on. Hence the sinusoidal sequence xk(n) can be obtained from the table of values of x(n) by taking every kth value of x(n), beginning with x(0). In this manner we can generate the values of all harmonically related sinusoids with frequencies fk = k/N for k = 1, 2, ..., N − 1.

(b) We can control the phase θ of the sinusoid with frequency fk = k/N by taking the first value of the sequence from memory location q, where

q = θN/2π

and q is an integer. Thus the initial phase θ controls the starting location in the table and we wrap around the table each time the index (kn) exceeds N.

1.4 ANALOG-TO-DIGITAL AND DIGITAL-TO-ANALOG CONVERSION

Most signals of practical interest, such as speech, biological signals, seismic signals, radar signals, sonar signals, and various communications signals such as audio and video signals, are analog. To process analog signals by digital means, it is first necessary to convert them into digital form, that is, to convert them to a sequence of numbers having finite precision. This procedure is called analog-to-digital (A/D) conversion, and the corresponding devices are called A/D converters (ADCs).

Conceptually, we view A/D conversion as a three-step process. This process is illustrated in Fig. 1.14.

1. Sampling. This is the conversion of a continuous-time signal into a discrete-time signal obtained by taking "samples" of the continuous-time signal at discrete-time instants.
Thus, if xa(t) is the input to the sampler, the output is xa(nT) = x(n), where T is called the sampling interval.

2. Quantization. This is the conversion of a discrete-time, continuous-valued signal into a discrete-time, discrete-valued (digital) signal. The value of each signal sample is represented by a value selected from a finite set of possible values. The difference between the unquantized sample x(n) and the quantized output xq(n) is called the quantization error.

[Figure 1.14: Basic parts of an analog-to-digital (A/D) converter: the analog signal passes through a sampler (discrete-time signal), a quantizer (quantized signal), and a coder (digital signal).]

3. Coding. In the coding process, each discrete value xq(n) is represented by a b-bit binary sequence.

Although we model the A/D converter as a sampler followed by a quantizer and coder, in practice the A/D conversion is performed by a single device that takes xa(t) and produces a binary-coded number. The operations of sampling and quantization can be performed in either order but, in practice, sampling is always performed before quantization.

In many cases of practical interest (e.g., speech processing) it is desirable to convert the processed digital signals into analog form. (Obviously, we cannot listen to the sequence of samples representing a speech signal or see the numbers corresponding to a TV signal.) The process of converting a digital signal into an analog signal is known as digital-to-analog (D/A) conversion. All D/A converters "connect the dots" in a digital signal by performing some kind of interpolation, whose accuracy depends on the quality of the D/A conversion process. Figure 1.15 illustrates a simple form of D/A conversion, called a zero-order hold or a staircase approximation. Other approximations are possible, such as linearly connecting a pair of successive samples (linear interpolation), fitting a quadratic through three successive samples (quadratic interpolation),
and so on. Is there an optimum (ideal) interpolator? For signals having a limited frequency content (finite bandwidth), the sampling theorem introduced in the following section specifies the optimum form of interpolation.

Sampling and quantization are treated in this section. In particular, we demonstrate that sampling does not result in a loss of information, nor does it introduce distortion in the signal, if the signal bandwidth is finite. In principle, the analog signal can be reconstructed from the samples, provided that the sampling rate is sufficiently high to avoid the problem commonly called aliasing. On the other hand, quantization is a noninvertible or irreversible process that results in signal distortion. We shall show that the amount of distortion depends on the accuracy, as measured by the number of bits, in the A/D conversion process. The factors affecting the choice of the desired accuracy of the A/D converter are cost and sampling rate. In general, the cost increases with an increase in accuracy and/or sampling rate.

[Figure 1.15: Zero-order hold digital-to-analog (D/A) conversion, showing the original signal and its staircase approximation.]

1.4.1 Sampling of Analog Signals

There are many ways to sample an analog signal. We limit our discussion to periodic or uniform sampling, which is the type of sampling used most often in practice. This is described by the relation

x(n) = xa(nT),   −∞ < n < ∞   (1.4.1)

where x(n) is the discrete-time signal obtained by "taking samples" of the analog signal xa(t) every T seconds. This procedure is illustrated in Fig. 1.16. The time interval T between successive samples is called the sampling period or sample interval and its reciprocal 1/T = Fs is called the sampling rate (samples per second) or the sampling frequency (hertz).

Periodic sampling establishes a relationship between the time variables t and n of continuous-time and discrete-time signals, respectively.
Indeed, these variables are linearly related through the sampling period T or, equivalently, through the sampling rate Fs = 1/T, as

t = nT = n/Fs   (1.4.2)

As a consequence of (1.4.2), there exists a relationship between the frequency variable F (or Ω) for analog signals and the frequency variable f (or ω) for discrete-time signals. To establish this relationship, consider an analog sinusoidal signal of the form

xa(t) = A cos(2πFt + θ)   (1.4.3)

[Figure 1.16: Periodic sampling of an analog signal: the sampler converts xa(t) into x(n) = xa(nT).]

which, when sampled periodically at a rate Fs = 1/T samples per second, yields

xa(nT) = x(n) = A cos(2πFnT + θ) = A cos(2πnF/Fs + θ)   (1.4.4)

If we compare (1.4.4) with (1.3.9), we note that the frequency variables F and f are linearly related as

f = F/Fs   (1.4.5)

or, equivalently, as

ω = ΩT   (1.4.6)

The relation in (1.4.5) justifies the name relative or normalized frequency, which is sometimes used to describe the frequency variable f. As (1.4.5) implies, we can use f to determine the frequency F in hertz only if the sampling frequency Fs is known.

We recall from Section 1.3.1 that the ranges of the frequency variables F and Ω for continuous-time sinusoids are

−∞ < F < ∞
−∞ < Ω < ∞   (1.4.7)

However, the situation is different for discrete-time sinusoids. From Section 1.3.2 we recall that

−1/2 ≤ f ≤ 1/2
−π ≤ ω ≤ π   (1.4.8)

By substituting from (1.4.5) and (1.4.6) into (1.4.8), we find that the frequency of the continuous-time sinusoid when sampled at a rate Fs = 1/T must fall in the range

−1/(2T) = −Fs/2 ≤ F ≤ Fs/2 = 1/(2T)   (1.4.9)

or, equivalently,

−π/T = −πFs ≤ Ω ≤ πFs = π/T   (1.4.10)

These relations are summarized in Table 1.1.
TABLE 1.1 RELATIONS AMONG FREQUENCY VARIABLES

  Continuous-time signals                 Discrete-time signals
  Ω = 2πF                                 ω = 2πf
  Ω in radians/sec, F in hertz            ω in radians/sample, f in cycles/sample
                     ω = ΩT,   f = F/Fs
                     Ω = ω/T,  F = f·Fs
  −∞ < Ω < ∞                              −π ≤ ω ≤ π
  −∞ < F < ∞                              −1/2 ≤ f ≤ 1/2

From these relations we observe that the fundamental difference between continuous-time and discrete-time signals is in their range of values of the frequency variables F and f, or Ω and ω. Periodic sampling of a continuous-time signal implies a mapping of the infinite frequency range for the variable F (or Ω) into a finite frequency range for the variable f (or ω). Since the highest frequency in a discrete-time signal is ω = π or f = 1/2, it follows that, with a sampling rate Fs, the corresponding highest values of F and Ω are

Fmax = Fs/2 = 1/(2T),   Ωmax = πFs = π/T   (1.4.11)

Therefore, sampling introduces an ambiguity, since the highest frequency in a continuous-time signal that can be uniquely distinguished when such a signal is sampled at a rate Fs = 1/T is Fmax = Fs/2, or Ωmax = πFs. To see what happens to frequencies above Fs/2, let us consider the following example.

Example 1.4.1

The implications of these frequency relations can be fully appreciated by considering the two analog sinusoidal signals

x1(t) = cos 2π(10)t
x2(t) = cos 2π(50)t   (1.4.12)

which are sampled at a rate Fs = 40 Hz. The corresponding discrete-time signals or sequences are

x1(n) = cos 2π(10/40)n = cos(πn/2)
x2(n) = cos 2π(50/40)n = cos(5πn/2)   (1.4.13)

However, cos 5πn/2 = cos(2πn + πn/2) = cos πn/2. Hence x2(n) = x1(n). Thus the sinusoidal signals are identical and, consequently, indistinguishable. If we are given the sampled values generated by cos(πn/2), there is some ambiguity as to whether these sampled values correspond to x1(t) or x2(t). Since x2(t) yields exactly the same values as x1(t) when the two are sampled at Fs = 40 samples per second, we say that the frequency F2 = 50 Hz is an alias of the frequency F1 = 10 Hz at the sampling rate of 40 samples per second.

It is important to note that F2 is not the only alias of F1. In fact, at the sampling rate of 40 samples per second,
the frequency F3 = 90 Hz is also an alias of F1, as is the frequency F4 = 130 Hz, and so on. All of the sinusoids cos 2π(F1 + 40k)t, k = 1, 2, 3, ..., sampled at 40 samples per second, yield identical values. Consequently, they are all aliases of F1 = 10 Hz.

In general, the sampling of a continuous-time sinusoidal signal

xa(t) = A cos(2πF0t + θ)   (1.4.14)

with a sampling rate Fs = 1/T results in a discrete-time signal

x(n) = A cos(2πf0n + θ)   (1.4.15)

where f0 = F0/Fs is the relative frequency of the sinusoid. If we assume that −Fs/2 ≤ F0 ≤ Fs/2, the frequency f0 of x(n) is in the range −1/2 ≤ f0 ≤ 1/2, which is the frequency range for discrete-time signals. In this case, the relationship between F0 and f0 is one-to-one, and hence it is possible to identify (or reconstruct) the analog signal xa(t) from the samples x(n).

On the other hand, if the sinusoids

xa(t) = A cos(2πFk t + θ)   (1.4.16)

where

Fk = F0 + kFs,   k = ±1, ±2, ...   (1.4.17)

are sampled at a rate Fs, it is clear that the frequency Fk is outside the fundamental frequency range −Fs/2 ≤ F ≤ Fs/2. Consequently, the sampled signal is

x(n) = xa(nT) = A cos(2π(F0 + kFs)n/Fs + θ)
     = A cos(2πnF0/Fs + θ + 2πkn)
     = A cos(2πf0n + θ)

which is identical to the discrete-time signal in (1.4.15) obtained by sampling (1.4.14). Thus an infinite number of continuous-time sinusoids is represented by sampling the same discrete-time signal (i.e., by the same set of samples). Consequently, if we are given the sequence x(n), an ambiguity exists as to which continuous-time signal xa(t) these values represent. Equivalently, we can say that the frequencies Fk = F0 + kFs, −∞ < k < ∞ (k integer), are indistinguishable from the frequency F0 after sampling, and hence they are aliases of F0. The relationship between the frequency variables of the continuous-time and discrete-time signals is illustrated in Fig. 1.17.

An example of aliasing is illustrated in Fig. 1.18, where two sinusoids with frequencies F0 = 1/8 Hz and F1 = −7/8 Hz yield identical samples when a sampling rate of Fs = 1 Hz is used.
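The family of aliases in (1.4.17) can be checked numerically. This sketch (plain Python; the helper name is ours) samples F0 = 10 Hz and its aliases 50, 90, and 130 Hz at Fs = 40 Hz, as in Example 1.4.1, and confirms that the sample sequences coincide:

```python
import math

def sampled_cos(F, Fs, num_samples):
    """Samples of cos(2*pi*F*t) taken at the instants t = n/Fs."""
    return [math.cos(2 * math.pi * F * n / Fs) for n in range(num_samples)]

Fs = 40.0
ref = sampled_cos(10.0, Fs, 16)        # F1 = 10 Hz
for k in (1, 2, 3):                    # aliases F1 + k*Fs = 50, 90, 130 Hz
    alias = sampled_cos(10.0 + k * Fs, Fs, 16)
    assert all(abs(a - b) < 1e-6 for a, b in zip(ref, alias))
print("10 Hz and its aliases 50, 90, 130 Hz are indistinguishable at Fs = 40 Hz")
```

Each alias adds an integer number of whole cycles, 2πkn radians, between consecutive samples, which the cosine cannot register.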
From (1.4.17) it easily follows that for k = −1,

F1 = F0 − Fs = (1/8 − 1) Hz = −7/8 Hz

[Figure 1.17: Relationship between the continuous-time and discrete-time frequency variables in the case of periodic sampling.]

[Figure 1.18: Illustration of aliasing.]

Since Fs/2, which corresponds to ω = π, is the highest frequency that can be represented uniquely with a sampling rate Fs, it is a simple matter to determine the mapping of any (alias) frequency above Fs/2 (ω = π) into the equivalent frequency below Fs/2. We can use Fs/2 or ω = π as the pivotal point and reflect or "fold" the alias frequency into the range 0 ≤ ω ≤ π. Since the point of reflection is Fs/2 (ω = π), the frequency Fs/2 (ω = π) is called the folding frequency.

Example 1.4.2

Consider the analog signal

xa(t) = 3 cos 100πt

(a) Determine the minimum sampling rate required to avoid aliasing.

(b) Suppose that the signal is sampled at the rate Fs = 200 Hz. What is the discrete-time signal obtained after sampling?

(c) Suppose that the signal is sampled at the rate Fs = 75 Hz. What is the discrete-time signal obtained after sampling?

(d) What is the frequency 0 < F < Fs/2 of a sinusoid that yields samples identical to those obtained in part (c)?

Solution

(a) The frequency of the analog signal is F = 50 Hz. Hence the minimum sampling rate required to avoid aliasing is Fs = 100 Hz.

(b) If the signal is sampled at Fs = 200 Hz, the discrete-time signal is

x(n) = 3 cos(100πn/200) = 3 cos(πn/2)

(c) If the signal is sampled at Fs = 75 Hz, the discrete-time signal is

x(n) = 3 cos(100πn/75) = 3 cos(4πn/3)
     = 3 cos((2π − 2π/3)n)
     = 3 cos(2πn/3)

(d) For the sampling rate of Fs = 75 Hz, we have

F = fFs = 75f

The frequency of the sinusoid in part (c) is f = 1/3. Hence

F = 25 Hz

Clearly, the sinusoidal signal

ya(t) = 3 cos 2πFt = 3 cos 50πt

sampled at Fs = 75 samples/s yields identical samples.
Hence F = 50 Hz is an alias of F = 25 Hz for the sampling rate Fs = 75 Hz.

1.4.2 The Sampling Theorem

Given any analog signal, how should we select the sampling period T or, equivalently, the sampling rate Fs? To answer this question, we must have some information about the characteristics of the signal to be sampled. In particular, we must have some general information concerning the frequency content of the signal. Such information is generally available to us. For example, we know generally that the major frequency components of a speech signal fall below 3000 Hz. On the other hand, television signals, in general, contain important frequency components up to 5 MHz. The information content of such signals is contained in the amplitudes, frequencies, and phases of the various frequency components, but detailed knowledge of the characteristics of such signals is not available to us prior to obtaining the signals. In fact, the purpose of processing the signals is usually to extract this detailed information. However, if we know the maximum frequency content of the general class of signals (e.g., the class of speech signals, the class of video signals, etc.), we can specify the sampling rate necessary to convert the analog signals to digital signals.

Let us suppose that any analog signal can be represented as a sum of sinusoids of different amplitudes, frequencies, and phases, that is,

xa(t) = Σ_{i=1}^{N} Ai cos(2πFi t + θi)   (1.4.18)

where N denotes the number of frequency components. All signals, such as speech and video, lend themselves to such a representation over any short time segment. The amplitudes, frequencies, and phases usually change slowly with time from one time segment to another. However, suppose that the frequencies do not exceed some known frequency, say Fmax. For example, Fmax = 3000 Hz for the class of speech signals and Fmax = 5 MHz for television signals.
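The alias computation used in Example 1.4.2, subtracting the integer multiple of Fs nearest to F as in (1.4.17), can be sketched as a small helper (the function name is our own):

```python
def apparent_frequency(F, Fs):
    """Frequency in [-Fs/2, Fs/2] that F appears as after sampling at rate Fs.

    Subtracts the integer multiple of Fs nearest to F, per (1.4.17).
    """
    k = round(F / Fs)
    return F - k * Fs

# Example 1.4.2: at Fs = 75 Hz, 50 Hz folds to -25 Hz, which gives the
# same samples as 25 Hz since cosine is an even function:
print(apparent_frequency(50.0, 75.0))     # -25.0
# Example 1.4.1: at Fs = 40 Hz, 50 Hz aliases to 10 Hz:
print(apparent_frequency(50.0, 40.0))     # 10.0
# Frequencies already below Fs/2 are unaffected:
print(apparent_frequency(10.0, 40.0))     # 10.0
```

A negative result corresponds to a negative apparent frequency, exactly as in the Fig. 1.18 example where −7/8 Hz and 1/8 Hz share the same samples.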
Since the maximum frequency may vary slightly from different realizations among signals of any given class (e.g., it may vary slightly from speaker to speaker), we may wish to ensure that Fmax does not exceed some predetermined value by passing the analog signal through a filter that severely attenuates frequency components above Fmax. Thus we are certain that no signal in the class contains frequency components (having significant amplitude or power) above Fmax. In practice, such filtering is commonly used prior to sampling.

From our knowledge of Fmax, we can select the appropriate sampling rate. We know that the highest frequency in an analog signal that can be unambiguously reconstructed when the signal is sampled at a rate Fs = 1/T is Fs/2. Any frequency above Fs/2 or below −Fs/2 results in samples that are identical with those of a corresponding frequency in the range −Fs/2 ≤ F ≤ Fs/2. To avoid the ambiguities resulting from aliasing, we must select the sampling rate to be sufficiently high. That is, we must select Fs/2 to be greater than Fmax. Thus to avoid the problem of aliasing, Fs is selected so that

Fs > 2Fmax   (1.4.19)

where Fmax is the largest frequency component in the analog signal. With the sampling rate selected in this manner, any frequency component, say |Fi| < Fmax, in the analog signal is mapped into a discrete-time sinusoid with a frequency

−1/2 ≤ fi = Fi/Fs ≤ 1/2   (1.4.20)

or, equivalently,

−π ≤ ωi = 2πfi ≤ π   (1.4.21)

Since |f| = 1/2, or |ω| = π, is the highest (unique) frequency in a discrete-time signal, the choice of sampling rate according to (1.4.19) avoids the problem of aliasing. In other words, the condition Fs > 2Fmax ensures that all the sinusoidal components in the analog signal are mapped into corresponding discrete-time frequency components with frequencies in the fundamental interval.
Thus all the frequency components of the analog signal are represented in sampled form without ambiguity, and hence the analog signal can be reconstructed without distortion from the sample values using an "appropriate" interpolation (digital-to-analog conversion) method. The "appropriate" or ideal interpolation formula is specified by the sampling theorem.

Sampling Theorem. If the highest frequency contained in an analog signal xa(t) is Fmax = B and the signal is sampled at a rate Fs > 2Fmax = 2B, then xa(t) can be exactly recovered from its sample values using the interpolation function

g(t) = sin(2πBt)/(2πBt)   (1.4.22)

Thus xa(t) may be expressed as

xa(t) = Σ_{n=−∞}^{∞} xa(n/Fs) g(t − n/Fs)   (1.4.23)

where xa(n/Fs) = xa(nT) = x(n) are the samples of xa(t).

When the sampling of xa(t) is performed at the minimum sampling rate Fs = 2B, the reconstruction formula in (1.4.23) becomes

xa(t) = Σ_{n=−∞}^{∞} xa(n/2B) · sin 2πB(t − n/2B) / [2πB(t − n/2B)]   (1.4.24)

The sampling rate FN = 2B = 2Fmax is called the Nyquist rate. Figure 1.19 illustrates the ideal D/A conversion process using the interpolation function in (1.4.22).

As can be observed from either (1.4.23) or (1.4.24), the reconstruction of xa(t) from the sequence x(n) is a complicated process, involving a weighted sum of the interpolation function g(t) and its time-shifted versions g(t − nT) for −∞ < n < ∞, where the weighting factors are the samples x(n). Because of the complexity and the infinite number of samples required in (1.4.23) or (1.4.24), these reconstruction formulas are primarily of theoretical interest. Practical interpolation methods are given in Chapter 9.

[Figure 1.19: Ideal D/A conversion (interpolation).]

Example 1.4.3

Consider the analog signal

xa(t) = 3 cos 50πt + 10 sin 300πt − cos 100πt

What is the Nyquist rate for this signal?

Solution The frequencies present in the signal above are

F1 = 25 Hz,   F2 = 150 Hz,   F3 = 50 Hz

Thus Fmax = 150 Hz and, according to (1.4.19),
Fs > 2Fmax = 300 Hz

The Nyquist rate is FN = 2Fmax. Hence

FN = 300 Hz

Discussion It should be observed that the signal component 10 sin 300πt, sampled at the Nyquist rate FN = 300 samples per second, results in the samples 10 sin πn, which are identically zero. In other words, we are sampling the analog sinusoid at its zero-crossing points, and hence we miss this signal component completely. This situation would not occur if the sinusoid is offset in phase by some amount θ. In such a case we have 10 sin(300πt + θ) sampled at the Nyquist rate FN = 300 samples per second, which yields the samples

10 sin(πn + θ) = 10(sin πn cos θ + cos πn sin θ)
              = 10 sin θ cos πn
              = (−1)^n 10 sin θ

Thus if θ ≠ 0 or π, the samples of the sinusoid taken at the Nyquist rate are not all zero. However, we still cannot obtain the correct amplitude from the samples when the phase θ is unknown. A simple remedy that avoids this potentially troublesome situation is to sample the analog signal at a rate higher than the Nyquist rate.

Example 1.4.4

Consider the analog signal

xa(t) = 3 cos 2000πt + 5 sin 6000πt + 10 cos 12,000πt

(a) What is the Nyquist rate for this signal?

(b) Assume now that we sample this signal using a sampling rate Fs = 5000 samples/s. What is the discrete-time signal obtained after sampling?

(c) What is the analog signal ya(t) we can reconstruct from the samples if we use ideal interpolation?

Solution

(a) The frequencies existing in the analog signal are

F1 = 1 kHz,   F2 = 3 kHz,   F3 = 6 kHz

Thus Fmax = 6 kHz, and according to the sampling theorem,

Fs > 2Fmax = 12 kHz

The Nyquist rate is

FN = 12 kHz

(b) Since we have chosen Fs = 5 kHz, the folding frequency is

Fs/2 = 2.5 kHz

and this is the maximum frequency that can be represented uniquely by the sampled signal. By making use of (1.4.2) we obtain

x(n) = xa(nT) = xa(n/Fs)
     = 3 cos 2π(1/5)n + 5 sin 2π(3/5)n + 10 cos 2π(6/5)n
     = 3 cos 2π(1/5)n + 5 sin 2π(1 − 2/5)n + 10 cos 2π(1 + 1/5)n
     = 3 cos 2π(1/5)n − 5 sin 2π(2/5)n + 10 cos 2π(1/5)n

Finally, we obtain

x(n) = 13 cos 2π(1/5)n − 5 sin 2π(2/5)n

The same result can be obtained using Fig. 1.17. Indeed,
since Fs = 5 kHz, the folding frequency is Fs/2 = 2.5 kHz. This is the maximum frequency that can be represented uniquely by the sampled signal. From (1.4.17) we have F0 = Fk − kFs. Thus F0 can be obtained by subtracting from Fk an integer multiple of Fs such that −Fs/2 ≤ F0 ≤ Fs/2. The frequency F1 is less than Fs/2 and thus it is not affected by aliasing. However, the other two frequencies are above the folding frequency and they will be changed by the aliasing effect. Indeed,

F2' = F2 − Fs = −2 kHz
F3' = F3 − Fs = 1 kHz

From (1.4.5) it follows that f1 = 1/5, f2 = −2/5, and f3 = 1/5, which are in agreement with the result above.

(c) Since only the frequency components at 1 kHz and 2 kHz are present in the sampled signal, the analog signal we can recover is

ya(t) = 13 cos 2000πt − 5 sin 4000πt

which is obviously different from the original signal xa(t). This distortion of the original analog signal was caused by the aliasing effect, due to the low sampling rate used.

Although aliasing is a pitfall to be avoided, there are two useful practical applications based on the exploitation of the aliasing effect. These applications are the stroboscope and the sampling oscilloscope. Both instruments are designed to operate as aliasing devices in order to represent high frequencies as low frequencies. To elaborate, consider a signal with high-frequency components confined to a given frequency band B1 < F < B2, where B2 − B1 ≡ B is defined as the bandwidth of the signal. We assume that B << B1 < B2. This condition means that the frequency components in the signal are much larger than the bandwidth B of the signal. Such signals are usually called passband or narrowband signals. Now, if this signal is sampled at a rate Fs ≥ 2B, but Fs << B1, then all the frequency components contained in the signal will be aliases of frequencies in the range 0 < F < Fs/2. Consequently, if we observe the frequency content of the signal in the fundamental range 0 < F < Fs/2,
we know precisely the frequency content of the analog signal, since we know the frequency band B1 < F < B2 under consideration. Consequently, if the signal is a narrowband (passband) signal, we can reconstruct the original signal from the samples, provided that the signal is sampled at a rate Fs > 2B, where B is the bandwidth. This statement constitutes another form of the sampling theorem, which we call the passband form in order to distinguish it from the previous form of the sampling theorem, which applies in general to all types of signals. The latter is sometimes called the baseband form. The passband form of the sampling theorem is described in detail in Section 9.1.2.

1.4.3 Quantization of Continuous-Amplitude Signals

As we have seen, a digital signal is a sequence of numbers (samples) in which each number is represented by a finite number of digits (finite precision). The process of converting a discrete-time continuous-amplitude signal into a digital signal by expressing each sample value as a finite (instead of an infinite) number of digits is called quantization. The error introduced in representing the continuous-valued signal by a finite set of discrete value levels is called quantization error or quantization noise.

We denote the quantizer operation on the samples x(n) as Q[x(n)] and let xq(n) denote the sequence of quantized samples at the output of the quantizer. Hence

xq(n) = Q[x(n)]

Then the quantization error is a sequence eq(n) defined as the difference between the quantized value and the actual sample value. Thus

eq(n) = xq(n) − x(n)   (1.4.25)

We illustrate the quantization process with an example. Let us consider the discrete-time signal

x(n) = 0.9^n,  n ≥ 0
x(n) = 0,      n < 0

obtained by sampling the analog exponential signal xa(t) = 0.9^t, t ≥ 0, with a sampling frequency Fs = 1 Hz (see Fig. 1.20(a)). Observation of Table 1.2, which shows the values of the first 10 samples of x(n), reveals that the description of the sample value x(n) requires n significant digits.
It is obvious that this signal cannot be processed by using a calculator or a digital computer, since only the first few samples can be stored and manipulated. For example, most calculators process numbers with only eight significant digits.

[Figure 1.20: (a) The signal x(n) = 0.9^n obtained by sampling xa(t) = 0.9^t with T = 1 sec; (b) illustration of quantization, showing the quantization levels and the quantization step.]

[Table 1.2: Numerical illustration of quantization with one significant digit using truncation or rounding.]

However, let us assume that we want to use only one significant digit. To eliminate the excess digits, we can either simply discard them (truncation) or discard them after rounding the resulting number (rounding). The resulting quantized signals xq(n) are shown in Table 1.2. We discuss only quantization by rounding, although it is just as easy to treat truncation. The rounding process is graphically illustrated in Fig. 1.20(b). The values allowed in the digital signal are called the quantization levels, whereas the distance Δ between two successive quantization levels is called the quantization step size or resolution. The rounding quantizer assigns each sample of x(n) to the nearest quantization level. In contrast, a quantizer that performs truncation would have assigned each sample of x(n) to the quantization level below it. The quantization error eq(n) in rounding is limited to the range of −Δ/2 to Δ/2, that is,

−Δ/2 ≤ eq(n) ≤ Δ/2   (1.4.26)

In other words, the instantaneous quantization error cannot exceed half of the quantization step (see Table 1.2).

If xmin and xmax represent the minimum and maximum value of x(n) and L is the number of quantization levels, then

Δ = (xmax − xmin)/(L − 1)   (1.4.27)

We define the dynamic range of the signal as xmax − xmin. In our example we have xmax = 1, xmin = 0, and L = 11, which leads to Δ = 0.1.
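The rounding quantizer of this example can be sketched in a few lines (plain Python; the helper name is ours). It quantizes x(n) = 0.9^n with step Δ = 0.1 and checks the error bound (1.4.26):

```python
def quantize_round(x, delta):
    """Rounding quantizer: map x to the nearest multiple of delta."""
    return delta * round(x / delta)

delta = 0.1                               # step size for L = 11 levels on [0, 1]
x = [0.9 ** n for n in range(10)]         # samples x(n) = 0.9^n
xq = [quantize_round(v, delta) for v in x]
eq = [q - v for q, v in zip(xq, x)]

# Per (1.4.26), the rounding error never exceeds half a quantization step:
print(max(abs(e) for e in eq) <= delta / 2)   # True
```

A truncating quantizer would instead use floor division of x by delta, placing each sample on the level just below it, with error confined to [−Δ, 0].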
Note that if the dynamic range is fixed, increasing the number of quantization levels L results in a decrease of the quantization step size Δ. Thus the quantization error decreases and the accuracy of the quantizer increases. In practice we can reduce the quantization error to an insignificant amount by choosing a sufficient number of quantization levels.

Theoretically, quantization of analog signals always results in a loss of information. This is a result of the ambiguity introduced by quantization. Indeed, quantization is an irreversible or noninvertible process (i.e., a many-to-one mapping), since all samples within a distance Δ/2 of a certain quantization level are assigned the same value. This ambiguity makes the exact quantitative analysis of quantization extremely difficult. This subject is discussed further in Chapter 9, where we use statistical analysis.

1.4.4 Quantization of Sinusoidal Signals

Figure 1.21 illustrates the sampling and quantization of an analog sinusoidal signal xa(t) = A cos Ω0t using a rectangular grid. Horizontal lines within the range of the quantizer indicate the allowed levels of quantization. Vertical lines indicate the sampling times. Thus, from the original analog signal xa(t) we obtain a discrete-time signal x(n) = xa(nT) by sampling and a discrete-time, discrete-amplitude signal xq(nT) after quantization. In practice, the staircase signal xq(t) can be obtained by using a zero-order hold. This analysis is useful because sinusoids are used as test signals in A/D converters.

[Figure 1.21 Sampling and quantization of a sinusoidal signal, showing the time and amplitude discretization, the quantization levels, the quantization step, and the range of the quantizer.]

If the sampling rate Fs satisfies the sampling theorem, quantization is the only error in the A/D conversion process. Thus we can evaluate the quantization error
by quantizing the analog signal xa(t) instead of the discrete-time signal x(n) = xa(nT). Inspection of Fig. 1.21 indicates that the signal xa(t) is almost linear between quantization levels (see Fig. 1.22). The corresponding quantization error eq(t) = xa(t) − xq(t) is shown in Fig. 1.22, where τ denotes the time that xa(t) stays within the quantization levels. The mean-square error power Pq is

    Pq = (1/2τ) ∫[−τ, τ] eq²(t) dt = (1/τ) ∫[0, τ] eq²(t) dt    (1.4.28)

Since eq(t) = (Δ/2τ)t, −τ ≤ t ≤ τ, we have

    Pq = (1/τ) ∫[0, τ] (Δ/2τ)² t² dt = Δ²/12    (1.4.29)

[Figure 1.22 The quantization error eq(t) = xa(t) − xq(t).]

If the quantizer has b bits of accuracy and the quantizer covers the entire range 2A, the quantization step is Δ = 2A/2^b. Hence

    Pq = A²/(3 · 2^(2b))    (1.4.30)

The average power of the signal xa(t) is

    Px = (1/Tp) ∫[0, Tp] (A cos Ω0t)² dt = A²/2    (1.4.31)

The quality of the output of the A/D converter is usually measured by the signal-to-quantization-noise ratio (SQNR), which provides the ratio of the signal power to the noise power:

    SQNR = Px/Pq = (3/2) · 2^(2b)

Expressed in decibels (dB), the SQNR is

    SQNR(dB) = 10 log10 SQNR = 1.76 + 6.02b    (1.4.32)

This implies that the SQNR increases approximately 6 dB for every bit added to the word length, that is, for each doubling of the quantization levels. Although formula (1.4.32) was derived for sinusoidal signals, we shall see in Chapter 9 that a similar result holds for every signal whose dynamic range spans the range of the quantizer. This relationship is extremely important because it dictates the number of bits required by a specific application to assure a given signal-to-noise ratio. For example, most compact disc players use a sampling frequency of 44.1 kHz and 16-bit sample resolution, which implies a SQNR of more than 96 dB.

1.4.5 Coding of Quantized Samples

The coding process in an A/D converter assigns a unique binary number to each quantization level. If we have L levels, we need at least L different binary numbers. With a word length of b bits we can create 2^b different binary numbers. Hence we should have 2^b ≥ L, or equivalently, b ≥ log2 L.
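As a quick numerical check on the two formulas above (a sketch of our own, not from the text), the following evaluates the SQNR rule of Eq. (1.4.32) and the word length implied by 2^b ≥ L:

```python
import math

def sqnr_db(b):
    """SQNR of Eq. (1.4.32): 10*log10((3/2)*2**(2b)) = 1.76 + 6.02*b."""
    return 10 * math.log10(1.5 * 2 ** (2 * b))

def coder_bits(L):
    """Smallest word length b with 2**b >= L quantization levels."""
    return math.ceil(math.log2(L))

print(round(sqnr_db(16), 2))   # -> 98.09, the "more than 96 dB" of CD audio
print(coder_bits(11))          # -> 4 bits for the L = 11 levels of our example
```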
Thus the number of bits required in the coder is the smallest integer greater than or equal to log2 L. In our example it can easily be seen that we need a coder with b = 4 bits. Commercially available A/D converters may be obtained with finite precision of b = 16 bits or less. Generally, the higher the sampling speed and the finer the quantization, the more expensive the device becomes.

1.4.6 Digital-to-Analog Conversion

To convert a digital signal into an analog signal we can use a digital-to-analog (D/A) converter. As stated previously, the task of a D/A converter is to interpolate between samples. The sampling theorem specifies the optimum interpolation for a bandlimited signal. However, this type of interpolation is too complicated and, hence, impractical, as indicated previously. From a practical viewpoint, the simplest D/A converter is the zero-order hold shown in Fig. 1.15, which simply holds constant the value of one sample until the next one is received. Additional improvement can be obtained by using linear interpolation, as shown in Fig. 1.23, to connect successive samples with straight-line segments. The zero-order hold and linear interpolator are analyzed in Section 9.3. Better interpolation can be achieved by using more sophisticated higher-order interpolation techniques.

[Figure 1.23 Linear point connector (with T-second delay).]

In general, suboptimum interpolation techniques result in passing frequencies above the folding frequency. Such frequency components are undesirable and are usually removed by passing the output of the interpolator through a proper analog filter, which is called a postfilter or smoothing filter. Thus D/A conversion usually involves a suboptimum interpolator followed by a postfilter. D/A converters are treated in more detail in Section 9.3.
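The two practical interpolators just described can be sketched as follows (a minimal illustration of our own; the function names are not from the text). Each reconstructs a value of the analog signal at an arbitrary time t from the samples x(n) = xa(nT):

```python
def zero_order_hold(samples, T, t):
    """Hold each sample constant until the next one is received."""
    n = int(t // T)                      # index of the most recent sample
    n = max(0, min(n, len(samples) - 1))
    return samples[n]

def linear_interp(samples, T, t):
    """Connect successive samples with straight-line segments."""
    n = int(t // T)
    n = max(0, min(n, len(samples) - 2))
    frac = t / T - n                     # position between samples n and n+1
    return samples[n] + frac * (samples[n + 1] - samples[n])

x = [0.0, 1.0, 0.0, -1.0]                # samples taken every T seconds
T = 1.0
print(zero_order_hold(x, T, 0.5))        # -> 0.0 (still holding x(0))
print(linear_interp(x, T, 0.5))          # -> 0.5 (halfway between x(0) and x(1))
```

Note that the linear interpolator needs the *next* sample before it can draw the segment, which is why Fig. 1.23 labels it a connector with a T-second delay.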
1.4.7 Analysis of Digital Signals and Systems Versus Discrete-Time Signals and Systems

We have seen that a digital signal is defined as a function of an integer independent variable, and its values are taken from a finite set of possible values. The usefulness of such signals is a consequence of the possibilities offered by digital computers. Computers operate on numbers, which are represented by a string of 0's and 1's. The length of this string (word length) is fixed and finite, and usually is 8, 12, 16, or 32 bits. The effects of finite word length in computations cause complications in the analysis of digital signal processing systems. To avoid these complications, we neglect the quantized nature of digital signals and systems in much of our analysis and consider them as discrete-time signals and systems. In Chapters 6, 7, and 9 we investigate the consequences of using a finite word length. This is an important topic, since many digital signal processing problems are solved with small computers or microprocessors that employ fixed-point arithmetic. Consequently, one must look carefully at the problem of finite-precision arithmetic and account for it in the design of software and hardware that performs the desired signal processing tasks.

1.5 SUMMARY AND REFERENCES

In this introductory chapter we have attempted to provide the motivation for digital signal processing as an alternative to analog signal processing. We presented the basic elements of a digital signal processing system and defined the operations needed to convert an analog signal into a digital signal ready for processing. Of particular importance is the sampling theorem, which was introduced by Nyquist (1928) and later popularized in the classic paper by Shannon (1949). The sampling theorem as described in Section 1.4.2 is derived in Chapter 4.
Sinusoidal signals were introduced primarily for the purpose of illustrating the aliasing phenomenon and for the subsequent development of the sampling theorem. Quantization effects that are inherent in the A/D conversion of a signal were also introduced in this chapter. Signal quantization is best treated in statistical terms, as described in Chapters 6, 7, and 9. Finally, the topic of signal reconstruction, or D/A conversion, was described briefly. Signal reconstruction based on staircase or linear interpolation methods is treated in Section 9.3.

There are numerous practical applications of digital signal processing. The book edited by Oppenheim (1978) treats applications to speech processing, image processing, radar signal processing, sonar signal processing, and geophysical signal processing.
