# Introduction to Digital Signal Processing

## 1 Introduction

Digital signal processing is an area of science and engineering that has developed rapidly over the past 30 years. This rapid development is a result of the significant advances in digital computer technology and integrated-circuit fabrication. The digital computers and associated digital hardware of three decades ago were relatively large and expensive and, as a consequence, their use was limited to general-purpose non-real-time (off-line) scientific computations and business applications. The rapid developments in integrated-circuit technology, starting with medium-scale integration (MSI) and progressing to large-scale integration (LSI) and now very-large-scale integration (VLSI) of electronic circuits, have spurred the development of powerful, smaller, faster, and cheaper digital computers and special-purpose digital hardware. These inexpensive and relatively fast digital circuits have made it possible to construct highly sophisticated digital systems capable of performing complex digital signal processing functions and tasks, which are usually too difficult and/or too expensive to be performed by analog circuitry or analog signal processing systems. Hence, many of the signal processing tasks that were conventionally performed by analog means are realized today by less expensive and often more reliable digital hardware.

We do not wish to imply that digital signal processing is the proper solution for all signal processing problems. Indeed, for many signals with extremely wide bandwidths, real-time processing is a requirement. For such signals, analog or, perhaps, optical signal processing is the only possible solution. However, where digital circuits are available and have sufficient speed to perform the signal processing, they are usually preferable.

Not only do digital circuits yield cheaper and more reliable systems for signal processing, they have other advantages as well. In particular, digital processing hardware allows programmable operations. Through software, one can more easily modify the signal processing functions to be performed by the hardware. Thus digital hardware and associated software provide a greater degree of flexibility in system design. Also, there is often a higher order of precision achievable with digital hardware and software compared with analog circuits and analog signal processing systems. For all these reasons, there has been an explosive growth in digital signal processing theory and applications over the past three decades.

In this book, our objective is to present an introduction to the basic analysis tools and techniques for digital processing of signals. We begin by introducing some of the necessary terminology and by describing the important operations associated with the process of converting an analog signal to a digital form suitable for digital processing. As we shall see, digital processing of analog signals has some drawbacks. First and foremost, conversion of an analog signal to digital form, accomplished by sampling the signal and quantizing the samples, results in a distortion that prevents us from reconstructing the original analog signal from the quantized samples. Control of the amount of this distortion is achieved by proper choice of the sampling rate and the precision in the quantization process. Second, there are finite-precision effects that must be considered in the digital processing of the quantized samples.
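As a simple numerical illustration of these two operations, the following Python sketch samples a sinusoid and quantizes the samples by rounding; the signal, sampling rate, and word lengths are arbitrary choices made for this sketch, and increasing the word length visibly shrinks the quantization error.

```python
import numpy as np

# Illustrative "analog" signal for this sketch: a 50 Hz sinusoid.
F = 50.0                                      # analog frequency in Hz
x_analog = lambda t: np.cos(2 * np.pi * F * t)

Fs = 1000.0                                   # sampling rate in samples/s (illustrative)
n = np.arange(64)
samples = x_analog(n / Fs)                    # sampling: x(n) = x_a(n/Fs)

for bits in (3, 8):                           # quantizer word length
    levels = 2 ** bits
    step = 2.0 / (levels - 1)                 # step size for values in [-1, 1]
    quantized = np.round(samples / step) * step   # quantization by rounding
    error = np.max(np.abs(quantized - samples))
    print(f"{bits}-bit quantizer: max quantization error = {error:.4f}")
```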
While these important issues are considered in some detail in this book, the emphasis is on the analysis and design of digital signal processing systems and computational techniques.

## 1.1 Signals, Systems, and Signal Processing

A signal is defined as any physical quantity that varies with time, space, or any other independent variable or variables. Mathematically, we describe a signal as a function of one or more independent variables. For example, the functions

$s_1(t) = 5t, \qquad s_2(t) = 20t^2$ (1.1.1)

describe two signals: one that varies linearly with the independent variable *t* (time) and a second that varies quadratically with *t*. As another example, consider the function

$s(x, y) = 3x + 2xy + 10y$ (1.1.2)

This function describes a signal of two independent variables *x* and *y* that could represent the spatial coordinates in a plane.

The signals described by (1.1.1) and (1.1.2) belong to a class of signals that are precisely defined by specifying the functional dependence on the independent variable. However, there are cases where such a functional relationship is unknown or too highly complicated to be of any practical use. For example, a speech signal (see Fig. 1.1) cannot be described functionally by expressions such as (1.1.1). In general, a segment of speech may be represented to a high degree of accuracy as a sum of several sinusoids of different amplitudes and frequencies, that is, as

$\sum_{i=1}^{N} A_i(t) \sin[2\pi F_i(t)t + \theta_i(t)]$ (1.1.3)

where $\{A_i(t)\}$, $\{F_i(t)\}$, and $\{\theta_i(t)\}$ are the sets of (possibly time-varying) amplitudes, frequencies, and phases, respectively, of the sinusoids. In fact, one way to interpret the information content or message conveyed by any short time segment of the speech signal is to measure the amplitudes, frequencies, and phases contained in that segment.

## 1.1.1 Basic Elements of a Digital Signal Processing System

Most of the signals encountered in science and engineering are *analog* in nature. That is, the signals are functions of a continuous variable, such as time or space, and usually take on values in a continuous range. Such signals may be processed directly by appropriate analog systems (such as filters, frequency analyzers, or frequency multipliers) for the purpose of changing their characteristics or extracting some desired information. In such a case, we say that the signal has been processed directly in its analog form, as illustrated in Fig. 1.2. Both the input signal and the output signal are in analog form.

Digital signal processing provides an alternative method for processing the analog signal, as illustrated in Fig. 1.3. To perform the processing digitally, there is a need for an interface between the analog signal and the digital processor. This interface is called an analog-to-digital (A/D) converter. The output of the A/D converter is a digital signal that is appropriate as an input to the digital processor.

The digital signal processor may be a large programmable digital computer or a small microprocessor programmed to perform the desired operations on the input signal. It may also be a hardwired digital processor configured to perform a specified set of operations on the input signal. Programmable machines provide the flexibility to change the signal processing operations through a change in the software, whereas hardwired machines are difficult to reconfigure. Consequently, programmable signal processors are in very common use.
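This flexibility can be made concrete in software. The sketch below is a schematic illustration only, with function names of our own choosing, not a model of any particular processor: the processing operation is an interchangeable program, so swapping it requires no change to the rest of the chain.

```python
import numpy as np

def moving_average(x):
    # One illustrative operation: a 3-point moving average (a simple digital filter).
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

def digital_processor(samples, operation):
    # A "programmable" processor: the operation is supplied as software,
    # so changing the processing requires no redesign of the surrounding system.
    return operation(samples)

x = np.array([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0])  # samples from an A/D stage
print(digital_processor(x, moving_average))   # one program...
print(digital_processor(x, np.negative))      # ...swapped for another, no redesign
```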
On the other hand, when the signal processing operations are well defined, a hardwired implementation of the operations can be optimized, resulting in a cheaper signal processor and, usually, one that runs faster than its programmable counterpart. In applications where the digital output from the digital signal processor is to be given to the user in analog form, as in speech communication, we must provide another interface from the digital domain to the analog domain. Such an interface is called a digital-to-analog (D/A) converter. Thus the signal is provided to the user in analog form, as illustrated in the block diagram of Fig. 1.3. However, there are other practical applications involving signal analysis, where the desired information is conveyed in digital form and no D/A converter is required. For example, in the digital processing of radar signals, the information extracted from the radar signal, such as the position of the aircraft and its speed, may simply be printed on paper. There is no need for a D/A converter in this case.

## 1.1.2 Advantages of Digital over Analog Signal Processing

There are many reasons why digital signal processing of an analog signal may be preferable to processing the signal directly in the analog domain, as mentioned briefly earlier. First, a digital programmable system allows flexibility in reconfiguring the digital signal processing operations simply by changing the program. Reconfiguration of an analog system usually implies a redesign of the hardware followed by testing and verification to see that it operates properly.

Accuracy considerations also play an important role in determining the form of the signal processor. Tolerances in analog circuit components make it extremely difficult for the system designer to control the accuracy of an analog signal processing system. On the other hand, a digital system provides much better control of accuracy requirements. Such requirements, in turn, result in specifying the accuracy requirements in the A/D converter and the digital signal processor, in terms of word length, floating-point versus fixed-point arithmetic, and similar factors.

Digital signals are easily stored on magnetic media (tape or disk) without deterioration or loss of signal fidelity beyond that introduced in the A/D conversion. As a consequence, the signals become transportable and can be processed off-line in a remote laboratory. The digital signal processing method also allows for the implementation of more sophisticated signal processing algorithms. It is usually very difficult to perform precise mathematical operations on signals in analog form, but these same operations can be routinely implemented on a digital computer using software.

In some cases, a digital implementation of the signal processing system is cheaper than its analog counterpart. The lower cost may be due to the fact that the digital hardware is cheaper, or perhaps it is a result of the flexibility for modifications provided by the digital implementation.

As a consequence of these advantages, digital signal processing has been applied in practical systems covering a broad range of disciplines. We cite, for example, the application of digital signal processing techniques in speech processing and signal transmission on telephone channels, in image processing and transmission, in seismology and geophysics, in oil exploration, in the detection of nuclear explosions, in the processing of signals received from outer space, and in a vast variety of other applications.
Some of these applications are cited in subsequent chapters. As already indicated, however, digital implementation has its limitations. One practical limitation is the speed of operation of A/D converters and digital signal processors. We shall see that signals having extremely wide bandwidths require fast-sampling-rate A/D converters and fast digital signal processors. Hence, there are analog signals with large bandwidths for which a digital processing approach is beyond the state of the art of digital hardware.

## 1.2 Classification of Signals

The methods we use in processing a signal or in analyzing the response of a system to a signal depend heavily on the characteristic attributes of the specific signal. There are techniques that apply only to specific families of signals. Consequently, any investigation in signal processing should start with a classification of the signals involved in the specific application.

## 1.2.1 Multichannel and Multidimensional Signals

As explained in Section 1.1, a signal is described by a function of one or more independent variables. The value of the function (i.e., the dependent variable) can be a real-valued scalar quantity, a complex-valued quantity, or perhaps a vector. For example, the signal $s(t) = A \sin 3\pi t$ is a real-valued signal. However, the signal $s(t) = Ae^{j3\pi t} = A \cos 3\pi t + jA \sin 3\pi t$ is complex valued.

In some applications, signals are generated by multiple sources or multiple sensors. Such signals, in turn, can be represented in vector form. Figure 1.4 shows the three components of a vector signal that represents the ground acceleration due to an earthquake. This acceleration is the result of three basic types of elastic waves. The primary (P) waves and the secondary (S) waves propagate within the body of rock and are longitudinal and transversal, respectively. The third type of elastic wave is called the surface wave, because it propagates near the ground surface. If $S_k(t)$, k = 1, 2, 3, denotes the electrical signal from the kth sensor as a function of time, the set of *p* = 3 signals can be represented by a vector $\mathbf{S}_3(t)$, where

$\mathbf{S}_3(t) = \begin{bmatrix} S_1(t) \\ S_2(t) \\ S_3(t) \end{bmatrix}$

We refer to such a vector of signals as a *multichannel signal*. In electrocardiography, for example, 3-lead and 12-lead electrocardiograms (ECG) are often used in practice, which result in 3-channel and 12-channel signals.

Let us now turn our attention to the independent variable(s). If the signal is a function of a single independent variable, the signal is called a *one-dimensional signal*. On the other hand, a signal is called *M-dimensional* if its value is a function of M independent variables. The picture shown in Fig. 1.5 is an example of a *two-dimensional signal*, since the intensity or brightness I(x, y) at each point is a function of two independent variables. On the other hand, a black-and-white television picture may be represented as I(x, y, t), since the brightness is also a function of time. Hence, the TV picture may be treated as a *three-dimensional signal*. In contrast, a color TV picture may be described by three intensity functions of the form $I_r(x, y, t)$, $I_g(x, y, t)$, and $I_b(x, y, t)$, corresponding to the brightness of the three principal colors (red, green, blue) as functions of time. Hence, the color TV picture is a *three-channel, three-dimensional signal*, which can be represented by the vector

$\mathbf{I}(x, y, t) = \begin{bmatrix} I_r(x, y, t) \\ I_g(x, y, t) \\ I_b(x, y, t) \end{bmatrix}$
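In software, the channel and dimension counts translate directly into array axes. Here is a minimal Python sketch; the array sizes and the random placeholder data are our own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multichannel, one-dimensional: p = 3 seismic sensors, each a function of time.
# Row k holds the samples of S_k(t); the data here are random stand-ins.
S3 = rng.standard_normal((3, 1000))           # shape (channels, time samples)

# Three-channel, three-dimensional: a color TV picture I_r, I_g, I_b,
# each a function of (x, y, t); here 64 x 64 pixels over 30 frames.
I = rng.random((3, 64, 64, 30))               # axes: (channel, x, y, t)

print(S3.shape)   # (3, 1000)       -> multichannel, one-dimensional
print(I.shape)    # (3, 64, 64, 30) -> three channels, three dimensions each
```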
In this book, we deal mainly with *single-channel, one-dimensional real- or complex-valued signals* and we refer to them simply as signals. In mathematical terms, these signals are described by a function of a single independent variable. Although the independent variable need not be time, it is common practice to use *t* as the independent variable. In many cases, the signal processing operations and algorithms developed in this text for one-dimensional, single-channel signals can be extended to multichannel and multidimensional signals.

## 1.2.2 Continuous-Time Versus Discrete-Time Signals

Signals can be further classified into four different categories depending on the characteristics of the time (independent) variable and the values they take. *Continuous-time signals* or *analog signals* are defined for every value of time and they take on values in the continuous interval (a, b), where a can be -∞ and b can be ∞. Mathematically, these signals can be described by functions of a continuous variable. The speech waveform in Fig. 1.1 and the signals $x_1(t) = \cos \pi t$ and $x_2(t) = e^{-|t|}$, -∞ < t < ∞, are examples of analog signals.

*Discrete-time signals* are defined only at certain specific values of time. These time instants need not be equidistant, but in practice, they are usually taken at equally spaced intervals for computational convenience and mathematical tractability. The signal $x(t_n) = e^{-|t_n|}$, $t_n = nT$, n = 0, ±1, ±2, ..., provides an example of a discrete-time signal. If we use the index *n* of the discrete-time instants as the independent variable, the signal value becomes a function of an integer variable (i.e., a sequence of numbers). Thus, a discrete-time signal can be represented mathematically by a sequence of real or complex numbers. To emphasize the discrete-time nature of a signal, we shall denote such a signal as *x(n)* instead of *x(t)*. If the time instants *t_n* are equally spaced (i.e., *t_n* = *nT*), the notation *x(nT)* is also used. For example, the sequence

$x(n) = \begin{cases} 0.8^n, & \text{if } n \ge 0 \\ 0, & \text{otherwise} \end{cases}$

is a discrete-time signal, which is represented graphically as in Fig. 1.6. In applications, discrete-time signals may arise in two ways:

1. By selecting values of an analog signal at discrete-time instants. This process is called *sampling* and is discussed in more detail in Section 1.4. All measuring instruments that take measurements at a regular interval of time provide discrete-time signals. For example, the signal *x(n)* in Fig. 1.6 can be obtained by sampling the analog signal $x(t) = 0.8^t$, $t \ge 0$ (and $x(t) = 0$, $t < 0$) once every second.

2. By accumulating a variable over a period of time. For example, counting the number of cars using a given street every hour, or recording the value of gold every day, results in discrete-time signals. Figure 1.7 shows a graph of the Wölfer sunspot numbers. Each sample of this discrete-time signal provides the number of sunspots observed during an interval of 1 year.

## 1.2.3 Continuous-Valued Versus Discrete-Valued Signals

The values of a continuous-time or discrete-time signal can be continuous or discrete. If a signal takes on all possible values on a finite or an infinite range, it is said to be a *continuous-valued signal*. Alternatively, if the signal takes on values from a finite set of possible values, it is said to be a *discrete-valued signal*.
Usually, these values are equidistant and hence can be expressed as an integer multiple of the distance between two successive values. A discrete-time signal having a set of discrete values is called a *digital signal*. Figure 1.8 shows a digital signal that takes on one of four possible values.

In order for a signal to be processed digitally, it must be discrete in time and its values must be discrete (i.e., it must be a digital signal). If the signal to be processed is in analog form, it is converted to a digital signal by sampling the analog signal at discrete instants in time, obtaining a discrete-time signal, and then by quantizing its values to a set of discrete values, as described later in the chapter. The process of converting a continuous-valued signal into a discrete-valued signal, called *quantization*, is basically an approximation process. It may be accomplished simply by rounding or truncation. For example, if the allowable signal values in the digital signal are integers, say 0 through 15, the continuous-valued signal is quantized into these integer values. Thus the signal value 8.58 will be approximated by the value 8 if the quantization process is performed by truncation, or by 9 if the quantization process is performed by rounding to the nearest integer. An explanation of the analog-to-digital conversion process is given later in the chapter.

## 1.2.4 Deterministic Versus Random Signals

The mathematical analysis and processing of signals requires the availability of a mathematical description for the signal itself. This mathematical description, often referred to as the *signal model*, leads to another important classification of signals.

Any signal that can be uniquely described by an explicit mathematical expression, a table of data, or a well-defined rule is called *deterministic*. This term is used to emphasize the fact that all past, present, and future values of the signal are known precisely, without any uncertainty. In many practical applications, however, there are signals that either cannot be described to any reasonable degree of accuracy by explicit mathematical formulas, or such a description is too complicated to be of any practical use. The lack of such a relationship implies that such signals evolve in time in an unpredictable manner. We refer to these signals as *random*. The output of a noise generator, the seismic signal of Fig. 1.4, and the speech signal in Fig. 1.1 are examples of random signals.

Figure 1.9 shows two signals obtained from the same noise generator and their associated histograms. Although the two signals do not resemble each other visually, their histograms reveal some similarities. This provides motivation for the analysis and description of random signals using statistical techniques instead of explicit formulas. The mathematical framework for the theoretical analysis of random signals is provided by the theory of probability and stochastic processes. Some basic elements of this approach, adapted to the needs of this book, are presented in Appendix A.

It should be emphasized at this point that the classification of a real-world signal as deterministic or random is not always clear. Sometimes, both approaches lead to meaningful results that provide more insight into signal behavior. At other times, the wrong classification may lead to erroneous results, since some mathematical tools may apply only to deterministic signals, while others may apply only to random signals. This will become clearer as we examine specific mathematical tools.
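An experiment in the spirit of Fig. 1.9 is easy to reproduce with any software noise generator; in the sketch below the generator, seed, and sample sizes are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)                # a software noise generator
x1 = rng.standard_normal(5000)                # two different realizations
x2 = rng.standard_normal(5000)                # from the same generator

# Sample by sample the two signals differ, yet their histograms are similar,
# which is what motivates a statistical description of random signals.
bins = np.linspace(-4.0, 4.0, 17)
h1, _ = np.histogram(x1, bins=bins)
h2, _ = np.histogram(x2, bins=bins)
print("first samples:", x1[:3], "vs", x2[:3])
print("histogram counts:", h1, h2, sep="\n")
```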
## 1.3 The Concept of Frequency in Continuous-Time and Discrete-Time Signals

The concept of frequency is familiar to students in engineering and the sciences. This concept is basic in, for example, the design of a radio receiver, a high-fidelity system, or a spectral filter for color photography. From physics, we know that frequency is closely related to a specific type of periodic motion called harmonic oscillation, which is described by sinusoidal functions. The concept of frequency is directly related to the concept of time. Actually, it has the dimension of inverse time. Thus, we should expect that the nature of time (continuous or discrete) would affect the nature of the frequency accordingly.

## 1.3.1 Continuous-Time Sinusoidal Signals

A simple harmonic oscillation is mathematically described by the following continuous-time sinusoidal signal:

$x_a(t) = A\cos(\Omega t + \theta), \quad -\infty < t < \infty$ (1.3.1)

shown in Fig. 1.10. The subscript *a* used with *x(t)* denotes an *analog signal*. This signal is completely characterized by three parameters: *A* is the amplitude of the sinusoid, $\Omega$ is the frequency in radians per second (rad/s), and $\theta$ is the phase in radians. Instead of $\Omega$, we often use the frequency *F* in cycles per second or hertz (Hz), where

$\Omega = 2\pi F$ (1.3.2)

In terms of *F*, (1.3.1) can be written as

$x_a(t) = A\cos(2\pi Ft + \theta), \quad -\infty < t < \infty$ (1.3.3)

We will use both forms, (1.3.1) and (1.3.3), in representing sinusoidal signals. The analog sinusoidal signal in (1.3.3) is characterized by the following properties:

A1. For every fixed value of the frequency F, $x_a(t)$ is periodic. Indeed, it can easily be shown, using elementary trigonometry, that $x_a(t + T_p) = x_a(t)$, where $T_p = 1/F$ is the fundamental period of the sinusoidal signal.

A2. Continuous-time sinusoidal signals with distinct (different) frequencies are themselves distinct.

A3. Increasing the frequency F results in an increase in the rate of oscillation of the signal, in the sense that more periods are included in a given time interval.

We observe that for F = 0, the value $T_p = \infty$ is consistent with the fundamental relation $F = 1/T_p$. Due to continuity of the time variable *t*, we can increase the frequency *F* without limit, with a corresponding increase in the rate of oscillation.

The relationships we have described for sinusoidal signals carry over to the class of complex exponential signals

$x_a(t) = Ae^{j(\Omega t + \theta)}$ (1.3.4)

This can easily be seen by expressing these signals in terms of sinusoids using the Euler identity

$e^{\pm j\theta} = \cos\theta \pm j\sin\theta$ (1.3.5)

By definition, frequency is an inherently positive physical quantity. This is obvious if we interpret frequency as the number of cycles per unit time in a periodic signal. However, in many cases, only for mathematical convenience, we need to introduce negative frequencies. To see this, we recall that the sinusoidal signal (1.3.1) may be expressed as

$x_a(t) = A\cos(\Omega t + \theta) = \frac{A}{2}e^{j(\Omega t + \theta)} + \frac{A}{2}e^{-j(\Omega t + \theta)}$ (1.3.6)

which follows from (1.3.5). Note that a sinusoidal signal can be obtained by adding two equal-amplitude complex-conjugate exponential signals, sometimes called phasors, illustrated in Fig. 1.11. As time progresses, the phasors rotate in opposite directions with angular frequencies ±$\Omega$ radians per second.
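Relation (1.3.6) is easy to check numerically; in this brief sketch the amplitude, frequency, and phase are arbitrary values chosen for illustration.

```python
import numpy as np

A, Omega, theta = 2.0, 2 * np.pi * 5.0, np.pi / 4   # arbitrary A, Omega (rad/s), theta
t = np.linspace(0.0, 1.0, 1000)

sinusoid = A * np.cos(Omega * t + theta)
# Sum of the two equal-amplitude, complex-conjugate phasors of (1.3.6),
# rotating in opposite directions at angular frequencies +Omega and -Omega.
phasor_sum = (A / 2) * np.exp(1j * (Omega * t + theta)) \
           + (A / 2) * np.exp(-1j * (Omega * t + theta))

print(np.allclose(phasor_sum.imag, 0.0))            # True: imaginary parts cancel
print(np.allclose(phasor_sum.real, sinusoid))       # True: (1.3.6) holds numerically
```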
Since a positive frequency corresponds to counterclockwise uniform angular motion, a negative frequency simply corresponds to clockwise angular motion. For mathematical convenience, we use both negative and positive frequencies throughout this book. Hence, the frequency range for analog sinusoids is -∞ < $\Omega$ < ∞.

## 1.3.2 Discrete-Time Sinusoidal Signals

A discrete-time sinusoidal signal may be expressed as

$x(n) = A\cos(\omega n + \theta), \quad -\infty < n < \infty$ (1.3.7)

where *n* is an integer variable, called the *sample number*, *A* is the amplitude of the sinusoid, $\omega$ is the frequency in radians per sample, and $\theta$ is the phase in radians. If instead of $\omega$ we use the frequency variable *f* defined by the relation

$\omega = 2\pi f$ (1.3.8)

the relation (1.3.7) becomes

$x(n) = A\cos(2\pi fn + \theta), \quad -\infty < n < \infty$ (1.3.9)

The frequency *f* has dimensions of *cycles per sample*. In Section 1.4, where we consider the sampling of analog sinusoids, we relate the frequency variable *f* of a discrete-time sinusoid to the frequency *F* in cycles per second for the analog sinusoid. For the moment we consider the discrete-time sinusoid in (1.3.7) independently of the continuous-time sinusoid given in (1.3.1). Figure 1.12 shows a sinusoid with frequency $\omega = \pi/6$ radians per sample (f = 1/12 cycles per sample) and phase $\theta = \pi/3$.

In contrast to continuous-time sinusoids, the discrete-time sinusoids are characterized by the following properties:

B1. *A discrete-time sinusoid is periodic only if its frequency f is a rational number.* By definition, a discrete-time signal *x(n)* is periodic with period *N* (*N* > 0) if and only if

$x(n + N) = x(n)$ for all *n* (1.3.10)

The smallest value of *N* for which (1.3.10) is true is called the *fundamental period*. The proof of the periodicity property is simple. For a sinusoid with frequency $f_0$ to be periodic, we should have

$\cos[2\pi f_0(N + n) + \theta] = \cos(2\pi f_0 n + \theta)$

This relation is true if and only if there exists an integer *k* such that

$2\pi f_0 N = 2k\pi$

or, equivalently,

$f_0 = \frac{k}{N}$ (1.3.11)

According to (1.3.11), a discrete-time sinusoidal signal is periodic only if its frequency $f_0$ can be expressed as the ratio of two integers (i.e., $f_0$ is rational). To determine the fundamental period *N* of a periodic sinusoid, we express its frequency $f_0$ as in (1.3.11) and cancel common factors so that *k* and *N* are relatively prime. Then the fundamental period of the sinusoid is equal to *N*. Observe that a small change in frequency can result in a large change in the period. For example, note that $f_1 = 31/60$ implies that $N_1 = 60$, whereas $f_2 = 30/60$ results in $N_2 = 2$.

B2. *Discrete-time sinusoids whose frequencies are separated by an integer multiple of 2π are identical.* To prove this assertion, let us consider the sinusoid $\cos(\omega_0 n + \theta)$. It easily follows that

$\cos[(\omega_0 + 2\pi)n + \theta] = \cos(\omega_0 n + 2\pi n + \theta) = \cos(\omega_0 n + \theta)$ (1.3.12)

As a result, all sinusoidal sequences

$x_k(n) = A\cos(\omega_k n + \theta), \quad k = 0, 1, 2, \ldots$ (1.3.13)

where $\omega_k = \omega_0 + 2k\pi$, $-\pi \le \omega_0 \le \pi$, are indistinguishable (i.e., identical). On the other hand, the sequences of any two sinusoids with frequencies in the range $-\pi \le \omega \le \pi$ or $-\frac{1}{2} \le f \le \frac{1}{2}$ are distinct. Consequently, discrete-time sinusoidal signals with frequencies $|\omega| \le \pi$ or $|f| \le \frac{1}{2}$ are unique.
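Both properties are easy to verify numerically; the brief sketch below uses the frequencies from the examples above (the choice of $\omega_0$ and the number of samples are ours).

```python
import numpy as np
from fractions import Fraction

# B1: for rational f0 = k/N in lowest terms, the fundamental period is N.
for f0 in (Fraction(31, 60), Fraction(30, 60)):
    print(f"f0 = {f0}: fundamental period N = {f0.denominator}")
# prints N = 60 for f0 = 31/60, but N = 2 for f0 = 30/60 = 1/2

# B2: frequencies separated by 2*pi yield identical sequences.
n = np.arange(32)
w0 = np.pi / 6
print(np.allclose(np.cos(w0 * n), np.cos((w0 + 2 * np.pi) * n)))  # True
```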
Any sequence resulting from a sinusoid with a frequency $|\omega| > \pi$, or $|f| > \frac{1}{2}$, is identical to a sequence obtained from a sinusoidal signal with frequency $|\omega| < \pi$. Because of this similarity, we call the sinusoid having the frequency $|\omega| > \pi$ an *alias* of a corresponding sinusoid with frequency $|\omega| < \pi$. Thus, we regard frequencies in the range $-\pi \le \omega \le \pi$, or $-\frac{1}{2} \le f \le \frac{1}{2}$, as unique and all frequencies $|\omega| > \pi$, or $|f| > \frac{1}{2}$, as *aliases*. The reader should notice the difference between discrete-time sinusoids and continuous-time sinusoids, where the latter result in distinct signals for $\Omega$ or F in the entire range -∞ < $\Omega$ < ∞ or -∞ < F < ∞.

B3. *The highest rate of oscillation in a discrete-time sinusoid is attained when $\omega = \pi$ (or $\omega = -\pi$) or, equivalently, f = 1/2 (or f = -1/2).* To illustrate this property, let us investigate the characteristics of the sinusoidal signal sequence

$x(n) = \cos\omega_0 n$

when the frequency varies from 0 to $\pi$. To simplify the argument, we take values of $\omega_0 = 0, \frac{\pi}{8}, \frac{\pi}{4}, \frac{\pi}{2}, \pi$, corresponding to $f = 0, \frac{1}{16}, \frac{1}{8}, \frac{1}{4}, \frac{1}{2}$, which result in periodic sequences having periods *N* = ∞, 16, 8, 4, 2, as depicted in Fig. 1.13. We note that the period *N* of the sinusoid decreases as the frequency increases. In fact, we can see that the rate of oscillation increases as the frequency increases.

To see what happens for $\pi \le \omega_0 \le 2\pi$, we consider the sinusoids with frequencies $\omega_1 = \omega_0$ and $\omega_2 = 2\pi - \omega_0$. Note that as $\omega_1$ varies from $\pi$ to $2\pi$, $\omega_2$ varies from $\pi$ to 0. It can be easily seen that

$x_1(n) = A\cos\omega_1 n$

$x_2(n) = A\cos\omega_2 n = A\cos(2\pi - \omega_1)n = A\cos(-\omega_1 n) = x_1(n)$ (1.3.14)

Hence $\omega_2$ is an alias of $\omega_1$. If we had used a sine function instead of a cosine function, the result would basically be the same, except for a 180° phase difference between the sinusoids $x_1(n)$ and $x_2(n)$. In any case, as we increase the relative frequency of a discrete-time sinusoid from $\pi$ to $2\pi$, its rate of oscillation decreases. For $\omega_0 = 2\pi$, the result is a constant signal, as in the case for $\omega_0 = 0$. Obviously, for $\omega_0 = \pi$ (or $f_0 = \frac{1}{2}$) we have the highest rate of oscillation.

As for the case of continuous-time signals, negative frequencies can be introduced as well for discrete-time signals. For this purpose, we use the identity

$x(n) = A\cos(\omega n + \theta) = \frac{A}{2}e^{j(\omega n+\theta)} + \frac{A}{2}e^{-j(\omega n+\theta)}$ (1.3.15)

Since discrete-time sinusoidal signals with frequencies that are separated by an integer multiple of $2\pi$ are identical, it follows that the frequencies in any interval $\omega_1 \le \omega \le \omega_1 + 2\pi$ constitute all the existing discrete-time sinusoids or complex exponentials. Hence, the frequency range for discrete-time sinusoids is finite, with duration $2\pi$. Usually, we choose the range $-\pi \le \omega \le \pi$ or $-\frac{1}{2} \le f \le \frac{1}{2}$, which we call the *fundamental range*.

## 1.3.3 Harmonically Related Complex Exponentials

Sinusoidal signals and complex exponentials play a major role in the analysis of signals and systems. In some cases, we deal with sets of *harmonically related complex exponentials* (or sinusoids). These are sets of periodic complex exponentials with fundamental frequencies that are multiples of a single positive frequency.
Although we confine our discussion to complex exponentials, the same properties clearly hold for sinusoidal signals. We consider harmonically related complex exponentials in both continuous time and discrete time.

### Continuous-time exponentials

The basic signals for continuous-time, harmonically related exponentials are

$S_k(t) = e^{jk\Omega_0 t} = e^{j2\pi kF_0 t}, \quad k = 0, \pm1, \pm2, \ldots$ (1.3.16)

We note that for each value of *k*, $S_k(t)$ is periodic with fundamental period $1/(kF_0) = T_p/k$ or fundamental frequency $kF_0$. Since a signal that is periodic with period $T_p/k$ is also periodic with period $k(T_p/k) = T_p$ for any positive integer *k*, we see that all of the $S_k(t)$ have a common period of $T_p$. Furthermore, according to Section 1.3.1, $F_0$ is allowed to take any value and all members of the set are distinct, in the sense that if $k_1 \ne k_2$, then $S_{k_1}(t) \ne S_{k_2}(t)$.

From the basic signals in (1.3.16), we can construct a linear combination of harmonically related complex exponentials of the form

$x(t) = \sum_{k=-\infty}^{\infty} C_k S_k(t) = \sum_{k=-\infty}^{\infty} C_k e^{jk\Omega_0 t}$ (1.3.17)

where $C_k$, k = 0, ±1, ±2, ... are arbitrary complex constants. The signal *x(t)* is periodic with fundamental period $T_p = 1/F_0$, and its representation in terms of (1.3.17) is called the *Fourier series expansion* for *x(t)*. The complex-valued constants $C_k$ are the Fourier series coefficients, and the signal $S_k(t)$ is called the *kth harmonic* of *x(t)*.

### Discrete-time exponentials

Since a discrete-time complex exponential is periodic if its relative frequency is a rational number, we choose $f_0 = 1/N$ and we define the sets of harmonically related complex exponentials by

$S_k(n) = e^{j2\pi kn/N}, \quad k = 0, \pm1, \pm2, \ldots$ (1.3.18)

In contrast to the continuous-time case, we note that

$S_{k+N}(n) = e^{j2\pi n(k+N)/N} = e^{j2\pi n}S_k(n) = S_k(n)$

This means that, consistent with (1.3.10), there are only *N* distinct periodic complex exponentials in the set described by (1.3.18). Furthermore, all members of the set have a common period of *N* samples. Clearly, we can choose any *N* consecutive complex exponentials, say from $k = n_0$ to $k = n_0 + N - 1$, to form a harmonically related set with fundamental frequency $f_0 = 1/N$. Most often, for convenience, we choose the set that corresponds to