Signals and Systems by Oppenheim, Willsky, and Nawab (2nd Edition) PDF
Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab
Summary
This book, "Signals and Systems" (2nd Edition) by Oppenheim, Willsky, and Nawab, is a comprehensive textbook covering continuous and discrete-time signal processing and systems. It's designed for undergraduate-level study in fields like electrical engineering. The book explores essential topics in depth, making it a valuable resource for students and professionals.
Full Transcript
SIGNALS & SYSTEMS

PRENTICE HALL SIGNAL PROCESSING SERIES
Alan V. Oppenheim, Series Editor

ANDREWS & HUNT Digital Image Restoration
BRACEWELL Two Dimensional Imaging
BRIGHAM The Fast Fourier Transform and Its Applications
BURDIC Underwater Acoustic System Analysis 2/E
CASTLEMAN Digital Image Processing
COHEN Time-Frequency Analysis
CROCHIERE & RABINER Multirate Digital Signal Processing
DUDGEON & MERSEREAU Multidimensional Digital Signal Processing
HAYKIN Advances in Spectrum Analysis and Array Processing, Vols. I, II & III
HAYKIN, ED. Array Signal Processing
JOHNSON & DUDGEON Array Signal Processing
KAY Fundamentals of Statistical Signal Processing
KAY Modern Spectral Estimation
KINO Acoustic Waves: Devices, Imaging, and Analog Signal Processing
LIM Two-Dimensional Signal and Image Processing
LIM, ED. Speech Enhancement
LIM & OPPENHEIM, EDS. Advanced Topics in Signal Processing
MARPLE Digital Spectral Analysis with Applications
MCCLELLAN & RADER Number Theory in Digital Signal Processing
MENDEL Lessons in Estimation Theory for Signal Processing, Communications and Control 2/E
NIKIAS & PETROPULU Higher Order Spectra Analysis
OPPENHEIM & NAWAB Symbolic and Knowledge-Based Signal Processing
OPPENHEIM & WILLSKY, WITH NAWAB Signals and Systems, 2/E
OPPENHEIM & SCHAFER Digital Signal Processing
OPPENHEIM & SCHAFER Discrete-Time Signal Processing
ORFANIDIS Signal Processing
PHILLIPS & NAGLE Digital Control Systems Analysis and Design, 3/E
PICINBONO Random Signals and Systems
RABINER & GOLD Theory and Applications of Digital Signal Processing
RABINER & SCHAFER Digital Processing of Speech Signals
RABINER & JUANG Fundamentals of Speech Recognition
ROBINSON & TREITEL Geophysical Signal Analysis
STEARNS & DAVID Signal Processing Algorithms in Fortran and C
STEARNS & DAVID Signal Processing Algorithms in MATLAB
TEKALP Digital Video Processing
THERRIEN Discrete Random Signals and Statistical Signal Processing
TRIBOLET Seismic Applications of Homomorphic Signal Processing
VETTERLI & KOVACEVIC Wavelets and Subband Coding
VAIDYANATHAN Multirate Systems and Filter Banks
WIDROW & STEARNS Adaptive Signal Processing

SECOND EDITION
SIGNALS & SYSTEMS
ALAN V. OPPENHEIM
ALAN S. WILLSKY
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
WITH S. HAMID NAWAB
BOSTON UNIVERSITY
PRENTICE HALL
UPPER SADDLE RIVER, NEW JERSEY 07458

Library of Congress Cataloging-in-Publication Data
Oppenheim, Alan V.
Signals and systems / Alan V. Oppenheim, Alan S. Willsky, with S. Hamid Nawab. - 2nd ed.
p. cm. - (Prentice-Hall signal processing series)
Includes bibliographical references and index.
ISBN 0-13-814757-4
1. System analysis. 2. Signal theory (Telecommunication) I. Willsky, Alan S. II. Nawab, Syed Hamid. III. Title. IV. Series
QA402.O63 1996 621.382'23–dc20 96-19945 CIP

Acquisitions editor: Tom Robbins
Production service: TKM Productions
Editorial/production supervision: Sharyn Vitrano
Copy editor: Brian Baker
Interior and cover design: Patrice Van Acker
Art director: Amy Rosen
Managing editor: Bayani Mendoza DeLeon
Editor-in-Chief: Marcia Horton
Director of production and manufacturing: David W. Riccardi
Manufacturing buyer: Donna Sullivan
Editorial assistant: Phyllis Morgan

© 1997 by Alan V. Oppenheim and Alan S. Willsky
© 1983 by Alan V. Oppenheim, Alan S. Willsky, and Ian T. Young
Published by Prentice-Hall, Inc.
Simon & Schuster / A Viacom Company Upper Saddle River, New Jersey 07458 Printed in the United States of America 10 9 8 7 6 5 4 ISBN 0-13–814757–4 Prentice-Hall International (UK) Limited, London Prentice-Hall of Australia Pty. Limited, Sydney Prentice-Hall Canada Inc., Toronto Prentice-Hall Hispanoamericana, S.A., Mexico Prentice-Hall of India Private Limited, New Delhi Prentice-Hall of Japan, Inc., Tokyo Simon & Schuster Asia Pte. Ltd., Singapore Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro To Phyllis, Jason, and Justine To Susanna, Lydia, and Kate CONTENTS PREFACE XVII ACKNOWLEDGEMENTS XXV FOREWORD XXVII 1 SIGNALS AND SYSTEMS 1 1.0 Introduction 1 1.1 Continuous-Time and Discrete-Time Signals 1 1.1.1 Examples and Mathematical Representation 1 1.1.2 Signal Energy and Power 5 1.2 Transformations of the Independent Variable 7 1.2.1 Examples of Transformations of the Independent Variable 8 1.2.2 Periodic Signals 11 1.2.3 Even and Odd Signals 13 1.3 Exponential and Sinusoidal Signals 14 1.3.1 Continuous-Time Complex Exponential and Sinusoidal Signals 15 1.3.2 Discrete-Time Complex Exponential and Sinusoidal Signals 21 1.3.3 Periodicity Properties of Discrete-Time Complex Exponentials 25 1.4 The Unit Impulse and Unit Step Functions 30 1.4.1 The Discrete-Time Unit Impulse and Unit Step Sequences 30 1.4.2 The Continuous-Time Unit Step and Unit Impulse Functions 32 1.5 Continuous-Time and Discrete-Time Systems 38 1.5.1 Simple Examples of Systems 39 1.5.2 Interconnections of Systems 41 1.6 Basic System Properties 44 1.6.1 Systems with and without Memory 44 1.6.2 Invertibility and Inverse Systems 45 1.6.3 Causality 46 1.6.4 Stability 48 1.6.5 Time Invariance 50 1.6.6 Linearity 53 1.7 Summary 56 Problems 57 2 LINEAR TIME-INVARIANT SYSTEMS 74 2.0 Introduction 74 2.1 Discrete-Time LTI Systems: The Convolution Sum 75 vii viii Contents 2.1.1 The Representation of Discrete-Time Signals in Terms of Impulses 75 2.1.2 The Discrete-Time Unit Impulse Response and the Convolution-Sum Representation of LTI Systems 77 2.2 Continuous-Time LTI Systems: The Convolution Integral 90 2.2.1 The Representation of Continuous-Time Signals in Terms of Impulses 90 2.2.2 The Continuous-Time Unit Impulse Response and the Convolution Integral Representation of LTI Systems 94 2.3 Properties of Linear Time-Invariant Systems 103 2.3.1 The Commutative Property 104 2.3.2 The Distributive Property 104 2.3.3 The Associative Property 107 2.3.4 LTI Systems with and without Memory 108 2.3.5 Invertibility of LTI Systems 109 2.3.6 Causality for LTI Systems 112 2.3.7 Stability for LTI Systems 113 2.3.8 The Unit Step Response of an LTI System 115 2.4 Causal LTI Systems Described by Differential and Difference Equations 116 2.4.1 Linear Constant -Coefficient Differential Equations 117 2.4.2 Linear Constant-Coefficient Difference Equations 121 2.4.3 Block Diagram Representations of First-Order Systems Described by Differential and Difference Equations 124 2.5 Singularity Functions 127 2.5.1 The Unit Impulse as an Idealized Short Pulse 128 2.5.2 Defining the Unit Impulse through Convolution 131 2.5.3 Unit Doublets and Other Singularity Functions 132 2.6 Summary 137 Problems 137 3 FOURIER SERIES REPRESENTATION OF PERIODIC SIGNALS 177 3.0 Introduction 177 3.1 A Historical Perspective 178 3.2 The Response of LTI Systems to Complex Exponentials 182 3.3 Fourier Series Representation of Continuous-Time Periodic Signals 186 3.3.1 Linear Combinations of Harmonically Related Complex Exponentials 186 3.3.2 Determination of the Fourier Series 
Representation of a Continuous-Time Periodic Signal 190 3.4 Convergence of the Fourier Series 195 3.5 Properties of Continuous-Time Fourier Series 202 3.5.1 Linearity 202 Contents ix 3.5.2 Time Shifting 202 3.5.3 Time Reversal 203 3.5.4 Time Scaling 204 3.5.5 Multiplication 204 3.5.6 Conjugation and Conjugate Symmetry 204 Parseval's Relation for Continuous-Time Periodic Signals 3.5.7 205 3.5.8 Summary of Properties of the Continuous-Time Fourier Series 205 3.5.9 Examples 205 3.6 Fourier Series Representation of Discrete-Time Periodic Signal 211 3.6.1 Linear Combinations of Harmonically Related Complex Exponentials 211 3.6.2 Determination of the Fourier Series Representation of a Periodic Signal 212 3.7 Properties of Discrete-Time Fourier Series 221 3.7.1 Multiplication 222 3.7.2 First Difference 222 3.7.3 Parseval's Relation for Discrete-Time Periodic Signals 223 3.7.4 Examples 223 3.8 Fourier Series and LTI Systems 226 3.9 Filtering 231 3.9.1 Frequency-Shaping Filters 232 3.9.2 Frequency-Selective Filters 236 3.10 Examples of Continuous-Time Filters Described by Differential Equations 239 3.10.1 A Simple RC Lowpass Filter 239 3.10.2 A Simple RC Highpass Filter 241 3.11 Examples of Discrete-Time Filters Described by Difference Equations 244 3.11.1 First-Order Recursive Discrete-Time Filters 244 3.11.2 Nonrecursive Discrete-Time Filters 245 3.12 Summary 249 Problems 250 4 THE CONTINUOUS-TIME FOURIER TRANSFORM 284 4.0 Introduction 284 4.1 Representation of Aperiodic Signals: The Continuous-Time Fourier Transform 285 4.1.1 Development of the Fourier Transform Representation of an Aperiodic Signal 285 4.1.2 Convergence of Fourier Transforms 289 4.1.3 Examples of Continuous-Time Fourier Transforms 290 4.2 The Fourier Transform for Periodic Signals 296 4.3 Properties of the Continuous-Time Fourier Transform 300 4.3.1 Linearity 301 x Contents 4.3.2 Time Shifting 301 4.3.3 Conjugation and Conjugate Symmetry 303 4.3.4 Differentiation and Integration 306 4.3.5 Time and Frequency Scaling 308 4.3.6 Duality 309 4.3.7 Parseval's Relation 312 4.4 The Convolution Property 314 4.4.1 Examples 317 4.5 The Multiplication Property 322 4.5.1 Frequency-Selective Filtering with Variable Center Frequency 325 4.6 Tables of Fourier Properties and of Basic Fourier Transform Pairs 328 4.7 Systems Characterized by Linear Constant-Coefficient Differential Equations 330 4.8 Summary 333 Problems 334 5 THE DISCRETE-TIME FOURIER TRANSFORM 358 5.0 Introduction 358 5.1 Representation of Aperiodic Signals: The Discrete-Time Fourier Transform 359 5.1.1 Development of the Discrete-Time Fourier Transform 359 5.1.2 Examples of Discrete-Time Fourier Transforms 362 5.1.3 Convergence Issues Associated with the Discrete-Time Fourier Transform 366 5.2 The Fourier Transform for Periodic Signals 367 5.3 Properties of the Discrete-Time Fourier Transform 372 5.3.1 Periodicity of the Discrete-Time Fourier Transform 373 5.3.2 Linearity of the Fourier Transform 373 5.3.3 Time Shifting and Frequency Shifting 373 5.3.4 Conjugation and Conjugate Symmetry 375 5.3.5 Differencing and Accumulation 375 5.3.6 Time Reversal 376 5.3.7 Time Expansion 377 5.3.8 Differentiation in Frequency 380 5.3.9 Parseval's Relation 380 5.4 The Convolution Property 382 5.4.1 Examples 383 5.5 The Multiplication Property 388 5.6 Tables of Fourier Transform Properties and Basic Fourier Transform Pairs 390 5.7 Duality 390 5.7.1 Duality in the Discrete-Time Fourier Series 391 5.7.2 Duality between the Discrete-Time Fourier Transform and the Continuous-Time Fourier Series 
395 Contents xi 5.8 Systems Characterized by Linear Constant-Coefficient Difference Equations 396 5.9 Summary 399 Problems 400 6 TIME AND FREQUENCY CHARACTERIZATION OF SIGNALS AND SYSTEMS 423 6.0 Introduction 423 6.1 The Magnitude-Phase Representation of the Fourier Transform 423 6.2 The Magnitude-Phase Representation of the Frequency Response of LTI Systems 427 6.2.1 Linear and Nonlinear Phase 428 6.2.2 Group Delay 430 6.2.3 Log-Magnitude and Bode Plots 436 6.3 Time-Domain Properties of Ideal Frequency-Selective Filters 439 6.4 Time-Domain and Frequency-Domain Aspects of Nonideal Filters 444 6.5 First-Order and Second-Order Continuous-Time Systems 448 6.5.1 First-Order Continuous-Time Systems 448 6.5.2 Second-Order Continuous-Time Systems 451 6.5.3 Bode Plots for Rational Frequency Responses 456 6.6 First-Order and Second-Order Discrete-Time Systems 461 6.6.1 First-Order Discrete-Time Systems 461 6.6.2 Second-Order Discrete-Time Systems 465 6.7 Examples of Time- and Frequency-Domain Analysis of Systems 472 6.7.1 Analysis of an Automobile Suspension System 473 6.7.2 Examples of Discrete-Time Nonrecursive Filter 476 6.8 Summary 482 Problems 483 7 SAMPLING 514 7.0 Introduction 514 7.1 Representation of a Continuous-Time Signal by Its Samples: The Sampling Theorem 515 7.1.1 Impulse-Train Sampling 516 7.1.2 Sampling with a Zero-Order Hold 520 7.2 Reconstruction of a Signal from Its Samples Using Interpolation 522 7.3 The Effect of Undersampling: Aliasing 527 7.4 Discrete-Time Processing of Continuous-Time Signals 534 7.4.1 Digital Differentiator 541 7.4.2 Half-Sample Delay 543 xii Contents 7.5 Sampling of Discrete-Time Signals 545 7.5.1 Impulse-Train Sampling 545 7.5.2 Discrete-Time Decimation and Interpolation 549 7.6 Summary 555 Problems 556 8 COMMUNICATION SYSTEMS 582 8.0 Introduction 582 8.1 Complex Exponential and Sinusoidal Amplitude Modulation 583 8.1.1 Amplitude Modulation with a Complex Exponential Carrier 583 8.1.2 Amplitude Modulation with a Sinusoidal Carrier 585 8.2 Demodulation for Sinusoidal AM 587 8.2.1 Synchronous Demodulation 587 8.2.2 Asynchronous Demodulation 590 8.3 Frequency-Division Multiplexing 594 8.4 Single-Sideband Sinusoidal Amplitude Modulation 597 8.5 Amplitude Modulation with a Pulse-Train Carrier 601 8.5.1 Modulation of a Pulse-Train Carrier 601 8.5.2 Time-Division Multiplexing 604 8.6 Pulse-Amplitude Modulation 604 8.6.1 Pulse-Amplitude Modulated Signals 604 8.6.2 Intersymbol Interference in PAM Systems 607 8.6.3 Digital Pulse-Amplitude and Pulse-Code Modulation 610 8.7 Sinusoidal Frequency Modulation 611 8.7.1 Narrowband Frequency Modulation 613 8.7.2 Wideband Frequency Modulation 615 8.7.3 Periodic Square-Wave Modulating Signal 617 8.8 Discrete-Time Modulation 619 8.8.1 Discrete-Time Sinusoidal Amplitude Modulation 619 8.8.2 Discrete-Time Transmodulation 623 8.9 Summary 623 Problems 625 9 THE LAPLACE TRANSFORM 654 9.0 Introduction 654 9.1 The Laplace Transform 655 9.2 The Region of Convergence for Laplace Transforms 662 9.3 The Inverse Laplace Transform 670 9.4 Geometric Evaluation of the Fourier Transform from the Pole-Zero Plot 674 9.4.1 First-Order Systems 676 9.4.2 Second-Order Systems 677 9.4.3 All-Pass Systems 681 9.5 Properties of the Laplace Transform 682 9.5.1 Linearity of the Laplace Transform 683 9.5.2 Time Shifting 684 Contents xiii 9.5.3 Shifting in the s-Domain 685 9.5.4 Time Scaling 685 9.5.5 Conjugation 687 9.5.6 Convolution Property 687 9.5.7 Differentiation in the Time Domain 688 9.5.8 Differentiation in the s-Domain 688 9.5.9 Integration in 
the Time Domain 690 9.5.10 The Initial- and Final-Value Theorems 690 9.5.11 Table of Properties 691 9.6 Some Laplace Transform Pairs 692 9.7 Analysis and Characterization of LTI Systems Using the Laplace Transform 693 9.7.1 Causality 693 9.7.2 Stability 695 9.7.3 LTI Systems Characterized by Linear Constant-Coefficient Differential Equations 698 9.7.4 Examples Relating System Behavior to the System Function 701 9.7.5 Butterworth Filters 703 9.8 System Function Algebra and Block Diagram Representations 706 9.8.1 System Functions for Interconnections of LTI Systems 707 9.8.2 Block Diagram Representations for Causal LTI Systems Described by Differential Equations and Rational System Functions 708 9.9 The Unilateral Laplace Transform 714 9.9.1 Examples of Unilateral Laplace Transforms 714 9.9.2 Properties of the Unilateral Laplace Transform 716 9.9.3 Solving Differential Equations Using the Unilateral Laplace Transform 719 9.10 Summary 720 Problems 721 10 THE Z-TRANSFORM 741 10.0 Introduction 741 10.1 The z-Transform 741 10.2 The Region of Convergence for the z-Transform 748 10.3 The Inverse z-Transform 757 10.4 Geometric Evaluation of the Fourier Transform from the Pole-Zero Plot 763 10.4.1 First-Order Systems 763 10.4.2 Second-Order Systems 765 10.5 Properties of the z-Transform 767 10.5.1 Linearity 767 10.5.2 Time Shifting 767 10.5.3 Scaling in the z-Domain 768 10.5.4 Time Reversal 769 10.5.5 Time Expansion 769 xiv Contents 10.5.6 Conjugation 770 10.5.7 The Convolution Property 770 10.5.8 Differentiation in the z-Domain 772 10.5.9 The Initial-Value Theorem 773 10.5.10 Summary of Properties 774 10.6 Some Common z-Transform Pairs 774 10.7 Analysis and Characterization of LTI Systems Using z-Transforms 774 10.7.1 Causality 776 10.7.2 Stability 777 10.7.3 LTI Systems Characterized by Linear Constant-Coefficient Difference Equations 779 10.7.4 Examples Relating System Behavior to the System Function 781 10.8 System Function Algebra and Block Diagram Representations 783 10.8.1 System Functions for Interconnections of LTI Systems 784 10.8.2 Block Diagram Representations for Causal LTI Systems Described by Difference Equations and Rational System Functions 784 10.9 The Unilateral z-Transform 789 10.9.1 Examples of Unilateral z-Transforms and Inverse Transforms 790 10.9.2 Properties of the Unilateral z-Transform 792 10.9.3 Solving Difference Equations Using the Unilateral z-Transform 795 10.10 Summary 796 Problems 797 11 LINEAR FEEDBACK SYSTEMS 816 11.0 Introduction 816 11.1 Linear Feedback Systems 819 11.2 Some Applications and Consequences of Feedback 820 11.2.1 Inverse System Design 820 11.2.2 Compensation for Nonideal Elements 821 11.2.3 Stabilization of Unstable Systems 823 11.2.4 Sampled-Data Feedback Systems 826 11.2.5 Tracking Systems 828 11.2.6 Destabilization Caused by Feedback 830 11.3 Root-Locus Analysis of Linear Feedback Systems 832 11.3.1 An Introductory Example 833 11.3.2 Equation for the Closed-Loop Poles 834 11.3.3 The End Points of the Root Locus: The Closed-Loop Poles for K = 0 and |K| = +∞ 836 11.3.4 The Angle Criterion 836 11.3.5 Properties of the Root Locus 841 11.4 The Nyquist Stability Criterion 846 11.4.1 The Encirclement Property 847 Contents xv 11.4.2 The Nyquist Criterion for Continuous-Time LTI Feedback Systems 850 11.4.3 The Nyquist Criterion for Discrete-Time LTI Feedback Systems 856 11.5 Gain and Phase Margins 858 11.6 Summary 866 Problems 867 APPENDIX PARTIAL-FRACTION EXPANSION 909 BIBLIOGRAPHY 921 ANSWERS 931 INDEX 941 This page intentionally left blank PREFACE 
This book is the second edition of a text designed for undergraduate courses in signals and systems. While such courses are frequently found in electrical engineering curricula, the concepts and techniques that form the core of the subject are of fundamental importance in all engineering disciplines. In fact, the scope of potential and actual applications of the methods of signal and system analysis continues to expand as engineers are confronted with new challenges involving the synthesis or analysis of complex processes. For these reasons we feel that a course in signals and systems not only is an essential element in an engineer- ing program but also can be one of the most rewarding, exciting, and useful courses that engineering students take during their undergraduate education. Our treatment of the subject of signals and systems in this second edition maintains the same general philosophy as in the first edition but with significant rewriting, restructuring, and additions. These changes are designed to help both the instructor in presenting the sub- ject material and the student in mastering it. In the preface to the first edition we stated that our overall approach to signals and systems had been guided by the continuing develop- ments in technologies for signal and system design and implementation, which made it in- creasingly important for a student to have equal familiarity with techniques suitable for analyzing and synthesizing both continuous-time and discrete-time systems. As we write the preface to this second edition, that observation and guiding principle are even more true than before. Thus, while students studying signals and systems should certainly have a solid foundation in disciplines based on the laws of physics, they must also have a firm grounding in the use of computers for the analysis of phenomena and the implementation of systems and algorithms. As a consequence, engineering curricula now reflect a blend of subjects, some involving continuous-time models and others focusing on the use of computers and discrete representations. For these reasons, signals and systems courses that bring discrete- time and continuous-time concepts together in a unified way play an increasingly important role in the education of engineering students and in their preparation for current and future developments in their chosen fields. It is with these goals in mind that we have structured this book to develop in parallel the methods of analysis for continuous-time and discrete-time signals and systems. This ap- proach also offers a distinct and extremely important pedagogical advantage. Specifically, we are able to draw on the similarities between continuous- and discrete-time methods in order to share insights and intuition developed in each domain. Similarly, we can exploit the differences between them to sharpen an understanding of the distinct properties of each. In organizing the material both originally and now in the second edition, we have also considered it essential to introduce the student to some of the important uses of the basic methods that are developed in the book. Not only does this provide the student with an appreciation for the range of applications of the techniques being learned and for directions for further study, but it also helps to deepen understanding of the subject. 
To achieve this xvii xviii Preface goal we include introductory treatments on the subjects of filtering, communications, sam- pling, discrete-time processing of continuous-time signals, and feedback. In fact, in one of the major changes in this second edition, we have introduced the concept of frequency- domain filtering very early in our treatment of Fourier analysis in order to provide both motivation for and insight into this very important topic. In addition, we have again included an up-to-date bibliography at the end of the book in order to assist the student who is inter- ested in pursuing additional and more advanced studies of the methods and applications of signal and system analysis. The organization of the book reflects our conviction that full mastery of a subject of this nature cannot be accomplished without a significant amount of practice in using and apply- ing the tools that are developed. Consequently, in the second edition we have significantly increased the number of worked examples within each chapter. We have also enhanced one of the key assets of the first edition, namely the end-of-chapter homework problems. As in the first edition, we have included a substantial number of problems, totaling more than 600 in number. A majority of the problems included here are new and thus provide additional flexibility for the instructor in preparing homework assignments. In addition, in order to enhance the utility of the problems for both the student and the instructor we have made a number of other changes to the organization and presentation of the problems. In particular, we have organized the problems in each chapter under several specific headings, each of which spans the material in the entire chapter but with a different objective. The first two sections of problems in each chapter emphasize the mechanics of using the basic concepts and methods presented in the chapter. For the first of these two sections, which has the heading Basic Problems with Answers, we have also provided an- swers (but not solutions) at the end of the book. These answers provide a simple and imme- diate way for the student to check his or her understanding of the material. The problems in this first section are generally appropriate for inclusion in homework sets. Also, in order to give the instructor additional flexibility in assigning homework problems, we have provided a second section of Basic Problems for which answers have not been included. A third section of problems in each chapter, organized under the heading of Advanced Problems, is oriented toward exploring and elaborating upon the foundations and practical implications of the material in the text. These problems often involve mathematical deriva- tions and more sophisticated use of the concepts and methods presented in the chapter. Some chapters also include a section of Extension Problems which involve extensions of material presented in the chapter and/or involve the use of knowledge from applications that are outside the scope of the main text (such as advanced circuits or mechanical systems). The overall variety and quantity of problems in each chapter will hopefully provide students with the means to develop their understanding of the material and instructors with consid- erable flexibility in putting together homework sets that are tailored to the specific needs of their students. A solutions manual is also available to instructors through the publisher. 
Another significant additional enhancement to this second edition is the availability of the companion book Explorations in Signals and Systems Using MATLAB by Buck, Daniel, and Singer. This book contains MATLAB™-based computer exercises for each topic in the text, and should be of great assistance to both instructor and student. Preface xix Students using this book are assumed to have a basic background in calculus as well as some experience in manipulating complex numbers and some exposure to differential equa- tions. With this background, the book is self-contained. In particular, no prior experience with system analysis, convolution, Fourier analysis, or Laplace and z-transforms is as- sumed. Prior to learning the subject of signals and systems most students will have had a course such as basic circuit theory for electrical engineers or fundamentals of dynamics for mechanical engineers. Such subjects touch on some of the basic ideas that are developed more fully in this text. This background can clearly be of great value to students in providing additional perspective as they proceed through the book. The Foreword, which follows this preface, is written to offer the reader motivation and perspective for the subject of signals and systems in general and our treatment of it in par- ticular. We begin Chapter 1 by introducing some of the elementary ideas related to the mathematical representation of signals and systems. In particular we discuss transfor- mations (such as time shifts and scaling) of the independent variable of a signal. We also introduce some of the most important and basic continuous-time and discrete-time signals, namely real and complex exponentials and the continuous-time and discrete-time unit step and unit impulse. Chapter 1 also introduces block diagram representations of interconnec- tions of systems and discusses several basic system properties such as causality, linearity and time-invariance. In Chapter 2 we build on these last two properties, together with the sifting property of unit impulses to develop the convolution-sum representation for discrete- time linear, time-invariant (LTI) systems and the convolution integral representation for continuous-time LTI systems. In this treatment we use the intuition gained from our devel- opment of the discrete-time case as an aid in deriving and understanding its continuous- time counterpart. We then turn to a discussion of causal, LTI systems characterized by linear constant-coefficient differential and difference equations. In this introductory discussion we review the basic ideas involved in solving linear differential equations (to which most stu- dents will have had some previous exposure) and we also provide a discussion of analogous methods for linear difference equations. However, the primary focus of our development in Chapter 2 is not on methods of solution, since more convenient approaches are developed later using transform methods. Instead, in this first look, our intent is to provide the student with some appreciation for these extremely important classes of systems, which will be encountered often in subsequent chapters. Finally, Chapter 2 concludes with a brief discus- sion of singularity functions—steps, impulses, doublets, and so forth—in the context of their role in the description and analysis of continuous-time LTI systems. 
In particular, we stress the interpretation of these signals in terms of how they are defined under convolu- tion—that is, in terms of the responses of LTI systems to these idealized signals. Chapters 3 through 6 present a thorough and self-contained development of the methods of Fourier analysis in both continuous and discrete time and together represent the most significant reorganization and revision in the second edition. In particular, as we indicated previously, we have introduced the concept of frequency-domain filtering at a much earlier point in the development in order to provide motivation for and a concrete application of the Fourier methods being developed. As in the first edition, we begin the discussions in Chapter 3 by emphasizing and illustrating the two fundamental reasons for the important xx Preface role Fourier analysis plays in the study of signals and systems in both continuous and dis- crete time: (1) extremely broad classes of signals can be represented as weighted sums or integrals of complex exponentials; and (2) the response of an LTI system to a complex exponential input is the same exponential multiplied by a complex-number characteristic of the system. However, in contrast to the first edition, the focus of attention in Chapter 3 is on Fourier series representations for periodic signals in both continuous time and discrete time. In this way we not only introduce and examine many of the properties of Fourier representations without the additional mathematical generalization required to obtain the Fourier transform for aperiodic signals, but we also can introduce the application to filtering at a very early stage in the development. In particular, taking advantage of the fact that complex exponentials are eigenfunctions of LTI systems, we introduce the frequency re- sponse of an LTI system and use it to discuss the concept of frequency-selective filtering, to introduce ideal filters, and to give several examples of nonideal filters described by dif- ferential and difference equations. In this way, with a minimum of mathematical prelimi- naries, we provide the student with a deeper appreciation for what a Fourier representation means and why it is such a useful construct. Chapters 4 and 5 then build on the foundation provided by Chapter 3 as we develop first the continuous-time Fourier transform in Chapter 4 and, in a parallel fashion, the discrete- time Fourier transform in Chapter 5. In both chapters we derive the Fourier transform rep- resentation of an aperiodic signal as the limit of the Fourier series for a signal whose period becomes arbitrarily large. This perspective emphasizes the close relationship between Fou- rier series and transforms, which we develop further in subsequent sections and which al- lows us to transfer the intuition developed for Fourier series in Chapter 3 to the more general context of Fourier transforms. In both chapters we have included a discussion of the many important properties of Fourier transforms, with special emphasis placed on the convolution and multiplication properties. In particular, the convolution property allows us to take a second look at the topic of frequency-selective filtering, while the multiplication property serves as the starting point for our treatment of sampling and modulation in later chapters. 
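Because the convolution and multiplication properties play such a central role in Chapters 4 and 5, a small numerical illustration may be helpful at this point. The sketch below is written in Python with NumPy rather than in the MATLAB of the companion book, and the particular sequences x and h are arbitrary choices of ours; it convolves a short input with a four-point moving-average impulse response and confirms that the transform of the convolution equals the product of the transforms.

```python
import numpy as np

# Illustrative sketch of the discrete-time convolution property:
# convolution in the time domain corresponds to multiplication of the
# Fourier transforms. The sequences x and h are arbitrary examples.

x = np.array([1.0, 2.0, 0.5, -1.0])     # input signal x[n]
h = np.array([0.25, 0.25, 0.25, 0.25])  # impulse response h[n] (moving average)

y = np.convolve(x, h)                   # y[n] = sum_k x[k] h[n-k]

# Sample the transforms on a common grid by zero-padding both sequences to
# the length of y and taking the DFT; the product of the transforms should
# then match the transform of the convolution.
N = len(y)
X = np.fft.fft(x, N)
H = np.fft.fft(h, N)
Y = np.fft.fft(y, N)

print(np.allclose(Y, X * H))            # True: convolution <-> multiplication
print(y)
```

The same check, run with any other finite-length x and h, illustrates why the convolution property makes frequency-selective filtering so transparent in the transform domain.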
Finally, in the last sections in Chapters 4 and 5 we use transform methods to determine the frequency responses of LTI systems described by differential and difference equations and to provide several examples illustrating how Fourier transforms can be used to compute the responses for such systems. To supplement these discussions (and later treatments of La- place and z-transforms) we have again included an Appendix at the end of the book that contains a description of the method of partial fraction expansion. Our treatment of Fourier analysis in these two chapters is characteristic of the parallel treatment we have developed. Specifically, in our discussion in Chapter 5, we are able to build on much of the insight developed in Chapter 4 for the continuous-time case, and to- ward the end of Chapter 5 we emphasize the complete duality in continuous-time and dis- crete-time Fourier representations. In addition, we bring the special nature of each domain into sharper focus by contrasting the differences between continuous- and discrete-time Fourier analysis. As those familiar with the first edition will note, the lengths and scopes of Chapters 4 and 5 in the second edition are considerably smaller than their first edition counterparts. This is due not only to the fact that Fourier series are now dealt with in a separate chapter but also to our moving several topics into Chapter 6. The result, we believe, has several Preface xxi significant benefits. First, the presentation in three shorter chapters of the basic concepts and results of Fourier analysis, together with the introduction of the concept of frequency- selective filtering, should help the student in organizing his or her understanding of this material and in developing some intuition about the frequency domain and appreciation for its potential applications. Then, with Chapters 3-5 as a foundation, we can engage in a more detailed look at a number of important topics and applications. In Chapter 6 we take a deeper look at both the time- and frequency-domain characteristics of LTI systems. For example, we introduce magnitude-phase and Bode plot representations for frequency responses and discuss the effect of frequency response phase on the time domain characteristics of the output of an LTI system. In addition, we examine the time- and frequency-domain behavior of ideal and nonideal filters and the tradeoffs between these that must be addressed in prac- tice. We also take a careful look at first- and second-order systems and their roles as basic building blocks for more complex system synthesis and analysis in both continuous and discrete time. Finally, we discuss several other more complex examples of filters in both continuous and discrete time. These examples together with the numerous other aspects of filtering explored in the problems at the end of the chapter provide the student with some appreciation for the richness and flavor of this important subject. While each of the topics in Chapter 6 was present in the first edition, we believe that by reorganizing and collecting them in a separate chapter following the basic development of Fourier analysis, we have both simplified the introduction of this important topic in Chapters 3-5 and presented in Chapter 6 a considerably more cohesive picture of time- and frequency-domain issues. 
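The use of transform methods to read a frequency response directly off a difference equation can likewise be made concrete with a short computation. The sketch below is again Python/NumPy rather than the companion book's MATLAB, and the coefficient a = 0.6 is an arbitrary choice; it evaluates the frequency response of the first-order recursive system y[n] - a y[n-1] = x[n] and checks it against the steady-state response of the recursion to a sinusoidal input, anticipating the magnitude-phase picture developed in Chapter 6.

```python
import numpy as np

# Sketch: frequency response of the first-order recursive system
#   y[n] - a*y[n-1] = x[n],  |a| < 1,
# for which transforming the difference equation gives
#   H(e^{jw}) = 1 / (1 - a e^{-jw}).

a = 0.6
w = np.linspace(-np.pi, np.pi, 513)
H = 1.0 / (1.0 - a * np.exp(-1j * w))   # frequency response over one period

magnitude = np.abs(H)                   # |H(e^{jw})|: lowpass shape for 0 < a < 1
phase = np.angle(H)                     # phase of H(e^{jw}) in radians

# Time-domain check: for x[n] = cos(w0 n), the steady-state output of the
# recursion is |H(e^{jw0})| cos(w0 n + angle(H(e^{jw0}))).
w0 = 0.4 * np.pi
H0 = 1.0 / (1.0 - a * np.exp(-1j * w0))
n = np.arange(2000)
x = np.cos(w0 * n)
y = np.zeros_like(x)
y[0] = x[0]
for k in range(1, len(n)):
    y[k] = a * y[k - 1] + x[k]          # run the recursion from rest

y_ss = np.abs(H0) * np.cos(w0 * n + np.angle(H0))
print(np.allclose(y[100:], y_ss[100:]))  # True once the transient has decayed
```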
In response to suggestions and preferences expressed by many users of the first edition we have modified notation in the discussion of Fourier transforms to be more consistent with notation most typically used for continuous-time and discrete-time Fourier transforms. Specifically, beginning with Chapter 3 we now denote the continuous-time Fourier trans- form as X( jω ) and the discrete-time Fourier transform as X(e jω). As with all options with notation, there is not a unique best choice for the notation for Fourier transforms. However, it is our feeling, and that of many of our colleagues, that the notation used in this edition represents the preferable choice. Our treatment of sampling in Chapter 7 is concerned primarily with the sampling theo- rem and its implications. However, to place this subject in perspective we begin by discuss- ing the general concepts of representing a continuous-time signal in terms of its samples and the reconstruction of signals using interpolation. After using frequency-domain meth- ods to derive the sampling theorem, we consider both the frequency and time domains to provide intuition concerning the phenomenon of aliasing resulting from undersampling. One of the very important uses of sampling is in the discrete-time processing of continuous- time signals, a topic that we explore at some length in this chapter. Following this, we turn to the sampling of discrete-time signals. The basic result underlying discrete-time sampling is developed in a manner that parallels that used in continuous time, and the applications of this result to problems of decimation and interpolation are described. Again a variety of other applications, in both continuous and discrete time, are addressed in the problems. Once again the reader acquainted with our first edition will note a change, in this case involving the reversal in the order of the presentation of sampling and communications. We have chosen to place sampling before communications in the second edition both because xxii Preface we can call on simple intuition to motivate and describe the processes of sampling and reconstruction from samples and also because this order of presentation then allows us in Chapter 8 to talk more easily about forms of communication systems that are closely related to sampling or rely fundamentally on using a sampled version of the signal to be transmitted. Our treatment of communications in Chapter 8 includes an in -depth discussion of con- tinuous-time sinusoidal amplitude modulation (AM), which begins with the straightforward application of the multiplication property to describe the effect of sinusoidal AM in the frequency domain and to suggest how the original modulating signal can be recovered. Fol- lowing this, we develop a number of additional issues and applications related to sinusoidal modulation, including frequency-division multiplexing and single-sideband modulation. Many other examples and applications are described in the problems. Several additional topics are covered in Chapter 8. The first of these is amplitude modulation of a pulse train and time-division multiplexing, which has a close connection to the topic of sampling in Chapter 7. Indeed we make this tie even more explicit and provide a look into the important field of digital communications by introducing and briefly describing the topics of pulse- amplitude modulation (PAM) and intersymbol interference. 
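Since pulse-amplitude modulation rests directly on the sampling ideas of Chapter 7, a brief numerical illustration of aliasing may be useful here. In the Python/NumPy sketch below (the 8-kHz sampling rate and 1-kHz tone are arbitrary choices, not examples from the text), two sinusoids whose frequencies differ by the sampling frequency produce exactly the same samples; this is the ambiguity that the sampling theorem, by restricting the signal's bandwidth relative to the sampling rate, rules out.

```python
import numpy as np

# Aliasing sketch: two continuous-time sinusoids whose frequencies differ by
# an integer multiple of the sampling frequency are indistinguishable from
# their samples.

fs = 8000.0                 # sampling frequency in Hz (arbitrary choice)
T = 1.0 / fs                # sampling period
n = np.arange(64)           # sample indices

f1 = 1000.0                 # 1 kHz tone, well below fs/2
f2 = f1 + fs                # 9 kHz tone, above fs/2: it aliases onto f1

x1 = np.cos(2 * np.pi * f1 * n * T)
x2 = np.cos(2 * np.pi * f2 * n * T)

# cos(2*pi*(f1+fs)*n*T) = cos(2*pi*f1*n*T + 2*pi*n) = cos(2*pi*f1*n*T)
print(np.allclose(x1, x2))  # True: identical sample sequences at rate fs
```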
Finally, our discussion of fre- quency modulation (FM) provides the reader with a look at a nonlinear modulation problem. Although the analysis of FM systems is not as straightforward as for the AM case, our introductory treatment indicates how frequency-domain methods can be used to gain a sig- nificant amount of insight into the characteristics of FM signals and systems. Through these discussions and the many other aspects of modulation and communications explored in the problems in this chapter we believe that the student can gain an appreciation both for the richness of the field of communications and for the central role that the tools of signals and systems analysis play in it. Chapters 9 and 10 treat the Laplace and z-transforms, respectively. For the most part, we focus on the bilateral versions of these transforms, although in the last section of each chapter we discuss unilateral transforms and their use in solving differential and difference equations with nonzero initial conditions. Both chapters include discussions on: the close relationship between these transforms and Fourier transforms; the class of rational trans- forms and their representation in terms of poles and zeros; the region of convergence of a Laplace or z-transform and its relationship to properties of the signal with which it is asso- ciated; inverse transforms using partial fraction expansion; the geometric evaluation of sys- tem functions and frequency responses from pole-zero plots; and basic transform properties. In addition, in each chapter we examine the properties and uses of system functions for LTI systems. Included in these discussions are the determination of system functions for systems characterized by differential and difference equations; the use of system function algebra for interconnections of LTI systems; and the construction of cascade, parallel- and direct- form block-diagram representations for systems with rational system functions. The tools of Laplace and z-transforms form the basis for our examination of linear feed- back systems in Chapter 11. We begin this chapter by describing a number of the important uses and properties of feedback systems, including stabilizing unstable systems, designing tracking systems, and reducing system sensitivity. In subsequent sections we use the tools that we have developed in previous chapters to examine three topics that are of importance for both continuous-time and discrete-time feedback systems. These are root locus analysis, Preface xxiii Nyquist plots and the Nyquist criterion, and log-magnitude/phase plots and the concepts of phase and gain margins for stable feedback systems. The subject of signals and systems is an extraordinarily rich one, and a variety of ap- proaches can be taken in designing an introductory course. It was our intention with the first edition and again with this second edition to provide instructors with a great deal of flexi- bility in structuring their presentations of the subject. To obtain this flexibility and to max- imize the usefulness of this book for instructors, we have chosen to present thorough, in- depth treatments of a cohesive set of topics that forms the core of most introductory courses on signals and systems. In achieving this depth we have of necessity omitted introductions to topics such as descriptions of random signals and state space models that are sometimes included in first courses on signals and systems. 
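Before turning to matters of course organization, the pole-zero reasoning just outlined lends itself to a short numerical sketch. The fragment below is in Python/NumPy rather than the companion book's MATLAB, and the second-order coefficients are an arbitrary example of ours, not one taken from the text; it finds the poles of a rational system function written in powers of z^-1 and applies the stability test for causal systems, namely that all poles lie strictly inside the unit circle.

```python
import numpy as np

# Sketch: stability of a causal LTI system from the poles of its rational
# system function. For a causal system the region of convergence lies outside
# the outermost pole, so stability requires every pole to be inside the unit
# circle. Example (arbitrary coefficients):
#   H(z) = (1 - 0.5 z^-1) / (1 - 1.1 z^-1 + 0.3 z^-2)

b = [1.0, -0.5]              # numerator coefficients in powers of z^-1
a = [1.0, -1.1, 0.3]         # denominator coefficients in powers of z^-1

poles = np.roots(a)          # roots of z^2 - 1.1 z + 0.3: 0.6 and 0.5
zeros = np.roots(b)          # finite zero at z = 0.5 (writing H in positive
                             # powers of z also adds a zero at the origin)

print("poles:", poles)
print("stable (causal case):", bool(np.all(np.abs(poles) < 1.0)))
```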
Traditionally, at many schools, topics such as random signals and state space are not included in introductory courses but rather are developed in more depth in follow-on undergraduate courses or in courses explicitly devoted to their investigation. Although we have not included an introduction to state space in the book, instructors of introductory courses can easily incorporate it into the treatments of differential and difference equations that can be found throughout the book. In particular, the discussions in Chapters 9 and 10 on block diagram representations for systems with rational system functions and on unilateral transforms and their use in solving differential and difference equations with initial conditions form natural points of departure for the discussions of state-space representations. A typical one-semester course at the sophomore-junior level using this book would cover Chapters 1-5 in reasonable depth (although various topics in each chapter are easily omitted at the discretion of the instructor) with selected topics chosen from the remaining chapters. For example, one possibility is to present several of the basic topics in Chapters 6-8 together with a treatment of Laplace and z-transforms and perhaps a brief introduction to the use of system function concepts to analyze feedback systems. A variety of alternate formats are possible, including one that incorporates an introduction to state space or one in which more focus is placed on continuous-time systems by de-emphasizing Chapters 5 and 10 and the discrete-time topics in Chapters 3, 7, 8, and 11. In addition to these course formats, this book can be used as the basic text for a thorough, two-semester sequence on linear systems. Alternatively, the portions of the book not used in a first course on signals and systems can, together with other sources, form the basis for a subsequent course. For example, much of the material in this book forms a direct bridge to subjects such as state space analysis, control systems, digital signal processing, communications, and statistical signal processing. Consequently, a follow-on course can be constructed that uses some of the topics in this book together with supplementary material in order to provide an introduction to one or more of these advanced subjects. In fact, a new course following this model has been developed at MIT and has proven not only to be a popular course among our students but also a crucial component of our signals and systems curriculum. As it was with the first edition, in the process of writing this book we have been fortunate to have received assistance, suggestions, and support from numerous colleagues, students and friends. The ideas and perspectives that form the heart of this book have continued to evolve as a result of our own experiences in teaching signals and systems and the influences of the many colleagues and students with whom we have worked. We would like to thank Professor Ian T. Young for his contributions to the first edition of this book and to thank and welcome Professor Hamid Nawab for the significant role he played in the development and complete restructuring of the examples and problems for this second edition. We also express our appreciation to John Buck, Michael Daniel and Andrew Singer for writing the MATLAB companion to the text. In addition, we would like to thank Jason Oppenheim for the use of one of his original photographs and Vivian Berman for her ideas and help in arriving at a cover design.
Also, as indicated on the acknowledgment page that follows, we are deeply grateful to the many students and colleagues who devoted a significant number of hours to a variety of aspects of the preparation of this second edition. We would also like to express our sincere thanks to Mr. Ray Stata and Analog Devices, Inc. for their generous and continued support of signal processing and this text through funding of the Distinguished Professor Chair in Electrical Engineering. We also thank M.I.T. for providing support and an invigorating environment in which to develop our ideas. The encouragement, patience, technical support, and enthusiasm provided by Prentice-Hall, and in particular by Marcia Horton, Tom Robbins, Don Fowley, and their predecessors, and by Ralph Pescatore of TKM Productions and the production staff at Prentice-Hall, have been crucial in making this second edition a reality.

Alan V. Oppenheim
Alan S. Willsky
Cambridge, Massachusetts

ACKNOWLEDGMENTS

In producing this second edition we were fortunate to receive the assistance of many colleagues, students, and friends who were extremely generous with their time. We express our deep appreciation to:

Jon Maira and Ashok Popat for their help in generating many of the figures and images.
Babak Ayazifar and Austin Frakt for their help in updating and assembling the bibliography.
Ramamurthy Mani for preparing the solutions manual for the text and for his help in generating many of the figures.
Michael Daniel for coordinating and managing the LaTeX files as the various drafts of the second edition were being produced and modified.
John Buck for his thorough reading of the entire draft of this second edition.
Robert Becker, Sally Bemus, Maggie Beucler, Ben Halpern, Jon Maira, Chirag Patel, and Jerry Weinstein for their efforts in producing the various LaTeX drafts of the book.

And to all who helped in careful reviewing of the page proofs: Babak Ayazifar, Christina Lamarre, Richard Barron, Nicholas Laneman, Rebecca Bates, Li Lee, George Bevis, Sean Lindsay, Sarit Birzon, Jeffrey T. Ludwig, Nabil Bitar, Seth Pappas, Nirav Dagli, Adrienne Prahler, Anne Findlay, Ryan Riddolls, Austin Frakt, Alan Seefeldt, Siddhartha Gupta, Sekhar Tatikonda, Christoforos Hadjicostis, Shawn Verbout, Terrence Ho, Kathleen Wage, Mark Ibanez, Alex Wang, Seema Jaggi, Joseph Winograd, and Patrick Kreidl.

FOREWORD

The concepts of signals and systems arise in a wide variety of fields, and the ideas and techniques associated with these concepts play an important role in such diverse areas of science and technology as communications, aeronautics and astronautics, circuit design, acoustics, seismology, biomedical engineering, energy generation and distribution systems, chemical process control, and speech processing. Although the physical nature of the signals and systems that arise in these various disciplines may be drastically different, they all have two very basic features in common. The signals, which are functions of one or more independent variables, contain information about the behavior or nature of some phenomenon, whereas the systems respond to particular signals by producing other signals or some desired behavior. Voltages and currents as a function of time in an electrical circuit are examples of signals, and a circuit is itself an example of a system, which in this case responds to applied voltages and currents.
As another example, when an automobile driver depresses the accelerator pedal, the automobile responds by increasing the speed of the vehicle. In this case, the system is the automobile, the pressure on the accelerator pedal the input to the system, and the automobile speed the response. A computer program for the automated diagnosis of electrocardiograms can be viewed as a system which has as its input a digitized electrocardiogram and which produces estimates of parameters such as heart rate as outputs. A camera is a system that receives light from different sources and reflected from objects and produces a photograph. A robot arm is a system whose movements are the response to control inputs. In the many contexts in which signals and systems arise, there are a variety of prob- lems and questions that are of importance. In some cases, we are presented with a specific system and are interested in characterizing it in detail to understand how it will respond to various inputs. Examples include the analysis of a circuit in order to quantify its response to different voltage and current sources and the determination of an aircraft's response characteristics both to pilot commands and to wind gusts. In other problems of signal and system analysis, rather than analyzing existing sys- tems, our interest may be focused on designing systems to process signals in particular ways. One very common context in which such problems arise is in the design of systems to enhance or restore signals that have been degraded in some way. For example, when a pilot is communicating with an air traffic control tower, the communication can be de- graded by the high level of background noise in the cockpit. In this and many similar cases, it is possible to design systems that will retain the desired signal, in this case the pilot's voice, and reject (at least approximately) the unwanted signal, i.e., the noise. A similar set of objectives can also be found in the general area of image restoration and image enhancement. For example, images from deep space probes or earth-observing satellites typically represent degraded versions of the scenes being imaged because of limitations of the imaging equipment, atmospheric effects, and errors in signal transmission in returning the images to earth. Consequently, images returned from space are routinely processed by systems to compensate for some of these degradations. In addition, such images are usu- xxvii xxviii Foreworc ally processed to enhance certain features, such as lines (corresponding, for example, to river beds or faults) or regional boundaries in which there are sharp contrasts in color or darkness. In addition to enhancement and restoration, in many applications there is a need to design systems to extract specific pieces of information from signals. The estimation of heart rate from an electrocardiogram is one example. Another arises in economic forecast- ing. We may, for example, wish to analyze the history of an economic time series, such as a set of stock market averages, in order to estimate trends and other characteristics such as seasonal variations that may be of use in making predictions about future behavior. In other applications, the focus may be on the design of signals with particular properties. Specifically, in communications applications considerable attention is paid to designing signals to meet the constraints and requirements for successful transmission. 
For exam- ple, long distance communication through the atmosphere requires the use of signals with frequencies in a particular part of the electromagnetic spectrum. The design of communi- cation signals must also take into account the need for reliable reception in the presence of both distortion due to transmission through the atmosphere and interference from other signals being transmitted simultaneously by other users. Another very important class of applications in which the concepts and techniques of signal and system analysis arise are those in which we wish to modify or control the characteristics of a given system, perhaps through the choice of specific input signals or by combining the system with other systems. Illustrative of this kind of application is the design of control systems to regulate chemical processing plants. Plants of this type are equipped with a variety of sensors that measure physical signals such as temperature, hu- midity, and chemical composition. The control system in such a plant responds to these sensor signals by adjusting quantities such as flow rates and temperature in order to regu- late the ongoing chemical process. The design of aircraft autopilots and computer control systems represents another example. In this case, signals measuring aircraft speed, alti- tude, and heading are used by the aircraft's control system in order to adjust variables such as throttle setting and the position of the rudder and ailerons. These adjustments are made to ensure that the aircraft follows a specified course, to smooth out the aircraft's ride, and to enhance its responsiveness to pilot commands. In both this case and in the previous ex- ample of chemical process control, an important concept, referred to as feedback, plays a major role, as measured signals are fed back and used to adjust the response characteristics of a system. The examples in the preceding paragraphs represent only a few of an extraordinarily wide variety of applications for the concepts of signals and systems. The importance of these concepts stems not only from the diversity of phenomena and processes in which they arise, but also from the collection of ideas, analytical techniques, and methodologies that have been and are being developed and used to solve problems involving signals and systems. The history of this development extends back over many centuries, and although most of this work was motivated by specific applications, many of these ideas have proven to be of central importance to problems in a far larger variety of contexts than those for which they were originally intended. For example, the tools of Fourier analysis, which form the basis for the frequency-domain analysis of signals and systems, and which we will develop in some detail in this book, can be traced from problems of astronomy studied by the ancient Babylonians to the development of mathematical physics in the eighteenth and nineteenth centuries. Foreword xxix In some of the examples that we have mentioned, the signals vary continuously in time, whereas in others, their evolution is described only at discrete points in time. For example, in the analysis of electrical circuits and mechanical systems we are concerned with signals that vary continuously. On the other hand, the daily closing stock market average is by its very nature a signal that evolves at discrete points in time (i.e., at the close of each day). 
Rather than a curve as a function of a continuous variable, then, the closing stock market average is a sequence of numbers associated with the discrete time instants at which it is specified. This distinction in the basic description of the evolution of signals and of the systems that respond to or process these signals leads naturally to two parallel frameworks for signal and system analysis, one for phenomena and processes that are described in continuous time and one for those that are described in discrete time. The concepts and techniques associated both with continuous-time signals and sys- tems and with discrete-time signals and systems have a rich history and are conceptually closely related. Historically, however, because their applications have in the past been suf- ficiently different, they have for the most part been studied and developed somewhat sepa- rately. Continuous-time signals and systems have very strong roots in problems associated with physics and, in the more recent past, with electrical circuits and communications. The techniques of discrete-time signals and systems have strong roots in numerical analy- sis, statistics, and time-series analysis associated with such applications as the analysis of economic and demographic data. Over the past several decades, however, the disciplines of continuous-time and discrete-time signals and systems have become increasingly en- twined and the applications have become highly interrelated. The major impetus for this has come from the dramatic advances in technology for the implementation of systems and for the generation of signals. Specifically, the continuing development of high-speed digital computers, integrated circuits, and sophisticated high-density device fabrication techniques has made it increasingly advantageous to consider processing continuous-time signals by representing them by time samples (i.e., by converting them to discrete-time signals). As one example, the computer control system for a modem high-performance aircraft digitizes sensor outputs such as vehicle speed in order to produce a sequence of sampled measurements which are then processed by the control system. Because of the growing interrelationship between continuous-time signals and sys- tems and discrete-time signals and systems and because of the close relationship among the concepts and techniques associated with each, we have chosen in this text to develop the concepts of continuous-time and discrete-time signals and systems in parallel. Since many of the concepts are similar (but not identical), by treating them in parallel, insight and intuition can be shared and both the similarities and differences between them become better focused. In addition, as will be evident as we proceed through the material, there are some concepts that are inherently easier to understand in one framework than the other and, once understood, the insight is easily transferable. Furthermore, this parallel treatment greatly facilitates our understanding of the very important practical context in which con- tinuous and discrete time are brought together, namely the sampling of continuous-time signals and the processing of continuous-time signals using discrete-time systems. As we have so far described them, the notions of signals and systems are extremely general concepts. At this level of generality, however, only the most sweeping statements can be made about the nature of signals and systems, and their properties can be discussed only in the most elementary terms. 
On the other hand, an important and fundamental notion in dealing with signals and systems is that by carefully choosing subclasses of each with particular properties that can then be exploited, we can analyze and characterize these signals and systems in great depth. The principal focus in this book is on the particular class of linear time-invariant systems. The properties of linearity and time invariance that define this class lead to a remarkable set of concepts and techniques which are not only of major practical importance but also analytically tractable and intellectually satisfying.

As we have emphasized in this foreword, signal and system analysis has a long history out of which have emerged some basic techniques and fundamental principles which have extremely broad areas of application. Indeed, signal and system analysis is constantly evolving and developing in response to new problems, techniques, and opportunities. We fully expect this development to accelerate in pace as improved technology makes possible the implementation of increasingly complex systems and signal processing techniques. In the future we will see signals and systems tools and concepts applied to an expanding scope of applications. For these reasons, we feel that the topic of signal and system analysis represents a body of knowledge that is of essential concern to the scientist and engineer. We have chosen the set of topics presented in this book, the organization of the presentation, and the problems in each chapter in a way that we feel will most help the reader to obtain a solid foundation in the fundamentals of signal and system analysis; to gain an understanding of some of the very important and basic applications of these fundamentals to problems in filtering, sampling, communications, and feedback system analysis; and to develop some appreciation for an extremely powerful and broadly applicable approach to formulating and solving complex problems.

1 SIGNALS AND SYSTEMS

1.0 INTRODUCTION

As described in the Foreword, the intuitive notions of signals and systems arise in a rich variety of contexts. Moreover, as we will see in this book, there is an analytical framework (that is, a language for describing signals and systems and an extremely powerful set of tools for analyzing them) that applies equally well to problems in many fields. In this chapter, we begin our development of the analytical framework for signals and systems by introducing their mathematical description and representations. In the chapters that follow, we build on this foundation in order to develop and describe additional concepts and methods that add considerably both to our understanding of signals and systems and to our ability to analyze and solve problems involving signals and systems that arise in a broad array of applications.

1.1 CONTINUOUS-TIME AND DISCRETE-TIME SIGNALS

1.1.1 Examples and Mathematical Representation

Signals may describe a wide variety of physical phenomena. Although signals can be represented in many ways, in all cases the information in a signal is contained in a pattern of variations of some form. For example, consider the simple circuit in Figure 1.1. In this case, the patterns of variation over time in the source and capacitor voltages, v_s and v_c, are examples of signals. Similarly, as depicted in Figure 1.2, the variations over time of the applied force f and the resulting automobile velocity v are signals.

Figure 1.1 A simple RC circuit with source voltage v_s and capacitor voltage v_c.
Figure 1.2 An automobile responding to an applied force f from the engine and to a retarding frictional force ρv proportional to the automobile's velocity v.
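To make the idea of a signal as a pattern of variation concrete, the short sketch below evaluates a capacitor voltage of the kind suggested by Figure 1.1. The waveform used is an illustrative assumption only: a constant source voltage switched on at t = 0 with the capacitor initially uncharged, for which the familiar first-order charging curve is v_c(t) = V_s(1 - e^(-t/RC)). The component values are likewise arbitrary; the figure itself specifies neither the source nor the initial conditions.

```python
import math

def vc(t, vs=1.0, r=1.0e3, c=1.0e-6):
    """Capacitor voltage for an RC circuit driven by a constant source vs
    switched on at t = 0, assuming zero initial charge (an illustrative
    assumption, not a waveform taken from Figure 1.1)."""
    if t < 0:
        return 0.0
    return vs * (1.0 - math.exp(-t / (r * c)))

# The signal is simply a rule assigning a value to each time instant t;
# the information lies in how that value varies with t.
for t in [0.0, 1e-3, 2e-3, 5e-3]:
    print(f"t = {t:.3f} s,  vc(t) = {vc(t):.4f} V")
```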
As another example, consider the human vocal mechanism, which produces speech by creating fluctuations in acoustic pressure. Figure 1.3 is an illustration of a recording of such a speech signal, obtained by using a microphone to sense variations in acoustic pressure, which are then converted into an electrical signal. As can be seen in the figure, different sounds correspond to different patterns in the variations of acoustic pressure, and the human vocal system produces intelligible speech by generating particular sequences of these patterns. Alternatively, for the monochromatic picture shown in Figure 1.4, it is the pattern of variations in brightness across the image that is important.

Figure 1.3 Example of a recording of speech, spanning approximately 200 msec. [Adapted from Applications of Digital Signal Processing, A. V. Oppenheim, ed. (Englewood Cliffs, N.J.: Prentice-Hall, Inc., 1978), p. 121.] The signal represents acoustic pressure variations as a function of time for the spoken words "should we chase." The top line of the figure corresponds to the word "should," the second line to the word "we," and the last two lines to the word "chase." (We have indicated the approximate beginnings and endings of each successive sound in each word.)
Figure 1.4 A monochromatic picture.

Signals are represented mathematically as functions of one or more independent variables. For example, a speech signal can be represented mathematically by acoustic pressure as a function of time, and a picture can be represented by brightness as a function of two spatial variables. In this book, we focus our attention on signals involving a single independent variable. For convenience, we will generally refer to the independent variable as time, although it may not in fact represent time in specific applications. For example, in geophysics, signals representing variations with depth of physical quantities such as density, porosity, and electrical resistivity are used to study the structure of the earth. Also, knowledge of the variations of air pressure, temperature, and wind speed with altitude is extremely important in meteorological investigations. Figure 1.5 depicts a typical example of an annual average vertical wind profile as a function of height. The measured variations of wind speed with height are used in examining weather patterns, as well as wind conditions that may affect an aircraft during final approach and landing.

Throughout this book we will be considering two basic types of signals: continuous-time signals and discrete-time signals.
In the case of continuous-time signals the independent variable is continuous, and thus these signals are defined for a continuum of values of the independent variable. On the other hand, discrete-time signals are defined only at discrete times, and consequently, for these signals, the independent variable takes on only a discrete set of values. A speech signal as a function of time and atmospheric pressure as a function of altitude are examples of continuous-time signals. The weekly Dow-Jones stock market index, as illustrated in Figure 1.6, is an example of a discrete-time signal. Other examples of discrete-time signals can be found in demographic studies in which various attributes, such as average budget, crime rate, or pounds of fish caught, are tabulated against such discrete variables as family size, total population, or type of fishing vessel, respectively.

Figure 1.5 Typical annual vertical wind profile, plotted against height in feet. (Adapted from Crawford and Hudson, National Severe Storms Laboratory Report, ESSA ERLTM-NSSL 48, August 1970.)
Figure 1.6 An example of a discrete-time signal: the weekly Dow-Jones stock market index from January 5, 1929, to January 4, 1930.

To distinguish between continuous-time and discrete-time signals, we will use the symbol t to denote the continuous-time independent variable and n to denote the discrete-time independent variable. In addition, for continuous-time signals we will enclose the independent variable in parentheses ( · ), whereas for discrete-time signals we will use brackets [ · ] to enclose the independent variable. We will also have frequent occasions when it will be useful to represent signals graphically. Illustrations of a continuous-time signal x(t) and a discrete-time signal x[n] are shown in Figure 1.7. It is important to note that the discrete-time signal x[n] is defined only for integer values of the independent variable. Our choice of graphical representation for x[n] emphasizes this fact, and for further emphasis we will on occasion refer to x[n] as a discrete-time sequence.

Figure 1.7 Graphical representations of (a) continuous-time and (b) discrete-time signals.

A discrete-time signal x[n] may represent a phenomenon for which the independent variable is inherently discrete. Signals such as demographic data are examples of this. On the other hand, a very important class of discrete-time signals arises from the sampling of continuous-time signals. In this case, the discrete-time signal x[n] represents successive samples of an underlying phenomenon for which the independent variable is continuous. Because of their speed, computational power, and flexibility, modern digital processors are used to implement many practical systems, ranging from digital autopilots to digital audio systems. Such systems require the use of discrete-time sequences representing sampled versions of continuous-time signals (e.g., aircraft position, velocity, and heading for an autopilot, or speech and music for an audio system). Also, pictures in newspapers (or in this book, for that matter) actually consist of a very fine grid of points, and each of these points represents a sample of the brightness of the corresponding point in the original image.
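As a small illustration of how a discrete-time sequence can arise from sampling, the sketch below evaluates a continuous-time signal at the equally spaced instants t = nT. Both the particular signal and the sampling period T are arbitrary choices made here for illustration; sampling itself is treated carefully in Chapter 7.

```python
import math

def x_c(t):
    """An illustrative continuous-time signal (an arbitrary choice):
    a decaying sinusoid."""
    return math.exp(-0.5 * t) * math.cos(2.0 * math.pi * t)

T = 0.1  # sampling period; the value is an assumption for this sketch

def x(n):
    """The discrete-time sequence x[n] = x_c(nT), defined only for integer n."""
    return x_c(n * T)

print([round(x(n), 4) for n in range(10)])
```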
No matter what the source of the data, however, the signal x[n] is defined only for integer values of n. It makes no more sense to refer to the 3½th sample of a digital speech signal than it does to refer to the average budget for a family with 2½ family members.

Throughout most of this book we will treat discrete-time signals and continuous-time signals separately but in parallel, so that we can draw on insights developed in one setting to aid our understanding of another. In Chapter 7 we will return to the question of sampling, and in that context we will bring continuous-time and discrete-time concepts together in order to examine the relationship between a continuous-time signal and a discrete-time signal obtained from it by sampling.

1.1.2 Signal Energy and Power

From the range of examples provided so far, we see that signals may represent a broad variety of phenomena. In many, but not all, applications, the signals we consider are directly related to physical quantities capturing power and energy in a physical system. For example, if v(t) and i(t) are, respectively, the voltage and current across a resistor with resistance R, then the instantaneous power is

p(t) = v(t) i(t) = \frac{1}{R} v^2(t).    (1.1)

The total energy expended over the time interval t_1 ≤ t ≤ t_2 is

\int_{t_1}^{t_2} p(t)\,dt = \int_{t_1}^{t_2} \frac{1}{R} v^2(t)\,dt,    (1.2)

and the average power over this time interval is

\frac{1}{t_2 - t_1} \int_{t_1}^{t_2} p(t)\,dt = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \frac{1}{R} v^2(t)\,dt.    (1.3)

Similarly, for the automobile depicted in Figure 1.2, the instantaneous power dissipated through friction is p(t) = ρv^2(t), and we can then define the total energy and average power over a time interval in the same way as in eqs. (1.2) and (1.3).

With simple physical examples such as these as motivation, it is a common and worthwhile convention to use similar terminology for power and energy for any continuous-time signal x(t) or any discrete-time signal x[n]. Moreover, as we will see shortly, we will frequently find it convenient to consider signals that take on complex values. In this case, the total energy over the time interval t_1 ≤ t ≤ t_2 in a continuous-time signal x(t) is defined as

\int_{t_1}^{t_2} |x(t)|^2\,dt,    (1.4)

where |x| denotes the magnitude of the (possibly complex) number x. The time-averaged power is obtained by dividing eq. (1.4) by the length, t_2 - t_1, of the time interval. Similarly, the total energy in a discrete-time signal x[n] over the time interval n_1 ≤ n ≤ n_2 is defined as

\sum_{n = n_1}^{n_2} |x[n]|^2,    (1.5)

and dividing by the number of points in the interval, n_2 - n_1 + 1, yields the average power over the interval. It is important to remember that the terms "power" and "energy" are used here independently of whether the quantities in eqs. (1.4) and (1.5) actually are related to physical energy.¹ Nevertheless, we will find it convenient to use these terms in a general fashion.

Furthermore, in many systems we will be interested in examining power and energy in signals over an infinite time interval, i.e., for -∞ < t < +∞ or for -∞ < n < +∞. In these cases, we define the total energy as limits of eqs. (1.4) and (1.5) as the time interval increases without bound. That is, in continuous time,

E_\infty \triangleq \lim_{T \to \infty} \int_{-T}^{T} |x(t)|^2\,dt = \int_{-\infty}^{+\infty} |x(t)|^2\,dt,    (1.6)

and in discrete time,

E_\infty \triangleq \lim_{N \to \infty} \sum_{n = -N}^{+N} |x[n]|^2 = \sum_{n = -\infty}^{+\infty} |x[n]|^2.    (1.7)

¹Even if such a relationship does exist, eqs. (1.4) and (1.5) may have the wrong dimensions and scalings. For example, comparing eqs. (1.2) and (1.4), we see that if x(t) represents the voltage across a resistor, then eq. (1.4) must be divided by the resistance (measured, for example, in ohms) to obtain units of physical energy.
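The finite-interval definitions translate directly into a short computation. The sketch below implements the discrete-time forms: the total energy over n_1 ≤ n ≤ n_2 as in eq. (1.5), and the corresponding average power obtained by dividing by n_2 - n_1 + 1. The example sequence is an arbitrary choice used only to exercise the formulas.

```python
def energy(x, n1, n2):
    """Total energy of x[n] over n1 <= n <= n2, as in eq. (1.5):
    the sum of |x[n]|**2 over the interval."""
    return sum(abs(x(n)) ** 2 for n in range(n1, n2 + 1))

def average_power(x, n1, n2):
    """Average power over the interval: the energy divided by the
    number of points, n2 - n1 + 1."""
    return energy(x, n1, n2) / (n2 - n1 + 1)

# An arbitrary complex-valued example sequence (abs() gives the magnitude).
x = lambda n: (0.9 ** abs(n)) * (1 + 1j)

print(energy(x, -10, 10))
print(average_power(x, -10, 10))
```

Letting the interval grow without bound, as in eqs. (1.6) and (1.7), gives the total energy E_∞ considered next.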
Note that for some signals the integral in eq. (1.6) or sum in eq. (1.7) might not converge, e.g., if x(t) or x[n] equals a nonzero constant value for all time. Such signals have infinite energy, while signals with E_∞ < ∞ have finite energy. In an analogous fashion, we can define the time-averaged power over an infinite interval as

P_\infty \triangleq \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2\,dt    (1.8)

and

P_\infty \triangleq \lim_{N \to \infty} \frac{1}{2N + 1} \sum_{n = -N}^{+N} |x[n]|^2    (1.9)

in continuous time and discrete time, respectively. With these definitions, we can identify three important classes of signals. The first of these is the class of signals with finite total energy, i.e., those signals for which E_∞ < ∞. Such a signal must have zero average power, since in the continuous-time case, for example, we see from eq. (1.8) that

P_\infty = \lim_{T \to \infty} \frac{E_\infty}{2T} = 0.    (1.10)

An example of a finite-energy signal is a signal that takes on the value 1 for 0 ≤ t ≤ 1 and 0 otherwise; in this case, E_∞ = 1 and P_∞ = 0. A second class of signals consists of those with finite average power P_∞. From what we have just seen, if P_∞ > 0, then, of necessity, E_∞ = ∞. This, of course, makes sense, since if there is a nonzero average energy per unit time (i.e., nonzero power), then integrating or summing this over an infinite time interval yields an infinite amount of energy. For example, the constant signal x[n] = 4 has infinite energy, but average power P_∞ = 16. There are also signals for which neither P_∞ nor E_∞ is finite. A simple example is the signal x(t) = t. We will encounter other examples of signals in each of these classes in the remainder of this and the following chapters.

1.2 TRANSFORMATIONS OF THE INDEPENDENT VARIABLE

A central concept in signal and system analysis is that of the transformation of a signal. For example, in an aircraft control system, signals corresponding to the actions of the pilot are transformed by electrical and mechanical systems into changes in aircraft thrust or the positions of aircraft control surfaces such as the rudder or ailerons, which in turn are transformed through the dynamics and kinematics of the vehicle into changes in aircraft velocity and heading. Also, in a high-fidelity audio system, an input signal representing music as recorded on a cassette or compact disc is modified in order to enhance desirable characteristics, to remove recording noise, or to balance the several components of the signal (e.g., treble and bass). In this section, we focus on a very limited but important class of elementary signal transformations that involve simple modification of the independent variable, i.e., the time axis. As we will see in this and subsequent sections of this chapter, these elementary transformations allow us to introduce several basic properties of signals and systems. In later chapters, we will find that they also play an important role in defining and characterizing far richer and important classes of systems.

1.2.1 Examples of Transformations of the Independent Variable

A simple and very important example of transforming the independent variable of a signal is a time shift. A time shift in discrete time is illustrated in Figure 1.8, in which we have two signals x[n] and x[n - n_0] that are identical in shape, but that are displaced or shifted relative to each other. We will also encounter time shifts in continuous time, as illustrated in Figure 1.9, in which x(t - t_0) represents a delayed (if t_0 is positive) or advanced (if t_0 is negative) version of x(t).
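For a discrete-time signal stored as a function of n, a time shift amounts to nothing more than evaluating the signal at n - n_0. The sketch below is a minimal illustration under assumed data (the example signal is an arbitrary finite-length pulse, not one of the figures); positive n_0 delays the signal and negative n_0 advances it, matching the conventions of Figures 1.8 and 1.9.

```python
def shift(x, n0):
    """Return the time-shifted signal y[n] = x[n - n0].
    n0 > 0 delays x[n]; n0 < 0 advances it."""
    return lambda n: x(n - n0)

# An arbitrary example: a pulse that is 1 for 0 <= n <= 3 and 0 otherwise.
x = lambda n: 1 if 0 <= n <= 3 else 0
y = shift(x, 2)  # a delay of two samples

print([x(n) for n in range(-2, 8)])  # [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
print([y(n) for n in range(-2, 8)])  # [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
```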
Signals that are related in this fashion arise in applications such as radar, sonar, and seismic signal processing, in which several receivers at different locations observe a signal being transmitted through a medium (water, rock, air, etc.). In this case, the difference in propagation time from the point of origin of the transmitted signal to any two receivers results in a time shift between the signals at the two receivers.

Figure 1.8 Discrete-time signals related by a time shift. In this figure n_0 > 0, so that x[n - n_0] is a delayed version of x[n] (i.e., each point in x[n] occurs later in x[n - n_0]).
Figure 1.9 Continuous-time signals related by a time shift. In this figure t_0 < 0, so that x(t - t_0) is an advanced version of x(t) (i.e., each point in x(t) occurs at an earlier time in x(t - t_0)).

A second basic transformation of the time axis is that of time reversal. For example, as illustrated in Figure 1.10, the signal x[-n] is obtained from the signal x[n] by a reflection about n = 0 (i.e., by reversing the signal). Similarly, as depicted in Figure 1.11, the signal x(-t) is obtained from the signal x(t) by a reflection about t = 0. Thus, if x(t) represents an audio tape recording, then x(-t) is the same tape recording played backward. Another transformation is that of time scaling. In Figure 1.12 we have illustrated three signals, x(t), x(2t), and x(t/2), that are related by linear scale changes in the independent variable. If we again think of the example of x(t) as a tape recording, then x(2t) is that recording played at twice the speed, and x(t/2) is the recording played at half speed.

Figure 1.10 (a) A discrete-time signal x[n]; (b) its reflection x[-n] about n = 0.
Figure 1.11 (a) A continuous-time signal x(t); (b) its reflection x(-t) about t = 0.
Figure 1.12 Continuous-time signals related by time scaling: x(t), x(2t), and x(t/2).

It is often of interest to determine the effect of transforming the independent variable of a given signal x(t) to obtain a signal of the form x(αt + β), where α and β are given numbers. Such a transformation of the independent variable preserves the shape of x(t), except that the resulting signal may be linearly stretched if |α| < 1, linearly compressed if |α| > 1, reversed in time if α < 0, and shifted in time if β is nonzero. This is illustrated in the following set of examples.

Figure 1.13 (a) The continuous-time signal x(t) used in Examples 1.1-1.3 to illustrate transformations of the independent variable; (b) the time-shifted signal x(t + 1); (c) the signal x(-t + 1) obtained by a time shift and a time reversal; (d) the time-scaled signal x(3t/2); and (e) the signal x(3t/2 + 1) obtained by time-shifting and scaling.

Example 1.1
Given the signal x(t) shown in Figure 1.13(a), the signal x(t + 1) corresponds to an advance (shift to the left) by one unit along the t axis as illustrated in Figure 1.13(b). Specifically, we note that the value of x(t) at t = t_0 occurs in x(t + 1) at t = t_0 - 1. For example, the value of x(t) at t = 1 is found in x(t + 1) at t = 1 - 1 = 0. Also, since x(t) is zero for t < 0, we have x(t + 1) zero for t < -1. Similarly, since x(t) is zero for t > 2, x(t + 1) is zero for t > 1. Let us also consider the signal x(-t + 1), which may be obtained by replacing t with -t in x(t + 1). That is, x(-t + 1) is the time-reversed version of x(t + 1). Thus, x(-t + 1) may be obtained graphically by reflecting x(t + 1) about t = 0, as shown in Figure 1.13(c).
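The manipulations in Example 1.1 can be checked numerically: to evaluate a transformed signal such as x(t + 1) or x(-t + 1) at a time t, we simply evaluate the original x at the transformed argument. In the sketch below the signal x(t) is only a stand-in, chosen to be zero outside 0 ≤ t ≤ 2 as stated in the example; the exact shape plotted in Figure 1.13(a) is not reproduced here.

```python
def transform(x, a, b):
    """Given x(t), return the signal y(t) = x(a*t + b)."""
    return lambda t: x(a * t + b)

# A stand-in for the signal of Figure 1.13(a): it is zero outside
# 0 <= t <= 2, but its exact shape inside that interval is not reproduced.
def x(t):
    return 1.0 if 0.0 <= t <= 2.0 else 0.0

x_adv = transform(x, 1.0, 1.0)   # x(t + 1): advanced by one unit
x_rev = transform(x, -1.0, 1.0)  # x(-t + 1): advanced, then time reversed

# Both transformed signals are nonzero only for -1 <= t <= 1,
# in agreement with Figures 1.13(b) and 1.13(c).
print([t for t in [-2, -1, 0, 1, 2] if x_adv(t) != 0])  # [-1, 0, 1]
print([t for t in [-2, -1, 0, 1, 2] if x_rev(t) != 0])  # [-1, 0, 1]
```

The same helper with a = 3/2 reproduces the compressions considered in the next two examples.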
Example 1.2
Given the signal x(t) shown in Figure 1.13(a), the signal x(3t/2) corresponds to a linear compression of x(t) by a factor of 2/3, as illustrated in Figure 1.13(d). Specifically, we note that the value of x(t) at t = t_0 occurs in x(3t/2) at t = (2/3)t_0. For example, the value of x(t) at t = 1 is found in x(3t/2) at t = (2/3)(1) = 2/3. Also, since x(t) is zero for t < 0, we have x(3t/2) zero for t < 0. Similarly, since x(t) is zero for t > 2, x(3t/2) is zero for t > 4/3.

Example 1.3
Suppose that we would like to determine the effect of transforming the independent variable of a given signal, x(t), to obtain a signal of the form x(αt + β), where α and β are given numbers. A systematic approach to doing this is to first delay or advance x(t) in accordance with the value of β, and then to perform time scaling and/or time reversal on the resulting signal in accordance with the value of α. The delayed or advanced signal is linearly stretched if |α| < 1, linearly compressed if |α| > 1, and reversed in time if α < 0. To illustrate this approach, let us show how x(3t/2 + 1) may be determined for the signal x(t) shown in Figure 1.13(a). Since β = 1, we first advance (shift to the left) x(t) by 1, as shown in Figure 1.13(b). Since |α| = 3/2 > 1, we may linearly compress the shifted signal of Figure 1.13(b) by a factor of 2/3 to obtain the signal shown in Figure 1.13(e).

In addition to their use in representing physical phenomena such as the time shift in a sonar signal and the speeding up or reversal of an audiotape, transformations of the independent variable are extremely useful in signal and system analysis. In Section 1.6 and in Chapter 2, we will use transformations of the independent variable to introduce and analyze the properties of systems. These transformations are also important in defining and examining some important properties of signals.

1.2.2 Periodic Signals

An important class of signals that we will encounter frequently throughout this book is the class of periodic signals. A periodic continuous-time signal x(t) has the property that there is a positive value of T for which

x(t) = x(t + T)    (1.11)

for all values of t. In other words, a periodic signal has the property that it is unchanged by a time shift of T.
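As a simple, standard illustration of the definition in eq. (1.11) (an added example, not drawn from the surrounding text), consider a sinusoid: it is periodic with period equal to 2π divided by its angular frequency.

```latex
% For x(t) = \cos(\omega_0 t) with \omega_0 > 0, taking T = 2\pi/\omega_0 gives
x(t + T) = \cos\big(\omega_0 t + \omega_0 \tfrac{2\pi}{\omega_0}\big)
         = \cos(\omega_0 t + 2\pi)
         = \cos(\omega_0 t)
         = x(t),
% so eq. (1.11) is satisfied for every t.
```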