Nonlinear Dynamics and Chaos (2018)
Steven H. Strogatz
Summary
This book covers nonlinear dynamics and chaos with applications to physics, biology, chemistry, and engineering. The material is suitable for a first course in the subject, emphasizing analytical methods, concrete examples, and geometric intuition. The book explores topics such as mechanical vibrations, lasers, biological rhythms, and chaotic waterwheels.
Full Transcript
NONLINEAR DYNAMICS AND CHAOS
With Applications to Physics, Biology, Chemistry, and Engineering
Steven H. Strogatz

Boca Raton, London, New York. CRC Press is an imprint of the Taylor & Francis Group, an informa business. A Chapman & Hall Book.

First published 2015 by Westview Press. Published 2018 by CRC Press, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742.

Copyright © 2015 by Steven H. Strogatz. No claim to original U.S. Government works.

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com

Every effort has been made to secure required permissions for all text, images, maps, and other art reprinted in this volume. A CIP catalog record for the print version of this book is available from the Library of Congress.

ISBN 13: 978-0-8133-4910-7 (pbk)

Text design by Robert B. Kern. Set in Times LT Std by TIPS Technical Publishing, Inc.
CONTENTS

Preface to the Second Edition
Preface to the First Edition

1 Overview
1.0 Chaos, Fractals, and Dynamics
1.1 Capsule History of Dynamics
1.2 The Importance of Being Nonlinear
1.3 A Dynamical View of the World

Part I: One-Dimensional Flows

2 Flows on the Line
2.0 Introduction
2.1 A Geometric Way of Thinking
2.2 Fixed Points and Stability
2.3 Population Growth
2.4 Linear Stability Analysis
2.5 Existence and Uniqueness
2.6 Impossibility of Oscillations
2.7 Potentials
2.8 Solving Equations on the Computer
Exercises for Chapter 2

3 Bifurcations
3.0 Introduction
3.1 Saddle-Node Bifurcation
3.2 Transcritical Bifurcation
3.3 Laser Threshold
3.4 Pitchfork Bifurcation
3.5 Overdamped Bead on a Rotating Hoop
3.6 Imperfect Bifurcations and Catastrophes
3.7 Insect Outbreak
Exercises for Chapter 3

4 Flows on the Circle
4.0 Introduction
4.1 Examples and Definitions
4.2 Uniform Oscillator
4.3 Nonuniform Oscillator
4.4 Overdamped Pendulum
4.5 Fireflies
4.6 Superconducting Josephson Junctions
Exercises for Chapter 4

Part II: Two-Dimensional Flows

5 Linear Systems
5.0 Introduction
5.1 Definitions and Examples
5.2 Classification of Linear Systems
5.3 Love Affairs
Exercises for Chapter 5

6 Phase Plane
6.0 Introduction
6.1 Phase Portraits
6.2 Existence, Uniqueness, and Topological Consequences
6.3 Fixed Points and Linearization
6.4 Rabbits versus Sheep
6.5 Conservative Systems
6.6 Reversible Systems
6.7 Pendulum
6.8 Index Theory
Exercises for Chapter 6

7 Limit Cycles
7.0 Introduction
7.1 Examples
7.2 Ruling Out Closed Orbits
7.3 Poincaré–Bendixson Theorem
7.4 Liénard Systems
7.5 Relaxation Oscillations
7.6 Weakly Nonlinear Oscillators
Exercises for Chapter 7

8 Bifurcations Revisited
8.0 Introduction
8.1 Saddle-Node, Transcritical, and Pitchfork Bifurcations
8.2 Hopf Bifurcations
8.3 Oscillating Chemical Reactions
8.4 Global Bifurcations of Cycles
8.5 Hysteresis in the Driven Pendulum and Josephson Junction
8.6 Coupled Oscillators and Quasiperiodicity
8.7 Poincaré Maps
Exercises for Chapter 8

Part III: Chaos

9 Lorenz Equations
9.0 Introduction
9.1 A Chaotic Waterwheel
9.2 Simple Properties of the Lorenz Equations
9.3 Chaos on a Strange Attractor
9.4 Lorenz Map
9.5 Exploring Parameter Space
9.6 Using Chaos to Send Secret Messages
Exercises for Chapter 9

10 One-Dimensional Maps
10.0 Introduction
10.1 Fixed Points and Cobwebs
10.2 Logistic Map: Numerics
10.3 Logistic Map: Analysis
10.4 Periodic Windows
10.5 Liapunov Exponent
10.6 Universality and Experiments
10.7 Renormalization
Exercises for Chapter 10

11 Fractals
11.0 Introduction
11.1 Countable and Uncountable Sets
11.2 Cantor Set
11.3 Dimension of Self-Similar Fractals
11.4 Box Dimension
11.5 Pointwise and Correlation Dimensions
Exercises for Chapter 11

12 Strange Attractors
12.0 Introduction
12.1 The Simplest Examples
12.2 Hénon Map
12.3 Rössler System
12.4 Chemical Chaos and Attractor Reconstruction
12.5 Forced Double-Well Oscillator
Exercises for Chapter 12

Answers to Selected Exercises
References
Author Index
Subject Index

PREFACE TO THE SECOND EDITION

Welcome to this second edition of Nonlinear Dynamics and Chaos, now available in
e-book format as well as traditional print. In the twenty years since this book first appeared, the ideas and techniques of nonlinear dynamics and chaos have found application in such exciting new fields as systems biology, evolutionary game theory, and sociophysics. To give you a taste of these recent developments, I've added about twenty substantial new exercises that I hope will entice you to learn more. The fields and applications include (with the associated exercises listed in parentheses):

Animal behavior: calling rhythms of Japanese tree frogs (8.6.9)
Classical mechanics: driven pendulum with quadratic damping (8.5.5)
Ecology: predator-prey model; periodic harvesting (7.2.18, 8.5.4)
Evolutionary biology: survival of the fittest (2.3.5, 6.4.8)
Evolutionary game theory: rock-paper-scissors (6.5.20, 7.3.12)
Linguistics: language death (2.3.6)
Prebiotic chemistry: hypercycles (6.4.10)
Psychology and literature: love dynamics in Gone with the Wind (7.2.19)
Macroeconomics: Keynesian cross model of a national economy (6.4.9)
Mathematics: repeated exponentiation (10.4.11)
Neuroscience: binocular rivalry in visual perception (8.1.14, 8.2.17)
Sociophysics: opinion dynamics (6.4.11, 8.1.15)
Systems biology: protein dynamics (3.7.7, 3.7.8)

Thanks to my colleagues Danny Abrams, Bob Behringer, Dirk Brockmann, Michael Elowitz, Roy Goodman, Jeff Hasty, Chad Higdon-Topaz, Mogens Jensen, Nancy Kopell, Tanya Leise, Govind Menon, Richard Murray, Mary Silber, Jim Sochacki, Jean-Luc Thiffeault, John Tyson, Chris Wiggins, and Mary Lou Zeeman for their suggestions about possible new exercises. I am especially grateful to Bard Ermentrout for devising the exercises about Japanese tree frogs (8.6.9) and binocular rivalry (8.1.14, 8.2.17), and to Jordi Garcia-Ojalvo for sharing his exercises about systems biology (3.7.7, 3.7.8).

In all other respects, the aims, organization, and text of the first edition have been left intact, except for a few corrections and updates here and there. Thanks to all the teachers and students who wrote in with suggestions.

It has been a pleasure to work with Sue Caulfield, Priscilla McGeehon, and Cathleen Tetro at Westview Press. Many thanks for your guidance and attention to detail.

Finally, all my love goes out to my wife Carole, daughters Leah and Jo, and dog Murray, for putting up with my distracted air and making me laugh.

Steven H. Strogatz
Ithaca, New York 2014

PREFACE TO THE FIRST EDITION

This textbook is aimed at newcomers to nonlinear dynamics and chaos, especially students taking a first course in the subject. It is based on a one-semester course I've taught for the past several years at MIT. My goal is to explain the mathematics as clearly as possible, and to show how it can be used to understand some of the wonders of the nonlinear world.

The mathematical treatment is friendly and informal, but still careful. Analytical methods, concrete examples, and geometric intuition are stressed. The theory is developed systematically, starting with first-order differential equations and their bifurcations, followed by phase plane analysis, limit cycles and their bifurcations, and culminating with the Lorenz equations, chaos, iterated maps, period doubling, renormalization, fractals, and strange attractors.

A unique feature of the book is its emphasis on applications.
These include mechanical vibrations, lasers, biological rhythms, superconducting circuits, insect outbreaks, chemical oscillators, genetic control systems, chaotic waterwheels, and even a technique for using chaos to send secret messages. In each case, the scientific background is explained at an elementary level and closely integrated with the mathematical theory.

Prerequisites

The essential prerequisite is single-variable calculus, including curve-sketching, Taylor series, and separable differential equations. In a few places, multivariable calculus (partial derivatives, Jacobian matrix, divergence theorem) and linear algebra (eigenvalues and eigenvectors) are used. Fourier analysis is not assumed, and is developed where needed. Introductory physics is used throughout. Other scientific prerequisites would depend on the applications considered, but in all cases, a first course should be adequate preparation.

Possible Courses

The book could be used for several types of courses:

A broad introduction to nonlinear dynamics, for students with no prior exposure to the subject. (This is the kind of course I have taught.) Here one goes straight through the whole book, covering the core material at the beginning of each chapter, selecting a few applications to discuss in depth and giving light treatment to the more advanced theoretical topics or skipping them altogether. A reasonable schedule is seven weeks on Chapters 1-8, and five or six weeks on Chapters 9-12. Make sure there's enough time left in the semester to get to chaos, maps, and fractals.

A traditional course on nonlinear ordinary differential equations, but with more emphasis on applications and less on perturbation theory than usual. Such a course would focus on Chapters 1-8.

A modern course on bifurcations, chaos, fractals, and their applications, for students who have already been exposed to phase plane analysis. Topics would be selected mainly from Chapters 3, 4, and 8-12.

For any of these courses, the students should be assigned homework from the exercises at the end of each chapter. They could also do computer projects; build chaotic circuits and mechanical systems; or look up some of the references to get a taste of current research. This can be an exciting course to teach, as well as to take. I hope you enjoy it.

Conventions

Equations are numbered consecutively within each section. For instance, when we're working in Section 5.4, the third equation is called (3) or Equation (3), but elsewhere it is called (5.4.3) or Equation (5.4.3). Figures, examples, and exercises are always called by their full names, e.g., Exercise 1.2.3. Examples and proofs end with a loud thump, denoted by the symbol ∎.

Acknowledgments

Thanks to the National Science Foundation for financial support. For help with the book, thanks to Diana Dabby, Partha Saha, and Shinya Watanabe (students); Jihad Touma and Rodney Worthing (teaching assistants); Andy Christian, Jim Crutchfield, Kevin Cuomo, Frank DeSimone, Roger Eckhardt, Dana Hobson, and Thanos Siapas (for providing figures); Bob Devaney, Irv Epstein, Danny Kaplan, Willem Malkus, Charlie Marcus, Paul Matthews, Arthur Mattuck, Rennie Mirollo, Peter Renz, Dan Rockmore, Gil Strang, Howard Stone, John Tyson, Kurt Wiesenfeld, Art Winfree, and Mary Lou Zeeman (friends and colleagues who gave advice); and to my editor Jack Repcheck, Lynne Reed, Production Supervisor, and all the other helpful people at Addison-Wesley.
Finally, thanks to my family and Elisabeth for their love and encouragement.

Steven H. Strogatz
Cambridge, Massachusetts 1994

1 OVERVIEW

1.0 Chaos, Fractals, and Dynamics

There is a tremendous fascination today with chaos and fractals. James Gleick's book Chaos (Gleick 1987) was a bestseller for months—an amazing accomplishment for a book about mathematics and science. Picture books like The Beauty of Fractals by Peitgen and Richter (1986) can be found on coffee tables in living rooms everywhere. It seems that even nonmathematical people are captivated by the infinite patterns found in fractals (Figure 1.0.1). Perhaps most important of all, chaos and fractals represent hands-on mathematics that is alive and changing. You can turn on a home computer and create stunning mathematical images that no one has ever seen before.

[Figure 1.0.1]

The aesthetic appeal of chaos and fractals may explain why so many people have become intrigued by these ideas. But maybe you feel the urge to go deeper—to learn the mathematics behind the pictures, and to see how the ideas can be applied to problems in science and engineering. If so, this is a textbook for you.

The style of the book is informal (as you can see), with an emphasis on concrete examples and geometric thinking, rather than proofs and abstract arguments. It is also an extremely "applied" book—virtually every idea is illustrated by some application to science or engineering. In many cases, the applications are drawn from the recent research literature. Of course, one problem with such an applied approach is that not everyone is an expert in physics and biology and fluid mechanics... so the science as well as the mathematics will need to be explained from scratch. But that should be fun, and it can be instructive to see the connections among different fields.

Before we start, we should agree about something: chaos and fractals are part of an even grander subject known as dynamics. This is the subject that deals with change, with systems that evolve in time. Whether the system in question settles down to equilibrium, keeps repeating in cycles, or does something more complicated, it is dynamics that we use to analyze the behavior. You have probably been exposed to dynamical ideas in various places—in courses in differential equations, classical mechanics, chemical kinetics, population biology, and so on. Viewed from the perspective of dynamics, all of these subjects can be placed in a common framework, as we discuss at the end of this chapter.

Our study of dynamics begins in earnest in Chapter 2. But before digging in, we present two overviews of the subject, one historical and one logical. Our treatment is intuitive; careful definitions will come later. This chapter concludes with a "dynamical view of the world," a framework that will guide our studies for the rest of the book.

1.1 Capsule History of Dynamics

Although dynamics is an interdisciplinary subject today, it was originally a branch of physics. The subject began in the mid-1600s, when Newton invented differential equations, discovered his laws of motion and universal gravitation, and combined them to explain Kepler's laws of planetary motion. Specifically, Newton solved the two-body problem—the problem of calculating the motion of the earth around the sun, given the inverse-square law of gravitational attraction between them.
Subsequent generations of mathematicians and physicists tried to extend Newton's analytical methods to the three-body problem (e.g., sun, earth, and moon) but curiously this problem turned out to be much more difficult to solve. After decades of effort, it was eventually realized that the three-body problem was essentially impossible to solve, in the sense of obtaining explicit formulas for the motions of the three bodies. At this point the situation seemed hopeless.

The breakthrough came with the work of Poincaré in the late 1800s. He introduced a new point of view that emphasized qualitative rather than quantitative questions. For example, instead of asking for the exact positions of the planets at all times, he asked "Is the solar system stable forever, or will some planets eventually fly off to infinity?" Poincaré developed a powerful geometric approach to analyzing such questions. That approach has flowered into the modern subject of dynamics, with applications reaching far beyond celestial mechanics. Poincaré was also the first person to glimpse the possibility of chaos, in which a deterministic system exhibits aperiodic behavior that depends sensitively on the initial conditions, thereby rendering long-term prediction impossible.

But chaos remained in the background in the first half of the twentieth century; instead dynamics was largely concerned with nonlinear oscillators and their applications in physics and engineering. Nonlinear oscillators played a vital role in the development of such technologies as radio, radar, phase-locked loops, and lasers. On the theoretical side, nonlinear oscillators also stimulated the invention of new mathematical techniques—pioneers in this area include van der Pol, Andronov, Littlewood, Cartwright, Levinson, and Smale. Meanwhile, in a separate development, Poincaré's geometric methods were being extended to yield a much deeper understanding of classical mechanics, thanks to the work of Birkhoff and later Kolmogorov, Arnol'd, and Moser.

The invention of the high-speed computer in the 1950s was a watershed in the history of dynamics. The computer allowed one to experiment with equations in a way that was impossible before, and thereby to develop some intuition about nonlinear systems. Such experiments led to Lorenz's discovery in 1963 of chaotic motion on a strange attractor. He studied a simplified model of convection rolls in the atmosphere to gain insight into the notorious unpredictability of the weather. Lorenz found that the solutions to his equations never settled down to equilibrium or to a periodic state—instead they continued to oscillate in an irregular, aperiodic fashion. Moreover, if he started his simulations from two slightly different initial conditions, the resulting behaviors would soon become totally different. The implication was that the system was inherently unpredictable—tiny errors in measuring the current state of the atmosphere (or any other chaotic system) would be amplified rapidly, eventually leading to embarrassing forecasts. But Lorenz also showed that there was structure in the chaos—when plotted in three dimensions, the solutions to his equations fell onto a butterfly-shaped set of points (Figure 1.1.1). He argued that this set had to be "an infinite complex of surfaces"—today we would regard it as an example of a fractal.

[Figure 1.1.1]

Lorenz's work had little impact until the 1970s, the boom years for chaos.
Here are some of the main developments of that glorious decade. In 1971, Ruelle and Takens proposed a new theory for the onset of turbulence in fluids, based on abstract considerations about strange attractors. A few years later, May found examples of chaos in iterated mappings arising in population biology, and wrote an influential review article that stressed the pedagogical importance of studying simple nonlinear systems, to counterbalance the often misleading linear intuition fostered by traditional education. Next came the most surprising discovery of all, due to the physicist Feigenbaum. He discovered that there are certain universal laws governing the transition from regular to chaotic behavior; roughly speaking, completely different systems can go chaotic in the same way. His work established a link between chaos and phase transitions, and enticed a generation of physicists to the study of dynamics. Finally, experimentalists such as Gollub, Libchaber, Swinney, Linsay, Moon, and Westervelt tested the new ideas about chaos in experiments on fluids, chemical reactions, electronic circuits, mechanical oscillators, and semiconductors.

Although chaos stole the spotlight, there were two other major developments in dynamics in the 1970s. Mandelbrot codified and popularized fractals, produced magnificent computer graphics of them, and showed how they could be applied in a variety of subjects. And in the emerging area of mathematical biology, Winfree applied the geometric methods of dynamics to biological oscillations, especially circadian (roughly 24-hour) rhythms and heart rhythms. By the 1980s many people were working on dynamics, with contributions too numerous to list. Table 1.1.1 summarizes this history.

Table 1.1.1 Dynamics — A Capsule History

1666: Newton — invention of calculus, explanation of planetary motion
1700s: Flowering of calculus and classical mechanics
1800s: Analytical studies of planetary motion
1890s: Poincaré — geometric approach, nightmares of chaos
1920–1950: Nonlinear oscillators in physics and engineering; invention of radio, radar, laser
1920–1960: Birkhoff, Kolmogorov, Arnol'd, Moser — complex behavior in Hamiltonian mechanics
1963: Lorenz — strange attractor in simple model of convection
1970s: Ruelle & Takens — turbulence and chaos; May — chaos in logistic map; Feigenbaum — universality and renormalization, connection between chaos and phase transitions; experimental studies of chaos; Winfree — nonlinear oscillators in biology; Mandelbrot — fractals
1980s: Widespread interest in chaos, fractals, oscillators, and their applications

1.2 The Importance of Being Nonlinear

Now we turn from history to the logical structure of dynamics. First we need to introduce some terminology and make some distinctions.

There are two main types of dynamical systems: differential equations and iterated maps (also known as difference equations). Differential equations describe the evolution of systems in continuous time, whereas iterated maps arise in problems where time is discrete. Differential equations are used much more widely in science and engineering, and we shall therefore concentrate on them. Later in the book we will see that iterated maps can also be very useful, both for providing simple examples of chaos, and also as tools for analyzing periodic or chaotic solutions of differential equations.

Now confining our attention to differential equations, the main distinction is between ordinary and partial differential equations.
For instance, the equation for a damped harmonic oscillator

$$ m\frac{d^2x}{dt^2} + b\frac{dx}{dt} + kx = 0 \qquad (1) $$

is an ordinary differential equation, because it involves only ordinary derivatives $dx/dt$ and $d^2x/dt^2$. That is, there is only one independent variable, the time $t$. In contrast, the heat equation

$$ \frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} $$

is a partial differential equation—it has both time $t$ and space $x$ as independent variables. Our concern in this book is with purely temporal behavior, and so we deal with ordinary differential equations almost exclusively.

A very general framework for ordinary differential equations is provided by the system

$$ \begin{aligned} \dot{x}_1 &= f_1(x_1, \ldots, x_n) \\ &\;\;\vdots \\ \dot{x}_n &= f_n(x_1, \ldots, x_n). \end{aligned} \qquad (2) $$

Here the overdots denote differentiation with respect to $t$. Thus $\dot{x}_i \equiv dx_i/dt$. The variables $x_1, \ldots, x_n$ might represent concentrations of chemicals in a reactor, populations of different species in an ecosystem, or the positions and velocities of the planets in the solar system. The functions $f_1, \ldots, f_n$ are determined by the problem at hand.

For example, the damped oscillator (1) can be rewritten in the form of (2), thanks to the following trick: we introduce new variables $x_1 \equiv x$ and $x_2 \equiv \dot{x}$. Then $\dot{x}_1 = x_2$, from the definitions, and

$$ \dot{x}_2 = \ddot{x} = -\frac{b}{m}\dot{x} - \frac{k}{m}x = -\frac{b}{m}x_2 - \frac{k}{m}x_1 $$

from the definitions and the governing equation (1). Hence the equivalent system (2) is

$$ \dot{x}_1 = x_2, \qquad \dot{x}_2 = -\frac{k}{m}x_1 - \frac{b}{m}x_2. $$

This system is said to be linear, because all the $x_i$ on the right-hand side appear to the first power only. Otherwise the system would be nonlinear. Typical nonlinear terms are products, powers, and functions of the $x_i$, such as $x_1 x_2$, $(x_1)^3$, or $\cos x_2$.

For example, the swinging of a pendulum is governed by the equation

$$ \ddot{x} + \frac{g}{L}\sin x = 0, $$

where $x$ is the angle of the pendulum from vertical, $g$ is the acceleration due to gravity, and $L$ is the length of the pendulum. The equivalent system is nonlinear:

$$ \dot{x}_1 = x_2, \qquad \dot{x}_2 = -\frac{g}{L}\sin x_1. $$

Nonlinearity makes the pendulum equation very difficult to solve analytically. The usual way around this is to fudge, by invoking the small angle approximation $\sin x \approx x$ for $x \ll 1$. This converts the problem to a linear one, which can then be solved easily. But by restricting to small $x$, we're throwing out some of the physics, like motions where the pendulum whirls over the top. Is it really necessary to make such drastic approximations?

It turns out that the pendulum equation can be solved analytically, in terms of elliptic functions. But there ought to be an easier way. After all, the motion of the pendulum is simple: at low energy, it swings back and forth, and at high energy it whirls over the top. There should be some way of extracting this information from the system directly. This is the sort of problem we'll learn how to solve, using geometric methods.
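As an aside, the first-order form (2) is also exactly what numerical integrators expect. The following is a minimal sketch of that point (my addition, not the book's; it assumes Python with NumPy and SciPy, and the parameter values are illustrative): it integrates the pendulum system from a low-energy and a high-energy initial condition, reproducing the swinging and whirling regimes just described.

```python
# Sketch: integrate the pendulum system x1' = x2, x2' = -(g/L) sin(x1).
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.8, 1.0  # gravitational acceleration (m/s^2), pendulum length (m)

def pendulum(t, state):
    x1, x2 = state            # angle, angular velocity
    return [x2, -(g / L) * np.sin(x1)]

t_eval = np.linspace(0, 10, 1000)

# Low energy: released near the bottom -> back-and-forth swinging.
low = solve_ivp(pendulum, (0, 10), [0.5, 0.0], t_eval=t_eval)

# High energy: given a strong push -> whirls over the top repeatedly.
high = solve_ivp(pendulum, (0, 10), [0.0, 8.0], t_eval=t_eval)

print("low-energy angle stays bounded:", np.ptp(low.y[0]) < 2 * np.pi)
print("high-energy angle grows monotonically:", np.all(np.diff(high.y[0]) > 0))
```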
Here's the rough idea. Suppose we happen to know a solution to the pendulum system, for a particular initial condition. This solution would be a pair of functions $x_1(t)$ and $x_2(t)$, representing the position and velocity of the pendulum. If we construct an abstract space with coordinates $(x_1, x_2)$, then the solution $(x_1(t), x_2(t))$ corresponds to a point moving along a curve in this space (Figure 1.2.1).

[Figure 1.2.1: a curve in the $(x_1, x_2)$ plane traced out by the moving point $(x_1(t), x_2(t))$, starting from $(x_1(0), x_2(0))$.]

This curve is called a trajectory, and the space is called the phase space for the system. The phase space is completely filled with trajectories, since each point can serve as an initial condition.

Our goal is to run this construction in reverse: given the system, we want to draw the trajectories, and thereby extract information about the solutions. In many cases, geometric reasoning will allow us to draw the trajectories without actually solving the system!

Some terminology: the phase space for the general system (2) is the space with coordinates $x_1, \ldots, x_n$. Because this space is n-dimensional, we will refer to (2) as an n-dimensional system or an nth-order system. Thus n represents the dimension of the phase space.

Nonautonomous Systems

You might worry that (2) is not general enough because it doesn't include any explicit time dependence. How do we deal with time-dependent or nonautonomous equations like the forced harmonic oscillator $m\ddot{x} + b\dot{x} + kx = F\cos t$? In this case too there's an easy trick that allows us to rewrite the system in the form (2). We let $x_1 \equiv x$ and $x_2 \equiv \dot{x}$ as before, but now we introduce $x_3 \equiv t$. Then $\dot{x}_3 = 1$ and so the equivalent system is

$$ \begin{aligned} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= \frac{1}{m}\left(-kx_1 - bx_2 + F\cos x_3\right) \\ \dot{x}_3 &= 1 \end{aligned} \qquad (3) $$

which is an example of a three-dimensional system. Similarly, an nth-order time-dependent equation is a special case of an (n+1)-dimensional system. By this trick, we can always remove any time dependence by adding an extra dimension to the system.

The virtue of this change of variables is that it allows us to visualize a phase space with trajectories frozen in it. Otherwise, if we allowed explicit time dependence, the vectors and the trajectories would always be wiggling—this would ruin the geometric picture we're trying to build. A more physical motivation is that the state of the forced harmonic oscillator is truly three-dimensional: we need to know three numbers, $x$, $\dot{x}$, and $t$, to predict the future, given the present. So a three-dimensional phase space is natural.

The cost, however, is that some of our terminology is nontraditional. For example, the forced harmonic oscillator would traditionally be regarded as a second-order linear equation, whereas we will regard it as a third-order nonlinear system, since (3) is nonlinear, thanks to the cosine term. As we'll see later in the book, forced oscillators have many of the properties associated with nonlinear systems, and so there are genuine conceptual advantages to our choice of language.
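To make the trick concrete, here is a brief sketch (my illustration, not the book's; it assumes Python with NumPy and SciPy, with made-up parameter values) that integrates the forced oscillator in its autonomous three-dimensional form (3), treating time as the third state variable.

```python
# Sketch: the forced oscillator m x'' + b x' + k x = F cos(t),
# rewritten as the autonomous 3-D system (3) with x3 playing the role of t.
import numpy as np
from scipy.integrate import solve_ivp

m, b, k, F = 1.0, 0.2, 1.0, 0.5  # illustrative parameter values

def forced_oscillator(t, state):
    x1, x2, x3 = state           # position, velocity, "clock" variable
    return [x2,
            (-k * x1 - b * x2 + F * np.cos(x3)) / m,
            1.0]                 # dx3/dt = 1, so x3(t) = t + x3(0)

sol = solve_ivp(forced_oscillator, (0, 50), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0, 50, 2000))

# The third component simply tracks time, confirming the construction.
assert np.allclose(sol.y[2], sol.t, atol=1e-6)
```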
Why Are Nonlinear Problems So Hard?

As we've mentioned earlier, most nonlinear systems are impossible to solve analytically. Why are nonlinear systems so much harder to analyze than linear ones? The essential difference is that linear systems can be broken down into parts. Then each part can be solved separately and finally recombined to get the answer. This idea allows a fantastic simplification of complex problems, and underlies such methods as normal modes, Laplace transforms, superposition arguments, and Fourier analysis. In this sense, a linear system is precisely equal to the sum of its parts.

But many things in nature don't act this way. Whenever parts of a system interfere, or cooperate, or compete, there are nonlinear interactions going on. Most of everyday life is nonlinear, and the principle of superposition fails spectacularly. If you listen to your two favorite songs at the same time, you won't get double the pleasure! Within the realm of physics, nonlinearity is vital to the operation of a laser, the formation of turbulence in a fluid, and the superconductivity of Josephson junctions.

1.3 A Dynamical View of the World

Now that we have established the ideas of nonlinearity and phase space, we can present a framework for dynamics and its applications. Our goal is to show the logical structure of the entire subject. The framework presented in Figure 1.3.1 will guide our studies throughout this book.

The framework has two axes. One axis tells us the number of variables needed to characterize the state of the system. Equivalently, this number is the dimension of the phase space. The other axis tells us whether the system is linear or nonlinear.

For example, consider the exponential growth of a population of organisms. This system is described by the first-order differential equation $\dot{x} = rx$, where $x$ is the population at time $t$ and $r$ is the growth rate. We place this system in the column labeled "$n = 1$" because one piece of information—the current value of the population $x$—is sufficient to predict the population at any later time. The system is also classified as linear, because the differential equation $\dot{x} = rx$ is linear in $x$.

As a second example, consider the swinging of a pendulum, governed by

$$ \ddot{x} + \frac{g}{L}\sin x = 0. $$

In contrast to the previous example, the state of this system is given by two variables: its current angle $x$ and angular velocity $\dot{x}$. (Think of it this way: we need the initial values of both $x$ and $\dot{x}$ to determine the solution uniquely. For example, if we knew only $x$, we wouldn't know which way the pendulum was swinging.) Because two variables are needed to specify the state, the pendulum belongs in the $n = 2$ column of Figure 1.3.1. Moreover, the system is nonlinear, as discussed in the previous section. Hence the pendulum is in the lower, nonlinear half of the $n = 2$ column.

One can continue to classify systems in this way, and the result will be something like the framework shown here.
Admittedly, some aspects of the picture are debatable. You might think that some topics should be added, or placed differently, or even that more axes are needed—the point is to think about classifying systems on the basis of their dynamics.

[Figure 1.3.1: a chart classifying dynamical systems along two axes: the number of variables (n = 1, n = 2, n ≥ 3, n >> 1, continuum) and linearity versus nonlinearity. The entries, as far as they can be recovered from the original figure:
Linear, n = 1 (growth, decay, or equilibrium): exponential growth, linear RC circuit, radioactive decay.
Linear, n = 2 (oscillations): linear oscillator, mass and spring, RLC circuit, 2-body problem (Kepler, Newton); civil engineering, structures; electrical engineering.
Linear, n >> 1 (collective phenomena): coupled harmonic oscillators, solid-state physics, molecular dynamics, equilibrium statistical mechanics.
Linear, continuum (waves and patterns): elasticity, wave equations, electromagnetism (Maxwell), quantum mechanics (Schrödinger, Heisenberg, Dirac), heat and diffusion, acoustics, viscous fluids.
Nonlinear, n = 1: fixed points, bifurcations, overdamped systems, relaxational dynamics, logistic equation for single species.
Nonlinear, n = 2: pendulum, anharmonic oscillators, limit cycles, biological oscillators (neurons, heart cells), predator-prey cycles, nonlinear electronics (van der Pol, Josephson).
Nonlinear, n ≥ 3 (chaos): strange attractors (Lorenz), 3-body problem (Poincaré), chemical kinetics, iterated maps (Feigenbaum), fractals (Mandelbrot), forced nonlinear oscillators (Levinson, Smale), practical uses of chaos, quantum chaos?
Nonlinear, n >> 1 (collective phenomena): coupled nonlinear oscillators, lasers, nonlinear optics, nonequilibrium statistical mechanics, nonlinear solid-state physics (semiconductors), Josephson arrays, heart cell synchronization, neural networks, immune system, ecosystems, economics.
Nonlinear, continuum (spatio-temporal complexity; "the frontier" spans this large nonlinear region): nonlinear waves (shocks, solitons), plasmas, earthquakes, general relativity (Einstein), quantum field theory, reaction-diffusion, biological and chemical waves, fibrillation, epilepsy, turbulent fluids (Navier-Stokes), life.]

There are some striking patterns in Figure 1.3.1. All the simplest systems occur in the upper left-hand corner. These are the small linear systems that we learn about in the first few years of college. Roughly speaking, these linear systems exhibit growth, decay, or equilibrium when $n = 1$, or oscillations when $n = 2$. The italicized phrases in Figure 1.3.1 indicate that these broad classes of phenomena first arise in this part of the diagram. For example, an RC circuit has $n = 1$ and cannot oscillate, whereas an RLC circuit has $n = 2$ and can oscillate.

The next most familiar part of the picture is the upper right-hand corner. This is the domain of classical applied mathematics and mathematical physics where the linear partial differential equations live. Here we find Maxwell's equations of electricity and magnetism, the heat equation, Schrödinger's wave equation in quantum mechanics, and so on. These partial differential equations involve an infinite "continuum" of variables because each point in space contributes additional degrees of freedom. Even though these systems are large, they are tractable, thanks to such linear techniques as Fourier analysis and transform methods.

In contrast, the lower half of Figure 1.3.1—the nonlinear half—is often ignored or deferred to later courses. But no more! In this book we start in the lower left corner and systematically head to the right. As we increase the phase space dimension from $n = 1$ to $n = 3$, we encounter new phenomena at every step, from fixed points and bifurcations when $n = 1$, to nonlinear oscillations when $n = 2$, and finally chaos and fractals when $n = 3$.
In all cases, a geometric approach proves to be very powerful, and gives us most of the information we want, even though we usually can't solve the equations in the traditional sense of finding a formula for the answer. Our journey will also take us to some of the most exciting parts of modern science, such as mathematical biology and condensed-matter physics.

You'll notice that the framework also contains a region forbiddingly marked "The frontier." It's like in those old maps of the world, where the mapmakers wrote, "Here be dragons" on the unexplored parts of the globe. These topics are not completely unexplored, of course, but it is fair to say that they lie at the limits of current understanding. The problems are very hard, because they are both large and nonlinear. The resulting behavior is typically complicated in both space and time, as in the motion of a turbulent fluid or the patterns of electrical activity in a fibrillating heart. Toward the end of the book we will touch on some of these problems—they will certainly pose challenges for years to come.

Part I: ONE-DIMENSIONAL FLOWS

2 FLOWS ON THE LINE

2.0 Introduction

In Chapter 1, we introduced the general system

$$ \begin{aligned} \dot{x}_1 &= f_1(x_1, \ldots, x_n) \\ &\;\;\vdots \\ \dot{x}_n &= f_n(x_1, \ldots, x_n) \end{aligned} $$

and mentioned that its solutions could be visualized as trajectories flowing through an n-dimensional phase space with coordinates $(x_1, \ldots, x_n)$. At the moment, this idea probably strikes you as a mind-bending abstraction. So let's start slowly, beginning here on earth with the simple case $n = 1$. Then we get a single equation of the form

$$ \dot{x} = f(x). $$

Here $x(t)$ is a real-valued function of time $t$, and $f(x)$ is a smooth real-valued function of $x$. We'll call such equations one-dimensional or first-order systems.

Before there's any chance of confusion, let's dispense with two fussy points of terminology:

1. The word system is being used here in the sense of a dynamical system, not in the classical sense of a collection of two or more equations. Thus a single equation can be a "system."

2. We do not allow $f$ to depend explicitly on time. Time-dependent or "nonautonomous" equations of the form $\dot{x} = f(x, t)$ are more complicated, because one needs two pieces of information, $x$ and $t$, to predict the future state of the system. Thus $\dot{x} = f(x, t)$ should really be regarded as a two-dimensional or second-order system, and will therefore be discussed later in the book.

2.1 A Geometric Way of Thinking

Pictures are often more helpful than formulas for analyzing nonlinear systems. Here we illustrate this point by a simple example. Along the way we will introduce one of the most basic techniques of dynamics: interpreting a differential equation as a vector field.

Consider the following nonlinear differential equation:

$$ \dot{x} = \sin x. \qquad (1) $$

To emphasize our point about formulas versus pictures, we have chosen one of the few nonlinear equations that can be solved in closed form. We separate the variables and then integrate:

$$ dt = \frac{dx}{\sin x}, $$

which implies

$$ t = \int \csc x \, dx = -\ln\left|\csc x + \cot x\right| + C. $$

To evaluate the constant $C$, suppose that $x = x_0$ at $t = 0$. Then $C = \ln|\csc x_0 + \cot x_0|$. Hence the solution is

$$ t = \ln\left|\frac{\csc x_0 + \cot x_0}{\csc x + \cot x}\right|. \qquad (2) $$

This result is exact, but a headache to interpret. For example, can you answer the following questions?

1. Suppose $x_0 = \pi/4$; describe the qualitative features of the solution $x(t)$ for all $t > 0$. In particular, what happens as $t \to \infty$?

2. For an arbitrary initial condition $x_0$, what is the behavior of $x(t)$ as $t \to \infty$?
Think about these questions for a while, to see that formula (2) is not transparent.

In contrast, a graphical analysis of (1) is clear and simple, as shown in Figure 2.1.1. We think of $t$ as time, $x$ as the position of an imaginary particle moving along the real line, and $\dot{x}$ as the velocity of that particle. Then the differential equation $\dot{x} = \sin x$ represents a vector field on the line: it dictates the velocity vector $\dot{x}$ at each $x$. To sketch the vector field, it is convenient to plot $\dot{x}$ versus $x$, and then draw arrows on the x-axis to indicate the corresponding velocity vector at each $x$. The arrows point to the right when $\dot{x} > 0$ and to the left when $\dot{x} < 0$.

[Figure 2.1.1: graph of $\dot{x} = \sin x$ versus $x$, with flow arrows along the x-axis; fixed points at multiples of $\pi$.]

Here's a more physical way to think about the vector field: imagine that fluid is flowing steadily along the x-axis with a velocity that varies from place to place, according to the rule $\dot{x} = \sin x$. As shown in Figure 2.1.1, the flow is to the right when $\dot{x} > 0$ and to the left when $\dot{x} < 0$. At points where $\dot{x} = 0$, there is no flow; such points are therefore called fixed points. You can see that there are two kinds of fixed points in Figure 2.1.1: solid black dots represent stable fixed points (often called attractors or sinks, because the flow is toward them) and open circles represent unstable fixed points (also known as repellers or sources).

Armed with this picture, we can now easily understand the solutions to the differential equation $\dot{x} = \sin x$. We just start our imaginary particle at $x_0$ and watch how it is carried along by the flow. This approach allows us to answer the questions above as follows:

1. Figure 2.1.1 shows that a particle starting at $x_0 = \pi/4$ moves to the right faster and faster until it crosses $x = \pi/2$ (where $\sin x$ reaches its maximum). Then the particle starts slowing down and eventually approaches the stable fixed point $x = \pi$ from the left. Thus, the qualitative form of the solution is as shown in Figure 2.1.2. Note that the curve is concave up at first, and then concave down; this corresponds to the initial acceleration for $x < \pi/2$, followed by the deceleration toward $x = \pi$.

[Figure 2.1.2: the solution $x(t)$ starting from $x_0 = \pi/4$, rising and leveling off at $x = \pi$.]

2. The same reasoning applies to any initial condition $x_0$. Figure 2.1.1 shows that if $\dot{x} > 0$ initially, the particle heads to the right and asymptotically approaches the nearest stable fixed point. Similarly, if $\dot{x} < 0$ initially, the particle approaches the nearest stable fixed point to its left. If $\dot{x} = 0$, then $x$ remains constant. The qualitative form of the solution for any initial condition is sketched in Figure 2.1.3.

[Figure 2.1.3: solution curves $x(t)$ for many initial conditions, converging to the stable fixed points $\ldots, -\pi, \pi, \ldots$]

In all honesty, we should admit that a picture can't tell us certain quantitative things: for instance, we don't know the time at which the speed $|\dot{x}|$ is greatest. But in many cases qualitative information is what we care about, and then pictures are fine.
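As a quick cross-check on this qualitative picture, here is a short sketch (my own addition, assuming Python with NumPy and SciPy) that integrates $\dot{x} = \sin x$ from $x_0 = \pi/4$ and confirms both the approach to $x = \pi$ and the exact relation (2).

```python
# Sketch: numerically integrate x' = sin(x) and check it against
# the exact implicit solution t = ln| (csc x0 + cot x0) / (csc x + cot x) |.
import numpy as np
from scipy.integrate import solve_ivp

x0 = np.pi / 4
sol = solve_ivp(lambda t, x: np.sin(x), (0, 20), [x0],
                t_eval=np.linspace(0, 20, 500), rtol=1e-10, atol=1e-12)

# Qualitative claim: x(t) approaches the stable fixed point x = pi.
print("x(20) =", sol.y[0, -1], " (pi =", np.pi, ")")

def exact_t(x):
    """Time to reach x from x0, from formula (2)."""
    csc = lambda u: 1 / np.sin(u)
    cot = lambda u: np.cos(u) / np.sin(u)
    return np.log(abs((csc(x0) + cot(x0)) / (csc(x) + cot(x))))

# Check formula (2) at an intermediate point of the numerical solution.
i = 100
print("t =", sol.t[i], " vs formula:", exact_t(sol.y[0, i]))
```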
2.2 Fixed Points and Stability

The ideas developed in the last section can be extended to any one-dimensional system $\dot{x} = f(x)$. We just need to draw the graph of $f(x)$ and then use it to sketch the vector field on the real line (the x-axis in Figure 2.2.1).

[Figure 2.2.1: graph of a generic $f(x)$ versus $x$, with flow arrows on the x-axis.]

As before, we imagine that a fluid is flowing along the real line with a local velocity $f(x)$. This imaginary fluid is called the phase fluid, and the real line is the phase space. The flow is to the right where $f(x) > 0$ and to the left where $f(x) < 0$. To find the solution to $\dot{x} = f(x)$ starting from an arbitrary initial condition $x_0$, we place an imaginary particle (known as a phase point) at $x_0$ and watch how it is carried along by the flow. As time goes on, the phase point moves along the x-axis according to some function $x(t)$. This function is called the trajectory based at $x_0$, and it represents the solution of the differential equation starting from the initial condition $x_0$. A picture like Figure 2.2.1, which shows all the qualitatively different trajectories of the system, is called a phase portrait.

The appearance of the phase portrait is controlled by the fixed points $x^*$, defined by $f(x^*) = 0$; they correspond to stagnation points of the flow. In Figure 2.2.1, the solid black dot is a stable fixed point (the local flow is toward it) and the open dot is an unstable fixed point (the flow is away from it).

In terms of the original differential equation, fixed points represent equilibrium solutions (sometimes called steady, constant, or rest solutions, since if $x = x^*$ initially, then $x(t) = x^*$ for all time). An equilibrium is defined to be stable if all sufficiently small disturbances away from it damp out in time. Thus stable equilibria are represented geometrically by stable fixed points. Conversely, unstable equilibria, in which disturbances grow in time, are represented by unstable fixed points.

EXAMPLE 2.2.1: Find all fixed points for $\dot{x} = x^2 - 1$, and classify their stability.

Solution: Here $f(x) = x^2 - 1$. To find the fixed points, we set $f(x^*) = 0$ and solve for $x^*$. Thus $x^* = \pm 1$. To determine stability, we plot $x^2 - 1$ and then sketch the vector field (Figure 2.2.2). The flow is to the right where $x^2 - 1 > 0$ and to the left where $x^2 - 1 < 0$. Thus $x^* = -1$ is stable, and $x^* = 1$ is unstable.

[Figure 2.2.2: graph of $f(x) = x^2 - 1$ with flow arrows.]

Note that the definition of stable equilibrium is based on small disturbances; certain large disturbances may fail to decay. In Example 2.2.1, all small disturbances to $x^* = -1$ will decay, but a large disturbance that sends $x$ to the right of $x = 1$ will not decay—in fact, the phase point will be repelled out to $+\infty$. To emphasize this aspect of stability, we sometimes say that $x^* = -1$ is locally stable, but not globally stable.
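A sketch of how one might automate this classification (my addition, not the book's; it assumes Python with NumPy and SciPy): bracket the roots of $f$ on a grid, refine each with a root-finder, and read off stability from the sign of $f$ just to either side of each root, exactly as the vector-field picture does.

```python
# Sketch: classify fixed points of x' = f(x) by the sign of f on either side,
# mirroring the graphical method. Illustrated on f(x) = x^2 - 1.
import numpy as np
from scipy.optimize import brentq

def f(x):
    return x**2 - 1

# Bracket sign changes of f on a grid, then refine each root with brentq.
# (600 points chosen so no grid point lands exactly on a root.)
xs = np.linspace(-3, 3, 600)
fixed_points = [brentq(f, a, b)
                for a, b in zip(xs[:-1], xs[1:]) if f(a) * f(b) < 0]

for xstar in fixed_points:
    left, right = f(xstar - 1e-4), f(xstar + 1e-4)
    if left > 0 and right < 0:
        kind = "stable"      # flow converges from both sides
    elif left < 0 and right > 0:
        kind = "unstable"    # flow diverges on both sides
    else:
        kind = "half-stable or degenerate"
    print(f"x* = {xstar:+.4f}: {kind}")
# Expected output: x* = -1 stable, x* = +1 unstable.
```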
EXAMPLE 2.2.2: Consider the electrical circuit shown in Figure 2.2.3. A resistor $R$ and a capacitor $C$ are in series with a battery of constant dc voltage $V_0$. Suppose that the switch is closed at $t = 0$, and that there is no charge on the capacitor initially. Let $Q(t)$ denote the charge on the capacitor at time $t \geq 0$. Sketch the graph of $Q(t)$.

[Figure 2.2.3: an RC circuit with battery $V_0$, resistor $R$, capacitor $C$, and current $I$.]

Solution: This type of circuit problem is probably familiar to you. It is governed by linear equations and can be solved analytically, but we prefer to illustrate the geometric approach.

First we write the circuit equations. As we go around the circuit, the total voltage drop must equal zero; hence $-V_0 + RI + Q/C = 0$, where $I$ is the current flowing through the resistor. This current causes charge to accumulate on the capacitor at a rate $\dot{Q} = I$. Hence

$$ -V_0 + R\dot{Q} + \frac{Q}{C} = 0 \quad\text{or}\quad \dot{Q} = f(Q) = \frac{V_0}{R} - \frac{Q}{RC}. $$

The graph of $f(Q)$ is a straight line with a negative slope (Figure 2.2.4). The corresponding vector field has a fixed point where $f(Q) = 0$, which occurs at $Q^* = CV_0$. The flow is to the right where $f(Q) > 0$ and to the left where $f(Q) < 0$. Thus the flow is always toward $Q^*$—it is a stable fixed point. In fact, it is globally stable, in the sense that it is approached from all initial conditions.

[Figure 2.2.4: graph of $f(Q)$, a line decreasing through zero at $Q^* = CV_0$, with flow arrows.]

To sketch $Q(t)$, we start a phase point at the origin of Figure 2.2.4 and imagine how it would move. The flow carries the phase point monotonically toward $Q^*$. Its speed $\dot{Q}$ decreases linearly as it approaches the fixed point; therefore $Q(t)$ is increasing and concave down, as shown in Figure 2.2.5.

[Figure 2.2.5: $Q(t)$ rising from 0 and leveling off at $CV_0$.]

EXAMPLE 2.2.3: Sketch the phase portrait corresponding to $\dot{x} = x - \cos x$, and determine the stability of all the fixed points.

Solution: One approach would be to plot the function $f(x) = x - \cos x$ and then sketch the associated vector field. This method is valid, but it requires you to figure out what the graph of $x - \cos x$ looks like. There's an easier solution, which exploits the fact that we know how to graph $y = x$ and $y = \cos x$ separately. We plot both graphs on the same axes and then observe that they intersect in exactly one point (Figure 2.2.6).

[Figure 2.2.6: the line $y = x$ and the curve $y = \cos x$, crossing at a single point $x^*$.]

This intersection corresponds to a fixed point, since $x^* = \cos x^*$ and therefore $f(x^*) = 0$. Moreover, when the line lies above the cosine curve, we have $x > \cos x$ and so $\dot{x} > 0$: the flow is to the right. Similarly, the flow is to the left where the line is below the cosine curve. Hence $x^*$ is the only fixed point, and it is unstable. Note that we can classify the stability of $x^*$, even though we don't have a formula for $x^*$ itself!

2.3 Population Growth

The simplest model for the growth of a population of organisms is $\dot{N} = rN$, where $N(t)$ is the population at time $t$, and $r > 0$ is the growth rate. This model predicts exponential growth: $N(t) = N_0 e^{rt}$, where $N_0$ is the population at $t = 0$.

Of course such exponential growth cannot go on forever. To model the effects of overcrowding and limited resources, population biologists and demographers often assume that the per capita growth rate $\dot{N}/N$ decreases when $N$ becomes sufficiently large, as shown in Figure 2.3.1. For small $N$, the growth rate equals $r$, just as before. However, for populations larger than a certain carrying capacity $K$, the growth rate actually becomes negative; the death rate is higher than the birth rate.

[Figure 2.3.1: per capita growth rate $\dot{N}/N$ equal to $r$ for small $N$, decreasing and becoming negative beyond $N = K$.]

A mathematically convenient way to incorporate these ideas is to assume that the per capita growth rate $\dot{N}/N$ decreases linearly with $N$ (Figure 2.3.2). This leads to the logistic equation

$$ \dot{N} = rN\left(1 - \frac{N}{K}\right), $$

first suggested to describe the growth of human populations by Verhulst in 1838. This equation can be solved analytically (Exercise 2.3.1) but once again we prefer a graphical approach. We plot $\dot{N}$ versus $N$ to see what the vector field looks like. Note that we plot only $N \geq 0$, since it makes no sense to think about a negative population (Figure 2.3.3).

[Figure 2.3.2: per capita growth rate decreasing linearly with $N$, from $r$ at $N = 0$ to zero at $N = K$.]

Fixed points occur at $N^* = 0$ and $N^* = K$, as found by setting $\dot{N} = 0$ and solving for $N$. By looking at the flow in Figure 2.3.3, we see that $N^* = 0$ is an unstable fixed point and $N^* = K$ is a stable fixed point. In biological terms, $N = 0$ is an unstable equilibrium: a small population will grow exponentially fast and run away from $N = 0$. On the other hand, if $N$ is disturbed slightly from $K$, the disturbance will decay monotonically and $N(t) \to K$ as $t \to \infty$. In fact, Figure 2.3.3 shows that if we start a phase point at any $N_0 > 0$, it will always flow toward $N = K$. Hence the population always approaches the carrying capacity. The only exception is if $N_0 = 0$; then there's nobody around to start reproducing, and so $N = 0$ for all time. (The model does not allow for spontaneous generation!)

[Figure 2.3.3: graph of $\dot{N} = rN(1 - N/K)$ versus $N$, a parabola with zeros at $N = 0$ and $N = K$ and maximum at $N = K/2$.]
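The qualitative predictions are easy to confirm numerically. Here is a minimal sketch (my illustration, not the book's; it assumes Python with NumPy and SciPy, with made-up values of r and K) integrating the logistic equation from several initial conditions, all of which approach the carrying capacity K.

```python
# Sketch: integrate the logistic equation N' = r N (1 - N/K) from
# several initial conditions; every N0 > 0 should approach K.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 100.0  # illustrative growth rate and carrying capacity

def logistic(t, N):
    return r * N[0] * (1 - N[0] / K)

for N0 in [1.0, 40.0, 60.0, 150.0]:
    sol = solve_ivp(logistic, (0, 20), [N0], t_eval=[20.0])
    print(f"N0 = {N0:6.1f} -> N(20) = {sol.y[0, -1]:.2f}")
# All four runs end near K = 100; starting below K/2 gives the S-shaped
# (sigmoid) growth described in the text.
```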
Figure 2.3.3 also allows us to deduce the qualitative shape of the solutions. For example, if $N_0 < K/2$, the phase point moves faster and faster until it crosses $N = K/2$, where the parabola in Figure 2.3.3 reaches its maximum. Then the phase point slows down and eventually creeps toward $N = K$. In biological terms, this means that the population initially grows in an accelerating fashion, and the graph of $N(t)$ is concave up. But after $N = K/2$, the derivative $\dot{N}$ begins to decrease, and so $N(t)$ is concave down as it asymptotes to the horizontal line $N = K$ (Figure 2.3.4). Thus the graph of $N(t)$ is S-shaped or sigmoid for $N_0 < K/2$.

[Figure 2.3.4: sigmoid growth curves $N(t)$ leveling off at $N = K$.]

Something qualitatively different occurs if the initial condition $N_0$ lies between $K/2$ and $K$; now the solutions are decelerating from the start. Hence these solutions are concave down for all $t$. If the population initially exceeds the carrying capacity ($N_0 > K$), then $N(t)$ decreases toward $N = K$ and is concave up. Finally, if $N_0 = 0$ or $N_0 = K$, then the population stays constant.

Critique of the Logistic Model

Before leaving this example, we should make a few comments about the biological validity of the logistic equation. The algebraic form of the model is not to be taken literally. The model should really be regarded as a metaphor for populations that have a tendency to grow from zero population up to some carrying capacity $K$.

Originally a much stricter interpretation was proposed, and the model was argued to be a universal law of growth (Pearl 1927). The logistic equation was tested in laboratory experiments in which colonies of bacteria, yeast, or other simple organisms were grown in conditions of constant climate, food supply, and absence of predators. For a good review of this literature, see Krebs (1972, pp. 190–200). These experiments often yielded sigmoid growth curves, in some cases with an impressive match to the logistic predictions. On the other hand, the agreement was much worse for fruit flies, flour beetles, and other organisms that have complex life cycles involving eggs, larvae, pupae, and adults. In these organisms, the predicted asymptotic approach to a steady carrying capacity was never observed—instead the populations exhibited large, persistent fluctuations after an initial period of logistic growth. See Krebs (1972) for a discussion of the possible causes of these fluctuations, including age structure and time-delayed effects of overcrowding in the population.

For further reading on population biology, see Pielou (1969) or May (1981). Edelstein-Keshet (1988) and Murray (2002, 2003) are excellent textbooks on mathematical biology in general.

2.4 Linear Stability Analysis

So far we have relied on graphical methods to determine the stability of fixed points. Frequently one would like to have a more quantitative measure of stability, such as the rate of decay to a stable fixed point. This sort of information may be obtained by linearizing about a fixed point, as we now explain.

Let $x^*$ be a fixed point, and let $\eta(t) = x(t) - x^*$ be a small perturbation away from $x^*$. To see whether the perturbation grows or decays, we derive a differential equation for $\eta$. Differentiation yields

$$ \dot{\eta} = \frac{d}{dt}(x - x^*) = \dot{x}, $$

since $x^*$ is constant. Thus $\dot{\eta} = \dot{x} = f(x) = f(x^* + \eta)$. Now using Taylor's expansion we obtain

$$ f(x^* + \eta) = f(x^*) + \eta f'(x^*) + O(\eta^2), $$

where $O(\eta^2)$ denotes quadratically small terms in $\eta$.
Finally, note that $f(x^*) = 0$ since $x^*$ is a fixed point. Hence

$$ \dot{\eta} = \eta f'(x^*) + O(\eta^2). $$

Now if $f'(x^*) \neq 0$, the $O(\eta^2)$ terms are negligible and we may write the approximation

$$ \dot{\eta} \approx \eta f'(x^*). $$

This is a linear equation in $\eta$, and is called the linearization about $x^*$. It shows that the perturbation $\eta(t)$ grows exponentially if $f'(x^*) > 0$ and decays if $f'(x^*) < 0$. If $f'(x^*) = 0$, the $O(\eta^2)$ terms are not negligible and a nonlinear analysis is needed to determine stability, as discussed in Example 2.4.3 below.

The upshot is that the slope $f'(x^*)$ at the fixed point determines its stability. If you look back at the earlier examples, you'll see that the slope was always negative at a stable fixed point. The importance of the sign of $f'(x^*)$ was clear from our graphical approach; the new feature is that now we have a measure of how stable a fixed point is—that's determined by the magnitude of $f'(x^*)$. This magnitude plays the role of an exponential growth or decay rate. Its reciprocal $1/|f'(x^*)|$ is a characteristic time scale; it determines the time required for $x(t)$ to vary significantly in the neighborhood of $x^*$.

EXAMPLE 2.4.1: Using linear stability analysis, determine the stability of the fixed points for $\dot{x} = \sin x$.

Solution: The fixed points occur where $f(x) = \sin x = 0$. Thus $x^* = k\pi$, where $k$ is an integer. Then

$$ f'(x^*) = \cos k\pi = \begin{cases} 1, & k \text{ even} \\ -1, & k \text{ odd.} \end{cases} $$

Hence $x^*$ is unstable if $k$ is even and stable if $k$ is odd. This agrees with the results shown in Figure 2.1.1.

EXAMPLE 2.4.2: Classify the fixed points of the logistic equation, using linear stability analysis, and find the characteristic time scale in each case.

Solution: Here $f(N) = rN\left(1 - \frac{N}{K}\right)$, with fixed points $N^* = 0$ and $N^* = K$. Then $f'(N) = r - \frac{2rN}{K}$, and so $f'(0) = r$ and $f'(K) = -r$. Hence $N^* = 0$ is unstable and $N^* = K$ is stable, as found earlier by graphical arguments. In either case, the characteristic time scale is $1/|f'(N^*)| = 1/r$.

EXAMPLE 2.4.3: What can be said about the stability of a fixed point when $f'(x^*) = 0$?

Solution: Nothing can be said in general. The stability is best determined on a case-by-case basis, using graphical methods. Consider the following examples:

(a) $\dot{x} = -x^3$  (b) $\dot{x} = x^3$  (c) $\dot{x} = x^2$  (d) $\dot{x} = 0$

Each of these systems has a fixed point $x^* = 0$ with $f'(x^*) = 0$. However the stability is different in each case. Figure 2.4.1 shows that (a) is stable and (b) is unstable. Case (c) is a hybrid case we'll call half-stable, since the fixed point is attracting from the left and repelling from the right. We therefore indicate this type of fixed point by a half-filled circle. Case (d) is a whole line of fixed points; perturbations neither grow nor decay.

These examples may seem artificial, but we will see that they arise naturally in the context of bifurcations—more about that later.

[Figure 2.4.1: graphs of $\dot{x}$ versus $x$ for cases (a)–(d).]
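As a computational aside (my sketch, not the book's; it assumes Python with NumPy), linear stability analysis is easy to mechanize: evaluate $f'(x^*)$ at each fixed point, classify by its sign, and report the characteristic time scale $1/|f'(x^*)|$.

```python
# Sketch: linear stability analysis via the slope f'(x*) at each fixed point,
# using a centered finite difference for the derivative.
import numpy as np

def classify(f, xstar, h=1e-6):
    """Classify a fixed point of x' = f(x) from the sign of f'(x*)."""
    slope = (f(xstar + h) - f(xstar - h)) / (2 * h)
    if slope < 0:
        return f"stable, characteristic time 1/|f'| = {1/abs(slope):.3g}"
    if slope > 0:
        return f"unstable, characteristic time 1/|f'| = {1/abs(slope):.3g}"
    return "f'(x*) = 0: linearization is inconclusive (see Example 2.4.3)"

# Example 2.4.1: x' = sin(x), fixed points x* = k*pi.
for k in range(3):
    print(f"x* = {k}*pi:", classify(np.sin, k * np.pi))

# Example 2.4.2: logistic equation, fixed points N* = 0 and N* = K.
r, K = 1.0, 100.0
logistic = lambda N: r * N * (1 - N / K)
print("N* = 0:", classify(logistic, 0.0))
print("N* = K:", classify(logistic, K))
```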
2.5 Existence and Uniqueness

Our treatment of vector fields has been very informal. In particular, we have taken a cavalier attitude toward questions of existence and uniqueness of solutions to the system $\dot{x} = f(x)$. That's in keeping with the "applied" spirit of this book. Nevertheless, we should be aware of what can go wrong in pathological cases.

EXAMPLE 2.5.1:
Show that the solution to $\dot{x} = x^{1/3}$ starting from $x_0 = 0$ is not unique.
Solution: The point $x = 0$ is a fixed point, so one obvious solution is $x(t) = 0$ for all $t$. The surprising fact is that there is another solution. To find it we separate variables and integrate:

$$\int x^{-1/3}\,dx = \int dt$$

so $\frac{3}{2}x^{2/3} = t + C$. Imposing the initial condition $x(0) = 0$ yields $C = 0$. Hence $x(t) = \left(\frac{2}{3}t\right)^{3/2}$ is also a solution!

When uniqueness fails, our geometric approach collapses because the phase point doesn't know how to move; if a phase point were started at the origin, would it stay there or would it move according to $x(t) = \left(\frac{2}{3}t\right)^{3/2}$? (Or as my friends in elementary school used to say when discussing the problem of the irresistible force and the immovable object, perhaps the phase point would explode!)

Actually, the situation in Example 2.5.1 is even worse than we've let on—there are infinitely many solutions starting from the same initial condition (Exercise 2.5.4). What's the source of the non-uniqueness? A hint comes from looking at the vector field (Figure 2.5.1). We see that the fixed point $x^* = 0$ is very unstable—the slope $f'(0)$ is infinite.

[Figure 2.5.1: graph of $\dot{x} = x^{1/3}$ versus $x$, with infinite slope at the origin.]

Chastened by this example, we state a theorem that provides sufficient conditions for existence and uniqueness of solutions to $\dot{x} = f(x)$.

Existence and Uniqueness Theorem: Consider the initial value problem $\dot{x} = f(x)$, $x(0) = x_0$. Suppose that $f(x)$ and $f'(x)$ are continuous on an open interval $R$ of the $x$-axis, and suppose that $x_0$ is a point in $R$. Then the initial value problem has a solution $x(t)$ on some time interval $(-\tau, \tau)$ about $t = 0$, and the solution is unique.

For proofs of the existence and uniqueness theorem, see Borrelli and Coleman (1987), Lin and Segel (1988), or virtually any text on ordinary differential equations.

This theorem says that if $f(x)$ is smooth enough, then solutions exist and are unique. Even so, there's no guarantee that solutions exist forever, as shown by the next example.

EXAMPLE 2.5.2:
Discuss the existence and uniqueness of solutions to the initial value problem $\dot{x} = 1 + x^2$, $x(0) = x_0$. Do solutions exist for all time?
Solution: Here $f(x) = 1 + x^2$. This function is continuous and has a continuous derivative for all $x$. Hence the theorem tells us that solutions exist and are unique for any initial condition $x_0$. But the theorem does not say that the solutions exist for all time; they are only guaranteed to exist in a (possibly very short) time interval around $t = 0$.

For example, consider the case where $x(0) = 0$. Then the problem can be solved analytically by separation of variables:

$$\int \frac{dx}{1 + x^2} = \int dt,$$

which yields $\tan^{-1} x = t + C$. The initial condition $x(0) = 0$ implies $C = 0$. Hence $x(t) = \tan t$ is the solution. But notice that this solution exists only for $-\pi/2 < t < \pi/2$, because $x(t) \to \pm\infty$ as $t \to \pm\pi/2$. Outside of that time interval, there is no solution to the initial value problem for $x_0 = 0$.

The amazing thing about Example 2.5.2 is that the system has solutions that reach infinity in finite time. This phenomenon is called blow-up. As the name suggests, it is of physical relevance in models of combustion and other runaway processes.

There are various ways to extend the existence and uniqueness theorem. One can allow $f$ to depend on time $t$, or on several variables $x_1, \ldots, x_n$. One of the most useful generalizations will be discussed later in Section 6.2.

From now on, we will not worry about issues of existence and uniqueness—our vector fields will typically be smooth enough to avoid trouble. If we happen to come across a more dangerous example, we'll deal with it then.
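Blow-up is easy to observe numerically. Here is a minimal sketch (my own illustration, with an arbitrary cutoff standing in for "infinity") that steps $\dot{x} = 1 + x^2$ forward with the simple update rule described later in Section 2.8 and watches the solution escape near $t = \pi/2$.

# Minimal sketch: watch x' = 1 + x**2, x(0) = 0 blow up near t = pi/2,
# as predicted by the exact solution x(t) = tan(t). Illustration only;
# the cutoff 1e6 is an arbitrary stand-in for "infinity".
import math

x, t, dt = 0.0, 0.0, 1e-5
while x < 1e6:
    x += (1.0 + x * x) * dt    # simplest time step; see Section 2.8
    t += dt

print(f"x exceeded 1e6 at t = {t:.4f} (compare pi/2 = {math.pi / 2:.4f})")

With a small step size the escape time lands just past $\pi/2$, a numerical echo of the finite-time blow-up computed above.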
2.6 Impossibility of Oscillations

Fixed points dominate the dynamics of first-order systems. In all our examples so far, all trajectories either approached a fixed point, or diverged to $\pm\infty$. In fact, those are the only things that can happen for a vector field on the real line. The reason is that trajectories are forced to increase or decrease monotonically, or remain constant (Figure 2.6.1). To put it more geometrically, the phase point never reverses direction.

[Figure 2.6.1: a vector field $\dot{x}$ versus $x$, with arrows on the $x$-axis showing monotone flow between fixed points.]

Thus, if a fixed point is regarded as an equilibrium solution, the approach to equilibrium is always monotonic—overshoot and damped oscillations can never occur in a first-order system. For the same reason, undamped oscillations are impossible. Hence there are no periodic solutions to $\dot{x} = f(x)$.

These general results are fundamentally topological in origin. They reflect the fact that $\dot{x} = f(x)$ corresponds to flow on a line. If you flow monotonically on a line, you'll never come back to your starting place—that's why periodic solutions are impossible. (Of course, if we were dealing with a circle rather than a line, we could eventually return to our starting place. Thus vector fields on the circle can exhibit periodic solutions, as we discuss in Chapter 4.)

Mechanical Analog: Overdamped Systems

It may seem surprising that solutions to $\dot{x} = f(x)$ can't oscillate. But this result becomes obvious if we think in terms of a mechanical analog. We regard $\dot{x} = f(x)$ as a limiting case of Newton's law, in the limit where the "inertia term" $m\ddot{x}$ is negligible.

For example, suppose a mass $m$ is attached to a nonlinear spring whose restoring force is $F(x)$, where $x$ is the displacement from the origin. Furthermore, suppose that the mass is immersed in a vat of very viscous fluid, like honey or motor oil (Figure 2.6.2), so that it is subject to a damping force $b\dot{x}$. Then Newton's law is $m\ddot{x} + b\dot{x} = F(x)$.

[Figure 2.6.2: a mass $m$ on a spring with restoring force $F(x)$, submerged in a vat of honey.]

If the viscous damping is strong compared to the inertia term ($b\dot{x} \gg m\ddot{x}$), the system should behave like $b\dot{x} = F(x)$, or equivalently $\dot{x} = f(x)$, where $f(x) = b^{-1}F(x)$. In this overdamped limit, the behavior of the mechanical system is clear. The mass prefers to sit at a stable equilibrium, where $f(x) = 0$ and $f'(x) < 0$. If displaced a bit, the mass is slowly dragged back to equilibrium by the restoring force. No overshoot can occur, because the damping is enormous. And undamped oscillations are out of the question! These conclusions agree with those obtained earlier by geometric reasoning.

Actually, we should confess that this argument contains a slight swindle. The neglect of the inertia term $m\ddot{x}$ is valid, but only after a rapid initial transient during which the inertia and damping terms are of comparable size. An honest discussion of this point requires more machinery than we have available. We'll return to this matter in Section 3.5.
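The overdamped limit can also be checked directly. The following sketch (an assumed setup of my own, not from the text: a linear spring $F(x) = -x$ with $m = 1$ and $b = 50$) integrates Newton's law $m\ddot{x} + b\dot{x} = F(x)$ alongside the first-order approximation $b\dot{x} = F(x)$; after a brief transient the two trajectories agree, and neither overshoots the equilibrium.

# Minimal sketch (assumed parameters): compare m*x'' + b*x' = F(x) with its
# overdamped limit b*x' = F(x) for a linear spring F(x) = -x. With b >> m,
# the displaced mass creeps back to x = 0 monotonically, with no overshoot.

m, b, dt, T = 1.0, 50.0, 1e-3, 10.0
F = lambda x: -x

x, v = 1.0, 0.0        # full system, written as two first-order equations
x_od = 1.0             # overdamped approximation x' = F(x)/b

for _ in range(int(T / dt)):
    a = (F(x) - b * v) / m            # acceleration from Newton's law
    x, v = x + v * dt, v + a * dt     # simple time-stepping
    x_od += (F(x_od) / b) * dt

print(f"t = {T}: full system x = {x:.5f}, overdamped x = {x_od:.5f}")

The two printed values nearly coincide, consistent with the argument above (and with the confessed swindle: they differ most during the rapid initial transient).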
2.7 Potentials

There's another way to visualize the dynamics of the first-order system $\dot{x} = f(x)$, based on the physical idea of potential energy. We picture a particle sliding down the walls of a potential well, where the potential $V(x)$ is defined by

$$f(x) = -\frac{dV}{dx}.$$

As before, you should imagine that the particle is heavily damped—its inertia is completely negligible compared to the damping force and the force due to the potential. For example, suppose that the particle has to slog through a thick layer of goo that covers the walls of the potential (Figure 2.7.1).

[Figure 2.7.1: a particle slogging through goo on the walls of a potential $V(x)$.]

The negative sign in the definition of $V$ follows the standard convention in physics; it implies that the particle always moves "downhill" as the motion proceeds. To see this, we think of $x$ as a function of $t$, and then calculate the time-derivative of $V(x(t))$. Using the chain rule, we obtain

$$\frac{dV}{dt} = \frac{dV}{dx}\frac{dx}{dt}.$$

Now for a first-order system,

$$\frac{dx}{dt} = -\frac{dV}{dx},$$

since $\dot{x} = f(x) = -dV/dx$, by the definition of the potential. Hence,

$$\frac{dV}{dt} = -\left(\frac{dV}{dx}\right)^2 \leq 0.$$

Thus $V(t)$ decreases along trajectories, and so the particle always moves toward lower potential. Of course, if the particle happens to be at an equilibrium point where $dV/dx = 0$, then $V$ remains constant. This is to be expected, since $dV/dx = 0$ implies $\dot{x} = 0$; equilibria occur at the fixed points of the vector field. Note that local minima of $V(x)$ correspond to stable fixed points, as we'd expect intuitively, and local maxima correspond to unstable fixed points.

EXAMPLE 2.7.1:
Graph the potential for the system $\dot{x} = -x$, and identify all the equilibrium points.
Solution: We need to find $V(x)$ such that $\dot{x} = -dV/dx = -x$. The general solution is $V(x) = \frac{1}{2}x^2 + C$, where $C$ is an arbitrary constant. (It always happens that the potential is only defined up to an additive constant. For convenience, we usually choose $C = 0$.) The graph of $V(x)$ is shown in Figure 2.7.2. The only equilibrium point occurs at $x = 0$, and it's stable.

[Figure 2.7.2: the parabolic potential $V(x) = \frac{1}{2}x^2$.]

EXAMPLE 2.7.2:
Graph the potential for the system $\dot{x} = x - x^3$, and identify all equilibrium points.
Solution: Solving $-dV/dx = x - x^3$ yields $V(x) = -\frac{1}{2}x^2 + \frac{1}{4}x^4 + C$. Once again we set $C = 0$. Figure 2.7.3 shows the graph of $V$. The local minima at $x = \pm 1$ correspond to stable equilibria, and the local maximum at $x = 0$ corresponds to an unstable equilibrium. The potential shown in Figure 2.7.3 is often called a double-well potential, and the system is said to be bistable, since it has two stable equilibria.

[Figure 2.7.3: the double-well potential $V(x) = -\frac{1}{2}x^2 + \frac{1}{4}x^4$, with wells at $x = \pm 1$.]
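The potential picture also lends itself to computation. As a minimal sketch (my own illustration; the hand-rolled bisection helper is invented for this example), the snippet below locates the equilibria of Example 2.7.2 as the critical points of the double-well potential and classifies each by the curvature of $V$.

# Minimal sketch for Example 2.7.2: find equilibria of x' = x - x**3 as
# critical points of V(x) = -x**2/2 + x**4/4 and classify them by the
# curvature V''(x*). Illustration only; `bisect` is written out by hand.

f = lambda x: x - x**3               # f = -dV/dx
V = lambda x: -x**2 / 2 + x**4 / 4

def bisect(g, lo, hi, tol=1e-12):
    """Refine a root of g inside [lo, hi], assuming one sign change."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Grid of midpoints, chosen so no grid point lands exactly on a root.
grid = [(i + 0.5) / 100.0 for i in range(-200, 200)]
for a, c in zip(grid, grid[1:]):
    if f(a) * f(c) < 0:              # a sign change brackets an equilibrium
        x_star = bisect(f, a, c)
        curv = (V(x_star + 1e-4) - 2 * V(x_star) + V(x_star - 1e-4)) / 1e-8
        kind = "stable (local min of V)" if curv > 0 else "unstable (local max of V)"
        print(f"x* = {x_star:+.4f}: {kind}")

Running this reports stable equilibria at $x = \pm 1$ and an unstable one at $x = 0$, exactly the minima and maximum of the double well.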
2.8 Solving Equations on the Computer

Throughout this chapter we have used graphical and analytical methods to analyze first-order systems. Every budding dynamicist should master a third tool: numerical methods. In the old days, numerical methods were impractical because they required enormous amounts of tedious hand-calculation. But all that has changed, thanks to the computer. Computers enable us to approximate the solutions to analytically intractable problems, and also to visualize those solutions. In this section we take our first look at dynamics on the computer, in the context of numerical integration of $\dot{x} = f(x)$.

Numerical integration is a vast subject. We will barely scratch the surface. See Chapter 17 of Press et al. (2007) for an excellent treatment.

Euler's Method

The problem can be posed this way: given the differential equation $\dot{x} = f(x)$, subject to the condition $x = x_0$ at $t = t_0$, find a systematic way to approximate the solution $x(t)$.

Suppose we use the vector field interpretation of $\dot{x} = f(x)$. That is, we think of a fluid flowing steadily on the $x$-axis, with velocity $f(x)$ at the location $x$. Imagine we're riding along with a phase point being carried downstream by the fluid. Initially we're at $x_0$, and the local velocity is $f(x_0)$. If we flow for a short time $\Delta t$, we'll have moved a distance $f(x_0)\Delta t$, because distance = rate × time. Of course, that's not quite right, because our velocity was changing a little bit throughout the step. But over a sufficiently small step, the velocity will be nearly constant and our approximation should be reasonably good. Hence our new position $x(t_0 + \Delta t)$ is approximately $x_0 + f(x_0)\Delta t$. Let's call this approximation $x_1$. Thus

$$x(t_0 + \Delta t) \approx x_1 = x_0 + f(x_0)\Delta t.$$

Now we iterate. Our approximation has taken us to a new location $x_1$; our new velocity is $f(x_1)$; we step forward to $x_2 = x_1 + f(x_1)\Delta t$; and so on. In general, the update rule is

$$x_{n+1} = x_n + f(x_n)\Delta t.$$

This is the simplest possible numerical integration scheme. It is known as Euler's method.

Euler's method can be visualized by plotting $x$ versus $t$ (Figure 2.8.1). The curve shows the exact solution $x(t)$, and the open dots show its values $x(t_n)$ at the discrete times $t_n = t_0 + n\Delta t$. The black dots show the approximate values given by the Euler method. As you can see, the approximation gets bad in a hurry unless $\Delta t$ is extremely small. Hence Euler's method is not recommended in practice, but it contains the conceptual essence of the more accurate methods to be discussed next.

[Figure 2.8.1: the exact solution $x(t)$ and the Euler approximation $x_1, x_2, \ldots$, which drifts away from the exact curve after each step.]

Refinements

One problem with the Euler method is that it estimates the derivative only at the left end of the time interval between $t_n$ and $t_{n+1}$. A more sensible approach would be to use the average derivative across this interval. This is the idea behind the improved Euler method. We first take a trial step across the interval, using the Euler method. This produces a trial value $\tilde{x}_{n+1} = x_n + f(x_n)\Delta t$; the tilde above the $x$ indicates that this is a tentative step, used only as a probe. Now that we've estimated the derivative on both ends of the interval, we average $f(x_n)$ and $f(\tilde{x}_{n+1})$, and use that to take the real step across the interval. Thus the improved Euler method is

$$\tilde{x}_{n+1} = x_n + f(x_n)\Delta t \qquad \text{(the trial step)}$$
$$x_{n+1} = x_n + \tfrac{1}{2}\left[f(x_n) + f(\tilde{x}_{n+1})\right]\Delta t. \qquad \text{(the real step)}$$

This method is more accurate than the Euler method, in the sense that it tends to make a smaller error $E = |x(t_n) - x_n|$ for a given stepsize $\Delta t$. In both cases, the error $E \to 0$ as $\Delta t \to 0$, but the error decreases faster for the improved Euler method. One can show that $E \propto \Delta t$ for the Euler method, but $E \propto (\Delta t)^2$ for the improved Euler method (Exercises 2.8.7 and 2.8.8). In the jargon of numerical analysis, the Euler method is first order, whereas the improved Euler method is second order.

Methods of third, fourth, and even higher orders have been concocted, but you should realize that higher order methods are not necessarily superior. Higher order methods require more calculations and function evaluations, so there's a computational cost associated with them. In practice, a good balance is achieved by the fourth-order Runge–Kutta method. To find $x_{n+1}$ in terms of $x_n$, this method first requires us to calculate the following four numbers (cunningly chosen, as you'll see in Exercise 2.8.9):

$$k_1 = f(x_n)\Delta t$$
$$k_2 = f(x_n + \tfrac{1}{2}k_1)\Delta t$$
$$k_3 = f(x_n + \tfrac{1}{2}k_2)\Delta t$$
$$k_4 = f(x_n + k_3)\Delta t.$$

Then $x_{n+1}$ is given by

$$x_{n+1} = x_n + \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right).$$

This method generally gives accurate results without requiring an excessively small stepsize $\Delta t$. Of course, some problems are nastier, and may require small steps in certain time intervals, while permitting very large steps elsewhere. In such cases, you may want to use a Runge–Kutta routine with an automatic stepsize control; see Press et al. (2007) for details.
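All three schemes fit in a few lines of code. Here is a minimal sketch (my own, in Python; real work would normally call a library routine of the kind Press et al. describe). The test problem $\dot{x} = -x$, whose exact solution is $x(t) = e^{-t}$, lets us check the claimed orders of accuracy: halving $\Delta t$ should cut the error by a factor of roughly 2 for Euler, 4 for improved Euler, and 16 for fourth-order Runge–Kutta.

# Minimal sketch of the three schemes above; each advances x' = f(x) by one
# step of size dt. The error-ratio check at the bottom confirms the orders.
import math

def euler_step(f, x, dt):
    return x + f(x) * dt

def improved_euler_step(f, x, dt):
    x_trial = x + f(x) * dt                     # trial step (the probe)
    return x + 0.5 * (f(x) + f(x_trial)) * dt   # real step, averaged slope

def rk4_step(f, x, dt):
    k1 = f(x) * dt
    k2 = f(x + 0.5 * k1) * dt
    k3 = f(x + 0.5 * k2) * dt
    k4 = f(x + k3) * dt
    return x + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

f = lambda x: -x                                # exact solution: x(t) = e**(-t)
for step in (euler_step, improved_euler_step, rk4_step):
    errors = []
    for dt in (0.1, 0.05):
        x = 1.0
        for _ in range(round(1.0 / dt)):        # integrate from t = 0 to t = 1
            x = step(f, x, dt)
        errors.append(abs(x - math.exp(-1.0)))
    print(f"{step.__name__}: error shrinks by {errors[0] / errors[1]:.1f}x")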
Now that computers are so fast, you may wonder why we don't just pick a tiny $\Delta t$ once and for all. The trouble is that excessively many computations will occur, and each one carries a penalty in the form of round-off error. Computers don't have infinite accuracy—they don't distinguish between numbers that differ by some small amount $\varepsilon$. For numbers of order 1, typically $\varepsilon \approx 10^{-7}$ for single precision and $\varepsilon \approx 10^{-16}$ for double precision. Round-off error occurs during every calculation, and will begin to accumulate in a serious way if $\Delta t$ is too small. See Hubbard and West (1991) for a good discussion.

Practical Matters

You have several options if you want to solve differential equations on the computer. If you like to do things yourself, you can write your own numerical integration routines in your favorite programming language, and plot the results using whatever graphics programs are available. The information given above should be enough to get you started. For further guidance, consult Press et al. (2007).

A second option is to use existing packages for numerical methods. Matlab, Mathematica, and Maple all have programs for solving ordinary differential equations and graphing their solutions.

The final option is for people who want to explore dynamics, not computing. Dynamical systems software is available for personal computers. All you have to do is type in the equations and the parameters; the program solves the equations numerically and plots the results. Some recommended programs are PPlane (written by John Polking and available online as a Java applet; this is a pleasant choice for beginners) and XPP (by Bard Ermentrout, available on many platforms including iPhone and iPad; this is a more powerful tool for researchers and serious users).

EXAMPLE 2.8.1:
Solve the system $\dot{x} = x(1 - x)$ numerically.
Solution: This is a logistic equation (Section 2.3) with parameters $r = 1$, $K = 1$. Previously we gave a rough sketch of the solutions, based on geometric arguments; now we can draw a more quantitative picture.

As a first step, we plot the slope field for the system in the $(t, x)$ plane (Figure 2.8.2). Here the equation $\dot{x} = x(1 - x)$ is being interpreted in a new way: for each point $(t, x)$, the equation gives the slope $dx/dt$ of the solution passing through that point. The slopes are indicated by little line segments in Figure 2.8.2. Finding a solution now becomes a problem of drawing a curve that is always tangent to the local slope. Figure 2.8.3 shows four solutions starting from various points in the $(t, x)$ plane.

[Figure 2.8.2: the slope field of $\dot{x} = x(1 - x)$ in the $(t, x)$ plane, for $0 \leq t \leq 10$ and $0 \leq x \leq 2$.]

[Figure 2.8.3: the same slope field with four numerically computed solution curves, all approaching $x = 1$.]

These numerical solutions were computed using the Runge–Kutta method with a stepsize $\Delta t = 0.1$. The solutions have the shape expected from Section 2.3.

Computers are indispensable for studying dynamical systems. We will use them liberally throughout this book, and you should do likewise.
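To make the example concrete in code, here is a minimal sketch (my own; the initial conditions are arbitrary choices, and the book's figures were presumably produced with comparable settings) that integrates $\dot{x} = x(1 - x)$ by the fourth-order Runge–Kutta method with $\Delta t = 0.1$ and prints a few points along each solution.

# Minimal sketch behind Example 2.8.1: integrate x' = x(1 - x) by
# fourth-order Runge-Kutta with dt = 0.1. The initial conditions are
# arbitrary; every solution should creep toward the stable value x = 1.

f = lambda x: x * (1.0 - x)

def rk4_step(f, x, dt):
    k1 = f(x) * dt
    k2 = f(x + 0.5 * k1) * dt
    k3 = f(x + 0.5 * k2) * dt
    k4 = f(x + k3) * dt
    return x + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt = 0.1
for x0 in (0.1, 0.5, 1.5, 2.0):
    x, samples = x0, []
    for n in range(101):                 # t from 0 to 10, as in Figure 2.8.3
        if n % 25 == 0:
            samples.append(f"x({n * dt:.1f}) = {x:.4f}")
        x = rk4_step(f, x, dt)
    print(f"x0 = {x0}: " + ", ".join(samples))

Each printed row approaches $x = 1$ monotonically, matching the sigmoid and decelerating solution shapes deduced in Section 2.3.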
EXERCISES FOR CHAPTER 2

2.1 A Geometric Way of Thinking

In the next three exercises, interpret $\dot{x} = \sin x$ as a flow on the line.

2.1.1 Find all the fixed points of the flow.

2.1.2 At which points $x$ does the flow have greatest velocity to the right?

2.1.3
a) Find the flow's acceleration $\ddot{x}$ as a function of $x$.
b) Find the points where the flow has maximum positive acceleration.

2.1.4 (Exact solution of $\dot{x} = \sin x$) As shown in the text, $\dot{x} = \sin x$ has the solution $t = \ln\left|\frac{\csc x_0 + \cot x_0}{\csc x + \cot x}\right|$, where $x_0 = x(0)$ is the initial value of $x$.
a) Given the specific initial condition $x_0 = \pi/4$, show that the solution above can be inverted to obtain

$$x(t) = 2\tan^{-1}\left(\frac{e^t}{1 + \sqrt{2}}\right).$$

Conclude that $x(t) \to \pi$ as $t \to \infty$, as claimed in Section 2.1. (You need to be good with trigonometric identities to solve this problem.)
b) Try to find the analytical solution for $x(t)$, given an arbitrary initial condition $x_0$.

2.1.5 (A mechanical analog)
a) Find a mechanical system that is approximately governed by $\dot{x} = \sin x$.
b) Using your physical intuition, explain why it now becomes obvious that $x^* = 0$ is an unstable fixed point and $x^* = \pi$ is stable.

2.2 Fixed Points and Stability

Analyze the following equations graphically. In each case, sketch the vector field on the real line, find all the fixed points, classify their stability, and sketch the graph of $x(t)$ for different initial conditions. Then try for a few minutes to obtain the analytical solution for $x(t)$; if you get stuck, don't try for too long since in several cases it's impossible to solve the equation in closed form!

2.2.1 $\dot{x} = 4x^2 - 16$
2.2.2 $\dot{x} = 1 - x^{14}$
2.2.3 $\dot{x} = x - x^3$
2.2.4 $\dot{x} = e^{-x}\sin x$
2.2.5 $\dot{x} = 1 + \frac{1}{2}\cos x$
2.2.6 $\dot{x} = 1 - 2\cos x$
2.2.7 $\dot{x} = e^x - \cos x$ (Hint: Sketch the graphs of $e^x$ and $\cos x$ on the same axes, and look for intersections. You won't be able to find the fixed points explicitly, but you can still find the qualitative behavior.)

2.2.8 (Working backwards, from flows to equations) Given an equation $\dot{x} = f(x)$, we know how to sketch the corresponding flow on the real line. Here you are asked to solve the opposite problem: For the phase portrait shown in Figure 1, find an equation that is consistent with it. (There are an infinite number of correct answers—and wrong ones too.)

[Figure 1: a phase portrait on the real line with the points $-1$, $0$, and $2$ marked.]

2.2.9 (Backwards again, now from solutions to equations) Find an equation $\dot{x} = f(x)$ whose solutions $x(t)$ are consistent with those shown in Figure 2.

[Figure 2: graphs of $x(t)$ for several initial conditions, with the levels $x = 1$, $0$, and $-1$ marked.]

2.2.10 (Fixed points) For each of (a)–(e), find an equation $\dot{x} = f(x)$ with the stated properties, or if there are no examples, explain why not. (In all cases, assume that $f(x)$ is a smooth function.)
a) Every real number is a fixed point.
b) Every integer is a fixed point, and there are no others.
c) There are precisely three fixed points, and all of them are stable.
d) There are no fixed points.
e) There are precisely 100 fixed points.

2.2.11 (Analytical solution for charging capacitor) Obtain the analytical solution of the initial value problem $\dot{Q} = \frac{V_0}{R} - \frac{Q}{RC}$, with $Q(0) = 0$, which arose in Example 2.2.2.

2.2.12 (A nonlinear resistor) Suppose the resistor in Example 2.2.2 is replaced by a nonlinear resistor. In other words, this resistor does not have a linear relation between voltage and current. Such nonlinearity arises in certain solid-state devices. Instead of $I_R = V/R$, suppose we have $I_R = g(V)$, where $g(V)$ has the shape shown in Figure 3. Redo Example 2.2.2 in this case. Derive the circuit equations, find all the fixed points, and analyze their stability. What qualitative effects does the nonlinearity introduce (if any)?

[Figure 3: a nonlinear current-voltage characteristic $I = g(V)$.]

2.2.13 (Terminal velocity) The velocity $v(t)$ of a skydiver falling to the ground is governed by $m\dot{v} = mg - kv^2$, where $m$ is the mass of the skydiver, $g$ is the acceleration due to gravity, and $k > 0$ is a constant related to the amount of air resistance.
a) Obtain the analytical solution for $v(t)$, assuming that $v(0) = 0$.
b) Find the limit of $v(t)$ as $t \to \infty$. This limiting velocity is called the terminal velocity.
(Beware of bad jokes about the word terminal and parachutes that fail to open.)
c) Give a graphical analysis of this problem, and thereby re-derive a formula for the terminal velocity.
d) An experimental study (Carlson et al. 1942) confirmed that the equation $m\dot{v} = mg - kv^2$ gives a good quantitative fit to data on human skydivers. Six men were dropped from altitudes varying from 10,600 feet to 31,400 feet to a terminal altitude of 2,100 feet, at which they opened their parachutes. The long free fall from 31,400 to 2,100 feet took 116 seconds. The average weight of the men and their equipment was 261.2 pounds. In these units, $g = 32.2$ ft/sec². Compute the average velocity $V_{\text{avg}}$.
e) Using the data given here, estimate the terminal velocity, and the value of the drag constant $k$. (Hints: First you need to find an exact formula for $s(t)$, the distance fallen, where $s(0) = 0$, $\dot{s} = v$, and $v(t)$ is known from part (a). You should get $s(t) = \frac{V^2}{g}\ln\left(\cosh\frac{gt}{V}\right)$, where $V$ is the terminal velocity. Then solve for $V$ graphically or numerically, using $s = 29{,}300$, $t = 116$, and $g = 32.2$.)

A slicker way to estimate $V$ is to suppose $V \approx V_{\text{avg}}$, as a rough first approximation. Then show that $gt/V \approx 15$. Since $gt/V \gg 1$, we may use the approximation $\ln(\cosh x) \approx x - \ln 2$ for $x \gg 1$. Derive this approximation and then use it to obtain an analytical estimate of $V$. Then $k$ follows from part (b). This analysis is from Davis (1962).

2.3 Population Growth

2.3.1 (Exact solution of logistic equation) There are two ways to solve