Digital Image Processing 4th Edition - Global Edition PDF
2018
Rafael C. Gonzalez, Richard E. Woods
Summary
This is a textbook on Digital Image Processing, specifically the 4th Edition Global Edition. It covers the fundamental principles and techniques of digital image processing, including intensity transformations and spatial filtering, as well as filtering in the frequency domain. Resources for digital image processing are listed in the text.
Full Transcript
Digital Image Processing, Fourth Edition, Global Edition
Rafael C. Gonzalez and Richard E. Woods

For these Global Editions, the editorial team at Pearson has collaborated with educators across the world to address a wide range of subjects and requirements, equipping students with the best possible learning tools. This Global Edition preserves the cutting-edge approach and pedagogy of the original, but also features alterations, customization, and adaptation from the North American version.

This is a special edition of an established title widely used by colleges and universities throughout the world. Pearson published this exclusive edition for the benefit of students outside the United States and Canada. If you purchased this book within the United States or Canada, you should be aware that it has been imported without the approval of the Publisher or Author. The Global Edition is not supported in the United States and Canada.

Support Package for Digital Image Processing
Your new textbook provides access to support packages that may include reviews in areas like probability and vectors, tutorials on topics relevant to the material in the book, an image database, and more. Refer to the Preface in the textbook for a detailed list of resources. Follow the instructions below to register for the Companion Website for Rafael C. Gonzalez and Richard E. Woods' Digital Image Processing, Fourth Edition, Global Edition.
1. Go to www.ImageProcessingPlace.com
2. Find the title of your textbook.
3. Click Support Materials and follow the on-screen instructions to create a login name and password.
Use the login name and password you created during registration to start using the digital resources that accompany your textbook. IMPORTANT: This serial code can only be used once. This subscription is not transferrable.

Digital Image Processing, Fourth Edition, Global Edition
Rafael C. Gonzalez, University of Tennessee
Richard E. Woods, Interapptics
330 Hudson Street, New York, NY 10013

Senior Vice President, Courseware Portfolio Management: Marcia J. Horton
Director, Portfolio Management: Engineering, Computer Science & Global Editions: Julian Partridge
Portfolio Manager: Julie Bai
Field Marketing Manager: Demetrius Hall
Product Marketing Manager: Yvonne Vannatta
Marketing Assistant: Jon Bryant
Content Managing Producer, ECS and Math: Scott Disanno
Content Producer: Michelle Bayman
Project Manager: Rose Kernan
Assistant Project Editor, Global Editions: Vikash Tiwari
Operations Specialist: Maura Zaldivar-Garcia
Manager, Rights and Permissions: Ben Ferrini
Senior Manufacturing Controller, Global Editions: Trudy Kimber
Media Production Manager, Global Editions: Vikram Kumar
Cover Designer: Lumina Datamatics
Cover Photos: CT image © zhuravliki.123rf.com/Pearson Asset Library; Gram-negative bacteria © royaltystockphoto.com/Shutterstock.com; Orion Nebula © creativemarc/Shutterstock.com; Fingerprints © Larysa Ray/Shutterstock.com; Cancer cells © Greenshoots Communications/Alamy Stock Photo

MATLAB is a registered trademark of The MathWorks, Inc., 1 Apple Hill Drive, Natick, MA 01760-2098.
Pearson Education Limited
Edinburgh Gate, Harlow, Essex CM20 2JE, England
and Associated Companies throughout the world

Visit us on the World Wide Web at: www.pearsonglobaleditions.com

© Pearson Education Limited 2018

The rights of Rafael C. Gonzalez and Richard E. Woods to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Authorized adaptation from the United States edition, entitled Digital Image Processing, Fourth Edition, ISBN 978-0-13-335672-4, by Rafael C. Gonzalez and Richard E. Woods, published by Pearson Education © 2018.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a license permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, Saffron House, 6–10 Kirby Street, London EC1N 8TS.

All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

10 9 8 7 6 5 4 3 2 1
ISBN 10: 1-292-22304-9
ISBN 13: 978-1-292-22304-9

Typeset by Richard E. Woods
Printed and bound in Malaysia

To Connie, Ralph, and Rob
and
To Janice, David, and Jonathan

Contents
Preface
Acknowledgments
The Book Website
The DIP4E Support Packages
About the Authors
1 Introduction
  What is Digital Image Processing?
  The Origins of Digital Image Processing
  Examples of Fields that Use Digital Image Processing
  Fundamental Steps in Digital Image Processing
  Components of an Image Processing System
2 Digital Image Fundamentals
  Elements of Visual Perception
  Light and the Electromagnetic Spectrum
  Image Sensing and Acquisition
  Image Sampling and Quantization
  Some Basic Relationships Between Pixels
  Introduction to the Basic Mathematical Tools Used in Digital Image Processing
3 Intensity Transformations and Spatial Filtering
  Background
  Some Basic Intensity Transformation Functions
  Histogram Processing
  Fundamentals of Spatial Filtering
  Smoothing (Lowpass) Spatial Filters
  Sharpening (Highpass) Spatial Filters
  Highpass, Bandreject, and Bandpass Filters from Lowpass Filters
  Combining Spatial Enhancement Methods
4 Filtering in the Frequency Domain
  Background
  Preliminary Concepts
  Sampling and the Fourier Transform of Sampled Functions
  The Discrete Fourier Transform of One Variable
  Extensions to Functions of Two Variables
  Some Properties of the 2-D DFT and IDFT
  The Basics of Filtering in the Frequency Domain
  Image Smoothing Using Lowpass Frequency Domain Filters
  Image Sharpening Using Highpass Filters
  Selective Filtering
  The Fast Fourier Transform
5 Image Restoration and Reconstruction
  A Model of the Image Degradation/Restoration Process
  Noise Models
  Restoration in the Presence of Noise Only—Spatial Filtering
  Periodic Noise Reduction Using Frequency Domain Filtering
  Linear, Position-Invariant Degradations
  Estimating the Degradation Function
  Inverse Filtering
  Minimum Mean Square Error (Wiener) Filtering
  Constrained Least Squares Filtering
  Geometric Mean Filter
  Image Reconstruction from Projections
6 Color Image Processing
  Color Fundamentals
  Color Models
  Pseudocolor Image Processing
  Basics of Full-Color Image Processing
  Color Transformations
  Color Image Smoothing and Sharpening
  Using Color in Image Segmentation
  Noise in Color Images
  Color Image Compression
7 Wavelet and Other Image Transforms
  Preliminaries
  Matrix-based Transforms
  Correlation
  Basis Functions in the Time-Frequency Plane
  Basis Images
  Fourier-Related Transforms
  Walsh-Hadamard Transforms
  Slant Transform
  Haar Transform
  Wavelet Transforms
8 Image Compression and Watermarking
  Fundamentals
  Huffman Coding
  Golomb Coding
  Arithmetic Coding
  LZW Coding
  Run-length Coding
  Symbol-based Coding
  Bit-plane Coding
  Block Transform Coding
  Predictive Coding
  Wavelet Coding
  Digital Image Watermarking
9 Morphological Image Processing
  Preliminaries
  Erosion and Dilation
  Opening and Closing
  The Hit-or-Miss Transform
  Some Basic Morphological Algorithms
  Morphological Reconstruction
  Summary of Morphological Operations on Binary Images
  Grayscale Morphology
10 Image Segmentation
  Fundamentals
  Point, Line, and Edge Detection
  Thresholding
  Segmentation by Region Growing and by Region Splitting and Merging
  Region Segmentation Using Clustering and Superpixels
  Region Segmentation Using Graph Cuts
  Segmentation Using Morphological Watersheds
  The Use of Motion in Segmentation
11 Feature Extraction
  Background
  Boundary Preprocessing
  Boundary Feature Descriptors
  Region Feature Descriptors
  Principal Components as Feature Descriptors
  Whole-Image Features
  Scale-Invariant Feature Transform (SIFT)
12 Image Pattern Classification
  Background
  Patterns and Pattern Classes
  Pattern Classification by Prototype Matching
  Optimum (Bayes) Statistical Classifiers
  Neural Networks and Deep Learning
  Deep Convolutional Neural Networks
  Some Additional Details of Implementation
Bibliography
Index

Preface

When something can be read without effort, great effort has gone into its writing. (Enrique Jardiel Poncela)

This edition of Digital Image Processing is a major revision of the book. As in the 1977 and 1987 editions by Gonzalez and Wintz, and the 1992, 2002, and 2008 editions by Gonzalez and Woods, this sixth-generation edition was prepared with students and instructors in mind. The principal objectives of the book continue to be to provide an introduction to basic concepts and methodologies applicable to digital image processing, and to develop a foundation that can be used as the basis for further study and research in this field. To achieve these objectives, we focused again on material that we believe is fundamental and whose scope of application is not limited to the solution of specialized problems. The mathematical complexity of the book remains at a level well within the grasp of college seniors and first-year graduate students who have introductory preparation in mathematical analysis, vectors, matrices, probability, statistics, linear systems, and computer programming. The book website provides tutorials to support readers needing a review of this background material.

One of the principal reasons this book has been the world leader in its field for 40 years is the level of attention we pay to the changing educational needs of our readers. The present edition is based on an extensive survey that involved faculty, students, and independent readers of the book in 150 institutions from 30 countries. The survey revealed a need for coverage of new material that has matured since the last edition of the book. The principal findings of the survey indicated a need for:

- Expanded coverage of the fundamentals of spatial filtering.
- A more comprehensive and cohesive coverage of image transforms.
- A more complete presentation of finite differences, with a focus on edge detection.
- A discussion of clustering, superpixels, and their use in region segmentation.
- Coverage of maximally stable extremal regions.
- Expanded coverage of feature extraction to include the Scale Invariant Feature Transform (SIFT).
- Expanded coverage of neural networks to include deep neural networks, backpropagation, deep learning, and, especially, deep convolutional neural networks.
- More homework exercises at the end of the chapters.

The new and reorganized material that resulted in the present edition is our attempt at providing a reasonable balance between rigor, clarity of presentation, and the findings of the survey. In addition to new material, earlier portions of the text were updated and clarified. This edition contains 241 new images, 72 new drawings, and 135 new exercises.
www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 9 6/16/2017 2:01:57 PM 10 Preface New to This Edition The highlights of this edition are as follows. Chapter 1: Some figures were updated, and parts of the text were rewritten to cor- respond to changes in later chapters. Chapter 2: Many of the sections and examples were rewritten for clarity. We added 14 new exercises. Chapter 3: Fundamental concepts of spatial filtering were rewritten to include a discussion on separable filter kernels, expanded coverage of the properties of low- pass Gaussian kernels, and expanded coverage of highpass, bandreject, and band- pass filters, including numerous new examples that illustrate their use. In addition to revisions in the text, including 6 new examples, the chapter has 59 new images, 2 new line drawings, and 15 new exercises. Chapter 4: Several of the sections of this chapter were revised to improve the clar- ity of presentation. We replaced dated graphical material with 35 new images and 4 new line drawings. We added 21 new exercises. Chapter 5: Revisions to this chapter were limited to clarifications and a few cor- rections in notation. We added 6 new images and 14 new exercises, Chapter 6: Several sections were clarified, and the explanation of the CMY and CMYK color models was expanded, including 2 new images. Chapter 7: This is a new chapter that brings together wavelets, several new trans- forms, and many of the image transforms that were scattered throughout the book. The emphasis of this new chapter is on the presentation of these transforms from a unified point of view. We added 24 new images, 20 new drawings, and 25 new exer- cises. Chapter 8: The material was revised with numerous clarifications and several improvements to the presentation. Chapter 9: Revisions of this chapter included a complete rewrite of several sec- tions, including redrafting of several line drawings. We added 16 new exercises Chapter 10: Several of the sections were rewritten for clarity. We updated the chapter by adding coverage of finite differences, K-means clustering, superpixels, and graph cuts. The new topics are illustrated with 4 new examples. In total, we added 29 new images, 3 new drawings, and 6 new exercises. Chapter 11: The chapter was updated with numerous topics, beginning with a more detailed classification of feature types and their uses. In addition to improvements in the clarity of presentation, we added coverage of slope change codes, expanded the explanation of skeletons, medial axes, and the distance transform, and added sev- eral new basic descriptors of compactness, circularity, and eccentricity. New mate- rial includes coverage of the Harris-Stephens corner detector, and a presentation of maximally stable extremal regions. A major addition to the chapter is a comprehen- sive discussion dealing with the Scale-Invariant Feature Transform (SIFT). The new material is complemented by 65 new images, 15 new drawings, and 12 new exercises. www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 10 6/16/2017 2:01:57 PM Preface 11 Chapter 12: This chapter underwent a major revision to include an extensive rewrite of neural networks and deep learning, an area that has grown significantly since the last edition of the book. We added a comprehensive discussion on fully connected, deep neural networks that includes derivation of backpropagation start- ing from basic principles. 
The equations of backpropagation were expressed in “tra- ditional” scalar terms, and then generalized into a compact set of matrix equations ideally suited for implementation of deep neural nets. The effectiveness of fully con- nected networks was demonstrated with several examples that included a compari- son with the Bayes classifier. One of the most-requested topics in the survey was coverage of deep convolutional neural networks. We added an extensive section on this, following the same blueprint we used for deep, fully connected nets. That is, we derived the equations of backpropagation for convolutional nets, and showed how they are different from “traditional” backpropagation. We then illustrated the use of convolutional networks with simple images, and applied them to large image data- bases of numerals and natural scenes. The written material is complemented by 23 new images, 28 new drawings, and 12 new exercises. Also for the first time, we have created student and faculty support packages that can be downloaded from the book website. The Student Support Package contains many of the original images in the book and answers to selected exercises The Fac- ulty Support Package contains solutions to all exercises, teaching suggestions, and all the art in the book in the form of modifiable PowerPoint slides. One support pack- age is made available with every new book, free of charge. The book website, established during the launch of the 2002 edition, continues to be a success, attracting more than 25,000 visitors each month. The site was upgraded for the launch of this edition. For more details on site features and content, see The Book Website, following the Acknowledgments section. This edition of Digital Image Processing is a reflection of how the educational needs of our readers have changed since 2008. As is usual in an endeavor such as this, progress in the field continues after work on the manuscript stops. One of the reasons why this book has been so well accepted since it first appeared in 1977 is its continued emphasis on fundamental concepts that retain their relevance over time. This approach, among other things, attempts to provide a measure of stability in a rapidly evolving body of knowledge. We have tried to follow the same principle in preparing this edition of the book. R.C.G. R.E.W. www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 11 6/16/2017 2:01:57 PM 12 Acknowledgments Acknowledgments We are indebted to a number of individuals in academic circles, industry, and gov- ernment who have contributed to this edition of the book. In particular, we wish to extend our appreciation to Hairong Qi and her students, Zhifei Zhang and Chengcheng Li, for their valuable review of the material on neural networks, and for their help in generating examples for that material. We also want to thank Ernesto Bribiesca Correa for providing and reviewing material on slope chain codes, and Dirk Padfield for his many suggestions and review of several chapters in the book. We appreciate Michel Kocher’s many thoughtful comments and suggestions over the years on how to improve the book. Thanks also to Steve Eddins for his sugges- tions on MATLAB and related software issues. Numerous individuals have contributed to material carried over from the previ- ous to the current edition of the book. Their contributions have been important in so many different ways that we find it difficult to acknowledge them in any other way but alphabetically. We thank Mongi A. 
Abidi, Yongmin Kim, Bryan Morse, Andrew Oldroyd, Ali M. Reza, Edgardo Felipe Riveron, Jose Ruiz Shulcloper, and Cameron H.G. Wright for their many suggestions on how to improve the presentation and/or the scope of coverage in the book. We are also indebted to Naomi Fernandes at the MathWorks for providing us with MATLAB software and support that were impor- tant in our ability to create many of the examples and experimental results included in this edition of the book. A significant percentage of the new images used in this edition (and in some cases their history and interpretation) were obtained through the efforts of indi- viduals whose contributions are sincerely appreciated. In particular, we wish to acknowledge the efforts of Serge Beucher, Uwe Boos, Michael E. Casey, Michael W. Davidson, Susan L. Forsburg, Thomas R. Gest, Daniel A. Hammer, Zhong He, Roger Heady, Juan A. Herrera, John M. Hudak, Michael Hurwitz, Chris J. Johannsen, Rhonda Knighton, Don P. Mitchell, A. Morris, Curtis C. Ober, David. R. Pickens, Michael Robinson, Michael Shaffer, Pete Sites, Sally Stowe, Craig Watson, David K. Wehe, and Robert A. West. We also wish to acknowledge other individuals and organizations cited in the captions of numerous figures throughout the book for their permission to use that material. We also thank Scott Disanno, Michelle Bayman, Rose Kernan, and Julie Bai for their support and significant patience during the production of the book. R.C.G. R.E.W. www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 12 6/16/2017 2:01:57 PM The Book Website www.ImageProcessingPlace.com Digital Image Processing is a completely self-contained book. However, the compan- ion website offers additional support in a number of important areas. For the Student or Independent Reader the site contains Reviews in areas such as probability, statistics, vectors, and matrices. A Tutorials section containing dozens of tutorials on topics relevant to the mate- rial in the book. An image database containing all the images in the book, as well as many other image databases. For the Instructor the site contains An Instructor’s Manual with complete solutions to all the problems. Classroom presentation materials in modifiable PowerPoint format. Material removed from previous editions, downloadable in convenient PDF format. Numerous links to other educational resources. For the Practitioner the site contains additional specialized topics such as Links to commercial sites. Selected new references. Links to commercial image databases. The website is an ideal tool for keeping the book current between editions by includ- ing new topics, digital images, and other relevant material that has appeared after the book was published. Although considerable care was taken in the production of the book, the website is also a convenient repository for any errors discovered between printings. The DIP4E Support Packages In this edition, we created support packages for students and faculty to organize all the classroom support materials available for the new edition of the book into one easy download. The Student Support Package contains many of the original images in the book, and answers to selected exercises, The Faculty Support Package contains solutions to all exercises, teaching suggestions, and all the art in the book in modifiable PowerPoint slides. One support package is made available with every new book, free of charge. Applications for the support packages are submitted at the book website. 
www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 13 6/16/2017 2:01:57 PM About the Authors RAFAEL C. GONZALEZ R. C. Gonzalez received the B.S.E.E. degree from the University of Miami in 1965 and the M.E. and Ph.D. degrees in electrical engineering from the University of Florida, Gainesville, in 1967 and 1970, respectively. He joined the Electrical and Computer Science Department at the University of Tennessee, Knoxville (UTK) in 1970, where he became Associate Professor in 1973, Professor in 1978, and Distin- guished Service Professor in 1984. He served as Chairman of the department from 1994 through 1997. He is currently a Professor Emeritus at UTK. Gonzalez is the founder of the Image & Pattern Analysis Laboratory and the Robotics & Computer Vision Laboratory at the University of Tennessee. He also founded Perceptics Corporation in 1982 and was its president until 1992. The last three years of this period were spent under a full-time employment contract with Westinghouse Corporation, who acquired the company in 1989. Under his direction, Perceptics became highly successful in image processing, computer vision, and laser disk storage technology. In its initial ten years, Perceptics introduced a series of innovative products, including: The world’s first commercially available computer vision system for automatically reading license plates on moving vehicles; a series of large-scale image processing and archiving systems used by the U.S. Navy at six different manufacturing sites throughout the country to inspect the rocket motors of missiles in the Trident II Submarine Program; the market-leading family of imaging boards for advanced Macintosh computers; and a line of trillion- byte laser disk products. He is a frequent consultant to industry and government in the areas of pattern recognition, image processing, and machine learning. His academic honors for work in these fields include the 1977 UTK College of Engineering Faculty Achievement Award; the 1978 UTK Chancellor’s Research Scholar Award; the 1980 Magnavox Engineering Professor Award; and the 1980 M.E. Brooks Distinguished Professor Award. In 1981 he became an IBM Professor at the University of Tennessee and in 1984 he was named a Distinguished Service Professor there. He was awarded a Distinguished Alumnus Award by the University of Miami in 1985, the Phi Kappa Phi Scholar Award in 1986, and the University of Tennessee’s Nathan W. Dougherty Award for Excellence in Engineering in 1992. Honors for industrial accomplishment include the 1987 IEEE Outstanding Engi- neer Award for Commercial Development in Tennessee; the 1988 Albert Rose National Award for Excellence in Commercial Image Processing; the 1989 B. Otto Wheeley Award for Excellence in Technology Transfer; the 1989 Coopers and Lybrand Entrepreneur of the Year Award; the 1992 IEEE Region 3 Outstanding Engineer Award; and the 1993 Automated Imaging Association National Award for Technology Development. Gonzalez is author or co-author of over 100 technical articles, two edited books, and four textbooks in the fields of pattern recognition, image processing, and robot- ics. His books are used in over 1000 universities and research institutions throughout www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 14 6/16/2017 2:01:57 PM the world. He is listed in the prestigious Marquis Who’s Who in America, Marquis Who’s Who in Engineering, Marquis Who’s Who in the World, and in 10 other national and international biographical citations. He is the co-holder of two U.S. 
Patents, and has been an associate editor of the IEEE Transactions on Systems, Man and Cyber- netics, and the International Journal of Computer and Information Sciences. He is a member of numerous professional and honorary societies, including Tau Beta Pi, Phi Kappa Phi, Eta Kappa Nu, and Sigma Xi. He is a Fellow of the IEEE. RICHARD E. WOODS R. E. Woods earned his B.S., M.S., and Ph.D. degrees in Electrical Engineering from the University of Tennessee, Knoxville in 1975, 1977, and 1980, respectively. He became an Assistant Professor of Electrical Engineering and Computer Science in 1981 and was recognized as a Distinguished Engineering Alumnus in 1986. A veteran hardware and software developer, Dr. Woods has been involved in the founding of several high-technology startups, including Perceptics Corporation, where he was responsible for the development of the company’s quantitative image analysis and autonomous decision-making products; MedData Interactive, a high- technology company specializing in the development of handheld computer systems for medical applications; and Interapptics, an internet-based company that designs desktop and handheld computer applications. Dr. Woods currently serves on several nonprofit educational and media-related boards, including Johnson University, and was recently a summer English instructor at the Beijing Institute of Technology. He is the holder of a U.S. Patent in the area of digital image processing and has published two textbooks, as well as numerous articles related to digital signal processing. Dr. Woods is a member of several profes- sional societies, including Tau Beta Pi, Phi Kappa Phi, and the IEEE. www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 15 6/16/2017 2:01:57 PM This page intentionally left blank www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 4 6/16/2017 2:01:57 PM 1 Introduction One picture is worth more than ten thousand words. Anonymous Preview Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for tasks such as storage, transmission, and extraction of pictorial information. This chapter has several objectives: (1) to define the scope of the field that we call image processing; (2) to give a historical perspective of the origins of this field; (3) to present an overview of the state of the art in image processing by examining some of the principal areas in which it is applied; (4) to discuss briefly the principal approaches used in digital image processing; (5) to give an overview of the components contained in a typical, general-purpose image processing system; and (6) to provide direction to the literature where image processing work is reported. The material in this chapter is extensively illustrated with a range of images that are represen- tative of the images we will be using throughout the book. Upon completion of this chapter, readers should: Understand the concept of a digital image. Be aware of the different fields in which digi- tal image processing methods are applied. Have a broad overview of the historical under- pinnings of the field of digital image process- Be familiar with the basic processes involved ing. in image processing. Understand the definition and scope of digi- Be familiar with the components that make tal image processing. up a general-purpose digital image process- ing system. 
Know the fundamentals of the electromagnetic spectrum and its relationship to image generation. Be familiar with the scope of the literature where image processing work is reported.

1.1 WHAT IS DIGITAL IMAGE PROCESSING?

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the intensity values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called picture elements, image elements, pels, and pixels. Pixel is the term used most widely to denote the elements of a digital image. We will consider these definitions in more formal terms in Chapter 2.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes, a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is still in its infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images.
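Before continuing to mid- and high-level processes, here is a small illustration of the definitions above: a grayscale digital image is simply a 2-D array of discrete intensity values. The sketch below uses Python with NumPy (an editorial illustration, not from the book, whose companion software is MATLAB), and the 4 x 4 array values are invented purely for demonstration.

```python
import numpy as np

# A grayscale digital image is a 2-D array of discrete intensity values:
# f[x, y] holds the gray level at spatial coordinates (x, y). These
# values are made up; in practice they come from a file or a sensor.
f = np.array([[ 10,  10,  20,  20],
              [ 10,  50,  60,  20],
              [ 30, 120, 200,  40],
              [ 30,  40,  40, 255]], dtype=np.uint8)   # 8-bit intensities

rows, cols = f.shape          # spatial size of the image in pixels
print(f[2, 1])                # gray level of a single pixel -> 120
print(f.min(), f.max())       # intensity range (0 to 255 for 8-bit data)
print(f.mean())               # average intensity: image in, one number out
```

The last line is exactly the "average intensity" computation mentioned earlier: its input is an image but its output is a single number, which is why a strict images-in, images-out definition would exclude it from image processing.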
Mid-level processing of images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 18 6/16/2017 2:01:58 PM 1.2 The Origins of Digital Image Processing 19 involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision. Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encom- passes processes whose inputs and outputs are images and, in addition, includes pro- cesses that extract attributes from images up to, and including, the recognition of individual objects. As an illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area con- taining the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement “making sense of.” As will become evident shortly, digital image processing, as we have defined it, is used routinely in a broad range of areas of exceptional social and economic value. The concepts devel- oped in the following chapters are the foundation for the methods used in those application areas. 1.2 THE ORIGINS OF DIGITAL IMAGE PROCESSING 1.2 One of the earliest applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission, then reconstructed them at the receiving end. Figure 1.1 was transmitted in this way and reproduced on a telegraph printer fitted with typefaces simulating a halftone pattern. Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of FIGURE 1.1 A digital picture produced in 1921 from a coded tape by a telegraph printer with special typefaces. (McFarlane.) [References in the bibliography at the end of the book are listed in alphabetical order by authors’ last names.] www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 19 6/16/2017 2:01:58 PM 20 Chapter 1 Introduction FIGURE 1.2 A digital picture made in 1922 from a tape punched after the signals had crossed the Atlantic twice. (McFarlane.) intensity levels. The printing method used to obtain Fig. 
1.1 was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal. Figure 1.2 shows an image obtained using this method. The improvements over Fig. 1.1 are evident, both in tonal quality and in resolution. The early Bartlane systems were capable of coding images in five distinct levels of gray. This capability was increased to 15 levels in 1929. Figure 1.3 is typical of the type of images that could be obtained using the 15-tone equipment. During this period, introduction of a system for developing a film plate via light beams that were modulated by the coded picture tape improved the reproduction process consider- ably. Although the examples just cited involve digital images, they are not considered digital image processing results in the context of our definition, because digital com- puters were not used in their creation. Thus, the history of digital image processing is intimately tied to the development of the digital computer. In fact, digital images require so much storage and computational power that progress in the field of digi- tal image processing has been dependent on the development of digital computers and of supporting technologies that include data storage, display, and transmission. FIGURE 1.3 Unretouched cable picture of Generals Pershing (right) and Foch, transmitted in 1929 from London to New York by 15-tone equipment. (McFarlane.) www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 20 6/16/2017 2:01:58 PM 1.2 The Origins of Digital Image Processing 21 The concept of a computer dates back to the invention of the abacus in Asia Minor, more than 5000 years ago. More recently, there have been developments in the past two centuries that are the foundation of what we call a computer today. However, the basis for what we call a modern digital computer dates back to only the 1940s, with the introduction by John von Neumann of two key concepts: (1) a memory to hold a stored program and data, and (2) conditional branching. These two ideas are the foundation of a central processing unit (CPU), which is at the heart of computers today. Starting with von Neumann, there were a series of key advanc- es that led to computers powerful enough to be used for digital image processing. Briefly, these advances may be summarized as follows: (1) the invention of the tran- sistor at Bell Laboratories in 1948; (2) the development in the 1950s and 1960s of the high-level programming languages COBOL (Common Business-Oriented Lan- guage) and FORTRAN (Formula Translator); (3) the invention of the integrated circuit (IC) at Texas Instruments in 1958; (4) the development of operating systems in the early 1960s; (5) the development of the microprocessor (a single chip consist- ing of a CPU, memory, and input and output controls) by Intel in the early 1970s; (6) the introduction by IBM of the personal computer in 1981; and (7) progressive miniaturization of components, starting with large-scale integration (LI) in the late 1970s, then very-large-scale integration (VLSI) in the 1980s, to the present use of ultra-large-scale integration (ULSI) and experimental nonotechnologies. Concur- rent with these advances were developments in the areas of mass storage and display systems, both of which are fundamental requirements for digital image processing. The first computers powerful enough to carry out meaningful image processing tasks appeared in the early 1960s. 
The birth of what we call digital image processing today can be traced to the availability of those machines, and to the onset of the space program during that period. It took the combination of those two developments to bring into focus the potential of digital image processing for solving problems of practical significance. Work on using computer techniques for improving images from a space probe began at the Jet Propulsion Laboratory (Pasadena, California) in 1964, when pictures of the moon transmitted by Ranger 7 were processed by a computer to correct various types of image distortion inherent in the on-board television camera. Figure 1.4 shows the first image of the moon taken by Ranger 7 on July 31, 1964 at 9:09 A.M. Eastern Daylight Time (EDT), about 17 minutes before impacting the lunar surface (the markers, called reseau marks, are used for geometric corrections, as discussed in Chapter 2). This also is the first image of the moon taken by a U.S. spacecraft. The imaging lessons learned with Ranger 7 served as the basis for improved methods used to enhance and restore images from the Surveyor missions to the moon, the Mariner series of flyby missions to Mars, the Apollo manned flights to the moon, and others.

[FIGURE 1.4 The first picture of the moon by a U.S. spacecraft. Ranger 7 took this image on July 31, 1964 at 9:09 A.M. EDT, about 17 minutes before impacting the lunar surface. (Courtesy of NASA.)]

In parallel with space applications, digital image processing techniques began in the late 1960s and early 1970s to be used in medical imaging, remote Earth resources observations, and astronomy. The invention in the early 1970s of computerized axial tomography (CAT), also called computerized tomography (CT) for short, is one of the most important events in the application of image processing in medical diagnosis. Computerized axial tomography is a process in which a ring of detectors encircles an object (or patient) and an X-ray source, concentric with the detector ring, rotates about the object. The X-rays pass through the object and are collected at the opposite end by the corresponding detectors in the ring. This procedure is repeated as the source rotates. Tomography consists of algorithms that use the sensed data to construct an image that represents a "slice" through the object (see the short sketch following this passage). Motion of the object in a direction perpendicular to the ring of detectors produces a set of such slices, which constitute a three-dimensional (3-D) rendition of the inside of the object. Tomography was invented independently by Sir Godfrey N. Hounsfield and Professor Allan M. Cormack, who shared the 1979 Nobel Prize in Medicine for their invention. It is interesting to note that X-rays were discovered in 1895 by Wilhelm Conrad Roentgen, for which he received the 1901 Nobel Prize for Physics. These two inventions, nearly 100 years apart, led to some of the most important applications of image processing today.

From the 1960s until the present, the field of image processing has grown vigorously. In addition to applications in medicine and the space program, digital image processing techniques are now used in a broad range of applications. Computer procedures are used to enhance the contrast or code the intensity levels into color for easier interpretation of X-rays and other images used in industry, medicine, and the biological sciences.
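To make the tomography description above more concrete, here is a minimal, unfiltered backprojection sketch, again in Python with NumPy. It is only a toy illustration of the principle and not the book's implementation (the book develops image reconstruction from projections in Chapter 5); the function and variable names are made up for this example.

```python
import numpy as np

def backproject(sinogram, angles, size):
    """Toy unfiltered backprojection.

    sinogram : 2-D array, one row of detector readings (ray sums) per angle
    angles   : projection angles in radians, one per sinogram row
    size     : side length in pixels of the square slice to reconstruct
    """
    recon = np.zeros((size, size))
    # Pixel coordinates centered on the rotation axis of the scanner.
    c = np.arange(size) - (size - 1) / 2.0
    X, Y = np.meshgrid(c, c)
    det = np.arange(sinogram.shape[1]) - (sinogram.shape[1] - 1) / 2.0
    for readings, theta in zip(sinogram, angles):
        # Detector position onto which each pixel projects at this angle.
        t = X * np.cos(theta) + Y * np.sin(theta)
        # Smear the measured ray sums back across the image along the rays.
        recon += np.interp(t, det, readings)
    return recon / len(angles)
```

Summing the smeared projections over many angles yields a blurred but recognizable slice; in practice a filtering step is added (filtered backprojection) to remove that blur.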
Geographers use the same or similar techniques to study pollu- tion patterns from aerial and satellite imagery. Image enhancement and restoration procedures are used to process degraded images of unrecoverable objects, or experi- mental results too expensive to duplicate. In archeology, image processing meth- ods have successfully restored blurred pictures that were the only available records of rare artifacts lost or damaged after being photographed. In physics and related fields, computer techniques routinely enhance images of experiments in areas such as high-energy plasmas and electron microscopy. Similarly successful applications of image processing concepts can be found in astronomy, biology, nuclear medicine, law enforcement, defense, and industry. www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 22 6/16/2017 2:01:59 PM 1.3 Examples of Fields that Use Digital Image Processing 23 These examples illustrate processing results intended for human interpretation. The second major area of application of digital image processing techniques men- tioned at the beginning of this chapter is in solving problems dealing with machine perception. In this case, interest is on procedures for extracting information from an image, in a form suitable for computer processing. Often, this information bears little resemblance to visual features that humans use in interpreting the content of an image. Examples of the type of information used in machine perception are statistical moments, Fourier transform coefficients, and multidimensional distance measures. Typical problems in machine perception that routinely utilize image pro- cessing techniques are automatic character recognition, industrial machine vision for product assembly and inspection, military recognizance, automatic processing of fingerprints, screening of X-rays and blood samples, and machine processing of aer- ial and satellite imagery for weather prediction and environmental assessment. The continuing decline in the ratio of computer price to performance, and the expansion of networking and communication bandwidth via the internet, have created unprec- edented opportunities for continued growth of digital image processing. Some of these application areas will be illustrated in the following section. 1.3 EXAMPLES OF FIELDS THAT USE DIGITAL IMAGE PROCESSING 1.3 Today, there is almost no area of technical endeavor that is not impacted in some way by digital image processing. We can cover only a few of these applications in the context and space of the current discussion. However, limited as it is, the material presented in this section will leave no doubt in your mind regarding the breadth and importance of digital image processing. We show in this section numerous areas of application, each of which routinely utilizes the digital image processing techniques developed in the following chapters. Many of the images shown in this section are used later in one or more of the examples given in the book. Most images shown are digital images. The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image pro- cessing applications is to categorize images according to their source (e.g., X-ray, visual, infrared, and so on).The principal energy source for images in use today is the electromagnetic energy spectrum. 
Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we will discuss briefly how images are generated in these various categories, and the areas in which they are applied. Methods for converting images into digital form will be discussed in the next chapter.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in Fig. 1.5, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct, but rather transition smoothly from one to the other.

[FIGURE 1.5 The electromagnetic spectrum arranged according to energy per photon, from about 10^6 electron volts (gamma rays, X-rays) down through the ultraviolet, visible, infrared, and microwave bands to about 10^-9 electron volts (radio waves).]

GAMMA-RAY IMAGING
Major uses of imaging based on gamma rays include nuclear medicine and astronomical observations. In nuclear medicine, the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors. Figure 1.6(a) shows an image of a complete bone scan obtained by using gamma-ray imaging. Images of this sort are used to locate sites of bone pathology, such as infections or tumors. Figure 1.6(b) shows another major modality of nuclear imaging called positron emission tomography (PET). The principle is the same as with X-ray tomography, mentioned briefly in Section 1.2. However, instead of using an external source of X-ray energy, the patient is given a radioactive isotope that emits positrons as it decays. When a positron meets an electron, both are annihilated and two gamma rays are given off. These are detected and a tomographic image is created using the basic principles of tomography. The image shown in Fig. 1.6(b) is one sample of a sequence that constitutes a 3-D rendition of the patient. This image shows a tumor in the brain and another in the lung, easily visible as small white masses.

A star in the constellation of Cygnus exploded about 15,000 years ago, generating a superheated, stationary gas cloud (known as the Cygnus Loop) that glows in a spectacular array of colors. Figure 1.6(c) shows an image of the Cygnus Loop in the gamma-ray band. Unlike the two examples in Figs. 1.6(a) and (b), this image was obtained using the natural radiation of the object being imaged. Finally, Fig. 1.6(d) shows an image of gamma radiation from a valve in a nuclear reactor. An area of strong radiation is seen in the lower left side of the image.

X-RAY IMAGING
X-rays are among the oldest sources of EM radiation used for imaging.
The best known use of X-rays is medical diagnostics, but they are also used extensively in industry and other areas, such as astronomy. X-rays for medical and industrial imag- ing are generated using an X-ray tube, which is a vacuum tube with a cathode and anode. The cathode is heated, causing free electrons to be released. These electrons flow at high speed to the positively charged anode. When the electrons strike a www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 24 6/16/2017 2:01:59 PM 1.3 Examples of Fields that Use Digital Image Processing 25 a b c d FIGURE 1.6 Examples of gamma-ray imaging. (a) Bone scan. (b) PET image. (c) Cygnus Loop. (d) Gamma radia- tion (bright spot) from a reactor valve. (Images courtesy of (a) G.E. Medical Systems; (b) Dr. Michael E. Casey, CTI PET Systems; (c) NASA; (d) Professors Zhong He and David K. Wehe, University of Michigan.) nucleus, energy is released in the form of X-ray radiation. The energy (penetrat- ing power) of X-rays is controlled by a voltage applied across the anode, and by a current applied to the filament in the cathode. Figure 1.7(a) shows a familiar chest X-ray generated simply by placing the patient between an X-ray source and a film sensitive to X-ray energy. The intensity of the X-rays is modified by absorption as they pass through the patient, and the resulting energy falling on the film develops it, much in the same way that light develops photographic film. In digital radiography, www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 25 6/16/2017 2:01:59 PM 26 Chapter 1 Introduction a d c b e FIGURE 1.7 Examples of X-ray imaging. (a) Chest X-ray. (b) Aortic angiogram. (c) Head CT. (d) Circuit boards. (e) Cygnus Loop. (Images courtesy of (a) and (c) Dr. David R. Pickens, Dept. of Radiology & Radiological Sciences, Vanderbilt University Medical Center; (b) Dr. Thomas R. Gest, Division of Anatomical Sciences, Univ. of Michigan Medical School; (d) Mr. Joseph E. Pascente, Lixi, Inc.; and (e) NASA.) digital images are obtained by one of two methods: (1) by digitizing X-ray films; or; (2) by having the X-rays that pass through the patient fall directly onto devices (such as a phosphor screen) that convert X-rays to light. The light signal in turn is captured by a light-sensitive digitizing system. We will discuss digitization in more detail in Chapters 2 and 4. www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 26 6/16/2017 2:01:59 PM 1.3 Examples of Fields that Use Digital Image Processing 27 Angiography is another major application in an area called contrast enhancement radiography. This procedure is used to obtain images of blood vessels, called angio- grams. A catheter (a small, flexible, hollow tube) is inserted, for example, into an artery or vein in the groin. The catheter is threaded into the blood vessel and guided to the area to be studied. When the catheter reaches the site under investigation, an X-ray contrast medium is injected through the tube. This enhances the contrast of the blood vessels and enables a radiologist to see any irregularities or blockages. Figure 1.7(b) shows an example of an aortic angiogram. The catheter can be seen being inserted into the large blood vessel on the lower left of the picture. Note the high contrast of the large vessel as the contrast medium flows up in the direction of the kidneys, which are also visible in the image. 
As we will discuss further in Chapter 2, angiography is a major area of digital image processing, where image subtraction is used to further enhance the blood vessels being studied. Another important use of X-rays in medical imaging is computerized axial tomog- raphy (CAT). Due to their resolution and 3-D capabilities, CAT scans revolution- ized medicine from the moment they first became available in the early 1970s. As noted in Section 1.2, each CAT image is a “slice” taken perpendicularly through the patient. Numerous slices are generated as the patient is moved in a longitudinal direction. The ensemble of such images constitutes a 3-D rendition of the inside of the body, with the longitudinal resolution being proportional to the number of slice images taken. Figure 1.7(c) shows a typical CAT slice image of a human head. Techniques similar to the ones just discussed, but generally involving higher energy X-rays, are applicable in industrial processes. Figure 1.7(d) shows an X-ray image of an electronic circuit board. Such images, representative of literally hundreds of industrial applications of X-rays, are used to examine circuit boards for flaws in manufacturing, such as missing components or broken traces. Industrial CAT scans are useful when the parts can be penetrated by X-rays, such as in plastic assemblies, and even large bodies, such as solid-propellant rocket motors. Figure 1.7(e) shows an example of X-ray imaging in astronomy. This image is the Cygnus Loop of Fig. 1.6(c), but imaged in the X-ray band. IMAGING IN THE ULTRAVIOLET BAND Applications of ultraviolet “light” are varied. They include lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical observations. We illustrate imaging in this band with examples from microscopy and astronomy. Ultraviolet light is used in fluorescence microscopy, one of the fastest growing areas of microscopy. Fluorescence is a phenomenon discovered in the middle of the nineteenth century, when it was first observed that the mineral fluorspar fluoresces when ultraviolet light is directed upon it. The ultraviolet light itself is not visible, but when a photon of ultraviolet radiation collides with an electron in an atom of a fluo- rescent material, it elevates the electron to a higher energy level. Subsequently, the excited electron relaxes to a lower level and emits light in the form of a lower-energy photon in the visible (red) light region. Important tasks performed with a fluores- cence microscope are to use an excitation light to irradiate a prepared specimen, and then to separate the much weaker radiating fluorescent light from the brighter www.EBooksWorld.ir DIP4E_GLOBAL_Print_Ready.indb 27 6/16/2017 2:01:59 PM 28 Chapter 1 Introduction a b c FIGURE 1.8 Examples of ultraviolet imaging. (a) Normal corn. (b) Corn infected by smut. (c) Cygnus Loop. (Images (a) and (b) courtesy of Dr. Michael W. Davidson, Florida State University, (c) NASA.) excitation light. Thus, only the emission light reaches the eye or other detector. The resulting fluorescing areas shine against a dark background with sufficient contrast to permit detection. The darker the background of the nonfluorescing material, the more efficient the instrument. Fluorescence microscopy is an excellent method for studying materials that can be made to fluoresce, either in their natural form (primary fluorescence) or when treat- ed with chemicals capable of fluorescing (secondary fluorescence). 
Figures 1.8(a) and (b) show results typical of the capability of fluorescence microscopy. Figure 1.8(a) shows a fluorescence microscope image of normal corn, and Fig. 1.8(b) shows corn infected by "smut," a disease of cereals, corn, grasses, onions, and sorghum that can be caused by any one of more than 700 species of parasitic fungi. Corn smut is particularly harmful because corn is one of the principal food sources in the world. As another illustration, Fig. 1.8(c) shows the Cygnus Loop imaged in the high-energy region of the ultraviolet band.

IMAGING IN THE VISIBLE AND INFRARED BANDS
Considering that the visual band of the electromagnetic spectrum is the most familiar in all our activities, it is not surprising that imaging in this band outweighs by far all the others in terms of breadth of application. The infrared band often is used in conjunction with visual imaging, so we have grouped the visible and infrared bands in this section for the purpose of illustration. We consider in the following discussion applications in light microscopy, astronomy, remote sensing, industry, and law enforcement.

Figure 1.9 shows several examples of images obtained with a light microscope. The examples range from pharmaceuticals and microinspection to materials characterization. Even in microscopy alone, the application areas are too numerous to detail here. It is not difficult to conceptualize the types of processes one might apply to these images, ranging from enhancement to measurements.

[FIGURE 1.9 Examples of light microscopy images. (a) Taxol (anticancer agent), magnified 250×. (b) Cholesterol, 40×. (c) Microprocessor, 60×. (d) Nickel oxide thin film, 600×. (e) Surface of audio CD, 1750×. (f) Organic superconductor, 450×. (Images courtesy of Dr. Michael W. Davidson, Florida State University.)]

Another major area of visual processing is remote sensing, which usually includes several bands in the visual and infrared regions of the spectrum. Table 1.1 shows the so-called thematic bands in NASA's LANDSAT satellites. The primary function of LANDSAT is to obtain and transmit images of the Earth from space, for purposes of monitoring environmental conditions on the planet. The bands are expressed in terms of wavelength, with 1 μm being equal to 10^-6 m (we will discuss the wavelength regions of the electromagnetic spectrum in more detail in Chapter 2). Note the characteristics and uses of each band in Table 1.1.

TABLE 1.1 Thematic bands of NASA's LANDSAT satellite.
Band No.  Name                 Wavelength (μm)   Characteristics and Uses
1         Visible blue         0.45–0.52         Maximum water penetration
2         Visible green        0.53–0.61         Measures plant vigor
3         Visible red          0.63–0.69         Vegetation discrimination
4         Near infrared        0.78–0.90         Biomass and shoreline mapping
5         Middle infrared      1.55–1.75         Moisture content: soil/vegetation
6         Thermal infrared     10.4–12.5         Soil moisture; thermal mapping
7         Short-wave infrared  2.09–2.35         Mineral mapping

In order to develop a basic appreciation for the power of this type of multispectral imaging, consider Fig. 1.10, which shows one image for each of the spectral bands in Table 1.1. The area imaged is Washington D.C., which includes features such as buildings, roads, vegetation, and a major river (the Potomac) going through the city. Images of population centers are used over time to assess population growth and shift patterns, pollution, and other factors affecting the environment. The differences between visual and infrared image features are quite noticeable in these images. Observe, for example, how well defined the river is from its surroundings in Bands 4 and 5.

Weather observation and prediction also are major applications of multispectral imaging from satellites. For example, Fig. 1.11 is an image of Hurricane Katrina, one of the most devastating storms in recent memory in the Western Hemisphere. This image was taken by a National Oceanographic and Atmospheric Administration (NOAA) satellite using sensors in the visible and infrared bands. The eye of the hur-