Fondamenti di Informatica (Fundamentals of Computer Science) PDF


J. Glenn Brookshear and Dennis Brylow

Tags

computer science, introduction to computer science, computer algorithms, programming

Summary

This is an introduction to computer science from a textbook. It covers the role of algorithms, the history of computing, and a wide range of topics in computer science. The book is intended for an undergraduate computer science curriculum.

Full Transcript


computer science: AN OVERVIEW, 12th Edition, Global Edition
J. Glenn Brookshear and Dennis Brylow
Global Edition contributions by Manasa S.

Contents

Chapter 0 Introduction
0.1 The Role of Algorithms
0.2 The History of Computing
0.3 An Outline of Our Study
0.4 The Overarching Themes of Computer Science

Chapter 1 Data Storage
1.1 Bits and Their Storage
1.2 Main Memory
1.3 Mass Storage
1.4 Representing Information as Bit Patterns
*1.5 The Binary System
*1.6 Storing Integers
*1.7 Storing Fractions
*1.8 Data and Programming
*1.9 Data Compression
*1.10 Communication Errors

Chapter 2 Data Manipulation
2.1 Computer Architecture
2.2 Machine Language
2.3 Program Execution
*2.4 Arithmetic/Logic Instructions
*2.5 Communicating with Other Devices
*2.6 Programming Data Manipulation
*2.7 Other Architectures

Chapter 3 Operating Systems
3.1 The History of Operating Systems
3.2 Operating System Architecture
3.3 Coordinating the Machine's Activities

*Asterisks indicate suggestions for optional sections.
*3.4 Handling Competition Among Processes
3.5 Security

Chapter 4 Networking and the Internet
4.1 Network Fundamentals
4.2 The Internet
4.3 The World Wide Web
*4.4 Internet Protocols
4.5 Security

Chapter 5 Algorithms
5.1 The Concept of an Algorithm
5.2 Algorithm Representation
5.3 Algorithm Discovery
5.4 Iterative Structures
5.5 Recursive Structures
5.6 Efficiency and Correctness

Chapter 6 Programming Languages
6.1 Historical Perspective
6.2 Traditional Programming Concepts
6.3 Procedural Units
6.4 Language Implementation
6.5 Object-Oriented Programming
*6.6 Programming Concurrent Activities
*6.7 Declarative Programming

Chapter 7 Software Engineering
7.1 The Software Engineering Discipline
7.2 The Software Life Cycle
7.3 Software Engineering Methodologies
7.4 Modularity
7.5 Tools of the Trade
7.6 Quality Assurance
7.7 Documentation
7.8 The Human-Machine Interface
7.9 Software Ownership and Liability

Chapter 8 Data Abstractions
8.1 Basic Data Structures
8.2 Related Concepts
8.3 Implementing Data Structures
8.4 A Short Case Study
8.5 Customized Data Types
8.6 Classes and Objects
*8.7 Pointers in Machine Language

Chapter 9 Database Systems
9.1 Database Fundamentals
9.2 The Relational Model
*9.3 Object-Oriented Databases
*9.4 Maintaining Database Integrity
*9.5 Traditional File Structures
9.6 Data Mining
9.7 Social Impact of Database Technology

Chapter 10 Computer Graphics
10.1 The Scope of Computer Graphics
10.2 Overview of 3D Graphics
10.3 Modeling
10.4 Rendering
*10.5 Dealing with Global Lighting
10.6 Animation

Chapter 11 Artificial Intelligence
11.1 Intelligence and Machines
11.2 Perception
11.3 Reasoning
11.4 Additional Areas of Research
11.5 Artificial Neural Networks
11.6 Robotics
11.7 Considering the Consequences

Chapter 12 Theory of Computation
12.1 Functions and Their Computation
12.2 Turing Machines
12.3 Universal Programming Languages
12.4 A Noncomputable Function
12.5 Complexity of Problems
*12.6 Public-Key Cryptography

Appendixes
A ASCII
B Circuits to Manipulate Two's Complement Representations
C A Simple Machine Language
D High-Level Programming Languages
E The Equivalence of Iterative and Recursive Structures
F Answers to Questions & Exercises

Index

Chapter 0 Introduction

In this preliminary chapter we consider the scope of computer science, develop a historical perspective, and establish a foundation from which to launch our study.

Chapter outline: 0.1 The Role of Algorithms; 0.2 The History of Computing; 0.3 An Outline of Our Study; 0.4 The Overarching Themes of Computer Science. (The chapter-opener graphic interleaves the themes Algorithms, Abstraction, Creativity, Data, Programming, Internet, and Impact.)

Computer science is the discipline that seeks to build a scientific foundation for such topics as computer design, computer programming, information processing, algorithmic solutions of problems, and the algorithmic process itself. It provides the underpinnings for today's computer applications as well as the foundations for tomorrow's computing infrastructure.

This book provides a comprehensive introduction to this science. We will investigate a wide range of topics, including most of those that constitute a typical university computer science curriculum. We want to appreciate the full scope and dynamics of the field. Thus, in addition to the topics themselves, we will be interested in their historical development, the current state of research, and prospects for the future.
Our goal is to establish a functional understanding of computer science—one that will support those who wish to pursue more specialized studies in the science as well as one that will enable those in other fields to flourish in an increasingly technical society.

0.1 The Role of Algorithms

We begin with the most fundamental concept of computer science—that of an algorithm. Informally, an algorithm is a set of steps that defines how a task is performed. (We will be more precise later, in Chapter 5.) For example, there are algorithms for cooking (called recipes), for finding your way through a strange city (more commonly called directions), for operating washing machines (usually displayed on the inside of the washer's lid or perhaps on the wall of a laundromat), for playing music (expressed in the form of sheet music), and for performing magic tricks (Figure 0.1).

Before a machine such as a computer can perform a task, an algorithm for performing that task must be discovered and represented in a form that is compatible with the machine. A representation of an algorithm is called a program. For the convenience of humans, computer programs are usually printed on paper or displayed on computer screens. For the convenience of machines, programs are encoded in a manner compatible with the technology of the machine. The process of developing a program, encoding it in machine-compatible form, and inserting it into a machine is called programming. Programs, and the algorithms they represent, are collectively referred to as software, in contrast to the machinery itself, which is known as hardware.

The study of algorithms began as a subject in mathematics. Indeed, the search for algorithms was a significant activity of mathematicians long before the development of today's computers. The goal was to find a single set of directions that described how all problems of a particular type could be solved.
One of the best-known examples of this early research is the long division algorithm for finding the quotient of two multiple-digit numbers. Another example is the Euclidean algorithm, discovered by the ancient Greek mathematician Euclid, for finding the greatest common divisor of two positive integers (Figure 0.2).

Once an algorithm for performing a task has been found, the performance of that task no longer requires an understanding of the principles on which the algorithm is based. Instead, the performance of the task is reduced to the process of merely following directions. (We can follow the long division algorithm to find a quotient or the Euclidean algorithm to find a greatest common divisor without understanding why the algorithm works.) In a sense, the intelligence required to solve the problem at hand is encoded in the algorithm.

Capturing and conveying intelligence (or at least intelligent behavior) by means of algorithms allows us to build machines that perform useful tasks. Consequently, the level of intelligence displayed by machines is limited by the intelligence that can be conveyed through algorithms. We can construct a machine to perform a task only if an algorithm exists for performing that task.

Figure 0.1 An algorithm for a magic trick

Effect: The performer places some cards from a normal deck of playing cards face down on a table and mixes them thoroughly while spreading them out on the table. Then, as the audience requests either red or black cards, the performer turns over cards of the requested color.

Secret and Patter:
Step 1. From a normal deck of cards, select ten red cards and ten black cards. Deal these cards face up in two piles on the table according to color.
Step 2. Announce that you have selected some red cards and some black cards.
Step 3. Pick up the red cards. Under the pretense of aligning them into a small deck, hold them face down in your left hand and, with the thumb and first finger of your right hand, pull back on each end of the deck so that each card is given a slightly backward curve. Then place the deck of red cards face down on the table as you say, "Here are the red cards in this stack."
Step 4. Pick up the black cards. In a manner similar to that in step 3, give these cards a slight forward curve. Then return these cards to the table in a face-down deck as you say, "And here are the black cards in this stack."
Step 5. Immediately after returning the black cards to the table, use both hands to mix the red and black cards (still face down) as you spread them out on the tabletop. Explain that you are thoroughly mixing the cards.
Step 6. As long as there are face-down cards on the table, repeatedly execute the following steps:
  6.1. Ask the audience to request either a red or a black card.
  6.2. If the color requested is red and there is a face-down card with a concave appearance, turn over such a card while saying, "Here is a red card."
  6.3. If the color requested is black and there is a face-down card with a convex appearance, turn over such a card while saying, "Here is a black card."
  6.4. Otherwise, state that there are no more cards of the requested color and turn over the remaining cards to prove your claim.

Figure 0.2 The Euclidean algorithm for finding the greatest common divisor of two positive integers

Description: This algorithm assumes that its input consists of two positive integers and proceeds to compute the greatest common divisor of these two values.

Procedure:
Step 1. Assign M and N the value of the larger and smaller of the two input values, respectively.
Step 2. Divide M by N, and call the remainder R.
Step 3. If R is not 0, then assign M the value of N, assign N the value of R, and return to step 2; otherwise, the greatest common divisor is the value currently assigned to N.
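The three steps of Figure 0.2 translate almost line for line into a program. Here is a minimal sketch in Python; the language choice, the function name `euclidean_gcd`, and the lowercase variable names are ours, not the book's (the book develops its own pseudocode in Chapter 5):

```python
def euclidean_gcd(a, b):
    """Greatest common divisor of two positive integers,
    following the steps of Figure 0.2."""
    # Step 1: assign M the larger input and N the smaller.
    m, n = max(a, b), min(a, b)
    while True:
        # Step 2: divide M by N, and call the remainder R.
        r = m % n
        # Step 3: if R is 0, N holds the greatest common divisor;
        # otherwise repeat step 2 with M = N and N = R.
        if r == 0:
            return n
        m, n = n, r

print(euclidean_gcd(24, 36))  # 12
print(euclidean_gcd(17, 5))   # 1
```

Note that, just as the chapter observes, nothing in this code "understands" why the procedure works; it merely follows the directions, and the answer comes out regardless.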
In turn, if no algorithm exists for solving a problem, then the solution of that problem lies beyond the capabilities of machines.

Identifying the limitations of algorithmic capabilities solidified as a subject in mathematics in the 1930s with the publication of Kurt Gödel's incompleteness theorem. This theorem essentially states that in any mathematical theory encompassing our traditional arithmetic system, there are statements whose truth or falseness cannot be established by algorithmic means. In short, any complete study of our arithmetic system lies beyond the capabilities of algorithmic activities. This realization shook the foundations of mathematics, and the study of algorithmic capabilities that ensued was the beginning of the field known today as computer science. Indeed, it is the study of algorithms that forms the core of computer science.

0.2 The History of Computing

Today's computers have an extensive genealogy. One of the earlier computing devices was the abacus. History tells us that it probably had its roots in ancient China and was used in the early Greek and Roman civilizations. The machine is quite simple, consisting of beads strung on rods that are in turn mounted in a rectangular frame (Figure 0.3). As the beads are moved back and forth on the rods, their positions represent stored values. It is in the positions of the beads that this "computer" represents and stores data. For control of an algorithm's execution, the machine relies on the human operator. Thus the abacus alone is merely a data storage system; it must be combined with a human to create a complete computational machine.

In the time period after the Middle Ages and before the Modern Era, the quest for more sophisticated computing machines was seeded. A few inventors began to experiment with the technology of gears. Among these were Blaise Pascal (1623–1662) of France, Gottfried Wilhelm Leibniz (1646–1716) of Germany, and Charles Babbage (1792–1871) of England.
These machines represented data through gear positioning, with data being entered mechanically by establishing initial gear positions. Output from Pascal's and Leibniz's machines was achieved by observing the final gear positions. Babbage, on the other hand, envisioned machines that would print results of computations on paper so that the possibility of transcription errors would be eliminated.

As for the ability to follow an algorithm, we can see a progression of flexibility in these machines. Pascal's machine was built to perform only addition. Consequently, the appropriate sequence of steps was embedded into the structure of the machine itself. In a similar manner, Leibniz's machine had its algorithms firmly embedded in its architecture, although the operator could select from a variety of arithmetic operations it offered. Babbage's Difference Engine (of which only a demonstration model was constructed) could be modified to perform a variety of calculations, but his Analytical Engine (never funded for construction) was designed to read instructions in the form of holes in paper cards. Thus Babbage's Analytical Engine was programmable. In fact, Augusta Ada Byron (Ada Lovelace), who published a paper in which she demonstrated how Babbage's Analytical Engine could be programmed to perform various computations, is often identified today as the world's first programmer.

Figure 0.3 Chinese wooden abacus (Pink Badger/Fotolia)

The idea of communicating an algorithm via holes in paper was not originated by Babbage. He got the idea from Joseph Jacquard (1752–1834), who, in 1801, had developed a weaving loom in which the steps to be performed during the weaving process were determined by patterns of holes in large thick cards made of wood (or cardboard). In this manner, the algorithm followed by the loom could be changed easily to produce different woven designs.
Another beneficiary of Jacquard's idea was Herman Hollerith (1860–1929), who applied the concept of representing information as holes in paper cards to speed up the tabulation process in the 1890 U.S. census. (It was this work by Hollerith that led to the creation of IBM.) Such cards ultimately came to be known as punched cards and survived as a popular means of communicating with computers well into the 1970s.

Nineteenth-century technology was unable to produce the complex gear-driven machines of Pascal, Leibniz, and Babbage cost-effectively. But with the advances in electronics in the early 1900s, this barrier was overcome. Examples of this progress include the electromechanical machine of George Stibitz, completed in 1940 at Bell Laboratories, and the Mark I, completed in 1944 at Harvard University by Howard Aiken and a group of IBM engineers. These machines made heavy use of electronically controlled mechanical relays. In this sense they were obsolete almost as soon as they were built, because other researchers were applying the technology of vacuum tubes to construct totally electronic computers. The first of these vacuum tube machines was apparently the Atanasoff-Berry machine, constructed during the period from 1937 to 1941 at Iowa State College (now Iowa State University) by John Atanasoff and his assistant, Clifford Berry. Another was a machine called Colossus, built under the direction of Tommy Flowers in England to decode German messages during the latter part of World War II.

Figure 0.4 Three women operating the ENIAC's (Electronic Numerical Integrator And Computer) main control panel while the machine was at the Moore School. The machine was later moved to the U.S. Army's Ballistics Research Laboratory. (Courtesy U.S. Army.)
(Actually, as many as ten of these machines were apparently built, but military secrecy and issues of national security kept their existence from becoming part of the "computer family tree.") Other, more flexible machines, such as the ENIAC (Electronic Numerical Integrator And Computer) developed by John Mauchly and J. Presper Eckert at the Moore School of Electrical Engineering, University of Pennsylvania, soon followed (Figure 0.4).

From that point on, the history of computing machines has been closely linked to advancing technology, including the invention of transistors (for which physicists William Shockley, John Bardeen, and Walter Brattain were awarded a Nobel Prize) and the subsequent development of complete circuits constructed as single units, called integrated circuits (for which Jack Kilby also won a Nobel Prize in physics). With these developments, the room-sized machines of the 1940s were reduced over the decades to the size of single cabinets. At the same time, the processing power of computing machines began to double every two years (a trend that has continued to this day). As work on integrated circuitry progressed, many of the components within a computer became readily available on the open market as integrated circuits encased in toy-sized blocks of plastic called chips.

A major step toward popularizing computing was the development of desktop computers. The origins of these machines can be traced to the computer hobbyists who built homemade computers from combinations of chips. It was within this "underground" of hobby activity that Steve Jobs and Stephen Wozniak built a commercially viable home computer and, in 1976, established Apple Computer, Inc. (now Apple Inc.) to manufacture and market their products. Other companies that marketed similar products were Commodore, Heathkit, and Radio Shack.

Babbage's Difference Engine

The machines designed by Charles Babbage were truly the forerunners of modern computer design. If technology had been able to produce his machines in an economically feasible manner, and if the data processing demands of commerce and government had been on the scale of today's requirements, Babbage's ideas could have led to a computer revolution in the 1800s. As it was, only a demonstration model of his Difference Engine was constructed in his lifetime. This machine determined numerical values by computing "successive differences." We can gain an insight into this technique by considering the problem of computing the squares of the integers. We begin with the knowledge that the square of 0 is 0, the square of 1 is 1, the square of 2 is 4, and the square of 3 is 9. With this, we can determine the square of 4 in the following manner (see the following diagram). We first compute the differences of the squares we already know: 1² − 0² = 1, 2² − 1² = 3, and 3² − 2² = 5. Then we compute the differences of these results: 3 − 1 = 2, and 5 − 3 = 2. Note that these differences are both 2. Assuming that this consistency continues (mathematics can show that it does), we conclude that the difference between the value (4² − 3²) and the value (3² − 2²) must also be 2. Hence (4² − 3²) must be 2 greater than (3² − 2²), so 4² − 3² = 7 and thus 4² = 3² + 7 = 16. Now that we know the square of 4, we could continue our procedure to compute the square of 5 based on the values of 1², 2², 3², and 4². (Although a more in-depth discussion of successive differences is beyond the scope of our current study, students of calculus may wish to observe that the preceding example is based on the fact that the derivative of y = x² is a straight line with a slope of 2.)

x    x²    First difference    Second difference
0     0
                  1
1     1                               2
                  3
2     4                               2
                  5
3     9                               2
                  7
4    16
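The appeal of successive differences for a gear-driven machine is that, after the setup, every step is a plain addition. A brief sketch in Python of the squares example from the Babbage's Difference Engine box above (the function name and structure are our own illustration, not Babbage's design):

```python
def squares_by_differences(count):
    """Generate the first `count` squares using only addition,
    in the manner of the Difference Engine: keep a running square
    and a running first difference, and repeatedly add the
    constant second difference (2)."""
    square, first_diff = 0, 1   # 0 squared is 0; 1 squared minus 0 squared is 1
    result = []
    for _ in range(count):
        result.append(square)
        square += first_diff    # next square = current square + first difference
        first_diff += 2         # first difference grows by the constant 2
    return result

print(squares_by_differences(6))  # [0, 1, 4, 9, 16, 25]
```

No multiplication appears anywhere in the loop, which is the point: the "intelligence" of squaring has been reduced to mechanically repeatable additions.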
Although these products were popular among computer hobbyists, they were not widely accepted by the business community, which continued to look to the well-established IBM and its large mainframe computers for the majority of its computing needs. In 1981, IBM introduced its first desktop computer, called the personal computer, or PC, whose underlying software was developed by a newly formed company known as Microsoft. The PC was an instant success and legitimized the desktop computer as an established commodity in the minds of the business community. Today, the term PC is widely used to refer to all those machines (from various manufacturers) whose design has evolved from IBM's initial desktop computer, most of which continue to be marketed with software from Microsoft. At times, however, the term PC is used interchangeably with the generic terms desktop or laptop.

Augusta Ada Byron

Augusta Ada Byron, Countess of Lovelace, has been the subject of much commentary in the computing community. She lived a somewhat tragic life of less than 37 years (1815–1852) that was complicated by poor health and the fact that she was a nonconformist in a society that limited the professional role of women. Although she was interested in a wide range of science, she concentrated her studies in mathematics. Her interest in "computer science" began when she became fascinated by the machines of Charles Babbage at a demonstration of a prototype of his Difference Engine in 1833. Her contribution to computer science stems from her translation from French into English of a paper discussing Babbage's designs for the Analytical Engine. To this translation, Babbage encouraged her to attach an addendum describing applications of the engine and containing examples of how the engine could be programmed to perform various tasks. Babbage's enthusiasm for Ada Byron's work was apparently motivated by his hope that its publication would lead to financial backing for the construction of his Analytical Engine. (As the daughter of Lord Byron, Ada Byron held celebrity status with potentially significant financial connections.) This backing never materialized, but Ada Byron's addendum has survived and is considered to contain the first examples of computer programs. The degree to which Babbage influenced Ada Byron's work is debated by historians. Some argue that Babbage made major contributions, whereas others contend that he was more of an obstacle than an aid. Nonetheless, Augusta Ada Byron is recognized today as the world's first programmer, a status that was certified by the U.S. Department of Defense when it named a prominent programming language (Ada) in her honor.

As the twentieth century drew to a close, the ability to connect individual computers in a world-wide system called the Internet was revolutionizing communication. In this context, Tim Berners-Lee (a British scientist) proposed a system by which documents stored on computers throughout the Internet could be linked together, producing a maze of linked information called the World Wide Web (often shortened to "Web"). To make the information on the Web accessible, software systems, called search engines, were developed to "sift through" the Web, "categorize" their findings, and then use the results to assist users researching particular topics. Major players in this field are Google, Yahoo, and Microsoft. These companies continue to expand their Web-related activities, often in directions that challenge our traditional way of thinking.

At the same time that desktop and laptop computers were being accepted and used in homes, the miniaturization of computing machines continued. Today, tiny computers are embedded within a wide variety of electronic appliances and devices.
Automobiles may now contain dozens of small computers running Global Positioning Systems (GPS), monitoring the function of the engine, and providing voice command services for controlling the car's audio and phone communication systems.

Perhaps the most revolutionary application of computer miniaturization is found in the expanding capabilities of smartphones, hand-held general-purpose computers on which telephony is only one of many applications.

Google

Founded in 1998, Google Inc. has become one of the world's most recognized technology companies. Its core service, the Google search engine, is used by millions of people to find documents on the World Wide Web. In addition, Google provides electronic mail service (called Gmail), an Internet-based video-sharing service (called YouTube), and a host of other Internet services (including Google Maps, Google Calendar, Google Earth, Google Books, and Google Translate). However, in addition to being a prime example of the entrepreneurial spirit, Google also provides examples of how expanding technology is challenging society. For example, Google's search engine has led to questions regarding the extent to which an international company should comply with the wishes of individual governments; YouTube has raised questions regarding the extent to which a company should be liable for information that others distribute through its services as well as the degree to which the company can claim ownership of that information; Google Books has generated concerns regarding the scope and limitations of intellectual property rights; and Google Maps has been accused of violating privacy rights.
More powerful than the supercomputers of prior decades, these pocket-sized devices are equipped with a rich array of sensors and interfaces including cameras, microphones, compasses, touch screens, accelerometers (to detect the phone's orientation and motion), and a number of wireless technologies to communicate with other smartphones and computers. Many argue that the smartphone is having a greater effect on global society than the PC revolution.

0.3 An Outline of Our Study

This text follows a bottom-up approach to the study of computer science, beginning with such hands-on topics as computer hardware and leading to the more abstract topics such as algorithm complexity and computability. The result is that our study follows a pattern of building larger and larger abstract tools as our understanding of the subject expands.

We begin by considering topics dealing with the design and construction of machines for executing algorithms. In Chapter 1 (Data Storage), we look at how information is encoded and stored within modern computers, and in Chapter 2 (Data Manipulation), we investigate the basic internal operation of a simple computer. Although part of this study involves technology, the general theme is technology independent. That is, such topics as digital circuit design, data encoding and compression systems, and computer architecture are relevant over a wide range of technology and promise to remain relevant regardless of the direction of future technology.

In Chapter 3 (Operating Systems), we study the software that controls the overall operation of a computer. This software is called an operating system.
It is a computer's operating system that controls the interface between the machine and its outside world, protecting the machine and the data stored within from unauthorized access, allowing a computer user to request the execution of various programs, and coordinating the internal activities required to fulfill the user's requests.

In Chapter 4 (Networking and the Internet), we study how computers are connected to each other to form computer networks and how networks are connected to form internets. This study leads to topics such as network protocols, the Internet's structure and internal operation, the World Wide Web, and numerous issues of security.

Chapter 5 (Algorithms) introduces the study of algorithms from a more formal perspective. We investigate how algorithms are discovered, identify several fundamental algorithmic structures, develop elementary techniques for representing algorithms, and introduce the subjects of algorithm efficiency and correctness.

In Chapter 6 (Programming Languages), we consider the subject of algorithm representation and the program development process. Here we find that the search for better programming techniques has led to a variety of programming methodologies or paradigms, each with its own set of programming languages. We investigate these paradigms and languages as well as consider issues of grammar and language translation.

Chapter 7 (Software Engineering) introduces the branch of computer science known as software engineering, which deals with the problems encountered when developing large software systems. The underlying theme is that the design of large software systems is a complex task that embraces problems beyond those of traditional engineering. Thus, the subject of software engineering has become an important field of research within computer science, drawing from such diverse fields as engineering, project management, personnel management, programming language design, and even architecture.
In the next two chapters we look at ways data can be organized within a computer system. In Chapter 8 (Data Abstractions), we introduce techniques traditionally used for organizing data in a computer's main memory and then trace the evolution of data abstraction from the concept of primitives to today's object-oriented techniques. In Chapter 9 (Database Systems), we consider methods traditionally used for organizing data in a computer's mass storage and investigate how extremely large and complex database systems are implemented.

In Chapter 10 (Computer Graphics), we explore the subject of graphics and animation, a field that deals with creating and photographing virtual worlds. Based on advancements in the more traditional areas of computer science such as machine architecture, algorithm design, data structures, and software engineering, the discipline of graphics and animation has seen significant progress and has now blossomed into an exciting, dynamic subject. Moreover, the field exemplifies how various components of computer science combine with other disciplines such as physics, art, and photography to produce striking results.

In Chapter 11 (Artificial Intelligence), we learn that to develop more useful machines, computer science has turned to the study of human intelligence for insight. The hope is that by understanding how our own minds reason and perceive, researchers will be able to design algorithms that mimic these processes and thus transfer comparable capabilities to machines. The result is the area of computer science known as artificial intelligence, which leans heavily on research in such areas as psychology, biology, and linguistics.

We close our study with Chapter 12 (Theory of Computation) by investigating the theoretical foundations of computer science—a subject that allows us to understand the limitations of algorithms (and thus machines).
Here we identify some problems that cannot be solved algorithmically (and therefore lie beyond the capabilities of machines) as well as learn that the solutions to many other problems require such enormous time or space that they are also unsolvable from a practical perspective. Thus, it is through this study that we are able to grasp the scope and limitations of algorithmic systems.

In each chapter, our goal is to explore the subject deeply enough to enable true understanding. We want to develop a working knowledge of computer science—a knowledge that will allow you to understand the technical society in which you live and to provide a foundation from which you can learn on your own as science and technology advance.

0.4 The Overarching Themes of Computer Science

In addition to the main topics of each chapter as listed above, we also hope to broaden your understanding of computer science by incorporating several overarching themes. The miniaturization of computers and their expanding capabilities have brought computer technology to the forefront of today's society, and computer technology is so prevalent that familiarity with it is fundamental to being a member of the modern world. Computing technology has altered the ability of governments to exert control; had enormous impact on global economics; led to startling advances in scientific research; revolutionized the role of data collection, storage, and applications; provided new means for people to communicate and interact; and has repeatedly challenged society's status quo. The result is a proliferation of subjects surrounding computer science, each of which is now a significant field of study in its own right. Moreover, as with mechanical engineering and physics, it is often difficult to draw a line between these fields and computer science itself.
Thus, to gain a proper perspective, our study will not only cover topics central to the core of computer science but also will explore a variety of disciplines dealing with both applications and consequences of the science. Indeed, an introduction to computer science is an interdisciplinary undertaking.

As we set out to explore the breadth of the field of computing, it is helpful to keep in mind the main themes that unite computer science. While the codification of the "Seven Big Ideas of Computer Science"1 postdates the first ten editions of this book, they closely parallel the themes of the chapters to come. The "Seven Big Ideas" are, briefly: Algorithms, Abstraction, Creativity, Data, Programming, Internet, and Impact. In the chapters that follow, we include a variety of topics, in each case introducing central ideas of the topic, current areas of research, and some of the techniques being applied to advance knowledge in that realm. Watch for the "Big Ideas" as we return to them again and again.

1 www.csprinciples.org

Algorithms

Limited data storage capabilities and intricate, time-consuming programming procedures restricted the complexity of the algorithms used in the earliest computing machines. However, as these limitations began to disappear, machines were applied to increasingly larger and more complex tasks. As attempts to express the composition of these tasks in algorithmic form began to tax the abilities of the human mind, more and more research efforts were directed toward the study of algorithms and the programming process.

It was in this context that the theoretical work of mathematicians began to pay dividends. As a consequence of Gödel's incompleteness theorem, mathematicians had already been investigating those questions regarding algorithmic processes that advancing technology was now raising. With that, the stage was set for the emergence of a new discipline known as computer science.
Today, computer science has established itself as the science of algorithms. The scope of this science is broad, drawing from such diverse subjects as mathematics, engineering, psychology, biology, business administration, and linguistics. Indeed, researchers in different branches of computer science may have very distinct definitions of the science. For example, a researcher in the field of computer architecture may focus on the task of miniaturizing circuitry and thus view computer science as the advancement and application of technology. But, a researcher in the field of database systems may see computer science as seeking ways to make information systems more useful. And, a researcher in the field of artificial intelligence may regard computer science as the study of intelligence and intelligent behavior. Nevertheless, all of these researchers are involved in aspects of the science of algorithms.

Given the central role that algorithms play in computer science (see Figure 0.5), it is instructive to identify some questions that will provide focus for our study of this big idea.

[Figure 0.5: The central role of algorithms in computer science — "Algorithms" at the center, surrounded by the discovery of, representation of, communication of, analysis of, execution of, application of, and limitations of algorithms]

Which problems can be solved by algorithmic processes?
How can the discovery of algorithms be made easier?
How can the techniques of representing and communicating algorithms be improved?
How can the characteristics of different algorithms be analyzed and compared?
How can algorithms be used to manipulate information?
How can algorithms be applied to produce intelligent behavior?
How does the application of algorithms affect society?

Abstraction

The term abstraction, as we are using it here, refers to the distinction between the external properties of an entity and the details of the entity's internal composition.
It is abstraction that allows us to ignore the internal details of a complex device such as a computer, automobile, or microwave oven and use it as a single, comprehensible unit. Moreover, it is by means of abstraction that such complex systems are designed and manufactured in the first place. Computers, automobiles, and microwave ovens are constructed from components, each of which represents a level of abstraction at which the use of the component is isolated from the details of the component's internal composition.

It is by applying abstraction that we are able to construct, analyze, and manage large, complex computer systems that would be overwhelming if viewed in their entirety at a detailed level. At each level of abstraction, we view the system in terms of components, called abstract tools, whose internal composition we ignore. This allows us to concentrate on how each component interacts with other components at the same level and how the collection as a whole forms a higher-level component. Thus we are able to comprehend the part of the system that is relevant to the task at hand rather than being lost in a sea of details.

We emphasize that abstraction is not limited to science and technology. It is an important simplification technique with which our society has created a lifestyle that would otherwise be impossible. Few of us understand how the various conveniences of daily life are actually implemented. We eat food and wear clothes that we cannot produce by ourselves. We use electrical devices and communication systems without understanding the underlying technology. We use the services of others without knowing the details of their professions. With each new advancement, a small part of society chooses to specialize in its implementation, while the rest of us learn to use the results as abstract tools. In this manner, society's warehouse of abstract tools expands, and society's ability to progress increases.
Abstraction is a recurring pillar of our study. We will learn that computing equipment is constructed in levels of abstract tools. We will also see that the development of large software systems is accomplished in a modular fashion in which each module is used as an abstract tool in larger modules. Moreover, abstraction plays an important role in the task of advancing computer science itself, allowing researchers to focus attention on particular areas within a complex field. In fact, the organization of this text reflects this characteristic of the science. Each chapter, which focuses on a particular area within the science, is often surprisingly independent of the others, yet together the chapters form a comprehensive overview of a vast field of study.

Creativity

While computers may merely be complex machines mechanically executing rote algorithmic instructions, we shall see that the field of computer science is an inherently creative one. Discovering and applying new algorithms is a human activity that depends on our innate desire to apply our tools to solve problems in the world around us. Computer science not only extends forms of expression spanning the visual, language, and musical arts, but also enables new modes of digital expression that pervade the modern world.

Creating large software systems is much less like following a cookbook recipe than it is like conceiving of a grand new sculpture. Envisioning its form and function requires careful planning. Fabricating its components requires time, attention to detail, and practiced skill. The final product embodies the design aesthetics and sensibilities of its creators.

Data

Computers are capable of representing any information that can be discretized and digitized. Algorithms can process or transform such digitally represented information in a dizzying variety of ways.
The result of this is not merely the shuffling of digital data from one part of the computer to another; computer algorithms enable us to search for patterns, to create simulations, and to correlate connections in ways that generate new knowledge and insight. Massive storage capacities, high-speed computer networks, and powerful computational tools are driving discoveries in many other disciplines of science, engineering, and the humanities. Whether predicting the effects of a new drug by simulating complex protein folding, statistically analyzing the evolution of language across centuries of digitized books, or rendering 3D images of internal organs from a noninvasive medical scan, data is driving modern discovery across the breadth of human endeavors.

Some of the questions about data that we will explore in our study include:

How do computers store data about common digital artifacts, such as numbers, text, images, sounds, and video?
How do computers approximate data about analog artifacts in the real world?
How do computers detect and prevent errors in data?
What are the ramifications of an ever-growing and interconnected digital universe of data at our disposal?

Programming

Translating human intentions into executable computer algorithms is now broadly referred to as programming, although the proliferation of languages and tools available now bears little resemblance to the programmable computers of the 1950s and early 1960s. While computer science consists of much more than computer programming, the ability to solve problems by devising executable algorithms (programs) remains a foundational skill for all computer scientists. Computer hardware is capable of executing only relatively simple algorithmic steps, but the abstractions provided by computer programming languages allow humans to reason about and encode solutions for far more complex problems.
Several key questions will frame our discussion of this theme.

How are programs built?
What kinds of errors can occur in programs?
How are errors in programs found and repaired?
What are the effects of errors in modern programs?
How are programs documented and evaluated?

Internet

The Internet connects computers and electronic devices around the world and has had a profound impact on the way that our technological society stores, retrieves, and shares information. Commerce, news, entertainment, and communication now depend increasingly on this interconnected web of smaller computer networks. Our discussion will not only describe the mechanisms of the Internet as an artifact, but will also touch on the many aspects of human society that are now intertwined with the global network.

The reach of the Internet also has profound implications for our privacy and the security of our personal information. Cyberspace harbors many dangers. Consequently, cryptography and cybersecurity are of growing importance in our connected world.

Impact

Computer science not only has profound impacts on the technologies we use to communicate, work, and play, it also has enormous social repercussions. Progress in computer science is blurring many distinctions on which our society has based decisions in the past and is challenging many of society's long-held principles. In law, it generates questions regarding the degree to which intellectual property can be owned and the rights and liabilities that accompany that ownership. In ethics, it generates numerous options that challenge the traditional principles on which social behavior is based. In government, it generates debates regarding the extent to which computer technology and its applications should be regulated. In philosophy, it generates contention between the presence of intelligent behavior and the presence of intelligence itself.
And, throughout society, it generates disputes concerning whether new applications represent new freedoms or new controls.

Such topics are important for those contemplating careers in computing or computer-related fields. Revelations within science have sometimes found controversial applications, causing serious discontent for the researchers involved. Moreover, an otherwise successful career can quickly be derailed by an ethical misstep.

The ability to deal with the dilemmas posed by advancing computer technology is also important for those outside its immediate realm. Indeed, technology is infiltrating society so rapidly that few, if any, are independent of its effects.

This text provides the technical background needed to approach the dilemmas generated by computer science in a rational manner. However, technical knowledge of the science alone does not provide solutions to all the questions involved. With this in mind, this text includes several sections that are devoted to social, ethical, and legal impacts of computer science. These include security concerns, issues of software ownership and liability, the social impact of database technology, and the consequences of advances in artificial intelligence.

Moreover, there is often no definitive correct answer to a problem, and many valid solutions are compromises between opposing (and perhaps equally valid) views. Finding solutions in these cases often requires the ability to listen, to recognize other points of view, to carry on a rational debate, and to alter one's own opinion as new insights are gained. Thus, each chapter of this text ends with a collection of questions under the heading "Social Issues" that investigate the relationship between computer science and society. These are not necessarily questions to be answered. Instead, they are questions to be considered.
In many cases, an answer that may appear obvious at first will cease to satisfy you as you explore alternatives. In short, the purpose of these questions is not to lead you to a "correct" answer, but rather to increase your awareness, including your awareness of the various stakeholders in an issue, your awareness of alternatives, and your awareness of both the short- and long-term consequences of those alternatives.

Philosophers have introduced many approaches to ethics in their search for fundamental theories that lead to principles for guiding decisions and behavior. Character-based ethics (sometimes called virtue ethics) were promoted by Plato and Aristotle, who argued that "good behavior" is not the result of applying identifiable rules, but instead is a natural consequence of "good character." Whereas other ethical bases, such as consequence-based ethics, duty-based ethics, and contract-based ethics, propose that a person resolve an ethical dilemma by asking, "What are the consequences?", "What are my duties?", or "What contracts do I have?", character-based ethics proposes that dilemmas be resolved by asking, "Who do I want to be?" Thus, good behavior is obtained by building good character, which is typically the result of sound upbringing and the development of virtuous habits.

It is character-based ethics that underlies the approach normally taken when "teaching" ethics to professionals in various fields. Rather than presenting specific ethical theories, the approach is to introduce case studies that expose a variety of ethical questions in the professionals' area of expertise. Then, by discussing the pros and cons in these cases, the professionals become more aware, insightful, and sensitive to the perils lurking in their professional lives and thus grow in character. This is the spirit in which the questions regarding social issues at the end of each chapter are presented.
Social Issues

The following questions are intended as a guide to the ethical/social/legal issues associated with the field of computing. The goal is not merely to answer these questions. You should also consider why you answered as you did and whether your justifications are consistent from one question to the next.

1. The premise that our society is different from what it would have been without the computer revolution is generally accepted. Is our society better than it would have been without the revolution? Is our society worse? Would your answer differ if your position within society were different?

2. Is it acceptable to participate in today's technical society without making an effort to understand the basics of that technology? For instance, do members of a democracy, whose votes often determine how technology will be supported and used, have an obligation to try to understand that technology? Does your answer depend on which technology is being considered? For example, is your answer the same when considering nuclear technology as when considering computer technology?

3. By using cash in financial transactions, individuals have traditionally had the option to manage their financial affairs without service charges. However, as more of our economy is becoming automated, financial institutions are implementing service charges for access to these automated systems. Is there a point at which these charges unfairly restrict an individual's access to the economy? For example, suppose an employer pays employees only by check, and all financial institutions were to place a service charge on check cashing and depositing. Would the employees be unfairly treated? What if an employer insists on paying only via direct deposit?

4. In the context of interactive television, to what extent should a company be allowed to retrieve information from children (perhaps via an interactive game format)?
For example, should a company be allowed to obtain a child's report on his or her parents' buying patterns? What about information about the child?

5. To what extent should a government regulate computer technology and its applications? Consider, for example, the issues mentioned in questions 3 and 4. What justifies governmental regulation?

6. To what extent will our decisions regarding technology in general, and computer technology in particular, affect future generations?

7. As technology advances, our educational system is constantly challenged to reconsider the level of abstraction at which topics are presented. Many questions take the form of whether a skill is still necessary or whether students should be allowed to rely on an abstract tool. Students of trigonometry are no longer taught how to find the values of trigonometric functions using tables. Instead, they use calculators as abstract tools to find these values. Some argue that long division should also give way to abstraction. What other subjects are involved with similar controversies? Do modern word processors eliminate the need to develop spelling skills? Will the use of video technology someday remove the need to read?

8. The concept of public libraries is largely based on the premise that all citizens in a democracy must have access to information. As more information is stored and disseminated via computer technology, does access to this technology become a right of every individual? If so, should public libraries be the channel by which this access is provided?

9. What ethical concerns arise in a society that relies on the use of abstract tools? Are there cases in which it is unethical to use a product or service without understanding how it works? Without knowing how it is produced? Or, without understanding the byproducts of its use?

10. As our society becomes more automated, it becomes easier for governments to monitor their citizens' activities. Is that good or bad?

11.
Which technologies that were imagined by George Orwell (Eric Blair) in his novel 1984 have become reality? Are they being used in the manner in which Orwell predicted?

12. If you had a time machine, in which period of history would you like to live? Are there current technologies that you would like to take with you? Could your choice of technologies be taken with you without taking others? To what extent can one technology be separated from another? Is it consistent to protest against global warming yet accept modern medical treatment?

13. Suppose your job requires that you reside in another culture. Should you continue to practice the ethics of your native culture or adopt the ethics of your host culture? Does your answer depend on whether the issue involves dress code or human rights? Which ethical standards should prevail if you continue to reside in your native culture but conduct business with a foreign culture on the Internet?

14. Has society become too dependent on computer applications for commerce, communications, or social interactions? For example, what would be the consequences of a long-term interruption in Internet and/or cellular telephone service?

15. Most smartphones are able to identify the phone's location by means of GPS. This allows applications to provide location-specific information (such as the local news, local weather, or the presence of businesses in the immediate area) based on the phone's current location. However, such GPS capabilities may also allow other applications to broadcast the phone's location to other parties. Is this good? How could knowledge of the phone's location (thus your location) be abused?

Additional Reading

Goldstine, H. H. The Computer from Pascal to von Neumann. Princeton, NJ: Princeton University Press, 1972.
Kizza, J. M. Ethical and Social Issues in the Information Age, 3rd ed. London: Springer-Verlag, 2007.
Mollenhoff, C. R. Atanasoff: Forgotten Father of the Computer.
Ames, IA: Iowa State University Press, 1988.
Neumann, P. G. Computer Related Risks. Boston, MA: Addison-Wesley, 1995.
Ni, L. Smart Phone and Next Generation Mobile Computing. San Francisco, CA: Morgan Kaufmann, 2006.
Quinn, M. J. Ethics for the Information Age, 5th ed. Boston, MA: Addison-Wesley, 2012.
Randell, B. The Origins of Digital Computers, 3rd ed. New York: Springer-Verlag, 1982.
Spinello, R. A., and H. T. Tavani. Readings in CyberEthics, 2nd ed. Sudbury, MA: Jones and Bartlett, 2004.
Swade, D. The Difference Engine. New York: Viking, 2000.
Tavani, H. T. Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology, 4th ed. New York: Wiley, 2012.
Woolley, B. The Bride of Science: Romance, Reason, and Byron's Daughter. New York: McGraw-Hill, 1999.

Chapter 1 Data Storage

In this chapter, we consider topics associated with data representation and the storage of data within a computer. The types of data we will consider include text, numeric values, images, audio, and video. Much of the information in this chapter is also relevant to fields other than traditional computing, such as digital photography, audio/video recording and reproduction, and long-distance communication.
1.1 Bits and Their Storage
    Boolean Operations
    Gates and Flip-Flops
    Hexadecimal Notation
1.2 Main Memory
    Memory Organization
    Measuring Memory Capacity
1.3 Mass Storage
    Magnetic Systems
    Optical Systems
    Flash Drives
1.4 Representing Information as Bit Patterns
    Representing Text
    Representing Numeric Values
    Representing Images
    Representing Sound
*1.5 The Binary System
    Binary Notation
    Binary Addition
    Fractions in Binary
*1.6 Storing Integers
    Two's Complement Notation
    Excess Notation
*1.7 Storing Fractions
    Floating-Point Notation
    Truncation Errors
*1.8 Data and Programming
    Getting Started With Python
    Hello Python
    Variables
    Operators and Expressions
    Currency Conversion
    Debugging
*1.9 Data Compression
    Generic Data Compression Techniques
    Compressing Images
    Compressing Audio and Video
*1.10 Communication Errors
    Parity Bits
    Error-Correcting Codes

*Asterisks indicate suggestions for optional sections.

We begin our study of computer science by considering how information is encoded and stored inside computers. Our first step is to discuss the basics of a computer's data storage devices and then to consider how information is encoded for storage in these systems. We will explore the ramifications of today's data storage systems and how such techniques as data compression and error handling are used to overcome their shortfalls.

1.1 Bits and Their Storage

Inside today's computers information is encoded as patterns of 0s and 1s. These digits are called bits (short for binary digits). Although you may be inclined to associate bits with numeric values, they are really only symbols whose meaning depends on the application at hand. Sometimes patterns of bits are used to represent numeric values; sometimes they represent characters in an alphabet and punctuation marks; sometimes they represent images; and sometimes they represent sounds.
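The observation that a bit pattern is only a symbol whose meaning depends on the application can be sketched in Python (the language introduced in Section 1.8). The particular pattern and interpretations below are our own illustration, not an example from the text:

```python
# The same 8-bit pattern, read two different ways.
pattern = "01000001"

as_number = int(pattern, 2)     # interpreted as an unsigned numeric value
as_character = chr(as_number)   # interpreted as a character code

print(as_number)     # 65
print(as_character)  # A
```

Nothing in the bits themselves says which interpretation is "right"; that decision belongs to the program using them.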
Boolean Operations

To understand how individual bits are stored and manipulated inside a computer, it is convenient to imagine that the bit 0 represents the value false and the bit 1 represents the value true. Operations that manipulate true/false values are called Boolean operations, in honor of the mathematician George Boole (1815–1864), who was a pioneer in the field of mathematics called logic. Three of the basic Boolean operations are AND, OR, and XOR (exclusive or), as summarized in Figure 1.1. (We capitalize these Boolean operation names to distinguish them from their English word counterparts.) These operations are similar to the arithmetic operations TIMES and PLUS because they combine a pair of values (the operation's input) to produce a third value (the output). In contrast to arithmetic operations, however, Boolean operations combine true/false values rather than numeric values.

The Boolean operation AND is designed to reflect the truth or falseness of a statement formed by combining two smaller, or simpler, statements with the conjunction and. Such statements have the generic form

    P AND Q

where P represents one statement, and Q represents another—for example,

    Kermit is a frog AND Miss Piggy is an actress.

The inputs to the AND operation represent the truth or falseness of the compound statement's components; the output represents the truth or falseness of the compound statement itself. Since a statement of the form P AND Q is true only when both of its components are true, we conclude that 1 AND 1 should be 1, whereas all other cases should produce an output of 0, in agreement with Figure 1.1.

In a similar manner, the OR operation is based on compound statements of the form

    P OR Q

where, again, P represents one statement and Q represents another. Such statements are true when at least one of their components is true, which agrees with the OR operation depicted in Figure 1.1.
Figure 1.1 The possible input and output values of Boolean operations AND, OR, and XOR (exclusive or)

    0 AND 0 = 0    0 AND 1 = 0    1 AND 0 = 0    1 AND 1 = 1
    0 OR  0 = 0    0 OR  1 = 1    1 OR  0 = 1    1 OR  1 = 1
    0 XOR 0 = 0    0 XOR 1 = 1    1 XOR 0 = 1    1 XOR 1 = 0

There is not a single conjunction in the English language that captures the meaning of the XOR operation. XOR produces an output of 1 (true) when one of its inputs is 1 (true) and the other is 0 (false). For example, a statement of the form P XOR Q means "either P or Q but not both." (In short, the XOR operation produces an output of 1 when its inputs are different.)

The operation NOT is another Boolean operation. It differs from AND, OR, and XOR because it has only one input. Its output is the opposite of that input; if the input of the operation NOT is true, then the output is false, and vice versa. Thus, if the input of the NOT operation is the truth or falseness of the statement

    Fozzie is a bear.

then the output would represent the truth or falseness of the statement

    Fozzie is not a bear.

Gates and Flip-Flops

A device that produces the output of a Boolean operation when given the operation's input values is called a gate. Gates can be constructed from a variety of technologies such as gears, relays, and optic devices. Inside today's computers, gates are usually implemented as small electronic circuits in which the digits 0 and 1 are represented as voltage levels. We need not concern ourselves with such details, however. For our purposes, it suffices to represent gates in their symbolic form, as shown in Figure 1.2. Note that the AND, OR, XOR, and NOT gates are represented by distinctively shaped symbols, with the input values entering on one side, and the output exiting on the other.

Gates provide the building blocks from which computers are constructed.
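The truth tables of Figure 1.1 can be reproduced with a short sketch using Python's built-in bitwise operators on the single bits 0 and 1 (a minimal illustration of ours, not part of the text):

```python
# AND, OR, and XOR applied to every combination of two bits,
# matching the rows of Figure 1.1.
for p in (0, 1):
    for q in (0, 1):
        print(f"{p} AND {q} = {p & q}   "
              f"{p} OR {q} = {p | q}   "
              f"{p} XOR {q} = {p ^ q}")

# NOT has only one input: it flips the bit.
for p in (0, 1):
    print(f"NOT {p} = {1 - p}")
```

Note that `&`, `|`, and `^` operate on whole binary patterns at once; restricted to the values 0 and 1, they behave exactly as the single-bit operations described above.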
One important step in this direction is depicted in the circuit in Figure 1.3. This is a particular example from a collection of circuits known as a flip-flop.

Figure 1.2 A pictorial representation of AND, OR, XOR, and NOT gates as well as their input and output values

    AND:  00 -> 0   01 -> 0   10 -> 0   11 -> 1
    OR:   00 -> 0   01 -> 1   10 -> 1   11 -> 1
    XOR:  00 -> 0   01 -> 1   10 -> 1   11 -> 0
    NOT:  0 -> 1    1 -> 0

A flip-flop is a fundamental unit of computer memory. It is a circuit that produces an output value of 0 or 1, which remains constant until a pulse (a temporary change to a 1 that returns to 0) from another circuit causes it to shift to the other value. In other words, the output can be set to "remember" a zero or a one under control of external stimuli. As long as both inputs in the circuit in Figure 1.3 remain 0, the output (whether 0 or 1) will not change. However, temporarily placing a 1 on the upper input will force the output to be 1, whereas temporarily placing a 1 on the lower input will force the output to be 0.

Let us consider this claim in more detail. Without knowing the current output of the circuit in Figure 1.3, suppose that the upper input is changed to 1 while the lower input remains 0 (Figure 1.4a). This will cause the output of the OR gate to be 1, regardless of the other input to this gate. In turn, both inputs to the AND gate will now be 1, since the other input to this gate is already 1 (the output produced by the NOT gate whenever the lower input of the flip-flop is at 0). The output of the AND gate will then become 1, which means that the second input to the OR gate will now be 1 (Figure 1.4b).
This guarantees that the output of the OR gate will remain 1, even when the upper input to the flip-flop is changed back to 0 (Figure 1.4c). In summary, the flip-flop's output has become 1, and this output value will remain after the upper input returns to 0.

Figure 1.3 A simple flip-flop circuit (two inputs, one output)

In a similar manner, temporarily placing the value 1 on the lower input will force the flip-flop's output to be 0, and this output will persist after the input value returns to 0.

Our purpose in introducing the flip-flop circuit in Figures 1.3 and 1.4 is threefold. First, it demonstrates how devices can be constructed from gates, a process known as digital circuit design, which is an important topic in computer engineering.

Figure 1.4 Setting the output of a flip-flop to 1
a. First, a 1 is placed on the upper input.
b. This causes the output of the OR gate to be 1 and, in turn, the output of the AND gate to be 1.
c. Finally, the 1 from the AND gate keeps the OR gate from changing after the upper input returns to 0.

Indeed, the flip-flop is only one of many circuits that are basic tools in computer engineering.

Second, the concept of a flip-flop provides an example of abstraction and the use of abstract tools. Actually, there are other ways to build a flip-flop. One alternative is shown in Figure 1.5. If you experiment with this circuit, you will find that, although it has a different internal structure, its external properties are the same as those of Figure 1.3. A computer engineer does not need to know which circuit is actually used within a flip-flop. Instead, only an understanding of the flip-flop's external properties is needed to use it as an abstract tool.
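The settling behavior traced above can also be imitated in software. The following Python sketch is our own illustration (not from the text): it models the circuit of Figure 1.3, in which the OR gate receives the upper input and the circuit's own output, and the AND gate receives the OR gate's output and the negation of the lower input. The gate equations are applied repeatedly until the feedback loop stabilizes.

```python
# A software imitation (our own sketch, not the book's) of the
# flip-flop in Figure 1.3.  The feedback loop is iterated until the
# output stops changing.

def flip_flop(upper, lower, output):
    """Return the stable output given the two inputs and the previous output."""
    while True:
        new_output = 1 if (upper or output) and not lower else 0
        if new_output == output:
            return output
        output = new_output

# Pulse the upper input: the output becomes 1 ...
out = flip_flop(upper=1, lower=0, output=0)
# ... and remains 1 after the pulse ends.
out = flip_flop(upper=0, lower=0, output=out)
print(out)  # prints 1
```

Pulsing the lower input in the same way (`flip_flop(upper=0, lower=1, output=out)`) drives the stored value back to 0, matching the behavior claimed in the text.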
A flip-flop, along with other well-defined circuits, forms a set of building blocks from which an engineer can construct more complex circuitry. In turn, the design of computer circuitry takes on a hierarchical structure, each level of which uses the lower-level components as abstract tools.

The third purpose for introducing the flip-flop is that it is one means of storing a bit within a modern computer. More precisely, a flip-flop can be set to have the output value of either 0 or 1. Other circuits can adjust this value by sending pulses to the flip-flop's inputs, and still other circuits can respond to the stored value by using the flip-flop's output as their inputs. Thus, many flip-flops, constructed as very small electrical circuits, can be used inside a computer as a means of recording information that is encoded as patterns of 0s and 1s. Indeed, technology known as very large-scale integration (VLSI), which allows millions of electrical components to be constructed on a wafer (called a chip), is used to create miniature devices containing millions of flip-flops along with their controlling circuitry. Consequently, these chips are used as abstract tools in the construction of computer systems. In fact, in some cases VLSI is used to create an entire computer system on a single chip.

Hexadecimal Notation

When considering the internal activities of a computer, we must deal with patterns of bits, which we will refer to as strings of bits; some of these strings can be quite long. A long string of bits is often called a stream. Unfortunately, streams are difficult for the human mind to comprehend. Merely transcribing the pattern 101101010011 is tedious and error prone.
To simplify the representation of such bit patterns, therefore, we usually use a shorthand notation called hexadecimal notation, which takes advantage of the fact that bit patterns within a machine tend to have lengths in multiples of four. In particular, hexadecimal notation uses a single symbol to represent a pattern of four bits. For example, a string of twelve bits can be represented by three hexadecimal symbols.

Figure 1.5 Another way of constructing a flip-flop (two inputs, one output)

Figure 1.6 The hexadecimal encoding system

Bit pattern    Hexadecimal representation
0000           0
0001           1
0010           2
0011           3
0100           4
0101           5
0110           6
0111           7
1000           8
1001           9
1010           A
1011           B
1100           C
1101           D
1110           E
1111           F

Figure 1.6 presents the hexadecimal encoding system. The left column displays all possible bit patterns of length four; the right column shows the symbol used in hexadecimal notation to represent the bit pattern to its left. Using this system, the bit pattern 10110101 is represented as B5. This is obtained by dividing the bit pattern into substrings of length four and then representing each substring by its hexadecimal equivalent: 1011 is represented by B, and 0101 is represented by 5. In this manner, the 16-bit pattern 1010010011001000 can be reduced to the more palatable form A4C8. We will use hexadecimal notation extensively in the next chapter. There you will come to appreciate its efficiency.

Questions & Exercises

1. What input bit patterns will cause the following circuit to produce an output of 1? (circuit diagram with several inputs and one output)

2. In the text, we claimed that placing a 1 on the lower input of the flip-flop in Figure 1.3 (while holding the upper input at 0) will force the flip-flop's output to be 0. Describe the sequence of events that occurs within the flip-flop in this case.

3.
Assuming that both inputs to the flip-flop in Figure 1.5 begin as 0, describe the sequence of events that occurs when the upper input is temporarily set to 1.

4. a. If the output of an AND gate is passed through a NOT gate, the combination computes the Boolean operation called NAND, which has an output of 0 only when both its inputs are 1. The symbol for a NAND gate is the same as an AND gate except that it has a circle at its output. The following is a circuit containing a NAND gate. What Boolean operation does the circuit compute? (circuit diagram with two inputs)

b. If the output of an OR gate is passed through a NOT gate, the combination computes the Boolean operation called NOR, which has an output of 1 only when both its inputs are 0. The symbol for a NOR gate is the same as an OR gate except that it has a circle at its output. The following is a circuit containing an AND gate and two NOR gates. What Boolean operation does the circuit compute? (circuit diagram with two inputs and one output)

5. Use hexadecimal notation to represent the following bit patterns:
a. 0110101011110010
b. 111010000101010100010111
c. 01001000

6. What bit patterns are represented by the following hexadecimal patterns?
a. 5FD97
b. 610A
c. ABCD
d. 0100

1.2 Main Memory

For the purpose of storing data, a computer contains a large collection of circuits (such as flip-flops), each capable of storing a single bit. This bit reservoir is known as the machine's main memory.

Memory Organization

A computer's main memory is organized in manageable units called cells, with a typical cell size being eight bits. (A string of eight bits is called a byte. Thus, a typical memory cell has a capacity of one byte.)
Small computers embedded in such household devices as microwave ovens may have main memories consisting of only a few hundred cells, whereas large computers may have billions of cells in their main memories.

Figure 1.7 The organization of a byte-size memory cell

High-order end:  0 1 0 1 1 0 1 0  :Low-order end
(The leftmost bit is the most significant bit; the rightmost bit is the least significant bit.)

Although there is no left or right within a computer, we normally envision the bits within a memory cell as being arranged in a row. The left end of this row is called the high-order end, and the right end is called the low-order end. The leftmost bit is called either the high-order bit or the most significant bit in reference to the fact that if the contents of the cell were interpreted as representing a numeric value, this bit would be the most significant digit in the number. Similarly, the rightmost bit is referred to as the low-order bit or the least significant bit. Thus we may represent the contents of a byte-size memory cell as shown in Figure 1.7.

To identify individual cells in a computer's main memory, each cell is assigned a unique "name," called its address. The system is analogous to the technique of identifying houses in a city by addresses. In the case of memory cells, however, the addresses used are entirely numeric. To be more precise, we envision all the cells being placed in a single row and numbered in this order starting with the value zero. Such an addressing system not only gives us a way of uniquely identifying each cell but also associates an order to the cells (Figure 1.8), giving us phrases such as "the next cell" or "the previous cell."

An important consequence of assigning an order to both the cells in main memory and the bits within each cell is that the entire collection of bits within a computer's main memory is essentially ordered in one long row.
Pieces of this long row can therefore be used to store bit patterns that may be longer than the length of a single cell.

Figure 1.8 Memory cells arranged by address
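To make this organization concrete, here is a toy model of byte-addressable memory in Python. It is our own sketch, not the book's: each cell holds one byte (an integer 0 through 255), its address is simply its position in a list, and the hexadecimal notation of Section 1.1 gives a compact display of a cell's contents.

```python
# A toy model (our own illustration) of byte-addressable main memory:
# each cell stores one byte and is identified by its numeric address.

memory = [0] * 16          # sixteen cells, addresses 0 through 15

memory[4] = 0b10110101     # store the bit pattern 10110101 at address 4

# The cell's contents in hexadecimal notation: 1011 -> B, 0101 -> 5.
print(format(memory[4], '02X'))        # prints B5

# The high-order (most significant) and low-order (least significant) bits:
msb = (memory[4] >> 7) & 1
lsb = memory[4] & 1
print(msb, lsb)                        # prints 1 1
```

Because the addresses impose an order on the cells, a pattern longer than one byte can be stored in consecutive cells, for example `memory[4]` and `memory[5]`, just as the text describes.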
