Ethics for the Information Age (8th Edition)
Author: Michael J. Quinn
Published: 2019
Summary
This is an academic textbook on ethics in the Information Age, covering topics such as networked communications, intellectual property, information privacy, and privacy and the government. This eighth edition was published in 2019. The book is suitable for an undergraduate-level course on computer ethics or computers and society.
Full Transcript
Ethics for the Information Age, 8th edition
Michael J. Quinn, Seattle University
221 River Street, Hoboken, NJ 07030

Senior Vice President Courseware Portfolio Management: Engineering, Computer Science, Mathematics, Statistics, and Global Editions: Marcia J. Horton
Director, Portfolio Management: Engineering, Computer Science, and Global Editions: Julian Partridge
Executive Portfolio Manager: Matt Goldstein
Portfolio Management Assistant: Meghan Jacoby
Managing Producer, ECS and Mathematics: Scott Disanno
Senior Content Producer: Erin Ault
Project Manager: Windfall Software, Paul C. Anagnostopoulos
Manager, Rights and Permissions: Ben Ferrini
Operations Specialist: Maura Zaldivar-Garcia
Inventory Manager: Bruce Boundy
Product Marketing Manager: Yvonne Vannatta
Field Marketing Manager: Demetrius Hall
Marketing Assistant: Jon Bryant
Cover Image: Phonlamai Photo/Shutterstock
Cover Design: Pearson CSC
Composition: Windfall Software
Cover Printer: Phoenix Color/Hagerstown
Printer/Binder: Lake Side Communications, Inc. (LSC)

Copyright © 2020, 2017, 2015, 2013, 2011 Pearson Education, Inc., Hoboken, NJ 07030. All rights reserved. Manufactured in the United States of America. This publication is protected by copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise. For information regarding permissions, request forms, and the appropriate contacts within the Pearson Education Global Rights & Permissions department, please visit www.pearsoned.com/permissions/.

Many of the designations by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed in initial caps or all caps.

The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

Library of Congress Cataloging-in-Publication Data on file.
1 19 ISBN 10: 0-13-521772-5 ISBN 13: 978-0-13-521772-6 Brief Contents Preface xxi 1 Catalysts for Change 1 An Interview with Dalton Conley 47 2 Introduction to Ethics 49 An Interview with James Moor 105 3 Networked Communications 109 An Interview with Cal Newport 163 4 Intellectual Property 165 An Interview with June Besek 229 5 Information Privacy 233 An Interview with Michael Zimmer 277 6 Privacy and the Government 281 An Interview with Jerry Berman 329 7 Computer and Network Security 333 An Interview with Matt Bishop 377 8 Computer Reliability 381 An Interview with Avi Rubin 437 9 Professional Ethics 439 An Interview with Paul Axtell 479 10 Work and Wealth 483 An Interview with Martin Ford 529 Appendix A: Plagiarism 533 Appendix B: Introduction to Argumentation 537 Contents Preface xxi 1 Catalysts for Change 1 1.1 Introduction 1 1.2 Milestones in Computing 5 1.2.1 Aids to Manual Calculating 5 1.2.2 Mechanical Calculators 6 1.2.3 Cash Register 8 1.2.4 Punched-Card Tabulation 8 1.2.5 Precursors of Commercial Computers 10 1.2.6 First Commercial Computers 12 1.2.7 Programming Languages and Time-Sharing 13 1.2.8 Transistor and Integrated Circuit 15 1.2.9 IBM System/360 15 1.2.10 Microprocessor 16 1.2.11 Personal Computer 17 1.3 Milestones in Networking 20 1.3.1 Electricity and Electromagnetism 20 1.3.2 Telegraph 22 1.3.3 Telephone 23 1.3.4 Typewriter and Teletype 24 1.3.5 Radio 25 1.3.6 Television 26 1.3.7 Remote Computing 27 1.3.8 ARPANET 27 1.3.9 Email 29 1.3.10 Internet 29 1.3.11 NSFNET 29 1.3.12 Broadband 30 1.3.13 Wireless Networks 30 1.3.14 Cloud Computing 31 1.4 Milestones in Information Storage and Retrieval 31 1.4.1 Greek Alphabet 31 1.4.2 Codex and Paper 32 1.4.3 Gutenberg’s Printing Press 32 1.4.4 Newspapers 32 1.4.5 Hypertext 33 1.4.6 Graphical User Interface 33 1.4.7 Single-Computer Hypertext Systems 35 1.4.8 Networked Hypertext: World Wide Web 36 1.4.9 Search Engines 36 1.4.10 Cloud Storage 37 1.5 Contemporary Information Technology Issues 37 Summary 39 Further Reading and Viewing 40 Review Questions 41 Discussion Questions 41 In-Class Exercises 42 References 43 An Interview with Dalton Conley 47 2 Introduction to Ethics 49 2.1 Introduction 49 2.1.1 Defining Terms 50 2.1.2 Four Scenarios 51 2.1.3 Overview of Ethical Theories 54 2.2 Subjective Relativism 55 2.2.1 The Case For Subjective Relativism 55 2.2.2 The Case Against Subjective Relativism 56 2.3 Cultural Relativism 57 2.3.1 The Case For Cultural Relativism 58 2.3.2 The Case Against Cultural Relativism 58 2.4 Divine Command Theory 60 2.4.1 The Case For the Divine Command Theory 61 2.4.2 The Case Against the Divine Command Theory 62 2.5 Ethical Egoism 63 2.5.1 The Case For Ethical Egoism 63 2.5.2 The Case Against Ethical Egoism 64 2.6 Kantianism 65 2.6.1 Good Will and the Categorical Imperative 66 2.6.2 Evaluating a Scenario Using Kantianism 68 2.6.3 The Case For Kantianism 69 2.6.4 The Case Against Kantianism 70 2.7 Act Utilitarianism 71 2.7.1 Principle of Utility 71 2.7.2 Evaluating a Scenario Using Act Utilitarianism 73 2.7.3 The Case For Act Utilitarianism 74 2.7.4 The Case Against Act Utilitarianism 75 2.8 Rule Utilitarianism 76 2.8.1 Basis of Rule Utilitarianism 76 2.8.2 Evaluating a Scenario Using Rule Utilitarianism 77 2.8.3 The Case For Rule Utilitarianism 78 2.8.4 The Case Against Utilitarianism in General 79 2.9 Social Contract Theory 80 2.9.1 The Social Contract 80 2.9.2 Rawls’s Theory of Justice 82 2.9.3 Evaluating a Scenario Using Social Contract Theory 84 2.9.4 The Case For Social Contract Theory 85 2.9.5 The 
Case Against Social Contract Theory 86 2.10 Virtue Ethics 87 2.10.1 Virtues and Vices 87 2.10.2 Making a Decision Using Virtue Ethics 89 2.10.3 The Case For Virtue Ethics 91 2.10.4 The Case Against Virtue Ethics 92 2.11 Comparing Workable Ethical Theories 92 2.12 Morality of Breaking the Law 94 2.12.1 Social Contract Theory Perspective 94 2.12.2 Kantian Perspective 94 2.12.3 Rule-Utilitarian Perspective 95 2.12.4 Act-Utilitarian Perspective 96 2.12.5 Conclusion 97 Summary 97 Further Reading and Viewing 98 Review Questions 98 Discussion Questions 100 In-Class Exercises 101 References 102 An Interview with James Moor 105 3 Networked Communications 109 3.1 Introduction 109 3.2 Spam 110 3.2.1 The Spam Tsunami 111 3.2.2 Need for Social-Technical Solutions 113 3.2.3 Case Study: Ann the Acme Accountant 113 3.3 Internet Interactions 116 3.3.1 The World Wide Web 116 3.3.2 Mobile Apps 116 3.3.3 How We Use the Internet 117 3.4 Text Messaging 120 3.4.1 Transforming Lives in Developing Countries 120 3.4.2 Twitter 120 3.4.3 Business Promotion 120 3.5 Political Impact of Social Media and Online Advertising 121 3.5.1 Political Activism 121 3.5.2 Macedonian Entrepreneurs 122 3.5.3 Internet Research Agency 122 3.5.4 Is Democracy Being Threatened? 123 3.5.5 Troubling Times for Traditional Newspapers 123 3.6 Censorship 125 3.6.1 Direct Censorship 125 3.6.2 Self-Censorship 125 3.6.3 Challenges Posed by the Internet 126 3.6.4 Government Filtering and Surveillance of Internet Content 127 3.6.5 Ethical Perspectives on Censorship 128 3.7 Freedom of Expression 129 3.7.1 History 129 3.7.2 Freedom of Expression Not an Absolute Right 130 3.7.3 FCC v. Pacifica Foundation 132 3.7.4 Case Study: Kate’s Blog 132 3.8 Children and Inappropriate Content 135 3.8.1 Web Filters 135 3.8.2 Child Internet Protection Act 135 3.8.3 Ethical Evaluations of CIPA 136 3.8.4 Sexting 138 3.9 Breaking Trust 139 3.9.1 Identity Theft 139 3.9.2 Fake Reviews 140 3.9.3 Online Predators 141 3.9.4 Ethical Evaluations of Police Sting Operations 142 3.9.5 False Information 143 3.9.6 Cyberbullying 144 3.9.7 Revenge Porn 146 3.10 Internet Addiction 147 3.10.1 Is Internet Addiction Real? 147 3.10.2 Contributing Factors to Addiction 148 3.10.3 Ethical Evaluation of Internet Addiction 149 Summary 149 Further Reading and Viewing 151 Review Questions 151 Discussion Questions 152 In-Class Exercises 155 References 156 An Interview with Cal Newport 163 4 Intellectual Property 165 4.1 Introduction 165 4.2 Intellectual Property Rights 167 4.2.1 Property Rights 167 4.2.2 Extending the Argument to Intellectual Property 169 4.2.3 Benefits of Intellectual Property Protection 171 4.2.4 Limits to Intellectual Property Protection 171 4.3 Protecting Intellectual Property 173 4.3.1 Trade Secrets 173 4.3.2 Trademarks and Service Marks 174 4.3.3 Patents 175 4.3.4 Copyrights 176 4.3.5 Case Study: The Database Guru 180 4.4 Fair Use 184 4.4.1 Sony v. Universal City Studios 186 4.4.2 Audio Home Recording Act of 1992 187 4.4.3 RIAA v. Diamond Multimedia 188 4.4.4 Kelly v. Arriba Soft 188 4.4.5 Authors Guild v. 
Google 189 4.4.6 Mashups 190 4.5 Digital Media 191 4.5.1 Digital Rights Management 191 4.5.2 Digital Millennium Copyright Act 192 4.5.3 Secure Digital Music Initiative 192 4.5.4 Sony BMG Music Entertainment Rootkit 193 4.5.5 Criticisms of Digital Rights Management 193 4.5.6 Online Music Stores Drop Digital Rights Management 194 4.5.7 Microsoft Xbox One 194 4.6 Peer-to-Peer Networks and Cyberlockers 195 4.6.1 RIAA Lawsuits Against Napster, Grokster, and Kazaa 195 4.6.2 MGM v. Grokster 197 4.6.3 BitTorrent 198 4.6.4 Legal Action Against the Pirate Bay 199 4.6.5 PRO-IP Act 200 4.6.6 Megaupload Shutdown 200 4.6.7 Legal Online Access to Entertainment 201 4.7 Protections for Software 202 4.7.1 Software Copyrights 202 4.7.2 Violations of Software Copyrights 202 4.7.3 Safe Software Development 203 4.7.4 Software Patents 204 4.8 Legitimacy of Intellectual Property Protection for Software 208 4.8.1 Rights-Based Analysis 208 4.8.2 Utilitarian Analysis 209 4.8.3 Conclusion 210 4.9 Open-Source Software 211 4.9.1 Consequences of Proprietary Software 211 4.9.2 “Open Source” Definition 212 4.9.3 Beneficial Consequences of Open-Source Software 213 4.9.4 Examples of Open-Source Software 213 4.9.5 The GNU Project and Linux 214 4.9.6 Impact of Open-Source Software 214 4.10 Creative Commons 215 Summary 218 Further Reading and Viewing 219 Review Questions 220 Discussion Questions 220 In-Class Exercises 221 References 222 An Interview with June Besek 229 5 Information Privacy 233 5.1 Introduction 233 5.2 Perspectives on Privacy 234 5.2.1 Defining Privacy 234 5.2.2 Harms and Benefits of Privacy 235 5.2.3 Is There a Natural Right to Privacy? 238 5.2.4 Privacy and Trust 241 5.2.5 Case Study: The New Parents 242 5.3 Information Disclosures 244 5.3.1 Public Records 244 5.3.2 Information Held by Private Organizations 245 5.3.3 Facebook Tags 246 5.3.4 Enhanced 911 Services 246 5.3.5 Rewards or Loyalty Programs 247 5.3.6 Body Scanners 247 5.3.7 RFID Tags 248 5.3.8 Implanted Chips 249 5.3.9 Mobile Apps 249 5.3.10 Facebook Login 250 5.3.11 OnStar 250 5.3.12 Automobile “Black Boxes” 251 5.3.13 Medical Records 251 5.3.14 Digital Video Recorders 251 5.3.15 Cookies 252 5.4 Data Mining 252 5.4.1 Data Mining Defined 252 5.4.2 Opt-In versus Opt-Out Policies 254 5.4.3 Examples of Data Mining 255 5.4.4 Social Network Analysis 258 5.4.5 Release of “Anonymized” Datasets 259 5.5 Examples of Consumer or Political Backlash 261 5.5.1 Marketplace: Households 261 5.5.2 Facebook Beacon 261 5.5.3 Malls Track Shoppers’ Cell Phones 262 5.5.4 iPhone Apps Uploading Address Books 262 5.5.5 Instagram’s Proposed Change to Terms of Service 263 5.5.6 Cambridge Analytica 263 Summary 265 Further Reading and Viewing 266 Review Questions 266 Discussion Questions 267 In-Class Exercises 269 References 270 An Interview with Michael Zimmer 277 6 Privacy and the Government 281 6.1 Introduction 281 6.2 US Legislation Restricting Information Collection 283 6.2.1 Employee Polygraph Protection Act 283 6.2.2 Children’s Online Privacy Protection Act 283 6.2.3 Genetic Information Nondiscrimination Act 283 6.3 Information Collection by the Government 284 6.3.1 Census Records 284 6.3.2 Internal Revenue Service Records 285 6.3.3 FBI National Crime Information Center 2000 285 6.3.4 OneDOJ Database 287 6.3.5 Closed-Circuit Television Cameras 287 6.3.6 License-Plate Scanners 289 6.3.7 Police Drones 289 6.4 Covert Government Surveillance 290 6.4.1 Wiretaps and Bugs 291 6.4.2 Operation Shamrock 293 6.4.3 Carnivore Surveillance System 294 6.4.4 Covert Activities After 9/11 
294 6.5 US Legislation Authorizing Wiretapping 295 6.5.1 Title III 295 6.5.2 Foreign Intelligence Surveillance Act 296 6.5.3 Electronic Communications Privacy Act 296 6.5.4 Stored Communications Act 297 6.5.5 Communications Assistance for Law Enforcement Act 297 6.6 USA PATRIOT Act 298 6.6.1 Provisions of the Patriot Act 298 6.6.2 National Security Letters 299 6.6.3 Responses to the Patriot Act 300 6.6.4 Successes and Failures 301 6.6.5 Long-Standing NSA Access to Telephone Records 302 6.7 Regulation of Public and Private Databases 303 6.7.1 Code of Fair Information Practices 303 6.7.2 Privacy Act of 1974 305 6.7.3 Fair Credit Reporting Act 305 6.7.4 Fair and Accurate Credit Transactions Act 306 6.7.5 Financial Services Modernization Act 306 6.8 Data Mining by the Government 306 6.8.1 Internal Revenue Service Audits 307 6.8.2 Syndromic Surveillance Systems 307 6.8.3 Telecommunications Records Database 307 6.8.4 Predictive Policing 308 6.8.5 Potential Harms of Profiling 308 6.9 National Identification Card 309 6.9.1 History and Role of the Social Security Number 309 6.9.2 Debate over a National ID Card 310 6.9.3 The REAL ID Act 311 6.10 Information Dissemination 312 6.10.1 Family Education Rights and Privacy Act 313 6.10.2 Video Privacy Protection Act 313 6.10.3 Health Insurance Portability and Accountability Act 313 6.10.4 Freedom of Information Act 313 6.10.5 Tollbooth Records Used in Court 314 6.10.6 Carpenter v. United States 315 6.11 Invasion 316 6.11.1 Telemarketing 316 6.11.2 Loud Television Commercials 316 6.11.3 Requiring Identification for Pseudoephedrine Purchases 317 6.11.4 Advanced Imaging Technology Scanners 317 Summary 318 Further Reading and Viewing 319 Review Questions 320 Discussion Questions 321 In-Class Exercises 322 References 323 An Interview with Jerry Berman 329 7 Computer and Network Security 333 7.1 Introduction 333 7.2 Hacking 334 7.2.1 Hackers, Past and Present 334 7.2.2 Penalties for Hacking 336 7.2.3 Selected Hacking Incidents 337 7.2.4 FBI and the Locked iPhone 337 7.2.5 Case Study: Firesheep 338 7.3 Malware 341 7.3.1 Viruses 341 7.3.2 The Internet Worm 343 7.3.3 Sasser 348 7.3.4 Instant Messaging Worms 348 7.3.5 Conficker 348 7.3.6 Cross-Site Scripting 349 7.3.7 Drive-By Downloads 349 7.3.8 Trojan Horses and Backdoor Trojans 349 7.3.9 Ransomware 349 7.3.10 Rootkits 350 7.3.11 Spyware and Adware 350 7.3.12 Bots and Botnets 350 7.3.13 Security Risks Associated with “Bring Your Own Device” 352 7.4 Cyber Crime and Cyber Attacks 352 7.4.1 Phishing and Spear Phishing 353 7.4.2 SQL Injection 353 7.4.3 Denial-of-Service and Distributed Denial-of-Service Attacks 354 7.4.4 Internet-of-Things Devices Co-opted for DDoS Attack 354 7.4.5 Cyber Crime 354 7.4.6 Politically Motivated Cyber Attacks 356 7.5 Online Voting 361 7.5.1 Motivation for Online Voting 361 7.5.2 Proposals 362 7.5.3 Ethical Evaluation 363 Summary 366 Further Reading and Viewing 367 Review Questions 367 Discussion Questions 368 In-Class Exercises 369 References 370 An Interview with Matt Bishop 377 8 Computer Reliability 381 8.1 Introduction 381 8.2 Data-Entry or Data-Retrieval Errors 382 8.2.1 Disenfranchised Voters 382 8.2.2 False Arrests 383 8.2.3 Utilitarian Analysis: Accuracy of NCIC Records 383 8.3 Software and Billing Errors 384 8.3.1 Errors Leading to System Malfunctions 385 8.3.2 Errors Leading to System Failures 385 8.3.3 Analysis: E-retailer Posts Wrong Price, Refuses to Deliver 386 8.4 Notable Software System Failures 387 8.4.1 Patriot Missile 388 8.4.2 Ariane 5 389 8.4.3 AT&T Long-Distance 
Network 390 8.4.4 Robot Missions to Mars 390 8.4.5 Denver International Airport 392 8.4.6 Tokyo Stock Exchange 393 8.4.7 Direct-Recording Electronic Voting Machines 394 8.5 Therac-25 397 8.5.1 Genesis of the Therac-25 397 8.5.2 Chronology of Accidents and AECL Responses 398 8.5.3 Software Errors 401 8.5.4 Postmortem 402 8.5.5 Moral Responsibility of the Therac-25 Team 403 8.5.6 Postscript 404 8.6 Tesla Version 7.0 (Autopilot) 404 8.6.1 Introduction 404 8.6.2 May 2016 Fatal Accident 405 8.6.3 The Hand-off Problem 406 8.6.4 Assigning Moral Responsibility 406 8.7 Uber Test-Vehicle Accident 408 8.7.1 Introduction 408 8.7.2 Shift to One Human Safety Operator 408 8.7.3 Effort to Eliminate “Bad Experiences” 409 8.7.4 March 18, 2018, Accident 410 8.8 Computer Simulations 411 8.8.1 Uses of Simulation 411 8.8.2 Validating Simulations 412 8.9 Software Engineering 414 8.9.1 Specification 414 8.9.2 Development 415 8.9.3 Validation 416 8.9.4 Evolution 417 8.9.5 Improvement in Software Quality 417 8.9.6 Gender Bias 418 8.9.7 Bias in Training Data Sets for Artificial-Intelligence Systems 419 8.10 Software Warranties and Vendor Liability 419 8.10.1 Shrink-Wrap Warranties 419 8.10.2 Are Software Warranties Enforceable? 421 8.10.3 Should Software Be Considered a Product? 423 8.10.4 Case Study: Incredible Bulk 423 Summary 424 Further Reading and Viewing 427 Review Questions 427 Discussion Questions 428 In-Class Exercises 430 References 430 An Interview with Avi Rubin 437 9 Professional Ethics 439 9.1 Introduction 439 9.2 How Well Developed Are the Computing Professions? 441 9.2.1 Characteristics of a Fully Developed Profession 441 9.2.2 Case Study: Certified Public Accountants 442 9.2.3 How Do Computer-Related Careers Stack Up? 443 9.3 Software Engineering Code of Ethics 444 9.4 Analysis of the Code 453 9.4.1 Preamble 453 9.4.2 Alternative List of Fundamental Principles 454 9.5 Case Studies 455 9.5.1 Software Recommendation 456 9.5.2 Child Pornography 457 9.5.3 Antiworm 458 9.5.4 Consulting Opportunity 460 9.6 Whistle-Blowing 462 9.6.1 Morton Thiokol/NASA 462 9.6.2 Hughes Aircraft 464 9.6.3 US Legislation Related to Whistle-Blowing 466 9.6.4 Morality of Whistle-Blowing 467 Summary 470 Further Reading and Viewing 471 Review Questions 472 Discussion Questions 472 In-Class Exercises 474 References 476 An Interview with Paul Axtell 479 10 Work and Wealth 483 10.1 Introduction 483 10.2 Automation and Employment 484 10.2.1 Automation and Job Destruction 485 10.2.2 Automation and Job Creation 487 10.2.3 Effects of Increase in Productivity 488 10.2.4 Case Study: The Canceled Vacation 490 10.2.5 Rise of the Robots? 
491 10.3 Workplace Changes 495 10.3.1 Organizational Changes 496 10.3.2 Telework 497 10.3.3 The Gig Economy 499 10.3.4 Monitoring 500 10.3.5 Multinational Teams 502 10.4 Globalization 503 10.4.1 Arguments For Globalization 503 10.4.2 Arguments Against Globalization 504 10.4.3 Dot-Com Bust Increased IT Sector Unemployment 505 10.4.4 Foreign Workers in the American IT Industry 505 10.4.5 Foreign Competition 506 10.5 The Digital Divide 507 10.5.1 Global Divide 507 10.5.2 Social Divide 508 10.5.3 Models of Technological Diffusion 508 10.5.4 Critiques of the Digital Divide 510 10.5.5 Massive Open Online Courses 511 10.5.6 Net Neutrality 512 10.6 The “Winner-Take-All” Society 513 10.6.1 Harmful Effects of Winner-Take-All 514 10.6.2 Reducing Winner-Take-All Effects 516 Summary 517 Further Reading and Viewing 518 Review Questions 519 Discussion Questions 519 In-Class Exercises 521 References 522 An Interview with Martin Ford 529 Appendix A: Plagiarism 533 Consequences of Plagiarism 533 Types of Plagiarism 533 Guidelines for Citing Sources 534 How to Avoid Plagiarism 534 Misuse of Sources 534 Additional Information 535 References 535 Appendix B: Introduction to Argumentation 537 B.1 Introduction 537 B.1.1 Arguments and Propositions 537 B.1.2 Conditional Statements 539 B.1.3 Backing 540 B.2 Valid Arguments 540 B.2.1 Affirming the Antecedent (Modus Ponens) 540 B.2.2 Denying the Consequent (Modus Tollens) 541 B.2.3 Process of Elimination 541 B.2.4 Chain Rule 542 B.3 Unsound Arguments 543 B.4 Common Fallacies 544 B.4.1 Affirming the Consequent 544 B.4.2 Denying the Antecedent 545 B.4.3 Begging the Question 545 B.4.4 Slippery Slope 546 B.4.5 Bandwagon Fallacy 546 B.4.6 Faulty Generalization (Hasty Generalization) 547 B.4.7 Division Fallacy 547 B.4.8 The Fallacy of Equivocation 547 B.5 Unfair Debating Gambits 548 B.5.1 Red Herring 548 B.5.2 Ad Hominem Argument 548 B.5.3 Attacking a Straw Man 549 B.6 Writing Persuasive Essays 549 The Government Should Ban Self-Driving Cars 550 Quiz 552 Answers to the Quiz Questions 553 References 554 Preface Computers and high-speed communication networks are transforming our world. These technologies have brought us many benefits, but they have also raised many social and ethical concerns. My view is that we ought to approach every new technology in a thoughtful manner, considering not just its short-term benefits, but also how its long-term use will affect our lives. A thoughtful response to information technology requires a basic understanding of its history, an awareness of current information-technology-related issues, and a familiarity with ethics. I have written Ethics for the Information Age with these ends in mind. Ethics for the Information Age is suitable for college students at all levels. The only prerequisite is some experience using computers and the Internet. The book is appropriate for a stand-alone “computers and society” or “computer ethics” course offered by a computer science, business, or philosophy department. It can also be used as a supplemental textbook in a technical course that devotes some time to social and ethical issues related to computing. As students discuss controversial issues related to information technology, they have the opportunity to learn from one another and improve their critical thinking skills. 
The provocative questions raised at the end of every chapter, together with dozens of in-class exercises, provide many opportunities for students to express their views, learn from their classmates, and refine their positions on important issues. My hope is that through these discussions students will get better at evaluating complex issues and defending their conclusions with facts, sound values, and rational arguments.

What's New in the Eighth Edition

The most significant change in the eighth edition is the addition of Appendix B, which focuses on the structure of logical arguments and some common logical fallacies. The eighth edition also contains four sidebars with practical advice about how to enhance privacy and security. The sidebars explain:
- how to limit the amount of information Google saves about your searches
- how to limit the amount of personal information Facebook releases to others
- how to create a secure password
- how to protect your computer and other Internet-connected devices

The eighth edition covers many new developments and controversies related to the introduction, use, and abuse of information technology in modern society, including:
- safety concerns arising from accidents involving self-driving vehicles
- Cambridge Analytica gaining access to personal information from as many as 87 million Facebook users
- foreign interference in the 2016 US Presidential election using social media platforms
- police obtaining cell phone location records without a search warrant
- the US Supreme Court decision that has led to the invalidation of hundreds of software patents
- whether copying declaring code in APIs should be considered fair use of copyrighted material
- the dispute between the FBI and Apple about unlocking the encrypted iPhone of a terrorist
- how unrepresentative test-data sets can lead to biased artificial-intelligence software
- security risks associated with the “Bring Your Own Device” movement
- distributed denial-of-service attacks carried out by botnets of Internet-of-Things devices, such as baby monitors and security cameras
- the debate whether gig workers should be considered employees or independent contractors
- the new stance of the FCC regarding net neutrality
- the rise of Craigslist and the decline of print newspapers
- final court resolution of the Google Books controversy
- the shift in credit card fraud from point-of-sale fraud to “card not present” fraud
- cloud computing and cloud storage

Finally, I have updated a significant number of facts and figures throughout the book.

Organization of the Book

The book is divided into 10 chapters. Chapter 1 has several objectives: to get the reader thinking about how social conditions can lead to the development of new technologies and how the adoption of new technologies can lead to social change; to provide the reader with an introduction to the history of computing, networking, and information storage and retrieval; and to help the reader understand how the introduction of information technology has raised some new ethical issues.

Chapter 2 is an introduction to ethics. It presents nine different theories of ethical decision making, weighing the pros and cons of each one. Five of these theories—Kantianism, act utilitarianism, rule utilitarianism, social contract theory, and virtue ethics—are deemed the most appropriate “tools” for analyzing moral problems in the remaining chapters.

Chapters 3–10 discuss a wide variety of issues related to the introduction of information technology into society.
I think of these chapters as forming concentric rings around a particular computer user. Chapter 3 is the innermost ring, focusing on communications over cellular networks and the Internet. Issues such as the increase in spam, political activism over social media, government censorship, identity theft, sexting, revenge porn, and Internet addiction raise important questions related to trust, quality of life, free speech, and whether new media are strengthening or weakening democracies. The next ring, Chapter 4 , deals with the creation and exchange of intellectual property. It discusses intellectual property rights; legal safeguards for intellectual property; the definition of fair use; the impact of digital media, peer-to-peer networks, and cyber-lockers; software copyrights and software patents; the legitimacy of intellectual property protection for software; and the rise of the open-source movement. Chapter 5 focuses on information privacy. What is privacy exactly? Is there a natural right to privacy? How do others learn so much about us? The chapter examines the electronic trail that people leave behind when they use a cell phone, drive a car, search the Web, use social media, make credit card purchases, open a bank account, go to a physician, or apply for a loan, and it explains how mining data to predict consumer behavior has become an important industry. It also provides several examples where companies have gone too far with their collection of personal information, and the consumer or political backlash that has resulted. Chapter 6 focuses on privacy and the US government. Using Daniel Solove’s taxonomy of privacy as our organizing principle, we look at how the government has steered between the competing interests of personal privacy and public safety. We consider US legislation to restrict information collection and government surveillance; government regulation of private databases and abuses of large government databases; legislation to reduce the dissemination of information and legislation that has had the opposite effect; and finally government actions to prevent the invasion of privacy as well as invasive government actions. Along the way, we discuss the implications of the USA PATRIOT Act and the debate over the REAL ID Act to establish a de facto national identification card. Chapter 7 focuses on the vulnerabilities of networked computers. A case study focuses on the release of the Firesheep extension to the Firefox Web browser. A section on malware discusses viruses, worms, cross-site scripting, drive-by downloads, Trojan horses, ransomware, rootkits, spyware, botnets, and more. The chapter covers phishing, spear phishing, SQL injection, denial-of-service attacks, and distributed denial-of-service attacks, and how these tools are employed by criminal organizations and even nation states. We conclude with a discussion of the risks associated with online voting. Computerized system failures have led to inconvenienced consumers, lost income for businesses, the destruction of property, human suffering, and even death. Chapter 8 describes some notable software system failures, including the story of the Therac-25 radiation therapy system. It also covers an important contemporary problem: the safety of self-driving automobiles. New sections focus on two fatal accidents: the Florida accident involving a Tesla Model S and the Arizona accident in which an Uber test vehicle struck and killed a pedestrian. 
The chapter also discusses the reliability of computer simulations, the emergence of software engineering as a distinct discipline, and the validity of software warranties. Chapter 9 is particularly relevant for those readers who plan to take jobs in the computer industry. The chapter presents a professional code related to computing, the Software Engineering Code of Ethics and Professional Practice, followed by an analysis of the code. Several case studies illustrate how to use the code to evaluate moral problems related to the use of computers. The chapter concludes with an ethical evaluation of whistle-blowing, an extreme example of organizational dissent. Chapter 10 raises a wide variety of issues related to how information technology has impacted the world of work and the distribution of wealth. Topics include automation, the rise of computerized systems relying on artificial intelligence, telework, workplace monitoring, the gig economy, and globalization. Does automation increase unemployment? Will improvements in artificial intelligence lead to most jobs being taken over by machines? Is there a “digital divide” separating society into “haves” and “have-nots”? Is information technology widening the gap between rich and poor? These are just a few of the important questions the chapter addresses. Note to Instructors In December 2013, a joint task force of the Association for Computing Machinery and the IEEE Computer Society released the final draft of Computer Science Curricula 2013 (www.acm.org/binaries/content/assets/education/cs2013_web_final.pdf). The report recommends that every undergraduate computer science degree program incorporate instruction related to Social Issues and Professional Practice through “a combination of one required course along with short modules in other courses” (Computer Science Curricula 2013, p. 193). Ethics for the Information Age covers nearly all of the core and elective material described in the report, with the notable exception of Professional Communications. Table 1 shows the mapping between the other topics within Social Issues and Professional Practice and the chapters of this book. Table 1 The topics of the Social Issues and Professional Practice Knowledge Area in Computer Science Curricula 2013 mapped to the chapters and appendices of this book. The organization of the book makes it easy to adapt to your particular needs. If your syllabus does not include the history of information technology, you can skip the middle three sections of Chapter 1 and still expose your students to examples motivating the formal study of ethics in Chapter 2 . After Chapter 2 , you may cover the remaining chapters in any order you choose, because Chapters 3 –10 do not depend on one other. Many departments choose to incorporate discussions of social and ethical issues throughout the undergraduate curriculum. The independence of Chapters 3 –10 makes it convenient to use Ethics for the Information Age as a supplementary textbook. You can simply assign readings from the chapters most closely related to the course topic. Supplements The following supplements are available to qualified instructors on Pearson’s Instructor Resource Center. Please contact your local Pearson sales representative or visit www.pearsonhighered.com/educator to access this material. An instructor’s manual provides tips for teaching a course in computer ethics. It also contains answers to all of the review questions. 
A test bank contains nearly 500 multiple-choice, fill-in-the-blank, and essay questions that you can use for quizzes, midterms, and final examinations. A set of PowerPoint lecture slides outlines the material covered in every chapter. Feedback Ethics for the Information Age cites nearly a thousand sources and includes dozens of ethical analyses. Despite my best efforts and those of many reviewers, the book is bound to contain errors. I appreciate getting comments (both positive and negative), corrections, and suggestions from readers. You can reach me through my Web site: www.michaeljquinn.net. Acknowledgments I appreciate the continuing support of a great publications team: portfolio manager Matt Goldstein, portfolio management assistant Meghan Jacoby, managing producer Scott Disanno, senior content producer Erin Ault, project manager Paul Anagnostopoulos, copyeditor Katrina Avery, and proofreader MaryEllen Oliver. A superb group of reviewers provided me with many helpful suggestions regarding new material to incorporate into the eighth edition. My thanks to Rhonda Ficek, Minnesota State University Moorhead; Tom Gallagher, University of Montana; Fred Geldon, George Mason University; Richard Gordon, University of Delaware; Micha Hofri, Worcester Polytechnic Institute; and Tamara Maddox, George Mason University. Matthew Rellihan of Seattle University reviewed the new appendix on logical argumentation, corrected several errors, and provided me with many helpful suggestions for reorganizing the material and improving its presentation. Thank you, Matt, for your valuable contribution to the new edition! I want to recognize all who participated in the creation of the first seven editions or provided useful suggestions for the eighth edition: Paul Anagnostopoulos, Valerie Anctil, Beth Anderson, Bob Baddeley, George Beekman, Brian Breck, Maria Carl, Sherry Clark, Thomas Dietterich, Roger Eastman, Beverly Fusfield, Robert Greene, Jose Guerrero, Peter Harris, Susan Hartman, Michael Hirsch, Michael Johnson, Paulette Kidder, Marilyn Lloyd, Pat McCutcheon, Joshua Noyce, Beth Paquin, Konrad Puczynski, Brandon Quinn, Courtney Quinn, Stuart Quinn, Victoria Quinn, Charley Renn, Gregory Silverman, Lindsey Triebel, Charles Volzka, Shauna Weaver, and Todd Will. Reviewers of previous editions include Ramprasad Bala, University of Massachusetts at Dartmouth; Phillip Barry, University of Minnesota; Bo Brinkman, Miami University; Diane Cassidy, University of North Carolina at Charlotte; Madhavi M. Chakrabarty, New Jersey Institute of Technology; John Clark, University of Colorado at Denver; Timothy Colburn, University of Minnesota Duluth; Lee D. Cornell, Minnesota State University, Mankato; Lorrie Faith Cranor, Carnegie Mellon University; Donna Maria D’Ambrosio, University of South Florida; Dawit Demissie, The Sage Colleges; J.C. Diaz, University of Tulsa; Richard W. Egan, New Jersey Institute of Technology; Fred Geldon, George Mason University; David Goodall, State University of New York at Albany; Richard E. Gordon, University of Delaware; Mike Gourley, University of Central Oklahoma; D.C. Grant, Columbia Basin College; Robert Greene, University of Wisconsin-Eau Claire; Fritz H. Grupe, University of Nevada, Reno; Ric Heishman, George Mason University; Gurdeep Hura, University of Maryland Eastern Shore; Musconda Kapatamoyo, Southern Illinois University, Edwardsville; Christopher Kauggman, George Mason University; Evelyn Lulis, DePaul University; Tamara A. 
Maddox, George Mason University; Aparna Mahadev, Worcester State University; Eric Manley, Drake University; Richard D. Manning, Nova Southeastern University; James Markulic, New Jersey Institute of Technology; John G. Messerly, University of Texas at Austin; Linda O’Hara, Oregon State University; Joe Oldham, Centre College; Mimi Opkins, California State University, Long Beach; Daniel Palmer, Kent State University; Holly Patterson-McNeill, Lewis-Clark State College; Colin Potts, Georgia Tech; Jason Rogers, George Mason University; Medha S. Sarkar, Middle Tennessee State University; Michael Scanlan, Oregon State University; Robert Sloan, University of Illinois at Chicago; Matthew Stockton, Portland Community College; Dorothy Sunio, Leeward Community College; Leon Tabak, Cornell College; Renée Turban, Arizona State University; Scott Vitz, Indiana University–Purdue University Fort Wayne; Todd Will, New Jersey Institute of Technology; David Womack, University of Texas at San Antonio; John Wright, Juniata College; and Matthew Zullo, Wake Technical Community College. Finally, I am indebted to my wife, Victoria, for her support and encouragement. You are a wonderful helpmate. Thanks for everything. Michael J. Quinn Seattle, Washington We never know how high we are Till we are called to rise; And then, if we are true to plan, Our statures touch the skies. The heroism we recite Would be a daily thing, Did not ourselves the cubits warp For fear to be a king. —EMILY DICKINSON, Aspiration I dedicate this book to Shauna, Skyler, Brandon, Courtney, Bridget, and Claire. Know that my love goes with you, wherever your aspirations may lead you. Chapter 1 Catalysts for Change Technology is a useful servant but a dangerous master. —CHRISTIAN LOUS LANGE, Nobel lecture, December 13, 1921 1.1 Introduction WE ARE LIVING IN THE INFORMATION AGE. Never before have so many people had such easy access to information. The two principal catalysts for the Information Age have been low-cost computers and high-speed communication networks, which have made possible the development of exciting new technologies, including smartphones, video streaming services, voice-activated digital assistants, low-cost drones, and self-driving cars (Figure 1.1 ). Figure 1.1 Low-cost computers and high-speed communication networks make possible the products of the Information Age, such as the Samsung Galaxy S9 Plus. It functions as a phone, text messager, email client, Web browser, camera, video recorder, digital compass, and much more. (Hocus-focus/iStock Unreleased/Getty Images) Modern computing and communications systems have profoundly changed the way we live. In 1950 there were no more than a handful of electronic digital computers in the world, and the Internet did not exist. Today we are surrounded by networked devices containing embedded microprocessors, and most of us spend many hours every day engaged with them as we communicate, seek information, play games, listen to music, or watch videos. Our relationship with technology is complicated. We create technology and choose to adopt it. However, once we have adopted a technological device, it can transform us and how we relate to other people and our environment. Some of the transformations are physical. The neural pathways and synapses in our brains demonstrate neuroplasticity: literally changing with our experiences. One well-known brain study focused on London taxi drivers. 
In order to get a license, aspiring London taxi drivers must spend two to four years memorizing the complicated road network of 25,000 streets within 10 kilometers of the Charing Cross train station, as well as the locations of thousands of tourist destinations. The hippocampus is the region of the brain responsible for long-term memory and spatial navigation. Neuroscientists at University College London found that the brains of London taxi drivers have larger-than-average hippocampi and that the hippocampi of aspiring taxi drivers grow as they learn the road network. Stronger longer-term memory and spatial navigation skills are great outcomes of mental exercise, but sometimes the physical effects of our mental exertions are more insidious. For example, studies with macaque monkeys suggest that when we satisfy our hunger for quick access to information through our use of Web browsers, Facebook, Twitter, and texting, neurons inside our brains release dopamine, producing a desire to seek out additional information, causing further releases of dopamine, and so on, which may explain why we find it difficult to break away from these activities [2, 3]. Adopting a technology can change our perceptions, too. More than 90 percent of cell phone users report that having a cell phone makes them feel safer, but once people get used to carrying a cell phone, losing the phone may make them feel more vulnerable than they ever did before they began carrying one. A Rutgers University professor asked his students to go without their cell phones for 48 hours. Some students couldn’t do it. A female student reported to the student newspaper, “I felt like I was going to get raped if I didn’t have my cell phone in my hand.” Some parents purchase cell phones for their children so that a child may call a family member in an emergency. However, parents who provide a cell-phone “lifeline” may be implicitly communicating to their children the idea that people in trouble cannot expect help from strangers. The Amish understand that the adoption of a new technology can affect the way people relate to each other (Figure 1.2 ). Amish bishops meet twice a year to discuss matters of importance to the church, including whether any new technologies should be allowed. Their discussion about a new technology is driven by the question, “Does it bring us together, or draw us apart?” You can visit an “Old Order” Amish home and find a gas barbecue on the front porch but no telephone inside, because they believe gas barbecues bring people together while telephones interfere with face-to-face conversations. Figure 1.2 The Amish carefully evaluate new technologies, choosing those that enhance family and community solidarity. (AP photo/The Indianapolis Star and News, Mike Fender) Most of us appreciate the many beneficial changes that technology has brought into our lives. In health care alone, computed tomography (CT) and magnetic resonance imaging (MRI) scanners have greatly improved our ability to diagnose major illnesses; new vaccines and pharmaceuticals have eradicated some deadly diseases and brought others under control; and pacemakers, hearing aids, and artificial joints have improved the physical well- being of millions. New technologies are adopted to solve problems, but they often create problems, too. The automobile has given people the ability to travel where they want, when they want. On the other hand, millions of people spend an hour or more each day stuck in traffic commuting between home and work. 
Commuters frustrated by slow freeway traffic turn to mobile apps like Waze to find shortcuts, but when too many drivers follow these apps, long lines at exit ramps can actually increase freeway congestion for the remaining vehicles, and cars taking shortcuts can overwhelm side streets and clog intersections, frustrating local residents. The Web contains billions of pages and makes possible extraordinarily valuable information retrieval systems. Even grade-school children are expected to gather information from the Web when writing their reports. However, many parents worry that their Web-surfing children may be exposed to pornographic or violent images or other inappropriate material. New communication technologies have made it possible for us to get access to news and entertainment from around the world. However, the same technologies have enabled major software companies to move thousands of jobs to India, China, and Vietnam, putting downward pressure on the salaries of computer programmers in the United States. We may not be able to prevent a new technology from being invented, but we do have control over whether to adopt it. Nuclear power is a case in point. Nuclear power plants create electricity without producing carbon dioxide emissions, but they also produce radioactive waste products that must be safely stored for 100,000 years. Although nuclear power technology is available, no new nuclear power plants were built in the United States for more than 25 years after the accident at Three Mile Island in 1979. Finally, we can influence the rate at which new technologies are developed. Some societies, such as the United States, have a history of nurturing and exploiting new inventions. Congress has passed intellectual property laws that allow people to make money from their creative work, and the federal income tax structure allows individuals to accumulate great wealth. To sum up, societies develop new technologies to solve problems or make life better, but the use of new technologies changes social conditions and may create new problems. That doesn’t mean we should never adopt a new technology, but it does give us a good reason why we should be making informed decisions, weighing the benefits and potential harms associated with the use of new devices. To that end, this book will help you gain a better understanding of contemporary ethical issues related to the use of information technology. This chapter sets the stage for the remainder of the book. Electronic digital computers and high-performance communication networks are central to contemporary information technology. While the impact of these inventions has been dramatic in the past few decades, their roots go back hundreds of years. Section 1.2 tells the story of the development of computers, showing how they evolved from simple manual calculation aids to complex microprocessors. In Section 1.3 we describe two centuries of progress in networking technology, starting with the semaphore telegraph and culminating in the creation of an email system connecting over a billion users. Section 1.4 shows how information storage and retrieval evolved from the creation of the Greek alphabet to Google. Finally, Section 1.5 discusses some of the moral issues that have arisen from the deployment of information technology. 1.2 Milestones in Computing Calculating devices have supported the development of commercial enterprises, governments, science, and weapons. 
As you will see in this section, the introduction of new technologies has often had a social impact.

1.2.1 Aids to Manual Calculating

Adding and subtracting are as old as commerce and taxes. Fingers and toes are handy calculation aids, but to manipulate numbers above 20, people need more than their own digits. The tablet, the abacus, and mathematical tables are three important aids to manual calculating.

Simply having a tablet to write down the numbers being manipulated is a great help. In ancient times, erasable clay and wax tablets served this purpose. By the late Middle Ages, Europeans often used erasable slates. Paper tablets became common in the nineteenth century, and they are still popular today.

An abacus is a computing aid in which a person performs arithmetic operations by sliding counters along rods, wires, or lines. The first abacus was probably developed in the Middle East more than 2,000 years ago. In a Chinese, Japanese, or Russian abacus, counters move along rods or wires held in a rectangular frame. Beginning in medieval Europe, merchants performed their calculations by sliding wooden or metal counters along lines drawn in a wooden counting board (Figure 1.3). Eventually, the word “counter” came to mean not only the disk being manipulated but also the place in a store where transactions take place.

Figure 1.3 This illustration from Gregor Reisch’s Margarita Philosophica, published in 1503, shows two aids to manual calculating. The person on the left is using a tablet; the person on the right is adding numbers using a counting board, a type of abacus. (Library of Congress Prints and Photographs Division [LC-USZ62-95297])

Mathematical tables have been another important aid to manual computing for about 2,000 years. A great breakthrough occurred in the early seventeenth century, when John Napier and Johannes Kepler published tables of logarithms. These tables were tremendous time-savers to anyone doing complicated math because they allowed them to multiply two numbers by simply adding their logarithms. Many other useful tables were created as well. For example, businesspeople consulted tables to compute interest and convert between currencies. Today people who compute their income taxes “by hand” use tax tables to determine how much they owe.

Even with tablets, abacuses, and mathematical tables, manual calculating is slow, tedious, and error-prone. To make matters worse, mathematical tables prepared centuries ago usually contained errors. That’s because somebody had to compute each table entry and somebody had to typeset each entry, and errors could occur in either of these steps. Advances in science, engineering, and business in the post-Renaissance period motivated European inventors to create new devices to make calculations faster and more reliable and to automate the printing of mathematical tables.

1.2.2 Mechanical Calculators

Blaise Pascal had a weak physique but a powerful mind. When he got tired of summing by hand long columns of numbers given him by his father, a French tax collector, he constructed a mechanical calculator to speed the chore. Pascal’s calculator, built in 1640, was capable of adding whole numbers containing up to six digits. Inspired by Pascal’s invention, the German Gottfried Leibniz constructed a more sophisticated calculator that could add, subtract, multiply, and divide whole numbers. The hand-cranked machine, which he called the Step Reckoner, performed multiplications and divisions through repeated additions and subtractions, respectively.
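The repeated-addition and repeated-subtraction strategy that the Step Reckoner carried out mechanically can be sketched in a few lines of modern code. The snippet below is a minimal, purely illustrative sketch; the function names are hypothetical and are not taken from the text or from Leibniz's design.

```python
# Illustrative sketch only: multiplication by repeated addition and division by
# repeated subtraction, the strategy the Step Reckoner mechanized with gears.

def multiply(a: int, b: int) -> int:
    """Multiply two non-negative whole numbers by adding a to a running total b times."""
    total = 0
    for _ in range(b):
        total += a
    return total

def divide(dividend: int, divisor: int) -> tuple[int, int]:
    """Divide by repeated subtraction; returns (quotient, remainder)."""
    quotient = 0
    while dividend >= divisor:
        dividend -= divisor
        quotient += 1
    return quotient, dividend

print(multiply(6, 7))   # 42
print(divide(45, 6))    # (7, 3)
```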
The calculators of Pascal and Leibniz were not reliable, however, and did not enjoy commercial success. In the nineteenth century, advances in machine tools and mass-production methods, combined with larger markets, made possible the creation of practical calculating machines. Frenchman Charles Thomas de Colmar utilized the stepped-drum gear mechanism invented by Leibniz to create the Arithmometer, the first commercially successful calculator. Many insurance companies purchased Arithmometers to help their actuaries compute rate tables more rapidly. Swedish publisher Georg Scheutz was intimately familiar with printing errors associated with the production of mathematical tables. He resolved to build a machine capable of automatically calculating and typesetting table values. Scheutz knew about the earlier work of English mathematician Charles Babbage, who had demonstrated how a machine could compute the values of polynomial functions through the method of differences. Despite promising early results, Babbage’s efforts to construct a full-scale difference engine had been unsuccessful. In contrast, Georg Scheutz and his son Edvard, who developed their own designs, completed the world’s first printing calculator: a machine capable of calculating mathematical tables and typesetting the values onto molds. The Dudley Observatory in Albany, New York, purchased the Scheutz difference engine in 1856. With support from the US Nautical Almanac Office, astronomers used the machine to help them compute the motion of Mars and the refraction of starlight. Difference engines were never widely used; the technology was eclipsed by the emergence of simpler and less expensive calculating machines. America in the late 1800s was fertile ground for the development of new calculating technologies. This period of American history, commonly known as the Gilded Age, was characterized by rapid industrialization, economic expansion, and a concentration of corporate power. Corporations merged to increase efficiency and profits, but the new, larger corporate organizations had multiple layers of management and multiple locations. In order for middle- and upper-level managers to monitor and improve performance, they needed access to up-to-date, comprehensive, reliable, and affordable information. All these requirements could not be met by bookkeepers and accountants using pen and paper to sum long columns of transactions by hand. To meet this demand, many entrepreneurs began producing adding and calculating machines. One of these inventors was William Burroughs, a former bank clerk who had spent long days adding columns of figures. Burroughs devised a practical adding machine and offered it for sale. He found himself in a cutthroat market; companies competed fiercely to reduce the size of their machines and make them faster and easier to use. Burroughs distinguished himself from his competitors by putting together first-class manufacturing and marketing organizations, and by the 1890s the Burroughs Adding Machine Company led the industry. Calculating machines were entrenched in the offices of large American corporations by the turn of the century. The adoption of mechanical calculators led to the “de-skilling” and “feminization” of bookkeeping (Figure 1.4 ). Before the introduction of calculating machines, offices were a male bastion, and men who could rapidly compute sums by hand were at a premium. Calculators leveled the playing field, making people of average ability quite productive. 
In fact, a 1909 Burroughs study concluded that a clerk using a calculator was six times faster than a clerk adding the same column of figures by hand. As managers introduced mechanical calculators into offices, they replaced male bookkeepers with female bookkeepers and lowered wages. In 1880 only 5.7 percent of bookkeepers, cashiers, and accountants were women, but by 1910 the number of women in these jobs had risen to 38.5 percent. Figure 1.4 Mechanical calculators led to the “de-skilling” and “feminization” of bookkeeping. (Automatic Data Processing (ADP)) 1.2.3 Cash Register Store owners in the late 1800s faced challenges related to accounting and embezzlement. Keeping accurate sales records was becoming more difficult as smaller stores evolved into “department stores” with several departments and many clerks. Preventing embezzlement was tricky when clerks could steal cash simply by not creating receipts for some sales. While on a European holiday in 1878, Ohio restaurateur James Ritty saw a mechanical counter connected to the propeller shaft of his ship. A year later he and his brother John used that concept to construct the first cash register, essentially an adding machine capable of expressing values in dollars and cents. Enhancements followed rapidly, and by the early 1900s the cash register had become an important information-processing device (Figure 1.5 ). Cash registers created printed, itemized receipts for customers, maintained printed logs of transactions, and performed other accounting functions that provided store owners with the detailed sales records they needed. Figure 1.5 An NCR cash register in Miller’s Shoe Shine Parlor, Dayton, Ohio (1904). (The NCR Archive at Dayton History) Cash registers also made embezzlement by clerks more difficult. The bell made it impossible for clerks to sneak money from the cash drawer and helped ensure that every sale was “rung up.” Printed logs made it easy for department store owners to compare cash on hand against sales receipts. 1.2.4 Punched-Card Tabulation As corporations and governmental organizations grew larger in the late 1800s, they needed to handle greater volumes of information. One of these agencies was the US Bureau of the Census, which collected and analyzed information on tens of millions of residents every decade. Aware of the tedium and errors associated with clerks manually copying and tallying figures, several Census Bureau employees developed mechanical tabulating machines. Herman Hollerith created the most successful device. Unlike a predecessor, who chose to record information on rolls of paper, Hollerith decided to record information on punched cards. The use of punched cards to store data was a much better approach because cards could be sorted into groups, allowing the computation of subtotals by categories. Hollerith’s equipment proved to be a great success when used in the 1890 census. In contrast to the 1880 census, which had required eight years to complete, the 1890 census was finished in only two years. Automating the census saved the Census Bureau five million dollars, about one-third of its annual budget. Other data-intensive organizations found applications for punched cards. Railroads used them to improve their accounting operations and send bills out more frequently. Retail organizations, such as Marshall Field’s, used punched cards to perform more sophisticated analyses of information generated by the cash registers at its many department stores. 
The Pennsylvania Steel Company and other heavy industries began to use punched-card technology to do cost accounting on manufacturing processes. The invention of sorters, tabulators, and other devices to manipulate the data on punched cards created a positive feedback loop. As organizations began using tabulating machines, they thought up new uses for information-processing equipment, stimulating further technological innovations.

International Business Machines (IBM) is the corporate descendant of Hollerith’s company. Over a period of several decades, IBM and its principal competitor, Remington Rand, developed sophisticated machines based on punched cards: card punches, card verifiers, card tabulators, card sorters, and alphabetizers. Customers used these devices to create data-processing systems that received input data, performed one or more calculations, and produced output data. Within these systems, punched cards stored input data, intermediate results, and output data. In the most complicated systems, punched cards also stored the program—the steps of the computational process to be followed. Early systems relied on human operators to carry cards from one machine to the next. Later systems had electrical connections that allowed the output of one machine to be transmitted to the next machine without the use of punched cards or human intervention.

Organizations with large data-processing needs found punched-card tabulators and calculators to be valuable devices, and they continually clamored for new features that would improve the computational capabilities and speed of their systems. These organizations would become a natural market for commercial electronic digital computers.

Some customers of data-processing equipment used these systems for nefarious purposes. For example, IBM machines played an infamous role in the Holocaust. After Adolf Hitler came to power in Germany in 1933, IBM chief executive Thomas J. Watson overlooked well-publicized accounts of anti-Semitic violence and the opening of concentration camps, focusing instead on a golden business opportunity. The firm expanded the operations of its German subsidiary, Dehomag, built a new factory in Germany, and actively sought business from the German government. Tabulating, sorting, collating, and alphabetizing machines and support services provided by Dehomag allowed the Nazi government to conduct rapid censuses, identify acknowledged Jews and those with Jewish ancestors, and generate the alphabetical lists of names needed to efficiently seize their assets, confine them to ghettos, and deport them to death camps.

1.2.5 Precursors of Commercial Computers

Several computing devices developed during and immediately after World War II paved the way for the commercialization of electronic digital computers. Between 1939 and 1941, Iowa State College professor John Atanasoff and his graduate student Clifford Berry constructed an electronic device for solving systems of linear equations. The Atanasoff-Berry Computer was the first computing device built with vacuum tubes, but it was not programmable.

Dr. John W. Mauchly, a physics professor at the University of Pennsylvania, visited Iowa State College in 1941 to learn more about the Atanasoff-Berry Computer. After he returned to Penn, Mauchly worked with J. Presper Eckert to create a design for an electronic computer to speed the computation of artillery tables for the US Army. They led a team that completed work on the ENIAC (electronic numerical integrator and computer) in 1946.
As it turns out, the war ended before the ENIAC could provide the Army with any ballistics tables, but its speed was truly impressive. A person with a desk calculator could compute a 60-second trajectory in 20 hours. The ENIAC performed the computation in 30 seconds. In other words, the ENIAC was 2,400 times faster than a person with a desk calculator (20 hours is 72,000 seconds, and 72,000 ÷ 30 = 2,400).

The ENIAC had many features of a modern computer. All its internal components were electronic, and it could be programmed to perform a variety of computations. However, its program was not stored inside memory. Instead, it was “wired in” from the outside. Reprogramming the computer meant removing and reattaching many wires. This process could take many days (Figure 1.6).

Figure 1.6 The ENIAC’s first six programmers were women. Every instruction was programmed by connecting several wires into plugboards. (Corbis Historical/Getty Images)

Even before the ENIAC was completed, work began on a follow-on system called the EDVAC (electronic discrete variable automatic computer). The design of the EDVAC incorporated many improvements over the ENIAC. The most important improvement was that the EDVAC stored the program in primary memory, along with the data manipulated by the program.

In 1946 Eckert, Mauchly, and several other computer pioneers gave a series of 48 lectures at the Moore School of Electrical Engineering at the University of Pennsylvania. While some of the lectures discussed lessons learned from the ENIAC, others focused on the design of its successor, the EDVAC. These lectures influenced the design of future machines built in the United States and the United Kingdom.

During World War II, British engineer F. C. Williams was actively involved in the development of cathode ray tubes (CRTs) used in radar systems. After the war, he decided to put his knowledge to use by figuring out how to use a CRT as a storage device for digital information. In early 1948 a team at the University of Manchester set out to build a small computer that would use a CRT storage device, now called the Williams Tube, to store the program and its data. They called their system the Small-Scale Experimental Machine. The computer successfully executed its first program in 1948. The Small-Scale Experimental Machine was the first operational, fully electronic computer system that had both program and data stored in its memory.

1.2.6 First Commercial Computers

In 1951 the British corporation Ferranti Ltd. introduced the Ferranti Mark 1, the world’s first commercial computer. The computer was the direct descendant of research computers constructed at the University of Manchester. Ferranti delivered nine computers between 1951 and 1957, and later Ferranti models boasted a variety of technological breakthroughs, thanks to the company’s close association with research undertaken at the University of Manchester and Cambridge University.

After completing work on the ENIAC, Eckert and Mauchly formed their own company to produce a commercial digital computer. The Eckert-Mauchly Computer Corporation signed a preliminary agreement with the National Bureau of Standards (representing the Census Bureau) in 1946 to develop a commercial computer, which they called the UNIVAC, for “universal automatic computer.” The project experienced huge cost overruns, and by 1950 the Eckert-Mauchly Computer Corporation was on the brink of bankruptcy. Remington Rand bought them out and delivered the UNIVAC I to the US Bureau of the Census in 1951.
In a public relations coup, Remington Rand cooperated with CBS to use a UNIVAC computer to predict the outcome of the 1952 presidential election (Figure 1.7). The events of election night illustrate the tough decisions people can face when computers produce unexpected results.

Figure 1.7 CBS news coverage of the 1952 presidential election included predictions made by a UNIVAC computer. When the computer predicted Eisenhower would win in a landslide, consternation followed. (Photo reproduced courtesy of Unisys Corporation)

Adlai Stevenson had led Dwight Eisenhower in polls taken before the election, but less than an hour after voting ended, with just 7 percent of the votes tabulated, the UNIVAC was predicting Dwight Eisenhower would win the election in a landslide. When CBS correspondent Charles Collingwood asked Remington Rand for the computer’s prediction, however, he was given the run-around. The computer’s engineers were convinced there was a programming error. For one thing, UNIVAC was predicting that Eisenhower would carry several Southern states, and everybody “knew” that Republican presidential candidates never won in the South. Remington Rand’s director of advanced research ordered the engineers to change the programming so the outcome would be closer to what the political pundits expected. An hour later, the reprogrammed computer predicted that Eisenhower would win by only nine electoral votes, and that’s what CBS announced.

As it turns out, the computer was right and the human “experts” were wrong. Before being reprogrammed, UNIVAC had predicted Eisenhower would win 438 electoral votes to 93 for Stevenson. The official result was a 442–89 victory for Eisenhower.

In America in the early 1950s, the word “UNIVAC” was synonymous with “computer.” Remington Rand sold a total of 46 UNIVACs to government agencies, such as the US Air Force, the US Army Map Service, the Atomic Energy Commission, and the US Navy, as well as to large corporations and public utilities, such as General Electric, Metropolitan Life, US Steel, Du Pont, Franklin Life Insurance, Westinghouse, Pacific Mutual Life Insurance, Sylvania Electric, and Consolidated Edison.

Office automation leader IBM did not enter the commercial computer market until 1953, and its initial products were inferior to the UNIVAC. However, IBM quickly turned the tables on Remington Rand, thanks to a larger base of existing customers, a far superior sales and marketing organization, and a much greater investment in research and development. In 1955 IBM held more than half the market, and by the mid-1960s IBM dominated the computer industry with 65 percent of total sales, compared to 12 percent for number-two computer maker Sperry Rand (the successor to Remington Rand).

1.2.7 Programming Languages and Time-Sharing

In the earliest digital computers, every instruction was coded as a long string of 0s and 1s. People immediately began looking for ways to make coding faster and less error-prone. One early improvement was the creation of assembly language, which allowed programmers to work with symbolic representations of the instruction codes. Still, one assembly-language instruction was required for every machine instruction. Programmers wanted to write fewer, higher-level instructions, each of which would generate multiple machine instructions.
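The gap between one high-level statement and the several lower-level instructions it expands into is easy to see even today. As a rough modern analogy (Python bytecode standing in for 1950s machine code), the short sketch below uses the standard dis module to display the lower-level instructions generated for a single line of source code.

```python
# Disassemble a one-line function to show how a single high-level
# statement expands into several lower-level instructions.
# (Python bytecode is only an analogy for 1950s machine code; the exact
# opcodes printed will vary from one Python version to another.)
import dis

def average(a, b, c):
    return (a + b + c) / 3.0   # one source line

dis.dis(average)   # prints the sequence of lower-level instructions
```

Early compilers such as FORTRAN performed an analogous translation, turning each source statement into many machine instructions.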
In 1951 Frances Holberton, one of the six original ENIAC programmers, created a sort-merge generator for the UNIVAC that took a specification of files to be manipulated and automatically produced the machine program to do the sorting and merging. Building on this work, Grace Murray Hopper, also at Remington Rand, developed the A-0 system that automated the process of linking together subroutines to form the complete machine code.

Over at IBM, John Backus convinced his superiors of the need for a higher-level programming language for IBM computers. He led the effort to develop the IBM Mathematical Formula Translating System, or FORTRAN. Designed for scientific applications, the first system was completed in 1957. Many skeptics believed that any “automatic programming” system would generate inefficient machine code compared to hand-coded assembly language, but they were proven wrong: the FORTRAN compiler generated high-quality code. What’s more, programmers could write FORTRAN programs 5 to 20 times faster than the equivalent assembly language programs. Most programmers quickly shifted allegiance from assembly language to FORTRAN. Eventually, other computer manufacturers developed their own FORTRAN compilers, and FORTRAN became an international standard.

Meanwhile, business-oriented programming languages were also being developed by several computer manufacturers. Grace Murray Hopper specified FLOW-MATIC, an English-like programming language for the UNIVAC. Other manufacturers began to develop their own languages. Customers didn’t like incompatible languages, because it meant programs written for one brand of computer had to be rewritten before they could be run on another brand of computer. In 1959 an extremely important customer, the US Department of Defense, brought together a committee to develop a common business-oriented programming language that all manufacturers would support. The committee wrote the specification for COBOL. By requiring manufacturers to support COBOL in order to get defense contracts, the US Department of Defense helped ensure its widespread adoption.

In the early 1960s, John Kemeny and Thomas Kurtz at Dartmouth College directed teams of undergraduate students who developed a time-sharing system and an easy-to-learn programming language. The Dartmouth Time-Sharing System (DTSS) gave multiple users the ability to edit and run their programs simultaneously, by dividing the computer’s time among all the users. Time-sharing made computers accessible to more people because it allowed the cost of owning and operating a computer system to be divided among a large pool of users who purchased the right to connect to the system. The development of BASIC, a simple, easy-to-learn programming language, was another important step toward making computers accessible to a wider audience. Kemeny and Kurtz saw BASIC as a way to teach programming, and soon many other educational institutions began teaching students how to program using Dartmouth BASIC. The language’s popularity led computer manufacturers to develop their own versions of BASIC.

1.2.8 Transistor and Integrated Circuit

Although the British had radar installations at the beginning of World War II, it became clear during the Battle of Britain that their systems were inadequate. The British and the Americans worked together to develop microwave radar systems capable of locating enemy planes more precisely.
Microwave radar required higher-frequency receivers utilizing semiconductors, and in the process of manufacturing microwave radar systems for the war effort, several American companies, including AT&T, greatly improved their ability to create semiconductors. AT&T was on the lookout for a new technology to replace the vacuum tube. Its long-distance network relied on vacuum tubes to amplify signals, but the tubes required a lot of power, generated a lot of heat, and burned out like lightbulbs. After the war, AT&T put together a team of Bell Labs scientists, led by Bill Shockley, to develop a semiconductor substitute for the vacuum tube. In 1948 Bell Labs announced the invention of such a device, which they called the transistor.

While most electronics companies ignored the invention of the transistor, Bill Shockley understood its potential. He left Bell Labs and moved to Palo Alto, California, where he founded Shockley Semiconductor in 1956. He hired an exceptional team of engineers and physicists, but many disliked his heavy-handed management style. In September 1957, eight of Shockley’s most talented employees, including Gordon Moore and Robert Noyce, walked out. The group, soon to be known as the “traitorous eight,” founded Fairchild Semiconductor (Figure 1.8). By this time transistors were being used in a wide variety of devices, from transistor radios to computers. While transistors were far superior to vacuum tubes, they were still too big for some applications. Fairchild Semiconductor set out to produce a single semiconductor device containing transistors, capacitors, and resistors; in other words, an integrated circuit. Another firm, Texas Instruments, was on the same mission. Today Robert Noyce of Fairchild Semiconductor and Jack Kilby of Texas Instruments are credited for independently inventing the integrated circuit.

Figure 1.8 The eight founders of Fairchild Semiconductor on the factory floor. Gordon Moore is second from the left and Robert Noyce is on the right. (Wayne Miller/Magnum Photos, Inc.)

The Cold War between the United States and the Soviet Union played an important role in advancing integrated circuit technology. American engineers developing the Minuteman II ballistic missile in the early 1960s decided to use integrated circuits to improve the processing speed of its guidance computer. The Minuteman II program was the single largest consumer of integrated circuits in the United States between 1962 and 1965, representing about 20 percent of total sales. During these years companies learned how to make rugged, reliable integrated circuits. They also continued to shrink the components within the integrated circuits, leading to an exponential increase in their power. Gordon Moore noted this trend in a 1965 paper and predicted it would continue. Today Moore’s law refers to the phenomenon that the number of transistors in the most powerful integrated circuits doubles roughly every two years.

1.2.9 IBM System/360

The integrated circuit made possible the construction of much more powerful and reliable computers. The 1960s was the era of mainframe computers—large computers designed to serve the data-processing needs of large businesses. Mainframe computers enabled enterprises to centralize all their data-processing applications in a single system. As we have seen, by this time IBM dominated the mainframe market in the United States.
In 1964 IBM unveiled the System/360, a series of 19 compatible computers with varying levels of computing speed and memory capacity (Figure 1.9). Because the systems were software compatible, a business could upgrade its computer without having to rewrite its application programs. This feature was important, because by the 1960s companies were making much larger investments in software.

Figure 1.9 In the 1960s, IBM dominated the mainframe computer market in the United States. (H. Armstrong Roberts/Classic Stock/Alamy)

1.2.10 Microprocessor

In 1968 Robert Noyce and Gordon Moore left Fairchild Semiconductor to found another semiconductor manufacturing company, which they named Intel. A year later Japanese calculator manufacturer Busicom approached Intel about designing 12 custom chips for use in a new scientific calculator. Intel agreed to provide the chips and assigned responsibility for the project to Marcian “Ted” Hoff. After reviewing the project, Hoff suggested that it was not in Intel’s best interest to manufacture a custom chip for every customer. As an alternative, he suggested that Intel create a general-purpose chip that could be programmed to perform a wide variety of tasks. Each customer could then program the chip to meet its particular needs. Intel and Busicom agreed to the plan, which reduced the required number of chips for Busicom’s calculator from 12 to 4. A year of development by Ted Hoff, Stanley Mazor, and Federico Faggin led to the release of the Intel 4004, the world’s first microprocessor. Inside the 1/8-inch × 1/6-inch chip were 2,300 transistors, giving the Intel 4004 the same computing power as the ENIAC, which had occupied 3,000 cubic feet.

Microprocessors made it possible to integrate computers into everyday devices. Today we’re surrounded by devices containing microprocessors: smartphones, streaming media players, smart speakers with voice-controlled personal assistants, learning thermostats, video doorbells, augmented reality glasses, self-driving cars, and much more. The highest-profile use of microprocessors, however, is in personal computers.

1.2.11 Personal Computer

During the Vietnam conflict in the late 1960s and early 1970s, the area around San Francisco was home to a significant counterculture, including a large number of antiwar and antiestablishment activists. The do-it-yourself idealism of the power-to-the-people movement intersected with advances in computer technology in a variety of ways, including the Whole Earth Catalog, the People’s Computer Company, and the Homebrew Computer Club.

The Whole Earth Catalog, first published in 1968, was, in the words of Steve Jobs, “sort of like Google in paperback form”—an effort to pull together in a single large volume lists of helpful tools, in this case for the creation of a more just and environmentally sensitive society. The definition of “tools” was broad; the catalog’s lists included books, classes, garden tools, camping equipment, and (in later issues) early personal computers. “With the Whole Earth Catalog, Stewart Brand offered a generation of computer engineers and programmers an alternative vision of technology as a tool for individual and collective transformation” [24, p. 104].

The People’s Computer Company was a not-for-profit corporation dedicated to educating people on how to use computers. One of its activities was publishing a newspaper.
The cover of the first issue read: “Computers are mostly used against people instead of for people, used to control people instead of to free them, time to change all that—we need a PEOPLE’S COMPUTER COMPANY”. Typical issues contained programming tips and the source code to programs, particularly educational games written in BASIC. The newspaper’s publisher, Bob Albrecht, said, “I was heavily influenced by the Whole Earth Catalog. I wanted to give away ideas” [24, p. 114]. The People’s Computer Company also set up the People’s Computer Center in a strip mall in Menlo Park, California. The center allowed people to rent teletype terminals connected to a timeshared computer. A large number of teenagers were drawn to computing through Friday evening game-playing sessions. Many users wrote their own programs, and the center promoted a culture in which computer enthusiasts freely shared software with each other. In 1975 the Homebrew Computer Club, an outgrowth of the People’s Computer Company, became a meeting place for hobbyists interested in building personal computers out of microprocessors. A company in Albuquerque, New Mexico, called MITS had recently begun shipping the Altair 8800 personal computer, and during the first few Homebrew Computer Club meetings, members showed off various enhancements to the Altair 8800. Progress was frustratingly slow, however, due to the lack of a higher-level programming language. Three months after the establishment of the Homebrew Computer Club, MITS representatives visited Palo Alto, California, to demonstrate the Altair 8800 and the BASIC interpreter created by Paul Allen and Bill Gates, who had a tiny company called Micro-Soft. The audience in the hotel conference room was far larger than expected, and during the overcrowded and chaotic meeting somebody acquired a paper tape containing the source code to Altair BASIC. More than 70 copies of the tape were handed out at the next meeting of the Homebrew Computer Club. After that, free copies of the interpreter proliferated. Some hobbyists felt that the asking price of $500 for the BASIC interpreter was too high, considering that the Altair computer itself cost only $395 as a kit or $495 preassembled. Bill Gates responded by writing “An Open Letter to Hobbyists,” which was reprinted in a variety of publications. In the letter he asserted that less than 10 percent of all Altair owners had purchased BASIC, even though far more people than that were using it. According to Gates, the royalties Micro-Soft had received from Altair BASIC made the time spent on the software worth less than $2 an hour. He wrote, “Nothing would please me more than being able to hire 10 programmers and deluge the hobby market with good software,” but the theft of software created “very little incentive” for his company to release new products. The controversy over Altair BASIC did not slow the pace of innovations. Hobbyists wanted to do more than flip the toggle switches and watch the lights blink on the Altair 8800. Steve Wozniak, a computer engineer at Hewlett-Packard, created a more powerful personal computer that supported keyboard input and television monitor output. Wozniak’s goal was to make a machine for himself and to impress other members of the Homebrew Computer Club, but his friend Steve Jobs thought of a few improvements and convinced Wozniak they should go into business (Figure 1.10 ). They raised $1,300 by selling Jobs’s Volkswagen van and Wozniak’s Hewlett-Packard scientific calculator, launching Apple Computer. 
Although the company sold only 200 Apple I computers, its next product, the Apple II, became one of the most popular personal computers of all time.

Figure 1.10 Steve Jobs (right) convinced Steve Wozniak (left) they should go into business selling the personal computer Wozniak designed. They named their company Apple Computer. (Kimberly White/Reuters)

By the end of the 1970s, many companies, including Apple Computer and Tandy, were producing personal computers. While hundreds of thousands of people bought personal computers for home use, businesses were reluctant to move to the new computer platform. However, two significant developments made personal computers more attractive to businesses.

The first development was the computer spreadsheet program. For decades firms had used spreadsheets to make financial predictions. Manually computing spreadsheets was monotonous and error-prone, since changing a value in a single cell could require updating many other cells. In the fall of 1979, Dan Bricklin and Bob Frankston released their program, called VisiCalc, for the Apple II. VisiCalc’s labor-saving potential was obvious to businesses. After a slow start, it quickly became one of the most popular application programs for personal computers. Sales of the Apple II computer increased significantly after the introduction of VisiCalc.

The second development was the release of the IBM PC in 1981. The IBM name exuded reliability and respectability, making it easier for companies to make the move to desktop systems for their employees. As the saying went, “Nobody ever got fired for buying from IBM.” In contrast to the approach taken by Apple Computer, IBM decided to make its PC an open architecture, meaning the system was built from off-the-shelf parts and other companies could manufacture “clones” with the same functionality. This decision helped to make the IBM PC the dominant personal computer architecture.

The success of IBM-compatible PCs fueled the growth of Microsoft. In 1980 IBM contracted with Microsoft to provide the DOS operating system for the IBM PC. Microsoft let IBM have DOS for practically nothing, but in return IBM gave Microsoft the right to collect royalties from other companies manufacturing PC-compatible computers. Microsoft profited handsomely from this arrangement when PC-compatibles manufactured by other companies gained more than 80 percent of the PC market.

1.3 Milestones in Networking

In the early nineteenth century, the United States fell far behind Europe in networking technology. The French had begun constructing a network of telegraph towers in the 1790s, and 40 years later there were towers all over the European continent (Figure 1.11). At the top of each tower was a pair of semaphores. Operators raised and lowered the semaphores; each pattern corresponded to a letter or symbol. A message initiated at one tower would be seen by another tower within viewing distance. The receiving tower would then repeat the message for the next tower in the network, and so on. This optical telegraph system could transmit messages at the impressive rate of about 350 miles per hour when skies were clear.

Figure 1.11 A semaphore telegraph tower on the first line from Paris to Lille (1794). (Interfoto/Alamy)

In 1837 Congress asked for proposals to create a telegraph system between New York and New Orleans. It received one proposal based on proven European technology. Samuel Morse submitted a radically different proposal.
He suggested constructing a telegraph system that used electricity to communicate the signals. Let’s step back and review some of the key discoveries and inventions that enabled Morse to make his dramatic proposal.

1.3.1 Electricity and Electromagnetism

Amber is a hard, translucent, yellowish-brown fossil resin often used to make beads and other ornamental items. About 2,600 years ago the Greeks discovered that if you rub amber, it becomes charged with a force enabling it to attract light objects such as feathers and dried leaves. The Greek word for amber is ηλεκτρον (electron). Our word “electric” literally means “like amber.”

For more than 2,000 years amber’s ability to attract other materials was seen as a curiosity with no practical value, but in the seventeenth and eighteenth centuries scientists began to study electricity in earnest. Alessandro Volta, a professor of physics at the University of Pavia, made a key breakthrough when he discovered that electricity could be generated chemically. He produced an electric current by submerging two different metals close to each other in an acid. In 1799 Volta used this principle to create the world’s first battery. Volta’s battery produced an electric charge more than 1,000 times as powerful as that produced by rubbing amber. Scientists soon put this power to practical use.

In 1820 Danish physicist Christian Oersted discovered that an electric current creates a magnetic field. Five years later British electrician William Sturgeon constructed an electromagnet by coiling wire around a horseshoe-shaped piece of iron. When he ran an electric current through the coil, the iron became magnetized. Sturgeon showed how a single battery was capable of producing a charge strong enough to pick up a nine-pound metal object.

In 1830 American professor Joseph Henry rigged up an experiment that showed how a telegraph machine could work. He strung a mile of wire around the walls of his classroom at the Albany Academy. At one end he placed a battery; at the other end he connected an electromagnet, a pivoting metal bar, and a bell. When Henry connected the battery, the electromagnet attracted the metal bar, causing it to ring the bell. Disconnecting the battery allowed the bar to return to its original position. In this way he could produce a series of rings.

1.3.2 Telegraph

Samuel Morse, a professor of arts and design at New York University, worked on the idea of a telegraph during most of the 1830s, and in 1838 he patented his design of a telegraph machine. The US Congress did not approve Morse’s proposal in 1837 to construct a New York–to–New Orleans telegraph system, but it did not fund any of the other proposals either. Morse persisted with his lobbying, and in 1843 Congress appropriated $30,000 to Morse for the construction of a 40-mile telegraph line between Washington, DC, and Baltimore, Maryland.

On May 1, 1844, the Whig party convention in Baltimore nominated Henry Clay for president. The telegraph line had been completed to Annapolis Junction at that time. A courier hand-carried a message about Clay’s nomination from Baltimore to Annapolis Junction, where it was telegraphed to Washington. This was the first news reported via telegraph. The line officially opened on May 24. Morse, seated in the old Supreme Court chamber inside the US Capitol, sent his partner in Baltimore a verse from the Bible: “What hath God wrought?” The value of the telegraph was immediately apparent, and the number of telegraph lines quickly increased.
By 1846 telegraph lines connected Washington, Baltimore, Philadelphia, New York, Buffalo, and Boston. In 1850 twenty different companies operated 12,000 miles of telegraph lines. The first transcontinental telegraph line was completed in 1861, putting the Pony Express out of business (Figure 1.12). The telegraph was the sole method of rapid long-distance communication until 1877. By this time the United States was networked by more than 200,000 miles of telegraph wire.

Figure 1.12 Pony Express riders lost their jobs when the US transcontinental telegraph line was completed in 1861. (North Wind Picture Archives/Alamy)

The telegraph was a versatile tool, and people kept finding new applications for it. For example, by 1870 fire-alarm telegraphs were in use in 75 major cities in the United States. New York City alone had 600 fire-alarm telegraphs. When a person pulled the lever of the alarm box, it automatically transmitted a message identifying its location to a fire station. These devices greatly improved the ability of fire departments to dispatch equipment quickly to the correct location.

1.3.3 Telephone

Alexander Graham Bell was born in Edinburgh, Scotland, into a family focused on impairments of speech and hearing. His father and grandfather were experts in elocution and the correction of speech. His mother was almost completely deaf. Bell was educated to follow in the same career path as his father and grandfather, and he became a teacher of deaf students. Later, he married a deaf woman.

Bell pursued inventing as a means of achieving financial independence. At first he focused on making improvements to the telegraph. A significant problem with early telegraph systems was that a single wire could transmit only one message at a time. If multiple messages could be sent simultaneously along the same wire, communication delays would be reduced, and the value of the entire system would increase. Bell’s solution to this problem was called a harmonic or musical telegraph. If you imagine hearing Morse code, it’s obvious that all of the dots and dashes are the same note played for a shorter or longer period of time. The harmonic telegraph assigned a different note (different sound frequency) to each message. At the receiving end, different receivers could be tuned to respond to different notes, as you can tune your radio to hear only what is broadcast by a particular station.

Bell knew that the human voice is made up of sounds at many different frequencies. From his work on the harmonic telegraph, he speculated that it should be possible to capture and transmit human voice over a wire. He and Thomas A. Watson succeeded in transmitting speech electronically in 1876. Soon after, they commercialized their invention.

Nearly all early telephones were installed in businesses. Leasing a telephone was expensive, and most people focused on its commercial value rather than its social value. However, the number of phones placed in homes increased rapidly in the 1890s, after Bell’s first patent expired. Once telephones were placed in the home, the traditional boundaries between private family life and public business life became blurred. People enjoyed being able to conduct business transactions from the privacy of their home, but they also found that a ringing telephone could be an unwelcome interruption. Another consequence of the telephone was that it eroded traditional social hierarchies.
An 1897 issue of Western Electrician reported that Chauncey Depew of New York was receiving unwanted phone calls from ordinary citizens: “Every time they see anything about him in the newspapers, they call and tell him what a ‘fine letter he wrote’ or ‘what a lovely speech he made,’ or ask if this or that report is true; and all this from people who, if they came to his office, would probably never say more than ‘Good morning’”.

People also worried about the loss of privacy brought about by the telephone. In 1877 the New York Times reported that telephone workers responsible for operating an early system in Providence, Rhode Island, overheard many confidential conversations. The writer fretted that telephone eavesdropping would make it dangerous for anyone in Providence to accept nomination for public office.

The telephone enabled the creation of the first “online” communities. In rural areas the most common form of phone service was the party line: a single circuit connecting multiple phones to the telephone exchange. Party lines enabled farmers to gather by their phones every evening to talk about the weather and exchange gossip. The power of this new medium was demonstrated in the Bryan/McKinley presidential election of 1896. For the first time, presidential election returns were transmitted directly into people’s homes. “Thousands sat with their ear glued to the receiver the whole night long, hypnotized by the possibilities unfolding to them for the first time”.

1.3.4 Typewriter and Teletype

For hundreds of years people dreamed of a device that would allow an individual to produce a document that looked as if it had been typeset, but the dream was not realized until 1867, when Americans Christopher Sholes, Carlos Glidden, and Samuel Soule patented the first typewriter. In late 1873 Remington & Sons Company, famous for guns and sewing machines,