Full Transcript


Much has changed since the first edition of Human–Computer Interaction was published. Ubiquitous computing and rich sensor-filled environments are finding their way out of the laboratory, not just into movies but also into our workplaces and homes. The computer has broken out of its plastic and glass bounds, providing us with networked societies where personal computing devices from mobile phones to smart cards fill our pockets and electronic devices surround us at home and work. The web too has grown from a largely academic network into the hub of business and everyday lives. As the distinctions between the physical and the digital, and between work and leisure, start to break down, human–computer interaction is also changing radically.

The excitement of these changes is captured in this new edition, which also looks forward to other emerging technologies. However, the book is firmly rooted in strong principles and models independent of the passing technologies of the day: these foundations will be the means by which today's students will understand tomorrow's technology.

The third edition of Human–Computer Interaction can be used for introductory and advanced courses on HCI, Interaction Design, Usability or Interactive Systems Design. It will also prove an invaluable reference for professionals wishing to design usable computing devices.

Accompanying the text is a comprehensive website containing a broad range of material for instructors, students and practitioners, a full text search facility for the book, links to many sites of additional interest and much more: go to www.hcibook.com

New to this edition:
- A revised structure, reflecting the growth of HCI as a discipline, separates out basic material suitable for introductory courses from more detailed models and theories.
- New chapter on interaction design, adding material on scenarios and basic navigation design.
- New chapter on universal design, substantially extending the coverage of this material in the book.
- Updated and extended treatment of socio/contextual issues.
- Extended and new material on novel interaction, including updated ubicomp material, designing experience, physical sensors and a new chapter on rich interaction.
- Updated material about the web, including dynamic content.
- Relaunched website including case studies, WAP access and search.

Alan Dix is Professor in the Department of Computing, Lancaster, UK. Janet Finlay is Professor in the School of Computing, Leeds Metropolitan University, UK. Gregory D. Abowd is Associate Professor in the College of Computing and GVU Center at Georgia Tech, USA. Russell Beale is Lecturer at the School of Computer Science, University of Birmingham, UK.

Cover illustration by Peter Gudynas. www.pearson-books.com

We work with leading authors to develop the strongest educational materials in computing, bringing cutting-edge thinking and best learning practice to a global market. Under a range of well-known imprints, including Prentice Hall, we craft high quality print and electronic publications which help readers to understand and apply their content, whether studying or at work.
To find out more about the complete range of our publishing, please visit us on the world wide web at: www.pearsoned.co.uk

HUMAN–COMPUTER INTERACTION, THIRD EDITION

Alan Dix, Lancaster University
Janet Finlay, Leeds Metropolitan University
Gregory D. Abowd, Georgia Institute of Technology
Russell Beale, University of Birmingham

Pearson Education Limited, Edinburgh Gate, Harlow, Essex CM20 2JE, England, and Associated Companies throughout the world. Visit us on the world wide web at: www.pearsoned.co.uk

First published 1993. Second edition published 1998. Third edition published 2004.
© Prentice-Hall Europe 1993, 1998
© Pearson Education Limited 2004

The rights of Alan Dix, Janet E. Finlay, Gregory D. Abowd and Russell Beale to be identified as authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without either the prior written permission of the publisher or a licence permitting restricted copying in the United Kingdom issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP.

All trademarks used herein are the property of their respective owners. The use of any trademark in this text does not vest in the author or publisher any trademark ownership rights in such trademarks, nor does the use of such trademarks imply any affiliation with or endorsement of this book by such owners.

ISBN-13: 978-0-13-046109-4
ISBN-10: 0-13-046109-1

British Library Cataloguing-in-Publication Data: a catalogue record for this book is available from the British Library.

Typeset in 10/12½pt Minion by 35. Printed and bound by Scotprint, Haddington.

BRIEF CONTENTS

Guided tour; Foreword; Preface to the third edition; Publisher's acknowledgements; Introduction
Part 1 Foundations: 1 The human; 2 The computer; 3 The interaction; 4 Paradigms
Part 2 Design process: 5 Interaction design basics; 6 HCI in the software process; 7 Design rules; 8 Implementation support; 9 Evaluation techniques; 10 Universal design; 11 User support
Part 3 Models and theories: 12 Cognitive models; 13 Socio-organizational issues and stakeholder requirements; 14 Communication and collaboration models; 15 Task analysis; 16 Dialog notations and design; 17 Models of the system; 18 Modeling rich interaction
Part 4 Outside the box: 19 Groupware; 20 Ubiquitous computing and augmented realities; 21 Hypertext, multimedia and the world wide web
References; Index

CONTENTS

Guided tour; Foreword; Preface to the third edition; Publisher's acknowledgements; Introduction

Part 1 FOUNDATIONS

Chapter 1 The human: 1.1 Introduction; 1.2 Input–output channels (Design Focus: Getting noticed; Where's the middle?); 1.3 Human memory (Design Focus: Cashing in; 7 ± 2 revisited); 1.4 Thinking: reasoning and problem solving (Design Focus: Human error and false memories); 1.5 Emotion; 1.6 Individual differences; 1.7 Psychology and the design of interactive systems; 1.8 Summary; Exercises; Recommended reading

Chapter 2 The computer: 2.1 Introduction; 2.2 Text entry devices (Design Focus: Numeric keypads); 2.3 Positioning, pointing and drawing; 2.4 Display devices (Design Focus: Hermes: a situated display); 2.5 Devices for virtual reality and 3D interaction; 2.6 Physical controls, sensors and special devices (Design Focus: Feeling the road; Smart-Its – making using sensors easy); 2.7 Paper: printing and scanning (Design Focus: Readability of text); 2.8 Memory; 2.9 Processing and networks (Design Focus: The myth of the infinitely fast machine); 2.10 Summary; Exercises; Recommended reading

Chapter 3 The interaction: 3.1 Introduction; 3.2 Models of interaction (Design Focus: Video recorder); 3.3 Frameworks and HCI; 3.4 Ergonomics (Design Focus: Industrial interfaces); 3.5 Interaction styles (Design Focus: Navigation in 3D and 2D); 3.6 Elements of the WIMP interface (Design Focus: Learning toolbars); 3.7 Interactivity; 3.8 The context of the interaction (Design Focus: Half the picture?); 3.9 Experience, engagement and fun; 3.10 Summary; Exercises; Recommended reading

Chapter 4 Paradigms: 4.1 Introduction; 4.2 Paradigms for interaction; 4.3 Summary; Exercises; Recommended reading

Part 2 DESIGN PROCESS

Chapter 5 Interaction design basics: 5.1 Introduction; 5.2 What is design?; 5.3 The process of design; 5.4 User focus (Design Focus: Cultural probes); 5.5 Scenarios; 5.6 Navigation design (Design Focus: Beware the big button trap; Modes); 5.7 Screen design and layout (Design Focus: Alignment and layout matter; Checking screen colors); 5.8 Iteration and prototyping; 5.9 Summary; Exercises; Recommended reading

Chapter 6 HCI in the software process: 6.1 Introduction; 6.2 The software life cycle; 6.3 Usability engineering; 6.4 Iterative design and prototyping (Design Focus: Prototyping in practice); 6.5 Design rationale; 6.6 Summary; Exercises; Recommended reading

Chapter 7 Design rules: 7.1 Introduction; 7.2 Principles to support usability; 7.3 Standards; 7.4 Guidelines; 7.5 Golden rules and heuristics; 7.6 HCI patterns; 7.7 Summary; Exercises; Recommended reading

Chapter 8 Implementation support: 8.1 Introduction; 8.2 Elements of windowing systems; 8.3 Programming the application (Design Focus: Going with the grain); 8.4 Using toolkits (Design Focus: Java and AWT); 8.5 User interface management systems; 8.6 Summary; Exercises; Recommended reading

Chapter 9 Evaluation techniques: 9.1 What is evaluation?; 9.2 Goals of evaluation; 9.3 Evaluation through expert analysis; 9.4 Evaluation through user participation; 9.5 Choosing an evaluation method; 9.6 Summary; Exercises; Recommended reading

Chapter 10 Universal design: 10.1 Introduction; 10.2 Universal design principles; 10.3 Multi-modal interaction (Design Focus: Designing websites for screen readers; Choosing the right kind of speech; Apple Newton); 10.4 Designing for diversity (Design Focus: Mathematics for the blind); 10.5 Summary; Exercises; Recommended reading

Chapter 11 User support: 11.1 Introduction; 11.2 Requirements of user support; 11.3 Approaches to user support; 11.4 Adaptive help systems (Design Focus: It's good to talk – help from real people); 11.5 Designing user support systems; 11.6 Summary; Exercises; Recommended reading

Part 3 MODELS AND THEORIES

Chapter 12 Cognitive models: 12.1 Introduction; 12.2 Goal and task hierarchies (Design Focus: GOMS saves money); 12.3 Linguistic models; 12.4 The challenge of display-based systems; 12.5 Physical and device models; 12.6 Cognitive architectures; 12.7 Summary; Exercises; Recommended reading

Chapter 13 Socio-organizational issues and stakeholder requirements: 13.1 Introduction; 13.2 Organizational issues (Design Focus: Implementing workflow in Lotus Notes); 13.3 Capturing requirements (Design Focus: Tomorrow's hospital – using participatory design); 13.4 Summary; Exercises; Recommended reading

Chapter 14 Communication and collaboration models: 14.1 Introduction; 14.2 Face-to-face communication (Design Focus: Looking real – Avatar Conference); 14.3 Conversation; 14.4 Text-based communication; 14.5 Group working; 14.6 Summary; Exercises; Recommended reading

Chapter 15 Task analysis: 15.1 Introduction; 15.2 Differences between task analysis and other techniques; 15.3 Task decomposition; 15.4 Knowledge-based analysis; 15.5 Entity–relationship-based techniques; 15.6 Sources of information and data collection; 15.7 Uses of task analysis; 15.8 Summary; Exercises; Recommended reading

Chapter 16 Dialog notations and design: 16.1 What is dialog?; 16.2 Dialog design notations; 16.3 Diagrammatic notations (Design Focus: Using STNs in prototyping; Digital watch – documentation and analysis); 16.4 Textual dialog notations; 16.5 Dialog semantics; 16.6 Dialog analysis and design; 16.7 Summary; Exercises; Recommended reading

Chapter 17 Models of the system: 17.1 Introduction; 17.2 Standard formalisms; 17.3 Interaction models; 17.4 Continuous behavior; 17.5 Summary; Exercises; Recommended reading

Chapter 18 Modeling rich interaction: 18.1 Introduction; 18.2 Status–event analysis; 18.3 Rich contexts; 18.4 Low intention and sensor-based interaction (Design Focus: Designing a car courtesy light); 18.5 Summary; Exercises; Recommended reading

Part 4 OUTSIDE THE BOX

Chapter 19 Groupware: 19.1 Introduction; 19.2 Groupware systems; 19.3 Computer-mediated communication (Design Focus: SMS in action); 19.4 Meeting and decision support systems; 19.5 Shared applications and artifacts; 19.6 Frameworks for groupware (Design Focus: TOWER – workspace awareness); 19.7 Implementing synchronous groupware; 19.8 Summary; Exercises; Recommended reading

Chapter 20 Ubiquitous computing and augmented realities: 20.1 Introduction; 20.2 Ubiquitous computing applications research (Design Focus: Ambient Wood – augmenting the physical; Classroom 2000/eClass – deploying and evaluating ubicomp; Shared experience); 20.3 Virtual and augmented reality (Design Focus: Applications of augmented reality); 20.4 Information and data visualization (Design Focus: Getting the size right); 20.5 Summary; Exercises; Recommended reading

Chapter 21 Hypertext, multimedia and the world wide web: 21.1 Introduction; 21.2 Understanding hypertext; 21.3 Finding things; 21.4 Web technology and issues; 21.5 Static web content; 21.6 Dynamic web content; 21.7 Summary; Exercises; Recommended reading

References; Index

GUIDED TOUR

The part structure separates out introductory and more advanced material, with each part opener giving a simple description of what its constituent chapters cover. Bullet points at the start of each chapter highlight the core coverage.

Sample part opener (Part 2: Design process). OVERVIEW: In this part, we concentrate on how design practice addresses the critical feature of an interactive system – usability from the human perspective. The chapters in this part promote the purposeful design of more usable interactive systems. We begin in Chapter 5 by introducing the key elements in the interaction design process. These elements are then expanded in later chapters. Chapter 6 discusses the design process in more detail, specifically focussing on the place of user-centered design within a software engineering framework. Chapter 7 highlights the range of design rules that can help us to specify usable interactive systems, including abstract principles, guidelines and other design representations. In Chapter 8, we provide an overview of implementation support for the programmer of an interactive system. Chapter 9 is concerned with the techniques used to evaluate the interactive system to see if it satisfies user needs. Chapter 10 discusses how to design a system to be universally accessible, regardless of age, gender, cultural background or ability. In Chapter 11 we discuss the provision of user support in the form of help systems and documentation.

Sample chapter opener (Chapter 18: Modeling rich interaction). We operate within an ecology of people, physical artifacts and electronic systems, and this rich ecology has recently become more complex as electronic devices invade the workplace and our day-to-day lives. We need methods to deal with these rich interactions.
- Status–event analysis is a semi-formal, easy to apply technique that: classifies phenomena as event or status; embodies naïve psychology; highlights feedback problems in interfaces.
- Aspects of rich environments can be incorporated into methods such as task analysis: other people; information requirements; triggers for tasks; modeling artifacts; placeholders in task sequences.
- New sensor-based systems do not require explicit interaction; this means: new cognitive and interaction models; new design methods; new system architectures.
Boxed asides contain descriptions of particular tasks or technologies for additional interest, experimentation and discussion. Worked exercises within chapters provide step-by-step guidelines to demonstrate problem-solving techniques.

Sample boxed aside (19.3 Computer-mediated communication): CuSeeMe. Special-purpose video conferencing is still relatively expensive, but low-fidelity desktop video conferencing is now within the reach of many users of desktop computers. Digital video cameras are now inexpensive and easily obtainable. They often come with pre-packaged video phone or video conferencing software. However, the system which has really popularized video conferencing is a web-based tool. CuSeeMe works over the internet allowing participants across the world owning only a basic digital video camera to see and talk to one another. The software is usually public domain (although there are commercial versions) and the services allowing connection are often free. The limited bandwidth available over long-distance internet links means that video quality and frame rates are low and periodic image break-up may occur. In fact, it is sound break-up which is more problematic. After all, we can talk to one another quite easily without seeing one another, but find it very difficult over a noisy phone line. Often participants may see one another's video image, but actually discuss using a synchronous text-based 'talk' program. (CuSeeMe – video conferencing on the internet. Source: Courtesy of Geoff Ellis)

Sample worked exercise (Chapter 12: Cognitive models). Do a keystroke-level analysis for opening up an application in a visual desktop interface using a mouse as the pointing device, comparing at least two different methods for performing the task. Repeat the exercise using a trackball. Consider how the analysis would differ for various positions of the trackball relative to the keyboard and for other pointing devices.

Answer: We provide a keystroke-level analysis for three different methods for launching an application on a visual desktop. These methods are analyzed for a conventional one-button mouse, a trackball mounted away from the keyboard and one mounted close to the keyboard. The main distinction between the two trackballs is that the second one does not require an explicit repositioning of the hands, that is there is no time required for homing the hands between the pointing device and the keyboard.

Method 1: Double clicking on application icon
  Step                    Operator     Mouse   Trackball1  Trackball2
  1. Move hand to mouse   H[mouse]     0.400   0.400       0.000
  2. Mouse to icon        P[to icon]   0.664   1.113       1.113
  3. Double click         2B[click]    0.400   0.400       0.400
  4. Return to keyboard   H[kbd]       0.400   0.400       0.000
  Total times                          1.864   2.313       1.513

Method 2: Using an accelerator key
  Step                    Operator     Mouse   Trackball1  Trackball2
  1. Move hand to mouse   H[mouse]     0.400   0.400       0.000
  2. Mouse to icon        P[to icon]   0.664   1.113       1.113
  3. Click to select      B[click]     0.200   0.200       0.200
  4. Pause                M            1.350   1.350       1.350
  5. Return to keyboard   H[kbd]       0.400   0.400       0.000
  6. Press accelerator    K            0.200   0.200       0.200
  Total times                          3.214   3.663       2.763

Method 3: Using a menu
  Step                    Operator     Mouse   Trackball1  Trackball2
  1. Move hand to mouse   H[mouse]     0.400   0.400       0.000
  2. Mouse to icon        P[to icon]   0.664   1.113       1.113
  3. Click to select      B[click]     0.200   0.200       0.200
  4. Pause                M            1.350   1.350       1.350
  5. Mouse to file menu   P            0.664   1.113       1.113
  6. Pop-up menu          B[down]      0.100   0.100       0.100
  7. Drag to open         P[drag]      0.713   1.248       1.248
  8. Release mouse        B[up]        0.100   0.100       0.100
  9. Return to keyboard   H[kbd]       0.400   0.400       0.000
  Total times                          4.591   6.024       5.224
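The totals in these tables are simply sums of standard keystroke-level operator times for each step. As a minimal illustrative sketch (not code from the book), the following recomputes Method 1 using the operator values quoted in the exercise; the `klm_total` helper and the step lists are our own names, introduced only for illustration.

```python
# Minimal keystroke-level model (KLM) sketch: sum standard operator times
# for each step of a method. Values follow the worked exercise above; the
# pointing time P depends on device and target, so it is given per step.

# Homing (H), mental preparation (M) and keystroke (K) constants in seconds.
H, M, K = 0.400, 1.350, 0.200
B = 0.100           # one button press *or* release; a click is press + release

def klm_total(steps):
    """Sum the operator times for a list of (label, seconds) steps."""
    return round(sum(t for _, t in steps), 3)

# Method 1: double-click the application icon with a one-button mouse.
# P[to icon] = 0.664 s for the mouse (1.113 s for a trackball).
method1_mouse = [
    ("H[mouse]", H),          # move hand to mouse
    ("P[to icon]", 0.664),    # point at the icon
    ("2B[click]", 4 * B),     # double click = two press/release pairs
    ("H[kbd]", H),            # return hand to keyboard
]

# Same method with a trackball mounted next to the keyboard: no homing
# needed, but pointing is slower.
method1_trackball2 = [
    ("P[to icon]", 1.113),
    ("2B[click]", 4 * B),
]

print(klm_total(method1_mouse))        # 1.864, matching the table
print(klm_total(method1_trackball2))   # 1.513, matching the table
```

The same pattern extends to Methods 2 and 3 by adding M (mental preparation), K (keystroke) and further pointing steps.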
Design Focus mini case studies highlight practical applications of HCI concepts, with frequent links to the book website for further information.

Sample Design Focus (Chapter 20: Ubiquitous computing and augmented realities): Shared experience. You are in the Mackintosh Interpretation Centre in an arts center in Glasgow, Scotland. You notice a man wearing black wandering around looking at the exhibits and then occasionally at a small PDA he is holding. As you get closer he appears to be talking to himself, but then you realize he is simply talking into a head-mounted microphone. 'Some people can never stop using their mobile phone', you think. As you are looking at one exhibit, he comes across and suddenly cranes forward to look more closely, getting right in front of you. 'How rude', you think. The visitor is taking part in the City project – a mixed-reality experience. He is talking to two other people at remote sites, one who has a desktop VR view of the exhibition and the other just a website. However, they can all see representations of each other. The visitor is being tracked by ultrasound and he appears in the VR world. Also, the web user's current page locates her in a particular part of the virtual exhibition. All of the users see a map of the exhibition showing where they all are. You might think that in such an experiment the person actually in the museum would take the lead, but in fact real groups using this system seemed to have equal roles and really had a sense of shared experience despite their very different means of seeing the exhibition. See the book website for a full case study: /e3/casestudy/city/ (City project: physical presence, VR interfaces and web interface. Source: Courtesy of Matthew Chalmers; note: City is an Equator project.)

Chapter summaries reinforce student learning. Exercises at the end of chapters can be used by teachers or individuals to test understanding. Annotated further reading encourages readers to research topics in depth.
Sample chapter summary and exercises (Chapter 10: Universal design).

10.5 SUMMARY. Universal design is about designing systems that are accessible by all users in all circumstances, taking account of human diversity in disabilities, age and culture. Universal design helps everyone – for example, designing a system so that it can be used by someone who is deaf or hard of hearing will benefit other people working in noisy environments or without audio facilities. Designing to be accessible to screen-reading systems will make websites better for mobile users and older browsers. Multi-modal systems provide access to system information and functionality through a range of different input and output channels, exploiting redundancy. Such systems will enable users with sensory, physical or cognitive impairments to make use of the channels that they can use most effectively. But all users benefit from multi-modal systems that utilize more of our senses in an involving interactive experience. For any design choice we should ask ourselves whether our decision is excluding someone and whether there are any potential confusions or misunderstandings in our choice.

EXERCISES
10.1 Is multi-modality always a good thing? Justify your answer.
10.2 What are (i) auditory icons and (ii) earcons? How can they be used to benefit both visually impaired and sighted users?
10.3 Research your country's legislation relating to accessibility of technology for disabled people. What are the implications of this to your future career in computing?
10.4 Take your university website or another site of your choice and assess it for accessibility using Bobby. How would you recommend improving the site?
10.5 How could systems be made more accessible to older users?
10.6 Interview either (i) a person you know over 65 or (ii) a child you know under 16 about their experience, attitude and expectations of computers. What factors would you take into account if you were designing a website aimed at this person?
10.7 Use the screen reader simulation available at www.webaim.org/simulations/screenreader to experience something of what it is like to access the web using a screen reader. Can you find the answers to the test questions on the site?

Sample annotated further reading (Chapter 14: Communication and collaboration models).

RECOMMENDED READING
J. Carroll, editor, HCI Models, Theories, and Frameworks: Toward an Interdisciplinary Science, Morgan Kaufmann, 2003. See chapters by Perry on distributed cognition, Monk on common ground and Kraut on social psychology.
L. A. Suchman, Plans and Situated Actions: The Problem of Human–Machine Communication, Cambridge University Press, 1987. This book popularized ethnography within HCI. It puts forward the viewpoint that most actions are not pre-planned, but situated within the context in which they occur. The principal domain of the book is the design of help for a photocopier. This is itself a single-user task, but the methodology applied is based on both ethnographic and conversational analysis. The book includes several chapters discussing the contextual nature of language and analysis of conversation transcripts.
T. Winograd and F. Flores, Understanding Computers and Cognition: A New Foundation for Design, Addison-Wesley, 1986. Like Suchman, this book emphasizes the contextual nature of language and the weakness of traditional artificial intelligence research. It includes an account of speech act theory as applied to Coordinator. Many people disagree with the authors' use of speech act theory, but, whether by application or reaction, this work has been highly influential.
S. Greenberg, editor, Computer-supported Cooperative Work and Groupware, Academic Press, 1991. The contents of this collection originally made up two special issues of the International Journal of Man–Machine Studies. In addition, the book contains Greenberg's extensive annotated bibliography of CSCW, a major entry point for any research into the field. Updated versions of the bibliography can be obtained from the Department of Computer Science, University of Calgary, Calgary, Alberta, Canada.
Communications of the ACM, Vol. 34, No. 12, special issue on 'collaborative computing', December 1991.
Several issues of the journal Interacting with Computers from late 1992 through early 1993 have a special emphasis on CSCW.
Computer-Supported Cooperative Work is a journal dedicated to CSCW. See also back issues of the journal Collaborative Computing. This ran independently for a while, but has now merged with Computer-Supported Cooperative Work.
See also the recommended reading list for Chapter 19, especially the conference proceedings.

FOREWORD

Human–computer interaction is a difficult endeavor with glorious rewards. Designing interactive computer systems to be effective, efficient, easy, and enjoyable to use is important, so that people and society may realize the benefits of computation-based devices. The subtle weave of constraints and their trade-offs – human, machine, algorithmic, task, social, aesthetic, and economic – generates the difficulty. The reward is the creation of digital libraries where scholars can find and turn the pages of virtual medieval manuscripts thousands of miles away; medical instruments that allow a surgical team to conceptualize, locate, and monitor a complex neurosurgical operation; virtual worlds for entertainment and social interaction; responsive and efficient government services, from online license renewal to the analysis of parliamentary testimony; or smart telephones that know where they are and understand limited speech. Interaction designers create interaction in virtual worlds and embed interaction in physical worlds.

Human–computer interaction is a specialty in many fields, and is therefore multidisciplinary, but it has an intrinsic relationship as a subfield to computer science. Most interactive computing systems are for some human purpose and interact with humans in human contexts. The notion that computer science is the study of algorithms has virtue as an attempt to bring foundational rigor, but can lead to ignoring constraints foundational to the design of successful interactive computer systems. A lesson repeatedly learned in engineering is that a major source of failure is the narrow optimization of a design that does not take sufficient account of contextual factors. Human users and their contexts are major components of the design problem that cannot be wished away simply because they are complex to address. In fact, the largest part of program code in most interactive systems deals with user interaction. Inadequate attention to users and task context not only leads to bad user interfaces, it puts entire systems at risk.

The problem is how to take into account the human and contextual part of a system with anything like the rigor with which other parts of the system are understood and designed – how to go beyond fuzzy platitudes like 'know the user' that are true, but do not give a method for doing or a test for having done. This is difficult to do, but inescapable, and, in fact, capable of progress.
Over the years, the need to take into account human aspects of technical systems has led to the creation of new fields of study: applied psychology, industrial engineering, ergonomics, human factors, man–machine systems. Human–computer interaction is the latest of these, more complex in some ways because of the breadth of user populations and applications, the reach into cognitive and social constraints, and the emphasis on interaction. The experiences with other human-technical disciplines lead to a set of conclusions about how a discipline of human–computer interaction should be organized if it is to be successful.

First, design is where the action is. An effective discipline of human–computer interaction cannot be based largely on 'usability analysis', important though that may be. Usability analysis happens too late; there are too few degrees of freedom; and most importantly, it is not generative. Design thrives on understanding constraints, on insight into the design space, and on deep knowledge of the materials of the design, that is, the user, the task, and the machine. The classic landmark designs in human–computer interaction, such as the Xerox Star and the Apple Lisa/Macintosh, were not created from usability analysis (although usability analysis had important roles), but by generative principles for their designs by user interface designers who had control of the design and implementation.

Second, although the notion of 'user-centered design' gets much press, we should really be emphasizing 'task-centered design'. Understanding the purpose and context of a system is key to allocating functions between people and machines and to designing their interaction. It is only in deciding what a human–machine system should do and the constraints on this goal that the human and technical issues can be resolved. The need for task-centered design brings forward the need for methods of task analysis as a central part of system design.

Third, human–computer interaction needs to be structured to include both analytic and implementation methods together in the same discipline and taught together as part of the core. Practitioners of the discipline who can only evaluate, but not design and build are under a handicap. Builders who cannot reason analytically about the systems they build or who do not understand the human information processing or social contexts of their designs are under a handicap. Of course, there will be specialists in one or another part of human–computer interaction, but for there to be a successful field, there must be a common core.

Finally, what makes a discipline is a set of methods for doing something. A field must have results that can be taught and used by people other than their originators to do something. Historically, a field naturally evolves from a set of point results to a set of techniques to a set of facts, general abstractions, methods, and theories. In fact, for a field to be cumulative, there must be compaction of knowledge by crunching the results down into methods and theories; otherwise the field becomes fad-driven and a collection of an almost unteachably large set of weak results. The most useful methods and theories are generative theories: from some task analysis it is possible to compute some insightful property that constrains the design space of a system. In a formula: task analysis, approximation, and calculation. For example, we can predict that if a graphics system cannot update the display faster than 10 times/second then the illusion of animation will begin to break down. This constraint worked backwards has architectural implications for how to guarantee the needed display rate under variable computational load. It can be designed against.
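Card's display-rate example is exactly the kind of constraint a system can be designed against: 10 updates per second means a budget of roughly 100 ms per frame. The sketch below is a minimal illustration of that idea, not anything from the book; `render` is a hypothetical stand-in for real drawing code, and the shed-detail-when-over-budget rule is just one simple way to hold the rate under variable computational load.

```python
import time

FRAME_BUDGET = 0.100   # 10 frames per second, below which animation breaks down

def render(detail):
    """Hypothetical stand-in for drawing code; cost grows with detail level."""
    time.sleep(0.01 * detail)   # simulates variable computational load

def animation_loop(frames=50):
    detail = 10
    for _ in range(frames):
        start = time.perf_counter()
        render(detail)
        elapsed = time.perf_counter() - start
        # Designing against the constraint: shed detail when a frame
        # overruns the 100 ms budget, restore it when there is slack.
        if elapsed > FRAME_BUDGET and detail > 1:
            detail -= 1
        elif elapsed < 0.5 * FRAME_BUDGET:
            detail += 1

animation_loop()
```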
This textbook, by Alan Dix, Janet Finlay, Gregory Abowd, and Russell Beale, represents how far human–computer interaction has come in developing and organizing technical results for the design and understanding of interactive systems. Remarkably, by the light of their text, it is pretty far, satisfying all the just-enumerated conclusions. This book makes an argument that by now there are many teachable results in human–computer interaction by weight alone! It makes an argument that these results form a cumulative discipline by its structure, with sections that organize the results systematically, characterizing human, machine, interaction, and the design process. There are analytic models, but also code implementation examples. It is no surprise that methods of task analysis play a prominent role in the text as do theories to help in the design of the interaction. Usability evaluation methods are integrated in their proper niche within the larger framework. In short, the codification of the field of human–computer interaction in this text is now starting to look like other subfields of computer science. Students by studying the text can learn how to understand and build interactive systems. Human–computer interaction as represented by the text fits together with other parts of computer science. Moreover, human–computer interaction as presented is a challenge problem for advancing theory in cognitive science, design, business, or social-technical systems.

Given where the field was just a few short years ago, the creation of this text is a monumental achievement. The way is open to reap the glorious rewards of interactive systems through a markedly less difficult endeavor, both for designer and for user.

Stuart K. Card
Palo Alto Research Center, Palo Alto, California

PREFACE TO THE THIRD EDITION

It is ten years since the first edition of this book was published and much has changed. Ubiquitous computing and rich sensor-filled environments are finding their way out of the laboratory, not just into films and fiction, but also into our workplaces and homes. Now the computer really has broken its bounds of plastic and glass: we live in networked societies where personal computing devices from mobile phones to smart cards fill our pockets, and electronic devices surround us at home and at work. The web too has grown from a largely academic network into the hub of business and everyday lives. As the distinctions between physical and digital, work and leisure start to break down, human–computer interaction is also radically changing.

We have tried to capture some of the excitement of these changes in this revised edition, including issues of physical devices in Chapters 2 and 3, discussion of web interfaces in Chapter 21, ubiquitous computing in Chapters 4 and 20, and new models and paradigms for interaction in these new environments in Chapters 17 and 18. We have reflected aspects of the shift in use of technology from work to leisure in the analysis of user experience in Chapter 3, and in several of the boxed examples and case studies in the text.
This new edition of Human–Computer Interaction is not just tracking these changes but looking ahead at emerging areas. However, it is also rooted in strong principles and models that are not dependent on the passing technologies of the day. We are excited both by the challenges of the new and by the established foundations, as it is these foundations that will be the means by which today's students understand tomorrow's technology. So we make no apology for continuing the focus of previous editions on the theoretical and conceptual models that underpin our discipline. As the use of technology has changed, these models have expanded. In particular, the insular individual focus of early work is increasingly giving way to include the social and physical context. This is reflected in the expanded treatment of social and organizational analysis, including ethnography, in Chapter 13, and the analysis of artifacts in the physical environment in Chapter 18.

STRUCTURE

The structure of the new edition has been completely revised. This in part reflects the growth of the area: ten years ago HCI was as often as not a minority optional subject, and the original edition was written to capture the core material for a standard course. Today HCI is much expanded: some areas (like CSCW) are fully fledged disciplines in their own right, and HCI is studied from a range of perspectives and at different levels of detail. We have therefore separated basic material suitable for introductory courses into the first two parts, including a new chapter on interaction design, which adds new material on scenarios and navigation design and provides an overview suitable for a first course. In addition, we have included a new chapter on universal design, to reflect the growing emphasis on design that is inclusive of all, regardless of ability, age or cultural background. More advanced material focussing on different HCI models and theories is presented in Part 3, with extended coverage of social and contextual models and rich interaction. It is intended that these sections will be suitable for more advanced HCI courses at undergraduate and postgraduate level, as well as for researchers new to the field. Detailed coverage of the particular domains of web applications, ubiquitous computing and CSCW is given in Part 4.

New to this edition is a full color plate section. Images flagged with a camera icon in the text can be found in color in the plate section.

WEBSITE AND SUPPORT MATERIALS

We have always believed that support materials are an essential part of a textbook of this kind. These are designed to supplement and enhance the printed book – physical and digital integration in practice. Since the first edition we have had exercises, mini-case studies and presentation slides for all chapters available electronically. For the second edition these were incorporated into a website including links and an online search facility that acts as an exhaustive index to the book and mini-encyclopedia of HCI. For visually disabled readers, access to a full online electronic text has also been available. The website is continuing to develop, and for the third edition provides all these features plus more, including WAP search, multi-choice questions, and extended case study material (see also color plate section). We will use the book website to bring you new exercises, information and other things, so do visit us at www.hcibook.com (also available via www.booksites.net/dix).

Throughout the book you will find shorthand web references of the form /e3/a-page-url/. Just prepend http://www.hcibook.com to find further information. To assist users of the second edition, a mapping between the structures of the old and new editions is available on the web at: http://www.hcibook.com/e3/contents/map2e/
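Resolving one of these shorthand references is simply a matter of prefixing the book's domain. A trivial sketch of that expansion; the function name is ours, not the book's:

```python
def expand_book_ref(shorthand):
    """Turn a shorthand reference like '/e3/casestudy/city/' into a full URL."""
    return "http://www.hcibook.com" + shorthand

print(expand_book_ref("/e3/contents/map2e/"))
# -> http://www.hcibook.com/e3/contents/map2e/
```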
STYLISTIC CONVENTION

As with all books, we have had to make some global decisions regarding style and terminology. Specifically, in a book in which the central characters are 'the user' and 'the designer', it is difficult to avoid the singular pronoun. We therefore use the pronoun 'he' when discussing the user and 'she' when referring to the designer. In other cases we use 'she' as a generic term. This should not be taken to imply anything about the composition of any actual population. Similarly, we have adopted the convention of referring to the field of 'Human–Computer Interaction' and the notion of 'human–computer interaction'. In many cases we will also use the abbreviation HCI.

ACKNOWLEDGEMENTS

In a book of this size, written by multiple authors, there will always be myriad people behind the scenes who have aided, supported and abetted our efforts. We would like to thank all those who provided information, pictures and software that have enhanced the quality of the final product. In particular, we are indebted to Wendy Mackay for the photograph of EVA; Wendy Hall and her colleagues at the University of Southampton for the screen shot of Microcosm; Saul Greenberg for the reactive keyboard; Alistair Edwards for Soundtrack; Christina Engelbart for the photographs of the early chord keyset and mouse; Geoff Ellis for the screen shot of Devina and himself using CuSeeMe; Steve Benford for images of the Internet Foyer; and Tony Renshaw who provided photographs of the eye tracking equipment. Thanks too to Simon Shum for information on design rationale, Robert Ward who gave us material on psycho-physiology, and Elizabeth Mynatt and Tom Rodden who worked with Gregory on material adapted in Chapter 20. Several of the boxed case studies are based on the work of multi-institution projects, and we are grateful to all those from the project teams of CASCO, thePooch, SMART-ITS, TOWER, AVATAR-Conference and TEAM-HOS for boxes and case studies based on their work; and also to the EQUATOR project from which we drew material for the boxes on cultural probes, 'Ambient Wood' and 'City'. We would also like to thank all the reviewers and survey respondents whose feedback helped us to select our subject matter and improve our coverage; and our colleagues at our respective institutions and beyond who offered insight, encouragement and tolerance throughout the revision. We are indebted to all those who have contributed to the production process at Pearson Education and elsewhere, especially Keith Mansfield, Anita Atkinson, Lynette Miller, Sheila Chatten and Robert Chaundy.

Personal thanks must go to Fiona, Esther, Miriam, Rachel, Tina, Meghan, Aidan and Blaise, who have all endured 'The Book' well beyond the call of duty and over many years, and Bruno and 'the girls' who continue to make their own inimitable contribution. Finally we all owe huge thanks to Fiona for her continued deep personal support and for tireless proofreading, checking of figures, and keeping us all moving. We would never have got beyond the first edition without her.
The efforts of all of these have meant that the book is better than it would other- wise have been. Where it could still be better, we take full responsibility. PUBLISHER’S ACKNOWLEDGEMENTS We are grateful to the following for permission to reproduce copyright material: Figure p. 2, Figures 3.14, 3.15, 3.16 and 5.13 and Exercise 8.4 screen shots reprinted by permission from Apple Computer, Inc.; Figure 2.11 reprinted by permission of Keith Cheverst; Figure 3.13 from The WebBook and Web Forager: An information workspace for the world-wide web in CHI Conference Proceedings, © 1996 ACM, Inc., reprinted by permission (Card, S. K., Robertson, G. G. and York, W. 1996); Figures 3.9, 3.19, 5.5, Chapter 14, Design Focus: Looking real – Avatar Conference screen shots, Figures 21.3, 21.10, 21.11 screen shot frames reprinted by permission from Microsoft Corporation; Tables 6.2 and 6.3 adapted from Usability engineering: our experience and evolution in Handbook for Human–Computer Interaction edited by M. Helander, Copyright 1988, with permission from Elsevier (Whiteside, J., Bennett, J. and Hotzblatt, K. 1988); Figure 7.1 adapted from The alternate reality kit – an animated environment for creating interactive simulations in Proceedings of Workshop on Visual Languages, © 1986 IEEE, reprinted by permission of IEEE (Smith, R. B. 1986); Figure 7.2 from Guidelines for designing user interface software in MITRE Corporation Report MTR-9420, reprinted by permission of The MITRE Corporation (Smith, S. L. and Mosier, J. N. 1986); Figure 7.3 reprinted by permis- sion of Jenifer Tidwell; Figures 8.6 and 8.9 from Xview Programming Manual, Volume 7 of The X Window System, reprinted by permission of O’Reilly and Associates, Inc. (Heller, D. 1990); Figure 9.8 screen shot reprinted by permission of Dr. R. D. Ward; Figure 10.2 after Earcons and icons: their structure and common design principles in Human-Computer Interaction, 4(1), published and reprinted by permission of Lawrence Erlbaum Associates, Inc. (Blattner, M., Sumikawa, D. and Greenberg, R. 1989); Figure 10.5 reprinted by permission of Alistair D. N. Edwards; Figure 10.7 reprinted by permission of Saul Greenberg; Figure 11.2 screen shot reprinted by permission of Macromedia, Inc.; Table 12.1 adapted from The Psychology of Human Computer Interaction, published and reprinted by permission of Lawrence Erlbaum Associates, Inc. (Card, S. K., Moran, T. P. and Newell, A. 1983); Table 12.2 after Table in A comparison of input devices in elemental pointing and dragging tasks in Reaching through technology – CHI’91 Conference Proceedings, Human Factors in Computing Systems, April, edited by S. P. Robertson, G. M. Olson and J. S. Olson, © 1991 ACM, Inc., reprinted by permission (Mackenzie, xxiv Publisher’s acknowledgements I. S., Sellen, A. and Buxton, W. 1991); Figure 14.1 from Understanding Computers and Cognition: A New Foundation for Design, published by Addison-Wesley, reprinted by permission of Pearson Education, Inc. (Winograd, T. and Flores, F. 1986); Figure 14.5 from Theories of multi-party interaction. Technical report, Social and Computer Sciences Research Group, University of Surrey and Queen Mary and Westfield Colleges, University of London, reprinted by permission of Nigel Gilbert (Hewitt, B., Gilbert, N., Jirotka, M. and Wilbur, S. 1990); Figure 14.6 from Dialogue processes in computer-mediated communication: a study of letters in the com system. 
Technical report, Linköping Studies in Arts and Sciences, reprinted by permission of Kerstin Severinson Eklundh (Eklundh, K. S. 1986); Chapter 14, Design Focus: Looking real – Avatar Conference, screen shots reprinted by permission of AVATAR-Conference project team; Figure 16.17 screen shot reprinted by permis- sion of Harold Thimbleby; Figure 17.5 based on Verifying the behaviour of virtual world objects in DSV-IS 2000 Interactive Systems: Design, Specification and Verification. LNCS 1946, edited by P. Palanque and F. Paternò, published and reprinted by permission of Spinger-Verlag GmbH & Co. KG (Willans, J. S. and Harrison, M. D. 2001); Figure 18.4 icons reprinted by permission of Fabio Paternò; Chapter 19, p.675 CuSeeMe screen shot reprinted by permission of Geoff Ellis; Chapter 19, Design Focus: TOWER – workspace awareness, screen shots reprinted by permission of Wolfgang Prinz; Figure 20.1 reprinted by permission of Mitsubishi Electric Research Laboratories, Inc.; Figure 20.4 (right) reprinted by permission of Sony Computer Science Laboratories, Inc; Figure 20.9 from Cone trees. Animated 3d visualisation of hierarchical information in Proceedings of the CH’91 Conference of Human Factors in Computing Systems, © 1991 ACM, Inc., reprinted by permission (Robertson, G. G., Card, S. K., and Mackinlay, J. D. 1991); Figure 20.10 from Lifelines: visualising personal histories in Proceedings of CH’96, © 1996 ACM, Inc., reprinted by permission (Plaisant, C., Milash, B., Rose, A., Widoff, S. and Shneiderman, B. 1996); Figure 20.11 from Browsing anatomical image databases: a case study of the Visible Human in CH’96 Conference Companion, © 1996 ACM, Inc., reprinted by permission (North, C. and Korn, F. 1996); Figure 20.12 from Externalising abstract mathematical models in Proceedings of CH’96, © 1996 ACM, Inc., reprinted by permission (Tweedie, L., Spence, R., Dawkes, H. and Su, H. 1996); Figure 21.2 from The impact of Utility and Time on Distributed Information Retrieval in People and Computers XII: Proceedings of HCI’97, edited by H. Thimbleby, B. O’Conaill and P. Thomas, published and reprinted by permission of Spinger-Verlag GmbH & Co. KG (Johnson, C. W. 1997); Figure 21.4 screen shot reprinted by permission of the Departments of Electronics and Computer Science and History at the University of Southampton; Figure 21.6 Netscape browser window © 2002 Netscape Communications Corporation. Used with permission. Netscape has not authorized, sponsored, endorsed, or approved this publication and is not responsible for its content. We are grateful to the following for permission to reproduce photographs: Chapter 1, p. 50, Popperfoto.com; Chapter 2, p. 65, PCD Maltron Ltd; Figure 2.2 Electrolux; Figures 2.6 and 19.6 photos courtesy of Douglas Engelbart and Bootstrap Institute; Figure 2.8 (left) British Sky Broadcasting Limited; Figure 2.13 (bottom Publisher’s acknowledgements xxv right) Sony (UK) Ltd; Chapter 2, Design Focus: Feeling the Road, BMW AG; Chapter 2, Design Focus: Smart-Its – making using sensors easy, Hans Gellersen; Figures 4.1 (right) and 20.2 (left) Palo Alto Research Center; Figure 4.2 and 20.3 (left) François Guimbretière; Figure 4.3 (bottom left) Franklin Electronic Publishers; Figure 5.2 (top plate and middle plate) Kingston Museum and Heritage Service, (bottom plate) V&A Images, The Victoria and Albert Museum, London; Chapter 5, Design Focus: Cultural probes, William W. Gaver, Anthony Boucher, Sarah Pennington and Brendan Walker, Equator IRC, Royal College of Art; Chapter 6, p. 
245, from The 1984 Olympic Message System: a text of behavioural principle of system design in Communications of the ACM, 30(9), © 1987 ACM, Inc., reprinted by permission (Gould, J. D., Boies, S. J., Levy, S., Richards, J. T. and Schoonard, J. 1987); Figures 9.5 and 9.6 J. A. Renshaw; Figure 9.7 Dr. R. D. Ward; Figure 10.3 SensAble Technologies; Chapter 13, Design Focus: Tomorrow’s hospital – using participatory design, Professor J. Artur Vale Serrano; Chapter 18, p. 650, Michael Beigl; Chapter 19, p. 678, Steve Benford, The Mixed Reality Laboratory, University of Nottingham; Chapter 19, Design Focus: SMS in action, Mark Rouncefield; Figure 20.2 (right) Ken Hinckley; Figure 20.3 (right) MIT Media Lab; Figure 20.4 (left) from Interacting with paper on the digital desk in Communications of the ACM, 36(7), © 1993 ACM, Inc., reprinted by permission (Wellner, P. 1993); Chapter 20, p. 726, Peter Phillips; Chapter 20, Design Focus: Ambient wood – augmenting the physical, Yvonne Rogers; Chapter 20, Design Focus: Shared experience, Matthew Chalmers. We are grateful to the following for permission to reproduce text extracts: Pearson Education, Inc. Publishing as Pearson Addison Wesley for an extract adapted from Designing the User Interface: Strategies for Effective Human–Computer Interaction 3/e by B. Shneiderman © 1998, Pearson Education, Inc; Perseus Books Group for an extract adapted from The Design of Everyday Things by D. Norman, 1998; and Wiley Publishing, Inc. for extracts adapted from ‘Heuristic Evaluation’ by Jakob Nielson and Robert L. Mack published in Usability Inspection Methods © 1994 Wiley Publishing, Inc.; IEEE for permission to base chapter 20 on ‘The human ex- perience’ by Gregory Abowd, Elizabeth Mynatt and Tom Rodden which appeared in IEEE Pervasive Computing Magazine, Special Inaugural Issue on Reaching for Weiser’s Vision, Vol. 1, Issue 1, pp. 48–58, Jan–March 2002. © 2002 IEEE. In some instances we have been unable to trace the owners of copyright material, and we would appreciate any information that would enable us to do so. INTRODUCTION WHY HUMAN–COMPUTER INTERACTION? In the first edition of this book we wrote the following: This is the authors’ second attempt at writing this introduction. Our first attempt fell victim to a design quirk coupled with an innocent, though weary and less than attentive, user. The word-processing package we originally used to write this intro- duction is menu based. Menu items are grouped to reflect their function. The ‘save’ and ‘delete’ options, both of which are correctly classified as file-level operations, are consequently adjacent items in the menu. With a cursor controlled by a trackball it is all too easy for the hand to slip, inadvertently selecting delete instead of save. Of course, the delete option, being well thought out, pops up a confirmation box allow- ing the user to cancel a mistaken command. Unfortunately, the save option produces a very similar confirmation box – it was only as we hit the ‘Confirm’ button that we noticed the word ‘delete’ at the top... Happily this word processor no longer has a delete option in its menu, but unfortu- nately, similar problems to this are still an all too common occurrence. Errors such as these, resulting from poor design choices, happen every day. Perhaps they are not catastrophic: after all nobody’s life is endangered nor is there environmental damage (unless the designer happens to be nearby or you break something in frustration!). 
However, when you lose several hours' work with no written notes or backup and a publisher's deadline already a week past, 'catastrophe' is certainly the word that springs to mind. Why is it then that when computers are marketed as 'user friendly' and 'easy to use', simple mistakes like this can still occur? Did the designer of the word processor actually try to use it with the trackball, or was it just that she was so expert with the system that the mistake never arose? We hazard a guess that no one tried to use it when tired and under pressure. But these criticisms are not levied only on the designers of traditional computer software. More and more, our everyday lives involve programmed devices that do not sit on our desk, and these devices are just as unusable. Exactly how many VCR designers understand the universal difficulty people have trying to set their machines to record a television program? Do car radio designers actually think it is safe to use so many knobs and displays that the driver has to divert attention away from the road completely in order to tune the radio or adjust the volume?

Computers and related devices have to be designed with an understanding that people with specific tasks in mind will want to use them in a way that is seamless with respect to their everyday work. To do this, those who design these systems need to know how to think in terms of the eventual users' tasks and how to translate that knowledge into an executable system. But there is a problem with trying to teach the notion of designing computers for people. All designers are people and, most probably, they are users as well. Isn't it therefore intuitive to design for the user? Why does it need to be taught when we all know what a good interface looks like? As a result, the study of human–computer interaction (HCI) tends to come late in the designer's training, if at all. The scenario with which we started shows that this is a mistaken view; it is not at all intuitive or easy to design consistent, robust systems that will cope with all manner of user carelessness.

DESIGN FOCUS: THINGS DON'T CHANGE

It would be nice to think that problems like those described at the start of the Introduction would never happen now. Think again! Look at the MacOS X 'dock'. It is a fast launch point for applications; folders and files can be dragged there for instant access; and also, at the right-hand side, there sits the trash can. Imagine what happens as you try to drag a file into one of the folders. If your finger accidentally slips whilst the icon is over the trash can – oops! Happily this is not quite as easy in reality as it looks in the screen shot, since the icons in the dock constantly move around as you try to drag a file into it. This is to make room for the file in case you want to place it in the dock. However, it means you have to concentrate very hard when dragging a file over the dock. We assume this is not a deliberate feature, but it does have the beneficial side effect that users are less likely to throw away a file by accident – whew! In fact it is quite fun to watch a new user trying to throw away a file. The trash can keeps moving as if it didn't want the file in it. Experienced users evolve coping strategies. One user always drags files into the trash from the right-hand side as then the icons in the dock don't move around. So two lessons:
- designs don't always get better
- but at least users are clever.
(Screen shot reprinted by permission from Apple Computer, Inc.)

The interface is not something that can be plugged in at the last minute; its design should be developed integrally with the rest of the system. It should not just present a 'pretty face', but should support the tasks that people actually want to do, and forgive the careless mistakes. We therefore need to consider how HCI fits into the design process.

Designing usable systems is not simply a matter of altruism towards the eventual user, or even marketing; it is increasingly a matter of law. National health and safety standards constrain employers to provide their workforce with usable computer systems: not just safe but usable. For example, EC Directive 90/270/EEC, which has been incorporated into member countries' legislation, requires employers to ensure the following when designing, selecting, commissioning or modifying software:
- that it is suitable for the task
- that it is easy to use and, where appropriate, adaptable to the user's knowledge and experience
- that it provides feedback on performance
- that it displays information in a format and at a pace that is adapted to the user
- that it conforms to the 'principles of software ergonomics'.
Designers and employers can no longer afford to ignore the user.

WHAT IS HCI?

The term human–computer interaction has only been in widespread use since the early 1980s, but has its roots in more established disciplines. Systematic study of human performance began in earnest at the beginning of the last century in factories, with an emphasis on manual tasks. The Second World War provided the impetus for studying the interaction between humans and machines, as each side strove to produce more effective weapons systems. This led to a wave of interest in the area among researchers, and the formation of the Ergonomics Research Society in 1949. Traditionally, ergonomists have been concerned primarily with the physical characteristics of machines and systems, and how these affect user performance. Human Factors incorporates these issues, and more cognitive issues as well. The terms are often used interchangeably, with Ergonomics being the preferred term in the United Kingdom and Human Factors in the English-speaking parts of North America. Both of these disciplines are concerned with user performance in the context of any system, whether computer, mechanical or manual.
