Final Summary HF - Human Factors (Technische Universiteit Eindhoven)

Document Details

Uploaded by PersonalizedParticle6281

Technische Universiteit Eindhoven

Tags

human factors engineering, human factors design, human-computer interaction

Summary

This document provides a summary of Human Factors, a subject offered at Technische Universiteit Eindhoven. It discusses concepts such as human factors engineering, design processes, and evaluation methods. The document focuses on how to improve human interaction with products and processes. It covers topics like safety, performance, and user satisfaction.

Full Transcript


CH1-Intro

Human factors engineering: a discipline that considers the cognitive, physical, and organizational influences on human behavior to improve human interaction with products and processes. Human factors engineering improves people's lives by making technology work well for them. Most broadly, human factors engineering aims to improve human interaction with systems by enhancing:
Safety: reducing the risk of injury and death
Performance: increasing productivity, quality, and efficiency
Satisfaction: increasing acceptance, comfort, and well-being

Design of high-risk systems must focus on safety. In contrast, design of workplaces focuses more on performance, and design of consumer products focuses more on satisfaction. There are trade-offs between these three goals (e.g., the speed-accuracy trade-off), but good human factors design can avoid them.

The three goals of human factors are accomplished through the human factors design cycle. The design cycle begins with understanding the people and the systems they interact with, proceeds with creating a solution, and completes with evaluating how well the solution achieves the human factors goals. The outcome of this evaluation becomes an input to the cycle because it typically leads to a deeper understanding of what people need and identifies additional opportunities for improvement. Because designs are imperfect and people adapt to designs in unanticipated ways, the design process is iterative, repeating until a satisfactory design emerges, and continuing even after the first version is released. This approach embodies the essence of design thinking: an empathetic focus on the person, iterative refinement, and integrative thinking that considers many aspects of design problems to arrive at novel solutions.

Six human factors design interventions (the process of understanding, creating, and evaluating is repeated across days, weeks, and years of a system's lifecycle):
Task design focuses more on changing what operators do than on changing the devices they use.
Equipment design changes the physical equipment that people work with.
Environmental design changes the physical environment where the tasks are carried out.
Training enhances the knowledge and skills of people by preparing them for the job.
Selection changes the makeup of the team or organization by picking the people best suited to the job.
Team and organization design changes how groups of people communicate and relate to each other, and provides a broad view that includes the organizational climate where the work is performed.

The least effective of these design interventions are selection and training: design should fit the technology to the person rather than fit the person to the technology. In fact, design should strive to accommodate all people. The cost to an organization of incorporating human factors in design grows from 1% of the total product cost when human factors is addressed at the earliest stages to more than 12% when human factors is addressed only in response to problems, after a product is in the manufacturing stage.
Design thinking: an empathetic focus on the person, iterative refinement, and integrative thinking.

Scope of HFE: the priority of human goals and interventions depends on the application area.

Engineering psychology is a discipline within psychology, and human factors is a discipline within engineering. The distinction: the ultimate goal of the study of human factors is system design, accounting for those factors, psychological and physical, that are properties of the human component. In contrast, the ultimate goal of engineering psychology is to understand the human mind as it relates to design.

Cognitive engineering is also closely related to human factors, but focuses on cognitive considerations, particularly in the context of safety of complex systems, such as nuclear power plants [12, 13]. It focuses on how organizations and individuals manage such systems with the aid of sophisticated displays, decision aids, and automation, which is the focus of Chapters 7 and 11.

Macroergonomics, like cognitive engineering, takes complex systems as its focus. Macroergonomics addresses the need to consider not just the details of particular devices or processes, but the overall work system. It takes a broad systems perspective and considers the design of teams and organizations, which is the focus of Chapters 16 through 18.

Human-systems integration takes an even broader view and considers how designs must account for how people interact with all systems, to the point of forecasting the availability of qualified staff based on demographic trends and training requirements.

Human-computer interaction (HCI) is often linked to the field of user experience and tends to focus more on software and less on the physical and organizational environment. Computers already touch many aspects of our lives, and the internet of things, augmented reality, and wearable computers will foster an even stronger influence. As a consequence, human-computer interaction and user experience increasingly overlap with other areas of human factors engineering. For example, as computers have been transformed from desktop machines to devices that are held in your hand or worn on your wrist, the physical aspects of reach and touch are critically important.

The behavior of people depends on the situation, and this makes systems thinking and engineering important when designing for people. Three elements of systems thinking:
1. Interconnection = complex systems have many interconnected elements; changing one element affects the others. What is the purpose: ask "why" the system is built. Joint optimization = improving performance and technology together.
2. Adaptation = technology often has unanticipated consequences that result from people adapting and changing their behavior in response to the technology; adaptation can lead good technology to have bad outcomes. What could go wrong: ask "what" could happen that might not be expected.
3. Environment = our surroundings guide our behavior. Affordances = opportunities for action presented by the environment.

Intuition is often a poor guide for design.
History

1911: Taylor, father of scientific management. Attempted to apply scientific methods to study and improve the engineering of processes and management. Experiment: workers and rest breaks. Introduced time studies and the scientific study of work. He focused on increasing productivity, but not on safety or satisfaction.

1924: Gilbreth & Gilbreth. Systematic analysis of human work (bricklaying motions). Developed an influential technique for decomposing motions during work into fundamental elements, or therbligs. Experiment: pilot human error.

1830-1904: Muybridge. Captured the biomechanical characteristics of complex actions.

1949: A group of researchers (Oxford) created the Ergonomics Research Society (ergon = work, nomos = natural laws), and the first textbook appeared.

CH2-Design Methods

Human factors considerations go beyond the interface. Systematic design processes specify a sequence of steps for product analysis, design, and production. Even though there are many different design processes, they generally include stages that reflect understanding the user's needs (pre-design or front-end analysis activities), creating a product or system (prototypes, pre-production models), and evaluating how well the design meets the user's needs; all of which is an iterative process that cycles back to understanding the user's needs. Product lifecycle models are design processes that also include product implementation, utilization and maintenance, and dismantling or disposal. Design processes differ in the degree to which they are defined by sequential steps or by iteration, flexibility, and adaptation to uncertainty.

Vee process. Figure 2.1 shows three common design processes. The first is the Vee process, which is often used in the design of large, high-risk systems, such as the design of a new aircraft, where sequential development is possible and verification, validation, and documentation are critical. The Vee shape starts with a broad system description and design requirements, which are decomposed into detailed requirements. For the dashboard of a car, these detailed requirements might include information elements, such as speed and the level of the gas tank. The designs of these components are then integrated and verified by comparing them to the original system requirements. In the Vee process, the general specifications are well-defined at the start, and emphasis is given to documenting a successful implementation of those specifications.

Plan-Do-Check-Act cycle. A second design model is the Plan-Do-Check-Act (PDCA) cycle, which is commonly used to enhance workplace efficiency and production quality. The cycle begins with the target improvement. The Plan stage describes objectives and specifies the targeted improvement. The Plan is then implemented in the Do stage, where a product, prototype, or process is created. The Check stage involves assessing the intervention defined by the Do stage to understand what effect it had. Act completes the cycle by implementing the intervention or developing a new Plan based on the outcomes. This cycle reflects the scientific management approach of Taylor in that each plan represents a hypothesis of how the system or product might be improved.
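The PDCA cycle reads naturally as an iterative loop. Below is a minimal, hypothetical sketch in Python: the four stage names come from the text, while pdca, measure, apply_change, and the target threshold are invented placeholders, not anything prescribed by the method.

    # Hypothetical Plan-Do-Check-Act loop. The stage names follow the text;
    # the function names and threshold logic are invented placeholders.
    def pdca(measure, apply_change, target, max_cycles=10):
        result = measure()
        for cycle in range(max_cycles):
            plan = {"objective": target, "hypothesis": f"change for cycle {cycle}"}  # Plan
            apply_change(plan)                                                       # Do
            result = measure()                                                       # Check
            if result >= target:                                                     # Act: adopt the change
                break                                                                # otherwise: re-plan
        return result

Each plan is a hypothesis about how the process might be improved, which is why the loop re-plans whenever the Check stage falls short of the target.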
Scrum process. A third design model is the Scrum approach, which is more typical of consumer software products, such as smartphone and web applications, where an iterative and incremental approach is needed to resolve uncertainty in design requirements. The Scrum approach focuses on creating products and using those products to discover requirements. Early prototypes reveal design opportunities that are visible only after the technology has been implemented. Central to the Scrum approach is delivering system components quickly and accommodating requirements discovered during development. "Sprints," which are short-duration efforts, typically 24 hours to 30 days, focus effort on quickly producing new iterations of the product. The Scrum approach is well-suited to situations that demand a high degree of innovation, such as those where technology changes rapidly and potential applications emerge abruptly. This flexibility is why such techniques are sometimes termed agile design. The Scrum approach relies on close interaction between co-located workers to develop solutions in an ad-hoc manner; therefore, the approach tends to place less emphasis on standardized work processes, documentation, and testing.

As noted in the introduction, cars are increasingly becoming highly computerized consumer products. Consequently, one might think a Scrum approach might be appropriate for designing a car, given the rapidly changing technology and the associated need for innovation to stay ahead of competitors. Rapid technology change makes it difficult to specify detailed requirements in advance. But cars also have elements of high-risk systems that intensify the demands to verify and validate critical safety features, making the Vee model more appropriate. Such design situations demonstrate the need for a hybrid approach that combines elements of the Vee, Plan-Do-Check-Act, and Scrum.

The cycles vary in how long they take to complete, with the outer cycles taking months or years and inner cycles taking minutes. In the extreme, one might complete a cycle during an interview with a user, where the designer creates a simple paper prototype of a possible solution and the user provides immediate feedback. Taking hours rather than seconds, a heuristic evaluation (no end users), where design principles and guidelines are applied to the prototype, can quickly assess how a design might violate human capabilities. Usability tests typically take days or weeks to collect data on how end users respond to the system, and so provide a more detailed and precise understanding of how people will react to a design. The inner elements of the design cycle provide rapid but approximate information about how a particular design might succeed in meeting people's needs; the outer elements of the cycle are more time consuming, but more precise. This speed-accuracy tradeoff means that the time and resources needed to understand, create, and evaluate should be matched to the system being developed. Rapidly changing markets place a premium on fast and approximate methods.

Usability tests are conducted multiple times as the interface design goes through modifications. Although each usability test typically includes only five people (see Chapter 3 for more detail), as many as 60 cycles of testing can provide benefits that outweigh the costs. At a minimum, three to five iterations should be considered, and one can expect improvements of 25-40% for each iteration (a compounding example is sketched below).
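To make the iteration payoff concrete, here is a minimal arithmetic sketch assuming each iteration independently fixes a fixed fraction of the remaining usability problems; the 25-40% rate comes from the text, while the function name and the starting count of 100 problems are invented for illustration.

    # Compounding the 25-40% per-iteration improvement cited in the text.
    # The starting problem count (100) is an invented example.
    def remaining_problems(initial, improvement_rate, iterations):
        """Problems left if each iteration fixes `improvement_rate` of what remains."""
        return initial * (1 - improvement_rate) ** iterations

    print(remaining_problems(100, 0.30, 3))  # -> 34.3, i.e. roughly a third remain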
Holistic perspective or systems thinking = an important part of user-centered design; rather than considering elements of the design as independent, unrelated parts, it focuses on the interactions and relationships of parts.

The "Five Whys" help identify the multiple causes of accidents. These questions typically show multiple unsafe elements associated with training, procedures, controls, and displays that should be considered before rather than after an accident.

Time-motion studies identify ways to improve worker efficiency.

Contextual inquiry reveals users' needs through careful observation: act like an apprentice and ask a user to explain things.

Task analysis consists of the following steps:

1. Define the purpose and identify the required data. Typical reasons for performing a task analysis include:
Redesigning processes
Identifying software and hardware design requirements
Identifying content of the human-machine interface
Defining procedures, manuals, and training
Allocating functions across teammates and automation
Estimating system reliability
Evaluating staffing requirements and estimating workload
Whether the task analysis is focused on the physical or cognitive aspects of the activity, four categories of information are typically collected:
Hierarchical relationships: what, why, and how tasks are performed
Information flow: who performs the task, with what indications, and with what feedback
Sequence and timing: when, in what order, and how long it takes to perform tasks
Location and environmental context: where and under what physical and social conditions tasks are performed

2. Collect task data.
Observation involves watching users as they interact with existing versions of the product/system. Observations should be performed in the environment where the person normally accomplishes the task. Asking questions during the observations, or having users think out loud, helps the analyst observe the user better.
Retrospective and prospective protocol analysis address important limits of direct observations: direct observations disrupt ongoing activity, or they can fail to capture rarely occurring situations. Retrospective verbal protocols require that people describe past events, and prospective verbal protocols require that people imagine how they would act in future situations.
Structured and unstructured interviews involve the human factors specialist asking the user to describe their tasks. Structured interviews use a standard set of questions that ensure the interview captures specific information for all interviewees. Unstructured interviews use questions that are adjusted during the interview according to the situation.
The critical incident technique is a particularly useful approach for understanding how people respond to accident and near-accident situations in high-risk systems. Because accidents are rare, direct observation is not feasible. With the critical incident technique, the analyst asks users to recall the details of specific situations and relive the event. By reliving the event with the user, the analyst can get insights similar to those from direct observation.
Surveys and questionnaires are typically used after designers have obtained preliminary descriptions of activities or basic tasks. They help designers prioritize different design functions or features.
Automatic data recording uses smartphones and activity monitors to record people's activities unobtrusively.
Such data has the benefit of providing a detailed and objective record, but it lacks information regarding the purpose of the activity.
Limitations of data collection techniques: all methods have limits, but combinations of methods can compensate. Innovating from data requires analysts to go beyond current activities and identify better ways to achieve users' goals.

3. Interpret task data.
Task hierarchy: goal, task, and subtask decomposition.
Task flow: control, and decisions regarding the flow from one task to another. Activity diagrams build on flow charts and also show tasks that are performed concurrently; these diagrams capture task flow and information.
Task sequence: task duration and sequence, as well as communication between system components. Sequence diagrams capture task sequence and timing.
Match the representation of tasks (hierarchy, flow, and sequence) to the design issue.

4. Innovate from task data.
User identification and persona development describes the most important user populations of the product or system. Use personas: detailed descriptions of imagined, typical users.
Scenarios, user journeys, and use cases complement personas. Personas are detailed descriptions of typical users, and scenarios are stories about these personas in a context.
Environment and context analysis describes where the tasks, scenarios, and personas live.
Workload analysis considers whether the system is going to place excessive mental or physical demands on the user, either alone or in conjunction with other tasks.
Safety and hazard analyses should be conducted any time a product or system has implications for human safety: identify potential hazards and the likelihood of human error.
Function allocation analysis considers how to distribute tasks between the human operator and technology.
- Design heuristics help human factors professionals provide design teams with quick input on whether design alternatives are consistent with human capabilities.
- Design patterns are solutions to commonly occurring design problems; they are most typically associated with software, but also apply to physical systems.

Translating user needs and goals into system specifications requires the human factors specialist to take a systems thinking approach: analyzing the entire system to determine the best configuration of features. The focus should be on the person-technology system as a unit.
- Quality Function Deployment (QFD) can help prioritize system features; the QFD method uses the house of quality analysis tool.
- Cost/benefit analysis builds on the QFD analysis, but now the different designs are compared according to their cost relative to their benefits. The design with the lowest cost/benefit ratio represents the greatest value.
- Tradeoff analysis identifies the most promising way to implement a design. If multiple factors are considered, design tradeoffs might be based on the design that has the largest number of advantages and the smallest number of disadvantages. (A small scoring sketch follows below.)
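As a rough illustration of these comparisons, the sketch below scores hypothetical design alternatives on weighted criteria and computes a cost/benefit ratio. The criteria, weights, scores, and costs are all invented for the example; a real QFD exercise uses the house of quality rather than this simplified scoring.

    # Simplified weighted scoring for comparing design alternatives.
    # All criteria, weights, scores, and costs are invented for illustration.
    weights = {"safety": 0.5, "performance": 0.3, "satisfaction": 0.2}
    alternatives = {
        "design_a": {"scores": {"safety": 8, "performance": 6, "satisfaction": 7}, "cost": 120},
        "design_b": {"scores": {"safety": 6, "performance": 9, "satisfaction": 8}, "cost": 90},
    }
    for name, alt in alternatives.items():
        benefit = sum(weights[c] * s for c, s in alt["scores"].items())
        # The lowest cost/benefit ratio represents the greatest value (per the text).
        print(name, "benefit:", round(benefit, 2), "cost/benefit:", round(alt["cost"] / benefit, 1))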
Paper prototypes are more of a tool to understand user needs than an initial design solution.
- Paper prototypes are useful at the beginning of a design process, since changes are easily made and users thus feel more comfortable identifying flaws. Moreover, sketching takes little effort, so many design alternatives can be tested.
- Wireframes are simple layouts that show the grouping and location of content, but omit graphics and detailed functionality. They are helpful in documenting decisions and communicating the essential interactions people might have with the product. Wireframes lack details of look and feel.
- Mockups focus on the look and feel, and include the color, font, and layout choices of the final product. They are not limited to software systems; mockups are often also created for hardware systems.
Wireframes communicate the system's functional characteristics to the design team, and mockups are used to communicate the system's physical features to the design team and other stakeholders.

Design is an iterative cycle of understanding, creating, and evaluating.
- Critical incident technique = a particularly useful approach for understanding how people respond to accident and near-accident situations in high-risk systems. Because accidents are rare, direct observation is not feasible, so the analyst asks users to recall the details of specific situations and relive the event. It is an unstructured interview.

CH3-Evaluation Methods

No one evaluation method provides a complete answer, and evaluation methods are not equally suited to all design questions.
True experiment = in a highly controlled laboratory environment.
Quasi-experiment (descriptive study) = less controlled, but more representative.

Evaluation methods that serve as the first step of the next iteration of the design are termed formative evaluations. Formative evaluations help understand how people use a system and how the system might be improved. Consequently, formative evaluations tend to rely on qualitative measures: general aspects of the interaction that need improvement.
Evaluation methods that serve as the final step in assessing a design are termed summative evaluations. Summative evaluations are used to assess whether the system performance meets design requirements and benchmarks. Consequently, summative evaluations tend to rely on quantitative measures: numeric indicators of performance.

Understand how to improve (formative evaluation): Does the existing product address the real needs of people? Is it used as expected?
Diagnose problems with prototypes (formative evaluation): How can it be improved? Why did it fail? Why isn't it good enough?
Verify (summative evaluation): Does the expected performance meet design requirements? Which system is better? How good is it?

Different evaluation methods are necessary at different points in the iterative design process.
Literature reviews can serve as a useful starting point for evaluation. They involve reading journal articles, books, conference papers, and so on.
Heuristic evaluations build on previous research and do not require additional data collection. A heuristic evaluation should include at least three evaluators.
A cognitive walkthrough considers each task associated with a system interaction and poses a series of questions to highlight potential problems that might confront someone trying to actually perform the sequence of tasks. (A sketch of these questions follows below.)
Literature reviews, heuristic evaluation, and cognitive walkthrough do not involve collecting data from people interacting with the system, which makes them fast to apply, and so they are particularly useful early in the design. One limitation of these approaches is that the analysts might suffer from learned intuition and the curse of knowledge about how the system works.
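A cognitive walkthrough can be organized as a fixed set of questions applied to every step of a task. The four questions below follow a commonly cited formulation of the method (e.g., Wharton et al.); the task steps are invented for illustration.

    # Cognitive walkthrough sketch: pose standard questions at each task step.
    # The questions follow a common formulation; the steps are invented.
    QUESTIONS = [
        "Will the user try to achieve the right effect?",
        "Will the user notice that the correct action is available?",
        "Will the user associate the correct action with the desired effect?",
        "If the correct action is performed, will the user see progress?",
    ]
    steps = ["Open the settings menu", "Select 'pair device'", "Confirm the pairing code"]
    for step in steps:
        for question in QUESTIONS:
            print(f"{step} -- {question}")  # the analyst records yes/no plus evidence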
Usability testing is a formative evaluation technique: it helps diagnose problems and identify opportunities for improvement as part of an iterative development process. Usability testing involves users interacting with the system and measuring their performance as a way to improve the design. Usability is primarily the degree to which the system is easy to use, or "user friendly." This translates into a cluster of factors, including the following five variables (from Nielsen):
Learnability: The system should be easy to learn so that the user can rapidly start getting some work done.
Efficiency: The system should be efficient to use so that once the user has learned the system, a high level of productivity is possible.
Memorability: The steps needed to operate the system should be easy to remember so that the casual user is able to return to the system after some period of not having used it, without having to learn everything all over again.
Errors: The system should induce few errors. If people do make errors, they should be able to easily recover from them. Further, catastrophic errors should not occur.
Satisfaction: The system should be pleasant to use so that users are subjectively satisfied when using it; they like it.

Usability testing typically includes just five participants. Although five participants are often used in each usability test, the exact number of participants for each test and the number of tests depend on the complexity of the system, the diversity of intended users, and the importance of identifying usability problems (a simple model of problem discovery is sketched below).
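The five-participant guideline is often justified with a simple discovery model attributed to Nielsen and Landauer: if each participant independently finds a given problem with probability p, then n participants find the proportion 1 - (1 - p)^n of the problems. A minimal sketch, using p = 0.31 as the commonly quoted average detection rate (an assumption, not a figure from this summary):

    # Problem-discovery model: proportion of usability problems found by n
    # participants, assuming each finds a problem with probability p.
    # p = 0.31 is the commonly quoted average, not a figure from this text.
    def problems_found(n, p=0.31):
        return 1 - (1 - p) ** n

    for n in (1, 3, 5, 15):
        print(n, "participants ->", f"{problems_found(n):.0%}")  # 5 -> ~84%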
Comprehensive system evaluation provides a more inclusive, summative assessment of the system than a usability evaluation. The data source for a comprehensive system evaluation often involves controlled experiments. Similarly, user studies aimed at understanding more general factors affecting human behavior, such as how voice control compares to manual operation of a mobile device while driving, also require controlled experiments. Controlled experiments are also needed to establish the validity of general human factors principles, such as control-display compatibility.

The experimental method consists of deliberately producing a change in one or more causal or independent variables and measuring the effect of that change on one or more dependent variables. An experiment should change only the independent variables (IVs) of interest while all other variables are held constant or controlled. However, for some human factors studies, participants need to perform the tasks in various real-world contexts for a comprehensive system evaluation. In such cases, control can be difficult. As control is loosened, the researcher will need to depend more on quasi-experiments and descriptive methods: describing relationships even though they could not actually be manipulated or controlled. For example, the researcher might describe a greater rate of crashes involving smartphone use in city driving compared to freeway driving to draw the conclusion that smartphones are more likely to distract drivers in complex traffic situations.

In-service evaluation refers to evaluations conducted after a design has been released, such as after a car has been on the market, after a modified manufacturing line has been placed in service, or after a new smartphone operating system has been released. Descriptive studies are critical for in-service evaluation because experimental control is often impossible.

In the vignette presented at the beginning of this chapter, an in-service evaluation of existing smartphone use might start by examining crash records or moving violations. This will give us some information regarding road safety issues, but there is a great deal of variation, missing data, and underreporting in such databases. Like most descriptive studies, such a comparison of crashes is a challenge because each crash involves many different conditions, and important driver-related activities (e.g., eating, cell-phone use, looked but did not see) might go unreported.

A-B testing is a type of in-service evaluation where one version of a system (A) is compared to another version of the system (B), where one is typically an improvement over the existing system. A-B testing is very common for internet applications, where thousands of A-B tests provide data to guide screen layout and even shades of color: Google used A-B testing to pick one of 41 shades of blue. A-B testing typically collects data from many thousands of people, compared to the 3-5 for usability testing or the 20-100 participants in a typical experiment. (A minimal comparison sketch follows below.)

Collecting data, whether in an experimental or descriptive study, is only half of the process. The other half is inferring the meaning or message conveyed by the data, and this usually involves generalizing or predicting from the particular data sampled to the broader population. Do smartphones compromise (or not) driving safety in the broad population of automobile drivers, and not just in the sample of drivers from a driving simulator experiment, from crash data in one geographical area, or from self-reported survey data? The ability to generalize involves care in both the design of experiments and in the statistical analysis.

A descriptive study, a usability evaluation, and a controlled experiment differ substantially in the amount and type of data collected and how the data would be analyzed, but the general steps are similar. Table 3.2 outlines five general steps, and the following sections expand on these steps in conducting a controlled study.
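A-B comparisons of the kind described above often reduce to comparing two observed rates, such as click-through or task-completion rates. Below is a minimal two-proportion z-test sketch; the counts are invented, and in practice a statistics library would be used instead of this hand-rolled normal approximation.

    # Two-proportion z-test for an A-B comparison (all counts are invented).
    import math

    def ab_ztest(success_a, n_a, success_b, n_b):
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided
        return z, p_value

    print(ab_ztest(480, 10_000, 540, 10_000))  # does version B beat version A?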
CH6-Cognition

The human information-processing system is conveniently represented by different stages at which information gets transformed: (1) sensation, by which the senses transform physical into neural energy, (2) perception of information about the environment, (3) central processing, or transforming and remembering that information, and (4) responding to that information.

Important dimensions of the cognitive environment include its bandwidth (e.g., how quickly it changes), familiarity (e.g., how often and how long the person has experienced the environment), and the degree of knowledge in the world (e.g., to what extent information that guides behavior is indicated by features in the environment).

Working memory is a temporary, effort-demanding store. Attention can act either as a filter or as a fuel. The selection of channels to attend (and the filtering of channels to ignore) is typically driven by four factors: salience, effort, expectancy, and value. Salience contributes to the bottom-up process of allocating attention, influencing attentional capture, which occurs when the environment directs attention.

In contrast to salient features that capture attention, many events that do not have these features may not be noticed, even if they are significant, a phenomenon known as change blindness or inattentional blindness. Change blindness leads people to miss surprisingly large features of the environment even when they may look directly at them.
- Cocktail party effect = noticing when a nearby speaker utters our name even though we were not initially selecting that speaker.

Expectancy and value together define what are characteristically called top-down or knowledge-driven factors in allocating attention. That is, we tend to look at, or "sample," the world where we expect to find valuable information.

The most direct consequence of selective attention is perception. Once attention is directed to an object or area of the environment, perception proceeds by three often simultaneous and concurrent processes: (1) bottom-up feature analysis, (2) top-down processing, and (3) unitization. The latter two are based on long-term memory, and each has different implications for design. Bottom-up processing depends on the physical makeup of the stimulus. Top-down processing, based on knowledge and context, depends on expectancies stored from experience in long-term memory. The third component, unitization, joins the physical stimulus and experience. Top-down processing helps us see what would otherwise be very difficult to see, and sometimes even see what isn't there.

The preceding examples lead us to a few simple guidelines for supporting attention and perception:
1. Maximize bottom-up processing by not only increasing visibility and legibility (or audibility of sounds), but also paying careful attention to confusion caused by similarity of message sets that could be presented in the same context.
2. Maximize automaticity and unitization by using familiar perceptual representations (those encountered frequently in long-term memory). Examples include the use of familiar fonts and lowercase text (Chapter 4), meaningful icons (Chapter 8), and words rather than abbreviations.
3. Maximize top-down processing when bottom-up processing may be poor (as revealed by analysis of the environment and the conditions under which perception may take place), and when unitization may be lacking (unfamiliar symbology or language). Improving top-down processing means providing the best opportunities for guessing; for example, putting information in a consistent location, as is done with the height of stop signs.
4. Maximize discriminating features to avoid confusion:
Use a smaller vocabulary. This has the double benefit of improving the guess rate and allowing the creation of a vocabulary with more discriminating features. This is why, in aviation, a restricted vocabulary is enforced for communications with air traffic control.
Create context. The meaning of "your fuel is low" is better perceived than that of the shorter phrase "fuel low," particularly under noisy conditions.
Exploit redundancy. This is quite similar to creating context, but redundancy often involves direct repetition of content in a different format. For example, simultaneous display of a visual and auditory message is more likely to guarantee correct perception in a perceptually degraded environment. The phonetic alphabet exploits redundancy by having each syllable convey a message concerning the identity of a letter (alpha = a).
Consider expectations.
Be wary of the "conspiracy" that invites perceptual errors when unexpected situations are encountered while bottom-up processing is degraded. An example of such conditions is flying at night and encountering unusual aircraft attitudes, which can lead to illusions. Another example is driving at night and encountering unexpected roadway construction. In these cases, as top-down processing attempts to compensate for the bottom-up degradation, it encourages the perception of the expected, which will not be appropriate. Under such conditions, perception of the unusual must be supported by providing particularly salient cues.
Test symbols and icons in their context of use. When doing usability testing of symbols or icons, make sure that the testing situation is similar to that in which they will eventually be used [246, 247]. This provides a more valid test of the effective perception of the icons, because context affects perception. A special case here is the poor perception of negation in sentences. For example, "do not turn off the equipment" may be readily perceived as "turn off the equipment" if the message is badly degraded, because our perceptual system treats the positive meaning of the sentence as the "default." We return to this issue in our discussion of comprehension and working memory. If negation is used, highlight it to avoid misinterpretation.

One downside of the redundancy and context that support top-down processing is that they increase the length of perceptual messages, thereby reducing the efficiency of information transfer.

Working memory (sometimes termed short-term memory) is relatively transient and limited to holding a small amount of information that may be rehearsed or "worked on" by other cognitive processes [251, 252]. It is the temporary store that keeps information available while we are using it, until we use it, or until we store it in long-term memory. The other memory store, long-term memory, involves the storage of information after it is no longer active in working memory and the retrieval of that information at a later point in time. When retrieval fails from either working or long-term memory, it is termed forgetting. The limits of working memory hold major implications for system design.

Perception depends on expectations, context, and redundant cues.
- Working memory = temporary holding of information that is retrieved from long-term memory; its capacity is limited to about 7 +/- 2 items. Designs that require working memory for more than 3 items for more than 7 seconds, or 1 item for more than 70 seconds, invite errors.
- Long-term memory = passive store of information which is activated only when it is needed. In order of most easily remembered: pictures, letters, and numbers.

We pay attention to bottom-up, objective factors, such as movement, intensity, novelty, change, repetition, colors, and contrast, but also to more top-down, subjective factors, such as interest, motives, and habits.

A model describes working memory as consisting of four components [253, 252]. In this model, a central executive component acts as an attentional control system that coordinates information from three "storage" systems: the visuospatial sketchpad, the episodic buffer, and the phonological loop.
The visuospatial sketchpad holds information in an analog spatial form (e.g., visual imagery). These images consist of encoded information that has been brought from the senses or retrieved from long-term memory.
Thus, the air traffic controller uses the visuospatial sketchpad to retain information regarding where planes are located in the airspace. This representation is essential for the controller if the display is momentarily lost from view. This spatial working-memory component is also used when a driver tries to construct a mental map of necessary turns from a set of spoken navigational instructions. Part of the problem that Laura had in using her north-up map to drive south into the city was related to the mental rotation in spatial working memory that was necessary to bring the map into alignment with the world out her windshield.
The phonological loop represents verbal information in an acoustical form. It is kept active, or "rehearsed," by articulating words or sounds, either vocally or sub-vocally. Thus, when we are trying to remember a phone number, we silently sound out the numbers until we no longer need them, such as when we have dialed the number or memorized it.
The episodic buffer orders and sequences events and communicates with long-term memory to provide meaning to the information held in the phonological loop and visuospatial sketchpad. The episodic buffer is important for design because it enables a meaningful sequence of events, a story, to be remembered much more easily than an unordered sequence.

Working memory holds two different types of information: verbal and spatial. The central executive operates on this material that is temporarily and effortfully preserved, either in the phonological loop or the visuospatial sketchpad. Whether material is verbal (in the phonological loop) or spatial (in the visuospatial sketchpad), our ability to maintain information in working memory is severely limited in four interrelated respects: how much information can be kept active (its capacity), how long it can be kept active, how similar the material is to other elements of working memory and to ongoing information processing, and the availability and type of attentional resources required to keep the material active.

Limits of working memory:
Capacity is four chunks; a chunk is a unit of working memory space. Chunking helps people manage the severe limits.
Time: how long information may remain. Unless information is rehearsed periodically, it will probably not be remembered.
Confusability and similarity.
Availability and type of attention.

Implications of working memory for design:
Minimize working memory load.
Provide visual echoes: wherever an auditory presentation is used to convey messages, it should be coupled with a redundant visual display of the information to minimize the burden on working memory.
Provide placeholders for sequential tasks: tasks that require multiple steps, whose actions may be similar in appearance or feedback, benefit from some visual reminder of what steps have been completed, so that the momentarily distracted operator will not return to the task having forgotten what was done and needing to start from scratch.
Exploit chunking. We have seen how chunking can increase the amount of material held in working memory and increase its transfer to long-term memory. Thus, any way we can take advantage of chunking is beneficial, including:
Physical chunk size. For presenting arbitrary strings of letters, numbers, or both, the optimal chunk size is three to four numbers or letters per chunk [264, 262].
Create meaningful sequences.
The best procedure for creating cognitive chunks out of random strings is to find or create meaningful sequences within the total string of characters. A meaningful sequence should already have an integral representation in long-term memory. This means that the sequence is retained as a single item rather than as a set of individual characters. Meaningful sequences include things such as 555, 4321, or a friend's initials.
Superiority of letters over numbers. Letters support better chunking than numbers because of their greater potential for meaningfulness. Advertisers have capitalized on this principle by moving from numbers such as 1-800-663-5900, which has eight chunks, to letter-based chunking such as 1-800-GET HELP, which has three chunks ("1-800" is a sufficiently familiar string that it is just one chunk). Grouping letters into one word, and thus one chunk, can ease working memory demands.
Keep numbers separate from letters. If displays must contain a mixture of numbers and letters, it is better to keep them separated. For example, a license plate containing one numeric and one alphabetic chunk, such as 458 GST, will be more easily kept in working memory than a combination such as 4G5 8ST. (A toy chunking sketch follows below.)
Minimize confusability.
Avoid unnecessary zeros in codes to be remembered.
Ensure congruence of instructions. Congruence reduces working memory load by aligning the order of words and actions.
Avoid the negative.
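As a toy illustration of the chunking guidance above, the sketch below splits an arbitrary code into letter runs and digit runs and then into chunks of at most four characters. The grouping rule is a simplification invented for this example, not a model of human chunking.

    # Toy chunking: keep letters separate from digits, then split into
    # chunks of at most four (a simplified, invented rule).
    import re

    def chunk(code, size=4):
        runs = re.findall(r"[A-Za-z]+|[0-9]+", code)  # letter runs and digit runs
        return [run[i:i + size] for run in runs for i in range(0, len(run), size)]

    print(chunk("458GST"))  # ['458', 'GST'] -> two clean chunks
    print(chunk("4G58ST"))  # ['4', 'G', '58', 'ST'] -> more, confusable chunks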
- Semantic or declarative memory = the part of long-term memory that involves memory for general knowledge.
- Episodic memory = the part of long-term memory that is specific to events.
- Procedural memory = the part of long-term memory for how to do things.

Material in long-term memory has two important features that determine the ease of later retrieval: its strength and its associations. Working memory and long-term memory interact: ease of retrieval depends on the richness and number of associations that can be made with other items.
Forgetting. The decay of item strength and association strength takes the form of an exponential curve, where people experience a very rapid decline in memory within the first few days. This is why evaluating the effects of training immediately after an instructional unit is finished does not accurately indicate the degree of one's eventual memory. Even when material is rehearsed to avoid forgetting, if there are many associations that must be acquired within a short period of time (massed practice), they can interfere with each other or become confused, particularly if the associations pertain to similar material.
Memory retrieval often fails because of (1) weak strength due to low frequency or recency, (2) weak or few associations with other information, and (3) interfering associations.
To increase the likelihood that information will be remembered at a later time, it should be processed in working memory frequently and in conjunction with other information in a meaningful way.
Once established, habits occur effortlessly and actually require cognitive resources and effort to avoid. This makes habits hard to break.

Habit Design. Although we are familiar with designing software and physical objects, habits can also be designed. The following are important aspects of a habit specification:
Trigger: a cue that indicates the routine should start. Time of day (e.g., noon), location (e.g., at desk), people (e.g., with my office mate), or task sequence (e.g., immediately after the daily conference call).
Routine: a concrete, self-contained activity (e.g., go for a run with an office mate).
Feedback: information that indicates the completion of the activity (e.g., meet the exercise goal indicated on a smartwatch).
Reward: an occasional positive outcome that is intrinsically enjoyable (e.g., periodically go for a coffee after the run).
Repetition: repeat in a consistent manner for 70 or more days.

Organization of information in long-term memory: semantic networks, schemas and scripts, mental models, and cognitive maps.
1. Semantic networks: sections contain related pieces of information.
2. Schemas: knowledge structures about a particular topic. Scripts: schemas that describe a typical sequence of activities.
3. Mental models: include our understanding of system components, and generally generate a set of expectancies about how the equipment or system will behave.
4. Cognitive maps: mental representations of spatial information.

Implications of long-term memory for design:
1. Encourage regular use of information to increase frequency and recency.
2. Encourage active reproduction or verbalization of information that is to be recalled.
3. Standardize the environment and equipment, including controls, displays, symbols, and operating shift patterns.
4. Use memory aids.
5. Design information to be remembered: meaningful, concrete, well-organized, able to be guessed.
6. Design helpful habits.
7. Support correct mental models.

Lowering the mental workload of a component task avails more spare capacity for other tasks, and hence a reduced dual-task decrement of one or both, thereby increasing multi-task efficiency. Visual and auditory channels use separate resources, both in the senses and in the brain itself.

Forgetting: the decay of item strength and association strength takes the form of an exponential curve.

Multiple resource theory establishes that tasks using different resources interfere less with each other than tasks using the same resources. Given that two tasks compete for common resources, which task suffers the greater decrement is determined by the resource allocation strategy or policy that guides a person's decision about which tasks to invest in and which to sacrifice. We noted that the similarity between items in working memory leads to confusion. Task switching is the extreme dual-task attention allocation policy.

Voluntary task switching: when performing two tasks concurrently is impossible and we must choose, task switching is the discrete, all-or-none analog of an allocation policy. Factors that drive the choice:
1. Salience: a salient task is one that calls attention to its presence.
2. Priority: a high-priority task is one that, if not done or not done on time, imposes considerable costs.
3. Interest or "engagement" is self-evident, but it is interest in a lower-priority task that is sometimes allowed to dominate attending to the higher-priority task.
4. Difficulty, or the mental workload of a task. Data suggest that in times of high workload (overload above the redline), when people choose, they tend to choose easier rather than more difficult tasks.
5. Time on task. The changing attractiveness of staying with a task the longer it has been performed without a break seems to depend on a number of factors, such as whether the task is boring or highly fatiguing.
Mechanisms of Selective Attention. The selection of channels to attend (and the filtering of channels to ignore) is typically driven by four factors:
Salience: features that capture attention, but sometimes do not (inattentional blindness).
Effort: selective attention depends on effort, for example, when driving a car and not looking at the blind spot or mirror.
Expectancy and value: together these define what are characteristically called top-down or knowledge-driven factors in allocating attention.

Cognitive tunneling occurs when one task grabs a user's attention for far longer than others.
Archetype: prototypical (the usual) objects stored in memory. Adhering to people's prior knowledge will meet their expectations (e.g., electronic devices that look like furniture).

CH7-Macro-cognition

- Macrocognition = high-level processes that help people negotiate complex situations that are characterized by ambiguous goals, interactions over time, coordination with multiple people, and imperfect feedback.
Five elements of macrocognition: (1) planning, (2) decision making, (3) situation awareness, (4) problem solving, and (5) metacognition.
Metacognition = thinking about one's own thinking.

Parallel approaches describe expertise and experience in decision making:
System 2 = serves a deliberative function that involves resource-intensive, effortful processes; more analytic.
System 1 = engages relatively automatic, "gut-feel" snap judgements. System 1 is guided by what is easy, effort-free, and feels good or bad; that is, the emotional component of decision making. System 1 does not necessarily represent greater expertise than engaging System 2.
The two systems operate in parallel: System 1 offers a snap decision of what to do, and then System 2, if time and cognitive resources are available, oversees and checks the result of System 1. System 1 also aids System 2 by focusing attention and filtering options; without it we would struggle to make a decision.
Designs that enable skill-based behavior are "intuitive."

Signals and skill-based behavior. People who are extremely experienced with a task tend to process the input at the skill-based level, reacting to the perceptual elements at an automatic, subconscious level. They do not have to interpret and integrate the cues or think of possible actions, but only respond to cues as signals that guide responses. Because the behavior is automatic, the demand on the attentional resources described in Chapter 6 is minimal. For example, an operator might turn a valve in a continuous manner to counteract changes in flow shown on a meter (see bottom left of Figure 7.3).

Signs and rule-based behavior. When people are familiar with the task but do not have extensive experience, they process input and perform at the rule-based level. The input is recognized in relation to typical system states, termed signs, which trigger rules drawn from accumulated knowledge. This accumulated knowledge can be in the person's head or written down in formal procedures. Following a recipe to bake bread is an example of rule-based behavior. The rules are "if-then" associations between cue sets and the appropriate actions. For example, Figure 7.3 shows how the operator might interpret the meter reading as a sign: given that the procedure is to reduce the flow if the meter is above a set point, the operator then reduces the flow. (A minimal rule sketch follows below.)
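The if-then character of rule-based behavior maps directly onto code. Here is a minimal sketch of the meter example from the text; the set point and meter values are invented for illustration.

    # Rule-based behavior as an if-then association (the rule follows the
    # meter example in the text; the numbers are invented).
    SET_POINT = 50.0

    def rule_based_action(meter_reading):
        """'If the meter is above the set point, then reduce the flow.'"""
        if meter_reading > SET_POINT:
            return "reduce flow"
        return "no action"

    print(rule_based_action(63.2))  # -> reduce flow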
Symbols and knowledge-based behavior. When the situation is new, people do not have any rules stored from previous experience to call upon and do not have a written procedure to follow. They have to operate at the knowledge-based level, which is essentially analytical processing using conceptual information. After the person assigns meaning to the cues and integrates them to identify what is happening, he or she processes the cues as symbols that relate to the goals and decides on an action plan. Figure 7.3 shows how the operator might reason about the low meter reading and think about what might be the reason for the low flow, such as a leak.
It is important to note that the same sensory input, for example the meter in Figure 7.3, can be interpreted as a signal, sign, or symbol.

By definition, decision making involves risk: there are consequences to picking the wrong alternative, and a good decision maker effectively assesses the risks associated with each alternative.
What is decision making? (a) A person must select one option from several alternatives. (b) A person must interpret information for the alternatives. (c) The timeframe is relatively long (> 1 sec). (d) The choice includes uncertainty; it is not clear which alternative is best.

Decision making can generally be represented by four stages, as depicted in Figure 7.4: (1) acquiring and integrating information relevant to the decision, (2) interpreting and assessing the meaning of this information, (3) planning and choosing the best course of action after considering the costs and values of different outcomes, and (4) monitoring and correcting the chosen course of action. People typically cycle through the four stages in a single decision.

1. Acquire and integrate a number of cues, or pieces of information, which are received from the environment and go into working memory. For example, an engineer trying to identify the problem in a manufacturing process might receive a number of cues, including unusual vibrations, particularly rapid tool wear, and strange noises. The cues must be selectively attended, interpreted, and somehow integrated with respect to one another. The cues may also be incomplete, fuzzy, or erroneous; that is, they may be associated with some amount of uncertainty.
2. Interpret and assess cues and then use this interpretation to generate one or more situation assessments, diagnoses, or inferences as to what the cues mean. This is accomplished by retrieving information from long-term memory. For example, an engineer might hypothesize that the set of cues described previously is caused by a worn bearing. Situation assessment is supported by maintaining good situation awareness (SA), a topic we discuss later in the chapter. The difference is that maintaining SA refers to a continuous process, while making a situation assessment involves a one-time, discrete action with the goal of supporting a particular decision.
3. Plan and choose one of the alternative actions generated by retrieving possibilities from long-term memory. Depending on the time available, one or more of the alternatives are generated and considered. To choose an action, the decision maker might evaluate information such as the possible outcomes of each action (where there may be multiple possible outcomes for each action), the likelihood of each outcome, and the negative and positive factors associated with each outcome.
This can be formally done in the context of a decision matrix, in which actions are crossed against the diagnosed possible states of the world that could occur, and which could have different consequences depending on the action selected.
4. Monitor and correct the effects of decisions. The monitoring process is a particularly critical part of decision making and can serve two general purposes. First, one can revise the current decision as needed. For example, if the outcomes of a decision to prescribe a particular treatment are not as expected, as was the case when Amy's patient was getting worse rather than better, then the treatment can be adjusted, halted, or changed. Second, one can revise the general decision process if that process is found wanting and ineffective, as Amy also did. For example, if heuristics are producing errors, one can learn to abandon them in a particular situation and instead adopt the more analytical approach shown to the left of Figure 7.2. In this way, monitoring serves as an input to the troubleshooting element of macrocognition.

Humans are effort conserving. Normative decision making considers the four stages of decision making in terms of an idealized situation in which the correct decision can be made by calculating the mathematically optimal choice.
Multiattribute utility theory: overall value of a decision option = sum of (magnitude of each attribute) x (utility of each attribute).
Expected value theory: overall value of a choice = sum of (worth of each outcome) x (probability of each outcome).
Statistical decision theory: a hypothesis test is a choice between actions, given states of the world that have specific outcomes. (A worked sketch of the first two follows below.)
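A worked version of the first two calculations, with attributes, weights, outcomes, and probabilities invented for the example:

    # Multiattribute utility: overall value = sum(magnitude x utility weight).
    # All numbers below are invented for illustration.
    attributes = {"price": (7, 0.4), "comfort": (5, 0.6)}  # (magnitude, utility weight)
    overall_value = sum(mag * util for mag, util in attributes.values())
    print("multiattribute utility:", overall_value)  # 7*0.4 + 5*0.6 = 5.8

    # Expected value: overall value = sum(worth x probability) over outcomes.
    outcomes = [(100, 0.2), (-10, 0.8)]  # (worth, probability)
    expected_value = sum(worth * p for worth, p in outcomes)
    print("expected value:", expected_value)  # 100*0.2 - 10*0.8 = 12.0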
- Descriptive decision making accounts for how people actually make decisions. People can depart from the optimal, normative, expected utility model. First, people tend to satisfice rather than maximize. Second, people often shortcut the time- and effort-consuming steps of the normative approach. Third, these shortcuts sometimes result in errors and poor decisions. Each of these represents an increasingly large departure from normative decision making. Real-life decision making is complex in ways that normative decision calculations cannot address.

People tend to make decisions in one of three ways: intuitive skill-based processing, heuristic rule-based processing, and analytical knowledge-based processing. People with a high degree of expertise often approach decision making in a fairly automatic, pattern-matching style.

Cognitive heuristics are rules of thumb that offer easy ways of making decisions. Heuristics are usually very powerful and efficient, but they do not always guarantee the best solution [354, 379]. Unfortunately, because they represent simplifications, heuristics occasionally lead to systematic flaws and errors. These systematic flaws represent deviations from the normative model and are sometimes referred to as biases. Experts tend to avoid these biases because they draw from a large set of experiences and are vigilant to small changes in the pattern of cues that might suggest a heuristic is inappropriate. To the extent a situation departs from those experiences, even experts will fall prey to the biases associated with various heuristics. Although the list of heuristics is large (as many as 37), the following presents some of the most notorious ones.

Acquire and Integrate Cues: Heuristics and Biases
1. Attention to a limited number of cues
2. Anchoring and cue primacy
3. Cue salience
4. Overweighting of unreliable cues

Interpret and Assess: Heuristics and Biases
1. Availability
2. Representativeness
3. Overconfidence
4. Cognitive tunneling
5. Simplicity seeking and choice aversion
6. Confirmation bias

Plan and Choose: Heuristics and Biases
1. Planning bias
2. Retrieve a small number of actions
3. Availability of actions
4. Availability of possible outcomes
5. Hindsight bias
6. Framing bias
7. Default heuristic

Dunning-Kruger effect: incompetent people think they are much better than they actually are, while competent people underestimate their performance.

Design solutions for biases:
- Task redesign: change the system itself
- Choice architecture: the structure of the interaction influences choice
  o Limit the number of options, select useful defaults, make choices concrete, create linear, comparable relationships, sequence and partition choices
- Proceduralization: procedures and checklists make decisions more consistent and accurate
- Training decision making: can lead to relatively rapid and accurate diagnosis
- Displays: can guide selective attention
- Automation and decision support tools

Situation awareness = characterizes people's awareness and understanding of dynamic changes in their environment. Combines perception (selective attention), understanding, and prediction.
Distributed situation awareness = SA that the members of a team jointly hold.

Principles for improving situation awareness:
1. Create displays that help people notice changes.
2. Make the situation easy to understand.
3. Keep the operator somewhat "in the loop".
4. Help people project the state of the system into the future.
5. Organize information around goals.
6. Display to broaden attention.
7. Train for Situation Awareness (SA).

Troubleshoot (understand) before problem solving (implementing a solution).
The systematic errors associated with troubleshooting suggest several design principles:
1. Present alternate hypotheses.
2. Create displays that can act as an external mental model.
3. Create systems that encourage alternate hypotheses.

Informed problem solving and troubleshooting often involve careful planning and scheduling of future tests and activities.
Principles for improving planning and scheduling:
- Create contingency plans and plan to re-plan.
- Create predictive displays.

Metacognition influences the decision-making process by guiding how people adapt to the particular decision situation. Metacognition: thinking about one's own thinking and cognitive processes.
The most critical elements of metacognition for macrocognition:
1. Knowing what you don't know.
2. The decision to "purchase" further information.
3. Calibrating confidence in what you know (a small calibration sketch follows below).
4. Choosing the decision strategy adaptively.
5. Processing feedback to improve the toolkit.

Principles for improving metacognition:
1. Ease information retrieval.
2. Highlight benefits and minimize effort of engaging decision aids.
3. Manage cognitive depletion.
4. Training metacognition.
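As a concrete illustration of calibrating confidence (item 3 in the list of metacognitive elements), one common way to quantify calibration is to compare stated confidence with actual accuracy. The judgments and numbers below are made-up, not from the text:

```python
# Minimal sketch of confidence calibration (illustrative numbers).
# A well-calibrated decision maker who reports 80% confidence is
# correct about 80% of the time; the gap indicates over/underconfidence.
confidence = [0.9, 0.8, 0.6, 0.95, 0.7]   # stated probability of being correct
correct    = [1,   0,   1,   1,    0]     # whether each judgment was right

mean_conf = sum(confidence) / len(confidence)
accuracy = sum(correct) / len(correct)
print(f"mean confidence: {mean_conf:.2f}, accuracy: {accuracy:.2f}")
print(f"overconfidence (positive = overconfident): {mean_conf - accuracy:+.2f}")

# Brier score: mean squared gap between confidence and outcome (lower is better).
brier = sum((c - o) ** 2 for c, o in zip(confidence, correct)) / len(correct)
print(f"Brier score: {brier:.3f}")
```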
CH8 - Displays

Gulf of evaluation = the difference between the actual state of the system and people's understanding of the system relative to their goals. Effective displays must include the information from the system relevant to the intended tasks of people, and represent this information in a manner that is compatible with the perceptual and cognitive properties of people.

Displays = artifacts designed to guide attention to relevant system information, and then support its perception and interpretation. They show information that describes the state of a system or an action requested of the person. People perceive this display information through top-down processing guided by their mental model, and through bottom-up processing driven by the displayed information.

Displays are classified along three dimensions: their physical features, the tasks they support, and the properties of people that dictate the best match of display and task.

15 principles of display design, in 4 categories: (1) those that relate to attention, (2) those that directly reflect perceptual operations, (3) those that relate to memory, (4) those that can be traced to the concept of the mental model.

1. Principles based on attention
- Salience compatibility: important and urgent information should attract attention. Highly salient indicators should be used for highly important information.
- Minimize information access cost: it costs time and effort to "move" selective attention from one display location to another to access information.
- Proximity compatibility: when two or more sources of information are related to the same task and must be mentally integrated to complete the task, these information sources are defined to have close mental proximity. If mental proximity is high (information must be integrated), then display proximity should also be high: close in space, color, format, linkage, or configuration. If mental proximity is low (elements require focused attention), then display proximity can, and sometimes should, be lower.
- Avoid resource competition: multiple resources describe information processing demands, and sometimes processing a lot of information can be facilitated by dividing that information across resources. Presenting some information visually and some auditorily can be less demanding than presenting it all visually or all auditorily.

2. Perceptual principles
- Make displays legible (or audible): legibility is critical to the design of good displays, although it is not sufficient for creating usable displays; additional perceptual principles must also be applied.
- Avoid absolute judgment limits: do not require the operator to judge the level of a represented variable based on a single sensory variable that contains more than five to seven possible levels.
- Support top-down processing: people perceive and interpret signals according to what they expect to perceive based on their past experience.
- Exploit redundancy gain: it can be more efficient if the same message is expressed in multiple ways, such as the color and position of traffic lights. Use redundancy to avoid confusion by representing the same information through different channels.
- Make discriminable: similarity causes confusion; similar-appearing signals are likely to be confused.

3. Memory principles
- Knowledge in the world: replace memory with visual information.
- Support visual momentum: displays that include multiple separated elements, such as windows or pages, require people to remember information from one display so they can orient to another.
- Provide predictive aiding: when our mental resources are consumed with other tasks, prediction falls apart and we become reactive, responding to what has already happened, rather than proactive, responding in anticipation of the future.
Because proactive behavior is usually more effective than reactive behavior, displays that can explicitly predict what will happen are generally quite effective in supporting human performance. A predictive display replaces a resource-demanding cognitive task with a simpler perceptual one.
- Be consistent: when our long-term memory works too well, it may continue to trigger actions that are no longer appropriate; this is an automatic human tendency. Old habits die hard.

4. Mental model principles
- Principle of pictorial realism: a display should look like the variable that it represents.
- Principle of the moving part: the moving element(s) of any display of dynamic information should move in a spatial pattern and direction that is compatible with the user's mental model of how the represented element actually moves in the physical system.

Alerts
Warnings are the most critical category of alerts. If it is critical to alert the operator to a particular condition, then the omnidirectional auditory channel is best. Conventionally, system designers have classified three levels of alerts (warnings, cautions, and advisories), which can be defined in terms of the severity of consequences of failing to heed their indication. Warnings, the most critical category, should be signaled by salient auditory alerts; cautions may be signaled by auditory alerts that are less salient (e.g., softer voice signals); advisories need not be auditory at all, but can be purely visual.

Labels and icons
Their purpose is to unambiguously signal the identity or function of an entity, such as a control, display, piece of equipment, entry on a form, or other system component; that is, they present knowledge in the world (M10) of what something is. Pair labels with icons to ensure people interpret the icon correctly. The following principles are important when designing labels: visibility and legibility (P5), discriminability (P9), meaningfulness, and location.

Earcons = synthetic sounds that have a direct, meaningful association with the thing they represent. In choosing between icons and earcons, it is important to remember that earcons are most compatible for indicating actions or events that play out over time, whereas icons are better for labeling states or variables.

Displays for monitoring are those that support the viewing of potentially changing quantities. Four important guidelines can guide the design of monitoring displays: legibility (P5), analog versus digital, analog form and direction, and prediction and sluggishness.

One disadvantage of a linear moving-pointer display is that it cannot present a wide range of scale values in a small physical space. Possible solutions:
1. The moving-scale display, which can present a wide range of numbers with precision. If the variable does not change rapidly, then the principle of the moving part has less relevance.
2. Circular moving-pointer displays, which take less space. While circular displays are less consistent with the principle of pictorial realism, they are consistent with the stereotype of increase-clockwise.
3. A hybrid scale in which high-frequency changes of the displayed variable drive a moving pointer against a stable scale, while sustained low-frequency changes gradually shift the scale quantities to the new range of values as needed.

Users expect (moving part, MM15) that an upward/clockwise movement of the control will be required to increase the displayed quantity.

Prediction and sluggishness: many monitored variables in high-inertia systems are sluggish in that they change relatively slowly. As a consequence of the dynamic properties of the system they represent, this slow change means that their future state can be known with some degree of certainty. Just as displays for sluggish systems might need to be quickened, signals with a lot of high-frequency noise might need to be smoothed (a small sketch of both follows).
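As a rough illustration of these two ideas, the sketch below smooths a noisy signal with an exponential filter and "quickens" a sluggish one by displaying the current value plus a term proportional to its rate of change. The signal, filter constant, and quickening gain are illustrative assumptions, not values from the text:

```python
# Minimal sketch of smoothing and quickening for monitoring displays.
import random

random.seed(1)
raw = [10 + 0.5 * t + random.gauss(0, 2) for t in range(20)]  # noisy, slowly rising

# Smoothing: an exponential filter removes high-frequency noise before display.
alpha = 0.3                       # 0..1; smaller = smoother but more lag
smoothed = [raw[0]]
for x in raw[1:]:
    smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])

# Quickening: show value + k * (rate of change), so the trend of a sluggish
# variable is visible before the raw value itself has moved far.
k = 3.0                           # prediction gain (illustrative)
quickened = [smoothed[0]] + [
    s + k * (s - prev) for prev, s in zip(smoothed, smoothed[1:])
]

for t in range(0, 20, 5):
    print(f"t={t:2d} raw={raw[t]:5.1f} smoothed={smoothed[t]:5.1f} "
          f"quickened={quickened[t]:5.1f}")
```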
Display layout
The Primary Visual Area (PVA) defines the reference point for many display layout guidelines.
1. Frequency of use dictates that frequently used displays should be closer to the PVA.
2. Importance of use dictates that important information, even if it may not be frequently used, be displayed so that attention will be captured when it is presented.
3. Display relatedness or sequence of use dictates that related displays, and pairs that are often used in sequence, should be closer together.
4. Consistency is related to both memory and attention. If displays are always laid out consistently, with the same item positioned in the same spatial location, then our memory of where things are serves us well, and memory can easily and automatically guide selective attention to find the items we need.
5. Phase-related displays are needed because the guidelines of consistency conflict with those of frequency of use and relatedness. Phase-related operations are situations where the variables that are frequently used during one phase of operation are very different from those used during another phase.
6. Organizational grouping is an organized, "clustered" display, which provides an aid that guides visual attention to particular groups, as long as all displays within a group are functionally related and their relatedness is clearly indicated to the user. If these guidelines are not followed and unrelated items are placed in a common spatial cluster, it can undermine performance because it violates the principle of proximity compatibility.
7. Control-display compatibility dictates that displays should be close to their associated controls.
8. Cluster avoidance dictates that there should ideally be a minimal visual angle between all pairs of related displays, and much greater separation between unrelated displays.

- Head-up display (HUD) = displays near-domain information superimposed on the view of the far domain, typically used in vehicles. Advantage: it assumes that the driver should spend most of the time with the eyes directed outward at the far domain.
- Head-mounted display (HMD) = the display is rigidly mounted to the head so that it can be viewed no matter which way the head and body are oriented.
- A navigational display should serve four fundamentally different classes of tasks:
1. Provide guidance about how to get to a destination,
2. Facilitate planning,
3. Help recovery if the traveler becomes lost,
4. Maintain situation awareness regarding the location of a broad range of objects.
- Creating useful map displays requires consideration of the full range of tasks they support, not just guidance.
- Route list / command display = provides the traveler with a series of commands to reach a desired location; the simplest form of navigational display.

Maps need to be:
- Legible to be useful.
- Uncluttered: clutter and overlay slow down the time to access information and slow the reading of items as a consequence of the disruption of attention.
- Position representation: show the user where they are.
- Map orientation.
- Scale: needs to be adjustable for the user.
- Three-dimensional maps: not always necessary, and they do not give a precise judgment of distance.

- Configural display = combines two variables to create a third; e.g., distance = speed x time. The emergent variable is the area formed by plotting speed and time. Multiple displays of single variables can be arranged in space and format so that certain properties relevant to the monitoring task emerge from the combination of values on the individual variables.

Lottridge 2011
Emotions have been described in multiple ways:
- Basic types: anger, happiness, sadness; related to facial expressions (Ekman, 1992).
- Dimensions: valence and arousal. Emotional dimensions have been related to physical reactions and conscious feelings.
- Processing level: associated with different regions of the brain.
  o Reactions refers to low-level processes: affective signals and motor reactions.
  o Routines refers to mid-level processing: memory to produce affect, emotion, and arousal.
  o Reflection refers to high-level processing: deliberation.
- Temporal dimension: affective states can be classified by their time course into:
  o Short-term emotions: occur over shorter periods.
  o Medium-term moods: may be longer lasting.
  o Long-term temperament: a more permanent characteristic of a person.

Emotion is intertwined with cognitive reasoning, even though much of the time it is unconscious.
- Cocktail party phenomenon = a person hears an emotionally significant word (e.g., their name) in an unattended stream during dichotic listening.

Utility functions describe how humans and organisms respond to "units" of affective input. An important consequence of individual differences is that different people will not likely use emotional rating scales in the same way.

Emotional intelligence (Mayer and Salovey, 1997) is grouped into four components: (a) accurately perceiving emotions in the self, others, and artifacts; (b) using emotions to communicate and to facilitate thought; (c) understanding emotions; (d) managing emotions in oneself and others. Emotional intelligence can be used as a covariate in measuring responses to tasks and task performance.

Competition has an impact on emotional experience; for instance, emotional reactions were stronger when participants played with a friend rather than with a computer.

- Include affective interaction as a criterion in user testing.
- Take the peak-end experience into account (see the sketch after this list).
- Enhance performance through emotional input and regulation.
- Visualize emotions for decision support.
- Foster appropriate emotion for different learning goals.
- Include preconscious emotions in situation awareness.
- Select appropriate emotion measurement techniques.
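A small illustration of the peak-end idea referenced above: overall retrospective judgments track the most intense moment and the final moment of an episode more than its average. The ratings below are made-up numbers, not data from the text:

```python
# Minimal sketch of the peak-end rule (illustrative discomfort ratings, 0-10).
episode_a = [2, 4, 8, 7]           # ends right after an intense stretch
episode_b = [2, 4, 8, 7, 5, 3, 2]  # same start, plus extra milder moments

def peak_end(ratings):
    """Retrospective judgment approximated as the mean of peak and end."""
    return (max(ratings) + ratings[-1]) / 2

def mean(ratings):
    return sum(ratings) / len(ratings)

for name, ep in [("A", episode_a), ("B", episode_b)]:
    print(f"episode {name}: mean={mean(ep):.2f}  peak-end={peak_end(ep):.2f}")
# Episode B contains everything in A plus additional discomfort, yet gets a
# lower peak-end score: the counterintuitive pattern the rule predicts.
```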
Affective interaction: any interaction that is colored by an emotional experience. The term affect denotes all types of affective experience.

Different ways to describe emotions: processing level and temporal level.
Different conceptualizations of emotions: discrete (basic) emotions, and dimensional models of emotions.

Processing level
- High road and low road

Temporal level
- Emotions: relatively short duration
- Moods: last hours or days
- Temperament: stable characteristic of a person
- An affective state at a certain point in time is likely the compound of a person's temperament, current mood, and current emotion

Basic emotions
There are basic emotions we all understand, but there is still no consensus on what these basic emotions are. The basic emotions go together with facial expressions.
- Paul Ekman: anger, disgust, fear, happiness, sadness, and surprise
- Robert Plutchik: four pairs of polar opposites (joy-sadness, anger-fear, trust-distrust, surprise-anticipation)

Dimensional models
- Affective states are composed of multiple dimensions
- Two dimensions: valence (unpleasant > pleasant) and arousal (deactivated > activated)
- A third dimension: dominance (feeling controlled > being in control)

Emotions & cognition
- Early views (Greeks > 1950s): emotions as separate from cognition
- Emotion influences cognition, and cognition regulates emotion through centers in the prefrontal cortex ("co-pilots")

Emotions & decision making
- Appraisal theory: emotions are extracted from our evaluations of events
- Criticism > too cognitively focused
- Counter > there are low levels of automatic processing that are hard-wired
- Emotions can be an input into decision making without the person being aware of the rationale for the decision
- Skin conductance can be a marker of pre-decision states
  o Stimuli > physiological arousal > skin becomes a better conductor of electricity

Emotions & attention
- Cocktail party phenomenon: a person hears an emotionally significant word (e.g., their name) in an unattended stream during dichotic listening
- Attentional blink (AB): the phenomenon whereby the second of two targets is not detected or identified because it appears close in time to the first
- Emotional attentional blink (EAB): the phenomenon whereby an irrelevant, emotionally laden image captures so much attention that one cannot detect target stimuli presented just after it

Emotions & working memory
- Affect also has an impact on working memory
- Amusement enhances verbal working memory and has a negative effect on spatial working memory
  o Fear has the opposite effect
- Negative emotional states prepare the organism for fight-or-flight behavior

Mood & information processing
- Negative moods > attend more carefully to the environment
- Positive moods > signal a safe state and motivate conserving cognitive resources
- Emotion regulates many information-processing activities
  o Arousal and attentional capacity have an inverted-U-shaped relationship

Cognitive regulation of emotions
- Affect appears to have a stronger influence in the presence of uncertainty
- Emotion is likely to have the strongest impact on cognition precisely in those situations in which clear thinking is most required (e.g., a plane crash)
- Solutions:
  o Select people who perform well under stress
  o Train people so thoroughly that their momentarily diminished cognition does not matter

Utility & emotions
- Positivity offset: perceiving surroundings as positive whenever a clear threat is not present
- Negativity bias: increased reaction per unit of negative information
  o People react more strongly to negative stimuli
- Emotional utility can be judged relative to other objects in the environment as well as to personal goals
- Peak-end rule: the peak and the end of an affective episode have the strongest effect on overall judgments of intensity

Individual differences
- Mediate the choice of evaluation method, as people have differential access to their felt emotion
- Emotional bandwidth: quantifies the number of rating points used when rating emotions
- Emotional majority agreement: the amount of agreement with other participants' affective self-reports within a sample
- Emotional intelligence: a set of abilities grouped into four components (perceiving, communicating, understanding, managing)

Cultural differences
- Independent cultures: focus on the self and one's individual characteristics
- Interdependent cultures: focus on the group and harmony within the group
- Feedback messages: attaining success motivates participants from independent cultures, avoiding failure motivates participants from interdependent cultures
- Some languages express emotional concepts through terms and expressions that cannot be translated
  o Designers shouldn't overgeneralize about how their interface may be interpreted across cultures (e.g., robots in Japan vs. the West)

Assessment of emotion
- Experiential-subjective: how one feels - self-report questionnaires
- Behavioral/expressive: how one behaves - behavioral measures
- Peripheral/physiological: how one's nervous system reacts - physiological measures

Self-reports
- Positive and Negative Affect Schedule (PANAS)
- Self-Assessment Manikin (SAM)
- Affective grid
- Easy to collect, but due to the peak-end rule they might miss temporal changes

Continuous self-reports
- Can be used to assess the moment-to-moment subjective experience of the affective interaction
  o Digital tools: warmth monitor (joystick)
  o Sliders: positive-negative slider scale
- Good for measuring valence; not good for measuring arousal or for immersive games, and they intrude on cognitive processing

Physiological measures
- GSR
  o Continuous event-related measurement; detects only intense emotional activation
- EMG

Mental workload
- Relates to arousal
- Should be assessed continuously without interrupting the task being performed
- A decrease in HRV generally indicates an increase in mental load

Emotions & affect in design
- Emotions are increasingly seen as central to user experience (Norman, 2003)
- (Picard, 1997) computers should be able to sense, process, and express basic emotions

Affect & product qualities
- (Jordan) Safety and functionality > usability > pleasure
  o Physio-pleasure, socio-pleasure, ideo-pleasure, psycho-pleasure
- (Demirbilek and Sener, 2003) Multisensory stimulation, fun, cuteness, familiarity

Affect & games
- Different users respond differently to games
- Should also be tested with non-normative participants
- Competition has an impact on emotional experience

Affect & healthcare
- Early approaches: motivate health behavior change with negative messages
- Research: optimism is an important mediator in behavior change

Social robots
- Robots capable of engaging in meaningful social interactions with people
- Anthropomorphism: the attribution of human traits, emotions, or intentions to non-human entities
  o Babies stare at face-like images longer
- Media equation: effects that occur between two people in social psychology can also occur when a person interacts with some form of media or technology
- Human likeness tends to increase familiarity, but a robot's appearance should match its capabilities
Guidelines
- Include affective interaction as a criterion in user testing
- Take the peak-end experience into account
- Enhance performance through emotional input and regulation
- Visualize emotions for decision support
- Foster appropriate emotions for different learning goals
- Include preconscious emotions in situation awareness
- Select appropriate emotion measurement techniques

Chapter 9

Information theory describes the complexity of the response options and the control task. Careful selection of controls can reduce complexity and produce faster and more accurate responses. The complexity of the controller should match the complexity of the controlled task.

Across designs, response time and error rate tend to be positively correlated: good designs tend to increase both speed and accuracy, rather than trading one against the other.

Attention principles:
1. Proximity compatibility: controls should be placed near other controls whose activation needs to be mentally integrated as part of a sequence of control inputs, as with the buttons on a computer.
2. Avoid resource competition: multiple resource theory predicts a benefit of dividing tasks across different mental resources, e.g., voice control to keep your hands free for other things.

Perception principles:
1. Make accessible: designs that support blind operation benefit those with visual impairments, but also everyone who might need to use a control while looking elsewhere.
2. Make discriminable: identifying a particular control from an array of controls requires that they are discriminable.
3. Exploit redundancy gain: it can be more effective if the same message is expressed in multiple ways, making controls easier to identify and discriminate, e.g., shape and location for beer taps.
4. Avoid absolute judgment limits: the variation and precision required for control should match the variation and precision of the controller.

Memory principles:
1. Knowledge in the world: visual displays relieve the burden on memory by placing knowledge in the world rather than forcing people to keep it in their heads. The same applies to controls. The actuation of the control should be reflected in the control itself, such that the position of a toggle switch indicates its state, or the illumination of a button indicates that a system is on. Buttons and levers that return to a set position after people actuate them provide no indication of system status, which forces people to keep that information in working memory if it is not displayed elsewhere. Forcing people to rely on knowledge in the head rather than knowledge in the world increases the chance that they will forget what action they performed, and hence the state of the system they are controlling. In contrast with physical input devices, software-based controls, such as voice and gesture-based interactions, have few indications of the control opportunities, system state, or control actuation. Some gestures are intuitive and easily learned, such as selecting by touching, pinching to expand, and swiping to reject or accept. Such gestures naturally fit a touch screen, but others are less easily discovered.
Gestures and voice lack the codes in Table 9.2 and so place a premium on defining intuitive conventions and providing clear feedback to guide people towards successful control, an issue we return to later in this chapter and in Chapter 10.
2. Be consistent: similar to the principle for visual displays, consistency makes it possible for people to apply skills from one situation to another, reducing errors and response time. Table 9.2 shows that each of the features of control devices can contribute to consistency and standardization. This standardization should be considered for functions within a system, as well as across systems. Our unfortunate driver probably encountered inconsistency in the location of the light control between cars.

Mental model principles:
9. Location compatibility: the control location should be close to (and in fact closest to) the entity being controlled or the display of that entity. Similar to labels for displays, labels that are separated from controls can confuse people as they try to link labels to controls.
10. Movement compatibility: the direction of movement of a control should be congruent with the direction both of movement of the displayed indicator and of the system movement itself.

Response selection principles:
1. Avoid accidental activation.
2. Hick-Hyman Law: the speed with which an action can be selected is strongly influenced by the number of possible alternative actions that could be selected in that context. A law for reaction time.
3. Decision complexity advantage: it is more efficient to require a smaller number of complex decisions than many simple decisions.
4. Fitts's Law: controls typically require movement of two different sorts: (1) movement is often required for the hands or fingers to reach the control, and (2) the control may then be moved in some direction, often to position a cursor. A law for movement time. (Both laws are illustrated in the sketch after this section.)
5. Provide feedback.

Summary of principles: in concluding our discussion of principles for control design, it should be apparent that just as display principles sometimes conflict, so do principles for control design. To show how these conflicts are resolved, we turn to a discussion of various categories of controls. As we encounter each principle in an application, we place a reminder of the principle number in parentheses; for example, (A1) refers to the principle of proximity compatibility, the first principle discussed under attention. The letter refers to the category: attention (A), perception (P), memory (M), mental model (MM), and response selection and execution (R), which are summarized in Table 9.4. In the following sections we apply these principles to specific applications. With these applications we use the term "guidelines" to distinguish them from the 15 principles; the guidelines are more specific design suggestions derived from the principles.
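The sketch below illustrates the form of both laws named above. The coefficients a and b vary by device and person, so the numbers here are assumptions for illustration, not values from the text:

```python
# Minimal sketch of the Hick-Hyman and Fitts's Law equations.
from math import log2

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Reaction time grows with the information (bits) in the choice:
    RT = a + b * log2(N) for N equally likely alternatives."""
    return a + b * log2(n_alternatives)

def fitts_mt(distance, width, a=0.1, b=0.1):
    """Movement time grows with the index of difficulty:
    MT = a + b * log2(2 * D / W) for movement distance D to a target of width W."""
    return a + b * log2(2 * distance / width)

for n in (2, 4, 8):
    print(f"{n} alternatives -> RT ~ {hick_hyman_rt(n):.2f} s")
for d, w in ((10, 2), (20, 2), (20, 4)):
    print(f"D={d} cm, W={w} cm -> MT ~ {fitts_mt(d, w):.2f} s")
# Doubling the number of alternatives adds a constant increment to RT;
# doubling target width cancels doubling distance, leaving MT unchanged.
```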
