Summary

This textbook provides a comprehensive overview of human-computer interaction (HCI). It discusses different aspects of interaction including visual, auditory, haptic, and motor systems. It also examines cognitive processes involved in reading and provides details about various types of memory such as sensory, short-term, and long-term.

Full Transcript

Chapter one summary

The human system input and output channels are: 1. Visual channel 2. Auditory channel 3. Haptic channel 4. Movement.

Human vision is a highly complex activity with a range of physical and perceptual limitations. We can roughly divide visual perception into two stages: 1. the physical reception of the stimulus from the outside world, and 2. the processing and interpretation of that stimulus. The aspects (factors) of visual perception are: 1. Perceiving size and depth 2. Perceiving brightness 3. Perceiving color. The visual system compensates for: 1. Movement 2. Changes in luminance.

Reading is the perception and processing of text; it is defined as the cognitive process of decoding symbols to determine a text's meaning. The stages in the reading process are: 1. The visual pattern is perceived (recognized) 2. It is decoded using an internal representation of language 3. It is interpreted using knowledge of syntax, semantics and pragmatics.

Hearing, or auditory perception, is the ability to perceive sounds through an organ, such as the ear, by detecting vibrations as periodic changes in the pressure of a surrounding medium. Hearing provides information about the environment, such as distances, directions and objects. The human ear can hear frequencies from about 20 Hz to 15 kHz. It can distinguish frequency changes of less than 1.5 Hz at low frequencies but is less accurate at high frequencies. The auditory system performs some filtering of the sounds received, allowing us to ignore background noise and concentrate on important information.

The sense of touch is the faculty by which external objects or forces are perceived through contact with the body (especially the hands). The apparatus of touch receives stimuli via receptors in the skin, which contains three types of sensory receptor: 1. Thermoreceptors respond to heat and cold 2. Nociceptors respond to intense pressure, heat and pain 3. Mechanoreceptors respond to pressure. The haptic perception factors are: 1. the faculty by which external objects or forces are perceived through contact with the body, and 2. awareness of the position of the body and limbs (kinesthesis). Kinesthesis is due to three receptors in the joints: 1. Rapidly adapting receptors, which respond when a limb is moved in a particular direction 2. Slowly adapting receptors, which respond to both movement and static position 3. Positional receptors, which respond only when a limb is in a static position.

Movement is generally defined as a state of changing position from rest to motion or vice versa; movement can be both voluntary and involuntary. The time taken to respond to a stimulus is the reaction time plus the movement time. Reaction time depends on the stimulus type: 1. Visual 2. Auditory 3. Pain. Fitts' law describes the time taken to hit a screen target: the amount of time required for a person to move a pointer (e.g., a mouse cursor) to a target area is a function of the distance to the target divided by the size of the target (a worked sketch follows the memory overview below).

Human memory: information is stored in three types of memory: 1. Sensory memory 2. Short-term (working) memory 3. Long-term memory.
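The Fitts' law relationship above can be made concrete with a small sketch. This is a minimal illustration using the common Shannon formulation MT = a + b * log2(D/W + 1); the constants a and b below are invented for illustration and would normally be fitted from pointing experiments for a particular device.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (seconds) to hit a target of the given width at the
    given distance, using the Shannon formulation of Fitts' law:
        MT = a + b * log2(distance / width + 1)
    a and b are device-dependent constants; the defaults are illustrative."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A far, small target takes longer to hit than a near, large one:
print(round(fitts_movement_time(distance=800, width=20), 2))   # ~0.9
print(round(fitts_movement_time(distance=100, width=100), 2))  # 0.25
```

The practical design lesson follows directly from the formula: make frequently used targets large and place them close to where the pointer is likely to be.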
Sensory memories act as buffers for stimuli received through the senses. A sensory memory exists for each sensory channel: iconic memory for visual stimuli, echoic memory for aural stimuli and haptic memory for touch. These memories are constantly overwritten by new information coming in on these channels.

Short-term memory or working memory acts as a 'scratch-pad' for temporary recall of information. It is used to store information which is only required fleetingly; for example, calculate the multiplication 35 × 6 in your head. ▪ Short-term memory can be accessed rapidly, in the order of 70 ms. ▪ However, it also decays rapidly, meaning that information can only be held there temporarily, in the order of 200 ms. ▪ Short-term memory also has a limited capacity: the average person can remember 7 ± 2 chunks (e.g., digits).

Human long-term memory is our repository (warehouse) for all our knowledge. Here we store information, experiential knowledge and procedural rules of behavior – in fact, everything that we 'know'. ▪ It has a huge capacity. ▪ It has a relatively slow access time of approximately a tenth of a second. ▪ Forgetting occurs more slowly in long-term memory. ▪ Long-term memory is the relatively permanent memory system that holds vast amounts of information, experiential knowledge and procedural rules of behavior for a long time, possibly indefinitely. ▪ LTM is classified into procedural memory and declarative memory. ▪ There are two types of declarative memory: episodic memory and semantic memory. ▪ The declarative memory of information we have about the world is called semantic memory.

The 'process of thinking' refers to a series of cognitive performatives involved in disciplinary work, encompassing acts of intervention, representation, practice, and contemplation to create knowledge. ▪ Thinking can require different amounts of knowledge. ▪ Some thinking activities are very directed, and the knowledge required is constrained. ▪ Others require vast amounts of knowledge from different domains. ▪ Reasoning and problem solving are two categories of thinking. ▪ In practice these are not distinct, since the activity of solving a problem may well involve reasoning and vice versa. ▪ Reasoning is the process by which we use knowledge to draw conclusions or infer something new about the domain of interest; it is a means of inferring new information from what is already known. There are a number of different types of reasoning: ▪ deduction (deriving conclusions that necessarily follow from given premises), ▪ induction (generalizing from cases we have seen to cases we have not), ▪ abduction (reasoning from an observed event to the action or state that most plausibly caused it). We use each of these types of reasoning in everyday life, but they differ in significant ways.

Problem solving is the process of finding a solution to an unfamiliar task, using the knowledge we have. ▪ Human problem solving is characterized by the ability to adapt the information we have to deal with new situations. ▪ There are a number of different views of how people solve problems. ▪ The earliest, dating back to the first half of the twentieth century, is the Gestalt view that problem solving involves both reuse of knowledge and insight. ▪ A second major theory, proposed in the 1970s by Newell and Simon, was the problem space theory, which takes the view that the mind is a limited information processor.
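Newell and Simon's problem space theory, mentioned above, models problem solving as a search through states connected by legal operators. A minimal sketch of that framing follows; the water-jug puzzle, the operator names, and the breadth-first strategy are invented for illustration and are not taken from the textbook.

```python
from collections import deque

def solve(start, goal_test, operators):
    """Breadth-first search of a problem space: states are nodes,
    operators are legal moves. Returns the operator names on a
    shortest path from start to a goal state, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [op.__name__]))
    return None

# Toy problem space: jugs of capacity 4 and 3; reach exactly 2 units
# in the 4-unit jug. A state is a tuple (amount_in_a, amount_in_b).
def fill_a(s): return (4, s[1])
def fill_b(s): return (s[0], 3)
def empty_a(s): return (0, s[1])
def empty_b(s): return (s[0], 0)
def pour_a_b(s):
    moved = min(s[0], 3 - s[1]); return (s[0] - moved, s[1] + moved)
def pour_b_a(s):
    moved = min(s[1], 4 - s[0]); return (s[0] + moved, s[1] - moved)

print(solve((0, 0), lambda s: s[0] == 2,
            [fill_a, fill_b, empty_a, empty_b, pour_a_b, pour_b_a]))
```

The "limited information processor" view shows up in the search itself: the solver cannot see the whole space at once, so it explores it state by state.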
Chapter two summary

What is meant by batch processing? In batch processing, interactions between the user and the computer take place over hours or days. In contrast, the typical desktop computer system has interactions taking seconds or fractions of a second (or, with slow web pages, sometimes minutes!). The field of Human-Computer Interaction largely grew due to this change in interactive pace. It is easy to assume that faster means better, but some paper-based technology still outperforms its electronic counterparts in certain respects.

What are the text entry devices? 1. The alphanumeric keyboard a. The QWERTY keyboard b. Ease of learning – the alphabetic keyboard c. Ergonomics of use – the DVORAK keyboard and split designs 2. Chord keyboards 3. Phone pad and T9 entry 4. Handwriting recognition 5. Speech recognition

What are the positioning, pointing and drawing devices? 1. The mouse 2. Touchpad 3. Trackball and thumbwheel 4. Joystick and keyboard nipple 5. Touch-sensitive screens (touchscreens) 6. Stylus and light pen 7. Digitizing tablet 8. Eyegaze 9. Cursor keys 10. Discrete positioning controls. Positioning and drawing are the major activities in computer-aided design (CAD).

Pointing devices allow the user to point, position and select items, either directly or by manipulating a pointer on the screen. (True) The trackball is really just an upside-down mouse. (True) The eyegaze system supports both of the major CAD activities, positioning and drawing. (False – it supports only positioning.)

Explain how the touchscreen works. A touchscreen is a method of allowing the user to point at and select objects on the screen; it detects the presence of the user's finger, or a stylus, on the screen itself. It works in one of a number of different ways: ▪ by the finger (or stylus) interrupting a matrix of light beams, ▪ or by capacitance changes on a grid overlaying the screen, ▪ or by ultrasonic reflections. Because the user indicates exactly which item is required by pointing to it, no mapping is required and therefore this is a direct device.

What are the display devices? 1. Bitmap displays – resolution and color 2. Technologies a. Cathode ray tube b. Liquid crystal display c. Special displays 3. Large displays and situated displays 4. Digital paper

What are the devices for virtual reality and 3D interaction? 1. Positioning in 3D space a. Cockpit and virtual controls b. The 3D mouse c. Dataglove d. Virtual reality helmets e. Whole-body tracking 2. 3D display a. Seeing in 3D b. VR motion sickness c. Simulators and VR caves

In Human-Computer Interaction (HCI), devices for virtual reality (VR) and 3D interaction are essential for creating immersive experiences and enabling users to interact with virtual environments effectively. Discuss. These devices work together to provide users with interactive and immersive experiences in virtual environments. By combining visual, auditory, and haptic feedback, they facilitate more natural and engaging interactions, making VR and 3D applications effective for gaming, training, design, and other fields.

Mention some common devices used in this context. 1. VR Headsets Examples: Oculus Quest, HTC Vive, Valve Index, PlayStation VR Function: These headsets provide stereoscopic displays, track head movements, and give audio feedback to create a sense of presence in virtual environments. 2. Motion Controllers Examples: Oculus Touch Controllers, HTC Vive Controllers, PlayStation Move Function: Handheld devices that track the user's hand movements and gestures, allowing for interaction with virtual objects through pointing, grabbing, or performing gestures.
3. Haptic Feedback Devices Examples: haptic gloves, vests, or pads such as HaptX Gloves or the Teslasuit Function: Provide tactile feedback (e.g., vibrations, pressure) to simulate the sense of touch, enhancing immersion by allowing users to "feel" virtual objects. 4. Tracking Systems Examples: external cameras (e.g., Oculus Sensors), infrared tracking (e.g., Vicon) Function: Systems that track the position and movement of the user (both head and body) in real time, allowing for natural movement within the VR space. 5. 3D Display Systems Examples: CAVEs (Cave Automatic Virtual Environments), 3D monitors Function: Systems designed for group experiences, displaying 3D content in a way that multiple users can engage with without the need for headsets. 6. Input Devices Examples: touchpads, keyboards, and VR-specific gamepads Function: Traditional input devices can also be used in VR environments, offering additional control options for interaction and navigation. 7. Augmented Reality (AR) Devices Examples: Microsoft HoloLens, Magic Leap Function: These devices overlay digital information onto the real world, allowing for 3D interaction with both virtual and physical elements. 8. Body Tracking Devices Examples: Kinect, Leap Motion Function: These systems track body movements and gestures, enabling interaction without handheld controllers, beneficial for applications such as fitness or therapy. 9. Eye Tracking Devices Examples: Tobii eye trackers, integrated eye tracking in VR headsets Function: Measure where the user is looking, allowing for gaze-based interaction and enhancing realism in VR experiences.

What is the function of the following: A. VR Headsets B. Motion Controllers C. Haptic Feedback Devices D. Tracking Systems

A. VR Headsets: provide stereoscopic displays, track head movements, and give audio feedback to create a sense of presence in virtual environments. B. Motion Controllers: handheld devices that track the user's hand movements and gestures, allowing for interaction with virtual objects through pointing, grabbing, or performing gestures. C. Haptic Feedback Devices: provide tactile feedback (e.g., vibrations, pressure) to simulate the sense of touch, enhancing immersion by allowing users to "feel" virtual objects. D. Tracking Systems: track the position and movement of the user (both head and body) in real time, allowing for natural movement within the VR space.

Complete: 1. VR Headsets: provide stereoscopic displays, track head movements, and give audio feedback to create a sense of presence in virtual environments. 2. Motion Controllers: handheld devices that track the user's hand movements and gestures, allowing for interaction with virtual objects through pointing, grabbing, or performing gestures. 3. Haptic Feedback Devices: provide tactile feedback (e.g., vibrations, pressure) to simulate the sense of touch, enhancing immersion by allowing users to "feel" virtual objects. 4. Tracking Systems: track the position and movement of the user (both head and body) in real time, allowing for natural movement within the VR space.
5. 3D Display Systems: display 3D content in a way that multiple users can engage with, designed for group experiences without the need for headsets. 6. Input Devices: traditional input devices (touchpads, keyboards, VR-specific gamepads) that offer additional control options for interaction and navigation. 7. Augmented Reality (AR) Devices: overlay digital information onto the real world, allowing for 3D interaction with both virtual and physical elements. 8. Body Tracking Devices: track body movements and gestures, enabling interaction without handheld controllers, beneficial for applications such as fitness or therapy. 9. Eye Tracking Devices: measure where the user is looking, allowing for gaze-based interaction and enhancing realism in VR experiences.

In Human-Computer Interaction (HCI), physical controls, sensors, and special devices play a crucial role in enabling users to interact with computers and systems effectively. Discuss. Together, these components enhance user experience, making interactions more intuitive, efficient, and enjoyable. They facilitate a diverse range of applications from everyday computing to specialized tasks in gaming, design, and health monitoring. When designing interfaces, it is important to consider the right combination of these elements to meet user needs effectively.

In HCI, what are the physical controls, sensors and special devices? Here is a breakdown of these components:

Physical Controls These are the tangible components that users interact with directly. They include: 1. Buttons: Simple input controls that provide on/off or mode changes when pressed. 2. Knobs: Rotating controls that adjust settings like volume or brightness. 3. Switches: Toggle devices that turn functions on or off. 4. Touchscreens: Allow users to interact through touch gestures and multi-touch inputs. 5. Joysticks: Used for navigation or control in games and simulations. 6. Trackballs: Allow for cursor movement by rotating a ball; useful in precision applications.

Sensors Sensors are devices that detect and respond to input or environmental changes: 1. Motion Sensors: Detect movement and can be used for gesture recognition or security applications. 2. Ambient Light Sensors: Adjust screen brightness based on surrounding light conditions (a short sketch of this adaptation appears after the Special Devices list below). 3. Touch Sensors: Detect touch on screens or surfaces, enabling interaction without physical buttons. 4. Proximity Sensors: Detect the presence of objects nearby, often used in smartphones to turn off the screen when near the face. 5. Fingerprint Scanners: Used for user authentication by recognizing unique fingerprint patterns. 6. Cameras: Capture visual input for gesture recognition or augmented reality applications.

Special Devices These include unique tools and interfaces designed for specific applications or enhanced user experience: 1. Virtual Reality (VR) Headsets: Provide immersive experiences by combining sensory input and digital environments. 2. Augmented Reality (AR) Glasses: Overlay digital information onto the real world, enhancing interaction. 3. Game Controllers: Designed for gaming, providing specialized buttons and feedback systems. 4. Styluses: Allow precise input on touchscreens, useful for design and drawing applications. 5. Wearable Devices: Monitor health metrics or provide notifications directly to the user, such as smartwatches or fitness trackers. 6. Voice Recognition Systems: Enable hands-free control of devices through spoken commands.
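As a concrete illustration of sensor-driven interaction (the ambient light sensor above), here is a minimal sketch of adaptive screen brightness. The lux breakpoints and brightness levels are invented for illustration; real platforms tune such curves empirically per display.

```python
def adaptive_brightness(ambient_lux):
    """Map an ambient-light reading (in lux) to a screen brightness
    level in [0.0, 1.0]. The breakpoints below are illustrative only."""
    levels = [
        (10, 0.15),    # dark room: dim the screen
        (200, 0.4),    # typical indoor lighting
        (1000, 0.7),   # bright indoor / overcast outdoor
    ]
    for threshold, brightness in levels:
        if ambient_lux < threshold:
            return brightness
    return 1.0         # direct sunlight: full brightness

for lux in (5, 150, 800, 20000):
    print(lux, "lux ->", adaptive_brightness(lux))
```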
Define the terms "environment" and "bio-sensing" in HCI. In HCI, the terms "environment" and "bio-sensing" refer to important aspects of how users interact with systems. Here is a definition of each:

Environment In the context of HCI, the "environment" refers to the surrounding context in which human interaction with computer systems takes place. This can include: Physical Environment: The actual space where the interaction occurs, including lighting, layout, noise levels, and the presence of other individuals. For example, users might interact with a system in a quiet office or a bustling cafe, which can affect usability and design considerations. Social Environment: The interpersonal dynamics and social context influencing how users engage with technology. For instance, interactions might differ in solitary versus collaborative settings, affecting user behavior and choice of interface. Virtual Environment: In the case of virtual reality (VR) or augmented reality (AR), the simulated surroundings that users find themselves in while interacting with a system. The design of these environments plays a crucial role in user experience and immersion. Understanding the environment is essential for designing user interfaces that accommodate diverse user needs, enhance usability, and create an engaging experience.

Bio-sensing Bio-sensing refers to the use of various technologies to monitor and measure biological data from individuals. In HCI, bio-sensing can enhance interactions by incorporating physiological data into the system's responses. Key aspects include: Physiological Measurements: Bio-sensors can collect data such as heart rate, skin temperature, galvanic skin response (GSR), muscle activity (EMG), and brain activity (EEG). These measurements can provide insights into a user's state of health, emotions, and stress levels. Contextual Adaptation: By analyzing bio-sensing data, systems can adapt their behavior to improve user experience. For example, a VR application might adjust its difficulty level based on the user's stress indicators (a sketch of this idea follows the digital paper definition below). Feedback Mechanisms: Bio-sensing can enable systems to provide real-time feedback to users, enhancing engagement and personalization. For instance, wearable devices might offer biofeedback to help users understand their physical responses during exercise. Applications: Bio-sensing is commonly used in areas like health monitoring, mental wellness, gaming, and user experience research to better adapt technologies to individual needs and emotional states.

Conclusion Both environment and bio-sensing are critical components in HCI that contribute to improving user experience. By considering the physical and social contexts of use, as well as integrating physiological responses into interactions, designers can create more effective, personalized, and engaging systems.

Define digital paper. Digital paper, also known as interactive paper, is patterned paper used in conjunction with a digital pen to create handwritten digital documents. The printed dot pattern uniquely identifies the position coordinates on the paper. The digital pen uses this pattern to store the handwriting and upload it to a computer.
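Returning to bio-sensing for a moment: the contextual-adaptation example above (a VR application adjusting its difficulty from stress indicators) can be sketched minimally as below. The heart-rate thresholds and the difficulty policy are hypothetical, purely for illustration.

```python
def adjust_difficulty(heart_rate_bpm, current_level, resting_bpm=65):
    """Toy bio-sensing adaptation: ease off when the player's heart
    rate suggests stress, push harder when they seem under-challenged.
    Thresholds are illustrative, not clinically derived."""
    elevation = heart_rate_bpm - resting_bpm
    if elevation > 40:            # strongly elevated: likely stressed
        return max(1, current_level - 1)
    if elevation < 10:            # near resting: likely under-challenged
        return current_level + 1
    return current_level          # comfortable zone: keep the level

print(adjust_difficulty(120, current_level=3))  # -> 2 (ease off)
print(adjust_difficulty(70, current_level=3))   # -> 4 (push harder)
```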
Chapter three summary

Interaction in HCI refers to the ways in which users engage with computer systems and technology. Interaction encompasses the design, evaluation, and implementation of interactive computing systems, focusing on making the user experience more efficient, effective, and enjoyable.

Key components of interaction in HCI include: 1. Usability 2. User Experience (UX) 3. Interaction Design 4. Feedback 5. Accessibility 6. Cognitive Load

1. Usability: Ensuring that systems are easy to use and meet the user's needs effectively. 2. User Experience (UX): Understanding and optimizing the overall experience of users when interacting with technology, which includes emotional responses, aesthetics, and usability. 3. Interaction Design: Creating user interfaces that facilitate interaction, including layout, visual elements, and control mechanisms. 4. Feedback: Providing users with information about their actions and system status to enhance understanding and control. 5. Accessibility: Designing systems that are usable by people with a wide range of abilities and disabilities. 6. Cognitive Load: Minimizing the mental effort required to use a system, ensuring that users can focus on their tasks without being overwhelmed by unnecessary information or complexity.

o Effective interaction in HCI often involves testing and adapting designs based on user feedback and behavior through user-centered design principles. o Key methodologies might include usability testing, surveys and observation.

MODELS OF INTERACTION o In Human-Computer Interaction (HCI), an interaction model refers to the conceptual framework that defines how users interact with a computer system or software. o It encompasses the methods, mechanisms, and structure through which human users communicate with and control a machine or application. o The interaction model shapes how user interfaces are designed, impacting usability, accessibility, and overall user experience.

FRAMEWORKS AND HCI Human-Computer Interaction (HCI) frameworks provide structured approaches to understanding, designing, and evaluating interactions between users and computers. These frameworks are crucial for guiding the development of user-friendly, efficient, and effective systems. Below are some key HCI frameworks: 1. Interaction Frameworks 2. Usability Frameworks 3. Cognitive Frameworks 4. Participatory Design Frameworks 5. Activity-Centred Design Framework 6. User-Centred Design (UCD) 7. Design Thinking

1. Interaction Frameworks The Interaction Framework by Ben Shneiderman: This framework emphasizes the importance of eight golden rules of interface design that aim to create interfaces that are consistent, efficient, and easy to use. These rules are: 1. Strive for Consistency: Ensure that similar actions and elements have the same meanings across the interface to help users learn and predict outcomes. 2. Enable Frequent Users to Use Shortcuts: Allow experienced users to speed up their interactions with the system through shortcuts or advanced features without hindering novice users. 3. Offer Informative Feedback: Provide users with clear feedback about their actions, including confirmations, error messages, or status updates, so they know the results of their interactions. 4.
Design Dialogs to Yield Closure: Structure interactions so that they have clear beginnings and endings, helping users to recognize when tasks are complete. 5. Error Prevention and Handling: Design interfaces that prevent problems from occurring and provide helpful error messages when they do, guiding users toward solutions. 6. Permit Easy Reversal of Actions: Allow users to undo actions easily, which encourages exploration and helps reduce anxiety about making mistakes. 7. Support Internal Locus of Control: Design the interface so that users feel in control of the interaction, allowing them to initiate actions rather than feeling forced by the system. 8. Reduce Short-Term Memory Load: Minimize the amount of information that users have to remember by presenting options and information clearly and using familiar conventions. 2. Usability Frameworks The Usability Heuristics by Jakob Nielsen: Nielsen outlined ten usability heuristics that focus on general principles for user interface design. They include: 1. Visibility of System Status: Keep users informed about system status with timely feedback. 2. Match Between System and the Real World: Use familiar language and concepts for users. 3. User Control and Freedom: Provide options to undo actions and exit states easily. 4. Consistency and Standards: Maintain uniformity and follow platform conventions. 5. Error Prevention: Design to prevent errors before they occur. 6. Recognition Rather Than Recall: Make information and actions visible to minimize memory load. 7. Flexibility and Efficiency of Use: Cater to both novice and expert users with adaptable interfaces. 8. Aesthetic and Minimalist Design: Avoid irrelevant information to keep the interface clean. 9. Help Users Recognize, Diagnose, and Recover from Errors: Offer clear error messages and solutions. 10. Help and Documentation: Provide easy-to-find, task-focused help and documentation. In summary, heuristic principles are foundational tools that enhance usability, consistency, and user satisfaction, making them indispensable in the field of user experience design. Heuristic principles are important because they provide a quick and effective way to identify and fix usability issues early in the design process, saving time and resources. They ensure consistency, reducing cognitive load and helping users navigate systems more efficiently. Additionally, they focus on user needs, enhancing satisfaction and engagement, which are critical for the success of any product. 3. Cognitive Frameworks Norman’s Model of Interaction: o Donald Norman proposed a model based on the stages of user interaction. o It focuses on understanding how users engage with products and systems. Here’s a breakdown of each stage: 1. Goal Formation: This is the initial stage where users identify what they want to achieve. Goals can be explicit, like completing a task or solving a problem, or implicit, such as improving efficiency or enjoying a task. 2. Intent to Act: After forming a goal, users develop an intention to take specific actions. This step involves deciding how to approach the goal, including selecting potential actions or strategies. 3. Action: At this stage, users execute the chosen action. This involves interaction with the system, such as clicking buttons, navigating menus, or using tools. 4. Perception: Once an action is taken, users perceive the system's response. This includes noticing how the system behaves and what feedback it provides, relying on sensory experiences to gather information. 5. 
Interpretation: Users then interpret the perception of the response. This involves understanding what the feedback means in relation to their goals, assessing whether it meets their expectations and needs. 6. Evaluation: Finally, users evaluate the outcome of their actions based on their interpretations. They determine if the goal was achieved successfully or if adjustments need to be made for future interactions. This model highlights the cognitive processes involved in user interactions and emphasizes the importance of user-centred design. Understanding these stages can help designers create more intuitive and effective interfaces that align with user needs and enhance the overall experience.

4. Participatory Design Frameworks Cooperative Design: o A user-centred approach where users and designers collaborate throughout the design process. o It is particularly focused on ensuring that users' needs and preferences are central to the development of the system. Contextual Design: o Developed by Hugh Beyer and Karen Holtzblatt, this framework involves gathering in-depth knowledge about how users interact with technology in their natural settings. o It involves techniques like field studies, scenario development, and user personas.

5. Activity-Centred Design Framework Activity Theory: o This framework, rooted in psychology, suggests that people engage in purposeful activities, and the design of systems should focus on supporting activities rather than individual tasks. It involves understanding the tools, subjects, and goals involved in a particular activity to design systems that support them holistically.

6. User-Centred Design (UCD) UCD Process: o A widely used methodology that places the user at the center of the design process. It involves iterative cycles of: 1. Research: Understanding users' needs and contexts. 2. Design: Developing and prototyping solutions. 3. Testing: Gathering user feedback and iterating on the design. o The goal is to create systems that are intuitive and meet the users' needs by involving them at every stage of development.

7. Design Thinking The Design Thinking Process: o A solution-based framework that involves empathy, defining problems, ideation, prototyping, and testing. o It emphasizes understanding users' emotions, perspectives, and pain points to create innovative solutions that are practical and effective.

In summary, HCI frameworks help designers and researchers ensure that systems are intuitive, user-friendly, and aligned with the needs of users. Whether through usability heuristics, cognitive models, or participatory approaches, these frameworks guide the design and evaluation of technology to optimize user experience.

ERGONOMICS AND HCI Ergonomics and Human-Computer Interaction (HCI) are closely related fields that focus on optimizing the interaction between people and systems, particularly computers. Ergonomics Ergonomics, also known as human factors, is the study of designing equipment and systems that fit the human body and its cognitive abilities. The goal is to improve comfort, safety, and efficiency. Key aspects include: 1. Physical Ergonomics: Focuses on the physical interaction between humans and objects, including workstation design, posture, and movement. 2. Cognitive Ergonomics: Concerns the mental processes involved in human interaction with systems, such as perception, memory, and decision-making.
3. Organizational Ergonomics: Looks at the optimization of socio-technical systems, including organizational structures, policies, and processes. These aspects of ergonomics are all relevant to ensuring a safe and healthy workplace, and there are many potential benefits to be gained by taking a proactive approach to ergonomics, such as: 1. Elevated safety and productivity for everyone. 2. Increased worker attendance and overall retention. 3. Fewer work-related injuries (thereby protecting workers from harm and saving the company both time and money). 4. Faster output with greater profit. 5. An efficient and cohesive place of employment.

Intersection of Ergonomics and HCI o Both fields aim to enhance user performance and satisfaction while reducing the risk of injury or cognitive overload. o Ergonomic principles are often applied in HCI to create more comfortable and efficient interfaces. o User testing and feedback are crucial in both disciplines to continually improve designs based on real-world use. Importance of combining ergonomics and HCI: combining ergonomics and HCI allows for the development of products and systems that not only perform well but also accommodate the physical and cognitive needs of users, leading to better usability and overall satisfaction.

INTERACTION STYLES in HCI o In Human-Computer Interaction (HCI), interaction styles refer to the ways in which users interact with computer systems or software. o These styles define the methods or techniques that users employ to communicate their intentions to the system and receive feedback. o The choice of interaction style significantly impacts user experience, ease of use, and efficiency. The most common interaction styles in HCI include:

1. Command-Line Interaction (CLI) Description: Users input text-based commands via a terminal or command prompt to interact with the system. Strengths: Efficient for experienced users; fast for performing complex tasks; minimal resource consumption. Weaknesses: Requires users to know the specific commands and syntax; not intuitive for beginners. Example: Unix/Linux shell or Windows Command Prompt.

2. Menu-Based Interaction Description: Users select commands or options from a list of menus. Menus can be hierarchical or flat. Strengths: Intuitive for beginners; reduces the need to remember commands; structured. Weaknesses: Can become cumbersome with too many options or deep menu hierarchies. Example: Application menus (e.g., File, Edit, View) in software like Microsoft Word.

3. Direct Manipulation Description: Users interact directly with on-screen objects using visual representations (e.g., clicking, dragging, resizing). Strengths: Immediate feedback; intuitive; often engaging; fosters a sense of control and mastery. Weaknesses: May require a more complex graphical interface; not suitable for all tasks. Example: Using a mouse to drag and drop files in a graphical desktop environment.

4. Form-Fill Interaction Description: Users interact with the system by filling out fields in forms (e.g., entering data in text fields, selecting checkboxes). Strengths: Structured, easy to follow; effective for data entry and collecting specific information. Weaknesses: Can become tedious for long forms; may not offer flexibility or adaptability in more complex interactions. Example: Online registration forms or data entry interfaces.
5. Pointing and Clicking (Graphical User Interface – GUI) Description: Involves using a pointing device (e.g., mouse, trackpad) to select options and manipulate objects on a screen, typically in the form of windows, buttons, and icons. Strengths: Intuitive and visual; requires less cognitive load compared to CLI. Weaknesses: Not ideal for large-scale or complex tasks requiring detailed input; can be slow for power users. Example: Clicking buttons or selecting icons in software like Microsoft Word, web browsers, or mobile apps.

6. Touch-Based Interaction Description: Users interact with the system using gestures (e.g., tap, swipe, pinch) on a touchscreen device. Strengths: Highly intuitive, especially for mobile or tablet devices; direct interaction with the screen; engaging. Weaknesses: Limited precision compared to mouse/keyboard; can be tiring for prolonged use; not ideal for all types of tasks. Example: Smartphones, tablets, and touch-enabled laptops or monitors.

7. Voice Interaction (Speech Recognition) Description: Users provide input via spoken language, and the system interprets and responds based on voice commands or queries. Strengths: Hands-free, natural interaction; useful for accessibility; quick for certain tasks. Weaknesses: May require training for speech recognition; sensitive to noise; limited by the system's ability to understand diverse accents and contexts. Example: Virtual assistants like Amazon Alexa, Apple Siri, or Google Assistant.

8. Gestural Interaction Description: Users interact with the system by performing physical gestures, such as hand movements, body motions, or facial expressions. Strengths: Natural and intuitive; can be immersive in virtual or augmented reality settings; engaging. Weaknesses: Requires specialized hardware (e.g., motion sensors or cameras); limited precision; tiring for extended use. Example: Using hand gestures in virtual reality environments (e.g., Oculus Rift, Microsoft Kinect).

9. Natural Language Interaction (NLI) Description: Users communicate with the system using written or spoken natural language, and the system processes and responds accordingly. Strengths: Extremely user-friendly and intuitive for non-experts; more flexible than rigid command structures. Weaknesses: Requires sophisticated language understanding algorithms; might be limited in scope or accuracy. Example: Chatbots, virtual assistants, or search engines where users input questions in natural language.

10. Multimodal Interaction Description: Combines two or more interaction styles (e.g., voice, touch, gesture) in a single interface to offer more flexible and adaptable ways to interact. Strengths: Provides more options for interaction; adapts to user preferences or context; can improve accessibility. Weaknesses: Can be complex to design and implement; requires more processing power. Example: Smartphones where users can interact via voice commands, touch, and gestures.

11. Eye-Tracking Interaction Description: Uses eye-tracking technology to detect where the user is looking on the screen and uses that as an input mechanism (a dwell-time sketch follows this list). Strengths: Hands-free interaction; useful in accessibility settings for users with mobility impairments. Weaknesses: Requires specialized hardware; can be inaccurate under certain conditions (e.g., eye fatigue, poor lighting). Example: Eye-controlled interfaces for people with disabilities or hands-free web browsing.

12. Brain-Computer Interface (BCI) Description: Uses brainwave signals to directly interface with a computer or device, enabling control without physical input devices. Strengths: Potential for hands-free, highly personalized control, especially in assistive technologies. Weaknesses: Currently expensive, with limited consumer applications; requires special hardware and training. Example: Research applications in assistive technology, such as controlling a wheelchair or a prosthetic arm using thought alone.
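The eye-tracking style above (item 11) is often implemented with dwell-based selection: a target is "clicked" once the gaze has rested on it long enough. A minimal sketch follows; the dwell time, sampling rate, and target geometry are invented for illustration.

```python
def dwell_select(gaze_samples, target_contains, dwell_ms=800, sample_ms=50):
    """Fire a selection once the gaze stays inside the target for
    dwell_ms. gaze_samples yields (x, y) points sampled every
    sample_ms milliseconds; target_contains(x, y) tests membership.
    Returns the sample index at which selection fires, or None."""
    needed = dwell_ms // sample_ms       # consecutive samples required
    run = 0
    for i, (x, y) in enumerate(gaze_samples):
        run = run + 1 if target_contains(x, y) else 0
        if run >= needed:
            return i
    return None

inside_button = lambda x, y: 100 <= x <= 200 and 100 <= y <= 150
samples = [(150, 120)] * 20 + [(400, 400)]   # gaze rests on the button
print(dwell_select(samples, inside_button))  # -> 15 (16 samples = 800 ms)
```

Choosing the dwell time is the classic trade-off in gaze interaction: too short and users select things merely by looking at them (the "Midas touch" problem), too long and the interaction feels sluggish.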
Brain-Computer Interface (BCI) in HCI in detail: A Brain-Computer Interface (BCI) in Human-Computer Interaction (HCI) refers to a direct communication pathway between the brain and an external device, typically a computer. BCIs are designed to interpret brain activity and translate it into commands that can be understood and executed by computers or machines. Here are some key aspects of BCIs: 1. Signals: BCIs typically capture brain signals using methods like electroencephalography (EEG), which records electrical activity in the brain, or other neuroimaging techniques. 2. Signal Processing: The captured brain signals are processed and analyzed to identify patterns that correspond to specific thoughts, intentions, or commands. 3. Control Mechanism: Once the brain signals are interpreted, they can be used to control external devices, such as prosthetic limbs, computers, or even video games. 4. Applications: BCIs have a wide range of applications, including aiding individuals with disabilities to communicate or control devices, enhancing gaming experiences, and potentially offering new modalities for interaction in various fields. 5. Research and Development: The field is still actively evolving, with ongoing research aimed at improving the accuracy, efficiency, and usability of BCIs, as well as exploring ethical and privacy issues related to brain data. In HCI, BCIs represent a significant leap towards more intuitive and direct forms of interaction, enabling users to engage with technology using their neural activity rather than relying solely on traditional input devices like keyboards or mice.

Interaction styles in HCI are critical: o they define how users engage with a system, and selecting the right one can dramatically affect the user experience. o By considering the specific needs of users and tasks, designers can create more intuitive, efficient, and accessible systems.

THE CONTEXT OF THE INTERACTION In HCI, the context of the interaction refers to the environment or circumstances in which a user interacts with a computer system, including the physical, social, and cognitive conditions that affect how the interaction occurs. Understanding context is essential because it influences user behaviour, perception, and the design of interactive systems. In other words, a system's design should consider the specific context in which it will be used to ensure effectiveness, usability, and satisfaction. There are several dimensions to consider when analysing the context of interaction in HCI: 1. Physical Context 2. Social Context 3. Cognitive Context 4. Temporal Context 5. Cultural Context 6. Technological Context

1. Physical Context This refers to the physical environment in which the interaction occurs, including: o Location: Whether the user is interacting with the system at home, in the office, outdoors, or in a specific work environment (e.g., factory floor, healthcare setting).
o Device and Interface: The type of device being used (e.g., desktop computer, smartphone, tablet, wearable device) and the interaction modality (e.g., touchscreen, keyboard, voice input, gesture control). o Environmental Conditions: Factors such as lighting, noise, temperature, and other environmental variables that can impact user experience. For instance, working in a noisy environment can make voice-based interfaces less effective. 2. Social Context This refers to the social environment and the roles that individuals play during the interaction. This includes: o Social Influence: How other people around the user may affect their use of technology, either by directly influencing their actions or by providing feedback. This is especially important in collaborative or group settings, where interactions may be coordinated between multiple users (e.g., in video conferencing or team collaboration tools). o User Roles: The role or identity of the user in each context can affect their goals, expectations, and interactions. For example, a user may approach a medical device very differently from a recreational device, depending on whether they are a healthcare professional or a patient. o Collaboration and Communication: Many modern systems are designed for social interaction, such as social media platforms, collaborative tools (e.g., Google Docs), or multiplayer games. These systems often prioritize features that support group dynamics, sharing, and communication. 3. Cognitive Context The cognitive context pertains to the mental state and cognitive abilities of the user, including: o Mental Models: Users bring pre-existing mental models (knowledge and expectations about how things work) to their interactions with systems. Designers must ensure that the system's behavior aligns with users' mental models to minimize cognitive load and confusion. o User Goals and Tasks: The context is shaped by the goals users are trying to achieve. Different goals require different types of interactions. For instance, navigating a website to purchase a product requires different interactions than using the same website to research information. o Cognitive Load: The amount of mental effort required to use the system is influenced by the design of the interface. Interfaces that are too complex or require excessive memory or attention may overwhelm users, while intuitive interfaces ease cognitive load. 4. Temporal Context This refers to the time-related aspects of the interaction, which include: o Time of Interaction: Whether the user is engaging with the system during work hours, leisure time, or in urgent situations. Time constraints, urgency, or available time can shape user behavior and expectations. o History of Interaction: The context of prior experiences with the system or related systems. A user familiar with a system may have different expectations and behaviors than a novice user. o Duration: The duration of interaction influences how the system is used. A user might engage with an application for a brief period (e.g., checking a weather app) or for extended sessions (e.g., playing a game, completing a complex task). 5. Cultural Context Users’ cultural background can influence their approach to technology, including: o Language and Symbolism: Different cultures interpret symbols, icons, and language differently. A button labeled with a specific symbol might be understood in one culture but not in another or might have a different meaning altogether. 
o Norms and Expectations: What is considered normal or acceptable in one culture may not be in another. For example, the way people interact with touchscreens or the expectations for privacy and data security can vary significantly across cultures.

6. Technological Context This refers to the specific characteristics and limitations of the technologies involved, such as: o System Capabilities: The computational power, software features, and hardware of the system influence what tasks can be performed. For example, a mobile phone has different interaction possibilities compared to a desktop computer or a specialized medical device. o Connectivity and Bandwidth: Whether the system requires an internet connection, and the quality of that connection (e.g., low bandwidth or offline capabilities), will influence the user experience. o Interaction Modalities: Whether users are interacting with a system via touch, voice, gesture, or traditional keyboard and mouse. Different modalities may be appropriate depending on the context.

Incorporating the context of interaction into HCI design ensures that systems are more user-centred. By understanding the context, designers can: 1. Anticipate User Needs: Different contexts require different interface features, such as simplifying interactions for environments where users are on the move (mobile) versus where they are stationary (desktop). 2. Minimize Friction: Systems designed with context in mind reduce unnecessary friction, helping users to achieve their goals more efficiently. 3. Enhance Usability: Context-aware design leads to more intuitive, usable systems that align with the expectations and needs of users in each context. 4. Improve Accessibility: Context-aware systems can be more adaptive to users with disabilities, adjusting the interface based on the user's needs and environment.

Consider, for example, how these dimensions apply to a fitness tracker, which would be designed differently depending on: o the physical context (outdoor activity vs. indoors), o the social context (whether the user is alone or with a group), o the cognitive context (whether the user is focusing on a challenging workout or casually monitoring progress).

In summary, context in HCI plays a critical role in shaping how users engage with technology and how effective, efficient, and satisfying their experiences are. Understanding and integrating the context of interaction into design is fundamental to creating user-friendly, effective interfaces.

Chapter four summary

What is a paradigm shift? A paradigm shift refers to a fundamental change in the basic concepts, practices, or assumptions underlying a particular field or system. How does a paradigm shift affect thinking and problem-solving? It transforms how people think, approach problems, or understand the world. Who popularized the term "paradigm shift"? Philosopher and historian of science Thomas Kuhn popularized the term. In which book did Thomas Kuhn discuss paradigm shifts? He discussed it in his book The Structure of Scientific Revolutions (1962). What are the stages described by Kuhn that lead to a paradigm shift? 1. Normal Science: Stable progress within the existing framework. 2. Crisis: The existing paradigm faces significant challenges. 3. Paradigm Shift: Transition to a new paradigm that better explains data or solves problems. Can you provide examples of paradigm shifts?
The transition from a geocentric (Earth-centered) to a heliocentric (Sun-centered) model of the universe. The shift from Newtonian mechanics to quantum mechanics and relativity in physics. The development of the internet and digital technology, transforming communication, business, and social structures.

Are paradigm shifts limited to science? No, they can occur in various domains, including technology, politics, and culture.

What does a paradigm shift mean in Human-Computer Interaction (HCI)? In HCI, a paradigm shift refers to a significant change in how people interact with computers and technology or a major transformation in the design and development of user interfaces.

What drives paradigm shifts in HCI? They are typically driven by new technologies, innovations in interaction methods, or evolving user needs, leading to reevaluation of existing practices and new approaches to design.

What is the paradigm shift from Command-Line Interfaces (CLI) to Graphical User Interfaces (GUI)? Early computer interactions relied on text-based CLI, where users typed specific commands. The shift to GUI introduced graphical elements like windows, icons, and buttons, making computers more accessible and intuitive for the general public.

How did the transition from desktop computing to mobile computing change user interaction with technology? Mobile computing introduced devices like smartphones and tablets, featuring touchscreens and gestures. This required more adaptive, context-sensitive interfaces and consideration for smaller screens and battery life.

What shift occurred from keyboard and mouse interfaces to touch and voice interfaces? Touchscreens and voice-activated systems, such as Siri and Alexa, prioritized direct manipulation and natural language processing, reducing reliance on traditional input devices like keyboards and mice.

What is the concept of ubiquitous computing, and how is it different from desktop computing? Ubiquitous computing integrates technology seamlessly into everyday objects and environments, moving beyond traditional desktop setups. Examples include smart homes, wearables, and IoT devices.

Why are Graphical User Interfaces (GUI) considered a major milestone in HCI? GUIs made computing more intuitive and user-friendly by replacing text-based commands with graphical elements, broadening accessibility for non-technical users.

What new challenges arose with the shift to mobile computing? Designing for smaller screens, managing battery life, and creating interfaces adaptable to various contexts were key challenges introduced by mobile computing.

How do touch and voice interfaces enhance user interaction? They provide more natural and direct ways of interacting with devices, enabling users to perform tasks using gestures or voice commands instead of traditional input methods.

Can you give examples of ubiquitous computing in daily life? Examples include smart home devices like thermostats and lights, wearable fitness trackers, and IoT-enabled appliances.

What is the paradigm shift from focused interfaces to context-aware and adaptive systems? This shift involves systems that dynamically adapt to the user's situation, environment, or behavior. For example, smartphones use sensors to detect location, activity, or mood, enabling interfaces to change dynamically for better usability and relevance.

How do context-aware systems improve usability? By adapting to the user's current context, such as location or activity, these systems provide more relevant and tailored interactions.
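A minimal sketch of the context-aware adaptation described above: a rule-based policy that picks presentation settings from sensed context. The context fields, their values, and the chosen settings are all hypothetical.

```python
def adapt_interface(context):
    """Pick interface settings from a dict of sensed context values.
    A real system would derive context from sensors and learned models;
    this rule set is purely illustrative."""
    ui = {"font_scale": 1.0, "layout": "full", "notifications": "all"}
    if context.get("activity") == "walking":
        ui["font_scale"] = 1.4           # larger text while moving
        ui["layout"] = "simplified"
    if context.get("location") == "meeting_room":
        ui["notifications"] = "silent"   # suppress interruptions
    if context.get("ambient_noise_db", 0) > 70:
        ui["notifications"] = "vibrate"  # audible alerts would be missed
    return ui

print(adapt_interface({"activity": "walking", "ambient_noise_db": 75}))
# -> {'font_scale': 1.4, 'layout': 'simplified', 'notifications': 'vibrate'}
```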
What is the paradigm shift from individual interaction to collaborative and social computing? Early HCI emphasized single-user interactions with computers. With the rise of the internet, there has been a shift toward designing for group interactions and shared experiences, such as multi-user interfaces, virtual collaboration tools, and social platforms.

Can you provide examples of tools that support collaborative and social computing? Examples include Google Docs for real-time collaboration, Slack for team communication, and social media platforms like Facebook or Instagram for shared social experiences.

Why is context awareness important in modern interfaces? It allows systems to be more intuitive and personalized by dynamically responding to the user's environment or actions, improving overall user satisfaction.

How has the rise of the internet influenced HCI design? It shifted the focus from individual usage to collaborative and social interactions, emphasizing tools and platforms that support group communication and shared activities.

What technologies enable context-aware and adaptive systems? Sensors, machine learning algorithms, and data analytics are key technologies enabling systems to detect and respond to context dynamically.

What is the paradigm shift from graphical interfaces to immersive interfaces? Augmented Reality (AR) and Virtual Reality (VR) represent a shift where users interact with 3D virtual environments or overlay digital information onto the real world.

What new methods of interaction are introduced by immersive interfaces? They include spatial gestures, motion tracking, and immersive experiences.

What do these paradigm shifts in HCI aim to achieve? They aim to make technology more intuitive, human-centered, and responsive to real-world needs.

What challenges do these paradigm shifts bring to designers? Designers must rethink interaction models to ensure technology remains accessible, useful, and empowering for users.

What technologies are involved in AR and VR interfaces? These include technologies for spatial gestures, motion tracking, and immersive environments.

What is time sharing in HCI? Time sharing is a method that allows multiple users to access a single computer system simultaneously, improving system utilization and user interaction.

How did Video Display Units (VDUs) change user interaction with computers? VDUs introduced visual displays, replacing text-only interfaces and drastically enhancing how users interact with computers.

What are programming toolkits, and how do they assist developers? Programming toolkits are collections of software tools that provide predefined components or templates to streamline application development.

What does personal computing refer to? Personal computing refers to the use of computers by individuals for personal tasks, leading to the proliferation of home computers and personalized software.

What is the WIMP interface, and why is it significant? WIMP stands for Windows, Icons, Menus, and Pointer. It's a common interface paradigm that enhances usability by allowing users to interact visually and intuitively.

How are metaphors used in HCI? Metaphors in HCI help users understand new systems by relating them to familiar concepts, such as the desktop metaphor for graphical user interfaces.

What is direct manipulation in HCI? Direct manipulation is a style of interaction where users can directly engage with visible objects on the screen, such as dragging and dropping files.
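Direct manipulation, as defined above, pairs visible objects with immediate, reversible actions. A toy sketch of that character, with invented names:

```python
class Icon:
    """A draggable on-screen object with easy reversal of actions."""
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y
        self._history = []              # positions we can undo back to

    def drag_to(self, x, y):
        self._history.append((self.x, self.y))
        self.x, self.y = x, y           # immediate, visible feedback

    def undo(self):
        if self._history:
            self.x, self.y = self._history.pop()

doc = Icon("report.txt", 10, 10)
doc.drag_to(300, 200)                   # drop onto a folder region
print(doc.x, doc.y)                     # -> 300 200
doc.undo()                              # reversible, encouraging exploration
print(doc.x, doc.y)                     # -> 10 10
```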
What is the balance discussed in "Language versus Action" in HCI? It refers to the balance between command-based interfaces (language) and action-based interfaces (direct manipulation) in user interaction.

What is hypertext, and how has it changed information access? Hypertext is a system allowing users to navigate between documents via hyperlinks, fundamentally transforming how we access and organize information.

What does multi-modality mean in HCI? Multi-modality refers to using multiple forms of interaction, such as speech, touch, and gestures, to enhance user experience and accessibility.

What is Computer-Supported Cooperative Work (CSCW)? CSCW involves research and practice focused on how technology can facilitate collaborative work among groups of people.

What is the significance of the World Wide Web in HCI? The World Wide Web is an information system that connects documents through links, enabling easy access and sharing of information globally.

What are agent-based interfaces? These interfaces use autonomous agents to assist users by performing tasks on their behalf and adapting to their preferences.

What is ubiquitous computing? Ubiquitous computing integrates technology into everyday objects and activities, making it seamlessly embedded in our environment.

What are sensor-based and context-aware interactions? These interactions use sensors to gather data about the user's environment, enabling more responsive and personalized computing experiences.

Revision (Answers)

Define the following:

1. Kinesthesis. Kinesthesis is the awareness of the position of the body and limbs; this is due to three receptors in the joints.

2. The process of human thinking. The 'process of thinking' refers to a series of cognitive performatives involved in disciplinary work, encompassing acts of intervention, representation, practice, and contemplation to create knowledge. ▪ Thinking can require different amounts of knowledge. ▪ Some thinking activities are very directed, and the knowledge required is constrained. ▪ Others require vast amounts of knowledge from different domains.

3. Reasoning. Reasoning is the process by which we use knowledge to draw conclusions or infer something new about the domain of interest; it is a means of inferring new information from what is already known.

4. Problem solving (and the major problem-solving theories). Problem solving is the process of finding a solution to an unfamiliar task, using the knowledge we have. ▪ Human problem solving is characterized by the ability to adapt the information we have to deal with new situations. ▪ There are a number of different views of how people solve problems. ▪ The earliest, dating back to the first half of the twentieth century, is the Gestalt view that problem solving involves both reuse of knowledge and insight. ▪ A second major theory, proposed in the 1970s by Newell and Simon, was the problem space theory, which takes the view that the mind is a limited information processor.

What is the function of the following?

1. VR Headsets. Examples: Oculus Quest, HTC Vive, Valve Index, PlayStation VR. Function: these headsets provide stereoscopic displays, track head movements, and give audio feedback to create a sense of presence in virtual environments.

2. Motion Controllers. Examples: Oculus Touch Controllers, HTC Vive Controllers, PlayStation Move. Function: handheld devices that track the user's hand movements and gestures, allowing for interaction with virtual objects through pointing, grabbing, or performing gestures.
3. Haptic Feedback Devices. Examples: haptic gloves, vests, or pads such as HaptX Gloves or the Teslasuit. Function: provide tactile feedback (e.g., vibrations, pressure) to simulate the sense of touch, enhancing immersion by allowing users to "feel" virtual objects.

Define the following terms in HCI:

1. Physical Context. The physical context is the physical environment in which interaction with a system occurs, including: 1. Location: The setting of interaction (e.g., home, office, outdoors, or specific work environments like factories or healthcare facilities). 2. Device and Interface: The type of device used (desktop computer, smartphone, tablet, or wearable) and the interaction method (e.g., touchscreen, keyboard, voice input, or gesture control). 3. Environmental Conditions: Factors like lighting, noise, temperature, and other variables that influence the user experience (e.g., noise can reduce the effectiveness of voice interfaces).

2. Social Context. The social context is the social environment and the roles individuals play during interaction, including: 1. Social Influence: How people around the user impact their technology use, either by influencing actions or providing feedback, especially in collaborative settings (e.g., video conferencing, team tools). 2. User Roles: A user's role or identity affects their interaction with the system. For example, a healthcare professional and a patient may use the same medical device differently. 3. Collaboration and Communication: Modern systems prioritize social interaction, supporting group dynamics, sharing, and communication (e.g., social media, collaborative tools like Google Docs, multiplayer games).

3. Technological Context. The technological context covers the technological characteristics and limitations that affect interaction, including: 1. System Capabilities: The system's hardware, computational power, and software features determine the tasks it can handle. For instance, a mobile phone offers different functionalities compared to a desktop or specialized device. 2. Connectivity and Bandwidth: The need for an internet connection and its quality (e.g., low bandwidth or offline functionality) significantly impact user experience. 3. Interaction Modalities: The method of interaction, such as touch, voice, gestures, or keyboard and mouse, varies in suitability depending on the context.

Define the following terms in HCI:

1. Multi-modality. Multi-modality refers to using multiple forms of interaction, such as speech, touch, and gestures, to enhance user experience and accessibility. 2. Computer-Supported Cooperative Work (CSCW). CSCW involves research and practice focused on how technology can facilitate collaborative work among groups of people. 3. Ubiquitous Computing. Ubiquitous computing integrates technology into everyday objects and activities, making it seamlessly embedded in our environment.
