Summary

This document provides an overview of key concepts in Human-Computer Interaction (HCI), including perception bias, selective attention, and the Gestalt laws, together with design guidance on visual perception, structure, and color use. It also covers memory, attention, and learning as they affect design; voice interfaces and natural language processing; the reality-virtuality continuum and mixed reality; movement- and gesture-based input for post-desktop interfaces; and computer-supported cooperative work (CSCW) topics such as groupware and awareness.

Full Transcript

**1. Perception Bias**

**Perception bias** refers to how our **prejudices, beliefs, and experiences** can shape how we **interpret information**. People may **see what they expect** to see or **filter out** information that doesn't align with their views. In design, understanding **perception bias** helps ensure that all users experience the interface **fairly** and don't make assumptions based on prior knowledge.

**2. Selective Attention**

**Selective attention** is the ability to focus on certain aspects of our environment while **ignoring others**. For example, when you're reading a webpage, your attention is drawn to the **most important content**, like headlines or key images, while you might ignore the surrounding elements. This concept is essential in design because it helps to **emphasize key elements** and reduce **visual clutter** so users can focus on what matters most.

**3. Gestalt Laws**

The **Gestalt laws** are a set of principles that explain how we **perceive visual information** as whole, organized patterns. These principles include:

- **Proximity**: Objects that are close together are seen as related.
- **Similarity**: Similar objects are grouped together.
- **Continuity**: We perceive lines or patterns as **continuous**, even if interrupted.
- **Closure**: We fill in missing parts of a shape to complete it.
- **Figure-Ground**: We separate objects (figure) from the background (ground).

In design, applying these laws helps create **clear, understandable layouts** that users can navigate intuitively.

**4. Visual Perception**

**Visual perception** is the process by which our brains interpret **visual stimuli** (like light, color, and shapes) from the environment. In design, **visual perception** plays a huge role in how users understand and interact with content. Effective design uses visual cues (e.g., contrast, alignment) to **guide the user's attention** and make the interface easier to use.

**5. Structures**

**Structures** refer to the way information is **organized and arranged** visually. In design, structure involves how elements are **positioned** on the screen or page to create a sense of **order** and **clarity**. Well-structured content makes it easier for users to **find and process information**. Examples of structure in design include **layouts**, **grid systems**, and **hierarchical organization**.

**6. Colors**

**Colors** influence how users **perceive** and **interact** with a design. They can convey emotions, highlight important elements, and guide user actions. For example, **red** may signal **urgency** or **warnings**, while **blue** may evoke **calmness** or **trust**. Designers must consider color **contrast**, **accessibility** (e.g., colorblindness), and **cultural associations** when choosing colors for their interfaces.
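To make the contrast and accessibility point concrete, here is a minimal Python sketch of the WCAG 2.x contrast-ratio calculation. The function names and the example colors are my own choices for illustration; the luminance formula and the 4.5:1 AA threshold for normal-size text follow the WCAG definition.

```python
def _linearize(channel_8bit: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x definition)."""
    c = channel_8bit / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color, weighted per WCAG."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

# Example: dark gray text on a white background.
ratio = contrast_ratio((51, 51, 51), (255, 255, 255))
print(f"{ratio:.2f}:1")  # roughly 12.6:1
print("passes WCAG AA (normal text)" if ratio >= 4.5 else "fails WCAG AA")
```

A check like this is typically run at design time or in a style-lint step, so low-contrast text and background pairs are caught before users ever see them.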
**7. Peripheral**

**Peripheral perception** refers to the ability to perceive **objects or information** outside of our **direct line of sight**. We use peripheral vision to notice things happening in the background without actively focusing on them. In design, it's important to understand peripheral perception so that key elements are positioned in areas where users are likely to **notice them without distraction** (e.g., sidebars, footers).

**8. Information Scanning**

**Information scanning** is the process by which users **look through** a page or screen to find the information they need. Users typically scan for **key terms**, **headings**, or **visual cues** (like buttons or icons). Understanding how users scan information helps designers structure content effectively, ensuring important elements stand out and are easy to locate.

**9. Design Guidelines**

**Design guidelines** are recommendations based on research and best practices that help designers create **effective and usable interfaces**. These include principles like:

- **Consistency**: Ensuring elements behave and look the same throughout the interface.
- **Clarity**: Avoiding clutter and making sure text and visuals are easy to understand.
- **Feedback**: Providing users with **clear feedback** when they interact with the system (e.g., buttons changing color when clicked).
- **Affordance**: Designing elements that visually suggest their function (e.g., buttons that look like buttons).

Following these guidelines helps improve **usability** and **user satisfaction**.

**1. Memory**

**Memory** refers to how we **encode, store, and retrieve** information. In design, understanding how users remember things helps create interfaces that are **easy to recall** and navigate. There are two main types of memory relevant to design:

- **Short-term Memory**: This is where we store **information temporarily**, usually for just a few seconds or minutes. Short-term memory is limited, so users can only remember a few items at a time. For example, if the instructions on an online form are too long, users may struggle to remember them. **Design Implication**: Keep information **short and simple**, provide **clear instructions**, and minimize distractions to help users retain the most important details.
- **Long-term Memory**: This is where we store **information over a long period**, sometimes for a lifetime. Long-term memory is more reliable but requires **repetition** and **meaningful association** for information to stick. For instance, users will remember how to use familiar apps or websites after repeated use. **Design Implication**: Leverage **familiar patterns** and **consistency** to help users build muscle memory and recall information easily.

**2. Design Implications**

**Design implications** refer to how our understanding of memory influences design decisions. Knowing that **short-term memory** has limitations means we need to:

- Break information into **manageable chunks** (e.g., using step-by-step processes), as in the sketch after this list.
- Ensure users don't need to **memorize complex information** (e.g., avoid asking users to remember usernames or passwords by offering **auto-complete** or **password managers**).

By considering memory, we create designs that align with the user's cognitive abilities, making the experience **effortless** and **intuitive**.
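As a small illustration of the "manageable chunks" implication, the sketch below splits a long list of form fields into short steps. The field names and the chunk size of four are assumptions chosen for the example; the point is simply that no single step asks the user to hold many items in short-term memory at once.

```python
from typing import Iterable

def chunk(items: Iterable[str], size: int = 4) -> list[list[str]]:
    """Split a flat list of items into steps of at most `size` items each."""
    items = list(items)
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical form fields for a sign-up flow.
fields = ["name", "email", "password", "confirm password",
          "street", "city", "postal code", "country", "phone"]

for step_number, step_fields in enumerate(chunk(fields), start=1):
    print(f"Step {step_number}: {', '.join(step_fields)}")
```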
**3. Attention**

**Attention** is the ability to focus mental resources on specific tasks or stimuli. Attention is a limited resource, meaning users can't focus on everything at once. In design, it's crucial to **guide users' attention** to the most important elements. **Design Implication**: Use **visual hierarchy** (e.g., larger fonts, bold colors) to draw attention to key items like call-to-action buttons, ensuring users focus on what matters most.

**4. Learning**

**Learning** refers to the process of acquiring and retaining new information. In design, users need to **learn how to interact** with an interface. This is especially important when using **new systems** or features. **Design Implication**: **Onboarding** and clear **instructions** can facilitate learning by gradually introducing users to the system. Additionally, using **consistent design patterns** can help users quickly learn how to use the product through repetition.

**5. Task-based Design**

**Task-based design** is an approach where the design is centered around the specific **tasks users need to perform**. This focuses on making it **easy for users to accomplish goals** rather than adding unnecessary features. **Design Implication**: Ensure the system supports users in performing their tasks **efficiently** by minimizing steps, providing **clear feedback**, and organizing information based on the task flow.

**6. Time**

**Time** plays a significant role in user experience, as users may experience frustration when they feel a task takes too long or is too complex. The faster users can complete tasks, the **better the experience**. **Design Implication**: **Speed up interactions** by reducing load times, simplifying processes, and offering **progress indicators** (e.g., loading bars). Additionally, tasks should be **predictable** to minimize confusion and mental effort.

**History of Speech Technology**

**Speech technology** has evolved over decades, starting with simple **voice recognition systems** that could recognize only a few spoken words and progressing to advanced systems capable of **understanding and processing natural language**. The first speech recognition systems were basic, requiring users to speak from a limited vocabulary and at a slow pace. Over time, advancements in **machine learning** and **AI** have allowed for much more sophisticated systems that can understand and respond to **varied speech patterns** and **contexts**. Today, speech technology powers popular **virtual assistants** like Siri, Alexa, and Google Assistant.

**2. How VI Works**

A **Voice Interface (VI)** works by converting **spoken input** into data that can be understood and processed by a system. Here's a simplified breakdown of how it works:

- **Speech Input**: The user speaks into a microphone.
- **Speech Recognition**: The system uses algorithms to **convert spoken words** into text (this is the process of **speech-to-text**).
- **Natural Language Processing (NLP)**: The system uses **NLP** to understand the meaning of the text.
- **Action**: The system processes the input and provides an appropriate response, which could be in the form of **voice feedback** or performing an action (e.g., setting a timer or playing music).
- **Output**: If needed, the system responds with **speech output** (text-to-speech).

In short, a VI listens, understands, and responds, much like a conversation between a person and a computer; the small sketch below walks through this loop.
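The listen-understand-respond loop above can be sketched as a tiny pipeline. Everything here is hypothetical: the hard-coded "recognized" text and the keyword-based intent matching stand in for real speech recognition and NLP, and no actual audio or speech library is used.

```python
# Minimal sketch of a voice-interface loop:
# speech input -> text -> intent -> action -> spoken output.

def speech_to_text(audio: bytes) -> str:
    """Placeholder for a speech recognizer; a real system would decode the audio."""
    return "set a timer for ten minutes"

def parse_intent(text: str) -> dict:
    """Very rough NLP stand-in: keyword matching instead of real language understanding."""
    if "timer" in text:
        return {"intent": "set_timer", "duration": "ten minutes"}
    if "play" in text:
        return {"intent": "play_music"}
    return {"intent": "unknown"}

def perform_action(intent: dict) -> str:
    """Carry out the request and return a response to be spoken back to the user."""
    if intent["intent"] == "set_timer":
        return f"Okay, timer set for {intent['duration']}."
    return "Sorry, I did not understand that."

def text_to_speech(response: str) -> None:
    """Placeholder for speech synthesis; here we just print the response."""
    print(f"(spoken) {response}")

text_to_speech(perform_action(parse_intent(speech_to_text(b""))))
```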
**3. Speech Features**

**Speech features** refer to the different aspects of **spoken language** that voice interfaces need to process and understand. These include:

- **Pitch**: The highness or lowness of the voice.
- **Tone**: The emotional quality or attitude expressed in the voice.
- **Speed**: The rate at which words are spoken.
- **Volume**: How loud or soft the speech is.
- **Accent and Pronunciation**: Variations in how people pronounce words based on **geography** or **culture**.

Voice interfaces need to account for these factors to ensure accurate recognition and **natural interactions** with users.

**4. Natural Language Processing (NLP)**

**Natural Language Processing (NLP)** is a field of **artificial intelligence** that focuses on enabling machines to understand and respond to human language in a way that feels **natural**. NLP involves tasks like:

- **Speech recognition**: Converting spoken language into text.
- **Syntax analysis**: Understanding the grammatical structure of sentences.
- **Semantic analysis**: Interpreting the meaning of the words.
- **Context understanding**: Considering the context of a conversation to generate more accurate responses.

In voice interface design, **NLP** allows the system to **understand** what users are saying and provide intelligent, context-aware responses.
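As a concrete example of the syntax-analysis task listed above, the short sketch below uses the spaCy library (my choice for illustration; the notes do not name a specific toolkit) to tag parts of speech and dependency relations in a spoken-style command.

```python
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Set a timer for ten minutes")

# Syntax analysis: part-of-speech tag and dependency relation for each token.
for token in doc:
    print(f"{token.text:<8} {token.pos_:<6} {token.dep_:<10} head={token.head.text}")

# A rough stand-in for semantic analysis: noun chunks as candidate "slots" in the request.
print([chunk.text for chunk in doc.noun_chunks])
```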
**5. VI Applications**

**Voice Interface (VI) applications** are software programs that use speech recognition and NLP to interact with users through voice commands. Some common VI applications include:

- **Virtual Assistants**: Siri, Alexa, Google Assistant, which can perform tasks like setting reminders, answering questions, and controlling smart devices.
- **Voice-activated Controls**: Systems in cars, home appliances, or even smart TVs that allow users to control devices using voice commands.
- **Speech-to-text**: Apps that convert spoken words into written text, like dictation software or transcription services.
- **Customer Service**: Voice-activated systems that handle customer queries or support requests (e.g., automated phone systems).

These applications allow users to interact with technology **hands-free** and in a more **natural way**, making them increasingly popular across industries.

**6. UX Design Principles for VI**

**UX design principles for Voice Interfaces** ensure that interactions with voice systems are **easy**, **intuitive**, and **efficient**. Some key principles include:

- **Clarity**: The system should give clear instructions and feedback so users know what to do next.
- **Simplicity**: Keep commands and responses short, direct, and easy to remember.
- **Feedback**: Provide audible or visual feedback to show that the system has understood the command or is processing it (e.g., a beep or a confirmation phrase).
- **Context-awareness**: The system should consider the context of the conversation and respond appropriately (e.g., asking a follow-up question if the first request was vague).
- **Error Handling**: If the system doesn't understand the user, it should provide a way for them to rephrase or try again.

By following these principles, designers can create voice interfaces that feel **intuitive** and **human-like** while minimizing frustration.

**7. Emotion-aware VI**

**Emotion-aware Voice Interfaces** are systems that can detect and respond to the **emotional state** of the user based on their **tone of voice**, **speech patterns**, or **word choice**. For example, if a user sounds frustrated, the system may respond with a **soothing tone** or provide more **patient assistance**. Emotion-aware VIs can create a more **empathetic and personalized** user experience by adapting the system's tone and responses to match the user's emotional state, improving overall user satisfaction. These systems use techniques like **voice sentiment analysis** to detect emotions in speech and adjust the system's responses accordingly.

**Reality-Virtuality Continuum**

The **Reality-Virtuality Continuum** is a concept that explains the spectrum of **reality** and **virtual experiences**. It visualizes the blend between the physical world and the virtual world, showing how technology can mix the two.

- On one end of the spectrum, you have **physical reality** (the real world we see and interact with every day).
- On the other end, you have **virtual reality (VR)**, where everything is computer-generated, and no physical world elements are present.
- In the **middle** is **Mixed Reality (MR)**, where the virtual and real worlds coexist and interact, blending elements from both.

MR lies somewhere between these extremes and allows the user to interact with both real-world objects and virtual ones, with the virtual objects appearing to **exist in the real world**.

**2. Characteristics of MR Systems**

**Mixed Reality systems** combine physical reality with virtual elements, allowing users to **interact with both** in real time. Some key characteristics of MR systems include:

- **Real-time interaction**: Users can interact with both real and virtual objects.
- **Spatial awareness**: The system can track the user's position and adjust virtual content accordingly, making it feel like part of the real world.
- **Integration**: Virtual objects are integrated seamlessly into the real world, interacting with physical objects or the user's actions.
- **Immersion**: MR systems often aim to immerse the user by making the interaction feel as natural and intuitive as possible.

These characteristics make MR systems unique, providing a more **interactive and immersive** experience than VR or AR alone.

**3. Immersion**

**Immersion** refers to the degree to which a user feels **"part of"** the environment or experience. The more immersive the system, the more the user feels **transported into** or **surrounded by** the virtual or mixed environment. This can be achieved through:

- **Visual immersion**: High-quality graphics and a field of view that fills the user's visual space.
- **Audio immersion**: Spatial or 3D sound that enhances the sense of presence.
- **Physical interaction**: Using haptic feedback or motion tracking to involve the user physically in the experience.

The more immersive a system is, the more it feels like the user is **present** in the virtual environment.

**4. Extent of World Knowledge**

**Extent of world knowledge** refers to how much the **system knows** about the real world to create a seamless mix of virtual and real elements. In MR, the system must understand and **model the real-world environment** so that virtual objects can be **properly anchored** and interact with real objects. For example:

- If a user is interacting with a virtual object, the system needs to **know where physical walls, furniture, or people are** to place the virtual objects in the correct positions.
- The system might use sensors, cameras, or depth mapping to gather this world knowledge.

The **better the system's understanding** of the real world, the more realistic and functional the MR experience will be.

**5. Coherence**

**Coherence** refers to how well the **virtual content** fits into the real-world environment and behaves logically. In MR, the virtual elements need to interact with the real world in ways that make sense. For example:

- A virtual ball should **fall to the ground** when dropped, just like a real one.
- If a virtual object interacts with real objects, the response should appear **realistic** (e.g., a virtual object bouncing off a real object).

High coherence means that the user can experience both the real and virtual world **in harmony**, without feeling that something is out of place.
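To illustrate coherence, here is a toy Python sketch of a virtual ball that obeys gravity and bounces when it reaches a floor height the system has detected in the real room. The numbers are invented and a real MR engine would handle this in its physics system; the point is only that virtual objects should behave the way real ones would.

```python
# A virtual ball dropped in mixed reality should fall and bounce like a real one.
# floor_y is assumed to come from the system's scan of the real room (depth mapping).
GRAVITY = -9.81      # m/s^2
floor_y = 0.0        # detected height of the real floor, in metres
restitution = 0.6    # how much speed the ball keeps on each bounce

y, velocity = 1.5, 0.0   # ball starts 1.5 m above the floor, at rest
dt = 0.02                # simulation step, 50 updates per second

for step in range(150):
    velocity += GRAVITY * dt
    y += velocity * dt
    if y <= floor_y:               # the virtual ball hit the real floor
        y = floor_y
        velocity = -velocity * restitution
    if step % 25 == 0:
        print(f"t={step * dt:4.2f}s  height={y:5.2f} m")
```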
**6. Presence / Co-presence**

- **Presence** refers to the feeling of being in a different environment while using MR. It's a psychological experience where users feel like they're truly **part of the mixed reality world**, rather than just observing it.
- **Co-presence** refers to the feeling of **being together** with other people or agents (real or virtual) in the same mixed reality environment. In MR, users can be **co-present** with other users, even if they are physically in different locations.

For example, in an MR game, you might feel present in the game world, and if your friends join the game, you'll experience co-presence, where all of you feel like you're together in that shared virtual space.

**7. Types of Mixed Reality**

There are several types of Mixed Reality, depending on the level of integration between the real and virtual worlds. These types include:

- **Augmented Reality (AR)**: AR adds virtual elements (like images, sounds, or information) **on top** of the real world but does not interact deeply with the real world. For example, Pokémon Go adds virtual creatures to real-world locations.
- **Augmented Virtuality (AV)**: AV is closer to VR but includes some real-world elements. It mixes a mostly virtual environment with a few real-world components, such as a real object or person interacting with the virtual space.
- **True Mixed Reality (MR)**: MR integrates the virtual and real worlds more seamlessly. Virtual objects are **not just overlaid** but **interact** with the physical world, allowing for a more immersive experience. For example, a virtual character could walk around a physical table and react to real objects.

**Movement**

Movement in the context of **Post-Desktop Interfaces** refers to **using physical motion** or **gestures** as an input to interact with a computer or digital system. This represents a shift from traditional desktop interfaces (where you use a mouse or keyboard) to **more natural, body-based interactions**, where gestures or movement are the primary form of input. These systems can detect things like your **hands, arms, or whole body movements** to control digital environments. This technology enables more **intuitive, immersive experiences**, like virtual reality (VR), games, or interactive exhibits.

**2. Full-Body Input, Motion Capture**

**Full-body input** refers to using **your entire body** as a way to interact with digital systems. This can be achieved through **motion capture**, which is the technology that tracks and records a person's movements to translate them into digital input. In systems that use full-body input:

- You might **move or gesture** (like waving your hands or stepping forward) to control or manipulate virtual objects.
- For example, in virtual reality games, players often move around physically, and motion capture tracks those movements to update the virtual world.

**Motion capture** uses sensors or cameras to detect your body's movements and **map them** into the virtual world in real time.

**3. Optical Motion Capture**

**Optical motion capture** is a type of motion tracking where cameras detect markers or patterns on the body to track movement. These markers can be **reflective balls or LED lights** placed on different parts of the body, which the system detects using **infrared or visible light cameras**. In this setup:

- The cameras create a 3D map of the body's movement by calculating how the markers move in space.
- This method is often used in **film production** and **animation** to capture realistic human movement, but it is also used in interactive systems like VR and gaming.

Optical motion capture is accurate and can capture very detailed body movements, making it ideal for applications that require **precise tracking**.
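The "calculating how the markers move in space" step can be sketched with a small numpy example: two calibrated cameras see the same reflective marker, and a direct linear transform (DLT) recovers its 3D position. Triangulation of this kind is a standard approach for multi-camera tracking; the camera parameters and marker position below are invented for the illustration.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Recover a 3D point from one marker seen by two calibrated cameras (DLT)."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # dehomogenize

# Invented intrinsics and poses: camera 2 is shifted 0.5 m to the side of camera 1.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

marker = np.array([0.2, -0.1, 2.0, 1.0])    # ground-truth marker position (homogeneous)
uv1 = (P1 @ marker)[:2] / (P1 @ marker)[2]  # pixel coordinates each camera observes
uv2 = (P2 @ marker)[:2] / (P2 @ marker)[2]

print(triangulate(P1, P2, uv1, uv2))        # approximately [0.2, -0.1, 2.0]
```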
**4. Pattern Projection Cameras**

**Pattern projection cameras** are a type of system that projects **light patterns** (such as grids or dots) onto the user or an object. These patterns help the system track **movement and depth**. The camera then observes how the patterns distort as the user interacts with them or moves, allowing it to **map out 3D positions**. This technology is commonly used in systems that involve **gesture recognition** or **3D scanning**. For instance, a pattern projection system might be used in a **gesture control interface**, where the camera tracks hand movements by projecting a pattern onto the hand or object, and then interpreting the movement based on how the pattern changes.

**5. Hand Tracking (Project Soli)**

**Hand tracking** is a technology that detects and interprets the movements of your hands in the air, allowing for **gesture-based control** of devices without touching them. **Project Soli**, developed by Google, is a specific example of this technology. Project Soli uses **radar** to detect the motion of your hands and fingers. Unlike optical-based systems (which use cameras), radar works by emitting electromagnetic signals that bounce off objects. By analyzing the reflections, the system can determine the precise **position, movement, and gestures** of your hands. This technology enables interactions like **scrolling, swiping, or zooming** just by moving your hands, without the need for a touchscreen or physical contact.

**6. Human Body as an Input Device**

The **human body as an input device** refers to using **body movements** (hands, arms, eyes, or even the full body) to interact with a digital system. This is a key element of **Post-Desktop Interfaces**, where traditional input devices (like a mouse or keyboard) are replaced by **natural, physical actions**. Examples include:

- **Body gestures** to control an interactive display.
- **Eye tracking** to control a system using where the user is looking.
- **Voice commands** (while not strictly body movement, it's another natural input) to interact with a device or system.

Using the human body as an input device is more **intuitive** because it allows people to interact with systems in a **way that feels natural**, like gesturing or speaking.

**7. Input Devices: Properties, Language**

**Input devices** in the context of Post-Desktop Interfaces refer to the tools and technologies used to detect and interpret human actions (such as **movement**, **gestures**, or **speech**) and translate them into digital commands. Key properties of these devices include:

- **Accuracy**: How precisely the device can track and interpret the input.
- **Response Time**: How quickly the device can detect and respond to the input.
- **Range**: The distance over which the device can effectively track input (e.g., how far you can move your hands before the system stops detecting it).
- **Resolution**: The level of detail that the device can capture, such as detecting small or fast movements.

**Input language** refers to how users communicate with the system through the device. For example:

- **Gestures**: Specific hand or body movements that represent a command (e.g., a swipe to scroll).
- **Voice**: Spoken commands or words.
- **Eye movements**: Looking at certain points to trigger actions.

The system must be able to **understand** these different "languages" of input and translate them into meaningful actions.
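The idea of an "input language" can be made concrete with a small dispatch table that maps recognized gestures to commands. The gesture labels and actions below are hypothetical; a real system would receive the labels from a tracker such as the hand-tracking hardware described above.

```python
from typing import Callable

def scroll_down() -> str:
    return "scrolling down"

def go_back() -> str:
    return "navigating back"

def zoom_in() -> str:
    return "zooming in"

# A tiny "input language": each recognized gesture name maps to a system command.
GESTURE_COMMANDS: dict[str, Callable[[], str]] = {
    "swipe_up": scroll_down,     # hypothetical gesture labels from a hand tracker
    "swipe_left": go_back,
    "pinch_out": zoom_in,
}

def handle_gesture(gesture: str) -> str:
    """Translate a recognized gesture into an action, or report it as unrecognized."""
    action = GESTURE_COMMANDS.get(gesture)
    return action() if action else f"unrecognized gesture: {gesture}"

for gesture in ["swipe_up", "pinch_out", "wave"]:
    print(gesture, "->", handle_gesture(gesture))
```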
**Office Automation**

**Office automation** refers to the use of computers and software to **automate** repetitive office tasks like data entry, scheduling, communication, and document management. It's about **streamlining** and **optimizing** office work to save time and improve efficiency. Examples include:

- **Email** for communication.
- **Calendars** for scheduling.
- **Word processors** for document creation and editing.
- **Spreadsheets** for managing data and calculations.

In CSCW, office automation is part of how teams use technology to **enhance collaboration** and reduce manual work.

**2. Groupware**

**Groupware** refers to software tools or systems that support **collaborative work** between individuals or groups. It enables users to share information, communicate, and work together in real-time or asynchronously. Groupware tools can include:

- **Document sharing** (like Google Docs or Dropbox).
- **Instant messaging or chat systems** (like Slack or Microsoft Teams).
- **Collaborative project management tools** (like Trello or Asana).

The goal of groupware is to enhance **communication, coordination, and collaboration** in teams, making it easier for individuals to work together, regardless of their physical location.

**3. Time/Space Groupware Matrix**

The **Time/Space groupware matrix** is a framework used to classify different types of collaborative systems based on two key dimensions:

- **Time**: How synchronously or asynchronously people collaborate.
  - **Synchronous** collaboration happens in real-time (e.g., video calls, chats).
  - **Asynchronous** collaboration happens at different times (e.g., email, forums).
- **Space**: Whether collaboration happens in the **same** physical space or **distributed** across different locations.
  - **Same space** means all participants are in one location (e.g., face-to-face meetings).
  - **Distributed space** means participants are in different locations (e.g., remote teams).

This matrix helps determine which type of **groupware** is best suited for different collaboration needs, depending on time and space constraints.
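The time/space matrix lends itself to a small lookup sketch: each quadrant is keyed by a (time, space) pair and lists the kinds of tools mentioned above. The same-place/asynchronous entry is my own assumed example, since the notes do not give one.

```python
from enum import Enum

class Time(Enum):
    SYNCHRONOUS = "same time"
    ASYNCHRONOUS = "different times"

class Space(Enum):
    SAME_PLACE = "same place"
    DISTRIBUTED = "different places"

# Example tools per quadrant; the same-place/asynchronous entry is an assumed example.
GROUPWARE_MATRIX = {
    (Time.SYNCHRONOUS, Space.SAME_PLACE): ["face-to-face meetings"],
    (Time.SYNCHRONOUS, Space.DISTRIBUTED): ["video calls", "chat"],
    (Time.ASYNCHRONOUS, Space.SAME_PLACE): ["shared office whiteboard"],  # assumption
    (Time.ASYNCHRONOUS, Space.DISTRIBUTED): ["email", "forums"],
}

def suggest_tools(time: Time, space: Space) -> list[str]:
    """Look up groupware suited to a given time/space combination."""
    return GROUPWARE_MATRIX[(time, space)]

print(suggest_tools(Time.ASYNCHRONOUS, Space.DISTRIBUTED))  # ['email', 'forums']
```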
**4. Taxonomy of Collaboration**

A **taxonomy of collaboration** is a classification system that categorizes the different types of **collaborative work** based on the nature of the work and the technology used. The taxonomy helps us understand the variety of **collaborative activities**, which can range from:

- **Simple cooperation**, where people share resources.
- **Complex collaboration**, where people work on interdependent tasks and require ongoing interaction.

By categorizing different collaboration activities, designers can develop tools that support the **specific needs** of each type of collaboration.

**5. Mechanics of Collaboration**

**Mechanics of collaboration** refer to the **structural aspects** or **processes** that enable people to work together. They include:

- **Communication**: How people exchange information (e.g., emails, meetings).
- **Coordination**: How people organize and manage tasks (e.g., assigning roles, tracking progress).
- **Cooperation**: The actual work people do together, often involving **shared goals** and **interactions**.

Understanding these mechanics helps create tools that **facilitate collaboration** more effectively, ensuring that the processes are smooth and efficient.

**6. Articulation Work**

**Articulation work** is the behind-the-scenes effort people put in to make sure that collaborative tasks can be carried out smoothly. It includes:

- **Coordinating** tasks, like making sure everyone knows their role.
- **Resolving conflicts** or mismatches between different parts of the work.
- **Adjusting processes** to account for unexpected issues.

This kind of work is often invisible but essential for collaboration. It's the effort that goes into **aligning people, resources, and tools** so that the work can progress.

**7. Ecologies of Tools**

**Ecologies of tools** refer to the idea that **collaboration** often involves using a combination of different tools, each serving a specific function. Rather than relying on one single tool, teams use an **ecosystem** of tools that are integrated to support different aspects of work. For example, a team might use:

- **Slack** for communication.
- **Google Docs** for document collaboration.
- **Trello** for task management.
- **Zoom** for video calls.

The tools should **work together seamlessly**, creating a **supportive environment** for collaboration. This concept helps designers understand how different tools interact and contribute to the overall collaborative process.

**8. Awareness**

**Awareness** in CSCW refers to the understanding of **who is doing what**, **when**, and **how**, within a collaborative context. It's about staying informed and keeping track of:

- **Task progress**: Who is working on which task and how it's going.
- **Availability**: Knowing when team members are free or busy.
- **Shared understanding**: Ensuring that all collaborators are on the same page and aware of each other's needs and goals.

Maintaining awareness is crucial for effective collaboration, as it helps teams avoid duplication of effort and makes sure everyone is aligned on the project's goals.
