HCI in the Software Process Lecture (7)
University of Science and Technology
Dania Mohamed Ahmed
Summary
This lecture provides an overview of HCI within the software process and covers the software life cycle. It elaborates on activities such as requirements specification, architectural design, coding and unit testing, integration and testing, and maintenance. The material also addresses usability engineering, design rationale and iterative design/prototyping.
Full Transcript
HCI in the Software Process Lec (7)
Dania Mohamed Ahmed

Introduction
The goal is to develop reliable methods for creating effective and user-friendly interactive systems. This involves moving beyond merely identifying design paradigms to thoroughly examining the interactive system design process. The focus is on integrating user-centered design within the broader context of software development frameworks. Software engineering, a key field within computer science, manages the technical and managerial aspects of software system development through its software life cycle. The life cycle describes the activities that take place from the initial concept formation for a software system up until its eventual phasing out and replacement. Human-computer interaction (HCI) considerations are integral to every stage of this life cycle. Thus, designing interactive systems involves applying techniques that are woven throughout the entire life cycle, rather than merely adding another isolated activity.

The Software Life Cycle
A fundamental feature of software engineering is that it provides the structure for applying techniques to develop software systems. The software life cycle is an attempt to identify the activities that occur in software development. These activities must then be ordered in time in any development project, and appropriate techniques must be adopted to carry them through. In software development, there are typically two main parties involved: the customer, who needs the product, and the designer, who creates it. These roles can be filled by different groups or even by individuals who play both roles. It's crucial to differentiate between the customer who commissions the design and the end-user who will ultimately use the system, as they may not always be the same people. For clarity, in this discussion, "customer" will refer to those who work with the design team, while "user" or "end-user" will denote those who interact with the final system.

Activities in the life cycle
The life cycle activities of software development are illustrated in a diagram resembling a waterfall, where each activity flows naturally into the next. However, this waterfall analogy does not fully capture the actual dynamics between these activities. The following description explains the stages of this waterfall model in detail.
1. Requirements specification
2. Architectural design
3. Detailed design
4. Coding and unit testing
5. Integration and testing
6. Maintenance
Figure: The activities in the waterfall model of the software life cycle

Requirements specification
In requirements specification, the designer and customer define what the system should do, focusing on the desired outcomes rather than the implementation details. This process involves gathering information about the work environment, including the functions the software must perform, the context in which it will operate, and its interactions with other products. Requirements are initially documented in the customer's native language, which is more expressive but less precise. They must eventually be translated into a more precise, mathematical-like language suitable for software implementation. This transformation from natural language to executable language is crucial for successful development.
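To make the move from the customer's language to a more precise form concrete, the sketch below restates an informal requirement as a checkable condition. The wording of the requirement, the 0.1-second and 95% thresholds, and the function name are all invented for illustration; they are not part of the lecture.

```python
# Informal requirement (customer's words): "the control panel must respond quickly."
# A precise restatement that design and testing can work from (thresholds are
# invented for illustration): "95% of key presses are acknowledged on screen
# within 0.1 seconds."

def meets_response_requirement(response_times_s: list[float]) -> bool:
    """Check measured response times against the precise form of the requirement."""
    within_limit = sum(1 for t in response_times_s if t <= 0.1)
    return within_limit / len(response_times_s) >= 0.95

print(meets_response_requirement([0.05, 0.08, 0.09, 0.12]))  # False: only 75% within the limit
```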
Architectural design
After specifying what the system should do, the next phase focuses on how to achieve these goals. The first step is to decompose the system into components through architectural design. This process involves breaking down the system into manageable parts, which may be sourced from existing software or developed from scratch. Architectural design not only addresses the functional roles of each component but also outlines their interdependencies and resource-sharing requirements.

Detailed design
Detailed design refines the architectural component descriptions while preserving their intended behavior. Multiple refinements may meet the behavioral requirements, and selecting the best one involves balancing non-functional requirements such as performance and usability. The language used for detailed design should support analysis so that design properties can be evaluated. Additionally, it is crucial to document the design options considered, the final decisions made, and the rationale behind them.

Coding and unit testing
The detailed design of a system component should be structured for implementation in an executable programming language. Once coded, the component is tested against criteria set earlier. Research in this area focuses on two main aspects: automating the coding process from detailed design, often explored through formal methods that view this step as a transformation between mathematical representations, and generating tests automatically from earlier outputs to ensure the code performs correctly.

Integration and testing
Once components are implemented and individually tested, they are integrated according to the architectural design. Further testing ensures the integrated system behaves correctly and uses shared resources properly. Acceptance testing with customers is also performed to confirm that the system meets their requirements; only after this acceptance is the product released to the customer. Additionally, certification may be required from external authorities, such as an aircraft certification board. Regulations such as the European Health and Safety Act and standards such as ISO 9241 mandate that systems must be usable, emphasizing the importance of considering human-computer interaction (HCI) in the design process.

Maintenance
After a product is released, it enters the maintenance phase, which typically constitutes the majority of its life cycle. Maintenance involves fixing errors discovered post-release and updating the system to meet previously unmet requirements. This phase also provides feedback to earlier stages of the development life cycle, informing improvements and adjustments as needed, as shown in the figure below.
Figure: Feedback from maintenance activity to other design activities

Verification
Definition: Verification is the process of evaluating software to ensure that it meets the specified requirements and design specifications. It focuses on whether the system is being built correctly according to the design and requirements.
Purpose: To ensure that the software adheres to its specifications and design documents, and to identify issues early in the development process, before the software is fully integrated or deployed.
Questions answered: Are we building the system right? Does the system meet the design and specification requirements?

Validation
Definition: Validation is the process of assessing whether the software meets the needs and expectations of the end-users and stakeholders. It ensures that the right product has been built and that it performs its intended functions in the real-world environment.
Purpose: To ensure that the software satisfies user needs and works in the intended real-world scenarios, and to confirm that the system is suitable for its intended purpose and provides value to the users.
Questions answered: Are we building the right system? Does the system meet the needs and expectations of users?
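As an invented illustration of the difference, the sketch below shows a verification-style unit check against a hypothetical record_programme() function, and notes why the validation question cannot be settled by such a check alone. The function, its behaviour, and the test data are assumptions for illustration, not part of the lecture.

```python
# A minimal sketch of verification vs. validation. record_programme() is a
# hypothetical system-under-test, not something defined in the lecture.

def record_programme(start, end, channel):
    """Hypothetical function: store a programmed recording."""
    if end <= start:
        raise ValueError("end time must be after start time")
    return {"start": start, "end": end, "channel": channel}

# Verification: "are we building the system right?"
# Check behaviour against the written specification (here, the assumed rule
# that an invalid time range must be rejected).
def test_rejects_invalid_time_range():
    try:
        record_programme(start=2100, end=2000, channel=1)
    except ValueError:
        return  # behaves as specified
    raise AssertionError("specification requires invalid time ranges to be rejected")

test_rejects_invalid_time_range()
print("verification check passed")

# Validation: "are we building the right system?"
# No unit test answers this; it needs acceptance testing with end-users in a
# realistic context, e.g. can a viewer actually programme a recording at home?
```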
Usability Engineering
In user-centered design, usability engineering introduces specific goals and criteria for evaluating a product's usability. This approach emphasizes understanding how usability will be measured based on users' experiences with the product. Although the user interface is a critical focus, effective usability assessment also requires consideration of the system's overall functional architecture and users' cognitive capacities. Usability engineering is typically constrained by its focus on observable, physical interactions rather than the more complex aspects of user interaction. To address usability explicitly, usability engineering includes a usability specification within the requirements specification of the software life cycle. This specification details attributes that affect usability and outlines six specific items for evaluating each attribute, aiming to provide a comprehensive measure of the product's usability. The table below provides an example of a usability specification for the design of a control panel for a video cassette recorder (VCR).

Sample usability specification for undo with a VCR
Attribute: Backward recoverability
Measuring concept: Undo an erroneous programming sequence
Measuring method: Number of explicit user actions to undo current program
Now level: No current product allows such an undo
Worst case: As many actions as it takes to program-in the mistake
Planned level: A maximum of two explicit user actions
Best case: One explicit cancel action

ISO usability standard 9241
ISO 9241 adopts the traditional usability categories:
- Effectiveness: can you achieve what you want to?
- Efficiency: can you do it without wasting effort?
- Satisfaction: do you enjoy the process?

Some metrics from ISO 9241
Usability objective: Suitability for the task
  Effectiveness measure: Percentage of goals achieved
  Efficiency measure: Time to complete a task
  Satisfaction measure: Rating scale for satisfaction
Usability objective: Appropriate for trained users
  Effectiveness measure: Number of power features used
  Efficiency measure: Relative efficiency compared with an expert user
  Satisfaction measure: Rating scale for satisfaction with power features
Usability objective: Learnability
  Effectiveness measure: Percentage of functions learned
  Efficiency measure: Time to learn criterion
  Satisfaction measure: Rating scale for ease of learning
Usability objective: Error tolerance
  Effectiveness measure: Percentage of errors corrected successfully
  Efficiency measure: Time spent on correcting errors
  Satisfaction measure: Rating scale for error handling
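To show how such a specification could be used once measurements exist, the sketch below records the VCR "backward recoverability" attribute from the table above as a small data structure and checks invented user-test observations against the planned level. The numeric worst-case value and the observation data are assumptions for illustration only.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class UsabilitySpec:
    attribute: str
    measuring_concept: str
    measuring_method: str
    worst_case: int     # invented numeric stand-in; in the table it is task-dependent
    planned_level: int  # target agreed in the requirements specification
    best_case: int      # the ideal: one explicit cancel action

undo_spec = UsabilitySpec(
    attribute="Backward recoverability",
    measuring_concept="Undo an erroneous programming sequence",
    measuring_method="Number of explicit user actions to undo current program",
    worst_case=10,
    planned_level=2,
    best_case=1,
)

# Hypothetical observations: explicit undo actions counted per test participant.
observed_actions = [1, 2, 2, 3, 1, 2]

met_planned = [n <= undo_spec.planned_level for n in observed_actions]
print(f"Mean actions to undo: {mean(observed_actions):.1f}")
print(f"Participants meeting the planned level: {sum(met_planned)} of {len(met_planned)}")
```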
Problems with usability engineering
Usability engineering emphasizes establishing explicit usability metrics early in the design process so that a system's usability can be evaluated after delivery. These metrics are crucial for creating usable systems, as they provide empirical data on user performance. However, using these metrics can be problematic, especially early in the design process when specific user actions and situations are not yet well defined. The challenge with usability metrics is that they are based on specific user actions in predefined contexts, which may not be available at the design stage. For example, setting metrics for VCR functionality assumes certain user errors and solutions, such as an undo button, without addressing the root causes of those errors. Usability engineering is also limited in that it ensures compliance with usability specifications but does not guarantee overall usability. Designers must understand how these metrics genuinely improve the user experience. In the VCR example, the assumption that reducing the number of actions improves usability needs careful evaluation to ensure it aligns with real user needs and behaviors.

Iterative Design and Prototyping
Requirements for interactive systems often cannot be fully defined at the start of the software life cycle. To ensure design features are right, they must be built and tested with real users. This process reveals any incorrect assumptions, allowing for modifications and improvements. Iterative design embodies this approach by repeatedly cycling through designs, refining and enhancing the product with each iteration. The challenges that necessitate iterative design are not limited to usability; they affect requirements specification broadly and encompass technical and managerial issues in software engineering. On the technical side, iterative design is realized through the use of prototypes: artifacts that simulate or animate some but not all features of the intended system. There are three main approaches to prototyping:
1. Throw-away
2. Incremental
3. Evolutionary

Throw-away
The prototype is built and tested. The design knowledge gained from this exercise is used to build the final product, but the actual prototype is discarded. The figure below depicts the procedure for using throw-away prototypes to arrive at a final requirements specification so that the rest of the design process can proceed.

Incremental
The final product is built as separate components, one at a time. There is one overall design for the final system, but it is partitioned into independent, smaller components. The final product is then released as a series of products, each subsequent release including one more component, as depicted in the figure below.

Evolutionary
Here the prototype is not discarded and serves as the basis for the next iteration of design. In this case, the actual system is seen as evolving from a very limited initial version to its final release, as depicted in the figure below. Evolutionary prototyping also fits in well with the modifications that must be made to the system during the operation and maintenance activity in the life cycle.

Prototypes vary in functionality and performance relative to the final product. They can range from simple animations with no real functionality to fully functional versions that sacrifice other performance aspects such as speed or error tolerance. The key to a useful prototype is its realism, which is crucial for accurately testing requirements with real users. Ensuring that evaluation conditions closely match those expected in the final system is essential for reliable results. However, achieving high realism can be costly, so designers need efficient methods and support to create realistic prototypes quickly. On the management side, there are several potential problems:
1. Time: Building prototypes takes time, and with throw-away prototypes it can feel like a distraction from the main design. That's why rapid prototyping is important: it allows for quick creation and changes. However, it's essential to distinguish rapid prototyping from rushing evaluations. Fast development shouldn't mean rushing the evaluation process, as this can lead to mistakes and reduce the benefits of prototyping.
2. Planning: Many project managers lack the experience needed to effectively plan and budget for a design process that includes prototyping.
3. Non-functional features: Important aspects such as safety and reliability may be overlooked in prototypes. This can also affect usability, such as how quickly the product responds, which may impact user acceptance. This relates to the challenge of making prototypes realistic, as they may not fully reflect the final product's qualities.
4. Contracts: The design process is shaped by contracts between customers and designers, which involve various management and technical issues. Prototypes can't serve as legal contracts on their own, so detailed documentation is necessary to support the iterative design process. Even though rapid prototyping allows for quick updates, it's still important to maintain thorough documentation to ensure the design process runs smoothly.

Techniques for prototyping
- Storyboards
- Limited functionality simulations
- High-level programming support

Storyboards
Storyboards are visual tools used to illustrate the sequence of interactions and the flow of a system or application. They are similar to comic strips, where each panel represents a step in the user experience.
Purpose: Storyboards help in visualizing user interactions and the overall user journey through the application. They are useful for communicating ideas and scenarios to stakeholders and team members.
Benefits: They provide a clear, visual representation of user interactions and help in understanding and refining user requirements and workflows.
Limitations: They do not offer interactive or functional aspects of the system, so they are less useful for testing detailed functionality or user experience.

Limited Functionality Simulations
Limited functionality simulations are basic, often non-interactive versions of the software that focus on demonstrating core concepts, workflows, or user interfaces.
Purpose: These simulations help stakeholders and users visualize and interact with a basic version of the system to gather feedback on design and functionality.
Benefits: They can be created quickly and are useful for early-stage testing and validation of design concepts. They allow users to explore fundamental features without the need for a fully developed system.
Limitations: Limited functionality simulations might not capture all aspects of user interactions or system performance, and they may not fully represent the final user experience.

High-Level Programming Support
High-level programming support refers to using high-level programming languages or frameworks to build prototypes that are closer to the final product in terms of functionality and interactivity.
Purpose: To create a more functional prototype that can be used for in-depth testing and validation of more complex features and interactions.
Benefits: High-level prototypes can offer a more accurate representation of the final system, allowing for better testing and user feedback on more comprehensive functionality.
Limitations: They can be time-consuming and resource-intensive to create. There is a risk of investing too much effort into a prototype that may undergo significant changes.
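As a small illustration of high-level programming support, the sketch below uses Python's standard tkinter toolkit to mock up a two-button control-panel prototype. The widget labels, the behaviour, and the whole scenario are invented for illustration; the lecture does not prescribe any particular toolkit.

```python
# A minimal sketch of the kind of limited prototype a high-level toolkit makes
# cheap to build. The VCR control-panel widgets and labels are invented.

import tkinter as tk

def main():
    root = tk.Tk()
    root.title("VCR control panel prototype")

    status = tk.StringVar(value="No programme set")
    tk.Label(root, textvariable=status).pack(padx=10, pady=5)

    def set_programme():
        status.set("Programme stored: channel 1, 20:00-21:00")

    def cancel_programme():
        # One explicit action to undo, matching the 'best case' in the
        # usability specification discussed earlier.
        status.set("No programme set")

    tk.Button(root, text="Programme", command=set_programme).pack(padx=10, pady=2)
    tk.Button(root, text="Cancel", command=cancel_programme).pack(padx=10, pady=2)

    root.mainloop()

if __name__ == "__main__":
    main()
```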
Design Rationale
Design rationale refers to the reasoning behind the design choices made during the development of a software system. It captures why certain decisions were made, how different options were evaluated, and how the final design aligns with the goals and requirements of the project. Documenting design rationale is essential for ensuring that design decisions are transparent, justifiable, and can be communicated effectively to stakeholders. Design rationale involves both reflection (doing design rationale) and documentation (creating a design rationale), and it occurs throughout the entire life cycle.

Importance of Design Rationale
1. Clarity and understanding: Provides a clear explanation of why particular design decisions were made, helping team members and stakeholders understand the reasoning behind the design.
2. Justification: Helps in justifying decisions to stakeholders, which is particularly useful when there are conflicting requirements or when changes need to be communicated.
3. Future reference: Acts as a record that can be referenced in future projects or iterations, making it easier to understand the history of design decisions and their impacts.
4. Consistency: Ensures that design decisions are consistent with the project's goals and requirements, and helps in aligning the team towards a common vision.
5. Issue resolution: Assists in identifying and resolving design issues by providing a basis for evaluating whether certain choices were appropriate or if adjustments are needed.

Examples of Design Rationale
User interface design: If a particular user interface layout was chosen over others, the design rationale might include reasons related to user feedback, ease of navigation, and alignment with user goals.
System architecture: If a specific architecture pattern (e.g., microservices vs. monolithic) was chosen, the rationale might address factors like scalability, maintainability, and deployment considerations.

Process-oriented design rationale
IBIS method: This approach uses Rittel's IBIS (Issue-Based Information System) from the 1970s to structure design discussions. It focuses on a central issue or problem, presenting possible solutions as positions debated with supporting or opposing arguments. This creates a hierarchical framework for the discussion.
gIBIS visualization: A graphical version, called gIBIS, represents this structure as a directed graph. In this graph, nodes represent issues, positions, and arguments, while links clarify their relationships, helping designers to understand and edit the design rationale.
Variations of IBIS: There are other versions, both visual and textual, that add elements such as design artifacts or enhanced vocabularies for showing how different issues relate. For example, McCall's Procedural Hierarchy of Issues (PHI) provides a richer set of relationships and can be linked to CAD tools for real-time design feedback.
Overall focus: IBIS and its variations aim to capture and organize the design process by documenting discussions and decisions, rather than generalizing design knowledge across projects.
Figure: The structure of a gIBIS design rationale
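A tiny sketch of the node and link types a gIBIS-style graph records is given below. The issue, positions, arguments, and relation labels are invented for illustration; this is not the gIBIS tool itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    kind: str   # "issue", "position" or "argument"
    text: str

issue     = Node("issue",    "How should an erroneous VCR programme be corrected?")
pos_undo  = Node("position", "Provide a single-step undo of the last programme")
pos_edit  = Node("position", "Let the user edit the stored programme directly")
arg_short = Node("argument", "Undo needs no knowledge of what was entered")
arg_flex  = Node("argument", "Editing also covers partial corrections")

# Directed links, labelled with a gIBIS-style relation between source and target.
links = [
    (pos_undo,  "responds-to", issue),
    (pos_edit,  "responds-to", issue),
    (arg_short, "supports",    pos_undo),
    (arg_flex,  "supports",    pos_edit),
]

for source, relation, target in links:
    print(f"[{source.kind}] {source.text}")
    print(f"    --{relation}--> [{target.kind}] {target.text}")
```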
Design space analysis
QOC notation: Developed by MacLean and colleagues, the Questions, Options, and Criteria (QOC) method focuses on analyzing the design space after design decisions have been made. It structures the design space around key questions that represent the major issues.
Reflection-based: Unlike IBIS, which captures issues in real time, QOC questions are created through reflection to give a complete view of the design challenges.
Options and criteria: For each question, there are potential solutions (options) evaluated against set criteria to find the best choice. In QOC diagrams, solid lines indicate positive assessments, while dashed lines indicate negative ones, highlighting the best option.
Challenges: Effective QOC analysis requires selecting questions and criteria that are specific yet broad, which can be difficult. General principles may not fully capture the context needed for making trade-offs.
Alternative method (DRL): Decision Representation Language (DRL), created by Lee and Lai, offers a more formal structure with a richer vocabulary, helping to manage complex information and decision dependencies. However, it can be complicated to use.
Documentation benefits: QOC aids in documenting parts of the design space for future reference, but it requires extra time and effort, which can be a drawback when resources are tight.
Figure: The QOC notation

Psychological design rationale
Overview: Introduced by Carroll and Rosson, this approach focuses on understanding the psychological aspects of usability in interactive systems so that products align better with user tasks.
Task-artifact cycle: This concept emphasizes that systems should be designed based on current user tasks and potential new tasks. After a system is implemented, users often find new ways to use it, revealing additional tasks and problems that inform future design improvements.
Evolution of tools: Examples such as electronic spreadsheets and word processors show how tools originally designed for specific tasks have evolved to meet a wider range of user needs.
Focus on impact: Psychological design rationale aims to clearly explain how design decisions affect user tasks, moving beyond the designer's original intentions and emphasizing the psychological consequences of those decisions.
Identifying tasks: The first step involves identifying key tasks the system will support by addressing important user questions. For example, if creating a system for learning Smalltalk, the focus might be on what operations are possible and how to perform them.
Creating scenarios: Designers develop scenarios to help users explore these tasks. For example, demo programs can showcase Smalltalk's capabilities, and these should be easily accessible in the system.
Reflection and documentation: After implementation, designers observe how the system is used and document their insights, including assumptions about user behavior (such as learning by doing) and any issues (such as non-interactive demos). This reflection guides improvements for future versions.
Goal: By documenting this rationale, designers aim to better understand how user tasks evolve and to use insights from one design to enhance future designs.
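One possible way to record such a rationale is sketched below, using the lecture's Smalltalk example of a task, a scenario, and the assumed upside and observed downside of a design decision. The field names and structure are invented for illustration and are not Carroll and Rosson's own notation.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    design_feature: str
    expected_upside: str
    observed_downside: str

@dataclass
class RationaleEntry:
    task: str
    scenario: str
    claims: list

entry = RationaleEntry(
    task="Learn what operations are possible in Smalltalk and how to perform them",
    scenario="A newcomer runs the bundled demo programs to see the language's capabilities",
    claims=[
        Claim(
            design_feature="Easily accessible demo programs",
            expected_upside="Supports learning by doing",
            observed_downside="Demos were non-interactive, so exploration stopped early",
        )
    ],
)

print(f"Task: {entry.task}")
for claim in entry.claims:
    print(f"  {claim.design_feature}: + {claim.expected_upside} / - {claim.observed_downside}")
```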