Test Your Knowledge

Questions and Answers

What is the purpose of evaluation in the design process?

• To increase sales of products
• To develop prototypes
• To collect data about users' experiences with design artifacts (correct)
• To create products that are aesthetically pleasing

What are the different evaluation methods used in design?

• Market research, focus groups, and surveys
• Usability testing, experiments, field studies, modeling, and analytics (correct)
• Coding, debugging, and deployment
• Sketching, prototyping, wireframing, and user testing

What are the three broad categories that evaluations can be classified into?

• Lab-based, field-based, and in-the-wild studies
• Low-tech prototypes, complete systems, and screen functions
• Usability testing, experiments, and modeling
• Controlled settings directly involving users, natural settings involving users, and any settings not directly involving users (correct)

What is the difference between lab-based studies and field studies?

Lab-based studies are good at revealing usability problems, but poor at capturing context of use; field studies are good at capturing context of use, but poor at controlling users' activities

What is the purpose of living labs?

To evaluate people's everyday lives and habits over a period of several months

What is heuristic evaluation?

An evaluation method that applies knowledge of typical users to identify usability problems

What is the purpose of opportunistic evaluations?

To provide designers with feedback quickly

What is the main advantage of using a chatbot like Ethnobot to collect evaluation data in outdoor settings?

Researchers can collect data from participants on the move or in places that are hard for researchers to access

What is the main purpose of evaluation in design?

To collect data and analyze users' experiences to improve design

Which of the following is NOT a focus of evaluation in design?

Marketing

What are some of the different methods used in evaluation?

Observing participants, measuring performance, modeling user behavior, and analytics

Why are evaluations important in design?

To ensure designs are appropriate and acceptable for the target user population and to fix problems before the product goes on sale

What can be evaluated in design?

Low-tech prototypes, complete systems, particular screen functions, and safety features

What are some types of evaluation studies?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What are the advantages of living labs?

They provide a compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies

What is usability testing?

Collecting data using a combination of methods in a controlled setting to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed

What are field studies used for?

To evaluate products with users in their natural settings to help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts

What is crowdsourcing in HCI?

Collecting data from large numbers of people who take part online; Amazon Mechanical Turk, for example, has thousands of registered workers who perform small activities for a very small reward

What are some advantages of using a bot to collect in-the-wild evaluation data?

It is particularly useful when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access

What is evaluation in design?

A process that involves data collection and analysis of users' experiences with design artifacts to improve their design

What are the two main aspects of evaluation?

Usability and user experience

What are the different types of evaluation studies?

Usability testing, experiments, field studies, and analytics

What are the advantages of controlled settings involving users in evaluation studies?

They are good at revealing usability problems

What are the advantages of natural settings involving users in evaluation studies?

They allow for unexpected data and insights into real-world usage

What are living labs?

Labs that simulate real-life environments for evaluation studies

What are the advantages of using a bot to collect in-the-wild evaluation data?

It is useful when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access

What is crowdsourcing in HCI?

A process that involves collecting data from a large group of people online

What is the purpose of opportunistic evaluations in the design process?

To provide designers with quick feedback

What is the purpose of field studies in HCI?

To evaluate products with users in their natural settings

What is the purpose of analytics in HCI?

To measure and optimize web usage and assess learning in MOOCs and OERs

What is the purpose of models in HCI?

To compare the efficacy of different interfaces for the same application

What is the main purpose of evaluation in the design process?

To collect data on users' experiences with design artifacts

What are the two main focuses of evaluation?

Usability and user experience

What are some examples of where evaluation can occur?

Labs, people's homes, outdoors, and work settings

Why are evaluations important?

To ensure that designs are appropriate and acceptable for the target user population and to fix problems before the product goes on sale

What can be evaluated in the design process?

Everything from low-tech prototypes to complete systems, from a particular screen function to the whole workflow, and from aesthetic design to safety features

What are the different types of evaluation studies?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What is the difference between controlled settings and natural settings in evaluation?

Controlled settings are good at revealing usability problems, but poor at capturing context of use; natural settings are good at capturing context of use, but have little control over users' activities

What are living labs?

A compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies

What is usability testing?

Collecting data using a combination of methods in a controlled setting to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed

What are field studies used for?

To evaluate products with users in their natural settings to help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts

What are some examples of evaluation methods?

Observation, interviews, interaction logging, web analytics, learning analytics, and physiological responses

What is crowdsourcing in HCI?

Collecting data from large numbers of people who take part online by performing various activities, known as human intelligence tasks (HITs), for a very small reward; Amazon Mechanical Turk, for example, has thousands of registered workers

What is the purpose of evaluation in design?

To collect data on user experiences

What are the two main areas of focus for evaluation?

Usability and user experience

What are some of the locations where evaluation can take place?

Labs, people's homes, outdoors, and work settings

Why are evaluations important in design?

To ensure that designs are appropriate and acceptable for the target user population

What are some of the things that can be evaluated in design?

Low-tech prototypes, complete systems, particular screen functions, and safety features

What are the different types of evaluation studies?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What are the advantages and disadvantages of controlled settings involving users?

Good at revealing usability problems, but poor at capturing context of use

What are the advantages and disadvantages of natural settings involving users?

Good at capturing context of use, but poor at revealing usability problems

What are living labs?

A compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies

What are the different types of evaluation approaches in HCI?

Usability testing, modeling and predicting, and analytics

What is crowdsourcing in HCI?

Collecting data from large numbers of people who take part online; Amazon Mechanical Turk, for example, has thousands of registered workers who perform various activities for a very small reward

What are some of the advantages of using a bot to collect in-the-wild evaluation data?

It can be used when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access

What is the purpose of evaluation in design?

To collect data on user experiences

What are the two main focuses of evaluation?

Usability and user experience

What are some locations where evaluation can take place?

Labs, people's homes, outdoors, and work settings

What is the importance of evaluations in design?

To ensure designs are appropriate and acceptable for the target user population

What are some types of evaluations?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What are some advantages of controlled settings in evaluations?

Revealing usability problems

What are some advantages of natural settings in evaluations?

Capturing context of use

What are living labs?

A compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies

What are some evaluation approaches in HCI?

Usability testing, modeling and predicting, and analytics

What is crowdsourcing in HCI?

Collecting data from large numbers of people who take part online; Amazon Mechanical Turk, for example, has thousands of registered workers who perform various activities for a very small reward

What are some advantages of crowdsourcing in HCI?

It is more flexible, relatively inexpensive, and often much quicker for enrolling participants than traditional lab studies

What are some methods used in field studies?

Observation, interviews, and interaction logging to record events and conversations

What is evaluation in the design process?

A process that involves collecting and analyzing data to improve design artifacts

What are the two main focuses of evaluation?

Usability and user experience

What are some settings where evaluation can occur?

Labs, people's homes, outdoors, and work settings

Why are evaluations important?

To ensure that designs are appropriate and acceptable for the target user population

What are the different types of evaluation studies?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What is usability testing?

Collecting data using a combination of methods in a controlled setting to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed

What are some methods used in field studies?

Observation, interviews, and interaction logging to record events and conversations

What is crowdsourcing in HCI?

Collecting data from large numbers of people who take part online; Amazon Mechanical Turk, for example, has thousands of registered workers who perform various activities for a very small reward

What is evaluation in design?

A process of collecting and analyzing data to improve design

What are the two main focuses of evaluation in design?

Usability and user experience

What are the different types of evaluation studies?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What are the advantages of using living labs for evaluation?

They provide a compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies

What is usability testing?

Collecting data to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed

What are the advantages of field studies?

They can help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts

What is the purpose of inspection methods in evaluation?

To predict user behavior and identify usability problems

What is crowdsourcing in HCI?

Collecting data from large numbers of people who take part online by performing various activities, known as human intelligence tasks (HITs), for a very small reward; Amazon Mechanical Turk, for example, has thousands of registered workers

What is the purpose of opportunistic evaluations?

To provide designers with quick feedback

What are the two main types of data collected in Case Study 2?

Participants' online responses to pre-established questions and their additional open-ended comments and photos in response to prompts from Ethnobot

What is the advantage of using a bot to collect in-the-wild evaluation data?

It is particularly valuable when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access

What are some of the terms used to describe evaluation in the text?

Analytics, bias, controlled experiment, crowdsourcing, ecological validity, expert review or crit, field study, formative evaluation, heuristic evaluation, informed consent form, and in-the-wild study

What is evaluation in the design process?

A data collection and analysis process to improve the design

What does evaluation focus on in the design process?

Both usability and user experience

What are the different types of evaluation studies?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What is the purpose of evaluations in design?

To ensure designs are appropriate and acceptable for the target user population and to fix problems before the product goes on sale

What are the different types of settings where evaluation can occur?

Labs, people's homes, outdoors, and work settings

What is the downside of using controlled settings in evaluation?

They are poor at capturing context of use

What are the advantages of natural settings involving users in evaluation?

They are good at capturing context of use

What are living labs and how are they useful in evaluation?

Living labs provide a compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies

What is usability testing in HCI?

Collecting data to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed

What is the purpose of field studies in HCI?

To evaluate products with users in their natural settings to help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts

What is the advantage of using crowdsourcing in HCI?

It is more flexible, relatively inexpensive, and often much quicker to enroll participants than with traditional lab studies

What is the purpose of case studies in user evaluation?

To provide valuable insights into user experience and inform the design process

What is the purpose of evaluation in design?

To collect data and analyze users' experiences with design artifacts

What are the two main focuses of evaluation?

Usability and user experience

What are the different types of evaluation studies?

Usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics

What is the downside of using natural settings involving users for evaluation?

It makes it difficult to anticipate what is going to happen and to be present when something interesting does happen

What are living labs?

A compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies

What is the purpose of usability testing?

To determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed

What is the purpose of field studies?

To evaluate products with users in their natural settings to help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts

What is the purpose of analytics?

To measure and optimize web usage and assess learning in MOOCs and OERs

What is the purpose of inspection methods?

To predict user behavior and identify usability problems

What is the purpose of ethnographic data gathering?

To collect participants' experiences, impressions, and feelings as they interact with a product in their natural setting

What is the purpose of crowdsourcing in HCI?

To enroll participants quickly and inexpensively for various activities online

What is the purpose of opportunistic evaluations?

To provide designers with quick feedback early in the design process

    Study Notes

    Introduction to Evaluation in Design

    • Evaluation is integral to the design process, involving data collection and analysis about users' experiences with design artifacts to improve the design's usability and user experience.

    • Different evaluation methods are used for different purposes, such as usability testing, experiments, field studies, modeling, and analytics.

    • Evaluation can take place in different settings, such as labs, natural settings, or a compromise between the two, known as living labs.

    • The importance of evaluation lies in meeting user needs and wants, creating products that are pleasing and engaging, and ensuring well-designed products sell.

    • What to evaluate ranges from low-tech prototypes to complete systems, from a particular screen function to the whole workflow, and from aesthetic design to safety features.

    • Evaluations can be conducted in the formative stage of design to check that a product meets users' needs, or in the summative stage to assess the success of a finished product.

    • Rapid iterations of product development that embed evaluations into short cycles of design, build, and test are common.

    • Evaluations can be classified into three broad categories: controlled settings directly involving users, natural settings involving users, and any settings not directly involving users.

    • Lab-based studies are good at revealing usability problems, but poor at capturing context of use; field studies are good at capturing context of use, but poor at controlling users' activities.

    • Different types of evaluations are needed depending on the type of product, prototype or design concept, and the value of the evaluation to the designers, developers, and users.

• Agencies such as NIST, ISO, and BSI set the standards by which particular types of products have to be evaluated, such as aircraft navigation systems and consumer products that have safety implications for users.

• WCAG 2.1 describes how to design websites so that they are accessible, and is discussed in more detail in Box 16.2.

Types of Evaluation in Human-Computer Interaction

    • Different evaluation approaches are available in human-computer interaction, including experiments, user tests, modeling and predicting, and analytics.

    • The choice of evaluation approach depends on the project's goals and desired level of control over the evaluation setting.

• Usability testing is a common approach to evaluating user interfaces, involving a combination of methods in a controlled setting to assess usability and user satisfaction (a short sketch appears at the end of this section).

    • Usability testing is widely used in UX design and has started to gain more prominence in other fields, such as healthcare.

    • Experiments are designed to control what users do and reduce outside influences to reliably test specific interface features and hypotheses.

    • Field studies aim to evaluate products with users in their natural settings and are used to identify opportunities for new technology, establish design requirements, and facilitate technology deployment.

    • Field studies methods typically include observation, interviews, and interaction logging, with data taking the form of recorded events and conversations.

    • In-the-wild studies are field studies that observe how new technologies or prototypes are deployed and used by people in various settings, with researchers giving up some control over the evaluation.

    • Living labs have been developed to evaluate people's everyday lives and habits over a period of several months, using ambient-assisted homes and wearable devices to measure health and behavior.

    • Citizen science, in which volunteers work with scientists to collect data on scientific research issues, can also be thought of as a type of living lab.

    • Living labs are being developed that form an integral part of smart buildings to investigate the effects of different configurations of building features on human experiences.

• The challenge with living labs is finding the right balance between natural and experimental settings to enable research and evaluation without losing the sense of it being natural.
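To make the usability-testing approach described earlier in this section concrete, here is a minimal, illustrative Python sketch of the kind of quantitative summary a test might produce. The session records, field names, and numbers are hypothetical, not from any particular study or tool:

```python
from statistics import mean, median

# Hypothetical usability-test records: one dict per participant-task attempt.
sessions = [
    {"participant": "P1", "task": "checkout", "seconds": 74, "errors": 1, "completed": True},
    {"participant": "P2", "task": "checkout", "seconds": 102, "errors": 3, "completed": True},
    {"participant": "P3", "task": "checkout", "seconds": 188, "errors": 5, "completed": False},
]

completed = [s for s in sessions if s["completed"]]
completion_rate = len(completed) / len(sessions)

print(f"Completion rate: {completion_rate:.0%}")
print(f"Mean time on task (completed only): {mean(s['seconds'] for s in completed):.0f}s")
print(f"Median error count: {median(s['errors'] for s in sessions)}")
```

In practice these measures would be combined with satisfaction data, such as questionnaire scores, rather than read in isolation.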

Evaluation Methods and Case Studies

• Field studies are used to examine social processes in online communities and games.

    • Virtual field studies are used in geological and biological sciences to supplement field studies.

    • Evaluation methods not involving users use inspection methods to predict user behavior and identify usability problems.

• Heuristic evaluation applies knowledge of typical users to identify usability problems (see the sketch at the end of this section).

    • Cognitive walk-throughs focus on evaluating designs for ease of learning.

    • Analytics is a technique for logging and analyzing data to understand and optimize web usage.

    • Learning analytics are useful for guiding course and program design and evaluating pedagogical decision-making.

    • Models are used to compare the efficacy of different interfaces for the same application.

    • Combinations of methods are often used to obtain a richer understanding of usability problems.

    • Controlled settings are useful for testing hypotheses, while uncontrolled settings provide unexpected data and insights.

    • Opportunistic evaluations are conducted early in the design process to provide designers with feedback quickly.

• Case studies illustrate evaluation methods in different settings, such as an experiment investigating a computer game and gathering ethnographic data at a show using a live chatbot.
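As a rough illustration of how heuristic-evaluation findings might be recorded and summarized, here is a hypothetical Python sketch. The heuristic names follow Nielsen's well-known set and the 0-4 scale mirrors Nielsen's severity ratings, but the findings themselves are invented:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str   # e.g., one of Nielsen's ten usability heuristics
    location: str    # where in the interface the problem was observed
    severity: int    # 0 (not a problem) .. 4 (usability catastrophe)
    note: str

# Illustrative findings an expert evaluator might record.
findings = [
    Finding("Visibility of system status", "upload screen", 3, "No progress indicator"),
    Finding("Error prevention", "payment form", 4, "Card number field accepts letters"),
    Finding("Consistency and standards", "settings", 1, "Mixed button casing"),
]

# Group by heuristic so the most problematic areas stand out.
by_heuristic = defaultdict(list)
for f in findings:
    by_heuristic[f.heuristic].append(f)

for heuristic, items in sorted(by_heuristic.items()):
    worst = max(i.severity for i in items)
    print(f"{heuristic}: {len(items)} finding(s), worst severity {worst}")
```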

Evaluation Case Studies: Using Ethnobot to Collect Data in Outdoor Settings

• Researchers conducted an evaluation study using Ethnobot, a chatbot, to collect data in a natural outdoor setting at the Royal Highland Show in Scotland.

    • Participants spent an average of 120 minutes with Ethnobot on each session and recorded an average of 71 responses, while submitting an average of 12 photos.

    • The Ethnobot collected answers to a specific set of predetermined questions (closed questions) and prompted participants for additional information and photographs.

• The pre-established comments collected in the Ethnobot chatlogs were analyzed quantitatively by counting the responses (see the sketch at the end of this section).

• The in-person interviews were audio-recorded, transcribed, and coded for analysis by two researchers who cross-checked each other's analysis for consistency.

    • Participants responded well to prompting by the Ethnobot and were eager to add more information.

    • The most frequent response was "I learned something" followed by "I tried something" and "I enjoyed something."

    • Participants provided more detail about their experiences and feelings in response to the in-person interview questions than to those presented by Ethnobot.

    • The researchers concluded that while there are some challenges to using a bot to collect in-the-wild evaluation data, there are also advantages, particularly when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access.

    • Crowdsourcing is a powerful tool for improving, enhancing, and scaling up a wide range of tasks, including HCI research.

    • Online crowdsourcing studies have raised ethical questions about whether participants are being appropriately rewarded and acknowledged.

    • The case studies provide examples of different evaluation methods used in different physical settings that involve users in different ways to answer various kinds of questions.
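The quantitative counting of pre-established Ethnobot responses mentioned above could be as simple as the following Python sketch. The chatlog sample here is invented, though the response labels come from the study as summarized in this section:

```python
from collections import Counter

# Hypothetical chatlog: the pre-established response each participant
# chose at each prompt (labels taken from the study's reported responses).
chatlog = [
    "I learned something", "I tried something", "I learned something",
    "I enjoyed something", "I learned something", "I tried something",
]

counts = Counter(chatlog)
for response, n in counts.most_common():
    print(f"{response}: {n}")
```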

    Introducing Evaluation in Design

    • Evaluation is integral to the design process, involving data collection and analysis of users' experiences with design artifacts to improve their design.

    • Evaluation focuses on both usability and user experience, and different methods are used depending on the goals of the evaluation.

• Evaluation can occur in labs, people's homes, outdoors, and work settings, and can involve observing participants, measuring their performance, modeling user behavior, and using analytics.

    • Evaluations are important to ensure that designs are appropriate and acceptable for the target user population and to fix problems before the product goes on sale.

    • What to evaluate ranges from low-tech prototypes to complete systems, from a particular screen function to the whole workflow, and from aesthetic design to safety features.

    • Where evaluation takes place depends on what is being evaluated, and evaluations can occur in controlled settings, natural settings, and settings not directly involving users.

    • The stage in the product lifecycle when evaluation takes place depends on the type of product and the development process being followed, with formative and summative evaluations being carried out.

    • There are different types of evaluation studies, including usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics.

    • Controlled settings involving users are good at revealing usability problems, but poor at capturing context of use; natural settings involving users are good at capturing context of use, but have little control over users' activities; and settings not directly involving users use consultants and researchers to critique, predict, and model aspects of the interface.

    • Living labs provide a compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies.

    • Different types of evaluations are needed depending on the type of product, the prototype or design concept, and the value of the evaluation to the designers, developers, and users.

    • Evaluation enables designers to focus on real problems and the needs of different user groups and make informed decisions about the design, rather than on debating personal preferences.

• Rapid iterations of product development that embed evaluations into short cycles of design, build, and test (evaluate) are common, and many agencies set standards for how particular types of products should be evaluated.

Types of Evaluation in Human-Computer Interaction

    • Evaluation approaches in HCI include usability testing, modeling and predicting, and analytics, each with its own advantages and limitations.

    • Usability testing involves collecting data using a combination of methods in a controlled setting to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed.

    • Usability testing is a fundamental, essential HCI process that has been used for many years and is used in the development of standard products.

    • Experiments and user tests are designed to control what users do, when they do it, and for how long to reduce outside influences and distractions that might affect the results.

    • Field studies are used to evaluate products with users in their natural settings to help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts.

    • In-the-wild studies look at how new technologies or prototypes have been deployed and used by people in various settings, such as outdoors, in public places, and in homes.

    • Living labs have been developed to evaluate people’s everyday lives, which would be simply too difficult to assess in usability labs, for example, to investigate people’s habits and routines over a period of several months.

• Methods used in field studies include observation, interviews, and interaction logging to record events and conversations (a logging sketch appears at the end of this section).

    • Diary studies require people to document their activities or feelings at certain times, and this can make them reflect on and possibly change their behavior.

    • One downside of handing over control in field studies is that it makes it difficult to anticipate what is going to happen and to be present when something interesting does happen.

    • Natural settings involving users may be used to evaluate products when it is too disruptive to evaluate a design in a laboratory setting, such as in a military conflict.

• Living labs are being developed that form an integral part of a smart building that can be adapted for different conditions to investigate the effects of different configurations of lighting, heating, and other building features on the inhabitant's comfort, work productivity, stress levels, and well-being.
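As a sketch of the interaction logging mentioned above, the following hypothetical Python snippet appends timestamped events to a JSON-lines file. The event names, fields, and file path are illustrative assumptions, not part of any particular study:

```python
import json
import time

LOG_PATH = "interaction_log.jsonl"  # hypothetical output file

def log_event(event_type: str, **details) -> None:
    """Append one timestamped interaction event as a JSON line."""
    record = {"t": time.time(), "event": event_type, **details}
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example events a prototype might emit during a field deployment.
log_event("screen_view", screen="map")
log_event("tap", target="search_button")
log_event("audio_note", duration_s=12.4)
```

A log in this form can later be replayed or aggregated to reconstruct what participants actually did in the field.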

Methods and Case Studies in User Experience Evaluation

• Field studies are used to examine social processes in online communities and supplement studies in geological and biological sciences.

    • Inspection methods, such as heuristic evaluation and cognitive walkthroughs, are used to predict user behavior and identify usability problems.

    • Analytics, including web analytics and learning analytics, are used to measure and optimize web usage and assess learning in MOOCs and OERs.

• Models, such as Fitts' law, are used to compare the efficacy of different interfaces for the same application (see the sketch at the end of this section).

    • Combinations of methods are often used to obtain a richer understanding of user experience.

    • Controlled settings provide the ability to test hypotheses about specific interface features, while uncontrolled settings allow for unexpected data and insights into real-world usage.

    • Opportunistic evaluations are informal evaluations done early in the design process to provide designers with quick feedback.

    • Case Study 1: An experiment using physiological responses to evaluate engagement in an online ice hockey game found that playing against a friend was more exciting than playing against a computer.

    • Case Study 2: Ethnographic data was gathered at a large agricultural show using a live chatbot called Ethnobot to collect participants' experiences, impressions, and feelings as they wandered around the show.

    • Ethnobot asked pre-established questions and prompted participants to expand on their answers and take photos.

    • Two main types of data were collected: participants' online responses to pre-established questions and their additional open-ended comments and photos in response to prompts from Ethnobot.

• Field studies, inspection methods, analytics, models, and case studies can all provide valuable insights into user experience and inform the design process.
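The Fitts' law model mentioned above can be written down directly. Using the common Shannon formulation MT = a + b * log2(D/W + 1), where D is the distance to a target and W is its width, the sketch below compares two hypothetical target layouts; the constants a and b are device-specific and would normally be fitted from measured data, so the defaults here are placeholders:

```python
import math

def fitts_mt(distance: float, width: float, a: float = 0.2, b: float = 0.1) -> float:
    """Predicted movement time (seconds) under the Shannon formulation
    of Fitts' law: MT = a + b * log2(D/W + 1).
    a and b are placeholder constants, not measured values."""
    return a + b * math.log2(distance / width + 1)

# Compare two hypothetical layouts for the same target:
# a small, far button versus a large, near one.
print(f"Small, far target:  {fitts_mt(distance=400, width=20):.3f} s")
print(f"Large, near target: {fitts_mt(distance=100, width=60):.3f} s")
```

Comparing predicted movement times like this is how such models let designers weigh alternative interface layouts without running a full user test.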

Case Studies in User Evaluation

• The study involved participants interacting with the Ethnobot in a natural outdoor setting at the Royal Highland Show in Scotland.

    • Data was collected through pre-established comments, in-person interviews, and open-ended online comments.

    • Participants spent an average of 120 minutes with the Ethnobot on each session and recorded an average of 71 responses, while submitting an average of 12 photos.

    • The most frequent response was "I learned something" followed by "I tried something" and "I enjoyed something."

    • Participants responded well to prompting by the Ethnobot and were eager to add more information.

    • Participants provided more detail about their experiences and feelings in response to the in-person interview questions than to those presented by Ethnobot.

    • The researchers concluded that using a bot to collect in-the-wild evaluation data has advantages, particularly when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access.

• Crowdsourcing recruits large numbers of people who take part online by performing various activities, known as human intelligence tasks (HITs), for a very small reward; Amazon Mechanical Turk, for example, is a service hosted by Amazon with thousands of registered workers (see the sketch at the end of this section).

    • Crowdsourcing in HCI is more flexible, relatively inexpensive, and often much quicker to enroll participants than with traditional lab studies.

    • Crowdsourcing can be a powerful tool for improving, enhancing, and scaling up a wide range of tasks.

    • The case studies demonstrate how researchers exercise different levels of control in different settings and how it is necessary to be creative when working with innovative systems and when dealing with constraints created by the evaluation setting and the technology being evaluated.

    • The text defines terms describing evaluation such as analytics, bias, controlled experiment, crowdsourcing, ecological validity, expert review or crit, field study, formative evaluation, heuristic evaluation, informed consent form, and in-the-wild study.
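As a final illustration of why crowdsourced redundancy is useful, here is a minimal Python sketch that aggregates labels from several workers by majority vote, a common way of turning noisy crowd judgments into a single answer per item. The items, labels, and vote counts are all invented, and the aggregation technique is a standard one rather than anything prescribed by the text:

```python
from collections import Counter

# Hypothetical labels collected from three crowd workers per item,
# e.g., via human intelligence tasks (HITs).
labels = {
    "image_01": ["cat", "cat", "dog"],
    "image_02": ["dog", "dog", "dog"],
    "image_03": ["cat", "bird", "cat"],
}

# Majority vote: keep the most common judgment for each item.
for item, votes in labels.items():
    winner, n = Counter(votes).most_common(1)[0]
    print(f"{item}: {winner} ({n}/{len(votes)} votes)")
```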

    Introducing Evaluation in Design

    • Evaluation is integral to the design process, involving data collection and analysis of users' experiences with design artifacts to improve their design.

    • Evaluation focuses on both usability and user experience, and different methods are used depending on the goals of the evaluation.

    • Evaluation can occur in labs, people's homes, outdoors, and work settings, and can involve both observing participants and measuring their performance and modeling user behavior and analytics.

    • Evaluations are important to ensure that designs are appropriate and acceptable for the target user population and to fix problems before the product goes on sale.

    • What to evaluate ranges from low-tech prototypes to complete systems, from a particular screen function to the whole workflow, and from aesthetic design to safety features.

    • Where evaluation takes place depends on what is being evaluated, and evaluations can occur in controlled settings, natural settings, and settings not directly involving users.

    • The stage in the product lifecycle when evaluation takes place depends on the type of product and the development process being followed, with formative and summative evaluations being carried out.

    • There are different types of evaluation studies, including usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics.

    • Controlled settings involving users are good at revealing usability problems, but poor at capturing context of use; natural settings involving users are good at capturing context of use, but have little control over users' activities; and settings not directly involving users use consultants and researchers to critique, predict, and model aspects of the interface.

    • Living labs provide a compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies.

    • Different types of evaluations are needed depending on the type of product, the prototype or design concept, and the value of the evaluation to the designers, developers, and users.

    • Evaluation enables designers to focus on real problems and the needs of different user groups and make informed decisions about the design, rather than on debating personal preferences.

    • Rapid iterations of product development that embed evaluations into short cycles of design, build, and test (evaluate) are common, and many agencies set standards for how particular types of products should be evaluated.Types of Evaluation in Human-Computer Interaction

    • Evaluation approaches in HCI include usability testing, modeling and predicting, and analytics, each with its own advantages and limitations.

    • Usability testing involves collecting data using a combination of methods in a controlled setting to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed.

    • Usability testing is a fundamental, essential HCI process that has been used for many years and is used in the development of standard products.

    • Experiments and user tests are designed to control what users do, when they do it, and for how long to reduce outside influences and distractions that might affect the results.

    • Field studies are used to evaluate products with users in their natural settings to help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts.

    • In-the-wild studies look at how new technologies or prototypes have been deployed and used by people in various settings, such as outdoors, in public places, and in homes.

    • Living labs have been developed to evaluate people’s everyday lives, which would be simply too difficult to assess in usability labs, for example, to investigate people’s habits and routines over a period of several months.

    • Methods used in field studies include observation, interviews, and interaction logging to record events and conversations.

    • Diary studies require people to document their activities or feelings at certain times, and this can make them reflect on and possibly change their behavior.

    • One downside of handing over control in field studies is that it makes it difficult to anticipate what is going to happen and to be present when something interesting does happen.

    • Natural settings involving users may be used to evaluate products when it is too disruptive to evaluate a design in a laboratory setting, such as in a military conflict.

    • Living labs are being developed that form an integral part of a smart building that can be adapted for different conditions to investigate the effects of different configurations of lighting, heating, and other building features on the inhabitant’s comfort, work productivity, stress levels, and well-being.Methods and Case Studies in User Experience Evaluation

    • Field studies are used to examine social processes in online communities and supplement studies in geological and biological sciences.

    • Inspection methods, such as heuristic evaluation and cognitive walkthroughs, are used to predict user behavior and identify usability problems.

    • Analytics, including web analytics and learning analytics, are used to measure and optimize web usage and assess learning in MOOCs and OERs.

    • Models, such as Fitts' law, are used to compare the efficacy of different interfaces for the same application.

    • Combinations of methods are often used to obtain a richer understanding of user experience.

    • Controlled settings provide the ability to test hypotheses about specific interface features, while uncontrolled settings allow for unexpected data and insights into real-world usage.

    • Opportunistic evaluations are informal evaluations done early in the design process to provide designers with quick feedback.

    • Case Study 1: An experiment using physiological responses to evaluate engagement in an online ice hockey game found that playing against a friend was more exciting than playing against a computer.

    • Case Study 2: Ethnographic data was gathered at a large agricultural show using a live chatbot called Ethnobot to collect participants' experiences, impressions, and feelings as they wandered around the show.

    • Ethnobot asked pre-established questions and prompted participants to expand on their answers and take photos.

    • Two main types of data were collected: participants' online responses to pre-established questions and their additional open-ended comments and photos in response to prompts from Ethnobot.

    • Field studies, inspection methods, analytics, models, and case studies can all provide valuable insights into user experience and inform the design process.Case Studies in User Evaluation

    • The study involved participants interacting with the Ethnobot in a natural outdoor setting at the Royal Highland Show in Scotland.

    • Data was collected through pre-established comments, in-person interviews, and open-ended online comments.

    • Participants spent an average of 120 minutes with the Ethnobot on each session and recorded an average of 71 responses, while submitting an average of 12 photos.

    • The most frequent response was "I learned something" followed by "I tried something" and "I enjoyed something."

    • Participants responded well to prompting by the Ethnobot and were eager to add more information.

    • Participants provided more detail about their experiences and feelings in response to the in-person interview questions than to those presented by Ethnobot.

    • The researchers concluded that using a bot to collect in-the-wild evaluation data has advantages, particularly when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access.

    • Crowdsourcing is a service hosted by Amazon that has thousands of people registered, who have volunteered to take part by performing various activities online, known as human intelligence tasks (HITs), for a very small reward.

    • Crowdsourcing in HCI is more flexible, relatively inexpensive, and often much quicker to enroll participants than with traditional lab studies.

    • Crowdsourcing can be a powerful tool for improving, enhancing, and scaling up a wide range of tasks.

    • The case studies demonstrate how researchers exercise different levels of control in different settings and how it is necessary to be creative when working with innovative systems and when dealing with constraints created by the evaluation setting and the technology being evaluated.

    • The text defines terms describing evaluation such as analytics, bias, controlled experiment, crowdsourcing, ecological validity, expert review or crit, field study, formative evaluation, heuristic evaluation, informed consent form, and in-the-wild study.

    Introducing Evaluation in Design

    • Evaluation is integral to the design process, involving data collection and analysis of users' experiences with design artifacts to improve their design.

    • Evaluation focuses on both usability and user experience, and different methods are used depending on the goals of the evaluation.

    • Evaluation can occur in labs, people's homes, outdoors, and work settings, and can involve both observing participants and measuring their performance and modeling user behavior and analytics.

    • Evaluations are important to ensure that designs are appropriate and acceptable for the target user population and to fix problems before the product goes on sale.

    • What to evaluate ranges from low-tech prototypes to complete systems, from a particular screen function to the whole workflow, and from aesthetic design to safety features.

    • Where evaluation takes place depends on what is being evaluated, and evaluations can occur in controlled settings, natural settings, and settings not directly involving users.

    • The stage in the product lifecycle when evaluation takes place depends on the type of product and the development process being followed, with formative and summative evaluations being carried out.

    • There are different types of evaluation studies, including usability testing, experiments, field studies, inspections, heuristics, walk-throughs, models, and analytics.

    • Controlled settings involving users are good at revealing usability problems, but poor at capturing context of use; natural settings involving users are good at capturing context of use, but have little control over users' activities; and settings not directly involving users use consultants and researchers to critique, predict, and model aspects of the interface.

    • Living labs provide a compromise between the artificial, controlled context of a lab and the natural, uncontrolled nature of in-the-wild studies.

    • Different types of evaluations are needed depending on the type of product, the prototype or design concept, and the value of the evaluation to the designers, developers, and users.

    • Evaluation enables designers to focus on real problems and the needs of different user groups and make informed decisions about the design, rather than on debating personal preferences.

    • Rapid iterations of product development that embed evaluations into short cycles of design, build, and test (evaluate) are common, and many agencies set standards for how particular types of products should be evaluated.Types of Evaluation in Human-Computer Interaction

    • Evaluation approaches in HCI include usability testing, modeling and predicting, and analytics, each with its own advantages and limitations.

    • Usability testing involves collecting data using a combination of methods in a controlled setting to determine whether an interface is usable by the intended user population to carry out the tasks for which it was designed.

    • Usability testing is a fundamental, essential HCI process that has been used for many years and is used in the development of standard products.

    • Experiments and user tests are designed to control what users do, when they do it, and for how long to reduce outside influences and distractions that might affect the results.

    • Field studies are used to evaluate products with users in their natural settings to help identify opportunities for new technology, establish the requirements for a new design, or facilitate the introduction of technology or inform deployment of existing technology in new contexts.

    • In-the-wild studies look at how new technologies or prototypes have been deployed and used by people in various settings, such as outdoors, in public places, and in homes.

    • Living labs have been developed to evaluate people’s everyday lives, which would be simply too difficult to assess in usability labs, for example, to investigate people’s habits and routines over a period of several months.

    • Methods used in field studies include observation, interviews, and interaction logging to record events and conversations.

    • Diary studies require people to document their activities or feelings at certain times, and this can make them reflect on and possibly change their behavior.

    • One downside of handing over control in field studies is that it makes it difficult to anticipate what is going to happen and to be present when something interesting does happen.

    • Natural settings involving users may be used to evaluate products when it is too disruptive to evaluate a design in a laboratory setting, such as in a military conflict.

    • Living labs are being developed that form an integral part of a smart building that can be adapted for different conditions to investigate the effects of different configurations of lighting, heating, and other building features on the inhabitant’s comfort, work productivity, stress levels, and well-being.Methods and Case Studies in User Experience Evaluation

    • Field studies are used to examine social processes in online communities and supplement studies in geological and biological sciences.

    • Inspection methods, such as heuristic evaluation and cognitive walkthroughs, are used to predict user behavior and identify usability problems.

    • Analytics, including web analytics and learning analytics, are used to measure and optimize web usage and to assess learning in massive open online courses (MOOCs) and open educational resources (OERs).

    • Models, such as Fitts' law, are used to compare the efficacy of different interfaces for the same application (a worked Fitts' law example appears at the end of this list).

    • Combinations of methods are often used to obtain a richer understanding of user experience.

    • Controlled settings provide the ability to test hypotheses about specific interface features, while uncontrolled settings allow for unexpected data and insights into real-world usage.

    • Opportunistic evaluations are informal evaluations done early in the design process to provide designers with quick feedback.

    • Case Study 1: An experiment using physiological responses to evaluate engagement in an online ice hockey game found that playing against a friend was more exciting than playing against a computer.

    • Case Study 2: Ethnographic data was gathered at a large agricultural show using a live chatbot called Ethnobot to collect participants' experiences, impressions, and feelings as they wandered around the show.

    • Ethnobot asked pre-established questions and prompted participants to expand on their answers and take photos.

    • Two main types of data were collected: participants' online responses to pre-established questions and their additional open-ended comments and photos in response to prompts from Ethnobot.

    • Field studies, inspection methods, analytics, models, and case studies can all provide valuable insights into user experience and inform the design process.
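
    As a concrete illustration of the kind of model mentioned above, Fitts' law predicts the time MT to reach a target of width W at distance D as MT = a + b * log2(D/W + 1), where a and b are constants fitted from measured data. The short Python sketch below shows how the model can compare two button placements; the constants and target sizes are invented for illustration, not taken from the text:

        import math

        def fitts_movement_time(distance, width, a=0.2, b=0.1):
            """Predicted time (seconds) to hit a target under Fitts' law.

            a and b are device- and user-specific constants normally found
            by regression; the defaults here are made up for illustration.
            """
            index_of_difficulty = math.log2(distance / width + 1)  # in bits
            return a + b * index_of_difficulty

        # A small, far-away button versus a large, nearby one.
        print(fitts_movement_time(distance=800, width=20))  # harder target, longer time
        print(fitts_movement_time(distance=200, width=60))  # easier target, shorter time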

    Case Studies in User Evaluation

    • The study involved participants interacting with the Ethnobot in a natural outdoor setting at the Royal Highland Show in Scotland.

    • Data was collected through participants' responses to pre-established questions, their open-ended online comments, and in-person interviews.

    • Participants spent an average of 120 minutes with the Ethnobot in each session, recorded an average of 71 responses, and submitted an average of 12 photos.

    • The most frequent response was "I learned something" followed by "I tried something" and "I enjoyed something."

    • Participants responded well to prompting by the Ethnobot and were eager to add more information.

    • Participants provided more detail about their experiences and feelings in response to the in-person interview questions than to those presented by Ethnobot.

    • The researchers concluded that using a bot to collect in-the-wild evaluation data has advantages, particularly when researchers cannot be present or when the study involves collecting data from participants on the move or in places that are hard for researchers to access.

    • Crowdsourcing platforms, such as Amazon Mechanical Turk, have thousands of registered people who have volunteered to take part by performing various activities online, known as human intelligence tasks (HITs), for a very small reward (a posting sketch appears at the end of this list).

    • Crowdsourcing in HCI is more flexible, relatively inexpensive, and often much quicker at enrolling participants than traditional lab studies.

    • Crowdsourcing can be a powerful tool for improving, enhancing, and scaling up a wide range of tasks.

    • The case studies demonstrate how researchers exercise different levels of control in different settings and how it is necessary to be creative when working with innovative systems and when dealing with constraints created by the evaluation setting and the technology being evaluated.

    • The text defines terms describing evaluation such as analytics, bias, controlled experiment, crowdsourcing, ecological validity, expert review or crit, field study, formative evaluation, heuristic evaluation, informed consent form, and in-the-wild study.
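
    As a hedged illustration of how an evaluation task might be posted to a crowdsourcing platform programmatically, the sketch below assumes the boto3 client for Amazon Mechanical Turk and its requester sandbox endpoint; the title, reward, timings, and question content are invented for illustration, and a real HIT would need a complete answer-submission form:

        import boto3

        # Connect to the Mechanical Turk sandbox so no real payments are made.
        client = boto3.client(
            "mturk",
            region_name="us-east-1",
            endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
        )

        # A HIT needs a question; HTMLQuestion wraps a small web form in XML.
        # The form body here is a placeholder for an actual rating task.
        question_xml = """
        <HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
          <HTMLContent><![CDATA[
            <!DOCTYPE html>
            <html><body>
              <form>
                <p>How clear is this screen?</p>
                <input type="number" name="clarity" min="1" max="5"/>
              </form>
            </body></html>
          ]]></HTMLContent>
          <FrameHeight>400</FrameHeight>
        </HTMLQuestion>
        """

        response = client.create_hit(
            Title="Rate the clarity of a mobile app screen",
            Description="Look at one screenshot and answer three questions.",
            Keywords="usability, survey, rating",
            Reward="0.25",                    # dollars, passed as a string
            MaxAssignments=30,                # number of workers
            LifetimeInSeconds=24 * 60 * 60,   # HIT stays visible for one day
            AssignmentDurationInSeconds=600,  # 10 minutes per worker
            Question=question_xml,
        )
        print(response["HIT"]["HITId"])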

