User Interface Evaluation - IT1906

Document Details

STI

Tags

user interface evaluation, human-computer interaction, usability, design

Summary

This document outlines evaluation criteria, methods, factors, and example scenarios for user interface design. It discusses quantitative and qualitative assessment, user experience (UX), and evaluation methods such as focus interviews, cognitive walkthroughs, and expert heuristic evaluation.

Full Transcript

IT1906 User Interface Evaluation

Evaluation Criteria

Human-centered design, also known as user-centered design, is a design process that starts with a specific group of people who intend to use the software and ends with new solutions that suit their needs. It is about building a solid connection with the intended users by generating new ideas and prototypes (Designkit.org, n.d.). Evaluation is considered the last stage of software development. Software development as a whole is a gradual refinement process, where each refinement stage is based on the evaluation results of the previous one.

The following criteria are used in evaluating a human-centered design (Kim, 2015):

Usability – This refers to the ease of use and learnability of the user interface (UI). Usability can be measured through the following:
  o Quantitative assessment – This assessment often involves task performance measurement. Performance can be measured as completion time, score, and error rate. A UI rates high on the usability criteria if users can accomplish their tasks in the software or application with minimal effort. Its downside, however, is the difficulty of gathering a homogeneous pool of subjects with comparable backgrounds for a fair evaluation. (A minimal computation sketch follows this list of criteria.)
  o Qualitative evaluation – This evaluation is often conducted to complement the insufficiency of quantitative assessment. Conducting a usability-related survey among subjects who have experienced the UI falls under qualitative evaluation. A usability survey often includes questions related to, but not limited to, the following:
     - Ease of use
     - Simple preference
     - Ease of learning
     - Interface-specific questions

User Experience (UX) – The notion of UX is generally accepted as the "totality" of the user's involvement with the software or application. It does not pertain only to the interface; it also involves the whole product, software, or application. User experience is closely related to the user's emotion, satisfaction, and perception. As such, high UX usually translates to high usability and high demonstrative attachment.
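The quantitative assessment described above can be made concrete with a small computation. The following is a minimal Python sketch, not part of the original handout, that derives the three performance measures the handout names (completion time, score/success, and error rate) from a set of task trials; the field names and sample values are assumptions invented purely for illustration.

# Minimal sketch (assumed data layout): each record is one observed task
# trial with a completion time in seconds, a success flag, and an error count.
from statistics import mean

trials = [  # hypothetical observations, not real measurements
    {"time_s": 42.0, "completed": True,  "errors": 1},
    {"time_s": 65.5, "completed": True,  "errors": 0},
    {"time_s": 90.0, "completed": False, "errors": 3},
]

avg_time = mean(t["time_s"] for t in trials)                        # completion time
success_rate = mean(1 if t["completed"] else 0 for t in trials)     # task score
errors_per_trial = sum(t["errors"] for t in trials) / len(trials)   # error rate

print(f"Average completion time: {avg_time:.1f} s")
print(f"Task success rate: {success_rate:.0%}")
print(f"Average errors per trial: {errors_per_trial:.2f}")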
Evaluation Methods

Here are some of the factors that mainly affect user interface evaluation (Kim, 2015):

Factor – Example
  o The timing of an analysis – Analysis in between or throughout the application development stages: early, middle, or late/after development
  o The type and number of evaluators – Several human-computer interaction (HCI) experts as evaluators vs. hundreds of domain users as evaluators
  o The evaluation set-up – Performing a controlled experiment vs. a quick and informal evaluation
  o The place of evaluation – Performing an evaluation in a laboratory vs. on-site/field testing

Below are the different user interface (UI) evaluation methods:

Focus interview – This is the easiest and most straightforward evaluation method. It involves an interview with actual or potential users to observe their interaction behavior through a simple question-and-answer format. This method is often focused on particular user groups (e.g., college students and the elderly) and on a specific feature of the system or interface (e.g., mode of input and information layout), although it is not structured to be comprehensive (Kim, 2015).
  o Cognitive walkthrough – This is an interview technique for evaluating the design of a user interface, with special attention to how well the interface supports exploratory learning for the target user. The evaluation is done by having a subject or a group of evaluators go step by step through commonly used tasks. The evaluators can also perform it in the early stages of the design process (Usability.gov, n.d.).

Expert heuristic evaluation – This evaluation method is similar to a focus interview or an observation study. The difference is the involvement of human-computer interaction (HCI) experts as evaluators, with the analysis carried out against prepared HCI guidelines, hence the term heuristics. Expert heuristic evaluation is fast and relatively cost-effective; thus, it is one of the most popular methods in user interface evaluation. Some examples of heuristics used in evaluation are system status, display layout, ergonomics, consistency, error prevention, and help (Kim, 2015).

Measurement – This method of evaluation intends to quantify, indirectly, the goodness of the interaction and interface design with a numerical score, either through task performance (quantitative) or through quantified answers from a carefully prepared subjective survey (qualitative). Both types of assessment can optionally run over a long period, especially when memory performance and familiarity with the task are involved.

These are some of the guidelines for a good survey according to Kim (2015):
  o Minimize the number of questions.
  o Make the questions compact and understandable.
  o Use an odd-level scale of five or seven.
  o Categorize the questions.
(A minimal scoring sketch following these guidelines appears after the reference list.)

References:
Cognitive walkthrough. (n.d.). In Usability.gov. Retrieved September 10, 2019, from https://www.usability.gov/what-and-why/glossary/cognitive-walkthrough.html
Design Kit. (n.d.). What is human-centered design? Retrieved September 9, 2019, from http://www.designkit.org/human-centered-design
Kim, G. (2015). Human-computer interaction fundamentals and practice. USA: CRC Press.
University of Pittsburgh. (2018, November 2). Chapter 2 – Purpose of the Human Research Protection Office and Institutional Review Board. Retrieved September 9, 2019, from https://www.irb.pitt.edu/content/chapter-2-purpose-human-research-protection-office-and-institutional-review-board
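To illustrate the survey guidelines above, here is a minimal Python sketch, not part of the original handout, that scores one respondent's answers to a short, categorized questionnaire rated on an odd five-point scale; the categories, item texts, and response values are assumptions invented for the example.

# Minimal sketch: a compact, categorized survey scored on an odd
# five-point scale (1 = strongly disagree ... 5 = strongly agree).
questions = {  # hypothetical items grouped by category
    "ease_of_use":      ["The interface was easy to use."],
    "ease_of_learning": ["I could learn to use the interface quickly."],
    "preference":       ["I would choose this interface over the previous one."],
}

responses = {  # one respondent's answers, per category, on the 1-5 scale
    "ease_of_use":      [4],
    "ease_of_learning": [5],
    "preference":       [3],
}

for category, scores in responses.items():
    average = sum(scores) / len(scores)
    print(f"{category}: {average:.1f} / 5")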
