



Chapter 1: Task-Artifact Cycle and User Needs

Task-Artifact Cycle
In technology development, the task-artifact cycle is the background pattern: task outcomes and human experiences implicitly define the agenda for new technological artifacts, which in turn modify subsequent task outcomes and experiences.

Foundation of the task-artifact cycle:
- Humans have needs and preferences
- Technologies are created to suit these needs
- Humans then use the technologies
- With use, needs and preferences might change

"Human activities implicitly articulate needs, preferences and design vision. Artifacts are designed in response, but inevitably do more than merely respond. Through the course of their adoption and appropriation, new designs provide new possibilities for action and interaction. Ultimately, this activity articulates further human needs, preferences, and design vision." (Carroll 2013)

User Needs
In the task-artifact cycle it is essential to focus on user needs. However, user needs are often very abstract, so the guidance they give for a concrete implementation is often limited.

Chapter 1: Introduction to Human Computer Interaction

Use and Context
The field of Human Computer Interaction looks at the intersection of humans and computers, as the name suggests. It combines two very large and dominant research fields, with all their possibilities, challenges and constraints, and investigates how humans can interact with computers to their personal benefit.

HCI: An Interdisciplinary Area
Human-Computer Interaction is not a niche research topic but a very broad field that can be examined from different perspectives. Successful HCI research is characterized by communication between experts from computer science, sociology and anthropology, design and industrial design, and psychology. People who understand that views from fields other than their own need to be included are one step ahead in building products that are not only innovative but also easy to use.

Utility, Usability, Likeability
In order to communicate with each other about HCI concepts, we need to get the terminology right!
- Utility: the product can be used to reach a certain goal or to perform a certain task. This is essential!
- Usability: relates to the question of quality and efficiency, e.g., how well a product supports the user in reaching a certain goal or performing a certain task.
- Likeability: may be related to utility and usability, but not necessarily. People may like a product for any other reason...

What is Usability?
"Usability is a quality attribute that assesses how easy user interfaces are to use. The word 'usability' also refers to methods for improving ease-of-use during the design process." (Usability 101 by Jakob Nielsen)

Usability has five quality components:
- Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
- Efficiency: Once users have learned the design, how quickly can they perform tasks?
- Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?
- Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
- Satisfaction: How pleasant is it to use the design?
"The Encyclopedia of Human-Computer Interaction, 2nd Ed.". Aarhus, Denmark: The Interaction Design Foundation. Available online at https://www.interaction-design.org/literature/book/the- encyclo- pedia-of-human-computer-interaction-2nd-ed/human- computer-interaction-brief-intro 2. ACM SIGCHI Curricula for Human-Computer Interaction http://www.acm.org/sigchi/cdg/ 3. Jennifer Preece, Yvonne Rogers, Helen Sharp (2002) Interaction Design, ISBN: 0471492787, http://www.id- book.com/, Chapter 9 4. B. Shneiderman. Leonardo's Laptop: Human Needs and the New Computing Technologies. https://mit- press.mit.edu/books/leonardos-laptop 5. Jakob Nielsen's Alertbox, August 25, 2003: Usability 101: Introduction to Usability http://www.useit.com/alertbox/20030825.html 6. ISO 13407, ISO 9241-210 HUMAN COMPUTER INTERACTION 3/3 Chapter 2: History of Human Computer Interaction Overview 1 Interactive Computing: People and Inventions 2 Timelines 3 The Evolution of Graphical User Interfaces HUMAN COMPUTER INTERACTION 1/4 Timeline In the video you just watched, you learned a lot about the most important people and inventions that shaped the beginning of Human Computer Interaction. Here you can see a summary of the most important milestones of related technological inventions in the 20th century. When we now change the perspective, from the technological side to the actual user, we can see how the interaction of humans with computers has changed over time. As you probably know, today we have computers everywhere. The standard and closed form of a computer we had for quite long time. So modern engineers will have to think about new concepts for displays and interaction technol- ogy in all kinds of settings. HUMAN COMPUTER INTERACTION 2/4 The Evolution of Graphical User Interfaces (GUI) In the early days of Human Computer Interaction, the research was scattered and many institutions created their own island solutions. In the book Pioneers and Settlers: Methods Used in Successful User Interface Design from 1995 the authors tried to analyse and summarize success stories, emerging methods and the real-world context of the research that has been conducted so far. As shown in the following figure, in the evolution of GUIs, the interfaces came from this scattered exploratory re- search to a development of a series of classic systems like the Xerox PARC. Based on that, initial products were cre- ated (Apple Lisa 1983) that finally went into standardization. While different GUIs evolved over time, they have specific characteristics in common: ▪ Replacement of command-language ▪ Direct manipulation of the objects of interest ▪ Continuous visibility of object and actions of interest ▪ Graphical metaphors (desktop, trash can) ▪ Windows, icons, menus and pointers ▪ Rapid, reversible, incremental actions HUMAN COMPUTER INTERACTION 3/4 References 1. Image of Douglas Engelbart: https://www.washingtonpost.com/business/douglas-engelbart-computer-vi- sionary-and-inventor-of-the-mouse-dies-at-88/2013/07/03/1439b508-0264-11e2-9b24- ff730c7f6312_story.html 2. Image of Vannevar Bush: https://mondediplo.com/outsidein/vannevar-bush-prophet-of-high-tech 3. Image of Ivan Sutherland: https://ethw.org/Ivan_E._Sutherland 4. Jef Raskin, The Humane Interface, ACM Press 2000 5. Brad A. Myers. "A Brief History of Human Computer Interaction Technology." ACM interactions. Vol. 5, no. 2, March, 1998. pp. 44-54. 
Chapter 3: Humans – Excursus: Physiology

Overview
1 Examples of physiological limitations
2 Relation to Computer Science
3 Hand, Motions & DOF
4 Gesture Input vs. Physiology?

Examples of physiological limitations
On the one hand, humans have many abilities thanks to their specific physiology; on the other hand, there are many examples of physiological limitations:
- Size of objects one can grasp
- Weight of objects one can lift
- Reach while seated or while standing
- Optical resolution of the human vision system
- Frequencies humans can hear
- Conditions people live in

Think of the Bergkirchweih in Erlangen, where you get a 1 l Maßkrug: for some people this is easy to grab and lift, but many users are actually not able to do so. The sheer size and weight of the object make it hard for specific groups to use it as intended. Not everything that is technically possible can actually be used or perceived by humans.

Relation to Computer Science
Human physiology has also been explored in the context of computer science and HCI-related disciplines over the last decades. The possibilities, and especially the limitations, you just learned about play an important role in the design and implementation of future interfaces. Existing devices and systems intended for interaction with a human have been widely investigated in research and industry. If we did not take human physiology, and human factors in general, into account, people might not be able to use a certain device or might end up with suboptimal performance.

One example of this is the computer keyboard as you know it. For physiology experts it is completely clear that writing in the posture a normal keyboard imposes is unhealthy, and ergonomic keyboards have been designed to overcome this issue. Looking back in history to the invention of the keyboard, we find ourselves in the time of typewriters. Due to their functional principle, the keys were placed in the layout that can still be found on almost all keyboards today. From a technical point of view this is no longer necessary, but no new design has been able to establish itself apart from isolated solutions. Most people are used to typing on the QWERTZ keyboard, even though there might be more ergonomic and efficient ways of typing.

Hand, Motions and DOF
The human hand, with its numerous bones, joints and muscles, is an anatomically complex part of the human body. It consists of 17 active joints that provide 23 degrees of freedom (DOF) in total. Both easy and difficult movements depend on the musculoskeletal system behind the hand. This defines what we can and cannot realise, and it may also differ between individual people.

[Figure: hand anatomy. © AMBOSS GmbH, Berlin and Cologne, Germany]
Chapter 3: Humans: Stereo Vision, Reading, Hearing, Space, Territory and Emotions

Overview
1 Stereo Vision
2 Reading
3 Hearing, Touch, Movement
4 Space and territory
5 Emotion

1. Stereo Vision
Everything on a 2D display is 2D! If we see it as three-dimensional, we imagine it. For example, when a projection of a 3D model is displayed on a 2D screen, we can interpret it as 3D because we have experience with such images. "Real" 3D, however, requires a separate image for each eye. This happens naturally when looking at 3D objects in physical space, but it can also be simulated by providing a separate image for each eye using technologies that can deliver 3D content.

The basis for this technology is the so-called parallax. It describes a displacement or difference in the apparent position of an object viewed along two different lines of sight, and is measured by the angle or semi-angle of inclination between those two lines.

[Figure: parallax – the left and right eye view an object along different lines of sight against a distant background.]

To display these two different images for the eyes, we present three methods: shutter systems, polarized systems, and virtual reality headsets.

Shutter systems
Shutter systems consist of glasses that are synchronized with a monitor. The glasses alternately block the left and the right eye; synchronously, the monitor alternately displays the image for the left and for the right eye. This switching happens at a very high frequency, so the image is perceived as continuous. In that way, two different images can be delivered.

Polarized systems
Polarized systems are the most popular systems, since modern cinemas and 3D monitors use this technology too. The image for the left and for the right eye is each encoded in one polarization direction, and the polarized glasses have filters that let only the correct direction pass. In that way, only the appropriate image reaches the corresponding eye. This technology is the cheapest; however, it has limitations: if the user does not look from the correct viewing angle and moves the head too much, the illusion of 3D fails.

Virtual reality headsets
Virtual reality headsets have a distinct display for each eye. In that way, two different images can be delivered at very high frequencies and quite high resolutions. Modern VR headsets have resolutions up to 1832x1920 pixels per eye and beyond, rendered at 90 Hz or more. This leads to a huge rendering workload, which is why some headsets are still connected to a powerful PC with a good graphics card.
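To make the relationship between parallax and perceived depth concrete, here is a minimal sketch (not from the lecture) of the standard pinhole-stereo relation: two horizontally offset viewpoints with baseline b and focal length f see an object at depth z with a disparity d = f·b/z between the two images. All numbers below are purely illustrative.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its horizontal disparity between two views.

    Standard pinhole-stereo relation: disparity = focal * baseline / depth,
    so depth = focal * baseline / disparity.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (zero disparity = infinitely far)")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: interocular distance ~6.3 cm, "focal length" of 1000 px.
print(depth_from_disparity(focal_px=1000, baseline_m=0.063, disparity_px=10))  # 6.3 m
```

Note how the disparity shrinks with distance: this is exactly why very distant objects (the "distant background" in the figure) appear essentially identical to both eyes.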
2. Reading
Reading consists of several stages:
- The visual pattern is perceived
- It is decoded using an internal representation of language
- It is interpreted using knowledge of syntax, semantics, and pragmatics

Reading also involves saccades and fixations: our eyes fixate a word and, when they move to the next word, we perform a saccade. The perception of words occurs during fixations. For proper recognition, the word shape plays an important role. Sometimes words are capitalized to emphasize them; however, it was found that fully capitalized words are harder to read and thus decrease your reading speed.

Some basic facts about reading:
- Typical reading speeds are 100 (memorizing) to 1000 (scanning) words per minute
- Reading skills differ to a great extent (according to PISA, more than 20% have difficulties in reading)
- For many tasks, reading speed has a significant impact on overall user performance
- Good readers "recognize" words (they do not read them letter by letter)
- Providing a visual presentation that supports reading is important (font, size, color, length of lines, structure, ...)
- Reading from a computer screen is in general slower than from paper

Interestingly, the order of the letters within a word is not too important for reading and understanding a text. Read through the following text and try to understand it:

I cnlduo't bvleiee taht I culod aulaclty uesdtannrd waht I was rdnaieg. Unisg the icndeblire pweor of the hmuan mnid, aocdcrnig to rseecrah at Cmabrigde Uinervtisy, it dseno't mttaer in waht oderr the lterets in a wrod are, the olny irpoamtnt tihng is taht the frsit and lsat ltteer be in the rhgit pclae. The rset can be a taotl mses and you can sitll raed it whoutit a pboerlm. Tihs is bucseae the huamn mnid deos not raed ervey ltteer by istlef, but the wrod as a wlohe. Aaznmig, huh? Yaeh and I awlyas tghhuot slelinpg was ipmorantt! See if yuor fdreins can raed tihs too.

Such basic facts and phenomena are found by conducting studies and observing how people react. But especially when we want to know how people read, we have to know where the person is looking. This can be done using eye-tracking, which lets us see where someone is looking while reading a text. The principle of a video-based eye gaze tracking system: a camera with an infrared LED is mounted below the screen (e.g., a tablet PC).
- The white pupil in the camera image comes from the reflection of the infrared light
- The infrared light also causes a reflection glint, which does not move as the eye moves
- The position of the gaze on the screen can be calculated from the distance between the glint and the pupil center

When using eye-tracking for reading analysis, you can generate image overlays that show where the person is or was looking while reading the text. You can also generate heat maps that color-code where people were looking the most. From these images you can draw conclusions like:
- Users first read in a horizontal movement
- Users move down the page a bit and then read across in a second horizontal movement
- Finally, users scan the content's left side in a vertical movement

Eye-tracking is of course not limited to reading analysis. Potential other application areas include clinical research, marketing and consumer research, infant and child research, education, and human performance.
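As a rough illustration of the glint-to-pupil idea described above, the following sketch maps the pupil-center-minus-glint vector to screen coordinates with an affine mapping fitted from a few calibration points. This is a strong simplification of real gaze-estimation pipelines (which typically use polynomial mappings and head-movement compensation); all names and numbers are made up.

```python
import numpy as np

def fit_gaze_mapping(pupil_glint_vecs, screen_points):
    """Fit an affine map from (pupil center - glint) vectors to screen coordinates.

    pupil_glint_vecs: (N, 2) vectors in camera pixels, recorded during calibration.
    screen_points:    (N, 2) known gaze targets shown during calibration.
    Returns a (3, 2) matrix M such that [vx, vy, 1] @ M ~ [sx, sy].
    """
    v = np.asarray(pupil_glint_vecs, dtype=float)
    s = np.asarray(screen_points, dtype=float)
    features = np.column_stack([v, np.ones(len(v))])   # append constant term
    M, *_ = np.linalg.lstsq(features, s, rcond=None)   # least-squares fit
    return M

def estimate_gaze(M, pupil_glint_vec):
    vx, vy = pupil_glint_vec
    return np.array([vx, vy, 1.0]) @ M

# Toy 4-point calibration (screen corners of a 1920x1080 display).
vecs    = [(-20, -12), (22, -11), (-19, 14), (21, 15)]
targets = [(0, 0), (1920, 0), (0, 1080), (1920, 1080)]
M = fit_gaze_mapping(vecs, targets)
print(estimate_gaze(M, (1, 2)))   # a vector near the center maps to roughly mid-screen
```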
3. Hearing, Touch, Movement

Hearing
Some basic facts about our sense of hearing:
- We have two ears with which we collect information about the environment, evaluate the type of sound source, and evaluate its distance and direction
- The physical apparatus (the ear) consists of:
  o Outer ear – protects the inner ear, amplifies sound (3–12 kHz)
  o Middle ear – transmits sound waves as vibrations to the inner ear
  o Inner ear – chemical transmitters are released and cause impulses in the auditory nerve
- The sound that we capture with our ears consists of:
  o Pitch – sound frequency
  o Loudness – amplitude
  o Timbre – type or quality

From experience you know that very loud sounds are often uncomfortable and can lead to pain; you automatically cover your ears with your hands to lower the loudness. This is described by the threshold of pain: a loudness level above which sound causes pain. There is also a threshold that must be passed to hear a sound at all, the threshold of audibility. Both thresholds can be seen in the Fletcher-Munson equal-loudness contours.

These curves describe the perceived loudness of a generated sound. For example, the red curve in the middle shows a "60" at about 1000 Hz, i.e., a sine wave with an intensity of 60 dB. When the frequency of that sine wave is increased or decreased, its perceived loudness changes; the intensity that would be required for an equal impression of loudness can be read from the curves. For example, at a low frequency of 30 Hz you need approximately 80 dB for the same loudness impression as 60 dB at 1000 Hz. In this way it was possible to understand how people hear and to define the thresholds of hearing and pain. (See https://blog.landr.com/fletcher-munson-curves/)

In addition, our sense of hearing deteriorates with increasing age; especially the perception of high frequencies is affected. The older you get, the higher the sound pressure required to perceive high frequencies at all. Low-frequency perception is not affected as severely.

Another interesting phenomenon of our sense of hearing is selective hearing, often referred to as the cocktail party effect:
- You are in a noisy environment like a crowded underground train, and you can still have a conversation. You can even direct your attention to another conversation and "listen in".
- You are in a conversation and somewhere else someone mentions your name. You notice this even if you have not been actively listening to that conversation.

The auditory system filters incoming information and allows selective hearing of sounds or keywords in an environment with background noise. This is a binaural effect: we need both ears to identify the location of a sound source. To locate the sound source, we can rely on three effects:
- Interaural time difference (ITD): the difference in the sound's arrival time at the two ears
- Interaural intensity difference (IID): the difference in sound pressure on arrival at the two ears
- Head-related transfer functions (HRTF): how the head changes the sound because of masking

While ITD and IID can be calculated quite easily, HRTFs are more complex and must be measured in expensive experiments.
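To give a feel for the magnitudes involved, here is a small sketch (not from the lecture) computing the interaural time difference with the classic Woodworth spherical-head approximation, ITD = (r/c)(θ + sin θ), where r is the head radius, c the speed of sound, and θ the azimuth of the source; the head radius below is an assumed typical value.

```python
import math

def itd_seconds(azimuth_deg: float, head_radius_m: float = 0.0875,
                speed_of_sound: float = 343.0) -> float:
    """Interaural time difference via the Woodworth spherical-head model:
    ITD = (r / c) * (theta + sin(theta)), theta = azimuth in radians.
    A far-field approximation for azimuths between 0 and 90 degrees."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:>2} deg: {itd_seconds(az) * 1e6:.0f} microseconds")
# 0 deg -> 0 (straight ahead); 90 deg -> roughly 650 microseconds (fully to one side)
```

These sub-millisecond differences are what the auditory system evaluates for localization, which is why both ears are needed.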
The reason for measuring HRTFs is to create a better experience for 360° sound or stereo signals, when listening to music or in other applications such as virtual reality. If you simply play back a generated stereo signal in earphones, there is no body/head that changes the sound perception (masking, damping, ...). Hence, this effect is pre-calculated into the played signal. This requires measurements with a dummy head (e.g., a microphone at the position where the ear typically is); based on the data, a transfer function can be derived.

Touch and Movement
Our sense of touch and movement provides important feedback about our environment. It is said to be the key sense for someone visually impaired. Environmental stimuli are received via receptors in the skin:
- Thermoreceptors – heat and cold
- Nociceptors – pain
- Mechanoreceptors – pressure (instant / continuous)

Some areas are more sensitive than others (e.g., fingers compared to our back) because the density of these receptors is higher there. Based on the "raw" data collected through these receptors (and other receptors inside muscles and the like), our brain processes the information further and gives us a feeling of our body in space. We can define two holistic body perceptions:
- Kinesthesis: the feeling of limb and body movements
- Proprioception: the unconscious perception of movement and spatial orientation arising from stimuli within the body itself

As already mentioned, the fingers are more sensitive than our back. This is also represented in the so-called somatosensory homunculus, which shows how body parts are mapped onto the surface of the brain: the larger the brain region, the more sensitive the body region. Based on this mapping, the homunculus figure was created to show the imbalance between real body size and sensitivity.

[Figure: somatosensory homunculus; see http://tmww.blogspot.com/2011/05/homunculus-of-touch.html]

4. Space and territory
Humans use space to ease tasks (it simplifies choices, perception, and internal computation). However, computer systems often do not support this well.

"How we manage the spatial arrangement of items around us is not an afterthought: it is an integral part of the way we think, plan, and behave." (David Kirsh: The Intelligent Use of Space. Artificial Intelligence (73), Elsevier, pp. 31-68, 1995)

When space is used efficiently, some effects are:
- Reduced cognitive load (space complexity)
- Reduced number of steps required (time complexity)
- Reduced probability of errors (unreliability)

There are some general rules for the intelligent use of space:
- Utilize space as much as possible
- Use space in the physical world / on screen
- Allow users to customize spatial arrangements
- Provide interactive means for manipulating objects in space
- Physical space and spatial order:
  o imply behavior
  o ease categorization
  o make (internal, human) computation easier
- Segment problems and tasks:
  o spatially
  o temporally

The last two bullets (physical space and spatial order; segmenting problems and tasks) can be seen in the invention of Ford's first assembly line. One worker always has to do one thing, so the place in the line implies the behavior of that worker. Perhaps at the end of the line he or she simply has to mount the door handles, and the car is finished.
By dividing the tasks across the assembly line, it was possible to speed up the process through implied behavior for each worker, eased categorization (a crew for the machines, a crew for the chassis, ...), and eased computation, because each crew has only one specific task and does not have to think about the complete, more complex car. The assembly line also automatically divides the work into spatially and temporally distinct zones/phases.

5. Emotions
There are various theories of how emotion works:
- James-Lange: emotion is our interpretation of a physiological response to a stimulus ("we are sad because we cry...")
- Cannon: emotion is a psychological response to a stimulus
- Schachter-Singer: emotion is the result of our evaluation of our physiological responses, in the light of the whole situation we are in

Despite the various theories, one can say that emotion clearly involves both cognitive and physical responses to stimuli. The biological response to physical stimuli is called affect. Affect influences how we respond to situations:
- Positive → creative problem solving
- Negative → narrowed thinking

"Negative affect can make it harder to do even easy tasks; positive affect can make it easier to do difficult tasks" (Donald Norman).
- Stress will increase the difficulty of problem solving
- Relaxed users will be more forgiving of shortcomings in design
- Aesthetically pleasing and rewarding interfaces will increase positive affect

Especially the last point was tested with ATMs. The experiment used six ATMs, identical in function and operation, some of which were aesthetically more attractive than others. The result was that the nicer ATMs were experienced as easier to use. So aesthetics can change the emotional state, and emotions allow us to quickly assess situations: positive emotions make us more creative, and attractive things make people feel good. (D. Norman, Emotional Design, Chapter 1)

Another theory is the affordance theory. It describes the (perceived) possibility for action. The original idea was stated by Gibson: objective properties imply action possibilities, i.e., how we can use things, independent of the individual person. Norman later added that perceived affordance also includes the experience of the individual. A simple example is vandalism at a bus stop: if the bus stop is built out of concrete, you will surely find graffiti on it; if it is built out of glass, it will be smashed; if it is built out of wood, you will find carvings. These examples show that affordance is not limited to HCI but can be found everywhere. The implication is to build natural and intuitive user interfaces that, by design, already imply how they are to be used.

A good example is full-body interaction mimicking a real action like playing tennis, swinging a sword, or moving your hands. Such interactions can also be used by elderly people, who are often not used to working with PCs.

Chapter 3: Visual Perception, Optical Illusions, and Gestalt Laws

Overview
1 Visual Perception
2 Optical Illusions

1. Visual Perception
Visual perception is one of the most important sources of information: approximately 60-80% of all information is perceived visually. We can define three terms that clearly distinguish the processes in visual perception:
- Reception describes the transformation of the stimulus (light) into electrical energy.
- Perception describes the sensors (receptors) and the signal processing happening in the eyes and in the brain.
- Cognition describes the "understanding" in the brain.

[Figure: visual perception pathway; image from Deng, Wei et al. (2019): Organic molecular crystal-based photosynaptic devices for an artificial visual-perception system. NPG Asia Materials 11, 77. doi:10.1038/s41427-019-0182-2]

Sensory Organ – The Human Eye
The human eye has some very basic attributes:
- Very high dynamic range
- Bad color vision in dark conditions
- Best contrast perception in red/green
- Limited temporal resolution (reaction speed) – humans are effectively blind while moving the eyes
- Good resolution and color vision in the central area (macula)
- Maximum resolution and color vision only in the very center (fovea)
- The retina contains rods for low-light vision and cones for color vision; they transform light into electrical energy → receptors for light stimuli
- Ganglion cells inside the retina are already part of the brain and detect patterns and movements
- Pinhole-camera principle: everything we see is projected upside-down onto the retina

Interpreting the signal
Here we cover the basic, first interpretations of the signal: size and depth, brightness, and color.

Size and depth:
- Visual acuity (VA) is the ability to perceive details (the smallest resolvable object size g at a given distance d). This ability is limited and can change over time. The better it is, the more precise our interpretation of size.
- The visual angle indicates how much of our field of view an object occupies; it relates the object's size to its distance from the eye (a small worked example follows at the end of this section).
- Familiar objects are perceived as having a constant size, despite changes in visual angle when they are far away. This is closely related to the figure: the two balls are interpreted as having the same size.

[Figure: the visual angle of an object (ball) maps onto the retina; the same object spans a different visual angle depending on its distance from the eye.]

Our depth perception mainly relies on depth cues. We can distinguish two types of cues: monocular and binocular.

Monocular depth cues (depth cues we can perceive with one eye):
▪ Accommodation → tension of the muscles controlling the lens of the eye changes the focal length and helps to map the image correctly onto the retina
▪ Monocular movement parallax → by moving your head you can perceive depth
▪ Retinal image size → when the object size is known, smaller objects are perceived as farther away (see above)
▪ Linear perspective → railroad tracks that meet at infinity
▪ Texture gradient → closer means more detailed (standing at a tree and looking up, the rough bark loses detail with height); relates to visual acuity
▪ Overlapping → closer objects block objects that are farther away
▪ Aerial perspective → distant objects appear bluish or hazy
▪ Shadows → give a depth hint when there is only one light source

Binocular depth cues (depth cues we can only perceive with both eyes):
▪ Convergence → the eyes are moved inward to focus on a close object
▪ Binocular parallax → differences in the perspective on a scene or object caused by the distance between the eyes (different viewing locations)
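The visual angle mentioned above has a simple geometric form: an object of size s viewed at distance d subtends θ = 2·arctan(s / 2d). A short sketch with purely illustrative numbers:

```python
import math

def visual_angle_deg(object_size: float, distance: float) -> float:
    """Visual angle (degrees) subtended by an object of a given size
    at a given distance: theta = 2 * arctan(size / (2 * distance))."""
    return math.degrees(2 * math.atan(object_size / (2 * distance)))

# The same ball (22 cm diameter) at different distances spans different visual angles:
print(visual_angle_deg(0.22, 1.0))   # ~12.6 degrees at 1 m
print(visual_angle_deg(0.22, 10.0))  # ~1.3 degrees at 10 m
```

This is exactly the size-constancy situation from the figure: the retinal image shrinks tenfold, yet we still perceive the same ball.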
Brightness:
We have a very subjective reaction to levels of light, but our reaction is still affected by the luminance of an object. Our eyes "measure" luminance in just-noticeable differences. Interestingly, our visual acuity increases with luminance, as does flicker perception (fast changes in luminance). Rods have a lower density at the fovea but a higher density temporal and nasal to the fovea; thus they contribute more to peripheral vision. They cannot detect color.

Color:
Our color perception is made up of hue, intensity, and saturation. Cones are sensitive to color wavelengths, whereby the acuity for blue is lowest. The spectrum of light visible to humans is quite small: we can only perceive wavelengths between 400 and 700 nm.

[Figure: spectrum of visible light. Image by Tatoute and Phrood (CC BY-SA 3.0), https://commons.wikimedia.org/wiki/File:Spectre.svg]

The tristimulus theory (trichromaticity) states that humans have three different types of cones that are differently sensitive to wavelengths:
- Red (long)
- Green (medium)
- Blue (short)

Rods (the dashed line in the usual sensitivity plots) cannot detect color and are sensitive to a broad range of wavelengths. (See https://de.wikipedia.org/wiki/Datei:Cone-response-de.svg; Bowmaker, J.K. and Dartnall, H.J.A.: "Visual pigments of rods and cones in a human retina." J. Physiol. 298, pp. 501-511, 1980.)

Some people cannot detect colors at all (color blindness) or cannot detect or distinguish specific colors. Color blindness is a hereditary condition and is more prevalent in males (8-10% of males and just 1% of females are color blind). The type of color blindness can be detected with test plates like the following:
- Left plate: under normal color vision: 8; red/green blind: 3 or nothing
- Middle plate: under normal color vision: 7; color blind: nothing
- Right plate: under normal color vision: 35; red blind: 5; green blind: 3

Our perception can be changed by the environment. A very good example is Adelson's checkerboard illusion, in which perceived brightness and color seem to play tricks on us: the brightness and color of tiles A and B seem to be different. Are they? Using color keys can thus be difficult and misleading!

Another phenomenon is the afterglow of the opposite color. In the corresponding animation, you must fixate on the star in the middle; when you do, a green dot appears at the spot where the purple dot is missing.

Based on this, we can define rules of thumb for good design, especially for foreground and background colors:
1. Do not use opposite colors, because the afterglow of the opposite color makes such combinations strenuous to look at.
2. Rule of thumb: a lightness difference > 0.2 results in good contrast.
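A minimal sketch of the "lightness difference > 0.2" rule of thumb, taking lightness as the L channel of the HLS color model; that is one plausible reading, since the lecture does not specify which lightness definition is meant.

```python
import colorsys

def lightness(rgb):
    """Lightness (0..1) as the L channel of HLS, from an RGB triple in 0..255."""
    r, g, b = (c / 255.0 for c in rgb)
    _, l, _ = colorsys.rgb_to_hls(r, g, b)
    return l

def good_contrast(fg, bg, threshold=0.2):
    """Rule of thumb from above: lightness difference > 0.2 means good contrast."""
    return abs(lightness(fg) - lightness(bg)) > threshold

print(good_contrast((0, 0, 0), (255, 255, 255)))        # True: black on white
print(good_contrast((120, 120, 120), (140, 140, 140)))  # False: two similar grays
```

Note that the check deliberately ignores hue: a red/green pair with equal lightness would fail it, which matches rule 1 above and also avoids relying on color alone for color-blind users.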
2. Optical Illusions and Gestalt Laws
Our visual system compensates for movements and changes in luminance, and context is used to resolve ambiguity. However, optical illusions sometimes occur due to overcompensation by our visual system. For example, we have a different perception in the focus region and in the peripheral view, which can lead to motion artifacts: looking at certain ring patterns in full screen gives the impression that the circles are moving although they are not. This is caused by the difference between focal and peripheral vision.

Another illusion is the Escher waterfall. This picture shows a reality that cannot be real: the waterfall drives a wheel, and afterwards the water flows back "up" into the tower, only to fall again and drive the wheel. This is impossible. Our visual system tries to make sense of the visual information it gets.

This behavior can also be used in HCI to create a special kind of interaction that can be interesting for many different applications. Good examples of this are the Gestalt Laws, and all of us have surely come across them already.

Gestalt Laws
Look at typical captchas. It is simple for us to read the words or digits; for a computer it is not that easy. Here we make use of our tendency to make sense of given visual information: all these captchas exploit Gestalt Laws.

Have a closer look at the warning signs before staircases: the left one reads "Keep Red" and "Off Line" because we tend to group closer things together; the right one reads "Keep off Red Lines". Just by placing the words differently, the perception changes.

There are many different Gestalt Laws; we will stick to seven, namely:
- Law of Similarity
- Law of Proximity
- Law of Continuity
- Law of Closure
- Law of Prägnanz
- Law of Common Fate
- Law of Symmetry

There are more Gestalt Laws, such as figure/ground or smallness of area; however, we will only cover the seven laws above.

Law of Similarity
Items that are similar tend to be grouped together. In the example image, most people see vertical columns of circles and squares.

Law of Proximity
Objects near each other tend to be grouped together. The circles on the left appear to be grouped in vertical columns, while those on the right appear to be grouped in horizontal rows.

Law of Continuity
Lines are seen as following the smoothest path. In the left image, the top branch is seen as continuing the first segment of the line. This allows us to see things as flowing smoothly, without breaking lines up into multiple parts.

Law of Closure
Objects grouped together are seen as a whole; we tend to ignore gaps and complete contour lines. In the left image there are no actual triangles or circles, but our minds fill in the missing information to create familiar shapes.

Law of Prägnanz (law of simplicity / law of good shape)
Reality is organized or reduced to the simplest form possible. E.g., we see the left image as a series of circles rather than as a much more complicated shape.

Law of Common Fate
Elements with the same direction of movement are perceived as a collective or unit.

Law of Symmetry
Symmetrical images are perceived collectively, despite their distance from each other.

Other interesting illusions
There is one interesting phenomenon in perception called change blindness: even large changes in a scene go unnoticed. Reasons for this are short distractions caused by mud splashes, brief flicker, or cover boxes.

Summary
Why do we need to know about all these illusions and Gestalt Laws? Because we can better guide the distribution of attention and perception. This distribution depends on culture (cultural background), custom/habit, perception, processing, and experience. If we know how to organize elements in a scene, and we know which visual phenomena could produce an illusion that breaks attention, we can account for this and create better user interfaces, and products in general, that can be used easily.

Chapter 4: Principles for UI-Design by Shneiderman

Overview
1 Excursus: UI vs. UX
2 Principle 1: Recognize User Diversity
3 Principle 2: Follow the Eight Golden Rules
4 Principle 3: Prevent Errors

Excursus: User Interface Design (UI) vs. User Experience Design (UX)
User Interface Design (UI) and User Experience Design (UX) are both crucial to a product and work closely together.
But despite their professional relationship, the roles themselves are quite different, referring to distinct aspects of the product development process and the design discipline.

User Interface Design – "this is how it looks":
- The user interface (UI) is anything a user may interact with to use a digital product or service. This includes everything from screens and touchscreens to keyboards, sounds, and even lights.
- UI design focuses on how a product's surfaces look and function, and develops the more tangible elements (of the application).

User Experience Design – "this is how it feels":
- "User experience" encompasses all aspects of the end-user's interaction with the company, its services, and its products. (Don Norman, Jakob Nielsen)
- UX design focuses on the user's journey to solve a problem, by looking at the users and the problems they encounter.

Note: user experience ≠ usability. Usability is a quality attribute of the UI, covering whether the system is easy to learn, efficient to use, pleasant, and so forth.

Principle 1: Recognize User Diversity
Great user experience starts with a good understanding of your users. Not only do you want to know who they are, you want to dive deeper into understanding their motivations, mentality, and behavior. Accessibility and diversity ensure the usability of products by everyone; an inclusive UX design combines different perspectives to meet diverse user experiences.

There is no "average" user, and even one specific task will have various requirements depending on who wants to perform it. To better describe your target user, you can use persona profiles that characterize the typical user of your application (not the average!)[1]. Creating software that is appropriate for a specific target group (e.g., 0.1% of the population) may still find a large user base (in Europe and the US this may be more than half a million people!).

Persona: a user-centered design and human-computer interaction technique that promotes immersion into end-users' needs (Alan Cooper).

Know your user:
- What is the background of the user? Age, gender, education, cultural background
- What are their goals, motivation, personality?
- What is their experience? Different people have different requirements for their interaction with computers: novice users, knowledgeable intermittent users, expert frequent users

User vs. Customer
The customer is often not the user!
- Create awareness with the customer for the user
- Design for the user, not to please the customer (this is a difficult one)
- A clear assessment of the target group and personas will help to convince the customer

In order to identify your users, customers, and the other involved parties you need to take into account, you can conduct a stakeholder analysis. How do you deal with stakeholders of the product?
- Identify stakeholders
- Categorize stakeholders: interest in the project, influence on the team/project (power), attitude (positive/negative), reasons for the attitude
- Place people in a power/interest diagram
- Revisit throughout the project
(See also: http://www.mindtools.com/pages/article/newPPM_07.htm)

Make sure to focus on what is really necessary! Functionality should only be added if it has been identified as helping to solve tasks. Temptation: if additional functionality is cheap to include, it is often added anyway – this can seriously compromise the user interface concept (and potentially the whole software system)!
Principle 2: Follow the Eight Golden Rules
The Eight Golden Rules were introduced by Ben Shneiderman and summarize what needs to be considered when creating user interfaces. The book "Designing the User Interface: Strategies for Effective Human-Computer Interaction"[2] is a must-read for everyone who wants to learn more about the foundations of Human-Computer Interaction and how to apply them to everyday work.

01: Strive for consistency
Consistency means using the same design patterns and the same sequences of actions for similar situations – this includes the color scheme, typography, and terminology. Consistency plays an important role in helping users become familiar with the digital landscape of your product, so they can achieve their goals more easily. In a specific environment it is defined by guidelines (e.g., for GNOME, KDE, Mac OS X, Windows XP, Java Swing). Note that on the WWW it gets pretty hard: there are no real guidelines and no authority (how are links represented? where is the navigation?), and styles and "fashion" change quickly...

Consistency can be divided into the following levels:
- Lexical ("is found in the dictionary"): e.g., coding consistent with common usage (red = bad, green = good), consistent abbreviation rules, the character-delete key is always the same
- Syntactic ("is spelled correctly"): e.g., error messages are placed at the same (logical) place, commands are all given either first or last, menu items can be found at the same place
- Semantic ("means the same"): e.g., global commands that are always available (Help, Abort, Undo), operations are valid on all reasonable objects

Note: semantic consistency is applicable to command lines, user interfaces, keyboard shortcuts, speech interfaces, toolbars, menus, selection operations, and gestures.

02: Enable frequent users to use shortcuts
Shortcuts are especially beneficial for tasks that need to be completed very frequently. Expert users might find the following features helpful: abbreviations, function keys, hidden commands, and macro facilities. However, do not forget the novice users: give explanations or more information about these functionalities (e.g., a help menu for shortcuts).

03: Offer informative feedback
Users need to be informed of what is happening at every stage of the process. Make sure that the feedback is meaningful, relevant, clear, and fits the context:
- For frequent actions it should be modest and peripheral
- For infrequent actions it should be more substantial

A good example of applying this is indicating to users where they are in the process when working through a multi-page questionnaire. A bad example, which we often see, is an error message that shows an error code instead of a human-readable and meaningful message.

04: Design dialogues to yield closure
Just like a good story, a sequence of actions needs to have a beginning, a middle, and an end. Don't keep your users wondering! Tell them the outcome of their actions and whether the task has been completed. This feedback should be considered at all possible levels:
- In the large: in a web shop, it should be clear when I am in the shop and when I have successfully checked out; a cash dispenser should be designed so that you do not forget your bank card
- In the small: a progress bar shows the status of the action

05: Error prevention/handling
A good interface needs to be designed to avoid errors as much as possible.
But when errors do happen, your system needs to make it easy for the user to understand the issue and know how to solve it. Simple ways to handle errors include displaying clear error notifications along with descriptive hints for solving the problem.

"Anything that can go wrong, will go wrong."[3] Thus, you need to consider:
- If someone can interpret the interface wrongly, they will
- If something can be used incorrectly, eventually it will be
- If something is designed for the ideal user, then it is not designed for real users

You are the expert on your system; you know much more than your users, and some components might seem trivial to you. They are not.

06: Permit easy reversal of actions
Users need an "undo" option after a mistake is made. They will feel less anxious and be more likely to explore options if they know there is an easy way to reverse any accidents. This rule can be applied to any action, group of actions, or data entry; the mechanism can range from a simple button to a whole history of actions (a minimal sketch of such a history appears at the end of this section). Note that undo is not trivial if the user does not operate sequentially: write a text, copy it into the clipboard, undo the writing – the text is still in the clipboard! In certain settings, processes and basic physical laws prevent the reversal of actions. Here an interaction layer (buffering user interaction) may be possible – but not always (e.g., brakes, emergency stop).

07: Support internal locus of control
"In personality psychology, locus of control is the degree to which people believe that they have control over the outcome of events."[4] Users need to feel in charge of the system, not the other way round. Avoid surprises, interruptions, or anything that hasn't been prompted by the users. The elements of a UI need to show that users are the initiators of actions rather than the responders.

08: Reduce short-term memory load
Human attention is limited, and we are only capable of maintaining around five items in our short-term memory at one time. As Nielsen stated: recognition is easier than recall.[5] So if you keep your interfaces simple and consistent, following patterns, standards, and conventions, you are already contributing to better recognition and ease of use.

Like all rules of thumb or principles, the Eight Golden Rules of interface design help with decisions and provide a basis for argumentation. The art lies in interpreting them for the problem at hand, and it is always particularly interesting when you must weigh individual points against each other.
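As announced under rule 06, here is a minimal sketch (not from the lecture) of one common way to permit easy reversal of actions: keeping a history of undoable commands. All names are illustrative, and it also hints at the clipboard caveat above, since only actions pushed onto the history can be reversed.

```python
class TextBuffer:
    """A tiny editable text buffer with an undo history (rule 06 sketch)."""

    def __init__(self):
        self.text = ""
        self._history = []           # stack of reversal functions

    def insert(self, s: str):
        pos = len(self.text)
        self.text += s
        # Record how to reverse this action before anything else happens.
        self._history.append(lambda: self._delete(pos, len(s)))

    def _delete(self, pos: int, length: int):
        self.text = self.text[:pos] + self.text[pos + length:]

    def undo(self):
        if self._history:
            self._history.pop()()    # run the most recent reversal

buf = TextBuffer()
buf.insert("Hello, ")
buf.insert("world")
buf.undo()
print(buf.text)  # "Hello, " -- the last insertion was reversed
```

A design note: because each action records its own inverse, the same pattern scales from a single undo button to a full multi-step history, which is exactly the range rule 06 describes.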
Principle 3: Prevent Errors
"Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action." (Nielsen's usability heuristics[5])

Essentially, this means alerting users when they are making an error, with the intention of making it easy for them to do whatever they are doing without making a mistake. The main reason this principle is important is that we humans are prone to mistakes and will always make them. The reasons for mistakes differ: misinterpretation of the system, cognitive overload and lacking attention, distraction, intentional manipulation of the system, and so forth. A good user interface design should prevent and limit possible user errors and take appropriate action. Some examples:
- People can and will type information in different formats, e.g., the prefix of a phone number, the date format, etc.
- The choices in a dropdown menu should strive for consistency in their options and give clear communication about possible consequences.
- Always give clear communication about the user's options and next possible actions. A bad example is a dialog that lacks information about what will happen when the user clicks "Done".
- A good example is a confirmation step in the dialog when important data is about to be deleted, to prevent users from deleting something by accident.

Human error may also be a starting point for finding design problems. Design implications:
- Assume all possible errors will be made
- Minimize the chance of making errors (constraints)
- Minimize the effect that errors have (this is difficult!)
- Include mechanisms to detect errors
- Attempt to make actions reversible
- Use forcing functions (interlocks, lock-ins, lockouts)

Errors are routinely made. Communication and language are used between people to clarify, more often than one imagines, and a common understanding of goals and intentions between people helps to overcome errors. Users are often distracted from the task at hand, so prevent unconscious errors by offering suggestions, utilizing constraints, and being flexible.

There are two fundamental categories of errors:
1. Mistakes are made when users have goals that are inappropriate for the current problem or task; even if they take the right steps to complete their goals, the steps will result in an error.
2. Slips occur when users intend to perform one action but end up doing another (often similar) action. For example, typing an "i" instead of an "o" counts as a slip; accidentally putting liquid hand soap on one's toothbrush instead of toothpaste is also a slip. Slips are typically made when users are on autopilot and do not fully devote their attention to the task at hand.

What types of slips can users make?
- Capture errors: two actions share a common starting point, and the more familiar one captures the unusual one (driving to work on Saturday instead of to the supermarket)
- Description errors: performing an action that is close to the intended one (putting the cutlery in the bin instead of the sink)
- Data-driven errors: external data intrudes into the intended action (e.g., typing a number you are currently looking at instead of the one you meant to enter)
- Associated action errors: you think of something and that influences your action (e.g., saying "come in" after picking up the phone)
- Loss-of-activation errors (~ forgetting): you decided to do something, but on the way you forget what you wanted to do; going back to the starting place, you remember
- Mode errors: you forget that you are in a mode that does not allow a certain action, or in which an action has a different effect

If something goes wrong, we attempt corrections on the lowest level.
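A minimal sketch of the error-prevention idea discussed above: instead of rejecting the many date formats people will inevitably type, accept them and normalize to one canonical form, and fail with a constructive message rather than an error code. The set of accepted formats is an illustrative assumption.

```python
from datetime import datetime

# Accept the formats users actually type; normalize to ISO 8601 (assumed set).
ACCEPTED_FORMATS = ["%Y-%m-%d", "%d.%m.%Y", "%d/%m/%Y", "%B %d, %Y"]

def normalize_date(user_input: str) -> str:
    """Parse a date typed in any accepted format; return it as YYYY-MM-DD."""
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(user_input.strip(), fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    # Constructive, human-readable message instead of a bare error code (rule 05).
    raise ValueError(f"Could not read {user_input!r}; try e.g. 2024-01-31 or 31.01.2024")

print(normalize_date("31.01.2024"))        # 2024-01-31
print(normalize_date("January 31, 2024"))  # 2024-01-31
```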
References
1. Cooper, A.: The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity. 1st ed. Sams – Pearson Education; 1999.
2. Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer Interaction. 3rd ed. Addison-Wesley Longman Publishing Co., Inc.; 1997.
3. Roe, A.: The Making of a Scientist. p. 127.
4. Rotter, J.B.: Generalized expectancies for internal versus external control of reinforcement. Psychol Monogr Gen Appl. 1966;80(1):1-28. doi:10.1037/h0092976
5. Nielsen, J., Molich, R.: Heuristic evaluation of user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems – CHI '90. ACM Press; 1990:249-256. doi:10.1145/97243.97281

Chapter 4: Principles to Support Usability by Dix et al.

Overview
1 Usability 101 (by Jakob Nielsen)
2 Design Rules
3 Principle 1: Learnability
4 Principle 2: Flexibility
5 Principle 3: Robustness

Usability 101 by Jakob Nielsen
Usability is a quality attribute that assesses how easy user interfaces are to use. The word "usability" also refers to methods for improving ease-of-use during the design process. Usability has five quality components:
- Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
- Efficiency: Once users have learned the design, how quickly can they perform tasks?
- Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?
- Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
- Satisfaction: How pleasant is it to use the design?

On the web, usability is a necessary condition for survival. If a website is difficult to use, people leave. If the homepage fails to clearly state what a company offers and what users can do on the site, people leave. If users get lost on a website, they leave. If a website's information is hard to read or doesn't answer users' key questions, they leave. Note a pattern here? Users do not spend much time reading a website manual or trying to figure out an interface (unless they are obliged to). If there are plenty of alternatives available, leaving is the first line of defence when users encounter a difficulty.

There are several evaluation methods for getting users' feedback to improve usability. Some are based on evaluation by UX experts, but probably the most common is usability testing with users, i.e., the actual end users carry out typical tasks with a complete product or a prototype. Alongside a usability test, other methods can be used to collect complementary data, for example questionnaires, interviews, and focus group discussions.

Types of Design Rules
There is a terminology for the different types of design rules. Two dimensions characterize them: authority (whether a rule must be followed or is just suggested) and generality (whether a rule applies to many design situations or is focused on a specific application situation). The types, ordered roughly by increasing authority:
- Principles: abstract design rules
- Golden rules and heuristics: more concrete than principles
- Standards: (very) detailed design rules
- Design patterns: generic solutions for a specific problem
- Style guides: provided for devices, operating systems, widget libraries

For an example of a style guide, visit https://design.google. A style guide helps to ensure a continuous product experience: no matter how, when, or where a customer experiences a brand or a product, they experience the same underlying traits. It is this consistency across every touchpoint that helps to create an association with a brand or a product. The usage experience feels "complete" and consistent and helps build habits with the product.
Together with a usability test, other methods can be used to collect complementary data. Those are, for example, questionnaires, interviews and focus group discussions. HUMAN COMPUTER INTERACTION 2 / 10 Types of Design Rules There is a terminology to describe the different types of Design rules. Principles: Abstract design rules principles Golden rules and heuristics: More concrete than principles Golden rules Standards: standards (Very) detailed design rules Design patterns: design patterns Generic solution for a specific problem Style guides: Style guides Provided for devices, operating systems, widget libraries increasing authority Authority: whether a rule must be followed or whether it is just suggested Generality: applied to many design situations or focused on specific application situation For an example of a style guide, visit: https://design.google A style guide helps to ensure a continuous product experience. It means that no matter how, when or where a customer experiences a brand or a product, they are experiencing the same underlying traits. It’s this consistency across every touchpoint that helps to create an association with a brand or a product. The usage experiences feels ‘complete’ and consistent and helps building habits with the product. HUMAN COMPUTER INTERACTION 3 / 10 Principle 1: Learnability Learnability: the ease with which new users can begin effective interaction and achieve maximal performance. Learnability captures how well the user can start using the new system and which prior knowledge is required for this. Therefore, several aspects of learnability need to be considered, which include: Predictability Determining effect of future actions based on past interaction history and the visibility of operations Synthesizability Ability of the user to assess the effect of past operations on the current state – this means that the user should see the changes of an operation given through immediate vs. eventual feedback HUMAN COMPUTER INTERACTION 4 / 10 Familiarity To which extent can the user apply prior knowledge to new system – remember: affordance (guessability). Generalizability Can specific interaction knowledge be extended to new situations Image from the movie Star Trek IV: The Voyage Home Consistency Likeness in input/output behavior arising from similar situations or task objectives Excurse: The power of gestures Gestures allow direct changes to UI elements using touch and help users perform tasks rapidly and intuitively. Through the ubiquity of mobile smartphones, some gestures have become common synonyms for a specific action (e.g., zoom in through touching the surface with two fingers and moving them apart.) This previous knowledge can be used in new applications and there are common guidelines from big software companies how to implement these gestures. Image source: Julian Burford E.g.: Google Gesture Guide Microsoft Gesture Guidelines Apple Gesture Guidelines Android Gesture Guidelines HUMAN COMPUTER INTERACTION 5 / 10 Principle 2: Flexibility Flexibility: the multiplicity of ways the user and system exchange information. The flexibility of the interaction with a system is determined by several components, which will be introduced in this section. Dialogue initiative The dialogue initiative includes the freedom from system-imposed constraints on input dialog. 
Principle 2: Flexibility
Flexibility: the multiplicity of ways the user and system exchange information. The flexibility of the interaction with a system is determined by several components, which are introduced in this section.
Dialogue initiative: the freedom from system-imposed constraints on the input dialogue. There are two types of dialogue initiative: user preemptiveness (the user initiates a dialogue) and system preemptiveness (the system initiates a dialogue).
Multithreading: the ability of the system to support user interaction for several tasks at a time. Two types of multithreading in UX design are concurrent multimodality and interleaving multimodality:
Concurrent multimodality: multi-modal dialogue, e.g., editing text and a beep (incoming mail) at the same time.
Interleaving multimodality: permits temporal overlap between separate tasks, but the dialogue is restricted to a single task at a time, e.g., a window system (window = task) with modal dialogues, where the user interacts with just one window at a given time.
HUMAN COMPUTER INTERACTION 6 / 10
Task migratability: some responsibilities for a given task can be passed from the user to the system, e.g., the spell checker in a word-processing program.
Substitutivity: a system needs to allow equivalent values of input and output to be substituted for each other → representation multiplicity, e.g., different currencies, or cm and inch.
Customizability: the user interface needs to be modifiable by the user (adaptability) or by the system (adaptivity). Both terms describe how an intelligent mechanism can achieve the goal of tailoring a UI to a specific user, e.g., through combinations of components and attributes.
Adaptability: the user's ability to adjust the form of input and output → the user is actively and continuously involved in the adaptation process of the UI.
Adaptivity: automatic customization of the user interface by the system → the system collects user information based on his or her activity.
HUMAN COMPUTER INTERACTION 7 / 10

Principle 3: Robustness
Robustness: the level of support provided to the user in determining successful achievement and assessment of goal-directed behaviour.
Observability: the ability of the user to evaluate the internal state of the system from its perceivable representation. A system is considered "observable" if its current state can be estimated using only information from its outputs; in a visual interface, this means the displayed information must be accessible to the user.
Recoverability: the ability of the user to correct a recognized error. Reachability (states): forward (redo) / backward (undo) recovery. The effort for a given task should be adequate to its importance or consequences: e.g., more effort or steps should be necessary to delete a file than to merely move it. (A minimal undo/redo sketch follows at the end of this section.)
Task conformance: the degree to which system services support all tasks of the user.
HUMAN COMPUTER INTERACTION 8 / 10
However, one should keep in mind that adding more functionality to an interface increases complexity and can reduce ease of use. Thus, a balance should be kept between supporting tasks and not overloading users.
Responsiveness: how the user perceives the rate of communication with the system. Responses should be perceived as short and instantaneous.
HUMAN COMPUTER INTERACTION 9 / 10
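As promised above, here is a minimal, illustrative sketch of backward (undo) and forward (redo) recovery using two stacks. The class and method names are invented for this sketch; they do not come from Dix et al.

```python
class History:
    """Backward (undo) and forward (redo) recovery via two stacks."""

    def __init__(self):
        self.undo_stack = []  # states we can go back to
        self.redo_stack = []  # undone states we can go forward to again

    def do(self, state):
        """Perform an action: record the new state, invalidate redo."""
        self.undo_stack.append(state)
        self.redo_stack.clear()

    def undo(self):
        """Backward recovery: return to the previous state, if any."""
        if self.undo_stack:
            self.redo_stack.append(self.undo_stack.pop())
        return self.undo_stack[-1] if self.undo_stack else None

    def redo(self):
        """Forward recovery: re-apply an undone state, if any."""
        if self.redo_stack:
            self.undo_stack.append(self.redo_stack.pop())
        return self.undo_stack[-1] if self.undo_stack else None

h = History()
h.do("typed 'Hello'")
h.do("typed ' world'")
print(h.undo())  # typed 'Hello'
print(h.redo())  # typed ' world'
```

Clearing the redo stack on every new action mirrors how most editors behave: once the user diverges from an undone path, forward recovery to the old path is no longer meaningful.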
Summary
Poor design criteria waste users' time and hinder effective interaction with human-centred systems. It is therefore important that, before any design of an interface is attempted, an in-depth analysis of task and user needs is undertaken. Alan Dix has proposed a taxonomy of three design principles which helps guide developers in designing user-friendly systems. The three principles comprise Learnability, Flexibility, and Robustness, with the respective sub-categories described above.
HUMAN COMPUTER INTERACTION 10 / 10

Chapter 5: Models of Human Computer Interaction

Overview
1 Descriptive models
2 GOMS
3 Keystroke-Level Model (KLM)
4 Summary
HUMAN COMPUTER INTERACTION 1/8

What are descriptive models?
A descriptive model describes a system or other entity and its relationship to its environment. With descriptive models, it is possible to describe real-world events and the relationships between the factors responsible for them.
Descriptive models:
Provide a basis for understanding, reflecting, and reasoning about certain facts and interactions
Provide a conceptual framework that simplifies a, potentially real, system
Are used to inspect an idea or a system and make statements about their probable characteristics
Are used to reflect on a certain subject
Can reveal flaws in the design and style of interaction
To understand which type of data analysis you are performing, you can consult the chart created by Roger D. Peng and Jeff Leek [1]. Further examples are: descriptions, statistics, performance measurements; taxonomies, user categories, interaction categories.
HUMAN COMPUTER INTERACTION 2/8

GOMS = Goals, Operators, Methods, Selection rules
GOMS is an "engineering model" of human-computer interaction from the field of cognitive modelling. The subject of cognitive modelling is the development of concepts, methods, and tools for the analysis, modelling, and design of complex human-machine systems in which human information processing at higher cognitive levels plays a special role. The GOMS model describes a user's cognitive structure in terms of the following four components:
Goals: the goals the user wants to achieve by using a system. These should not be too abstract, creative, or open-ended problem-solving tasks (e.g., performing "cut and paste" in a word processor).
Operators: the smallest, atomic actions of the user. These are definable at various levels of abstraction, but there is a tendency towards concrete actions.
The operators have a context-free, parameterizable duration (e.g., a mouse click is assigned an execution time of 200 ms).
Methods: learned sequences of subgoals and operators to achieve a specific goal. E.g., marking a paragraph: 1. move the mouse to the beginning of the paragraph, 2. press and hold the mouse button, 3. move the mouse to the end of the paragraph → the method consists of 3 operators.
Selection rules: under certain circumstances, several methods are possible for achieving a particular goal; for example, a paragraph can also be deleted character by character (cf. the method above). Personal, individual selection rules of the user decide, based on other factors, which method is used.
User tasks are split into goals, which are achieved by solving sub-goals in a divide-and-conquer manner. This "simulated" human problem-solving process is modelled in GOMS by decomposing the task into a hierarchy of goals. Following the Model Human Processor, the operators represent elementary perceptual, cognitive, or motor acts, each of which can be assigned a mean execution time.
HUMAN COMPUTER INTERACTION 3/8
The overall goal of the GOMS model for user experience design is to eliminate useless or unnecessary interactions and to improve the efficiency of human-computer interaction.
Principles of GOMS:
1. To improve the performance of a cognitive skill, eliminate unnecessary operators from the method used to do the task (or use other methods).
2. The operators involved in cognitive skills are highly specific to the methods used for a given task.
3. Task performance can be improved by providing a set of error-recovery methods.
HUMAN COMPUTER INTERACTION 4/8
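To make the four components tangible, here is a minimal, illustrative sketch of a GOMS-style task description as a plain data structure. The concrete goal, methods, and selection rule are invented for this sketch, not taken from the lecture.

```python
# Goal: delete a paragraph. Two methods achieve it; a selection
# rule picks one depending on paragraph length.
goal = {
    "goal": "delete paragraph",
    "methods": {
        "select-and-cut": [            # learned sequence of operators
            "point to paragraph start",
            "press and hold mouse button",
            "point to paragraph end",
            "release mouse button",
            "press CTRL+X",
        ],
        "delete-char-by-char": [
            "point to paragraph end",
            "click mouse button",
            "press BACKSPACE once per character",
        ],
    },
}

def select_method(paragraph_length):
    """Selection rule: character-wise deletion only pays off for
    very short paragraphs."""
    return "delete-char-by-char" if paragraph_length < 5 else "select-and-cut"

print(select_method(3))    # delete-char-by-char
print(select_method(200))  # select-and-cut
```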
Remember: The Model Human Processor models how information flows through a human from the viewpoint of information processing.
[Figure: example of the connection between the Model Human Processor and the GOMS model in early research [3].]
There are four different types of GOMS concepts: CMN-GOMS, the Keystroke-Level Model (KLM), NGOMSL, and CPM-GOMS. Although all these concepts give predictive information about how individuals use computer systems, they each highlight different parts of the task-completion process. In choosing a GOMS concept to assess a design, one must know "the type of task the users will be engaged in and the types of information gained by applying the technique" [2]. The following section introduces the Keystroke-Level Model in further detail.

Keystroke-Level Model (KLM)
The KLM is a simplified version of GOMS. Using the KLM, predictions can be made about the execution time of a task: the analyst lists the operator sequence of the task and obtains the predicted execution time by adding up the individual operator times. A KLM model is represented in sequence form, i.e., like a special program run "on key level". No goals, methods, or selection rules are specified:
→ Only operators on a keystroke level
→ No sub-goals
→ No methods
→ No selection rules
HUMAN COMPUTER INTERACTION 5/8
There are several KLM operators in this model, each assigned a duration:
K – Keystroke: typing one letter, number, etc., or a function key like CTRL or SHIFT. Expert typist (90 wpm): 0.12 sec; average skilled typist (55 wpm): 0.20 sec; average non-secretarial typist (40 wpm): 0.28 sec; worst typist (unfamiliar with keyboard): 1.2 sec
H – "Homing": moving the hand between mouse and keyboard. 0.4 sec
B / BB – Pressing / clicking a mouse button. 0.1 sec / 2 × 0.1 sec
P – Pointing with the mouse to a target. 0.8 to 1.5 sec, with an average of 1.1 sec; can also be calculated using Fitts' law
D(nD, lD) – Drawing nD straight line segments of total length lD. 0.9 × nD + 0.16 × lD sec
M – Subsumed time for mental acts; sometimes used as "look-at". 1.35 sec (1.2 sec according to [Olson and Olson 1995])
R(t) or W(t) – System response (or "work") time during which the user cannot act. Dependent on the system, to be determined on a system-by-system basis
(A small calculation sketch using these operator times follows at the end of this chapter.)
HUMAN COMPUTER INTERACTION 6/8
Reasons to use KLM:
KLM can help evaluate UI designs, interaction methods, and trade-offs
If common tasks take too long or consist of too many steps, shortcuts can be provided
Predictions are mostly remarkably accurate: +/- 20%
Sometimes merely comparing the number of occurrences of the different operators for different designs reveals the difference
Extensions for novel (mobile, automotive, touch) interfaces exist (see references)
Limitations of the KLM:
o Only measures one aspect of performance: time (= execution time, not the time to acquire or learn a task)
o Only considers expert users (there is a broad variance of digital literacy and knowledge)
o Only considers routine unit tasks
o The method needs to be specified step by step
o The execution of the method must be error-free
o The mental operator M aggregates different mental operations and therefore cannot model a deeper representation of the user's mental operations; if this is crucial, a GOMS model has to be used (e.g., model K2)
Summary
Like any tool for measuring human behaviour, GOMS has its advantages and disadvantages. First, it gives both qualitative and quantitative measurements, which can provide powerful insight into how users will approach a design. Additionally, because it is a model, it does not require any actual users; usability testing with physical subjects can be expensive, and producing accurate results with it can be difficult. Finally, once GOMS data has been dissected and turned into a design change, it is easy to modify the model for future iterations until the design is optimal.
(CMN-)GOMS:
Pseudo-code (no formal syntax); very flexible
Goals and sub-goals
Methods are informal programs
Selection rules
→ Tree structure: use different branches for different scenarios
Time-consuming to create
KLM:
Simplified version of GOMS
Only operators on keystroke level → focus on very low-level tasks
No multiple goals, no methods, no selection rules
→ Strictly sequential
Quick and easy
HUMAN COMPUTER INTERACTION 7/8
Problems with GOMS / KLM in general:
Only suited for well-defined routine cognitive tasks
Assumes expert users
Does not consider slips or errors, fatigue, social surroundings, ...
The KLM and GOMS models have in common that they only predict error-free expert behaviour; in contrast to GOMS, however, the KLM needs the method to be specified because it does not predict the method itself [3].
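As a sketch of how a KLM prediction is computed in practice, the following sums the operator times from the table above for a small example task. The task itself and the chosen per-operator values (e.g., using the average skilled typist's K) are illustrative assumptions.

```python
# KLM operator times in seconds, taken from the table above
# (K: average skilled typist; P: average pointing time).
OPERATOR_TIMES = {
    "K": 0.20,  # keystroke
    "H": 0.40,  # homing between keyboard and mouse
    "B": 0.10,  # pressing or releasing a mouse button (a click is BB)
    "P": 1.10,  # pointing with the mouse to a target
    "M": 1.35,  # mental act
}

def predict_time(sequence):
    """Sum the operator times of a KLM sequence, e.g. 'HPBB'."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Example: move hand to mouse (H), point to a menu (P), click it (BB),
# mentally locate the entry (M), point to it (P), click it (BB).
task = "HPBBMPBB"
print(f"predicted execution time: {predict_time(task):.2f} s")  # 4.35 s
```

Even this toy example shows the typical use of the model: compare two candidate operator sequences for the same task, and the one with the smaller predicted sum is the more efficient design.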
References
1. Leek JT, Peng RD. What is the question? Science. 2015;347(6228):1314-1315. doi:10.1126/science.aaa6146
2. John BE, Kieras DE. Using GOMS for user interface design and evaluation: which technique? ACM Trans Comput-Hum Interact. 1996;3(4):287-319. doi:10.1145/235833.236050
3. Kieras D. A Guide to GOMS Model Usability Evaluation using GOMSL and GLEAN3.
4. Card SK. The Psychology of Human-Computer Interaction. CRC Press; 2018.
Further Resources:
Card S. K., Newell A., Moran T. P. (1983). The Psychology of Human-Computer Interaction. Lawrence Erlbaum Associates Inc.
Card S. K., Moran T. P., Newell A. (1980). The Keystroke-Level Model for User Performance Time with Interactive Systems. Communications of the ACM 23(7), 396-410.
John, B., Kieras, D. (1996). Using GOMS for user interface design and evaluation: which technique? ACM Transactions on Computer-Human Interaction, 3, 287-319.
D. A. Norman. The Design of Everyday Things. Basic Books, 2002. ISBN: 978-0465067107
B. Shneiderman. Designing the User Interface: Strategies for Effective Human-Computer Interaction, 5th Edition. 2009. ISBN: 978-0321537355
L. Suchman. Plans and Situated Actions: The Problem of Human-Machine Communication. 1987. ISBN: 978-0521337397
Alan Dix, Janet Finlay, Gregory Abowd and Russell Beale. (2003). Human-Computer Interaction (3rd edition). Prentice Hall.
HUMAN COMPUTER INTERACTION 8/8

Chapter 5: Models: Predictive Models for Interaction

Overview
1 Fitts' law
2 Steering law
3 Hick's law
HUMAN COMPUTER INTERACTION 1/8

Fitts' Law: Introduction
Fitts' law is a robust model of human psychomotor behaviour. Paul Fitts worked at the intersection of technology and human motor abilities, which made him an early expert on human-machine interaction. We use Fitts' law when we want to predict the movement time for rapid, aimed pointing tasks, like clicking on buttons or touching icons. The model was developed in 1954 by Paul Fitts, and it describes the movement time in terms of the distance to and size of a target, for a given device. Generally speaking, this law predicts how long it takes us to interact with a specific interface using a particular device. While not originally described for human-computer interaction, it was rediscovered in this field in 1978. It "was a major factor leading to the mouse's commercial introduction by Xerox" (Stuart Card). One of the reasons was that the law made a precise optimisation of existing solutions demonstrable, so people were eager to find out more about the computer mouse and use it for their work. Subsequently, the potential and benefits of this model became more transparent, and today it is widespread and often discussed in the literature.

Derivation from Signal Transmission
Fitts' law is derived from a formula well known in signal transmission: the Shannon-Hartley theorem. This theorem describes the maximum rate at which information can be transmitted over a communications channel with a certain bandwidth in the presence of noise:

C = B * log2(1 + S/N)

C: channel capacity (bits/second)
B: bandwidth of the channel (Hertz)
S: total signal power over the bandwidth (Watt)
N: total noise power over the bandwidth (Watt)
S/N: signal-to-noise ratio (SNR) of the communication signal to the Gaussian noise interference (as a linear power ratio; SNR(dB) = 10 * log10(S/N))

Paul Fitts was well educated in electrical engineering. He knew this theorem very well, and that is how he came up with the idea of using it to model his specific problem: analysing the time it takes to acquire a particular target when your pointing system (this can be anything) is currently not on the target.
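As a quick numerical illustration of the theorem, here is a small sketch; the channel values are made up purely for intuition (roughly a classic 3 kHz telephone line with a 30 dB SNR).

```python
import math

def channel_capacity(bandwidth_hz, signal_w, noise_w):
    """Shannon-Hartley: maximum error-free data rate in bits/second."""
    return bandwidth_hz * math.log2(1 + signal_w / noise_w)

# Hypothetical 3 kHz channel with a linear SNR of 1000 (= 30 dB):
print(channel_capacity(3000, 1000, 1))  # ~29,901 bits/second
```

Note how the capacity grows only logarithmically with the signal-to-noise ratio; the same logarithmic shape reappears in Fitts' law below.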
HUMAN COMPUTER INTERACTION 2/8

Fitts' law – Equation
As you will notice, the formula of Fitts' law looks quite similar to that of the Shannon-Hartley theorem. The time to acquire a target is a function of the distance to the target and its size, and it depends on the particular pointing system:

MT = a + b * log2(1 + D/W)

MT: movement time
a, b: constants dependent on the pointing system
D: distance to the target area
W: width of the target

Fitts' law – Index of difficulty (ID)
One part of the Fitts' law equation represents the index of difficulty (ID). It describes how difficult a task is, independent of the device or the method used to reach the target:

MT = a + b * ID, where ID = log2(1 + D/W)

A closer look at this equation shows that a small but close target is exactly as difficult to reach as a large target that is far away, because both yield the same D/W ratio. [In the accompanying figure, the two targets satisfy ID_target1 = ID_target2.]
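To ground the two equations above, here is a small sketch that computes ID and MT. The constants a and b and the target geometries are invented example values, since a and b must be fitted empirically for each pointing system.

```python
import math

def index_of_difficulty(d, w):
    """Fitts' index of difficulty in bits: log2(1 + D/W)."""
    return math.log2(1 + d / w)

def movement_time(d, w, a=0.1, b=0.15):
    """Predicted movement time MT = a + b * ID (a, b: example values)."""
    return a + b * index_of_difficulty(d, w)

# A small, close target and a large, distant one with the same D/W
# ratio are equally difficult:
print(index_of_difficulty(100, 20))   # ~2.58 bits
print(index_of_difficulty(500, 100))  # ~2.58 bits
print(f"MT: {movement_time(100, 20):.3f} s")  # 0.488 s
```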
Fitts' law – T