Chapter 8: DATA GATHERING

8.1 Introduction
8.2 Five Key Issues
8.3 Data Recording
8.4 Interviews
8.5 Questionnaires
8.6 Observation
8.7 Choosing and Combining Techniques

Objectives
The main goals of the chapter are to accomplish the following:
• Discuss how to plan and run a successful data gathering program.
• Enable you to plan and run an interview.
• Empower you to design a simple questionnaire.
• Enable you to plan and carry out an observation.

8.1 Introduction

Data is everywhere. Indeed, it is common to hear people say that we are drowning in data because there is so much of it. So, what is data? Data can be numbers, words, measurements, descriptions, comments, photos, sketches, films, videos, or almost anything that is useful for understanding a particular design, user needs, and user behavior. Data can be quantitative or qualitative. For example, the time it takes a user to find information on a web page and the number of clicks to get to the information are forms of quantitative data. What the user says about the web page is a form of qualitative data. But what does it mean to collect these and other kinds of data? What techniques can be used, and how useful and reliable is the data that is collected?

This chapter presents some techniques for data gathering that are commonly used in interaction design activities. In particular, data gathering is a central part of discovering requirements and evaluation. Within the requirements activity, data gathering is conducted to collect sufficient, accurate, and relevant data so that design can proceed. Within evaluation, data gathering captures user reactions and their performance with a system or prototype. All of the techniques that we will discuss can be done with little to no programming or technical skills. Recently, techniques for scraping large volumes of data from online activities, such as Twitter posts, have become available. These and other techniques for managing huge amounts of data, and the implications of their use, are discussed in Chapter 10, "Data at Scale."

Three main techniques for gathering data are introduced in this chapter: interviews, questionnaires, and observation. The next chapter discusses how to analyze and interpret the data collected. Interviews involve an interviewer asking one or more interviewees a set of questions, which may be highly structured or unstructured; interviews are usually synchronous and are often face-to-face, but they don't have to be. Increasingly, interviews are conducted remotely using one of the many teleconferencing systems, such as Skype or Zoom, or on the phone. Questionnaires are a series of questions designed to be answered asynchronously, that is, without the presence of the investigator. These questionnaires may be paper-based or available online. Observation may be direct or indirect. Direct observation involves spending time with individuals observing their activities as they happen. Indirect observation involves making a record of the user's activity as it happens, to be studied at a later date. All three techniques may be used to collect qualitative or quantitative data.

Although this is a small set of basic techniques, they are flexible and can be combined and extended in many ways. Indeed, it is important not to focus on just one data gathering technique, if possible, but to use them in combination so as to avoid biases that are inherent in any one approach.
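As a minimal illustrative sketch (the field names and values are invented for the example, not taken from the chapter), the following Python snippet shows how a single usability session might be recorded as both quantitative measures and a qualitative comment:

# Minimal sketch: one participant's session recorded as quantitative
# measures (times, counts) plus a qualitative comment.
# All field names and values are illustrative assumptions.

session_record = {
    "participant_id": "P01",
    # Quantitative data: measurable quantities
    "time_to_find_info_seconds": 47.2,
    "clicks_to_reach_info": 9,
    # Qualitative data: what the user said about the web page
    "comment": "The menu labels were confusing, but the search box helped.",
}

quantitative = {k: v for k, v in session_record.items()
                if isinstance(v, (int, float))}
print("Quantitative measures:", quantitative)
print("Qualitative comment:", session_record["comment"])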
8.2 Five Key Issues

Five key issues require attention for any data gathering session to be successful: goal setting, identifying participants, the relationship between the data collector and the data provider, triangulation, and pilot studies.

8.2.1 Setting Goals

The main reason for gathering data is to glean information about users, their behavior, or their reaction to technology. Examples include understanding how technology fits into family life, identifying which of two icons representing "send message" is easier to use, and finding out whether the planned redesign for a handheld meter reader is headed in the right direction. There are many different reasons for gathering data, and before beginning, it is important to set specific goals for the study. These goals will influence the nature of data gathering sessions, the data gathering techniques to be used, and the analysis to be performed (Robson and McCartan, 2016). The goals may be expressed more or less formally, for instance, using some structured or even mathematical format or using a simple description such as the ones in the previous paragraph. Whatever the format, however, they should be clear and concise. In interaction design, it is more common to express goals for data gathering informally.

8.2.2 Identifying Participants

The goals developed for the data gathering session will indicate the types of people from whom data is to be gathered. Those people who fit this profile are called the population or study population. In some cases, the people from whom to gather data may be clearly identifiable—maybe because there is a small group of users and access to each one is easy. However, it is more likely that the participants to be included in data gathering need to be chosen, and this is called sampling. The situation where all members of the target population are accessible is called saturation sampling, but this is quite rare. Assuming that only a portion of the population will be involved in data gathering, then there are two options: probability sampling or nonprobability sampling. In the former case, the most commonly used approaches are simple random sampling or stratified sampling; in the latter case, the most common approaches are convenience sampling or volunteer panels.

Random sampling can be achieved by using a random number generator or by choosing every nth person in a list. Stratified sampling relies on being able to divide the population into groups (for example, classes in a secondary school) and then applying random sampling. Both convenience sampling and volunteer panels rely less on choosing the participants and more on the participants being prepared to take part. The term convenience sampling is used to describe a situation where the sample includes those who were available rather than those specifically selected. Another form of convenience sampling is snowball sampling, in which a current participant finds another participant and that participant finds another, and so on. Much like a snowball adds more snow as it gets bigger, the population is gathered up as the study progresses.

The crucial difference between probability and nonprobability methods is that in the former you can apply statistical tests and generalize to the whole population, while in the latter such generalizations are not robust. Using statistics also requires a sufficient number of participants. Vera Toepoel (2016) provides a more detailed treatment of sampling, particularly in relation to survey data.
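As a minimal illustrative sketch (the population, group labels, and sample sizes are invented for the example; a real study would draw from an actual participant list), the following Python code shows how simple random sampling, every-nth selection, and stratified sampling might be carried out:

import random

# Hypothetical study population: (participant name, school class) pairs.
population = [(f"Student{i:03d}", f"Class {i % 4 + 1}") for i in range(200)]

# Simple random sampling: use a random number generator to pick n people.
simple_random = random.sample(population, k=20)

# Systematic selection: choose every nth person in a list.
n = 10
every_nth = population[::n]

# Stratified sampling: divide the population into groups (here, classes)
# and apply random sampling within each group.
strata = {}
for person, group in population:
    strata.setdefault(group, []).append((person, group))
stratified = [p for members in strata.values()
              for p in random.sample(members, k=5)]

print(len(simple_random), len(every_nth), len(stratified))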
BOX 8.1
How Many Participants Are Needed?

A common question is, how many participants are needed for a study? In general, having more participants is better because interpretations of statistical test results can be stated with higher confidence. What this means is that any differences found among conditions are more likely to be caused by a genuine effect rather than being due to chance.

More formally, there are many ways to determine how many participants are needed. Four of these are saturation, cost and feasibility analysis, guidelines, and prospective power analysis (Caine, 2016).

• Saturation relies on data being collected until no new relevant information emerges, and so it is not possible to know the number in advance of the saturation point being reached.
• Choosing the number of participants based on cost and feasibility constraints is a practical approach and is justifiable; this kind of pragmatic decision is common in industrial projects but rarely reported in academic research.
• Guidelines may come from experts or from "local standards," for instance, from an accepted norm in the field.
• Prospective power analysis is a rigorous method used in statistics that relies on existing quantitative data about the topic; in interaction design, this data is often unavailable, making this approach infeasible, such as when a new technology is being developed.

Kelly Caine (2016) investigated the sample size (number of participants) for papers published at the international Computer-Human Interaction (CHI) conference in 2014. She found that several factors affected the sample size, including the method being used and whether the data was collected in person or remotely. In this set of papers, the sample size varied from 1 to 916,000, with the most common size being 12. So, a "local standard" for interaction design would therefore suggest 12 as a rule of thumb.

8.2.3 Relationship with Participants

One significant aspect of any data gathering is the relationship between the person (people) doing the gathering and the person (people) providing the data. Making sure that this relationship is clear and professional will help to clarify the nature of the study. How this is achieved varies in different countries and different settings. In the United States and United Kingdom, for example, it is achieved by asking participants to sign an informed consent form, while in Scandinavia such a form is not required. The details of this form will vary, but it usually asks the participants to confirm that the purpose of the data gathering and how the data will be used has been explained to them and that they are willing to continue. It usually explains that their data will be private and kept in a secure place. It also often includes a statement that participants may withdraw at any time and that in this case none of their data will be used in the study.

The informed consent form is intended to protect the interests of both the data gatherer and the data provider. The gatherer wants to know that the data they collect can be used in their analysis, presented to interested parties, and published in reports. The data provider wants reassurance that the information they give will not be used for other purposes or in any context that would be detrimental to them. For example, they want to be sure that personal contact information and other personal details are not made public. This is especially true when people with disabilities or children are being interviewed.
In the case of children, using an informed consent form reassures parents that their children will not be asked threatening, inappropriate, or embarrassing questions, or be asked to look at disturbing or violent images. In these cases, parents are asked to sign the form. Figure 8.1 shows an example of a typical informed consent form.

This kind of consent is also not generally required when gathering requirements data for a commercial company where a contract usually exists between the data collector and the data provider. An example is where a consultant is hired to gather data from company staff during the course of discovering requirements for a new interactive system to support timesheet entry. The employees of this company would be the users of the system, and the consultant would therefore expect to have access to the employees to gather data about the timesheet activity. In addition, the company would expect its employees to cooperate in this exercise. In this case, there is already a contract in place that covers the data gathering activity, and therefore an informed consent form is less likely to be required. As with most ethical issues, the important thing is to consider the situation and make a judgment based on the specific circumstances.

Figure 8.1 Example of an informed consent form. The text of the form reads:

  Crowdsourcing Design for Citizen Science Organizations
  SHORT VERSION OF CONSENT FORM for participants at the University of Maryland – 18 YEARS AND OLDER

  You are invited to participate in a research project being conducted by the researchers listed on the bottom of the page. In order for us to be allowed to use any data you wish to provide, we must have your consent. In the simplest terms, we hope you will use the mobile phone, tabletop, and project website at the University of Maryland to
  • Take pictures
  • Share observations about the sights you see on campus
  • Share ideas that you have to improve the design of the phone or tabletop application or website
  • Comment on pictures, observations, and design ideas of others

  The researchers and others using CampusNet will be able to look at your comments and pictures on the tabletop and/or website, and we may ask if you are willing to answer a few more questions (either on paper, by phone, or face-to-face) about your whole experience. You may stop participating at any time. A long version of this consent form is available for your review and signature, or you may opt to sign this shorter one by checking off all the boxes that reflect your wishes and signing and dating the form below.

  ___ I agree that any photos I take using the CampusNet application may be uploaded to the tabletop at the University of Maryland and/or a website now under development.
  ___ I agree to allow any comments, observations, and profile information that I choose to share with others via the online application to be visible to others who use the application at the same time or after me.
  ___ I agree to be videotaped/audiotaped during my participation in this study.
  ___ I agree to complete a short questionnaire during or after my participation in this study.

  NAME [Please print]    SIGNATURE    DATE
  [Contact information of Senior Researcher responsible for the project]

Increasingly, projects and organizations that collect personal data from people need to demonstrate that it is protected from unauthorized access. For example, the European Union's General Data Protection Regulation (GDPR) came into force in May 2018.
It applies to all EU organizations and offers the individual unprecedented control over their personal data. For more information about GDPR and data protection law in Europe and the United Kingdom, see:
https://ico.org.uk/for-organisations/guide-to-the-general-data-protection-regulation-gdpr/

Incentives to take part in data gathering sessions may also be needed. For example, if there is no clear advantage to the respondents, incentives may persuade them to take part; in other circumstances, respondents may see it as part of their job or as a course requirement to take part. For example, if support sales executives are asked to complete a questionnaire about a new mobile sales application, then they are likely to agree if the new device will impact their day-to-day lives. In this case, the motivation for providing the required information is clear. However, when collecting data to understand how appealing a new interactive app is for school children, different incentives would be appropriate. Here, the advantage for individuals to take part is not so obvious.

8.2.4 Triangulation

Triangulation is a term used to refer to the investigation of a phenomenon from (at least) two different perspectives (Denzin, 2006; Jupp, 2006). Four types of triangulation have been defined (Jupp, 2006):

• Triangulation of data means that data is drawn from different sources at different times, in different places, or from different people (possibly by using a different sampling technique).
• Investigator triangulation means that different researchers (observers, interviewers, and so on) have been involved in collecting and interpreting the data.
• Triangulation of theories means the use of different theoretical frameworks through which to view the data or findings.
• Methodological triangulation means to employ different data gathering techniques.

The last of these is the most common form of triangulation—to validate the results of some inquiry by pointing to similar results yielded through different perspectives. However, validation through true triangulation is difficult to achieve. Different data gathering methods result in different kinds of data, which may or may not be compatible. Using different theoretical frameworks may or may not result in complementary findings, but to achieve theoretical triangulation would require the theories to have similar philosophical underpinnings. Using more than one data gathering technique, and more than one data analysis approach, is good practice because it leads to insights from the different approaches even though it may not achieve true triangulation.

Triangulation has sometimes been used to make up for the limitations of another type of data collection (Mackay and Fayard, 1997). This is a different rationale from the original idea, which has more to do with the verification and reliability of data. Furthermore, a kind of triangulation is being used increasingly in crowdsourcing and other studies involving large amounts of data to check that the data collected from the original study is real and reliable.
This is known as checking for "ground truth."

For an example of methodological triangulation, see:
https://medium.com/design-voices/the-power-of-triangulation-in-design-research-64a0957d47d2

For more information about ground truth and how ground truth databases are used to check data obtained in autonomous driving, see "The HCI Benchmark Suite: Stereo and Flow Ground Truth with Uncertainties for Urban Autonomous Driving" at https://ieeexplore.ieee.org/document/7789500/

8.2.5 Pilot Studies

A pilot study is a small trial run of the main study. The aim is to make sure that the proposed method is viable before embarking on the real study. For example, the equipment and instructions can be checked, the questions for an interview or in a questionnaire can be tested for clarity, and an experimental procedure can be confirmed as viable. This can identify potential problems in advance so that they can be corrected. Distributing 500 questionnaires and then being told that two of the questions were very confusing wastes time, annoys participants, and is an expensive error that could be avoided by doing a pilot study.

If it is difficult to find participants or access to them is limited, asking colleagues or peers to participate can work as an alternative for a pilot study. Note that anyone involved in a pilot study cannot be involved in the main study itself. Why? Because they will know more about the study, and this can distort the results.

BOX 8.2
Data, Information, and Conclusions

There is an important difference between raw data, information, and conclusions. Data is what you collect; this is then analyzed and interpreted, and conclusions are drawn. Information is gained from analyzing and interpreting the data, and conclusions represent the actions to be taken based on the information. For example, consider a study to determine whether a new screen layout for a local leisure center has improved the user's experience when booking a swimming lesson. In this case, the data collected might include a set of times to complete the booking, user comments regarding the new screen layout, biometric readings of the user's heart rate while booking a lesson, and so on. At this stage, the data is raw. Information will emerge once this raw data has been analyzed and the results interpreted. For example, analyzing the data might indicate that users who have been using the leisure center for more than five years find the new layout frustrating and take longer to book, while those who have been using it for less than two years find the new layout helpful and can book lessons more quickly. This indicates that the new layout is good for newcomers but not so good for long-term users of the leisure center; this is information. A conclusion from this might be that a more extensive help system is needed for more experienced users to become used to the changes.

8.3 Data Recording

Capturing data is necessary so that the results of a data gathering session can be analyzed and shared. Some forms of data gathering, such as questionnaires, diaries, interaction logging, scraping, and collecting work artifacts, are self-documenting, and no further recording is necessary. For other techniques, however, there is a choice of recording approaches. The most common of these are taking notes, photographs, or recording audio or video. Often, several data recording approaches are used together.
For example, an interview may be voice recorded, and then to help the interviewer in later analysis, a photograph of the interviewee may be taken to remind the interviewer about the context of the discussion.

Which data recording approaches are used will depend on the goal of the study and how the data will be used, the context, the time and resources available, and the sensitivity of the situation; the choice of data recording approach will affect the level of detail collected and how intrusive the data gathering will be. In most settings, audio recording, photographs, and notes will be sufficient. In others, it is essential to collect video data so as to record in detail the intricacies of the activity and its context. Three common data recording approaches are discussed next.

8.3.1 Notes Plus Photographs

Taking notes (by hand or by typing) is the least technical and most flexible way of recording data, even if it seems old-fashioned. Handwritten notes may be transcribed in whole or in part, and while this may seem tedious, it is usually the first step in analysis, and it gives the analyst a good overview of the quality and contents of the data collected. Tools exist for supporting data collection and analysis, but the advantages of handwritten notes include that using pen and paper can be less intrusive than typing and is more flexible, for example, for drawing diagrams of work layouts. Furthermore, researchers often comment that writing notes helps them to focus on what is important and starts them thinking about what the data is telling them.

The disadvantages of notes include that it can be difficult to capture the right highlights, and it can be tiring to write and listen or observe at the same time. It is easy to lose concentration, biases creep in, handwriting can be difficult to decipher, and the speed of writing is limited. Working with a colleague can reduce some of these problems while also providing another perspective.

If appropriate, photographs and short videos (captured via smartphones or other handheld devices) of artifacts, events, and the environment can supplement notes and hand-drawn sketches, provided that permission has been given to collect data using these approaches.

8.3.2 Audio Plus Photographs

Audio recording is a useful alternative to note-taking and is less intrusive than video. During observation, it allows observers to focus on the activity rather than on trying to capture every spoken word. In an interview, it allows the interviewer to pay more attention to the interviewee rather than trying to take notes as well as listening. It isn't always necessary to transcribe all of the data collected—often only sections are needed, depending on the goals of the study. Many studies do not need a great level of detail, and instead recordings are used as a reminder and as a source of anecdotes for reports. It is surprising how evocative audio recordings of people or places from the data session can be, and those memories provide added context to the analysis. If audio recording is the main or only data collection technique, then the quality needs to be good; performing interviews remotely, for example using Skype, can be compromised because of poor connections and acoustics. Audio recordings are often supplemented with photographs.

8.3.3 Video

Smartphones can be used to collect short video clips of activity. They are easy to use and less obtrusive than setting up sophisticated cameras.
But there are occasions when video is needed for long periods of time or when holding a phone is unreliable, for example, recording how designers collaborate in a workshop or how teens interact in a "makerspace," in which people can work on projects while sharing ideas, equipment, and knowledge. For these kinds of sessions, more professional video equipment that clearly captures both visual and audio data is more appropriate. Other ways of recording facial expressions together with verbal comments are also being used, such as GoToMeeting, which can be operated both in person and remotely. Using such systems can create additional planning issues that have to be addressed to minimize how intrusive the recording is, while at the same time making sure that the data is of good quality (Denzin and Lincoln, 2011).

When considering whether to use a camera, Heath et al. (2010) suggest the following issues to consider:

• Deciding whether to fix the camera's position or use a roving recorder. This decision depends on the activity being recorded and the purpose to which the video data will be put, for example, for illustrative purposes only or for detailed data analysis. In some cases, such as pervasive games, a roving camera is the only way to capture the required action. For some studies, the video on a smartphone may be adequate and require less effort to set up.
• Deciding where to point the camera in order to capture what is required. Heath and his colleagues suggest carrying out fieldwork for a short time before starting to video record in order to become familiar with the environment and be able to identify suitable recording locations. Involving the participants themselves in deciding what and where to record also helps to capture relevant action.
• Understanding the impact of the recording on participants. It is often assumed that video recording will have an impact on participants and their behavior. However, it is worth taking an empirical approach to this issue and examining the data itself to see whether there is any evidence of people changing their behavior, such as orienting themselves toward the camera.

ACTIVITY 8.1

Imagine that you are a consultant who is employed to help develop a new augmented reality garden planning tool to be used by amateur and professional garden designers. The goal is to find out how garden designers use an early prototype as they walk around their clients' gardens sketching design ideas, taking notes, and asking the clients about what they like and how they and their families use the garden. What are the advantages and disadvantages of the three approaches (note-taking, audio recording with photographs, and video) for data recording in this environment?

Comment

Handwritten notes do not require specialized equipment. They are unobtrusive and flexible but difficult to do while walking around a garden. If it starts to rain, there is no equipment to get wet, but notes may get soggy and difficult to read (and write!). Garden planning is a highly visual, aesthetic activity, so supplementing notes with photographs would be appropriate. Video captures more information, for example, continuous panoramas of the landscape, what the designers are seeing, sketches, comments, and so on, but it is more intrusive and will also be affected by the weather. Short video sequences recorded on a smartphone may be sufficient as the video is unlikely to be used for detailed analysis.
Audio may be a good compromise, but later synchronizing the audio with activities such as looking at sketches and other artifacts can be tricky and error-prone.

8.4 Interviews

Interviews can be thought of as a "conversation with a purpose" (Kahn and Cannell, 1957). How much like an ordinary conversation the interview will be depends on the type of interview. There are four main types of interviews: open-ended or unstructured, structured, semi-structured, and group interviews (Fontana and Frey, 2005). The first three types are named according to how much control the interviewer imposes on the conversation by following a predetermined set of questions. The fourth type, which is often called a focus group, involves a small group guided by a facilitator. The facilitation may be quite informal or follow a structured format.

The most appropriate approach to interviewing depends on the purpose of the interview, the questions to be addressed, and the interaction design activity. For example, if the goal is first to gain impressions about users' reactions to a new design concept, then an informal, open-ended interview is often the best approach. But if the goal is to get feedback about a particular design feature, such as the layout of a new web browser, then a structured interview or questionnaire is often better. This is because the goals and questions are more specific in the latter case.

8.4.1 Unstructured Interviews

Open-ended or unstructured interviews are at one end of a spectrum of how much control the interviewer has over the interview process. They are exploratory and are similar to conversations around a particular topic; they often go into considerable depth. Questions posed by the interviewer are open, meaning that there is no particular expectation about the format or content of answers. For example, the first question asked of all participants might be: "What are the pros and cons of having a wearable?" Here, the interviewee is free to answer as fully or as briefly as they want, and both the interviewer and interviewee can steer the interview. For example, often the interviewer will say: "Can you tell me a bit more about . . ." This is referred to as probing.

Despite being unstructured and open, the interviewer needs a plan of the main topics to be covered so that they can make sure that all of the topics are discussed. Going into an interview without an agenda should not be confused with being open to hearing new ideas (see section 8.4.5, "Planning and Conducting an Interview"). One of the skills needed to conduct an unstructured interview is getting the balance right between obtaining answers to relevant questions and being prepared to follow unanticipated lines of inquiry.

A benefit of unstructured interviews is that they generate rich data that is often interrelated and complex, that is, data that provides a deep understanding of the topic. In addition, interviewees may mention issues that the interviewer has not considered. A lot of unstructured data is generated, and the interviews will not be consistent across participants since each interview takes on its own format. Unstructured interviews can be time-consuming to analyze, but they can also produce rich insights.
Themes can be identified across interviews using techniques from grounded theory and other analytic approaches, as discussed in Chapter 9, "Data Analysis, Interpretation, and Presentation."

8.4.2 Structured Interviews

In structured interviews, the interviewer asks predetermined questions similar to those in a questionnaire (see section 8.5, "Questionnaires"), and the same questions are used with each participant so that the study is standardized. The questions need to be short and clearly worded, and they are typically closed questions, which means that they require an answer from a predetermined set of alternatives. (This may include an "other" option, but ideally this would not be chosen often.) Closed questions work well if the range of possible answers is known or if participants don't have much time. Structured interviews are useful only when the goals are clearly understood and specific questions can be identified. Example questions for a structured interview might be the following:

• "Which of the following websites do you visit most frequently: Amazon.com, Google.com, or msn.com?"
• "How often do you visit this website: every day, once a week, once a month, less often than once a month?"
• "Do you ever purchase anything online: Yes/No? If your answer is Yes, how often do you purchase things online: every day, once a week, once a month, less frequently than once a month?"

Questions in a structured interview are worded the same for each participant and are asked in the same order.

8.4.3 Semi-structured Interviews

Semi-structured interviews combine features of structured and unstructured interviews and use both closed and open questions. The interviewer has a basic script for guidance so that the same topics are covered with each interviewee. The interviewer starts with preplanned questions and then probes the interviewee to say more until no new relevant information is forthcoming. Here's an example:

  Which music websites do you visit most frequently?
  Answer: Mentions several but stresses that they prefer hottestmusic.com
  Why?
  Answer: Says that they like the site layout
  Tell me more about the site layout.
  Answer: Silence, followed by an answer describing the site's layout
  Anything else that you like about the site?
  Answer: Describes the animations
  Thanks. Are there any other reasons for visiting this site so often that you haven't mentioned?

It is important not to pre-empt an answer by phrasing a question to suggest that a particular answer is expected. For example, "You seemed to like this use of color . . ." assumes that this is the case and will probably encourage the interviewee to answer that this is true so as not to offend the interviewer. Children are particularly prone to behave in this way (see Box 8.3, "Working with Different Kinds of Users"). The body language of the interviewer, for example whether they are smiling, scowling, looking disapproving, and so forth, can have a strong influence on whether the interviewee will agree with a question, and the interviewee needs to have time to speak and not be rushed.

Probes are a useful device for getting more information, especially neutral probes such as "Do you want to tell me anything else?" Prompts that remind interviewees if they forget terms or names also help to move the interview along. Semi-structured interviews are intended to be broadly replicable, so probing and prompting aim to move the interview along without introducing bias.
BOX 8.3
Working with Different Kinds of Users

Focusing on the needs of users and including users in the design process is a central theme of this book. But users vary considerably based on their age; educational, life, and cultural experiences; and physical and cognitive abilities. For example, children think and react to situations differently than adults. Therefore, if children are to be included in data gathering sessions, then child-friendly methods are needed to make them feel at ease so that they will communicate with you. For very young children of pre-reading or early reading age, data gathering sessions need to rely on images and chat rather than written instructions or questionnaires. Researchers who work with children have developed sets of "smileys," such as those shown in Figure 8.2, so that children can select the one that most closely represents their feelings (see Read et al., 2002).

Figure 8.2 A smileyometer gauge for early readers, with faces labeled: Awful, Not very good, Good, Really good, Brilliant. Source: Read et al. (2002)

Similarly, different approaches are needed when working with users from different cultures (Winschiers-Theophilus et al., 2012). In their work with local communities in Namibia, Heike Winschiers-Theophilus and Nicola Bidwell (2013) had to find ways of communicating with local participants, which included developing a variety of visual and other techniques to communicate ideas and collect data about the collective understanding and feelings inherent in the local cultures of the people with whom they worked.

Laurianne Sitbon and Shanjana Farhin (2017) report a study in which researchers interacted with people with intellectual disabilities, where they involved caregivers who knew each participant well and could appropriately make the researchers' questions more concrete. This made the questions more understandable for the participants. An example of this was when the interviewer assumed that the participant understood the concept of a phone app to provide information about bus times. The caregiver made the questions more concrete for the participant by relating the concept of the phone app to familiar people and circumstances and bringing in a personal example (for instance, "So you don't have to ring your mom to say 'Mom, I am lost'").

Another group of technology users is studied by the field of Animal-Computer Interaction (Mancini et al., 2017). Data gathering with animals poses additional and different challenges. For example, in their study of dogs' attention to TV screens, Ilyena Hirskyj-Douglas et al. (2017) used a combination of observation and tracking equipment to capture when a dog turns their head. But interpreting the data, or checking that the interpretation is accurate, requires animal behavior expertise.

The examples in Box 8.3 demonstrate that technology developers need to adapt their data collection techniques to suit the participants with whom they work. As the saying goes, "One size doesn't fit all."

8.4.4 Focus Groups

Interviews are often conducted with one interviewer and one interviewee, but it is also common to interview people in groups. One form of group interview that is sometimes used in interaction design activities is the focus group. Normally, three to ten people are involved, and the discussion is led by a trained facilitator. Participants are selected to provide a representative sample of the target population.
For example, in the evaluation of a university website, a group of administrators, faculty, and students may form three separate focus groups because they use the web for different purposes. In requirements activities, a focus group may be held in order to identify conflicts in expectations or terminology from different stakeholders.

The benefit of a focus group is that it allows diverse or sensitive issues to be raised that might otherwise be missed, for example, in the requirements activity to understand multiple points within a collaborative process or to hear different user stories (Unger and Chandler, 2012). The method is more appropriate for investigating shared issues rather than individual experiences. Focus groups enable people to put forward their own perspectives. A preset agenda is developed to guide the discussion, but there is sufficient flexibility for the facilitator to follow unanticipated issues as they are raised. The facilitator guides and prompts discussion, encourages quiet people to participate, and stops verbose ones from dominating the discussion. The discussion is usually recorded for later analysis, and participants may be invited to explain their comments more fully at a later date.

Focus groups can be useful, but only if used for the right kind of activities. For a discussion of when focus groups don't work, see the following links:
https://www.nomensa.com/blog/2016/are-focus-groups-useful-research-technique-ux
http://gerrymcgovern.com/why-focus-groups-dont-work/

The format of focus groups can be adapted to fit within local cultural settings. For example, a study with the Mbeere people of Kenya aimed to find out how water was being used, any plans for future irrigation systems, and the possible role of technology in water management (Warrick et al., 2016). The researcher met with the elders from the community, and the focus group took the form of a traditional Kenyan "talking circle," in which the elders sit in a circle and each person gives their opinions in turn. The researcher, who was from the Mbeere community, knew that it was impolite to interrupt or suggest that the conversation needed to move along, because traditionally each person speaks for as long as they want.

8.4.5 Planning and Conducting an Interview

Planning an interview involves developing the set of questions or topics to be covered, collating any documentation to give to the interviewee (such as a consent form or project description), checking that recording equipment works, structuring the interview, and organizing a suitable time and place.

Developing Interview Questions

Questions may be open-ended (or open) or closed-ended (or closed). Open questions are best suited where the goal of the session is exploratory; closed questions are best suited where the possible answers are known in advance. An unstructured interview will usually consist mainly of open questions, while a structured interview will usually consist of closed questions. A semi-structured interview may use a combination of both types.

DILEMMA
What They Say and What They Do

What users say isn't always what they do. People sometimes give the answers that they think show them in the best light, they may have forgotten what happened, or they may want to please the interviewer by answering in the way they think will satisfy them.
This may be problematic when the interviewer and interviewee don't know each other, especially if the interview is being conducted remotely by Skype, Cisco WebEx, or another digital conferencing system. For example, Yvonne Rogers et al. (2010) conducted a study to investigate whether a set of twinkly lights embedded in the floor of an office building could persuade people to take the stairs rather than the lift (or elevator). In interviews, participants told the researchers that they did not change their behavior, but logged data showed that their behavior did, in fact, change significantly. So, can interviewers believe all of the responses they get? Are the respondents telling the truth, or are they simply giving the answers that they think the interviewer wants to hear? It isn't possible to avoid this behavior, but an interviewer can be aware of it and reduce such biases by choosing questions carefully, by getting a large number of participants, or by using a combination of data gathering techniques.

The following guidelines help in developing interview questions (Robson and McCartan, 2016):

• Long or compound questions can be difficult to remember or confusing, so split them into two separate questions. For example, instead of "How do you like this smartphone app compared with previous ones that you have used?" say, "How do you like this smartphone app?" "Have you used other smartphone apps?" If so, "How did you like them?" This is easier for the interviewee to respond to and easier for the interviewer to record.
• Interviewees may not understand jargon or complex language and might be too embarrassed to admit it, so explain things to them in straightforward ways.
• Try to keep questions neutral, both when preparing the interview script and in conversation during the interview itself. For example, if you ask "Why do you like this style of interaction?" this question assumes that the person does like it and will discourage some interviewees from stating their real feelings.

ACTIVITY 8.2

Several devices are available for reading ebooks, watching movies, and browsing photographs (see Figure 8.3). The design differs between makes and models, but they are all aimed at providing a comfortable user experience. An increasing number of people also read books and watch movies on their smartphones, and they may purchase phones with larger screens for this purpose.

Figure 8.3 (a) Sony's eReader, (b) Amazon's Kindle, (c) Apple's iPad, and (d) Apple's iPhone. Source: (a) Sony Europe Limited, (b) Martyn Landi / PA Archive / PA Images, (c) Mark Lennihan / AP Images, and (d) Helen Sharp

The developers of a new device for reading books online want to find out how appealing it will be to young people aged 16–18, so they have decided to conduct some interviews.

1. What is the goal of this data gathering session?
2. Suggest ways of recording the interview data.
3. Suggest a set of questions for use in an unstructured interview that seeks to understand the appeal of reading books online to young people in the 16–18 year old age group.
4. Based on the results of the unstructured interviews, the developers of the new device have found that an important acceptance factor is whether the device can be handled easily. Write a set of semi-structured interview questions to evaluate this aspect based on an initial prototype, and run a pilot interview with two of your peers. Ask them to comment on your questions and refine them based on their comments.
Comment

1. The goal is to understand what makes devices for reading books online appealing to people aged 16–18.
2. Audio recording will be less cumbersome and distracting than taking notes, and all important points will be captured. Video recording is not needed in this initial interview as it isn't necessary to capture any detailed interactions. However, it would be useful to take photographs of any devices referred to by the interviewee.
3. Possible questions include the following: Why do you read books online? Do you ever read print-based books? If so, what makes you choose to read online versus a print-based format? Do you find reading a book online comfortable? In what way(s) does reading online versus reading from print affect your ability to become engrossed in the story you are reading?
4. Semi-structured interview questions may be open or closed-ended. Some closed-ended questions that you might ask include the following:
   • Have you used any kind of device for reading books online before?
   • Would you like to read a book online using this device?
   • In your opinion, is the device easy to handle?
   Some open-ended questions, with follow-on probes, include the following:
   • What do you like most about the device? Why?
   • What do you like least about the device? Why?
   • Please give me an example of where the device was uncomfortable or difficult to use.

It is helpful when collecting answers to closed-ended questions to list possible responses together with boxes that can be checked. Here's one way to convert some of the questions from Activity 8.2:

1. Have you used a device for reading books online before? (Explore previous knowledge.)
   Interviewer checks box: □ Yes □ No □ Don't remember/know
2. Would you like to read a book using a device designed for reading online? (Explore initial reaction; then explore the response.)
   Interviewer checks box: □ Yes □ No □ Don't know
3. Why? If response is "Yes" or "No," interviewer asks, "Which of the following statements represents your feelings best?"
   For "Yes," interviewer checks one of these boxes:
   □ I don't like carrying heavy books.
   □ This is fun/cool.
   □ My friend told me they are great.
   □ It's the way of the future.
   □ Another reason (interviewer notes the reason).
   For "No," interviewer checks one of these boxes:
   □ I don't like using gadgets if I can avoid it.
   □ I can't read the screen clearly.
   □ I prefer the feel of paper.
   □ Another reason (interviewer notes the reason).
4. In your opinion, is the device for reading online easy to handle or cumbersome?
   Interviewer checks one of these boxes: □ Easy to handle □ Cumbersome □ Neither

Running the Interview

Before starting, make sure that the goals of the interview have been explained to the interviewee and that they are willing to proceed. Finding out about the interviewee and their environment before the interview will make it easier to put them at ease, especially if it is an unfamiliar setting. During the interview, it is better to listen more than to talk, to respond with sympathy but without bias, and to appear to enjoy the interview. The following is a common sequence for an interview (Robson and McCartan, 2016):

1. An introduction in which the interviewer introduces themselves and explains why they are doing the interview, reassures interviewees regarding any ethical issues, and asks if they mind being recorded, if appropriate. This should be exactly the same for each interviewee.
2. A warm-up session where easy, nonthreatening questions come first.
   These may include questions about demographic information, such as "What area of the country do you live in?"
3. A main session in which the questions are presented in a logical sequence, with the more probing ones at the end. In a semi-structured interview, the order of questions may vary between participants, depending on the course of the conversation, how much probing is done, and what seems more natural.
4. A cooling-off period consisting of a few easy questions (to defuse any tension that may have arisen).
5. A closing session in which the interviewer thanks the interviewee and switches off the recorder or puts their notebook away, signaling that the interview has ended.

8.4.6 Other Forms of Interview

Conducting face-to-face interviews and focus groups can be impractical, but the prevalence of Skype, Cisco WebEx, Zoom, and other digital conferencing systems, email, and phone-based interactions (voice or chat), sometimes with screen-sharing software, makes remote interviewing a good alternative. These are carried out in a similar fashion to face-to-face sessions, but poor connections and acoustics can cause different challenges, and participants may be tempted to multitask rather than focus on the session at hand. Advantages of remote focus groups and interviews, especially when done through audio-only channels, include the following:

• The participants are in their own environment and are more relaxed.
• Participants don't have to travel.
• Participants don't need to worry about what they wear.
• For interviews involving sensitive issues, interviewees can remain anonymous.

In addition, participants can leave the conversation whenever they want to by just cutting the connection, which adds to their sense of security. From the interviewer's perspective, a wider set of participants can be reached easily, but a potential disadvantage is that the facilitator does not have a good view of the interviewees' body language.

For more information and some interesting thoughts on remote usability testing, see http://www.uxbooth.com/articles/hidden-benefits-remote-research/

Retrospective interviews, that is, interviews that reflect on an activity or a data gathering session in the recent past, may be conducted with participants to check that the interviewer has correctly understood what was happening. This is a common practice in observational studies, where it is sometimes referred to as member checking.

8.4.7 Enriching the Interview Experience

Face-to-face interviews often take place in a neutral location away from the interviewee's normal environment. This creates an artificial context, and it can be difficult for interviewees to give full answers to the questions posed. To help combat this, interviews can be enriched by using props such as personas, prototypes, or work artifacts that the interviewee or interviewer brings along, or descriptions of common tasks (examples of these kinds of props are scenarios and prototypes, which are covered in Chapter 11, "Discovering Requirements," and Chapter 12, "Design, Prototyping, and Construction"). These props can be used to provide context for the interviewees and help to ground the data in a real setting. Figure 8.4 illustrates the use of personas in a focus group setting.

Figure 8.4 Enriching a focus group with personas displayed on the wall for all participants to see
As another example, Clara Mancini et al. (2009) used a combination of questionnaire prompts and deferred contextual interviews when investigating mobile privacy. A simple multiple-choice questionnaire was sent electronically to the participants' smartphones, and they answered the questions using these devices. Interviews about the recorded events were conducted later, based on the questionnaire answers given at the time of the event.

8.5 Questionnaires

Questionnaires are a well-established technique for collecting demographic data and users' opinions. They are similar to interviews in that they can have closed or open-ended questions, but once a questionnaire is produced, it can be distributed to a large number of participants without requiring additional data gathering resources. Thus, more data can be collected than would normally be possible in an interview study. Furthermore, participants who are located in remote locations or those who cannot attend an interview at a particular time can be involved more easily. Often a message is sent electronically to potential participants directing them to an online questionnaire.

Effort and skill are needed to ensure that questions are clearly worded and the data collected can be analyzed efficiently. Well-designed questionnaires are good for getting answers to specific questions from a large group of people. Questionnaires can be used on their own or in conjunction with other methods to clarify or deepen understanding. For example, information obtained through interviews with a small selection of interviewees might be corroborated by sending a questionnaire to a wider group to confirm the conclusions.

Questionnaire questions and structured interview questions are similar, so which technique is used when? Essentially, the difference lies in the motivation of the respondent to answer the questions. If their motivation is high enough to complete a questionnaire without anyone else present, then a questionnaire will be appropriate. On the other hand, if the respondents need some persuasion to answer the questions, a structured interview format would be better. For example, structured interviews are easier and quicker to conduct if people will not stop to complete a questionnaire, such as at a train station or while walking to their next meeting.

It can be harder to develop good questionnaire questions compared with structured interview questions because the interviewer is not available to explain them or to clarify any ambiguities. Because of this, it is important that questions are specific; when possible, ask closed-ended questions and offer a range of answers, including a "no opinion" or "none of these" option. Finally, use negative questions carefully, as they can be confusing and may lead to false information. Some questionnaire designers, however, use a mixture of negative and positive questions deliberately because it helps to check the users' intentions.

8.5.1 Questionnaire Structure

Many questionnaires start by asking for basic demographic information (gender, age, place of birth) and details of relevant experience (the number of hours a day spent searching on the Internet, the level of expertise within the domain under study, and so on). This background information is useful for putting the questionnaire responses into context.
For example, if two responses conflict, these different perspectives may be explained by differing levels of experience—a group of people who are using a social networking site for the first time are likely to express different opinions than another group with five years' experience of using such sites. However, only contextual information that is relevant to the study goal needs to be collected. For example, it is unlikely that a person's height will provide relevant context to their responses about Internet use, but it might be relevant for a study concerning wearables.

Specific questions that contribute to the data-gathering goal usually follow these demographic questions. If the questionnaire is long, the questions may be subdivided into related topics to make it easier and more logical to complete. The following is a checklist of general advice for designing a questionnaire:

• Think about the ordering of questions. The impact of a question can be influenced by question order.
• Consider whether different versions of the questionnaire are needed for different populations.
• Provide clear instructions on how to complete the questionnaire, for example, whether answers can be saved and completed later. Aim for both careful wording and good typography.
• Think about the length of the questionnaire, and avoid questions that don't address the study goals.
• If the questionnaire has to be long, consider allowing respondents to opt out at different stages. It is usually better to get answers to some sections than no answers at all because of dropout.
• Think about questionnaire layout and pacing; for instance, strike a balance between using white space, or individual web pages, and the need to keep the questionnaire as compact as possible.

8.5.2 Question and Response Format

Different formats of question and response can be chosen. For example, with a closed-ended question, it may be appropriate to indicate only one response, or it may be appropriate to indicate several. Sometimes, it is better to ask users to locate their answer within a range. Selecting the most appropriate question and response format makes it easier for respondents to answer clearly. Some commonly used formats are described next.

Check Boxes and Ranges

The range of answers to demographic questions is predictable. Nationality, for example, has a finite number of alternatives, and asking respondents to choose a response from a predefined list makes sense for collecting this information. A similar approach can be adopted if details of age are needed. But since some people do not like to give their exact age, many questionnaires ask respondents to specify their age as a range. A common design error arises when the ranges overlap. For example, specifying two ranges as 15–20 and 20–25 will cause confusion; that is, which box do people who are 20 years old check? Making the ranges 15–19 and 20–24 avoids this problem.

A frequently asked question about ranges is whether the interval must be equal in all cases. The answer is no—it depends on what you want to know. For example, people who might use a website about life insurance are likely to be employed individuals who are 21–65 years old. The question could, therefore, have just three ranges: under 21, 21–65, and over 65. In contrast, to see how the population's political views vary across generations might require 10-year cohort groups for people over 21, in which case the following ranges would be appropriate: under 21, 21–30, 31–40, and so forth.
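As a minimal illustrative sketch (the range boundaries follow the 15–19/20–24 example above, and the helper function is invented for the example), the following Python code shows how age responses can be assigned to exactly one non-overlapping range:

# Minimal sketch: assign each respondent's age to exactly one,
# non-overlapping range, following the 15-19 / 20-24 example.
# The bin boundaries and helper function are illustrative assumptions.

AGE_RANGES = [(15, 19), (20, 24), (25, 29), (30, 34)]

def age_band(age: int) -> str:
    """Return the label of the single range that contains this age."""
    for low, high in AGE_RANGES:
        if low <= age <= high:
            return f"{low}-{high}"
    return "out of range"

# A 20-year-old now falls unambiguously into one box.
for age in (19, 20, 24, 25):
    print(age, "->", age_band(age))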
Rating Scales

There are a number of different types of rating scales, each with its own purpose (see Oppenheim, 2000). Two commonly used scales are the Likert and semantic differential scales. Their purpose is to elicit a range of responses to a question that can be compared across respondents. They are good for getting people to make judgments, such as how easy, how usable, and the like. Likert scales rely on identifying a set of statements representing a range of possible opinions, while semantic differential scales rely on choosing pairs of words that represent the range of possible opinions. Likert scales are more commonly used because identifying suitable statements that respondents will understand consist
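As a minimal illustrative sketch (the item wording, scale labels, and responses are invented for the example), the following Python code encodes a five-point Likert item and a semantic differential word pair and summarizes a few responses:

from statistics import mean, median

# Hypothetical five-point Likert item: respondents rate agreement with a
# statement from 1 (strongly disagree) to 5 (strongly agree).
likert_item = "The website made it easy to book a swimming lesson."
likert_labels = {1: "strongly disagree", 2: "disagree", 3: "neutral",
                 4: "agree", 5: "strongly agree"}
likert_responses = [4, 5, 3, 4, 2, 5, 4]

# Hypothetical semantic differential pair: respondents place the product
# between two opposite words on a 1-7 scale.
semantic_pair = ("confusing", "clear")
semantic_responses = [6, 5, 7, 4, 6]

print(likert_item)
print("median:", median(likert_responses),
      "->", likert_labels[round(median(likert_responses))])
print(f"{semantic_pair[0]} (1) ... {semantic_pair[1]} (7):",
      "mean =", round(mean(semantic_responses), 1))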
