Week 5 - Data Collection Techniques
Data Collection Techniques

Learning objective: To describe a range of data collection techniques used to identify potential human factors issues with serious consequences.

Data collection is the gathering of relevant information that will be used to describe the system or activity under analysis. The data collection methods available to the human factors practitioner include:
1. Observation
2. Interview
3. Questionnaire
4. Simulation

1. Observation

Observational methods gather data regarding the physical and verbal aspects of a task or scenario. These include the tasks catered for by the system, the individuals performing the tasks, the tasks themselves (task steps and sequence), errors made, communications between individuals, the technology used by the system in conducting the tasks (controls, displays, communication technology), the system environment and the organisational environment.

Direct observation is a widely used observational technique whereby an analyst records a particular task or scenario in a visual format. The types of information that can be elicited from observational methods include: the sequence of activities, the duration of activities, the frequency of activities, the fraction of time spent in particular states, and spatial movement.

Procedures for conducting a direct observation

Step 1: Define the objective of the observation
This step should specify the type of product or system, the environment, the user groups, and the type of data required.

Step 2: Define the scenario
The exact nature of the required scenario(s) should be clearly defined by the observation team. For instance, operator interaction and performance under emergency conditions may be the focus of the observation.

Step 3: Prepare the observation plan
The analysts should consider what to observe and how to observe it.

Step 4: Run a pilot observation
This step allows the analysts to assess any problems with the data collection, such as noise interference or problems with the recording equipment. The quality of the data collected can be tested, as can any effects on task performance that may result from the presence of observers. If major problems are encountered, the observation may have to be redesigned.

Step 5: Conduct the observation
Data is recorded visually using video and audio recording equipment. An observation transcript is also created during the observation.

Step 6: Analyse the data
Analyse the data in the format required, such as frequency of tasks, verbal interactions, and sequence of tasks (a sketch of such an analysis follows this procedure).

Step 7: Give feedback to the participants
The participants should be provided with feedback, either in a feedback session or in a letter to each participant.
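To make the Step 6 analysis concrete, the following is a minimal Python sketch of how the frequency of activities, time spent in each activity, and fraction of session time could be tallied from a coded observation transcript. The log format and the activity names and timings are invented for illustration; they are not part of any standard observation tool.

```python
from collections import defaultdict

# One record per observed activity in the transcript:
# (activity name, start time in seconds, end time in seconds).
# These entries are hypothetical examples.
observation_log = [
    ("check display", 0, 12),
    ("adjust control", 12, 30),
    ("check display", 30, 38),
    ("communicate", 38, 55),
    ("adjust control", 55, 80),
]

frequency = defaultdict(int)       # how often each activity occurred
total_duration = defaultdict(int)  # total time spent in each activity (s)

for activity, start, end in observation_log:
    frequency[activity] += 1
    total_duration[activity] += end - start

# Fraction of session time spent in each activity ("time in states").
session_length = max(end for _, _, end in observation_log)

for activity in sorted(frequency):
    fraction = total_duration[activity] / session_length
    print(f"{activity}: {frequency[activity]} occurrence(s), "
          f"{total_duration[activity]} s ({fraction:.0%} of session)")
```

The same tally generalises directly to verbal interactions or task sequences by changing what each record represents.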
2. Interview

Interviews are used to collect a wide variety of data, ranging from system usability, individuals' perceptions, attitudes and reactions, job analysis and cognitive task analysis to usability- and error-related data. Participants are interviewed on a one-to-one basis and the interviewer uses pre-determined probe questions to elicit the required information. There are three types of interview:
1. Structured,
2. Semi-structured, and
3. Unstructured (open) interviews.

1. Structured interviews
The interviewer probes the participant using a set of pre-defined questions designed to elicit specific information regarding the subject under analysis. The content of the interview is pre-determined and no scope for further discussion is permitted. Structured interviews are the least popular type because of their rigid nature; a structured interview is only used when the type of data required is rigidly defined and no additional data is required.

2. Semi-structured interviews
Semi-structured interviews are flexible: the interviewer can direct the focus of the interview and ask further questions that were not originally part of the planned interview structure. As a result, information surrounding new or unexpected issues is often uncovered during semi-structured interviews. The semi-structured interview is the most commonly applied type due to this flexibility.

3. Unstructured interviews
There is no pre-defined structure or set of questions; the interviewer goes into the interview 'blind', so to speak. Unstructured interviews allow the interviewer to explore, on an ad hoc basis, different aspects of the subject under analysis. They are infrequently used because their lack of structure may result in crucial information being neglected or ignored.

Focus group
A focus group should contain around five participants with similar backgrounds. The discussion should be managed at a fairly high level rather than through specific questions: the interviewer should introduce topics and facilitate the participants' discussion.

Question types
An interviewer typically employs three different types of question during the interview process:
1. Closed questions,
2. Open-ended questions, and
3. Probing questions.

1. Closed questions
Closed questions are used to gather specific information and require only a YES or NO answer; no elaboration or explanation is expected. For example: 'Do you think that system X is usable?'

2. Open-ended questions
An open-ended question is used to elicit more than simple yes/no information. Such questions allow participants to answer in whatever way they wish and to elaborate on their answers, so they gather more pertinent data than closed questions. However, open-ended question data takes more time to analyse than closed question data, and so closed questions are more commonly used. An example of an open-ended question is 'What do you think about the usability of system X?'

3. Probing questions
A probing question is used after an open-ended or closed question to gather more specific data regarding the participant's previous answer. Examples of probing questions are 'Why did you think that system X was not usable?' or 'How did it make you feel when you made that error with the system?'

Cycle of questions
The interviewer should begin by focusing on a particular topic with an open-ended question. When the participant has answered, a probing question should be used to gather further information (Stanton and Young, 1999). A closed question should then be used to gather specific information regarding the topic. This cycle of open, probing and closed questions should be maintained throughout the interview.
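As one way of organising this cycle before an interview, here is a minimal Python sketch of an interview guide in which each topic carries exactly one open, one probing and one closed question, asked in that order. The class name, topic label and question wordings (drawn from the examples above) are illustrative assumptions, not a prescribed tool.

```python
from dataclasses import dataclass

@dataclass
class TopicCycle:
    """One open-probe-closed question cycle for a single interview topic."""
    topic: str
    open_question: str
    probing_question: str
    closed_question: str

    def script(self):
        # The cycle fixes the asking order: open first, then probe, then closed.
        return [self.open_question, self.probing_question, self.closed_question]

# Hypothetical guide for a usability interview about "system X".
guide = [
    TopicCycle(
        topic="usability of system X",
        open_question="What do you think about the usability of system X?",
        probing_question="Why did you think that system X was not usable?",
        closed_question="Do you think that system X is usable?",
    ),
]

for cycle in guide:
    print(f"Topic: {cycle.topic}")
    for question in cycle.script():
        print(f"  - {question}")
```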
Procedures for developing questions and conducting an interview

Step 1: Define the interview objective
Define the objective to set the focus of the interview. A clear definition of the interview objectives ensures that the questions used are relevant and that the data gathered is of optimum use. For example, when interviewing a civil airline pilot for a study into design-induced human error on the flight deck, the objective of the interview is to discover which errors the pilot has made or seen being made in the past, with which part of the interface, and during which task.

Step 2: Develop questions
In the design-induced pilot error case, an example of a pertinent question would be 'What sort of design-induced errors have you made in the past on the flight deck?', followed by a probing question such as 'Why do you think you made this error?' or 'What task were you performing when you made this error?'. An interview transcript or data collection sheet should then be created, containing the interview questions and spaces for demographic information (name, age, sex, occupation etc.) and interviewee responses.

Step 3: Perform a pilot or trial run
This step is useful in shaping the interview into its most efficient form and allows any potential problems in the data collection to be identified. The interviewer gains an indication of the type of data that the interview may gather and can change the interview content if appropriate.

Step 4: Redesign the interview based upon the pilot run
This step may include the removal of redundant questions, the rewording of existing questions or the addition of new questions.

Step 5: Select appropriate participants
Use a representative sample from the population of interest. For instance, for the study of design-induced human error on the flight deck, the participant sample would comprise airline pilots with varying levels of experience.

Step 6: Conduct and record the interview
As general guidelines, the interviewer must be confident and familiar with the topic in question, communicate clearly and establish a good rapport with the interviewee. The interview should last a minimum of 20 minutes and a maximum of 40 minutes (Kirwan and Ainsworth, 1992).

Step 7: Transcribe the data
This step involves replaying the recording of the interview and transcribing in full everything said during the interview, by both the interviewer and the interviewee.

Step 8: Gather the data
Look for the specific data that was required by the objective of the interview; this is known as the 'expected data'. Then re-analyse the interview to gather any 'unexpected data', that is, any extra data not initially outlined in the objectives.

Step 9: Analyse the data
Analyse the data using appropriate statistical tests or graphs.
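The following is a minimal sketch, assuming the transcript from Step 7 has already been hand-coded into labelled excerpts, of how the 'expected' and 'unexpected' data of Step 8 might be separated and tallied for the analysis in Step 9. The code labels and excerpts are invented for the flight deck example and are not from any published coding scheme.

```python
from collections import Counter

# Codes derived from the interview objective; any other code found in
# the transcript is treated as unexpected data. All entries here are
# hypothetical illustrations.
EXPECTED_CODES = {"mode_confusion", "data_entry_error", "misread_display"}

# Hand-coded transcript excerpts: (code, excerpt).
coded_excerpts = [
    ("mode_confusion", "I selected the wrong autopilot mode..."),
    ("data_entry_error", "I typed the wrong waypoint identifier..."),
    ("fatigue", "I was at the end of a long duty period..."),
    ("mode_confusion", "The mode annunciation was easy to miss..."),
]

code_counts = Counter(code for code, _ in coded_excerpts)

print("Expected data:")
for code in sorted(EXPECTED_CODES & code_counts.keys()):
    print(f"  {code}: {code_counts[code]} excerpt(s)")

print("Unexpected data:")
for code in sorted(code_counts.keys() - EXPECTED_CODES):
    print(f"  {code}: {code_counts[code]} excerpt(s)")
```

The resulting counts can then feed the statistical tests or graphs called for in Step 9.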
3. Questionnaire

Questionnaires offer a flexible way of collecting specific data from a large population sample. Questionnaires are used to collect data on numerous issues within human factors and design, including usability, user satisfaction, error, and user opinions and attitudes. They can be used to evaluate concept and prototype designs, to probe user perceptions, to evaluate existing system designs, and to evaluate system usability or attitudes towards an operational system.

Procedures for developing a questionnaire

Step 1: Define the objective
Define the objective by focusing on the type of information the questionnaire data should provide. For example, when designing a questionnaire to gather information on the usability of a system or product, the objectives should contain precise descriptions of the usability problems already encountered and of the usability problems to be investigated. The analysts should identify the different tasks involved in the use of the system in question and categorise the personnel performing them. The analysts should also specify the types of questions (closed, multiple choice, open, rating, ranking) and the expected results.

Step 2: Define the population
Define the sample population, that is, the participants to whom the questionnaire will be distributed. Describe an area of personnel, such as 'control room operators', and define age groups, different job categories (control room supervisors, operators, management etc.) and different organisations.

Step 3: Construct the questionnaire
A questionnaire comprises four parts: an introduction, a participant information section, an information section and an epilogue.

i. Introduction
The introduction informs the participant who you are, what the purpose of the questionnaire is and what the results will be used for. Avoid putting information in the introduction that may bias the participant, for example describing the purpose of the questionnaire as 'determining usability problems with existing C4i interfaces'.

ii. Participant information section
This section contains multiple choice questions requesting information about the participant, such as age, sex, occupation and experience.

iii. Information section
This section contains the questions designed to gather the required information related to the initial objectives. The type of question used depends upon the analysis and the type of data required, and should be consistent within the information section. Each question should be short, worded clearly and concisely, and use language relevant to the respondents.

Data analysis should be considered when constructing the questionnaire. For example, if time is limited, use closed questions, since they provide a quick means of analysing specific data (a sketch of such an analysis follows this procedure step). Consider the size of the questionnaire: if it is too long, participants will not complete it, yet a very short questionnaire may seem worthless. Questionnaires should be no longer than two pages (Wilson and Corlett, 1995).

Table 1: Types of questions

iv. Epilogue
Ask for an email address at which the participant can be contacted for future studies, and thank the participant for their time and cooperation in answering the questions.
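To illustrate why closed questions are quick to analyse, here is a minimal Python sketch that tallies yes/no responses to the closed questions of a hypothetical questionnaire. The question wordings and response lists are invented for illustration.

```python
from collections import Counter

# One response list per closed question, one entry per returned
# questionnaire. All data here is hypothetical.
responses = {
    "Q1: Do you think that system X is usable?":
        ["yes", "no", "yes", "yes", "no"],
    "Q2: Have you made an error while using system X?":
        ["yes", "yes", "no", "yes", "yes"],
}

for question, answers in responses.items():
    counts = Counter(answers)
    total = len(answers)
    print(question)
    for answer in ("yes", "no"):
        share = counts[answer] / total
        print(f"  {answer}: {counts[answer]} ({share:.0%})")
```

Open-ended responses, by contrast, would first need the kind of hand-coding shown in the interview section before any such tally is possible, which is why they take longer to analyse.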
Step 4: Run a pilot study
This step allows any problems with the questionnaire to be removed before it is distributed and analysed. Common problems are errors within the questionnaire, redundant questions, and questions that participants do not understand or find confusing. Wilson and Corlett (1995) recommend that the pilot should comprise three stages: individual criticism, depth interviewing and large sample administration.

i. Individual criticism
The questionnaire should be distributed to several colleagues who are experienced in questionnaire construction, distribution and analysis, and they should be encouraged to offer criticisms of it.

ii. Depth interviewing
Distribute the questionnaire to a small sample of the intended population, then interview the participants about the answers they provided.

iii. Large sample administration
Distribute the questionnaire to a large sample of the intended population. This allows the analyst to ensure that the correct data is being collected and that sufficient time is available to analyse the data.

Step 5: Distribute the questionnaire
The manner of distribution depends upon the aims and objectives of the analysis and upon the target population.

Step 6: Analyse the data

Step 7: Inform the participants of the outcome of the study

4. Simulation

Simulation is used to observe the performance of individuals in a simulated task environment. An example study is a virtual environment evaluation for a safety warning effectiveness study.

References

Kirwan, B., & Ainsworth, L.K. (1992). A Guide to Task Analysis. London: Taylor and Francis.
Stanton, N.A., & Young, M.S. (1999). A Guide to Methodology in Ergonomics: Designing for Human Use. London: Taylor and Francis.
Stanton, N.A., Salmon, P.M., Walker, G.H., Baber, C., & Jenkins, D.P. (2005). Human Factors Methods: A Practical Guide for Engineering and Design. Burlington, VT: Ashgate Publishing.
Wilson, J.R., & Corlett, N.E. (1995). Evaluation of Human Work: A Practical Ergonomics Methodology. London: Taylor and Francis.