Conducting A Survey - Lecture 2, 2024
Summary
This lecture covers conducting a survey, focusing on cognitive effects in questionnaires, survey quality, and cognitive laboratories. It presents the Total Survey Error framework and Tourangeau's response model, and discusses survey modes, fieldwork, and incentives. The document is formatted as lecture slides for a university course.
Full Transcript
Conducting A Survey, Block 2, 2024
Week 2: Cognitive Effects in Questionnaires, Quality, and Cognitive Laboratories

Goals of week 2
▪ Total Survey Error (TSE)
▪ Stages of answering questions (Tourangeau's Response Model)
▪ Survey modes
▪ Fieldwork
▪ Incentives
▪ Survey motivation

Total Survey Error (TSE) Framework (Groves et al. 2009, p. 48)

Total survey error
Error sources on the way from the 'true' population mean to the observed sample mean (each can add bias and/or increase variance):
1. Coverage error
2. Sampling error
3. Nonresponse error
4. Measurement error
5. Post-survey processing error
(6. Some other errors…)

Coverage error
The sampling frame does not cover all population elements and may contain ineligible elements.
Example:
- Survey population: 18- to 80-year-old citizens of Amsterdam
- Sampling frame: Amsterdam telephone book
Coverage error can introduce bias.

Sampling error
Estimates are based on samples, not the population. This introduces sampling error, captured in confidence intervals (see the simulation sketch further below).
IMPORTANT: Confidence intervals are only meaningful in the case of probability samples.

Nonresponse error
Occurs if respondents differ from nonrespondents in a systematic way.
High response rates - though desirable - do not guarantee the absence of nonresponse error if a specific group is not represented.
-> Compare samples to known population demographics (although these are often based on samples, too).
Strategies to minimize nonresponse error: refusal conversion, response weighting.

Survey errors - overview so far:
- Coverage error
- Sampling error
- Nonresponse error

Measurement error
Occurs when our instrument does not measure exactly what we want it to measure.
- Random error -> reliability
- Systematic error -> validity
Sources: design issues (lay-out of response scales, question design), interviewer effects, social desirability, etc.

Processing error
Error connected to the processing of survey data, such as data entry, handling of questionnaires and datasets, computer errors, and the like.

Stages of answering questions
▪ Psychology of survey response
▪ Model by Tourangeau et al. (2000):
▪ The process of reading the question and answering it is an extension of the 'stimulus-response' model of survey response.
  - Stimulus = questions
  - Response = answer

A survey question
"How many times have you seen a doctor in the past year? ____"
Is this a good question? Why is this a "bad" question?
1. Demands the ability to remember all occasions
2. Memory depends on the answer (e.g. none vs. 63 times)
3. What doctor? Dentist? Only one or all?
4. Past 12 months? Only 2023? Or 2022?
(These problems involve the retrieval, interpretation, and judgment stages.)

Model of Tourangeau
▪ Four stages of cognitive information processing after reading the question:
  - Interpretation
  - Retrieval
  - Judgment
  - Reporting

Model of Tourangeau
▪ The respondent is unaware of the model
▪ The 4 steps in the cognition phase happen more or less simultaneously, in a very short time
▪ The model helps us to understand the process from questions to answers and the 'errors' that may occur
▪ And this helps you to design a questionnaire of high quality.

Model of Tourangeau - Stage 1: Interpretation
The respondent interprets the question. Interpretation can be affected by
▪ The clarity of the question
▪ Previous questions
▪ Respondent characteristics, e.g. intelligence, education/knowledge
Bad examples:
- The question from the earlier slide: what is a doctor? (clarity of the definition)
- "Are you in favour of or against pre-birth genetic analyses for inheritable diseases?" (prior knowledge?)

Model of Tourangeau - Stage 2: Retrieval
The respondent has to retrieve relevant information from memory. (The aspects of the questionnaire that influence retrieval are listed after the sketch below.)
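A brief aside before continuing with the retrieval stage: the sampling-error and nonresponse-error points above can be made concrete with a small simulation. This is not part of the lecture; it is a minimal sketch assuming Python with numpy, and all numbers (population size, the "satisfaction" score, response probabilities) are invented for illustration.

```python
# Minimal sketch (not from the lecture): sampling error shrinks with n,
# nonresponse bias does not. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic population: a 0-10 satisfaction score for 100,000 people.
population = rng.normal(loc=6.5, scale=2.0, size=100_000).clip(0, 10)
true_mean = population.mean()

def ci_95(sample):
    """Normal-approximation 95% confidence interval for the sample mean."""
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(len(sample))
    return m - 1.96 * se, m + 1.96 * se

# 1) Sampling error: a probability sample gives an interval that usually
#    covers the true mean and gets narrower as n increases.
for n in (100, 1_000, 10_000):
    sample = rng.choice(population, size=n, replace=False)
    low, high = ci_95(sample)
    print(f"n={n:>6}: CI = ({low:.2f}, {high:.2f}), true mean = {true_mean:.2f}")

# 2) Nonresponse error: suppose less satisfied people respond less often.
#    Even a large realized sample then stays biased; its narrow CI is
#    precisely wrong, which is why weighting/refusal conversion matter.
response_prob = 0.2 + 0.06 * population          # more satisfied -> more likely to respond
respondents = population[rng.random(population.size) < response_prob]
biased_sample = rng.choice(respondents, size=10_000, replace=False)
low, high = ci_95(biased_sample)
print(f"nonresponse-biased n=10000: CI = ({low:.2f}, {high:.2f})")
```

The intervals from the probability samples tighten around the true mean as n grows, while the nonresponse-biased sample produces a narrow interval around the wrong value; that is exactly why the slides recommend comparing the sample to known population figures and applying response weighting.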
Aspects of the questionnaire that influence retrieval:
▪ Ordering
▪ Priming
▪ Framing
▪ Respondent characteristics (e.g. intelligence)

Influences on retrieval and implications for questionnaire design - 1. Ordering
▪ The answer to one question depends on the previous one(s)
▪ Common in item grids/matrices, when respondents have to answer many questions
Example:
- General question A: How satisfied are you with your life in general?
- Specific question B: How satisfied are you with your relationship?
The correlation between the questions differs when the ordering differs:
- Order A then B: r = 0.32 (questions interpreted as referring to separate domains)
- Order B then A: r = 0.67 (questions interpreted as referring to connected domains)

Influences on retrieval and implications for questionnaire design - 2. Priming
▪ Priming means activating parts of associations in memory just before carrying out a task or action
▪ Result: faster retrieval of the primed information from memory
▪ How?
  - Write a small introduction
  - Ask extra questions on the subject (e.g. income)
  - Item grids (the respondent learns how to deal with the answer options)

Influences on retrieval and implications for questionnaire design - 3. Framing
Frames are mental structures that are used to facilitate the thinking process. We use frames to provide categories and to structure our thoughts. Framing is the process of selectively using frames to invoke a particular image or idea.
In questionnaires, frames can impact which information is retrieved:
▪ Introductions and lay-out (pictures) can impact responses
▪ Leading questions are an example of framing
→ Be N-E-U-T-R-A-L
→ Be aware of it; it is very hard not to frame

Model of Tourangeau - Stage 3: Judgment
Can the retrieved information be used to answer the question? The different pieces of retrieved information result in an internal answer: what the respondent believes is the correct answer.
(It need not be correct. It need not be reported.)

Model of Tourangeau - Stage 4: Reporting
▪ The final response is a compromise between the internal answer and the response categories.
▪ The judgment may be changed if no category fits the internal answer.
▪ Willingness to report the internal answer is strongly influenced by perceived response burden due to:
  - Sensitivity of the topic & perceived social desirability of answers
  - Cognitive effort
  - Time/length

Causes of measurement error
▪ Demanding too much of respondents causes erroneous responses, intentional or unintentional
▪ Causes:
  - Cognitive effort: something goes wrong at one point in Tourangeau's model
  - Burden: time demands may be beyond acceptability (consider especially length!); break-offs as an alternative to measurement error
  - A sensitive topic is addressed

Types of measurement error
▪ Effect I: Misreporting (over-/underreporting; systematic bias)
▪ Effect II: Loss in precision (loss in reliability; increase in variance)
▪ Effect III: Cognitive shortcuts ("satisficing", Krosnick 1991), e.g.
  - 'Don't know' when an internal answer is available
  - Choosing the first option that fits reasonably well (not considering ALL options)
  - Mental 'coin flipping'
  - Straight-lining
  - Checking 'pretty patterns' in item grids
→ All undesirable, biasing noise in the data! (A small sketch for flagging straight-lining appears at the end of this transcript.)
▪ Generally summed up by the term "measurement error"
  - The respondent produces measurement error
  - But is not always the cause of the error (design!)

Recruitment
Offline:
- Face-to-face
- Telephone
- Paper-and-pencil
Online:
- Email
- Websites
- Social network sites

Survey modes
Face-to-face surveys:
- Print questionnaires
- Make plans for fieldwork: how many interviewers? When? Where?
- Who is going to collect the questionnaires?
- If you haven't done so already: check locations. Will you be allowed to conduct your survey there?
Web surveys:
- Create the online questionnaire (Qualtrics)
- Test the questionnaire! Do this multiple times in order to check whether you have chosen the correct question type (e.g. single vs. multiple answers) and routing (if applicable)
- Distribute the link
Mail surveys:
- Print questionnaires
- Envelope and stamps included?
- Send questionnaires to respondents
- Make plans for sending reminders
Always test your questionnaires and let other people test them!!!

Modes of data collection
Basic distinctions:
- Interviewer- versus self-administered
- Computerized versus paper-and-pencil
- Written versus audio
- Private versus group setting

                     Interviewer-administered    Self-administered
Computerized         CATI, CAPI                  Web-based surveys, CASI
Not computerized     PAPI                        Mail / drop-off / group questionnaires

Interviewer-administered versus self-administered questionnaires
Interviewer-administered questionnaires:
- Classic: face-to-face interview
- Later: telephone interviews
Advantages:
- The interviewer can provide guidance & explanation
- The interviewer can probe / follow up
- Well suited for qualitative research
- Motivation/rapport
Disadvantages:
- Relatively high cost
- Potential interviewer effects

Potential interviewer effects
▪ Response rates
  - The interviewer's expertise can influence response rates; rates increase with experience
  - Interviewer behavior can even influence cooperation rates in subsequent interviews
▪ Social desirability
  - The interviewer's gender, age, and race can bias responses, especially if directly related to the study topic
  - In some cases, outside interviewers are preferred to 'insiders' (Crowley, Roff, & Lynch, 2007)
  - Interviewing style (e.g. personal versus formal) can influence data quality (see Dijkstra, 1987)
▪ More effects
  - Some interviewers are more thorough than others...

Computer-assisted interviewing
Advantages:
- Data processing (no need for additional data entry)
- Automatic routing (easier for interviewer and respondent)
- Presentation of audiovisual stimuli
Disadvantages:
- Development costs in terms of time and money (less so if the software is used in multiple surveys)

Which method is the best?
No clear answer; it depends on the target population, goal, budget, timeframe, et cetera of your study. Sources of error and the magnitude of error differ between methods -> total survey error approach.

Face-to-face surveys
- Coverage error can be very small if the researcher has access to a population database (such as provided by CBS, Statistics Netherlands)
- Nonresponse error: response rates are usually high (but declining) in face-to-face surveys, but can depend on factors such as neighborhood/urbanity and on the hours the respondent is home (i.e. lower for respondents who work full-time)
- Measurement error: the interviewer can clarify concepts (+) and reduce item nonresponse (+) but can also introduce social desirability bias (-)

Telephone surveys
- Coverage: problems with cell phone numbers and unlisted numbers; landline coverage is decreasing. Cell-phone-only respondents differ from the general population (see next slide)
- Nonresponse: used to be manageable, but is increasing, e.g. due to the
  'Don't call me' register (NL)
- Measurement error: see the face-to-face mode. The interviewer can clarify concepts (+) and reduce item nonresponse (+) but can also introduce social desirability bias (-)

Mail surveys
- Coverage error: relatively low; depends on the accuracy of the database used; excludes persons without a known/steady address
- Nonresponse: response rates are relatively low, though the difference with telephone is decreasing
  - Less educated respondents are less likely to return the questionnaire
  - Item nonresponse is relatively high
- Measurement error: problems with routing, no interviewer present to explain/clarify (-), high perceived anonymity (+)

Web surveys
- The main problems with web surveys are respondent selection errors
- The problem of internet non-coverage is decreasing, but there is no sampling frame for the general population
- At this point, web surveys can only produce usable estimates for populations with known e-mail addresses (e.g. student populations, company employees)
- Only applicable in general population surveys if combined with another sampling method, for example if respondents are sent letters first, i.e. the first contact is not via the internet. For an example see http://www.lisspanel.nl (in Dutch)

Web surveys (2)
- Response rates are relatively low, but can be improved, e.g. by sending advance postcards
- Relatively high breakoff rates, depending on the (perceived) length and clarity of the questionnaire
- Samples are often large due to the low cost of sending the survey to many potential respondents. Unfortunately, this can only decrease sampling error if a probability sample is used
- As in mail questionnaires, measurement is affected a lot by design criteria

Optimal strategy: tailored design
Tailored Design Method (Dillman, 2007)
Contact augmentation with multiple modes:
- Mail announcement letter
- Email invitation
- Telephone reminder
- Face-to-face visit for non-contacts or refusers

Mixed-mode designs
Why?
- Balance for under-coverage, e.g. dual-frame designs
- Increase overall response rates
- Save costs - how?
How? Two major variants (simplified):
- Concurrent design (truly multiple mode): let the respondent choose the preferred mode
- Sequential design (one main mode):
  - approach nonrespondents of the first mode (e.g. web) with a second mode (e.g. CATI)
  - or with a combination of modes (CATI and CAPI)

Combining modes
Mixing modes has advantages, but
- Answers can differ by mode
- Can we combine data collected through different modes in one study?
- Can data that are collected through different modes be compared over studies or countries?
- How should questionnaires be designed?

"Thoughtless" mixing increases measurement errors
- Different modes have a tradition of different question formats
  - The question format has an effect on the response distribution
- Consequence: designers may routinely enhance unwanted mode effects in a mixed-mode survey
  - E.g. unfolding in one mode, full presentation of all response options in the other mode
- What to do?
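One diagnostic check (my own addition, not from the slides): before pooling data collected in two modes, compare the per-mode answer distributions, for example with a chi-square test. The sketch below assumes Python with numpy and scipy; the counts and the "web vs. CATI" split are invented for illustration.

```python
# Sketch: do answer distributions differ by mode? (synthetic counts)
import numpy as np
from scipy.stats import chi2_contingency

# Rows = mode, columns = counts per response category (e.g. a 5-point scale).
counts = np.array([
    [120, 310, 420, 260,  90],   # web
    [ 60, 150, 300, 280, 110],   # CATI follow-up
])

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p_value:.4f}")
```

A small p-value signals that the response distributions differ by mode. In a sequential design this confounds selection effects (different people end up in each mode) with measurement effects (the mode itself changes answers), so the test flags a problem rather than explaining it; the lecture's structural answer, designing for the mix, follows.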
Design for the mix
Two situations:
- One main method that accommodates the survey situation best
  - The main method is used to maximum potential
  - Other methods are auxiliary
  - Examples: nonresponse follow-up, non-covered groups
- Truly multiple-mode design
  - Modes are equally important
  - Examples: international surveys, longitudinal studies, the respondent is offered a choice

Example: UNI mode design
Mail, telephone and face-to-face interview:
- Response options the same across modes
- Same descriptive labels for response categories
- Reduced number of response categories
  - Maximum of 7, pushing the limit for phone
  - But show cards were used in face-to-face, equivalent to the visual presentation in mail
- Simple open questions were used
- Interviewer instructions and the instructions in the mail questionnaire were equivalent

Example: Security Monitor (roughly)
Sequential design: web first, with mailed reminders; then, in sequence, CATI for sample members with a registered telephone number and CAPI for those with no telephone or non-contacts.
Would you call this a main-mode or a multiple-mode design?

Fieldwork
Take sufficient time to
- Develop the survey
- Program the survey
- Test the survey
- Let other people test the survey
Web surveys are normally filled out within a couple of days (the bulk within a single day)!
Send a reminder after 1 week.

Incentives: important theories
- Heuristic rule for compliance (Groves et al., 1992): respondents should be more willing to comply with a survey request to the extent that compliance constitutes the repayment of a perceived gift, favor, or concession.
- Social exchange theory (Dillman, 1978): actions of individuals are motivated by the return these actions are expected to bring from others.
- Economic exchange theory (Biner and Kidd, 1994): respondents choose to participate in a survey after making a rational cost-benefit calculation.

Types of incentives
- Prepaid: given before respondents take part in the survey; they are unconditional
- Postpaid: given after the respondent has finished the survey
Prepaid incentives are more powerful than postpaid incentives.
Other classification:
- Monetary (more powerful)
- Non-monetary

Top-3 motives for inactivity
1. Personal reasons (64%): too busy, tired of it, health reasons, family situation
2. Technical reasons (14%): functioning of the computer, losing the password, not receiving emails
3. Questionnaires (13%): too long, too many, uninteresting, repetitions
▪ Based on open answers of sleepers in the LISS Panel
▪ Same top-3 in the TNS NIPO base

Top-3 motives for becoming a panel member
1. Personal interest/curiosity
2. (Financial) compensation
3. Contributing to science
▪ Based on open answers of sleepers
▪ Note: compensation is often mentioned as the second reason, after further probing by the interviewer
▪ Can we make participation more interesting?

Adding fun questions: study by Marije Oudejans (CentERdata)
▪ "Fun" questions, tailored to the specific interests of respondents
▪ At the end of a questionnaire
▪ Recency memory effects on:
  - Appreciation of the questionnaire
  - Participation in the following month

Outlook
Choose the mode pragmatically for your objective
- E.g. sensitive issue: maybe not personal interviews
- E.g. low budget: web survey
There is no flawless design, only optimal designs!
If you work on larger projects / for organisations, designs can become more ambitious.
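Finally, a small post-fieldwork data-quality sketch that ties back to the satisficing slide earlier: flagging straight-lining in an item grid. This is not part of the lecture; pandas is assumed, and the column names and values are made up.

```python
# Sketch: flag respondents who give the identical answer to every item
# in a grid ("straight-lining"), one common satisficing pattern.
import pandas as pd

grid_items = ["q1_a", "q1_b", "q1_c", "q1_d", "q1_e"]  # hypothetical item names

df = pd.DataFrame({
    "resp_id": [1, 2, 3, 4],
    "q1_a": [3, 5, 2, 4],
    "q1_b": [3, 5, 3, 4],
    "q1_c": [3, 5, 4, 2],
    "q1_d": [3, 5, 2, 5],
    "q1_e": [3, 5, 1, 3],
})

# Straight-lining = zero variation across the grid items for a respondent.
df["straight_lined"] = df[grid_items].nunique(axis=1) == 1
print(df[["resp_id", "straight_lined"]])
```

A flag like this is a reason to inspect a case (ideally together with completion-time data), not to delete it automatically, since identical answers across a grid can occasionally be genuine.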