Week 9 & 10 Exam Notes PDF
Summary
These notes cover research methods, specifically internet-based research (Week 9) and survey methods (Week 10), and their uses in psychology and other fields.
Full Transcript
Week 9 & 10
Week 9: Research and the Internet
Week 10: Survey Methods

Week 9

Publishing psychology studies
- Computer scientists have been archiving their work online since the 1970s
- Physicists have been archiving theirs since the 1990s in the preprint repository "arXiv"
- Online psychology journals: Psycoloquy (APA, 1990-2002); Journal of Vision (ARVO, 2001 to date)

Types of data collection
- Online surveys: automatic test scoring; examples: Qualtrics, LimeSurvey, SurveyMonkey
- Perception/cognition experiments, etc.: automatic test scoring, stimulus presentation, response timing; examples: PsychoPy, Gorilla
- Applied research: evaluation of therapeutic programs, e.g., MindSpot
- Historical records: digital transactions; quantitative & qualitative material

Advantages of using the internet for research
- Can make recruitment/data collection much easier
- Large, diverse samples at low cost
- Can focus on specific groups and on different countries/cultures
- Potentially less experimenter bias (an online procedure is always standardised, so the experimenter does not influence the participant's answers)
- Faster data collection: global participation means 24-hour data collection
- The internet is more familiar and interesting to younger participants, which increases motivation

Ethical benefits
❖ Anonymity and increased self-disclosure in online research
❖ Sometimes less effort/cost for participants (e.g., completing surveys at home), so they are more likely to participate
❖ Decreased social pressure

Ethical disadvantages
- The Facebook "emotional manipulation" study showed people either happy/positive or sad/negative posts
- Harm resulting from direct participation: what if online participants become distressed?
- Breaches of confidentiality

Disadvantages of using the internet
- Sample biases and lack of generalisability; drop-out
- No control over the data collection setting
- Participants may invest less energy (cf. social facilitation theory)
- Data may get leaked (security issues)
- Technology failures
- Multiple submissions (repeat participants)
- Researchers need to become greater technology experts

Threats to 'data quality': solutions
- Pilot and pretest
- Collect data from a 'trustworthy' source
- Good management of participants (e.g., recording IP addresses to avoid multiple submissions)
- Establish objective exclusion criteria (e.g., timing, attention checks); see the screening sketch below
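A minimal sketch (Python/pandas) of how the last two solutions might be automated on a raw survey export. The column names (ip_address, duration_secs, attn_check_1, attn_check_2) and the cut-off values are hypothetical placeholders, not part of the lecture material; real platforms export different field names, and exclusion criteria should be fixed before data collection.

```python
# Hypothetical screening of a raw survey export; column names and
# thresholds are assumptions for illustration only.
import pandas as pd

def screen_responses(df: pd.DataFrame, min_secs: float = 180.0) -> pd.DataFrame:
    """Flag repeat submissions and apply objective exclusion criteria."""
    out = df.copy()

    # Multiple submissions: flag every response after the first per IP.
    out["duplicate_ip"] = out.duplicated(subset="ip_address", keep="first")

    # Timing criterion: flag suspiciously fast completions.
    out["too_fast"] = out["duration_secs"] < min_secs

    # Attention checks: flag failures (correct answer assumed coded as 1).
    out["failed_attention"] = (out["attn_check_1"] != 1) | (out["attn_check_2"] != 1)

    # A case is excluded if it trips any criterion.
    out["exclude"] = out[["duplicate_ip", "too_fast", "failed_attention"]].any(axis=1)
    return out

# Usage: screened = screen_responses(pd.read_csv("survey_export.csv"))
```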
Week 10: Survey Methods in Research Practice
1. Find the appropriate measures to answer your research question
2. Determine the quality of a measure for your purposes
3. Determine whether to adapt an existing measure or create a new one
4. Decide on the type of survey to use and where to create it
5. Distribute your survey and collect responses
6. Determine whether the survey responses are valid
7. Handle invalid responses in your dataset

1. Where to find measures?
- Use electronic databases
- Example: the APA PsycTests database

2. Determine the quality of the measure for your purposes
- Is the questionnaire (e.g., a memory questionnaire) appropriate for your research question and design? Consider the target demographic and the method of delivery
- Does it have good psychometric properties? Validity (e.g., construct, content, criterion) and reliability (e.g., internal consistency, test-retest reliability)
- Is the questionnaire appropriate in a practical sense? Is it free? Is access restricted?

3. Determine whether to adapt an existing measure or create a new one
- Items that are double negatives and a bit confusing to answer may need to be adapted
- Keep the response scales consistent across measures where possible; for example, avoid pairing Measure 1, scored strongly disagree (1) to strongly agree (5), with Measure 2, scored strongly agree (1) to strongly disagree (7) (see the reverse-coding sketch after this step)
- This is not always possible:
  Measure 1 = Strongly disagree, Disagree, Neutral, Agree, Strongly agree (attitudes)
  Measure 2 = Never, Rarely, Sometimes, Often, Always (frequency)
- Minimize confusion by grouping frequency scales together and attitude scales together
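A minimal sketch of reverse-coding, one way to make a reversed scale run in the same direction as the other measures. It assumes a 7-point item scored strongly agree (1) to strongly disagree (7), as in the example above; the column name is hypothetical. This fixes direction only: a 5-point and a 7-point scale still differ in length, which is one reason full consistency is not always possible.

```python
# Reverse-code a Likert item so that higher scores mean stronger
# agreement, matching the direction of the other measures.
# The column name "measure2_item1" is a hypothetical example.
import pandas as pd

def reverse_code(item: pd.Series, scale_min: int = 1, scale_max: int = 7) -> pd.Series:
    """Map 1 -> 7, 2 -> 6, ..., 7 -> 1 on a scale from scale_min to scale_max."""
    return (scale_min + scale_max) - item

df = pd.DataFrame({"measure2_item1": [1, 4, 7]})
df["measure2_item1_r"] = reverse_code(df["measure2_item1"])  # -> 7, 4, 1
```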
Benefits of using/adapting an existing measure
- Saves time
- Saves resources (e.g., the cost of running a pilot)
- Easier to compare results with previous studies
- You only need to give an overview in the method section
- Potentially makes it easier to publish your study, since reviewers are more likely to be supportive

Creating your own measure
- Research first: existing measures, theories on the research topic, focus groups, and other resources (e.g., DSM, ICD)
- Step 1: Draft your survey items
- Step 2: Pilot and edit your items:
  1. Ask for feedback on your items from colleagues, your supervisor, etc.
  2. Edit the items
  3. Pilot your draft survey with a small group and ask for feedback
  4. Edit the items again
- To maximize reliability, ask: will the items be interpreted the same way by all respondents (e.g., male/female), on different days, and in the way that you intended?
- To maximize validity, ask: do the items tap into the information you need? Are they accurate and relevant?

4. Decide on the type of survey to use and where to create it

Mail survey
- Advantages: access to the target population; inexpensive and convenient; increased honesty of responses; avoids technical issues
- Disadvantages: poor response rate (3-15%); not suitable for certain populations (e.g., young children); no control over administration or return of the survey; manual data entry

Interview (face-to-face or phone)
- Advantages: suitable for all populations; captures non-verbal cues; keeps the interviewee focused; minimizes missing data
- Disadvantages: high cost; time consuming; can limit sample size; anonymity compromised

Online survey
- Advantages: convenient; time and cost effective; automatic collection and storage of data; provides access to large samples anywhere in the world
- Disadvantages: cannot access populations that have no internet or don't know how to use it (remote locations, the elderly); increased chance of survey fraud (patterned responding); incentives may actually encourage falsification

Creating a new online survey
- The first step is determining the sensitivity level of the data you intend to collect; data can be grouped into three categories depending on the sensitivity of its information:
  - Highly Sensitive: identifiable/re-identifiable data containing information about health, criminality, genetics, race and religion, finances, or political opinion (e.g., data on illegal activity from drug users that includes participants' names and dates of birth)
  - Sensitive: identifiable/re-identifiable data containing personal information NOT related to the factors above, cultural heritage, location information, or ecological/environmental data about threatened or endangered species
  - General: NOT identifiable/re-identifiable; the data has been anonymized and/or is publicly available, or is in aggregated form
- A data sensitivity calculator is an interactive guide to help you determine the sensitivity level of a dataset
- Research platforms differ in purpose: some are suitable for collecting (sensitive) survey data but NOT for data storage; others are suitable for safely storing data online, archiving (sensitive) data, and providing access to datasets

5. Distribute your survey and collect responses
- Relevant organisations: schools, hospitals, workplaces, etc.
- Personal or professional email
- Research participant pools: MQ SONA, Qualtrics, etc.
- Social media
- "Word of mouth"

6. Determine whether the survey responses are valid
- Inattentive or careless responding by participants has adverse effects on data quality
- 3-9% of respondents engage in highly inattentive/careless responding
- It decreases statistical power, obscures meaningful results or creates spurious ones, and fails to replicate previous findings
- Other types of invalidity: faking good, faking bad, and social desirability, whereby the respondent aims to present themselves in a particular manner

Identify responses from:
- Slackers: lack the proper motivation to fully engage with your study
- Straight-liners: select the same answer repeatedly
- Speeders: rush through and often skim questions
- Survey bots: scripts that can automatically fill in question bubbles

Detecting invalid responding:
- Include specific items (direct or indirect) designed to detect inattention, e.g., response consistency indices formed from survey items (split-half reliabilities measured within respondents across scales)
- Multivariate outlier analysis: assessing statistically unlikely response patterns
- Survey response time: completing the survey in a suspiciously short amount of time (e.g., a 30-minute survey completed in 5-10 minutes)
- Long-string analysis: measuring the tendency of participants to repeatedly choose the same answer within a block regardless of item content (see the first sketch below)
- Self-reported diligence: include self-report response-style measures, e.g., "how often do you: read each question carefully, pay attention to every question, take as much time as you need to answer the questions honestly" or "how often do you: answer quickly without thinking, answer impulsively without thinking, rush through the survey"

7. How to deal with invalid responses in your dataset
1. Delete all cases with inappropriate responses; however, this reduces the sample size, which reduces power, and it takes time to build the sample back up
2. Create a cut-off score (e.g., > 80% incorrect responses on attention items)
3. Conduct a sensitivity analysis: run the analyses with and without the suspected invalid responses and see whether the results change substantially (recommended; see the second sketch below)
4. Because research shows that participants are unlikely to respond randomly across the entire survey, it may also be appropriate to code their invalid responses as missing and then apply the most appropriate method for handling missing data (e.g., pairwise deletion, single imputation, multiple imputation). This way you retain more of your sample and keep their valid survey responses.
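A minimal sketch of two of the detection methods from step 6: long-string analysis and response-time screening. The item columns (q1-q20), the duration field, and both cut-offs are hypothetical placeholders chosen for illustration.

```python
# Hypothetical careless-responding screen: long-string analysis plus
# completion-time check. Column names and thresholds are assumptions.
import pandas as pd

ITEM_COLS = [f"q{i}" for i in range(1, 21)]  # a hypothetical 20-item block

def longest_run(row: pd.Series) -> int:
    """Length of the longest run of identical consecutive answers."""
    values = row.tolist()
    longest = current = 1
    for prev, cur in zip(values, values[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    return longest

def flag_careless(df: pd.DataFrame, max_run: int = 10,
                  min_secs: float = 600.0) -> pd.DataFrame:
    out = df.copy()
    # Straight-liners: the same answer chosen many times in a row,
    # regardless of item content.
    out["long_string"] = out[ITEM_COLS].apply(longest_run, axis=1)
    out["straight_liner"] = out["long_string"] >= max_run
    # Speeders: e.g., a 30-minute survey finished in under 10 minutes.
    out["speeder"] = out["duration_secs"] < min_secs
    return out
```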
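A minimal sketch of option 3 (the recommended sensitivity analysis) and option 4 (coding invalid responses as missing). The flag columns come from the previous sketch, and "predictor"/"outcome" are hypothetical stand-ins for whatever analysis you actually run.

```python
# Hypothetical handling of flagged cases; all column names are assumptions.
import pandas as pd

def sensitivity_analysis(df: pd.DataFrame) -> None:
    """Run the same (stand-in) analysis with and without suspect cases."""
    suspect = df["straight_liner"] | df["speeder"]
    r_all = df["predictor"].corr(df["outcome"])
    r_clean = df.loc[~suspect, "predictor"].corr(df.loc[~suspect, "outcome"])
    print(f"All cases:              r = {r_all:.3f}")
    print(f"Suspect cases excluded: r = {r_clean:.3f}")

def code_invalid_as_missing(df: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    """Blank the item responses of suspect cases instead of dropping them,
    then handle the gaps with a standard missing-data method
    (e.g., pairwise deletion or multiple imputation)."""
    out = df.copy()
    suspect = out["straight_liner"] | out["speeder"]
    out.loc[suspect, item_cols] = float("nan")
    return out
```

If the two runs of the sensitivity analysis agree, the flagged responses are unlikely to be driving the results; if they diverge substantially, report both.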