Questions and Answers
What is the primary purpose of a questionnaire or survey instrument?
- To systematically gather data from study participants. (correct)
- To analyze existing datasets.
- To perform statistical analyses.
- To conduct laboratory experiments.
Questionnaire design is most effective when it starts with selecting question types and then determining the content to be covered.
False
In questionnaire design, what is the purpose of the initial set of questions?
- To introduce the main topics of the study.
- To gather demographic information.
- To assess the participants' detailed medical history.
- To confirm that participants meet the study eligibility criteria. (correct)
What is the term for external factors impacting health, including physical and social aspects?
Environment
What is the risk of using a survey that is too long?
Low response rates
Statistically analyzing open-ended questions is generally easier than analyzing closed-ended questions.
False
Questions with only two response options are known as ______ questions.
dichotomous (yes/no)
Which of the following is a key consideration when using numeric response options in a questionnaire?
Specifying the level of precision (e.g., height to the nearest inch or centimeter).
When designing categorical questions, it is essential to have fewer response options to reduce participant confusion.
False
What type of scale presents ordered responses to a questionnaire item, allowing participants to rank their preferences numerically?
A Likert scale
Why is it important to consider adding an 'I do not know' option in a survey question?
Omitting it may prevent respondents from revealing important information and can reduce completion rates.
Ensuring anonymity is crucial because it can lead to dishonest answers to sensitive questions.
False
[Blank] is the inability of a participant's identity to be discerned from their survey responses.
Anonymity
When is it acceptable to ask for a participant's birthdate in a survey?
Only when that level of precision is required for the study goal; otherwise, asking for current age in years is preferred.
It is always better to use complex language in questionnaires to ensure accuracy and precision.
False
What type of error occurs when participants become accustomed to giving a particular response repeatedly?
Habituation
What is the primary goal of ordering questions effectively in a questionnaire?
To create a natural flow from one topic to the next and encourage completion.
It is generally better to place sensitive questions at the beginning of a questionnaire.
False
In questionnaire design, blank spaces between printed content are referred to as ______.
white space
What is the purpose of a filter or contingency question?
To determine whether respondents are eligible to answer subsequent questions.
Skip logic is used in paper-based surveys to automatically hide irrelevant questions based on participants' responses.
False
In computer-based surveys, what term is used for the coding that automatically hides irrelevant questions?
Skip logic
What does 'reliability' refer to in the context of a questionnaire?
Getting consistent answers to similar questions and the same outcome on repeated assessments.
Validity refers to the precision of a survey instrument.
False
[Blank] is present when items in a survey instrument measure various aspects of the same concept.
Internal consistency
What is Cronbach's alpha used for?
Measuring the internal consistency of survey items with ordered responses.
Test-retest reliability is confirmed by multiple independent raters assessing the same participants and achieving a high degree of concordance.
False
What is the name of the statistic that determines whether two assessors agreed more often than expected by chance?
The kappa statistic (Cohen's kappa)
What is 'content validity' in the context of questionnaire design?
It is present when subject-matter experts agree that the survey items capture the study domain's information.
'Face validity' is present when a survey instrument measures the theoretical construct that it is intended to assess.
False
Flashcards
Questionnaire or survey instrument
A series of questions used to systematically gather data from study participants.
First step in designing a questionnaire
Listing the topics the survey instrument covers; include eligibility criteria, exposure, disease, and population demographics.
Agent
A pathogen or chemical/physical cause of disease/injury.
Host
A human or animal susceptible to an infection, described by factors influencing vulnerability such as age, genetics, or behaviors.
Environment
External factors facilitating or inhibiting health, such as physical characteristics or the social/political context.
Closed-ended questions
Questions that ask participants to choose from a fixed set of response options.
Open-ended questions
Questions that allow participants to answer in their own words.
For closed-ended questionnaire items
Researchers must decide on response options that allow participants to record answers accurately and completely.
Anonymity
The inability to discern a participant's identity from their survey responses.
Question Order
Start with easy/general questions and lead to more difficult/sensitive ones, ordered to flow naturally from topic to topic.
Habituation
An error that occurs when participants give the same response repeatedly because they have become accustomed to it.
White space
Blank spaces between printed content on a questionnaire.
Filter/contingency question
A question that determines whether respondents are eligible to answer subsequent questions.
Reliability
Precision; getting consistent answers to similar questions and the same outcome with repeated assessments.
Validity
The degree to which responses or measurements are correct; the instrument measures what it was intended to measure.
Internal consistency
Present when survey items measure various aspects of the same concept.
Cronbach's alpha
An intercorrelation statistic measuring internal consistency for items with ordered responses; ranges from 0 to 1.
Kuder-Richardson Formula 20 (KR-20)
An intercorrelation statistic measuring internal consistency for binary variables; ranges from 0 to 1.
Test-retest reliability
Demonstrated when people who retake an assessment later achieve scores that match their previous ones.
Interobserver agreement
The degree of concordance among independent raters assessing the same study participants.
Kappa statistic
Cohen's kappa (κ) determines whether two assessors agreed more often than expected by chance.
Content validity
Present when subject-matter experts agree that survey items capture the study domain's information.
Face validity
Present when intended users agree that the survey instrument is easy to understand and complete.
Principal Component Analysis (PCA)
A statistical method that generates index variables (components) from linear combinations of measured variables.
Construct validity
Present when survey questions measure the theoretical construct the tool is intended to assess.
Factor analysis
A statistical method that models a latent variable representing a construct that cannot be measured directly with a single question.
Convergent validity
Present when two items that theory says should be related are in fact correlated.
Criterion validity
Present when an established test serves as the standard confirming the utility of a new test of a theoretical construct.
Concurrent validity
Evaluated by having pilot participants complete both existing and new tests and calculating their correlation.
Predictive validity
Appraised when new test results are correlated with subsequent measures of performance.
Study Notes
Questionnaire Development
- A questionnaire, also known as a survey instrument, systematically gathers data from study participants using a series of questions.
Questionnaire Design Overview
- A well-designed questionnaire is carefully crafted for a specific research purpose, starting with identifying the content to be covered and choosing appropriate question and response types.
- While new data collection instruments are often necessary, validated question banks can be used for some topics.
- Once drafted, the questionnaire's wording and response items get checked, and sections/questions get logically ordered.
- Visually appealing and readable formatting of questionnaires or data entry forms is required.
- The questionnaire needs to be pretested/revised for content and ease of use before data collection.
- Questionnaire Design Plan: Identify question categories, select specific topics, choose question/answer types, check wording, choose order, format layout, pretest, revise, and then use.
Questionnaire Content
- Designing a questionnaire starts with listing the topics the survey instrument must cover.
- The initial questions confirm participants' eligibility for the study.
- Remaining questions should cover exposure, disease, and demographic areas relevant to the study question.
- Case-control studies require questions verifying cases meet the case definition and controls meet the control definition.
- Prospective cohort studies need questions about both exposure/disease status so participants can be classified.
- Research/literature can broaden topics in the questionnaire.
- Example sections for a breast cancer risk factors study include: sociodemographics, family health history, personal health history, reproductive history, and lifestyle factors.
- Systems thinking identifies underlying causes of complex problems, including questions influencing the relationships between exposures/outcomes.
- For example, a study on smoking/liver disease should consider alcohol use as a confounder by asking about both tobacco/alcohol use.
- Theoretical frameworks inform relevant questions like infectious disease epidemiology triad, specifically agent, host, and environment.
- Agent: a pathogen or physical/chemical cause of disease or injury, such as a contagious, drug-resistant infectious agent.
- Host: a human or animal susceptible to an infection, described by factors influencing vulnerability like age, genetics, or behaviors.
- Environment: external factors facilitating/inhibiting health, such as physical characteristics or the social/political context.
- Researchers gather data about agent, host, and environment components during an epidemic, but some agent characteristics require laboratory testing.
- Survey length needs to be manageable, and a survey that is too long risks low response rates, but being too short may miss crucial data.
Types of Questions
- Determining broad categories/topics is followed by deciding the most appropriate types of questions.
- Each survey item should be assigned a question type, such as a date or a yes/no question; question types determine which statistical tests can be used in data analysis.
- Numeric data uses t-tests, ANOVA, and linear regression; categorical data uses chi-square/logistic regression.
- Consulting a statistician early can ensure a strong/valid data analysis plan.
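As a rough sketch of how question type maps to test choice, the example below (with invented data) uses SciPy's `ttest_ind` for a numeric response and `chi2_contingency` for a categorical one:

```python
# Hypothetical example of matching statistical tests to question types.
# All data values are invented for illustration.
import numpy as np
from scipy import stats

# Numeric response (e.g., height in cm) compared between two groups -> t-test
heights_a = np.array([162, 170, 168, 175, 171])
heights_b = np.array([158, 165, 160, 167, 163])
t_stat, t_p = stats.ttest_ind(heights_a, heights_b)

# Categorical response (e.g., smoker yes/no among cases vs. controls) -> chi-square
# Rows: cases, controls; columns: smoker yes, smoker no
table = np.array([[30, 70],
                  [20, 80]])
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t-test p = {t_p:.3f}, chi-square p = {chi_p:.3f}")
```

The same dichotomy carries over to modeling: linear regression for the numeric outcome, logistic regression for the categorical one.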
Types of Responses
- Researchers must decide on appropriate response options for closed-ended questionnaire items so participants can record their answers accurately and completely.
- Numeric responses need to specify the level of precision such as reporting height to the nearest inch or centimeter.
- The decision should accommodate the likely preference of participants as well.
- Categorical questions should consider all possible responses, determining how many options are needed.
- Ordinal questions should offer response options that capture the full range of possible answers.
- Nominal questions may need an "Other" category with a follow-up question for specification.
- Series of yes/no questions can be used if multiple responses to a single categorical question are possible.
- Researchers should choose how many entries to include on a scale and whether there is a neutral option for ranked questions.
- A Likert scale presents ordered responses to a questionnaire item, typically with 5 to 7 categories; for example, a 5-point scale on which 1 indicates strong disagreement and 5 indicates strong agreement.
- Researchers should decide whether to include "not applicable" or "I do not know" for self-report items.
- Questions essential for determining study eligibility must be answered by all participants.
- Including "I do not know" or "I do not remember" may be needed, because neglecting to list them may preclude respondents from revealing important information.
- Allowing the recording of responses like "no opinion" or "not sure" can increase questionnaire completion rates.
Anonymity
- Researchers must ensure responses do not reveal participant identities to avoid identity disclosure to researchers or others.
- Anonymity, or the inability to discern a participant's identity from their responses, protects participants and encourages honest answers.
- The researcher needs to choose the question type and precision level that is appropriate for the study goal and population.
- To reduce fears about anonymity, the researcher can ask for each participant's current age in years rather than asking for a birthdate.
- Making deliberate decisions about each questionnaire component helps maintain anonymity and protects the privacy and confidentiality of information.
Wording of Questions
- Check each question after drafting the questionnaire for clarity, and to confirm that the responses are also carefully worded.
- It's important that each question asks what it is intended to ask, is clear/neutral, and that the study population can understand the language.
- Sensitive topics must use language acceptable to the source population.
- Response options must be clearly presented and for scaled ones, the rank order must be clear.
Order of Questions
- Questionnaires typically start with easy/general questions, leading to more difficult/sensitive questions.
- Questions should be ordered to flow naturally from one topic to another, grouping similar questions with similar response types consecutively.
- Mixing questions can prevent habituation where participants give the same response because they are accustomed to it.
- Questionnaire developers must consider how previous questions could taint answers to later ones.
- Order questions about impressions by first asking an open-ended question to garner initial impressions, then a series of yes/no questions to clarify beliefs and finish with an open-ended question to allow final impressions.
Layout and Formatting
- You must format the data-collection form so it's organized, easy to read, and easy to record answers on.
- Paper-based and electronic survey pages must be meticulously checked for errors.
- It may be helpful to identify/remove less important questions if the questionnaire seems too long.
- Paper-based surveys need sufficient white space for visual appeal.
- Internet-based/computer-based surveys must have high contrast/readable fonts.
- Researchers need to decide whether to show all questions or sequentially present them.
- Computer-based data entry programs allow this sequence to be implemented.
- Filter or contingency questions determine whether respondents are eligible to answer subsequent questions.
- Computer-based surveys use skip logic codes, while paper-based forms have written instructions for skips.
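The skip-logic idea can be sketched in a few lines; the question IDs, wording, and `show_if` helper below are invented for illustration:

```python
# Minimal sketch of skip logic: each question may carry a filter condition,
# and questions whose condition is not met are hidden from the respondent.

questions = [
    {"id": "smoke_ever", "text": "Have you ever smoked?", "show_if": None},
    {"id": "smoke_years", "text": "For how many years?",
     "show_if": lambda a: a.get("smoke_ever") == "yes"},
    {"id": "age", "text": "What is your current age in years?", "show_if": None},
]

def visible_questions(answers):
    """Return the questions a respondent should see, given answers so far."""
    return [q for q in questions
            if q["show_if"] is None or q["show_if"](answers)]

# A non-smoker never sees the follow-up question:
ids = [q["id"] for q in visible_questions({"smoke_ever": "no"})]
print(ids)  # ['smoke_ever', 'age']
```

The first question here acts as the filter/contingency question; on paper, the same effect requires a written instruction ("If no, skip to question 3").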
- Self-response surveys need cover letters/instructions for clear answer recording.
- For interview-based data collection, the interviewer reads questions and records answers.
Reliability and Validity
- A valid questionnaire measures what it was intended to measure in the population.
- Reliability, or precision, means getting consistent answers to similar questions and the same outcome with repeated assessments.
- Validity means that responses and measurements are correct.
- Internal consistency, a reliability aspect, means survey items measure aspects of same concept.
- Rephrasing a question and asking it again allows researchers to check the stability of participant responses.
- Intercorrelation tests include Cronbach's alpha for ordered responses, Kuder-Richardson Formula 20 (KR-20) for binary variables.
- Both statistics range from 0 to 1; values near 1 indicate minimal random error.
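Cronbach's alpha can be computed directly from an item-score matrix; the Likert responses below are invented for illustration, and applying the same formula to 0/1 items gives essentially KR-20:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Five respondents answering three 5-point Likert items (invented data):
likert = [[4, 5, 4],
          [2, 2, 3],
          [5, 4, 5],
          [3, 3, 3],
          [4, 4, 5]]
alpha = cronbach_alpha(likert)
print(round(alpha, 2))  # 0.91 -- high internal consistency
```

Because the three items rise and fall together across respondents, alpha comes out close to 1, consistent with the items measuring one concept.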
- Test-retest reliability is demonstrated when people who retake an assessment later achieve scores that match their previous ones.
- Interobserver agreement/inter-rater agreement is the degree of concordance among independent raters assessing study participants.
- Cohen's kappa, or the kappa statistic (represented by the Greek letter κ), determines whether two assessors assessing the same study participants agreed more often than expected by chance.
- For example, kappa can indicate whether two radiologists examining the same set of x-ray images reach the same conclusions: κ = 0 if they agree only as often as expected by chance, and κ = 1 if they agree on every image. Strong interobserver agreement corresponds to a kappa close to 1.
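Cohen's kappa is straightforward to compute from two raters' labels; the radiologist ratings below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments on the same items."""
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two radiologists rating ten x-ray images (invented data):
rad1 = ["abnormal", "normal", "normal", "abnormal", "normal",
        "normal", "abnormal", "normal", "normal", "abnormal"]
rad2 = ["abnormal", "normal", "abnormal", "abnormal", "normal",
        "normal", "abnormal", "normal", "normal", "normal"]
print(round(cohens_kappa(rad1, rad2), 2))  # 0.58
```

Here the raters agree on 8 of 10 images, but because most images are "normal", much of that agreement is expected by chance, so kappa is well below the raw 80% agreement.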
- Self-report assessment tools, such as psychometric tests and surveys, are often proxy measures whose accuracy must be carefully evaluated; for example, constructs like happiness and intelligence cannot be measured directly with physical or chemical tests.
- Content validity, or logical validity, is present when subject-matter experts agree that the survey items capture the study domain's information.
- Consideration must be given to the technical quality and representativeness of the survey items.
- Face validity is present when intended users agree that the survey instrument is easy to understand and complete.
- Statistical methods can provide information on the validity of assessment tools and help determine which items might be redundant or unnecessary and can be removed.
- Principal component analysis (PCA) generates index variables, or components, from linear combinations of measured variables, helping determine the optimal number of variables to retain.
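A minimal PCA sketch using NumPy's SVD on invented item responses; the proportion of variance explained by each component suggests how many components are worth retaining:

```python
import numpy as np

# Hypothetical responses of 6 participants to 4 correlated survey items.
X = np.array([[4., 5., 1., 2.],
              [2., 2., 4., 5.],
              [5., 4., 1., 1.],
              [3., 3., 3., 3.],
              [1., 2., 5., 4.],
              [4., 4., 2., 2.]])

# Center the data, then use SVD to obtain the principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()  # proportion of variance per component

print(np.round(explained, 2))
```

If the first component accounts for most of the variance, the items largely measure a single underlying dimension and some may be redundant.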
- Construct validity is present when survey questions measure the theoretical construct the tool is intended to assess; demonstrating it requires explicitly examining how well the instrument represents that construct.
- Empirical tests, such as correlations among various measures, can be used to examine these interrelationships.
- Factor analysis models a latent variable: a construct that cannot be measured directly with a single question but is assumed to have a causal relationship with the observed responses.
- Convergent validity is present when two items that theory says should be related are in fact correlated; discriminant validity is present when two items that should not be related are shown to be uncorrelated.
- Criterion validity, or concrete validity, uses an established test as a standard to confirm the utility of a new test of a theoretical construct.
- For example, a new intelligence test can be validated against standard IQ.
- Concurrent validity is evaluated by having pilot-study participants complete both the existing and new tests and calculating the correlation between the scores; a strong correlation is evidence that the new test is valid.
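The concurrent-validity check reduces to a correlation between the two instruments' scores; the pilot data below are invented for illustration:

```python
import numpy as np

# Invented pilot data: 8 participants' scores on an established test
# and on a new test intended to measure the same construct.
established = np.array([95, 110, 102, 88, 120, 105, 99, 115])
new_test = np.array([48, 55, 52, 44, 60, 53, 50, 57])

# Pearson correlation between the two instruments
r = np.corrcoef(established, new_test)[0, 1]
print(round(r, 2))
```

A correlation near 1 would support the new test's concurrent validity; a weak correlation would suggest the instruments are not measuring the same thing.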
- Predictive validity is appraised by correlating new test results with subsequent measures of performance, such as using a new test to predict later success in school.
Commercial Research Tools
- Including survey questions or modules identical to those used in prior research can improve validity.
- Widely used, validated tests include the Beck Depression Inventory (BDI), the General Health Questionnaire (GHQ), and the Mini-Mental State Examination (MMSE).
- The SF-36 and SF-12 measure health-related quality of life, capturing social well-being and the impact of health on quality of daily life.
- The Mental Measurements Yearbook from the Buros Center for Testing provides reviews of available educational and psychological assessments.
Translation
- Translating a survey instrument into additional languages is required when the source population speaks multiple languages.
- The translated version must express the same meaning as the original survey; accuracy may require rephrasing rather than direct word-for-word translation.
- Back translation, or double translation, can be used to ensure correct meaning: one person translates into the new language, and a second person translates back into the original language.
- Comparing the original/back-translated versions reveals where the second-language translation doesn't match.
- Alternatively, two translators can independently translate the survey instrument from the original into the new language and then come to a consensus version.
Pilot Testing
- A pilot test or pretest, is a small-scale study to evaluate full-scale research project feasibility.
- A pilot test checks aspects of a questionnaire such as wording and clarity, question order, participants' ability and willingness to answer, whether questions elicit the intended responses, and the time required to complete the survey.
- The researcher should ask volunteers from the target population who are not members of the sample population to complete test surveys and provide feedback.
- Feedback is provided individually or in a focus group, and the survey instrument is revised based on it.