Questionnaire Design

Questions and Answers

What is the primary purpose of a questionnaire or survey instrument?

  • To systematically gather data from study participants. (correct)
  • To analyze existing datasets.
  • To perform statistical analyses.
  • To conduct laboratory experiments.

Questionnaire design is most effective when it starts with selecting question types and then determining the content to be covered.

False (B)

In questionnaire design, what is the purpose of the initial set of questions?

  • To introduce the main topics of the study.
  • To gather demographic information.
  • To assess the participants' detailed medical history.
  • To confirm that participants meet the study eligibility criteria. (correct)

What is the term for external factors impacting health, including physical and social aspects?

environment

What is the risk of using a survey that is too long?

It may yield a low response rate. (C)

Statistically analyzing open-ended questions is generally easier than analyzing closed-ended questions.

False (B)

Questions with only two response options are known as ______ questions.

dichotomous

Which of the following is a key consideration when using numeric response options in a questionnaire?

How specifically the answers should be reported. (B)

When designing categorical questions, it is essential to have fewer response options to reduce participant confusion.

False (B)

What type of scale presents ordered responses to a questionnaire item, allowing participants to rank their preferences numerically?

Likert

Why is it important to consider adding an 'I do not know' option in a survey question?

To avoid potential systematic inaccuracies in the data. (C)

Ensuring anonymity is crucial because it can lead to dishonest answers to sensitive questions.

False (B)

[Blank] is the inability of a participant's identity to be discerned from their survey responses.

anonymity

When is it acceptable to ask for a participant's birthdate in a survey?

When the benefits of having the data outweigh the risks to privacy. (C)

It is always better to use complex language in questionnaires to ensure accuracy and precision.

False (B)

What type of error occurs when participants become accustomed to giving a particular response repeatedly?

habituation

What is the primary goal of ordering questions effectively in a questionnaire?

To ensure the questions flow naturally from one topic to another. (A)

It is generally better to place sensitive questions at the beginning of a questionnaire.

False (B)

In questionnaire design, blank spaces between printed content are referred to as ______.

white space

What is the purpose of a filter or contingency question?

To determine if a respondent is eligible to answer subsequent questions. (C)

Skip logic is used in paper-based surveys to automatically hide irrelevant questions based on participants' responses.

False (B)

In computer-based surveys, what term is used for the coding that automatically hides irrelevant questions?

skip logic

What does 'reliability' refer to in the context of a questionnaire?

Whether consistent answers are given to similar questions. (B)

Validity refers to the precision of a survey instrument.

False (B)

[Blank] is present when items in a survey instrument measure various aspects of the same concept.

internal consistency

What is Cronbach's alpha used for?

To assess internal consistency with variables that have ordered responses. (D)

Test-retest reliability is confirmed by multiple independent raters assessing the same participants and achieving a high degree of concordance.

False (B)

What is the name of the statistic that determines whether two assessors agreed more often than expected by chance?

kappa statistic

What is 'content validity' in the context of questionnaire design?

When subject-matter experts agree that a set of survey items captures the most relevant information about the study domain. (B)

'Face validity' is present when a survey instrument measures the theoretical construct that it is intended to assess.

False (B)

Flashcards

Questionnaire or survey instrument

A series of questions used to systematically gather data from study participants.

First step in designing a questionnaire

Listing the topics the survey instrument covers; include eligibility criteria, exposure, disease, and population demographics.

Agent

A pathogen or chemical/physical cause of disease/injury.

Host

A human (or animal) who is susceptible to an infection or another type of disease or injury.

Environment

External factors that either facilitate or inhibit health.

Closed-ended questions

Allow a limited number of possible responses.

Open-ended questions

Allow an unlimited number of possible responses.

For closed-ended questionnaire items

Researchers must decide which types of response options are appropriate. The responses should be ones that participants can record accurately and completely.

Anonymity

The inability of a participant's identity to be discerned from their responses.

Question Order

Easy/general questions first, then difficult/sensitive. Group similar questions with similar response types.

Habituation

Error when participants become accustomed to giving a specific response.

White space

The blank areas between printed content on a page.

Filter/contingency question

Determines whether the respondent is eligible to answer a subsequent question/set of questions.

Reliability

Demonstrated when consistent answers are given to similar questions.

Validity

Established when responses are shown to be correct.

Internal consistency

When items in a survey instrument measure various aspects of the same concept.

Cronbach's alpha

Assesses consistency with ordered response variables.

Kuder-Richardson Formula 20 (KR-20)

Assesses consistency with binary variables.

Test-retest reliability

People get about the same scores each time they take the test.

Interobserver agreement

Concordance among independent raters assessing the same participants.

Kappa statistic

Determines whether two assessors rating the same participants agreed more often than expected by chance.

Content validity

Also called logical validity; present when subject-matter experts agree that a set of survey items captures the most relevant information about the study domain.

Face validity

Is present when content experts and users agree that a survey instrument will be easy for study participants to understand and correctly complete.

Principal Component Analysis (PCA)

Creates one or more index variables from a larger set of measured variables; each component is a weighted average of the contributing variables.

Construct validity

Set of survey questions measures the theoretical construct the tool intends to assess.

Factor analysis

Uses measured variables to model a latent variable, a construct that cannot be measured directly with one question but appears to have a causal relationship with a set of measured variables.

Convergent validity

Present when two items that the underlying theory says should be related are shown to be correlated.

Criterion validity

Also called concrete validity; uses an established test or outcome as a standard (criterion) for confirming the utility of a new test.

Concurrent validity

Evaluated when pilot participants complete both the existing and new tests and the correlation between the results is calculated.

Predictive validity

Appraised when the new test is correlated with subsequent measures of performance in related domains.

Study Notes

Questionnaire Development

  • A questionnaire, also known as a survey instrument, systematically gathers data from study participants using a series of questions.

Questionnaire Design Overview

  • A well-designed questionnaire is carefully crafted for a specific research purpose, starting with identifying the content to be covered and choosing appropriate question and response types.
  • While new data collection instruments are often necessary, validated question banks can be used for some topics.
  • Once drafted, the questionnaire's wording and response items get checked, and sections/questions get logically ordered.
  • Visually appealing and readable formatting of questionnaires or data entry forms is required.
  • The questionnaire needs to be pretested/revised for content and ease of use before data collection.
  • Questionnaire Design Plan: Identify question categories, select specific topics, choose question/answer types, check wording, choose order, format layout, pretest, revise, and then use.

Questionnaire Content

  • Designing a questionnaire starts with listing the topics the survey instrument must cover.
  • The initial questions confirm participants' eligibility for the study.
  • Remaining questions should cover exposure, disease, and demographic areas relevant to the study question.
  • Case-control studies require questions verifying cases meet the case definition and controls meet the control definition.
  • Prospective cohort studies need questions about both exposure/disease status so participants can be classified.
  • Research/literature can broaden topics in the questionnaire.
  • Example sections for a breast cancer risk factors study include sociodemographics, family health history, personal health history, reproductive history, and lifestyle factors.
  • Systems thinking identifies underlying causes of complex problems and can suggest questions about factors that influence the relationships between exposures and outcomes.
  • For example, a study on smoking/liver disease should consider alcohol use as a confounder by asking about both tobacco/alcohol use.
  • Theoretical frameworks inform relevant questions, such as the infectious disease epidemiologic triad of agent, host, and environment.
  • Agent: a pathogen or physical/chemical cause of disease like contagious, drug-resistant infectious agents.
  • Host: a human or animal susceptible to an infection, described by factors influencing vulnerability like age, genetics, or behaviors.
  • Environment: external factors facilitating/inhibiting health, such as physical characteristics or the social/political context.
  • Researchers gather data about agent, host, and environment components during an epidemic, but some agent characteristics require laboratory testing.
  • Survey length needs to be manageable, and a survey that is too long risks low response rates, but being too short may miss crucial data.

Types of Questions

  • Determining broad categories/topics is followed by deciding the most appropriate types of questions.
  • Each survey item should be assigned a question type, such as a date, numeric, categorical, or yes/no question; the question types determine which statistical tests can be used in data analysis.
  • Numeric data uses t-tests, ANOVA, and linear regression; categorical data uses chi-square/logistic regression.
  • Consulting a statistician early can ensure a strong/valid data analysis plan.
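
The mapping above from question type to statistical test can be sketched in code. This illustration uses made-up data and assumes SciPy is available; it is not a full analysis plan, just a demonstration of a t-test for a numeric response and a chi-square test for a categorical one:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Numeric responses (e.g., height in cm) compared across two groups -> t-test
heights_a = rng.normal(170, 8, 40)
heights_b = rng.normal(168, 8, 40)
t_stat, t_p = stats.ttest_ind(heights_a, heights_b)

# Categorical responses (e.g., smoker yes/no by group) -> chi-square test
table = np.array([[30, 10],   # group A: yes / no
                  [22, 18]])  # group B: yes / no
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t-test p = {t_p:.3f}, chi-square p = {chi_p:.3f}")
```

A statistician would still need to confirm that the chosen tests match the study design and sample size.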

Types of Responses

  • Researchers must decide appropriate response options for closed-ended questionnaire items, choosing options that participants can record accurately and completely.
  • Numeric responses need to specify the level of precision such as reporting height to the nearest inch or centimeter.
  • The decision should accommodate the likely preference of participants as well.
  • Categorical questions should consider all possible responses, determining how many options are needed.
  • Ordinal questions should offer response options that capture the full range of possible answers.
  • Nominal questions may need an "Other" category with a follow-up question for specification.
  • Series of yes/no questions can be used if multiple responses to a single categorical question are possible.
  • Researchers should choose how many entries to include on a scale and whether there is a neutral option for ranked questions.
  • A Likert scale presents ordered responses to a questionnaire item and is typically marked by 5 to 7 categories, such as a scale on which 1 indicates strong disagreement and 5 indicates strong agreement.
  • Researchers should decide whether to include "not applicable" or "I do not know" for self-report items.
  • Questions essential for determining study eligibility must be answered by all participants.
  • Including "I do not know" or "I do not remember" may be needed, because neglecting to list them may preclude respondents from revealing important information.
  • Allowing the recording of responses like "no opinion" or "not sure" can increase questionnaire completion rates.
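
The Likert and "I do not know" considerations above can be sketched in code. The coding scheme below is hypothetical; it maps a 5-point scale to numbers and treats "I do not know" as missing rather than forcing a numeric value, which avoids the systematic inaccuracies such forcing can introduce:

```python
import numpy as np

# Hypothetical coding scheme for a 5-point Likert item
LIKERT = {
    "strongly disagree": 1, "disagree": 2, "neutral": 3,
    "agree": 4, "strongly agree": 5,
}

def code_responses(raw):
    # Map text responses to 1-5; anything else ("I do not know",
    # "not applicable") becomes NaN, i.e., missing, not a forced guess
    return np.array([LIKERT.get(r.lower(), np.nan) for r in raw], dtype=float)

raw = ["Agree", "I do not know", "Strongly agree", "Neutral"]
coded = code_responses(raw)
print(np.nanmean(coded))  # mean over answered items only
```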

Anonymity

  • Researchers must ensure responses do not reveal participant identities, whether to the researchers themselves or to others.
  • Anonymity, or the inability to discern a participant's identity from their responses, protects participants and encourages honest answers.
  • The researcher needs to choose the question type and precision level that is appropriate for the study goal and population.
  • To reduce fears about anonymity, the researcher can ask for each participant's current age in years rather than asking for a birthdate.
  • Making careful decisions about each questionnaire component maintains anonymity and protects the privacy and confidentiality of participant information.

Wording of Questions

  • After drafting the questionnaire, check each question for clarity and confirm that the responses are also carefully worded.
  • It's important that each question asks what it is intended to ask, is clear/neutral, and that the study population can understand the language.
  • Sensitive topics must use language acceptable to the source population.
  • Response options must be clearly presented and for scaled ones, the rank order must be clear.

Order of Questions

  • Questionnaires typically start with easy/general questions, leading to more difficult/sensitive questions.
  • Questions should be ordered to flow naturally from one topic to another, grouping similar questions with similar response types consecutively.
  • Mixing questions can prevent habituation where participants give the same response because they are accustomed to it.
  • Questionnaire developers must consider how previous questions could taint answers to later ones.
  • Order questions about impressions by first asking an open-ended question to garner initial impressions, then a series of yes/no questions to clarify beliefs and finish with an open-ended question to allow final impressions.

Layout and Formatting

  • The data-collection form must be formatted so it is organized, easy to read, and easy to record answers on.
  • Paper-based and electronic survey pages must be meticulously checked for errors.
  • It may be helpful to identify/remove less important questions if the questionnaire seems too long.
  • Paper-based surveys need sufficient white space for visual appeal.
  • Internet-based/computer-based surveys must have high contrast/readable fonts.
  • Researchers need to decide whether to show all questions or sequentially present them.
  • Computer-based data entry programs allow this sequence to be implemented.
  • Filter or contingency questions determine whether respondents are eligible to answer subsequent questions.
  • Computer-based surveys use skip logic codes, while paper-based forms have written instructions for skips.
  • Self-response surveys need cover letters/instructions for clear answer recording.
  • For interview-based data collection, the interviewer reads questions and records answers.
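
A filter question with skip logic can be sketched as follows. The question IDs and flow here are hypothetical; real computer-based survey platforms implement the same idea with their own configuration syntax:

```python
# Minimal skip-logic sketch: a filter question determines whether the
# respondent sees the follow-up questions or jumps past them.
def next_question(current_id, answer):
    # Filter question: only current smokers see the smoking follow-ups
    if current_id == "Q5_smoker" and answer == "no":
        return "Q8_alcohol"  # skip Q6-Q7, which apply only to smokers
    order = ["Q5_smoker", "Q6_packs_per_day", "Q7_years_smoked", "Q8_alcohol"]
    i = order.index(current_id)
    return order[i + 1] if i + 1 < len(order) else None

print(next_question("Q5_smoker", "no"))   # skips ahead to "Q8_alcohol"
print(next_question("Q5_smoker", "yes"))  # proceeds to "Q6_packs_per_day"
```

On a paper form, the same skip would instead be a written instruction next to the filter question ("If no, go to question 8").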

Reliability and Validity

  • A valid questionnaire measures what it was intended to measure in the population.
  • Reliability, or precision, means getting consistent answers to similar questions and the same outcome with repeated assessments.
  • Validity is established when responses or measurements are shown to be correct.
  • Internal consistency, an aspect of reliability, means that survey items measure aspects of the same concept.
  • Rephrasing questions at different points in the survey allows the stability of participant responses to be checked.
  • Intercorrelation tests include Cronbach's alpha for ordered responses and the Kuder-Richardson Formula 20 (KR-20) for binary variables.
  • Both statistics range from 0 to 1; values near 1 indicate minimal random error.
  • Test-retest reliability is demonstrated from people who retake the assessment later, and it matches their previous scores.
  • Interobserver agreement/inter-rater agreement is the degree of concordance among independent raters assessing study participants.
  • Cohen's kappa, or the kappa statistic (represented by the Greek letter κ), determines whether two assessors rating the same study participants agreed more often than expected by chance.
    • Its value can indicate whether two radiologists examining the same set of x-ray images reach the same conclusions: κ = 0 if they agree only as often as expected by chance.
    • κ = 1 if they agree on 100% of the images; a reliable assessment will have a kappa value close to 1.
  • The accuracy of self-report assessment tools must be evaluated; psychometric tests and surveys are often considered proxy measures.
    • For example, constructs such as happiness and intelligence cannot be measured directly with physical or chemical tests, so survey instruments are designed to measure them.
  • Content validity, or logical validity, is present when subject-matter experts agree that the survey items capture the most relevant information about the study domain.
  • Consideration must be given to the technical quality and representativeness of the survey items.
  • Face validity is present when content experts and users agree that the survey instrument is easy to understand and complete.
  • Statistical methods can provide information about the validity of assessment tools and identify items that may be redundant or unnecessary and can be removed.
  • Principal component analysis (PCA) generates index variables, or components, as linear combinations of the measured variables, helping identify the optimal number of variables to retain.
  • Construct validity is present when a set of survey questions measures the theoretical construct the tool is intended to assess; it requires an explicit examination of how well the instrument represents that construct.
  • Empirical tests, such as measures of correlation among variables, can be used to examine these interrelationships.
  • Factor analysis uses measured variables to model a latent variable, a construct that cannot be measured directly with one question but appears to have a causal relationship with the set of measured variables.
  • Convergent validity is present when two items that the underlying theory says should be related are shown to be correlated; discriminant validity is present when two items that should not be related are shown to be uncorrelated.
  • Criterion validity, or concrete validity, uses an established test or outcome as the standard (criterion) for confirming the utility of a new test.
  • For example, a new intelligence test can be validated against an established IQ test.
  • Concurrent validity is evaluated when pilot participants complete both the existing and the new test and the correlation between the results is calculated; a strong correlation is evidence that the new test is valid.
  • Predictive validity is appraised when the new test is correlated with subsequent measures of performance in related domains, such as using a new test to predict later success in school.
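
The internal-consistency and agreement statistics above can be computed directly. This sketch implements Cronbach's alpha and Cohen's kappa from their standard formulas using made-up data; verify against an established statistics package before relying on it:

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_participants, k_items) matrix of ordered responses
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

def cohen_kappa(rater_a, rater_b):
    # Agreement beyond chance between two raters assigning categorical labels
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_o = np.mean(a == b)                           # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c)     # chance agreement
              for c in np.union1d(a, b))
    return (p_o - p_e) / (1 - p_e)

# Five participants answering three related Likert items
likert = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2]]
print(f"alpha = {cronbach_alpha(likert):.2f}")

# Two radiologists rating the same ten x-ray images (1 = abnormal)
r1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
r2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"kappa = {cohen_kappa(r1, r2):.2f}")
```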
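
PCA's role in building an index variable can likewise be illustrated with simulated data. Here four items driven by a single latent trait are standardized and reduced to one weighted-average component (NumPy only; the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated responses: 100 participants, 4 items driven by one latent trait
latent = rng.normal(size=100)
items = np.column_stack([latent + rng.normal(scale=0.5, size=100)
                         for _ in range(4)])

z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)  # standardize
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
weights = eigvecs[:, -1]                 # first principal component loadings
index = z @ weights                      # one index score per participant

explained = eigvals[-1] / eigvals.sum()
print(f"first component explains {explained:.0%} of variance")
```

Because the items share one underlying trait, the first component captures most of the variance, which is exactly the situation in which a single PCA-derived index is a reasonable summary.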

Commercial Research Tools

  • Including survey questions or modules identical to those used in prior research improves validity.
  • Widely used, validated tests include the Beck Depression Inventory (BDI), the General Health Questionnaire (GHQ), and the Mini-Mental State Examination (MMSE).
  • The SF-36 and SF-12 measure health-related quality of life, capturing social well-being and the impact of health on daily life.
  • The Mental Measurements Yearbook from the Buros Center for Testing provides reviews of available educational and psychological assessments.

Translation

  • Translating a survey instrument into additional languages is required when the source population speaks multiple languages.
  • The translated questions must express the same meaning as the original survey, and accuracy may require rephrasing rather than direct word-for-word translation.
  • Back translation, or double translation, can be used to confirm correct meaning: one person translates the instrument into the new language, and a second person translates it back into the original language.
  • Comparing the original and back-translated versions reveals where the second-language translation does not match.
  • Alternatively, two translators can independently translate the survey instrument from the original into the new language and then come to consensus.

Pilot Testing

  • A pilot test, or pretest, is a small-scale study conducted to evaluate the feasibility of a full-scale research project.
  • A pilot test checks aspects of the questionnaire such as wording and clarity, question order, participants' ability and willingness to answer, whether questions elicit the intended responses, and the time required to complete the survey.
  • The researcher should ask volunteers from the target population who are not members of the sample population to complete test surveys and provide feedback.
  • Feedback can be provided individually or as a focus group, and the survey instrument is revised based on it.
