NUR2 612 All Notes PDF
Summary
This document is a set of lecture notes for a nursing course (NUR2 612) focusing on evidence-based practice in nursing. It covers various topics, including the importance of research in informing EBP, different research methodologies, and how to translate research into practice. The document also includes a combined evidence pyramid, outlining the levels of evidence and their importance.
Full Transcript
**Lecture 1: Introduction** - Evidence-based practice (EBP) in nursing - A problem-solving approach in which clinicians make a **clinical decision** based on research evidence, patient/family preferences, and clinical expertise - Provides clinicians with the tools and processes to translate **research generated knowledge** into practice - best research evidence, clinical expertise, patient preferences and values - **Adopting an EBP:** - Improves quality of care - Improves patient outcomes - Reduces costs of care - Reduces geographic variations in delivery of care (if done well in one setting, it can be upscaled to a larger geographical scale and done well) - EBP Models - **5-step evidence-based MEDICINE** - 1- ask a clinical question - 2-acquire best evidence - 3-appraise the evidence - 4-apply the evidence - 5- assess the performance - **7 Steps evidence-based practice in NURSING:** - 0- cultivate a spirit of inquiry - 1- ask a clinical question in PICOT format - 2- search for the most relevant and best evidence - 3- critically appraise the evidence you gather - 4- integrate all evidence with your clinical expertise and patient preferences and values - 5- evaluate the outcomes of practice decisions or changes using evidence - 6- share the outcomes of EBP changes with others - important bc apart of our roles and responsibilities of nurses - this continuum links research to practice - the RM course is based on these 7 steps - focus on step 1-3 for this RM course - 612 learning outcomes - explain the role of research in informing EBP in nursing - describe paradigms in which research methodologies are situated - put into practice effective literature search skills - explain the essential components and critically appraise research using quantitative, qualitative, mixed methods, and knowledge synthesis methodologies - summarize key strategies to translate research evidence into nursing practice and policy, to evaluate the outcomes of practice change and disseminate results - Combined evidence pyramid - Research that is higher up in the pyramid can change practices in nursing (this has more value, less volume) - studies, at the bottom, more volumes, less value - 1-studies: have a hierarchy as well, check image - 2-synopses: EB abstracts of research studies - 3-syntheses: reviews, systematic, meta-analysis. group various research together and assess the strength and provide recommendation for best practice (always check if these are available when starting to research something) - 4-synopses of syntheses: analyze the syntheses and do summaries of them, tend to inform health care decision making - 5-summaries: practice guidelines, based on all the lower levels of the pyramid - 6-systems: computerized decision support, tech integrated into decision making, taking all other levels in the pyramid and compute decisions, AI included in this - Syntheses and studies will be the main focus of the course (esp. 
the appraisal of individual studies) ![](media/image2.png)

| Level of Evidence | Description | Examples |
|---|---|---|
| Systems (*secondary: has been critically appraised*) | Clinical information systems; computerized decision support systems | Integrated with patient records; not readily available |
| Summaries (*secondary: has been critically appraised*) | Point-of-care tools; regularly updated clinical guidelines that integrate EB info about specific clinical problems | Best practice guidelines (Registered Nurses' Association of Ontario); UpToDate (foreground questions); DynaMed Plus |
| Synopses of Syntheses (*secondary: has been critically appraised*) | Summarize information found in systematic reviews | BMJ journals; OrthoEvidence |
| Syntheses (*secondary: has been critically appraised*) | Systematic reviews; scoping reviews; methodology section present | Cochrane Library; TRIP; CINAHL, Medline, Scopus, Google Scholar, PsycINFO |
| Synopses of Studies (*secondary: has been critically appraised*) | Summarize evidence from high-quality studies; methodology section present | Evidence-based abstract journals; Evidence-Based Nursing; Cancer Treatment Reviews |
| Studies (*primary: original*) | | |
| Expert opinion (not a study) | | |
| Narrative reviews (not a study) | | |
| Background resources | | |

- How much evidence is needed to change a practice? - **Level of evidence** + **quality of evidence** = **Strength of evidence** - Step 0: cultivating curiosity, \[insert quote\] - What fosters/impedes curiosity? - fosters: confidence in your skills and knowledge, and a commitment to life-long learning - fosters: being comfortable with not knowing everything, recognizing that research and best practice are always changing, and having curiosity - impedes: burnout, being overworked, understaffing, relationships that are not respectful/open, lacking the time and place to integrate this into our practice - How can I bring curiosity into my practice? - asking questions - providing a safe space for others - lunch and learns (in-services)/sharing of research, nurse educators, building it into the culture and system of health care - Research - What is research? - "Research is an undertaking intended to extend knowledge through a disciplined inquiry and/or systematic investigation" - Research is: collecting, reviewing - inquiry (asking Qs and finding answers) - outcome (having a goal) - evaluation - What is nursing research?
- "reseearch that provides evidence to develop knowledge about health and the promotion of ghealth over the full lifespan, carre of persons with health problwms abd disbilities.... - practice oriented - patient oriented - **Research consumer/ producer continuum** - consumers: nurses who read and appraise research studies - producersL nurses who design and undertake research studies - research activities nurses commonly participate in include: - contribute ideas towards studies - collect data for a study - advise pt about participating in a study -.... - Types of research questions - Therapy/intervention questions - how well something works - ex: Among current smokers, what is the effect of a newly developed virtual intervention (compared to usual care) on smoking cessation at 2-year follow-up? - among individuals with COPD, what is the effect of O2 treatment as an intervention (compared to no O2 treatment) on COPD outcomes at a 3- month follow-up? - Diagnosis/assessment questions - what the problem is - ex: Among adults aged over 50 y, does blood test xyzyield a more accurate diagnosis than chest x-rays about lung cancer? - ex: what is the best method to diagnose colon cancer? - Prognosis questions - morbidity and mortality - ex: What are the 10-year outcomes (e.g. metastasis, mortality, etc) of stage A colon cancer? - what is the 5-year outcome of COPD pts after being diagnosed? - Etiology (prevention) questions - ex: Among adults aged over 50 y, is coffee consumption a risk factor for development of lung cancer? - Among adults aged over 50 y, is physical activity level a protective factor against the development of lung cancer? - Among children aged 10-14, is 8-9 hour of sleep a protective factor against the development of mental illness? - Description questions - i.e. scale of a certain disease or problem, could be more philosophical, background questions, demographics - describing a disease amongst a population - ex: What is the prevalence of lung cancer among adults living in Canada? - Meaning questions - Qs that dig deep, not just a scale, look how it affects pts, qualitative, get the experiences of the pt, rich details - ex: what is it like for women to experience a new diagnosis of lung cancer? - ex: What Is the process by which men cope with a recent diagnosis of lung cancer? - How is QoL impacted by a recent diagnosis of cancer in the children population? - **Ways of knowing** - **Paradigms (that guide the quest of knowledge):** A world view, a perspective on the complexities of the real world, with certain assumption about reality (includes assumption, what is real and what is not real) - **Ontology:** What is the real world and what can be known from what is real? - **Epistemology:** What is the relationship between the inquirer (researcher) and what can be known (the thing that is being researched)? - **Methodology:** How do you go about discovering what there is to be known (what methods we are going to use to answer the questions)? 
- **Research Paradigms** - **Positivism (quantitative research)** - 19^th^ century philosophers: Locke, Newton - There is a reality out there that can be studied and known - This **reality exists independently of human observations**, hence the importance of being **objective** as a researcher - Focused on understanding underlying causes - **Post-positivism (quantitative research)** - Although objectivity is the goal, it is recognised that **being totally objective is impossible** - Probabilistic approach to causality through **hypothesis testing (instead of assuming)** - **Constructivism (qualitative research)** - A countermovement to positivism (Weber, Kant) - Postmodernism era of deconstruction and reconstruction - **Reality exists within a context**, **many constructions of reality exist (not a universal or ultimate truth)** - The **ultimate truth does not exist** - Knowledge is generated through the interaction between the researcher and those 'being researched' (the subject) - **Pragmatism** - Advocate for multiple paradigms - Both deduction and induction are important - Favor a **practical approach**: whatever works best to arrive at good evidence - It is the research question that should drive the inquiry and methods used - **Mixed methods research** A table with text on it Description automatically generated - **Inductive reasoning** = Hypothesis **generating (developing theories)** - A type of logic in which generalizations are based on a large number of specific observations. - Specific 🡪 general - Observations 🡪 Analysis 🡪 Theory - **Deductive reasoning** = Hypothesis testing (**testing theories)** - Reasoning in which a conclusion is reached by stating a general principle and then applying that principle to a specific case - General 🡪 Specific - Idea 🡪 Observations 🡪 Conclusion - **Inductive vs Deductive Reasoning** - induction: - **Observation:** observation of anxiety behaviors in hospitalized children -\> **analysis:** anxiety behaviors in children are symptomatic of separation anxiety-\> **theory:** theory on separation anxiety **= hypothesis generating** - deduction: - idea -\> Observation -\> conclusion - **Participatory research** - Actively involving **stakeholders** (citizens, patients, decision makers, experts, etc.) in a collaborative decision-making on all aspects related to the research being undertaken - Planning, implementation and evaluation, etc. - Encourages openness and equity in the **sharing of knowledge, experiences, expertise and ideas** (let them say what their needs are, what do you think are the strengths, what do you think is lacking, etc..) - As a nurse/nursing student, where do you situate yourself amongst these paradigms? - Are you drawn to one approach more than another? - pragmatism and constructivism **Lecture 2: NUR2 612 Finding Research Evidence Workshop** **Workshop Objectives** 1.Understand the difference between background and foreground questions 2.Formulate searchable foreground questions 3.Understand when to search CINAHL and MEDLINE databases 4.Become comfortable with searching in both CINAHL and MEDLINE using keywords and subject heading **Workshop Outline** 1.Review background and foreground questions 2.Review submitted foreground questions 3.Compare CINAHL and MEDLINE databases 4.Demo a foreground question search in CINAHL and MEDLINE **7 Step Evidence Based Practice (EBP)** ![](media/image4.png) **Recap: Different Types of Questions** Clinical Questions (2 types) 1. **BACKGROUND QUESTIONS** 2. 
**FOREGROUND QUESTIONS** Not all questions will fit perfectly into one category, and that's okay! Be comfortable with moving between resources. If one doesn't work in answering your question, try another.

| **CINAHL** | **MEDLINE via Ovid** |
|---|---|
| Main nursing & allied health database | Main biomedical database |
| Uses CINAHL Subject Headings (aka controlled vocabulary) **\*more relevant to nursing!\*** (subject headings: a list of terms created by the database to group broad topics into narrower ones, sort of like a hashtag on social media; the main topics of an article, used to group articles on the same topic) | Uses MeSH (Medical Subject Headings, aka controlled vocabulary) |
| Keyword searching → manually select Title and Abstract (cannot search the full text) | Keyword searching → add .ti (title), .ab (abstract), .kf (author-provided keywords) to your keyword searches |
| Allows for truncation, Boolean, nesting, adjacency | Allows for truncation, Boolean, nesting, adjacency |

\*\* While there is overlap in the two databases, they both contain unique content as well, which is why it's good to search in both when conducting research\*\* **EX: CINAHL** - *Prevalence of cognitive impairment and its relation to mental health in Danish lymphoma survivors* - *SEARCH FOR BOTH KEYWORDS AND SUBJECT HEADINGS* **Article record in CINAHL:** includes title, author, affiliations, language of the article, major subjects, minor subjects, abstract (this is all we can search, not the whole article) **Subject headings:** Hodgkin's disease--Complications, executive function, Hodgkin's disease--psychosocial factors, mental health**--standards (the dash marks the subheading, so only mental health standards)** - if you just search a word in CINAHL and press search, it will give you options for **subject headings; this will broaden your search** - **explode feature**: you want to capture everything beneath the term you selected, i.e. not just neoplasms but everything that fits under it, therefore you get more results - **major concept feature** - **or select each subheading manually** - **subheadings** are what's on the left in blue; these narrow the search, THEREFORE we usually do not select subheadings - ![](media/image6.png) Keywords: title and abstract, that's where the keywords will be ![](media/image8.png) - search the term in the title **(TI Title)** ***OR*** abstract **(AB Abstract)** - for the same concept, when you like both result sets, select them and combine the searches with OR, S1 OR S2; this creates another search line that includes both, i.e. S3 - search key concepts separately, then combine the resulting search lines with AND, i.e. S3 AND S4
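Combining numbered search lines works like set logic over the records each line retrieves: OR is a union (it broadens), AND is an intersection (it narrows). A minimal Python sketch with made-up record IDs and search-line names, purely illustrative rather than actual CINAHL output:

```python
# Hypothetical sets of record IDs returned by each numbered search line.
# S1/S2: two ways of expressing the first concept (e.g., "neoplasms", "cancer")
# S4: the second concept (e.g., "mindfulness-based stress reduction")
s1 = {101, 102, 103, 104}
s2 = {103, 104, 105}
s4 = {104, 105, 106, 200}

# OR combines synonyms for the SAME concept: union broadens the search (S3 = S1 OR S2)
s3 = s1 | s2          # {101, 102, 103, 104, 105}

# AND combines DIFFERENT concepts: intersection keeps only records found by both (S3 AND S4)
final = s3 & s4       # {104, 105}

print(f"S3 = S1 OR S2 -> {len(s3)} records")
print(f"S3 AND S4     -> {len(final)} records")
```

The same logic applies whether a line holds subject-heading results or keyword results: synonyms for one concept are OR'd together first, and the concepts are then AND'd.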
**NOTE: Google Scholar is very biased; it will base results on anything you have ever looked up before, even just on Google** Ex: MEDLINE Keywords: title, abstract and KEYWORD heading (the authors add them in too) **For CINAHL and MEDLINE, always search both subject headings and keywords; do not use subheadings** **Library Nursing GUIDE** [https://libraryguides.mcgill.ca/nursing](https://libraryguides.mcgill.ca/nursing) - includes FAQ - the modules - databases - levels of evidence, etc. **Scenario** You are a nurse working on the oncology ward and have noticed that many, but not all, of your patients are experiencing depression, with or without anxiety. Some patients seem to be more engaged in their care than others. This makes you wonder what the possible risk factors for depression or anxiety in cancer patients are. You also wonder what effect this might have on patients. Are risks or outcomes different for patients with a good support system? As well, you have heard from colleagues that Mindfulness-Based Stress Reduction (MBSR) can be helpful in reducing stress and/or anxiety. You would like to explore whether MBSR is effective in reducing depression. You would like to know what nursing interventions are effective in improving psychological well-being for these patients and at what point in the patient trajectory these work best. 1. What are effective nursing interventions that improve psychological well-being of in-patient cancer patients? 2. When are nursing interventions that improve psychological well-being most effective with in-patient cancer patients? 3. How does engagement of care with in-patient cancer patients affect the risk of developing depression or anxiety? **Do not include outcomes (like "improve") in your search; they can be in your foreground question but not in the search, since we only have access to the title and abstract, not the outcome; mention the population and (don't know what she said)** **Your great foreground questions!** - In cancer patients undergoing treatment, what are the potential risk factors for developing depression or anxiety? - What nursing interventions are effective to improve psychological well-being for oncology patients? - What diagnostic or screening tools are most effective in detecting depression in oncology patients? - Does the presence of a support system improve psychological well-being in hospitalized patients on an oncology ward? - Is Mindfulness-Based Stress Reduction (MBSR) effective in reducing depression in cancer patients? **Is mindfulness-based stress reduction (MBSR) effective in reducing depression in cancer patients?** 1. Choose the database you based your search worksheet on (CINAHL or MEDLINE) 2. Find a partner who used the same database 3. Complete the search worksheet for the above question 4.
Use your completed worksheet to search in CINAHL or MEDLINE **\*\* Use subject headings, keywords, truncation, Boolean operators\*\*** **TO FIND THE SEARCH WORKSHEET GO TO [[LIBRARYGUIDES.MCGILL.CA/NURSING]](http://libraryguides.mcgill.ca/NURSING)** ![](media/image10.png) **covidence: to then put all research results from different databases and then get rid of duplicates (endnote can also be used, more confusing though)** **Lecture 3: Quantitative Research 1** - **EBP model** - Step 3: Appraising the Evidence - If there is no evidence or insufficient evidence: Conduct research - **Research process** - 1\. Conceptual Phase - 1.Formulate a problem - 2.Review the literature - 3.Identify the gaps - 4.Formulate a research question (and hypothesis) - 5.Identify a theoretical framework - 6.(Develop the intervention protocol) - Design Phase - 7\. Select a research design - 8\. Identify a population - 9\. Design a sampling plan - 10\. Specify methods to operationalise research variables - Empirical Phase - 11\. Collect the data - 12\. Prepare the data for analysis - Analytic Phase - 13\. Analyse the data - 14\. Interpret the results - Dissemination Phase - 15\. Communicate the findings to different audiences16. If warranted, change/adapt practice - **PICOT** - Exposure (Independent variable) and outcome (dependent variable) - Population (N) - Intervention: Independent variable, exposure - Comparison group - Outcome: Dependent variable - Time - Examples: - Among caregivers of people with dementia, what is the effect of participating in an in-person caregiver intervention compared to an online caregiver intervention on quality of life 6 months after enrollment? - exposure is the intervention in this case (IV) - Among cancer patients, what is the effect of a MBSR intervention on depression 1 year after intervention completion? - Among advanced nursing students, what is the process by which students integrate evidence in their practice? - Cote des neige smoke exposure vs plateau smoke exposure, which group develops lung cancer? (when intervention is not the exposure) - **Association** - Exposure: characteristic that may explain or predict the presence of a study outcome. - Outcome: characteristic that is being predicted - **Research questions/objectives** - Can be worded as a question or as a statement - What is the relationship between the functional dependence level of renal transplant recipients and their rate of recovery? - The objective of this study is to assess the relationship between the functional dependence level of renal transplant recipients and their rate of recovery - Types of question - Therapy/intervention questions - Diagnosis/assessment questions - Prognosis questions - Etiology (prevention) questions - Description question - Problem statement -\> Research question - A problem statement usually **precedes** the research question - A statement articulates the research problem and indicates the need for a study **(problem statement)** - The research question is the specific queries the research wants to answer in addressing the research problem (research question) - **Research hypotheses** - Must suggest a **predicted relationship** between the independent variable and the dependent variable - Must contain terms that indicate a **relationship** (e.g., more than, different from, associated with) - Can be **simple** (i.e. predicted relation between one independent variable and one dependent variable) or complex (i.e. 
predicted relationship between two or more independent variables and/or two or more dependent variables; multiple outcomes, multiple exposures) - Hypotheses can be **directional** (predicts the direction of a relation) or **non-directional** (predicts the existence of a relation but not its direction) - In statistics we talk about **null** (H~0~) vs **alternate** hypotheses (H~1~) - null = there is no difference between the groups we are trying to assess - alternative hypothesis = there is a difference - **Jiang et al study** - Identify the problem statement in the article (not always the same) - Poorly controlled diabetes increases the risk of kidney disease, small fibre neuropathy, diabetic retinopathy and macrovascular disease (Tan et al., 2015). Nevertheless, studies have found that reaching HbA1c targets remains a challenge, with an average of only 43% of people with diabetes worldwide achieving it (Jalving et al., 2018). Locally, Tan et al. (2015) reported that more than half of Singaporeans with diabetes have HbA1c values greater than 8%, - The existing diabetes service is labour intensive as nurses are required to deliver education, follow-up telephone calls to trace blood sugar monitoring and provide therapeutic consultations and necessary referrals. The outbreak of the COVID-19 pandemic has added further strain on the overworked professionals. NSSMP provides an alternative programme that is just as effective, to reduce nurses' workload by delegating them back to the individuals through self-management strategies. - Worldwide, the number of people diagnosed with type 2 diabetes is rising rapidly in both developing and developed countries. The World Health Organization (WHO) has predicted that diabetes will be the seventh leading cause of death by 2030 (WHO, 2017). With the increasing prevalence of diabetes and the serious complications associated with it, it places a huge economic burden on societies around the world (Islam et al., 2017). In - What is the research question and which PICOT elements are provided? - Among people with poorly controlled type 2 diabetes, how does the effectiveness of NSSP and NDS compare with regard to health-related QoL, HbA1c, acute complications and unplanned medical consultations? - P = individuals with poorly controlled type 2 diabetes - I = NSSP - O = effectiveness of NSSP compared to NDS - C = individuals with poorly controlled type 2 diabetes on NDS? - T = 0, 3 months, 6 months (when the outcomes will be measured) - What type of research question is it? - Therapy/intervention question - What is the hypothesis? - null hypothesis = no change - alternative hypothesis = significant change - we hypothesize that NSSP will be more effective compared to NDS - Research gap: - there's evidence on these apps in Western countries, but not in Singapore, thus it could be different (should be found in the background) - **Descriptive vs.
analytic studies** - **Descriptive studies** - Primary purpose is the accurate portrayal of **people's characteristics** or circumstances - **Who**, **where**, and **when** (person, place, time) - Types: - Case study/series - Cross-sectional or survey (sometimes called prevalence study) - Longitudinal descriptive - **Analytic studies** - Test hypotheses about independent (exposure) and dependent (outcome) variable relations - Relationship between IV and DV - **Why** and **how** questions - Types: - Experimental study designs - RCT - Factorial design - Crossover - Quasi-experimental study designs - Non-equivalent control group - Pre-Post intervention design - Interrupted time series - Observational study designs - Cohort (prospective and retrospective) - Case-control - Cross-sectional or survey (sometimes called correlational study) - Prevalence vs. Incidence - **Prevalence** - The proportion of a population with a given condition at a given point in time - Prevalence tells you about the 'burden' of the condition - how many people at a given point have X illness - The number of existing cases / population count - Prevalence increases as incident cases are added / Prevalence decreases as cases die or recover - **Incidence** - The rate of new cases with a given condition over a specified period of **time** - The number of new cases over a given time period / population count of condition free individuals (at risk of becoming a new case) - **Time** component - Ex: turn a tap in a sink - **Incidence and prevalence of HIV in the U.S. from 1986 to 2003** - Incidence - why did the Incidence decrease = death, health promotion - Assessment of disease etiology, identification of risk factors - Prevalence - Reflects disease burden, can be used for planning of health care resources - why did Prevalence increase during this time = people can now survive with HIV and not developing AIDS (antivirals discovery), prevalence cannot decrease bc it this is an incurable disease, if curable than it could decrease - **Overview of quantitative designs** - Descriptive studies - 1.Case study/series - **ex: Case Series --a descriptive study:** - Covid-19 in Critically Ill Patients in the Seattle Region --Case Series - Describe in-depth the characteristics of one or a limited number of cases (e.g. patients, health centres, villages), don't necessarily have controls - When little is known about a disease or population - understand how a disease presents, how the disease is transmitted to inform preventative measures, no hypothesis - **could be used for** Hypothesis generating, **but not always** - 2.Cross-sectional or survey (sometimes called prevalence study) - 3.Longitudinal descriptive - Analytic studies - A. Experimental study designs - 1.RCT - 2.Factorial design - 3.Crossover - B. Quasi-experimental study designs - 1.Non-equivalent control group - 2.Pre-Post intervention design - 3.Interrupted time series - C. 
Observational study designs - 1.Cohort (prospective and retrospective) - 2.Case-control - 3.Cross-sectional or survey (sometimes called correlational study)![](media/image14.png) - **Sampling** - Sampling: How participants are identified/selected and how many are included - **Target population** (Group researcher would like to generalize findings to) - **Source/study population** (Restricted group of interest from which we draw our sample) - **Sample** (Study participants) - different to do bc tough to represent a population in a sample, thus sampling is extremely important, if not represented then the study is going to be skewed, no longer generalizable to the target population - Types: - **Probability sampling** 🡪 Most often used in quantitative research and Finally, with regards to other information that was not provided in the article, we find no mention of sources of funding and other support provided. - **Random sampling:** all individuals in the population have equal chance of being selected - **Stratified random sampling:** the population is divided into strata (subgroups) and simple randomization is applied within subgroups - **Systematic sampling:** every n^th^ unit is selected - **Multi-stage sampling:** groups are randomly selected (e.g., schools), and then individuals within groups are randomly selected - **Non-probability sampling** - **Convenience sampling:** group of people who are readily accessible to the researcher - Used a lot because researchers are lazy - **Quota sampling:** variant of convenience sampling with the selection of a predetermined number of units according to strata - Purposive sampling 🡪 Generally not used in quantitative research - Snowball sampling 🡪 Generally not used in quantitative research Questions: - Which sampling method is more likely to produce a representative sample - probability sampling - which sampling method is most commonly in clinical research - convenience - **Sample size** - Factors that affect the sample size requirements in quantitative designs - **Effect size** (estimated effect of the IV on DV) - **Homogeneity** of the study population/sample (smaller sample may be needed for more homogeneous samples) - Whether **subgroup** **analyses** are planned (e.g. female vs male) sometimes need to overrepresent a group to make sure it is included - Extent to which **missing data** are expected (including due to loss to follow up) - **Sample size calculation** - Studies should provide information on how the sample size was determined, usually using a sample size calculation including the following: - **Estimated effect size:** Authors expect to find a **medium effect size** - **Type 1 error:** finding an association when there is none -- rejecting null hypothesis when should not (false positive) - Ex. Alpha = 0.05 - **Type 2 error:** finding no association when in fact there is one -- accepting null hypothesis when we should not (false negative) - Ex. Beta = (1-Power) = (1-0.80) = 0.20 - - **Variables and variable definitions** - Variable = any factor being studied, and which can be measured - A **dependent** variable is the outcome being studied - An **independent** variable is the characteristic being observed or manipulated which is hypothesized to cause (or contribute to) the outcome being studied - **Conceptual definition:** Abstract/theoretical meaning of a factor/concept of interest - Ex. 
Obesity is excessive accumulation of adipose tissue in the human body - **Operational definition:** The procedures by which a factor/concept is measured - Ex. Weight is measured to the nearest 0.1 kg using a calibrated scale. Height is measured to the nearest 0.1 cm using a standiometer. Obesity is measured as a body mass index (weight in kg divided by height in m^2^) greater than 30 kg/m^2^ for adults aged 19 years and over. - **Data collection methods** - **Interviewer administered questionnaire** - **Advantages** - Face to face, by telephone, by video conferencing - Can collect data on **past and current** exposures - Interviewer can clarify meaning of questions - **Disadvantages** - Subject to **recall bias** (someone who has had an event will remember it accurately or not and influence the individual) and **social desirability bias** (participant is responding to the interview and answers in a way that the participant thinks the interviewer wants you to answer) - Cost and time - Introduction of measurement error by interviewer (ex. asking your questions differently based on who you are talking to) - **Self-administered questionnaire** - **Advantages** - Can be done on paper or electronically - Can collect data on past and current exposures - Lower cost - Can sometimes be combined with supervision whereby missing or unclear answers are verified - **Disadvantages** - Low response rate (especially with long questionnaires) - Missing data - Difficult to get detailed or complex data - Participant response fatigue can induce measurement error (i.e. if the questionnaire is too long) - **Diaries** - **Advantages** - Can be done on paper or electronically - Good for experiences that are transient or of low impact - Minimize recall bias (i.e. log when it happens) - i.e. symptoms after administering a drug - **Disadvantages** - Can only be used to collect **present** behaviors or experiences (**not the past)** - Large amount of data that needs to be processed, may be more difficult to analyse - **Direct in-person observation** - **Advantages** - More objective - Can be used for low-impact behaviors - A lot of details can be captured - **Disadvantages** - Can only measure current behaviors or exposures - Requires extensive training of observers - Observers' subjectivity can lead to measurement error - Time consuming and expensive - **Physical, biological measure on participants** - **Advantages** - Objective - **Disadvantages** - Cost - Not available for many measures - Attributes that are very variable may need to be measured multiple times - **How do researchers decide which data collection methods to use?** - Choice depends on: - Study design (i.e. qualitative vs quantitative) - Amount and detail of data required - Impact of what you are looking to measure on participants lives (e.g. major surgery vs eating carrots) - Sensitivity of the information sought (i.e. researching w vulnerable populations need adequately trained individuals) - Cost and time - Choice often based on **practicality** rather than theoretical considerations - **Jiang et al study** - Is it a descriptive or an analytic study? - analytic - What is the target population? The study population? - target: individuals with poorly controlled type 2 diabetes in Sinapore - study: Participants were recruited at a diabetes **outpatient clinic of a re-structured hospital** in Singapore by convenience sampling. They were screened at the clinic during their medical consultation and invited to participate - What type of sampling was used? 
- convenience - What are the independent (exposure) and dependent (outcome) variables? - IV: NSSMP exposure - DP: HrQoL, HbA1, acute complications and unplanned medical consultations - What data collection methods are used? - general-ized self-efficacy scale (GSE) -\> self-efficacy (**questionnaire)** - Summary of Diabetes Self-Care Activities (SDSCA) -\> diabetes self-care (**questionnaire)** - Audit of Diabetes-Dependent Quality of Life (DDQoL) -\> HRQoL **questionnaire** - number of acute diabetes complications and unplanned medical consultation **(biological measure)** - participant\'s blood test reports at the diabetes -\> The clinical outcomes included HbA1c which was retrieved from **(biological measure)** - Select one study variable and identify its conceptual definition and its operational definition. - conceptual: control of type 2 diabetes, operational: measure of HbA1c - **Internal and external validity** - **Internal validity** - How confident can I be that the estimated effect (association) is a valid causal effect? - Three threats: - Confounding bias - Selection bias - Information (measurement) bias - **External validity** (aka generalizability) - To what extent can the findings of my study be generalized to (to what extent are they applicable to): - Target population - Other populations - Features that increase the internal validity of a study can hamper its external validity - Ex. well-defined inclusion and exclusion criteria, blinding, controlled environment - **Bias** - Any **systematic** sources of error in the design, **conduct** or **analysis** of a study that results in a mistaken estimate of an exposure's effect on an outcome (DV) - Can preclude finding a true effect or can lead to an inaccurate estimate (underestimation or overestimation) of the true association between exposure and an outcome - ex: A study finds that adults who drink alcohol are 2.5 times more likely to develop lung disease(Risk ratio = 2.5 with a 95% confidence interval of 1.9 to 3.1) - **Confounding bias** - Bias of the estimated effect of IV on DV due to the association of the exposure with other factors that influence the occurrence of the outcome - i.e., when there is a third factor involved that is associated with the IV and DV - Also known as confounding factor, lurking variable, extraneous variable, or confounder![](media/image16.png) - Results in a biased estimate of the effect of the exposure on the outcome - **What are the conditions to be met for a variable to be considered confounding?** - A confounder cannot be an intermediate between the exposure and the outcome. - A confounder is also associated with the exposure being studied but is not a proxy or surrogate for the exposure. - A confounder is predictive of the outcome even in the absence of the exposure. 
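Before returning to the smoking example below, here is a small simulation sketch of those three conditions (all probabilities are invented for illustration): smoking drives both drinking and lung disease, drinking has no effect of its own, yet the crude comparison makes drinking look harmful until the analysis is stratified by smoking status.

```python
import numpy as np

rng = np.random.default_rng(612)
n = 200_000

# Confounder: smoking. It is associated with the exposure (drinking)
# and predicts the outcome (lung disease) even without the exposure.
smoker = rng.random(n) < 0.30
drinker = rng.random(n) < np.where(smoker, 0.70, 0.20)   # exposure depends on smoking
disease = rng.random(n) < np.where(smoker, 0.15, 0.02)   # outcome depends ONLY on smoking

def risk_ratio(exposed, outcome):
    """Risk of the outcome in the exposed divided by risk in the unexposed."""
    return outcome[exposed].mean() / outcome[~exposed].mean()

# Crude (confounded) association between drinking and lung disease
print(f"Crude RR:             {risk_ratio(drinker, disease):.2f}")

# Stratified (adjusted) association: within each smoking stratum, RR is close to 1
print(f"RR among smokers:     {risk_ratio(drinker[smoker], disease[smoker]):.2f}")
print(f"RR among non-smokers: {risk_ratio(drinker[~smoker], disease[~smoker]):.2f}")
```

Running it, the crude risk ratio comes out well above 1 while the stratum-specific ratios sit near 1, which is what statistical control for a measured confounder is meant to reveal.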
- smoking is not an intermediate step; it's that smoking has a relationship with lung disease and with drinking habits separately ![](media/image18.png) - **How to *eliminate*** (minimize) **confounding bias in experimental studies?** - **Before data collection** (study design strategies): - **Randomization** (creates equal groups on known and unknown confounders) - **Restrict your population** (e.g., exclude participants who are smokers in the coffee -- CVD example) - Homogeneity - **Matching** (case-control study) (e.g., recruit participants as 'pairs' matched by smoking status) - After data collection (data analysis strategies): - **Statistical control** (must have a measure of the confounding variable) - **Intention to treat analysis** (in RCT) - **Selection bias (Selection threat)** ![](media/image20.png) - Bias in an effect estimate (association) due to the manner in which subjects are selected for the study, which results in pre-existing differences between the groups under study - Selection bias occurs at: - the **stage of recruitment of participants** and/or **during the process of retaining them in the study** - occurs when the exposed and diseased group has a **lower probability** of being included in the study - **How to address selection bias?** - Not easy to fix - Best avoided at the **design stage** - e.g., retaining participants in the study - Can collect data to 'estimate' the magnitude/direction of selection bias and do a **sensitivity analysis** - e.g., collect data from a sample of non-respondents, and use this to do a sensitivity analysis - you want a representative sample - also in cases with minorities, we need to adjust for them, so as not to perpetuate the biases of the majority - **Information (measurement) bias** - Bias in the **estimation of an effect** (association) arising from **measurement errors** - The quality of the information is different between comparison groups on the independent and dependent variables - usually the investigator's fault - Examples: - Outcome is measured/reported differently among intervention and control groups in an RCT - How can this be best addressed? - Must be addressed when **designing the study** and through strict adherence to the study **protocol**, e.g., Standard Operating Procedures (SOPs) - **Blinding/masking in RCT** - **Statistical vs clinical significance** - Statistical significance tells us whether an effect estimate (association) is **not due to chance alone** - Clinical significance tells us whether an effect estimate (association) is **big enough to make a difference for patients/populations** - What if: risk ratio = 1.2 with a 95% confidence interval of 1.1 to 1.3? - There is no equivalent to the p-value of statistical significance for determining clinical significance - Depends on many factors, including the population, the cost of the intervention/treatment, how prevalent the exposure is, etc.
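As a rough illustration of where a risk ratio and its 95% confidence interval come from, here is a sketch using an invented 2×2 table and the standard log-RR formula; whether an estimate of this size matters clinically is a separate judgment from whether the interval excludes 1.

```python
import math

# Hypothetical 2x2 table (counts are invented for illustration)
a, b = 120, 880     # exposed:   with outcome, without outcome
c, d = 60, 940      # unexposed: with outcome, without outcome

risk_exposed = a / (a + b)
risk_unexposed = c / (c + d)
rr = risk_exposed / risk_unexposed

# 95% CI on the log scale: ln(RR) +/- 1.96 * SE, with
# SE = sqrt(1/a - 1/(a+b) + 1/c - 1/(c+d))
se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# The interval excluding 1 speaks to statistical significance; clinical
# significance still depends on the population, cost, and exposure prevalence.
```

With these made-up counts the interval excludes 1, but a clinician would still have to judge whether a risk ratio of about 2 is large enough, and for whom, to change practice.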
- **Causality** - Causality is never based on statistics - Causality must be inferred - Based on evidence - Can rely on a set of criteria - Subjective - See video on The Question of Causation (14 mins)- watched - **Hill's criteria to assess if an estimated effect (association) is a causal effect:** - Temporality = exposure precedes outcome - Strength = size of the association - Consistency = results are replicated - Experimental evidence = removing/changing exposure has an effect on outcome - Biologic gradient = increased exposure increases the risk - Coherence = consistent with existing knowledge - Plausibility = common sense - Alternate explanations considered - Specificity = it's the only cause - **Reporting guidelines** - EQUATOR network, an international initiative that seeks to improve the reliability and value of published health research literature - Reporting guidelines promote clear reporting of methods and results to allow critical appraisal of the manuscript - Specific set of reporting guidelines for different kinds of research designs: - Ex. Randomised controlled trials: CONSORT guidelines - Ex. Observational studies in epidemiology: STROBE guidelines - Ex. Systematic reviews and meta-analyses: PRISMA guidelines - Ex. Qualitative research: SRQR and COREQ guidelines - When appraising a study, you can rely on the study design specific guidelines to ensure all elements are reported - **Critical Appraisal Skills Program (CASP) Checklists** - Checklists to help healthcare professionals more easily and accurately perform critical appraisal across different study designs - For each checklist, questions help you think about aspects of a study in a structured way, and organized in 3 sections: - Are the results of the study valid? (Section A) - What are the results? (Section B) - Will the results help locally? (Section C) - Questions you can record as 'Yes', 'No' or 'Can't tell'. There are prompts below the questions that highlight the issues to consider, as well as a space to record the reasons for your answers. **Lecture 4: Experimental study designs** - **The Powerful Placebo (a story from WWII)** - henry beecher, nurse injected saline instead of morphine, pt quieted down - need to control for the placebo then - **Analytic studies** - **Experimental study designs** - RCT - Factorial design - Crossover - Counterfactual - What would have happened to the *same people* exposed to a causal factor if they *simultaneously* were *not exposed* to that causal factor? - do the intervention, without giving it - The difference between what happened and what would have happened is the effect. 
- impossible to tell - RCTs attempt to mimic counterfactual - **Key features of an RCT** ![](media/image23.png) - Process: - Baseline data collection - 🡪 Randomisation - 🡪 Intervention + Control group 🡪 Follow up data collection - Careful selection of participants - Randomization - If no randomisation = Quasi-experimental with non-equivalent control group - Concealment of group allocation (blinded) - Manipulation (intervention and control group) - There can be more than one intervention group - Outcomes assessed at baseline and follow up - Note **synonyms:** - Randomized controlled trial - Randomized clinical trial - True experiment - **Selection of participants (very much clinical setting)** - Selection of participants that represent the target population (the population that would benefit from the experiment, not necessarily the general population, therefore it should be homogenous) - Strict inclusion/exclusion criteria - **Inclusion criteria** typically include demographic, clinical (e.g. comorbidities), and geographic characteristic *established at the beginning of the study to minimize any ambiguity* - **Exclusion criteria**: characteristics that could interfere with the success of the study or increase risk for an unfavorable outcome - E.g., characteristics that make potential participants highly likely to be lost to follow-up (e.g. drop out), comorbidities that could increase the risk for adverse events, treatments that could interfere with the intervention under study - Creates a more **homogenous** sample 🡪 addresses confounding bias and will decrease the requirements for the sample size (bc more heterogeneity = greater sample size) because how strict the criteria is - **Randomization** - All participants have **equal chance** of being in intervention or control group - Removes **systematic bias** between groups on pre-intervention characteristics - Participants are assigned to groups based on **chance** only - Prevents **selection bias:** eliminates interference with participant allocation to treatment groups - Prevents **confounding bias:** most powerful way to control for known and unknown confounders (never a perfect process though) - The likelihood of creating equal groups increases with larger sample size because there is a greater likelihood that characteristics will balance - Done after informed consent is obtained and after baseline data are collected - **Randomization methods** - **Complete:** equivalent to flipping a coin for each successive participant -- could result in unequal groups - **Simple:** equivalent to flipping a coin but with prespecified sample for each group - *But then have an issue if you have 50 in one group and 45 in the other, remaining participants not randomized, just automatically go to group that is missing participants* - (**Permuted) block:** participants are allocated to blocks of a given size, ensures balanced distribution in each group - **Stratified:** two-stage procedure, participants are first grouped into strata (e.g., female/male, end-stage/not end-stage), then participants are assigned to a treatment group using a randomization method. 
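A small sketch contrasting complete randomization (a coin flip per participant, so arm sizes can drift) with permuted-block randomization (each block of 4 contains 2 allocations to each arm, keeping the groups balanced as enrolment proceeds); the group labels and block size here are illustrative assumptions.

```python
import random

random.seed(612)
n_participants = 20

# Complete randomization: a coin flip for each participant -> groups may end up unequal
complete = [random.choice(["intervention", "control"]) for _ in range(n_participants)]

# Permuted-block randomization: shuffle blocks of 4 containing 2 of each allocation,
# so the two arms stay balanced throughout enrolment
def permuted_blocks(n, block_size=4):
    assignments = []
    while len(assignments) < n:
        block = ["intervention", "control"] * (block_size // 2)
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n]

blocked = permuted_blocks(n_participants)

print("Complete:", complete.count("intervention"), "vs", complete.count("control"))
print("Blocked: ", blocked.count("intervention"), "vs", blocked.count("control"))
```

Stratified randomization would simply apply one of these schemes separately within each stratum (e.g., female/male).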
- **Note:** - Simple and block are the most likely methods that we will see in research - Randomization (random assignment and random allocation) are not the same as random sampling - **Concealment of group allocation** (type needs to be clear in consent form) - **Allocation concealment** - Ensures that the research staff enrolling participants does not know what the upcoming assignment is - Sequentially numbered opaque sealed envelopes - Randomization schedule done by study biostatiscian not involved in recruitment (should be blinded to the experiment) - **Blinding or masking** - Concealing information regarding RCT arm from participants, data collectors, staff administering the intervention, care providers, data analysts - Single-blind study, double-blind study, triple-blind study - Not always possible, especially in nursing interventions (e.g. start seeing side effects of the "actual" intervention, clinician and participant may figure it out) - May be more or less important (e.g. patient reported outcomes vs death) - **Manipulation** - **Intervention condition/arm** - Consistent with theoretical rationale, and of sufficient intensity and duration - Developed using rigorous methods (ideally pilot tested) - Implementation follows a standardized protocol - Described with sufficient details so **as to be replicated** - **Control condition/arm** - Standard methods of care - A placebo or pseudo-intervention - An alternative intervention (e.g. already existing drug) - The intervention but at a different dose - Wait-list control group - **Key considerations** - What is ideal in terms of conceptual and methodological considerations? - What is ethical? - there's already going to be people who could benefit from the actual intervention who will not be getting it, but once efficacy proven, it could help more people, so outweighs this - **Prospective follow up** - Participants are followed up **over time** after randomization and delivery of intervention/control conditions - Outcomes are measured at the same intervals for **intervention and control groups**. This can be at: - Baseline - During the intervention - At the end of the intervention (a minimal requirement) - For a given period following the end of the intervention - Outcomes need to be carefully thought of - Consequences of the intervention - Clear conceptual and theoretical definitions - Acceptable psychometric properties - not only a primary bc not get a significant association, so have secondary outcomes as well but chosen carefully - **Hypotheses in RCT** - **Superiority trial** - Is the new intervention A *better than* (superior to) the control condition B? - higher threshold - **Non-inferiority trial** - Is the new intervention A *no worse* than the standard treatment B? - then decide if you advocate for that intervention - Usually for questions such as: Is it less costly, less invasive, does it have fewer side effects? - **Equivalence trial** - Is the new intervention A *similar to* (i.e. not better and not worse) the standard treatment B? - *Similar* within a predetermined range of effect - **Clinical trials and study phases** - - **Preclinical:** in animals first, could be many years, many experiments - Phase 1: looks for **safety**, morbidity, mortality, what are the side effects - Phase 2: if safe, go into phase 2, bigger participants, how do they tolerate the drug, what are the side effects, find safe dose in a big enough sample size - Phase 3: efficacy trial, is the drug effective at preventing XYZ? 
even more participants - FDA Review: after phase 3 considered safe and effective and go through a review to be approved and then licensed to go on market - Phase 4: after approved by the FDA you do this phase - **Factorial design -- variant of the RCT** - Used when there is **more than one intervention** component the researcher wants to manipulate ![A diagram of different colors Description automatically generated](media/image25.png) - want to compare people with both components vs. only one component, vs. no component - **Crossover design -- variant of the RCT** - Participants are randomly assigned to different orderings of treatment groups - **Participants serve as their own control** (within subject design) - **Wash out period** (not under the effects of intervention) between treatments used when carry-over effects are of concern - ex: asked to meditate and measure stress hormone levels, then wait an hour and measure this stress hormones again (control), so you are your own conrol ![A diagram of a condition Description automatically generated with medium confidence](media/image27.png) - **Cluster randomised trial -- variant of the RCT** - Study sites (e.g. clinics, schools) are randomized (rather than individuals), participants within sites receive the allocated treatment - the site is randomized, not the individuals, all the individuals in a site will receive the same intervention (e.g. intervention or control) A blue rectangle with black text Description automatically generated - **What can go wrong in RCT?** - Recruiting participants - Ensuring proper randomisation (e.g., allocation is truly random, allocation is concealed) ppl need to be blind - Determining an adequate control condition - Keeping participants, staff administering intervention and data collectors **blinded** to the treatment allocation - Keeping participants in the study (avoiding loss to follow-up) and ensuring the protocol is strictly followed - Attrition (to counter this: over recruit, choose a good study design that ppl are more likely to continue) (if a lot of people leave, the power is affected) - in the real world, very hard to have these controls, so hard to always extrapolate a study to the real world - **Analysis by intention to treat (ITT) vs per protocol in RCTs** - Two follow up problems can arise in RCTs: - Participants may not stay within assigned (intervention vs control) group or may not get 'full intervention' - Control participants may get 'some' intervention - Intervention participants may get no intervention or only part of the intervention - **Contamination** -- ppl in different groups are in contact with one another thus 'contaminating' the results - **Attrition** (loss to follow up) - participants don't show up for follow up assessments or decide they no longer want to take part in the study or participants that die - Missing outcome data - Creates an imbalance in the two equally created groups at baseline - When analysis is conducted by **intention-to-treat (ITT):** - Includes all participants **according to randomized treatment assignment** - even participants who drop out - Ignore noncompliance, protocol deviations (i.e., anything that happens after randomization) - Use of **data imputation** techniques for missing outcome data - Instead of kicking them out of the study, we try to figure out what their outcome data would look like - Helps with avoiding confounding bias by trying to keep the groups as equal as possible - **Should always be reported in an RCT** - Estimates of treatment 
effects are generally more conservative - **Per-protocol analysis (PPA)** is conducted on a subset of participants who strictly adhered to the study protocol - Estimates the true efficacy on an intervention - Can be presented in addition to the ITT analysis - May require adjustment for baseline differences (i.e., potential confounders) - **ITT versus PPA** - CONSORT recommends both - ITT and PPA are complementary - ITT: effect of offering the treatment - PPA: effect of actually receiving the treatment - if ITT and PPA are similar then it would should that it is good, though usually ITT will be lower and PPA a bit high effect - **Example: Safety and Efficacy of Typhoid Conjugate Vaccine in Malawian Children** - ![](media/image29.png) - ITT: 80.7% - PPA: 83.7% - **Why was PPA higher than ITT?** - PPA is likely to overestimate the treatment effect by excluding those who are: - Not compliant with study protocol - Drop out of the study - **In the real world, people may not follow a recommended or ideal protocol when receiving vaccines (real world can only be seen when do population based research)** - E.g. not taking all doses of a recommended vaccination schedule - **GROUP WORK:** CASP checklist for Jiang et al. study - **Number needed to treat (NNT)** - The number of people who would need to receive a given intervention/treatment to prevent one undesirable event - We want number to treat to be **LOW** -- that would mean that for every person that gets the new treatment the undesirable event would be avoided - ![](media/image31.png) - **outcome here is an undesirable outcome** - do not need to know the equation for exam purposes - Ex. For every 25 patients receiving the treatment **one** emergency visit will be avoided - ideal would be a 1:1 ratio - **Number needed to harm (NNH)** - The number of people who would receive a given intervention/treatment for one person to experience an adverse outcome - We want this number to be **HIGH** -- we want to be able to treat more people without a lot of people getting the undesirable side effect - Ex. NNH: For every 34 people treated with the intervention, 1 case will develop the adverse effect - Ex. 
- **RCT**
    - ![](media/image35.png)
    - Key features of an RCT
        - Careful selection of participants
        - Randomization
        - Concealment of group allocation
        - Manipulation (intervention and control group)
        - Outcomes assessed at baseline and follow-up
    - Synonyms: randomized controlled trial, randomized clinical trial, true experiment
    - **If there is no randomisation = quasi-experimental design with a non-equivalent control group**
- **Quasi-experimental studies**
    - Note:
        - There is no CASP checklist specifically for quasi-experimental studies
        - There are no EQUATOR reporting guidelines specific to non-RCT experimental designs
    - Used when we can't do a full experiment but still want to manipulate an independent variable
    - 2 key features
        - No randomization
        - May or may not have a (non-equivalent) control group
- **Pretest-posttest design (pre-post design):**
    - Groups at baseline may not be similar -- there can be potential confounders, so differences at follow-up might be due to baseline differences
    - The same people are measured before and after; there is no separate control group
- **Interrupted time series**
    - Data collected at multiple time points before and after an intervention are compared
    - May or may not have an external comparison group
    - Commonly used in quality improvement interventions and for evaluations of policies and practice change
    - Single interrupted time series
    - Controlled interrupted time series
- **Effects of a 'Baby Friendly Hospital Initiative' on exclusive breastfeeding rates at a private hospital in Lebanon: an interrupted time series analysis**
    - ![](media/image39.png)
    - Time series analysis: lead-in period (before any intervention), intervention period (measurements during the intervention), and follow-up period (after the intervention)
    - The same outcomes are always measured at all of these time points
    - Is there an improvement? In this case, yes, for this specific population (a sketch of a typical analysis model follows below)
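One common way to analyse an interrupted time series is segmented regression; this is a general sketch of the idea, not necessarily the exact model used in the Lebanon study:

$$Y_t = \beta_0 + \beta_1\,\text{time}_t + \beta_2\,\text{intervention}_t + \beta_3\,\text{time since intervention}_t + \varepsilon_t$$

Here $\beta_1$ is the pre-existing trend, $\beta_2$ captures the immediate level change in the outcome (e.g., the exclusive breastfeeding rate) when the intervention starts, and $\beta_3$ captures how the trend changes during the follow-up period.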
**Lecture 5: Observational study designs**

- **Analytic studies**
    - **Observational study designs**
        - Cohort (prospective and retrospective)
        - Case-control
        - Cross-sectional or survey (sometimes called a correlational study)
    - **Note:** Observational studies are done when it is unethical to manipulate an IV (e.g. you can't make people start smoking for your experiment)
- **Cohort study**
    - Process:
        - A sample of participants who do not have the health outcome of interest is selected
        - At baseline, participants are classified according to exposure status
        - Participants are followed over time to determine their outcome status
        - Incidence of the outcome among exposed and non-exposed is compared (a worked sketch follows after this section)
    - **Prospective**
        - Individuals from a given study population are sampled and followed concurrently over time to determine exposure and outcome status
    - **Retrospective**
        - Individuals from a given study population are sampled and the researcher relies on existing data (e.g., medical records, RAMQ data, etc.) to determine exposure and outcome status
        - A retrospective cohort study is not the same as a case-control study
    - **General vs special population cohorts**
        - **General population** cohorts are more common in nursing research
            - Also called a 'single cohort study': one group is followed over time, and those not exposed serve as the internal comparison group
            - Ideal for common exposures
            - E.g.: determinants of new-onset cardiometabolic risk among normal-weight children
        - **Special population** cohorts are used to study **rare exposures**
            - Two cohorts are followed over time, one with the exposure of interest and one without (i.e., an external comparison group)
            - Commonly used for work-related exposures that are rare in the general population
            - E.g.: the 9/11 World Trade Center attacks and risk of cancer among first responders
                - Exposed cohort: first responders on site --- prospective follow-up to determine cancer incidence
                - Non-exposed cohort: first responders not on site --- prospective follow-up to determine cancer incidence
    - **Advantages:**
        - Prospective cohort studies are the least prone to temporal ambiguity (vs other observational studies) --- we know that the exposure (cause) precedes the outcome (consequence)
        - Can study different outcomes in the same cohort study, provided the outcome is not rare
        - Retrospective cohort studies are low cost and quick
        - Can be used to estimate the effects of rare exposures (special population cohorts)
        - Can examine the natural history of disease (prognosis)
    - **Limitations:**
        - Costly and time-consuming (prospective cohort studies)
        - Loss to follow-up/attrition can lead to selection bias
        - Not useful for rare outcomes
        - Confounding bias can hamper causal claims
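Because a cohort starts from exposure status and follows people forward, incidence can be calculated in each exposure group and compared directly. A minimal sketch with invented numbers for a hypothetical single-cohort study:

```python
# Hypothetical single-cohort study: compare incidence between exposed and
# non-exposed participants and express the contrast as a relative risk (RR).
exposed_n, exposed_cases = 500, 50        # classified as exposed at baseline
unexposed_n, unexposed_cases = 1500, 60   # internal comparison group

incidence_exposed = exposed_cases / exposed_n        # 0.10 (10%)
incidence_unexposed = unexposed_cases / unexposed_n  # 0.04 (4%)
relative_risk = incidence_exposed / incidence_unexposed

print(f"Incidence, exposed:   {incidence_exposed:.1%}")
print(f"Incidence, unexposed: {incidence_unexposed:.1%}")
print(f"Relative risk: {relative_risk:.1f}")  # 2.5x the risk among the exposed
```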
- **Case-control studies**
    - Notes:
        - Identify subjects based on their **outcome status**
            - Cases -- have the outcome of interest
            - Controls -- do not have the outcome
                - Should be representative of the population from which cases were selected
                - Often matched to cases on characteristics such as age and sex
        - Look back in time (retrospective) for differences in **exposure status**
            - Questionnaires
            - Medical records
        - Compare exposure among cases and controls (a worked sketch follows after this section)
    - **Advantages**
        - Cheaper and faster than a cohort study
        - Useful for rare diseases/outcomes
        - Can estimate associations for multiple exposures
    - **Limitations**
        - Selection of an inappropriate control group can lead to bias
        - Information bias is possible (recall bias)
        - Not good for rare exposures
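Since participants are selected on their outcome, incidence (and therefore a relative risk) cannot be computed; instead, the exposure odds among cases and controls are compared. A minimal sketch with invented counts:

```python
# Hypothetical case-control study: incidence cannot be calculated because
# participants are selected on outcome, so exposure odds are compared instead.
cases_exposed, cases_unexposed = 40, 60
controls_exposed, controls_unexposed = 20, 80

odds_cases = cases_exposed / cases_unexposed            # 0.67
odds_controls = controls_exposed / controls_unexposed   # 0.25
odds_ratio = odds_cases / odds_controls

print(f"Odds ratio: {odds_ratio:.2f}")  # ~2.67: exposure is more common among cases
```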
- **Cross-sectional (survey) design**
    - **Notes:**
        - Data are collected at a **fixed point** or within a **short period of time**
        - Snapshot of the health experiences of a population at a certain point in time
        - The sample may or may not be representative of the study population, depending on the sampling method used
        - Can be **descriptive** or **analytic**
            - Ex. Prevalence study (for descriptive purposes)
            - Ex. Correlational study (for analytic purposes)
        - Describe characteristics/disease occurrence 🡪 prevalence study
        - Compare outcome prevalence between exposed and non-exposed 🡪 correlational study
    - **Advantages**
        - Can be quick and relatively inexpensive
        - Large amounts of data can be collected at once
        - Useful to determine disease prevalence and for other descriptive purposes
        - Can be used to generate hypotheses on the relation between 2 concepts/variables
    - **Disadvantages**
        - In prevalence studies:
            - Generalizability of findings when relying on non-probability samples (or when the sample is not representative of the target population)
            - Cannot be used to determine disease incidence
        - In correlational studies:
            - Temporal ambiguity between 'cause' and 'effect'
                - Data are only collected at one point in time 🡪 cannot differentiate between cause and effect (temporal ambiguity)
            - Prone to confounding bias
- **How to *eliminate* confounding bias in observational studies?**
    - Before data collection (study design strategies):
        - **Restrict your population** (e.g., exclude participants with comorbidities) 🡪 homogeneity
        - **Matching** (e.g., recruit participants as 'pairs' matched by sex, such that a female control is recruited for every female case)
            - Typically used in case-control studies
    - After data collection (data analysis strategies):
        - **Statistical control** (e.g. multivariable regressions), as sketched below
            - But you must be sure that all potential **confounding variables** were measured
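A minimal sketch of statistical control, assuming Python with pandas and statsmodels is available; all variable names and numbers below are simulated for illustration. The exposure-outcome association is estimated with and without adjusting for a measured confounder (age):

```python
# Sketch of 'statistical control' with a multivariable regression (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)
n = 2000
age = rng.normal(50, 10, n)
# Older people are more likely to be exposed, so age confounds the association
exposure = rng.binomial(1, np.clip(0.2 + 0.01 * (age - 50), 0.05, 0.95))
# Outcome risk depends on both the exposure and age
p_outcome = 1 / (1 + np.exp(-(-2.0 + 0.5 * exposure + 0.05 * (age - 50))))
outcome = rng.binomial(1, p_outcome)
df = pd.DataFrame({"outcome": outcome, "exposure": exposure, "age": age})

crude = smf.logit("outcome ~ exposure", data=df).fit(disp=False)
adjusted = smf.logit("outcome ~ exposure + age", data=df).fit(disp=False)
print("Crude OR for exposure:   ", round(float(np.exp(crude.params["exposure"])), 2))
print("Adjusted OR for exposure:", round(float(np.exp(adjusted.params["exposure"])), 2))
# Adjustment only removes confounding from variables that were actually measured.
```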
- **Selection bias**
    - Bias in the estimation of an association due to the manner in which subjects are selected into the study, resulting in pre-existing differences between groups in terms of exposure and outcome
    - Selection bias can occur at:
        - the stage of recruitment of participants
        - and/or during the process of retaining them in the study (in cohort studies and RCTs)
- **Selection bias in cohort studies**
    - At the stage of recruiting participants for a study:
        - 'Healthy volunteer' bias: individuals who volunteer for a study are different from those who do not
        - Referral bias: in case-control studies, cases with a given exposure are more likely to be invited to take part in a study
    - During the process of retaining participants in the study:
        - Attrition (loss to follow-up) bias: individuals who remain in a cohort study may be different from those lost to follow-up
- **Information (measurement) bias in observational designs**
    - Bias in the estimation of an association arising from **measurement errors** in
        - the exposure / independent variable
        - the outcome / dependent variable
    - The quality of the information is different between comparison groups
    - Examples:
        - The outcome is ascertained differently among exposed and non-exposed groups in a cohort study (*or among intervention and control arms in an RCT*)
        - Information on exposures is collected differently from cases and controls in a case-control study
    - Types of information (measurement) bias:
        - **Reporting bias (social desirability bias)**
            - Use objective measures, use valid questionnaires, favour self- rather than interviewer-administered questionnaires for sensitive outcomes
        - **Surveillance bias (detection bias):**
            - In a cohort study, exposed individuals are more likely to have the outcome detected due to increased surveillance/screening/testing
            - Use standard operating procedures
        - **Recall bias:**
            - In a case-control study, when relying on questionnaires to assess exposures, cases are better at remembering past exposures compared to controls
            - Rely on existing data to ascertain exposure status
- **Causality**
    - Causality is never based on statistics
        - It is based on accumulating evidence over time
    - Causality must be inferred
        - Based on evidence
        - Can rely on a set of criteria
        - Subjective
    - Note: Hill's criteria are used to assess whether an estimated effect (association) is a causal effect
        - All of these criteria are important, but the only one that always has to be true is **temporality** = the exposure precedes the outcome

**Lecture 6: Measurement and measurement instruments**

- **Developing a data collection plan**
    - Identify data **needs**
        - What types of data are required to answer the research question?
    - Select data collection **methods**
        - What data collection methods do the variables require?
        - Which data collection methods are ethical and feasible?
    - Select and/or develop measurement **instruments**
        - Does a reliable and valid instrument already exist for the operational concept I want to study?
        - Does the existing instrument require modifications/adaptations?
        - If a new instrument must be developed, how will its reliability/validity be determined?
- **Measurement error**
    - The difference between the 'true' value and the measured (or observed) value for a given variable (construct)
    - **Observed value = True value ± Error**
        - We don't know what the true value is (it is what we are trying to get to)
    - **Exists to some degree in all measures**
        - High-quality data = less measurement error
        - But higher-quality measurement is also typically more invasive, more costly, less practical, requires more time, etc.
    - Many measures in nursing research are **subjective**, which are even more prone to measurement error compared to **objective** measures
- **Potential sources of measurement error**
    - **Equipment** = weight scales or BP monitors not calibrated
    - **Assessments** = misuse of equipment, or poor assessment skills
    - **Questionnaires/interviews** = questions not reliable/valid, or questions not posed/administered in a consistent manner
    - **Data source** = medical record (e.g. incomplete information), person (e.g. omitting to report certain behaviours)
    - **Data entry** = miscoding/entry error
- **Accuracy of a measurement instrument**
    - **Reliability**: the consistency with which an instrument measures a given attribute
    - **Validity**: the extent to which an instrument is measuring what it is supposed to be measuring
    - These are known as the **psychometric properties** of instruments
- **Reliability and validity**
    - The target represents what is being measured
    - The center of the target represents the hypothetical 'true value'
    - The distance between a given point (observed value) and the target center (true value) corresponds to the measurement error
- **1) Test-retest reliability**
    - **Inter-rater (inter-observer) reliability**: measurement obtained by 2 or more raters/observers on the same participant
        - E.g. two research assistants measure the height of a participant; was the same measurement obtained?
        - Requires: a precise operational definition, a detailed measurement protocol, training of staff/raters
    - **Stability over time**: measurement obtained on the same participant on 2 or more occasions
        - E.g. participants complete a quality of life questionnaire on week 1 & week 2
        - A reliable instrument should be able to measure the same thing on different occasions
        - The interval between the first and second measurement should be:
            - Long enough so that the experience from the 1st occasion does not influence results on the 2nd occasion
            - But not so long that the participant's 'true value' changes (e.g. change in health status over time)
- **How is reliability reported in a research article for test-retest reliability?** (a small computation sketch follows this list)
    - **Correlation coefficients** are used for **continuous** variables:
        - Pearson correlation coefficient (-1 to +1)
        - Intraclass correlation coefficient (0 to 1)
        - \> 0.75 is considered good to excellent
    - For **dichotomous and categorical variables**, use:
        - Proportion of agreement (0 to 100%)
        - Kappa coefficients (0 to 1)
        - \> 0.75 is considered good to excellent
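A minimal sketch (with invented scores) of how these coefficients are computed. A real analysis would typically report the intraclass correlation coefficient for continuous test-retest data, but Pearson's r and Cohen's kappa illustrate the idea:

```python
# Test-retest reliability for a continuous score (Pearson r) and
# inter-rater agreement for a dichotomous rating (Cohen's kappa). Invented data.
import numpy as np

# Same participants, quality-of-life score at week 1 and week 2
week1 = np.array([62, 75, 58, 90, 70, 81, 66, 73])
week2 = np.array([65, 73, 60, 88, 72, 80, 64, 75])
pearson_r = np.corrcoef(week1, week2)[0, 1]
print(f"Test-retest Pearson r = {pearson_r:.2f}")  # > 0.75 = good to excellent

# Two raters classify the same 10 participants as 'at risk' (1) or not (0)
rater1 = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
rater2 = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 1])
po = np.mean(rater1 == rater2)                        # observed agreement
p_yes = rater1.mean() * rater2.mean()                 # chance both say 'at risk'
p_no = (1 - rater1.mean()) * (1 - rater2.mean())      # chance both say 'not at risk'
pe = p_yes + p_no                                     # agreement expected by chance
kappa = (po - pe) / (1 - pe)
print(f"Agreement = {po:.0%}, Cohen's kappa = {kappa:.2f}")
```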
- **2) Internal consistency**
    - An indicator of reliability used for **multi-item measurement instruments** (sometimes called test-score reliability)
        - E.g.: the General Self-Efficacy Scale is a 10-item questionnaire used to measure personal self-efficacy
    - Captures the extent to which people respond in the same way to items within a measurement instrument
    - Note: long questionnaires tend to have better internal consistency, but people are less likely to answer long questionnaires (attrition risk)
    - The most commonly used measure of internal consistency is **Cronbach's alpha** (a computation sketch follows this list)
        - Dependent on the number of items included in the tool/scale
        - Strive for a balance between the tool's length and its internal consistency
- **How is reliability reported in a research article for internal consistency?**
    - Cronbach's alpha (sometimes called coefficient alpha) (0 to 1)
    - \> 0.8 is considered good to excellent
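A minimal sketch of the standard Cronbach's alpha formula applied to invented item scores:

```python
# Cronbach's alpha for a multi-item scale (invented data).
import numpy as np

# Rows = participants, columns = items (e.g., 5 items scored 1-4)
items = np.array([
    [3, 3, 4, 3, 3],
    [2, 2, 2, 3, 2],
    [4, 4, 3, 4, 4],
    [1, 2, 1, 1, 2],
    [3, 4, 3, 3, 3],
    [2, 1, 2, 2, 1],
])

k = items.shape[1]                               # number of items
item_variances = items.var(axis=0, ddof=1)       # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)   # variance of total scores
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")          # > 0.8 = good to excellent
```

Because participants answer these items consistently, alpha comes out high (about 0.94 here); adding more items of similar quality would tend to push it higher still, which is why scale length matters.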
- **Validity**
    - Notes:
        - The extent to which an instrument is measuring what it is supposed to measure
        - Conceptual definition: the abstract or theoretical meaning of a given construct
            - Self-efficacy: what does it mean, which theories underlie this construct, etc.
        - Operational definition: the actual procedures by which a construct is measured
        - Being able to correctly operationalize and quantify a construct is at the core of measurement
    - Validity evidence is **built over time** -- it cannot be determined with a single study
        - Multiple sources accumulated over time help to determine the validity of a measure
- **1) Content validity**
    - For multi-item instruments: does the instrument's content (i.e., the questionnaire items) adequately capture the construct that is being measured?
    - Based on **expert judgement**
    - Example: General Self-Efficacy Scale
        - I can always manage to solve difficult problems if I try hard enough.
        - If someone opposes me, I can find the means and ways to get what I want.
        - It is easy for me to stick to my aims and accomplish my goals.
    - For each item, experts are asked whether the item targets characteristics that the instrument is designed to cover
- **2) Criterion validity**
    - How does the (new) measurement instrument compare to another **criterion measure** of the same construct?
    - Ideally this criterion measure should be a gold-standard measure
    - **Concurrent validity**: both measures are obtained at the same time
        - If the measurement instrument being assessed is valid, we expect it to be highly correlated with the criterion measure
    - **Predictive validity**: the criterion measure is obtained some time after the measure
        - If the measurement instrument being assessed is valid, we expect it to accurately predict the criterion measure
- **3) Construct validity**
    - For many self-reported abstract constructs, there is no gold-standard measurement (self-efficacy, depression, quality of life, etc.)
    - With construct validity, we **assess the correlation** between the measurement instrument being assessed and other measures that are theoretically related to the underlying construct
        - **Convergent validity:** based on other measures of the same construct (we hope the correlation will be high)
        - **Divergent (discriminant) validity:** based on other measures of a distinct construct (we hope the correlation will be low, i.e. near 0)
    - Example: the General Self-Efficacy Scale
        - Positively correlated with optimism and work satisfaction, and negatively correlated with depression, stress, health complaints, burnout, and anxiety
- **Measurement tool responsiveness**
    - The ability of a measurement tool to **detect change** over time
    - Some measurement instruments may not be precise enough
    - Some measurement instruments may have 'ceiling effects'
- **Validity and reliability are context dependent**
    - Validity evidence is **built over time**, with validation occurring in a variety of populations
    - Example: Barratt Impulsiveness Scale -- a self-report measure of impulsive personality traits
        - One of the most widely used self-report impulsivity measures in psychiatric research
        - Originally developed in the USA in English
        - Good psychometric properties

**Lecture 7: Overview of Qualitative Research**

- **Types of qualitative designs**
    - Ethnography: to describe and interpret a culture or group
    - Grounded theory: to describe a psycho-social process
    - Case study: to describe and interpret a particular event, program or activity
    - Phenomenological study: to describe and interpret phenomena from the participant's point of view
    - Critical analysis: to describe text from a critical theory perspective
- **Key features of qualitative research**
    - Studies phenomena in the **natural contexts** of individuals or groups
    - Tries to gain a deeper understanding of people's experiences, perceptions, behaviours and processes and the meanings they attach to them
    - Qualitative research is **pluralistic** (many different approaches)
    - During the research process, researchers use an 'emerging design' to be flexible in adjusting to the context
    - Data collection and analysis are iterative processes that happen simultaneously as the research progresses
- **Flow of activities in qualitative research**
    - **Planning the study**
        - Identifying the research problem
        - Doing a literature review
        - Developing an overall approach and research question
        - Selecting and gaining entry into research sites
        - Developing methods to safeguard participants
    - **Developing data collection strategies**
        - Deciding what type of data to gather and how to gather them
        - Deciding from whom to collect the data
        - Deciding how to enhance trustworthiness
    - **Gathering and analyzing data**
        - Collecting data
        - Organizing and analyzing data
        - Evaluating data:
            - making modifications to data collection strategies if necessary
            - determining if saturation has been achieved
    - **Disseminating findings**
        - Communicating findings
        - Making recommendations for utilizing findings to inform practice and future research
- **Qualitative research questions**
    - Tend to be **broad** and **open**
        - 'What', 'how', and 'why?' (rather than 'how many, how much, and how often?')
    - Can **change** (to a certain degree) over the course of data collection and analysis, e.g.:
        - Initial question: Why are GPs hesitant to ask about intimate partner violence?
        - Revised questions:
            - 'What are GPs' attitudes and perspectives towards discussing family abuse and violence?'
            - 'How do GPs behave during the communication and follow-up process when a patient's signals suggest intimate partner violence?'
    - **PICOT is not well suited** to formulating qualitative research questions (or qualitative search questions)
        - Focus on the **Population** and **Concept**
- **3 main qualitative approaches/traditions (research designs)**

|  | **Ethnography** | **Phenomenology** | **Grounded theory** |
|---|---|---|---|
| Definition | A branch of inquiry rooted in anthropology that focuses on the culture of a group of people, with the goal of understanding the world view of those under study. | A qualitative research tradition, with roots in philosophy and psychology, that focuses on the lived experience of humans. | A qualitative research methodology with roots in sociology that aims to develop theories grounded in real-world observations. |
| Domain | Culture | Lived experience | Social settings |
| Area of inquiry | Holistic view of culture | Experiences of individuals within their experiential world or 'life-world'. | Social structural process within a social setting. |
| Focus | Understanding the meanings and behaviours associated with the membership of groups, teams, etc. | Exploring how individuals make sense of the world to provide insightful accounts of their subjective experience. | Building theories about social phenomena. |
- **Qualitative description as a research approach**
    - Seeks to discover and understand a **phenomenon**, a **process**, or the **perspectives** and **worldviews** of those involved by accessing the meanings participants ascribe to them
    - Remains at a more descriptive level
    - An approach that is useful where **information is required directly from those experiencing the phenomenon under investigation** but where time and resources are limited
    - Widely used in nursing and in health research
    - Often used as part of a mixed methods approach
- **Qualitative description**
    - An inductive process
    - Designed to develop an **understanding** of and describe a **phenomenon**
    - The researcher is **active** in the research process (the researcher becomes part of the phenomenon being studied)
    - Recognizes the **subjectivity of the experience** (of the participant and the researcher) -- an emic stance
    - Conducted in the **natural setting**
- **Methodological Congruence**