Session 8: Developing and Evaluating Interventions

Document Details

Uploaded by AmenableHurdyGurdy5261

University College London, University of London

Tags

mental health interventions, evaluation methods, randomized controlled trials, research methods

Summary

This document provides an overview of developing and evaluating interventions, particularly in the context of mental health. It discusses the theoretical frameworks, methods, and considerations involved in intervention design, including the use of RCTs and quasi-experiments, and highlights the importance of different perspectives and of factors such as feasibility, resources, and stakeholder input.

Full Transcript

Session 8: 1st Nov
Created: @September 24, 2024 3:16 PM

Developing and Evaluating Interventions (Workshop)

Lead Teachers: Sonia Johnson, Becky Appleton, Bryn Lloyd-Evans, Maev Conneely

To be covered: Theory of Behaviour Change, How Interventions are Developed, and Methods of Evaluation, Including RCTs

Reading list: PSBS0002: Core Principles of Mental Health Research | University College London (talis.com)

Class Prep

Key: Preliminary Lectures - Investigating Treatments in Mental Health

Basic Principles of RCTs

Evidence before RCTs:
- Expert opinion
- Clinical judgement
- Case series

Hierarchy and Power

Evidence-Based Health Care
- Evidence-based medicine (later evidence-based practice): a revolt against tradition and unsupported clinical intuition (1990s).
- Establishes a hierarchy of evidence:
  1. Systematic reviews (generally of RCTs)
  2. Randomised controlled trials (RCTs)
  3. Other controlled studies / cohort designs
- 1980s: the case study gives way to the RCT.
- RCTs prioritised by funders, reviewers and guideline developers as the gold standard for research.
Developed for drug trials, but extended beyond them.

The Point of RCTs
- A fair comparison, not affected by initial differences between groups.
- Everything possible to achieve objectivity: replicable measures, independent observers, and analysts who do not know which group participants are in.
- Ammunition to argue for innovations that really work (CBT for psychosis) and against ones that don't (Community Treatment Orders).

RCTs in the pathway for developing and testing psychosocial interventions
Recommended pathway for developing and testing interventions:
1. Develop a theoretical framework, assemble relevant evidence, look for elements to adapt or incorporate
2. Model components of the intervention, test feasibility and acceptability
3. Full pilot of the intervention and trial procedures
4. Definitive RCT
5. Implementation study
But in practice, new interventions are often developed by enthusiastic service leaders/practitioners.

Stages of development and evaluation of complex interventions: NIHR/MRC framework for development and evaluation of complex interventions.

Limitations of Trials
- Slow process: 15+ years to definitive evidence
- Contextual factors can affect outcome, e.g. the content of "treatment as usual"
- Tricky for interventions delivered to whole services or whole areas
- Do not tend to prioritise service user experience or preference

Quasi-Experiments and Naturalistic Studies
Quasi-experiments: non-randomised comparative studies. Main types:
- Pre-post comparison
- Area-by-area or service-by-service comparison
Can be seen as a type of cohort study.
Example: TIPS Study - comparison of an area with a large-scale early detection of psychosis campaign vs. usual service in a neighbouring area.

Naturalistic studies: investigate research questions that can be addressed without changing the care people receive (overlaps with quasi-experiment).
Example: REAL Cohort Study (Killaspy et al.)
- investigation of individual and service-level factors associated with good outcomes in a national cohort of people using rehabilitation services for complex psychosis.

Current thinking about quasi-experiments
- Previously: seen as biased and disreputable.
- Now: some evidence suggests outcomes not very different from RCTs, especially if contemporaneous.
- Modern statistical methods (e.g. adjusting for confounders) improve quality.
- Routine data, especially "big data", is currently attracting increasing interest: numbers, representativeness and efficiency.

The Rise of "Big Data"
Routine data available as:
- National monitoring data, e.g. NHS Digital data on admissions
- Anonymised extracts from casenotes, e.g. CRIS
- National or local case registries, e.g. Swedish cohorts consortium
Potential for rapid and highly representative studies.
Health informatics: the science of using information technology to improve health, e.g. by developing approaches to routine data use in research.

The Role of Health Economists
- For widespread adoption, mental health innovations tend to have to demonstrate cost-effectiveness.
- Cost-effectiveness: relates health gain to how much needs to be spent to achieve it.
- Classically, QALYs (Quality Adjusted Life Years) are the outcome measure; expenditure per QALY is measured.
- Other outcomes often used in mental health, e.g. cost per 5-point improvement on a depression scale.
- Cost-benefit analysis: costs of the intervention balanced against financial benefit, e.g. an intervention costs £1,000 per person but saves a mean of £1,500 in inpatient costs.

Understanding individual views and experiences: qualitative research
- Service user perspective: RCTs do not focus on individual experiences, preferences or needs.
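The QALY and cost-benefit arithmetic above can be sketched in a few lines of Python. This is an illustrative sketch only: the £2,000 cost and 0.5 QALY gain in the first example are assumed figures, while the £1,000 cost / £1,500 saving pair is the notes' own cost-benefit example.

```python
# Illustrative sketch of cost-effectiveness (cost per QALY) and
# cost-benefit arithmetic. Figures are assumptions, not study data.

def cost_per_qaly(extra_cost: float, qalys_gained: float) -> float:
    """Expenditure per QALY: extra spend divided by health gain."""
    return extra_cost / qalys_gained

def net_benefit(cost: float, savings: float) -> float:
    """Cost-benefit analysis: financial benefit minus intervention cost."""
    return savings - cost

# Assumed: an intervention costing £2,000 more that yields 0.5 extra QALYs
print(cost_per_qaly(2000, 0.5))   # 4000.0 -> £4,000 per QALY

# The notes' example: costs £1,000 per person, saves a mean £1,500
# in inpatient costs
print(net_benefit(1000, 1500))    # 500 -> net saving of £500 per person
```

The same ratio logic carries over to the other outcome measures mentioned, e.g. cost per 5-point improvement on a depression scale.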
- Qualitative research explores words and text rather than numbers.
- Variety of theoretical frameworks.
- Sources: interviews, focus groups, diaries, direct observation…
- Service users often involved: co-production models now frequently used.

Role of Qualitative Research in Evaluating Interventions
- Understanding preferences and what's feasible and/or acceptable
- Generating new ideas about what might work
- Understanding mechanisms, including why some interventions fail

Mixed methods: now standard in studies of complex interventions to combine quantitative and qualitative methods.

Summary
- RCTs are still the ideal way to test a new treatment: there is no better way of making sure the comparison is fair.
- They should be embedded in systematic, step-by-step development and testing.
- But they are not always feasible, or the most externally valid way of investigating treatments.
- Quasi-experiments, naturalistic studies, qualitative and mixed methods all have important roles.
- "Big data": key to a future of rapid and straightforward evaluations?

Key: Designing a Research Project: Randomised Controlled Trials and Their Principles

Definition and Key Features
RCTs are considered the gold standard for determining cause-effect relationships between interventions and outcomes. They involve:
1. Random assignment of subjects to experimental and control groups
2. Intervention applied to the experimental group
3. Comparison of outcomes between groups
RCTs are the most stringent way to determine whether a cause-effect relation exists between an intervention and an outcome.
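The three key features above (random assignment, intervention, comparison of outcomes) can be illustrated with a toy simulation. Everything here is invented for illustration: the participant IDs, outcome scores and seed are assumptions, not data from the lecture.

```python
import random

def randomise(participants, seed=None):
    """Randomly split participants into experimental and control arms,
    so any initial differences between groups arise by chance alone."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean(values):
    return sum(values) / len(values)

# Toy cohort of 8 participants with illustrative outcome scores
outcomes = {"p1": 12, "p2": 9, "p3": 15, "p4": 8,
            "p5": 11, "p6": 14, "p7": 10, "p8": 13}

experimental, control = randomise(outcomes, seed=42)

# Compare mean outcomes between the two randomly formed groups
difference = (mean([outcomes[p] for p in experimental])
              - mean([outcomes[p] for p in control]))
print("Experimental arm:", sorted(experimental))
print("Mean difference between arms:", round(difference, 2))
```

In a real trial the allocation sequence would also be concealed from recruiters (e.g. a remote randomisation facility), and blocked or stratified randomisation would typically replace the simple shuffle to guarantee equal group sizes and balanced baseline variables.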
Design Considerations

Minimizing Errors
- Bias: systematic errors in methodology
  - Selection bias: differences between study groups
  - Observer/information bias: systematic differences in data collection methods
- Confounding: factors associated with both intervention and outcome (example: age as a confounding factor in treatment effectiveness studies)
- Chance: random errors affecting results

Strategies to Reduce Errors
- Large sample size
- Concealed randomization
- Blinding of patients, investigators, and outcome assessors
- Intention-to-treat analysis
- Focus on the a priori hypothesis

Protocol Development
1. Formulate a specific, a priori hypothesis. Example: "Drug A is more efficacious in reducing diastolic blood pressure than drug B in patients with moderate essential hypertension."
2. Provide a rationale and focused literature review
3. Define study design elements: population sampling, randomization method, intervention application, outcome measures, analysis plan
4. Conduct peer review and seek expert advice
5. Perform a pilot study to test the methodology

Population Sampling
Key considerations:
- Define the target population
- Set appropriate inclusion/exclusion criteria
- Use consecutive sampling for best representativeness
- Consider stratified sampling for rare outcomes or high-risk populations
Sample size calculation:
- Determine the clinically significant difference
- Consult with a statistician
- Conduct a pilot study to estimate the recruitment rate

Randomization
Importance:
- Equally distributes confounding variables
- Basis for measuring differences between groups
Methods:
1. Computer-generated random allocations
2. Sealed, opaque, numbered envelopes
3. Remote randomization facility
Advanced techniques:
- Blocked randomization: ensures equal group sizes
- Stratified randomization: balances important baseline variables

Baseline Measurements
- Collect demographic information
- Measure important prognostic factors
- Demonstrate equal distribution of variables between groups

Conducting the Trial
1. Apply the intervention to the experimental group
2. Measure pre-defined outcomes
3. Implement quality control measures

Reporting
Follow CONSORT guidelines for reporting RCT results, including:
- Population sampling methods
- Randomization process
- Baseline characteristics of groups

Key Points for Validity
1. Representative sample of the target population
2. Adequate sample size
3. Effective concealment of randomization
4. Identical treatment of groups except for the intervention
5. Blinding of patients, investigators, and outcome assessors
6. Intention-to-treat analysis
7. Focus on the a priori hypothesis in analysis

Additional Considerations
- Ethical considerations: obtain informed consent from all participants; ensure ethical approval from relevant institutional review boards; consider potential risks and benefits to participants
- Data management and analysis: develop a detailed data management plan; use appropriate statistical methods; consider interim analyses and stopping rules for safety
- Reporting adverse events: establish a system for reporting and managing adverse events; include a safety monitoring committee for large trials
- Long-term follow-up: consider the need for long-term follow-up to assess durability of effects; plan participant retention strategies
- Cost-effectiveness analysis: include economic evaluation alongside clinical outcomes when appropriate
- Generalizability and external validity: discuss how the study population and setting may affect generalizability of results; consider conducting multi-center trials to enhance external validity

By adhering to these principles and considering these
aspects, researchers can design and conduct high-quality RCTs that minimize bias and confounding, providing robust evidence for the effectiveness of interventions.

Webpage: CONSORT 2010

Read:
- Improving the Delivery and Organisation of Mental Health Services: Beyond the Conventional Randomised Controlled Trial
- 'Big Data' in Mental Health Research: Current Status and Emerging Possibilities
- An Introduction to Economic Evaluation: What's in a Name?
- The Basics of Economic Evaluation in Mental Healthcare
- Qualitative Research Methods in Mental Health
- More Over RCT: Time for a Revised Approach to Evidence-Based Medicine
- Alternatives to Randomisation in the Evaluation of Public Health Interventions: Design Challenges and Solutions

Notes: Methods for Investigating Treatments and Services
- Paper 1: Adewuya et al.
- Paper 2: Johnson et al.
- Paper 3: Tsiachristas et al.

NIHR/MRC framework for development and evaluation of complex interventions (2021): the four main perspectives on which research on interventions may be based.

How are interventions developed?
MRC/NIHR framework for developing and evaluating interventions (2021): an update of an influential framework.
Main questions that should be addressed in developing an intervention for a study:
- What clinical problem is the intervention meant to address? Is there a demonstrable unmet need, and what is the relevant population?
- What evidence is already available? Are there interventions that could be used or adapted?
- What are the views of the relevant stakeholders (service users, carers, clinicians, researchers, experts) on what may be acceptable, workable and fits their priorities? There is increasing emphasis on co-production/co-design: involvement (not just consultation) at every stage.
- What is the programme theory underpinning the intervention? Can you describe how and why it is supposed to achieve change?
- What is the exact content of the intervention? Who will deliver it to whom?
In what context?

The main stages of development and evaluation of complex interventions: NIHR/MRC framework for development and evaluation of complex interventions (2021). The core elements in the middle box are revisited throughout development and evaluation of an intervention.

Programme Theories
- Interventions should be theory-driven, in the sense that they are based on a programme theory: the intended causal link between an intervention and its outcomes.
- Consensus is that there should be a clear programme theory underpinning any intervention tested in research (though this varies in practice).
- Ideally developed at the outset, then refined in the light of subsequent findings and experiences.
- May incorporate existing psychological or sociological frameworks (e.g. the Theory of Planned Behaviour).
- Often also include more practical, ad hoc ideas about the pathways by which an intervention may achieve an effect, incorporating previous research and stakeholder input.
- Basic requirement: start an investigation with a clear idea of the potential pathway by which an intervention is hypothesised to produce its expected effect.

Theories of Behaviour Change
- Theories of behaviour change are among the most frequently used pre-existing theoretical frameworks in intervention development; the stages of change model is a widely used example.
- Seen as central to many health interventions.
- Many grant applications/major studies include a relevant expert, often a health psychologist.
- Intervention development often involves adapting/applying well-established theoretical frameworks to a specific study context, along with evidence about pathways and stakeholder views.
- Result: a study-specific theory of change or logic model.
- Example: Community Navigator Trial - Community Navigators Study theory of change (Pinfold et al.): reducing loneliness for people with complex anxiety or depression.

RCTs in Current Research?
Drugs
- The RCT is the main paradigm.
- High proportion run by industry: issues of partial disclosure (trial registration procedures were introduced as a way to deal with this), short follow-up times, and selection of unfavourable controls that make the new therapy look more successful.
- Trials of medicinal products are highly regulated and monitored as CTIMPs (Clinical Trials of Investigational Medicinal Products).
- Publication bias is the greatest hazard: commentaries note an excess of positive published results.

Psychological Treatments
- RCTs are standard for CBT and 3rd-wave psychological treatments.
- More suspicion of RCT methodology in other areas, e.g. psychoanalysis (concern that subtleties may be overlooked in RCTs).
- Important considerations:
  - Manualising interventions: set out clear standards for each session to make it clearer what is being evaluated
  - Standardisation: therapists trained to the right level to deliver an intervention
  - Monitoring of delivery fidelity, often by taping sessions
  - Assessment of cost-effectiveness usually included

Complex Psychosocial Treatments
- Many RCTs of complex interventions with components beyond a therapist/patient interaction, e.g. crisis care, supported employment, early intervention.
- Quality improved by: multicentre design, co-production, theory development, thorough feasibility and pilot studies, manualisation, rigorous conduct overseen by CTUs, fidelity measurement.
- Delivery and replication are challenging; obstacles include:
  - Local factors: participating individuals and the organisational and cultural context affect the content and impact of both experimental and control interventions.
  - Individual randomisation may be challenging, e.g. where an intervention affects the practice of a team or organisation.
- Implementing interventions fully and consistently is often challenging, more so with complex team-level interventions.

Cluster Randomised Trials
- Randomise at a level higher than the individual, e.g. staff or team caseloads, or areas.
- Useful in some circumstances where it is hard to randomise individuals.
- Example: 40 rehabilitation inpatient units/wards randomly allocated between an innovative package to increase activity and a control condition; main outcomes measured at patient level; half implemented the new way of working and the other half did not.
- Challenges:
  - Having enough clusters for randomisation to produce similar groups (generally 10+ clusters per arm)
  - Bigger overall numbers are needed
  - Logistics, especially of assessment pre-randomisation
- New variants of the approach, e.g. stepped wedge designs: clusters start with no service and are randomised into the experimental arm in waves at various points during the study period, with assessment during the periods when they were and were not in the experimental condition.

Question: You are assessing the quality of a randomised controlled trial (RCT). Which three of the following are the most significant indicators of low quality in RCTs?
- Not blinding researchers
- Failing to record and report attrition in the intervention (treatment) and TAU groups
- A written protocol not being accessible

Question: What is the principal disadvantage of a cohort study compared to a randomised clinical trial in examining the relationship between treatments received and outcomes?
- Confounding is more likely

Question: Which three of the following are TRUE of good randomised controlled trial designs?
- A pilot study testing feasibility and acceptability of recruitment procedures, interventions and outcome measures should precede a definitive trial.
- Baseline measures should be assessed for each participant before they are randomised to a treatment group.
- A statistician who is independent of the study research group should conduct the randomisation of participants between experimental and control groups.
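The "bigger overall numbers" point for cluster randomised trials is usually quantified with the design effect, DE = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. This formula is standard trial methodology rather than something stated in these notes, and the numbers below (wards of ~40 patients, an ICC of 0.05, a base sample of 200) are illustrative assumptions.

```python
# Sketch of the standard design-effect calculation for cluster
# randomised trials. All numbers are illustrative assumptions.

def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from randomising clusters rather than
    individuals: DE = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def inflated_sample_size(n_individual: int, cluster_size: float,
                         icc: float) -> float:
    """Sample size needed once the individually randomised estimate
    is scaled up by the design effect."""
    return n_individual * design_effect(cluster_size, icc)

# A trial needing 200 participants under individual randomisation,
# run instead across wards of ~40 patients with a modest ICC of 0.05
print(round(design_effect(40, 0.05), 2))           # 2.95
print(round(inflated_sample_size(200, 40, 0.05)))  # 590
```

Even a small ICC inflates the required sample substantially, which is why cluster designs need both enough clusters per arm and larger overall numbers than individually randomised trials.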
Protocol Design Group (critique exercise):
- 'Suspiciousness' = vague; similar to 'paranoia', and there are already interventions to address paranoia
- 'Chief investigator…has tried it out in clinical practice' - ?ethics
- Exclusion criteria - ?dangerous and ?otherwise unsuitable
- Right to withdraw pushed by a 'highly assertive' research assistant
- Stakeholders: patients/individuals with psychosis not consulted
- Late informed consent
- Research assistants 'moved between experimental and control group - exactly similar age and sex': bypasses randomisation
- Should be 1 primary outcome: did the intervention succeed or not?

Non-Randomised Evaluations in Mental Health Care Research
Studies may be (overlapping categories):
- Observational: investigating care that is already delivered, without changing interventions
- Natural experiments: observational studies in which different groups receive different treatment for reasons beyond the investigators' control, allowing opportunistic investigation of differences in outcomes
- Quasi-experiments: non-randomised comparisons between groups receiving different treatments (including natural experiments)

Advantages:
- Options where randomisation is not feasible/acceptable (e.g. in public mental health interventions)
- Often cheaper and quicker; may use existing data (including big, routinely collected electronic datasets)
- Naturalistic: findings more applicable to the real world

Disadvantages:
- Confounding (harder to attribute effects confidently to the intervention and not to other potential influences)
- Often less control over the intervention
- Poorer quality of routinely collected data

Observational Research on Drugs
- Observational research: observe associations between potential explanatory and outcome variables without manipulating them.
- In psychopharmacology: observational designs used to compare patterns of adherence and discontinuation, outcomes, and side effects between drugs. Some of the most successful studies based on routine data address questions in psychopharmacology.
- Example study: 18,869 people starting antipsychotics in Quebec; routine clinician-recorded data used to investigate differences between drugs in rates of adverse events, hospital visits etc.
- Advantages: longer follow-up than most RCTs; unselected samples - estimated
