Lecture 5: Evaluation PDF
Vrije Universiteit Amsterdam
Summary
This document details different types of evaluation, such as effect, process, and economic evaluations, for health programs. It also presents the RE-AIM framework for guiding the planning and evaluation of such programs. The evaluation process considers factors like program reach, effectiveness, and adoption.
Lecture 5: Evaluation
---------------------

**Learning objectives:**

- Understand and describe the concept of evaluation
- Name the main elements of an evaluation plan
- Distinguish different types of evaluation
- Distinguish and name basic evaluation questions
- Describe and use the RE-AIM framework
- GRADE / quality of evidence

**Evaluation**

A process that critically examines a program. It involves systematically collecting and analyzing information about the program's activities, characteristics and outcomes.

**Reasons to evaluate**

- To determine the **effects of a program**: measure how successful it is in preventing and/or reducing the health problem
- **Accountability**: is the money spent wisely?
- **Development**: to improve the effectiveness and/or implementation of an intervention on a larger scale, and/or to inform programming decisions
- **Ethical aspects**: look for unwanted side effects

Too often evaluation is **omitted**. Evaluation is typically the last consideration in the planning and implementation of an intervention. Known reasons are a lack of money, perceived threat, time constraints, already proven effectiveness, and the fact that the intervention is still under development.

**Main elements of an evaluation plan:**

- Relevant evaluation questions
- Appropriate design and methods
- Program or intervention
- Outcomes
- Involvement of stakeholders

**Types of evaluation:**

- **Effect:** How effective is the program in preventing and/or reducing the health problem? This type of evaluation examines the **effects** (outcomes) of the program and provides the answer to the **primary research question** (dependent variable = primary outcome). The effect evaluation or program outcomes can be described for **health, quality of life, behavior and environment**. (An RCT is the best design for this type of evaluation.)
- **Process:** **Why** is the program (not) successful in preventing and/or reducing the health problem?
  This type of evaluation aims to understand processes in order to **strengthen or improve** the program being evaluated. It examines the **delivery and content** of the program, the quality of its implementation, the barriers to and facilitators of implementation, the organizational context, staff, procedures, inputs, etc. In this type of evaluation, a **qualitative** research method is often applied.
- **Economic:** Is the (preventive) health intervention **cost-effective** in preventing and/or reducing the health problem? This type of evaluation compares the costs and the outcomes of the (preventive) health intervention. It takes into account the **intervention** costs, the **treatment** costs (hospital, extra visits to the general practitioner) and the **societal costs** (absenteeism).

**Basic evaluation questions** refer to:

1. Reach, integrity, acceptability
2. Observed change
3. Internal validity
4. Effect explanation
5. Cost-benefit assessment
6. Applicability
7. Generalizability

An example of a basic evaluation question is: '*Was the program carried out as planned?*' The aspect referred to is **program integrity**, which is based on the **process** evaluation. For the question '*How many people have been reached?*', the aspect referred to is **program reach**, also based on the process evaluation. Possible questions for the different domains are listed below.

| Aspect | Question | Evaluation type |
|---|---|---|
| Program integrity | "Was the program carried out as planned?" | Process |
| Program reach | "How many people have been reached?" | Process |
| Internal validity | "Can the results be attributed to the program?" | Effect |
| Cost-benefit assessment | "What were the costs of the program? Do the benefits outweigh the costs?" | Effect |
| Generalizability | "Would the program yield similar results in another (real-life) setting?" | Effect |

Evaluation distinguishes:

- Study protocol → design paper
- Effect study → effect paper
- Process evaluation → process paper
- Economic evaluation → cost-effectiveness paper

**RE-AIM framework:**

A framework to guide the planning and evaluation of programs according to 5 key outcomes:

- **Reach:** The percentage and representativeness of individuals willing to participate in a program. It explores the **characteristics** of the study participants. "*Who* is **intended to benefit**, and who **actually participates** in or is exposed to the intervention?"
- **Effectiveness:** The **impact** of the intervention on the targeted outcomes. Outcome types can be health promotion outcomes, intermediate outcomes or health outcomes. "*What* is the **most important benefit** you are trying to achieve, and what is the likelihood of negative outcomes?"
- **Adoption:** The percentage and representativeness of settings and intervention staff that agree to deliver the program. Explores the facilitators of and barriers to adoption. "*Where* is the intervention applied, and *who* applies it?"
- **Implementation:** The consistency and skill with which the various program elements are delivered by various staff. "How easily can the study be conducted (*feasibility*)?" / "How consistently is the intervention delivered (*fidelity*)?" / "How will it be adapted (*adaptations*)?" / "How much will it cost (*costs*)?"
- **Maintenance:** The extent to which individual participants maintain behavior change **long term** and, at the setting level, the degree to which the program is sustained over time within the organizations delivering it. "When will the intervention become operational, and how long will it be sustained?"

**GRADE:** **G**rading of **R**ecommendations **A**ssessment, **D**evelopment and **E**valuation

**Quality of evidence**

Reflects the extent of our confidence that the estimates of the effect are correct. This can be high, moderate, low or very low.
Factors that lower the quality of evidence are study limitations (risk of bias), inconsistency of results, indirectness of evidence (surrogate outcomes), publication bias and imprecision.

Factors that increase the quality of evidence:

- Large magnitude of effect
- Dose-response gradient

Randomized controlled trials (RCTs), systematic reviews of RCTs and meta-analyses of RCTs provide the highest levels of quality of evidence, ordered from lower to highest. So a meta-analysis of RCTs provides the highest quality of evidence, followed by a systematic review of RCTs and then an RCT itself.
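The economic evaluation and RE-AIM reach ideas above both boil down to simple arithmetic. The sketch below illustrates them with made-up numbers: the incremental cost-effectiveness ratio (ICER) is one common way health economists compare costs and outcomes between an intervention and usual care, and reach is the share of the eligible population that actually participates. The figures and the ICER choice are illustrative assumptions, not part of the lecture.

```python
# Illustrative sketch with hypothetical numbers; the ICER is one common
# summary measure in economic evaluation, not prescribed by the lecture.

def icer(cost_new, effect_new, cost_old, effect_old):
    """Extra cost per extra unit of outcome for the new intervention."""
    return (cost_new - cost_old) / (effect_new - effect_old)

def reach(participants, eligible):
    """RE-AIM reach: share of the eligible population that participates."""
    return participants / eligible

# Hypothetical program: intervention + treatment + societal costs summed.
program_cost = 120_000 + 30_000 + 10_000      # euros
usual_care_cost = 50_000 + 60_000 + 40_000    # euros
program_effect = 85      # e.g. cases of the health problem prevented
usual_care_effect = 60

extra_cost_per_case = icer(program_cost, program_effect,
                           usual_care_cost, usual_care_effect)
print(f"ICER: {extra_cost_per_case:.0f} euros per extra case prevented")
print(f"Reach: {reach(400, 1000):.0%}")
```

With these numbers the program costs 10,000 euros more but prevents 25 more cases, giving 400 euros per extra case prevented and a reach of 40%; whether that is "cost-effective" depends on the decision maker's willingness to pay per case prevented.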