Cambridge Somerville Project PDF
Summary
This document describes the Cambridge Somerville Project – a large-scale social intervention program designed to prevent delinquency in boys from lower-class backgrounds in the Boston area. The analysis of the program reveals contradictory results: subjective reviews of the program's impact contrasted with the rigorous statistical evaluation. The document explores factors that might explain the intervention's failure, including environmental forces and potential negative consequences of participation. It also covers reactance, types of program evaluation (process, outcome, and developmental), a social-norms intervention at Northern Illinois University, and the validity of single-item measures.
Full Transcript
Reading: Tuesday, February 4, 2025, 8:42 AM

Intervention and Evaluation: The Cambridge Somerville Project:
- The program was a model of intervention design, with approximately 250 boys randomly assigned to the program, and another 250 or so randomly assigned to a control group.
- Although the program was the "kind of multifaceted intervention that many social scientists would love to see implemented today," it turned out to be a dismal failure.
- A major goal of the intervention was prevention; that is, to reduce the likelihood of young boys from lower-class backgrounds in a Boston suburb—some of whom were identified as delinquency prone—from going down a criminal path.
- The long-term effects of the program were evaluated in a series of studies for (remarkably) 40 years following the intervention.
- Two kinds of evaluative data sharply contradicted each other:
  ○ On the one hand, the subjective impressions of caseworkers and many program participants presented a positive picture of the benefits of the program.
  ○ On the other hand, the more reliable statistical evidence indicated that the program participants, as compared with the control group, had no fewer juvenile and adult offenses and did not fare better on a number of other indicators, such as health, mortality, and life satisfaction (a sketch of such a group comparison follows this section).
- Explaining Why it Failed:
  § Ross and Nisbett (1991) offered several plausible explanations for the failure of the Cambridge–Somerville project, including the fact that the situational factors (i.e., program activities) that were manipulated as the intervention, although impressive in terms of expenditure of time and human resources, were "trivial" compared with the environmental forces that the boys faced on an ongoing basis.
  § They also mentioned the possibility that being identified with the program might have had a stigmatizing effect on the boys, such that they and others would have viewed them as troubled and delinquency prone, with such a view becoming a self-fulfilling prophecy.
  § This explanation raises the unfortunate possibility that the intervention actually may have had a harmful effect on the boys. In fact, on several indicators (e.g., multiple offenses, alcoholism, achievement of professional status), the program participants were less well-off than the control participants during adulthood.
  § Another possible reason why the program may have worked to the detriment of the participants is that because the boys could be seen as already receiving help from the program, the usual community sources of help (e.g., clergy, teachers, social service agencies) might have been less likely to provide their assistance.
  § The possibility of unanticipated negative consequences must always be recognized and assessed in the evaluation of a program.
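The group comparison referred to above can be made concrete with a minimal sketch. The counts below are hypothetical placeholders, not the Cambridge–Somerville data, and the helper two_proportion_z is an illustrative function rather than anything from the readings; the point is only to show how an outcome evaluation can test whether randomly assigned program and control groups differ on an indicator such as the proportion of participants with a recorded offense.

```python
# Hypothetical illustration (all counts are made up, not the study's data):
# does the treatment group differ from the control group on an outcome
# indicator, as an outcome evaluation of a randomized design would ask?
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Treatment group: 250 boys, 90 with at least one recorded offense (hypothetical).
# Control group:   250 boys, 85 with at least one recorded offense (hypothetical).
z, p = two_proportion_z(90, 250, 85, 250)
print(f"difference in offense rates: {90/250 - 85/250:+.3f}, z = {z:.2f}, p = {p:.3f}")
```

With made-up counts like these, the difference is small and the p-value is large; that is the numerical shape of the project's finding that participants had no fewer juvenile and adult offenses than controls.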
Reactance:
- Reactance: the idea that when a source of influence threatens people's sense of freedom to think or behave as they see fit, people will act against the influence to protect their freedom.
- Even though an intervention is intended to help people, if the people feel pressure to change, they might resist the social influence attempt that the program represents.
- Designers of interventions often must take steps to minimize the undermining effects of reactance by avoiding the use of overly strong (i.e., reactance-triggering) persuasive communications, and by helping (as much as possible) to sustain in individuals a sense of choice or control about being exposed to program activities.
- The potential undermining role of reactance speaks to the need for program designers to understand and anticipate how program recipients will define the program's objectives, goals, and activities.

Evaluation Types and Methods:
- Process evaluation: undertaken to determine whether the program has reached its target audience (as identified in the intervention hypothesis), and whether the program activities (as outlined in the program's logic model) have been implemented in the prescribed manner.
  ○ They ask a question like: Is the program being implemented in the way in which it was planned?
- Outcome evaluation: assesses how well a program meets its objectives (i.e., short-term outcomes as described in the program logic model), and in a more comprehensive evaluation, it also assesses how well the program is achieving its goals (i.e., long-term outcomes, also part of the logic model).
  ○ Essentially, the overriding purpose of an outcome evaluation is to determine whether the hypothesized improvement in functioning occurs among the recipients of the program as a result of exposure to its activities.
- Developmental evaluation: involves testing or experimenting with new approaches to a problem—perhaps involving multiple trial interventions—with the intention of developing an innovative solution that can be subjected to process and outcome evaluation approaches.
  ○ Can be used when interventions are in a stage of early innovation or in situations of high complexity, like poverty or homelessness, where the causes and solutions to the problem are unclear and intervention stakeholders are not on the same page.

NIU Intervention:
- The goals of the intervention, which was developed and implemented at Northern Illinois University (NIU), were to reduce high-risk drinking among students and to reduce the incidence of injuries due to alcohol consumption.
- To reduce alcohol consumption among NIU students, an intervention that represented an application of social norm theory was designed.
  ○ Social norm theory: explains how people's behavior is influenced by their perceptions of what is normal.
  ○ Principles of Social Norm Theory:
    1. Individuals tend to conform to what they perceive to be the norm for a particular behavior.
    2. In some instances, individuals may behaviorally conform to misperceived norms.
    3. If misperceptions of norms are corrected, individuals will change their behavior to agree with the corrected perceptions.
- It was decided that the main program activity would be to use a mass media campaign. Given that most students at NIU reported that the campus newspaper was their primary source of information about campus activities, it was decided that a print media campaign would reach the largest number of students at the lowest cost.
- An additional program activity involved a means of increasing the likelihood that students would read and remember the campaign message. This entailed rewarding students who remembered the message and spread the message to others.
  ○ For example, groups of students were approached at random and asked, "Who knows how many drinks most NIU students drink when they party?" The student with the correct answer received $1. Students also received $5 for putting campaign posters on their dorm room walls.
- Evaluating the Intervention:
  ○ The evaluation sought to answer three questions:
    1. Did the perceived rate of high-risk drinking (defined as having more than five drinks when partying) among peers decrease to a more accurate perception?
    2. Did the rate of actual high-risk drinking decrease?
    3. Did the rate of alcohol-related injuries decrease?
  ○ To answer these three questions, baseline information was collected from the students in 1988, that is, before the intervention was implemented.
    § Baseline information refers to data that are collected on the target population prior to an intervention (i.e., the pretest) and that are compared with data collected after the intervention has been implemented (i.e., the posttest); a small pretest/posttest sketch follows this section.
    § A student survey was used in 1988 to collect three pieces of information about NIU students: the perceived rate of high-risk drinking among other students, the actual rate of high-risk drinking, and the rate of alcohol-related injuries.
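As a minimal sketch of the pretest/posttest logic, the snippet below invents small sets of survey answers (none of these values come from the NIU evaluation) and compares the rate of high-risk drinking and the mean perceived norm at baseline and after the campaign. The function name high_risk_rate and the cut-off of more than five drinks simply restate the definition given above.

```python
# Hypothetical pretest/posttest sketch for an NIU-style evaluation.
# All survey values are invented for illustration; "high-risk" follows the
# reading's definition of more than five drinks when partying.

def high_risk_rate(drinks_reported):
    """Proportion of respondents reporting more than five drinks when partying."""
    return sum(d > 5 for d in drinks_reported) / len(drinks_reported)

# Each list holds one respondent's answer to "how many drinks do you have when you party?"
baseline_actual = [2, 6, 8, 3, 7, 4, 9, 5, 6, 2]   # 1988 pretest (hypothetical)
posttest_actual = [2, 4, 6, 3, 5, 4, 7, 5, 3, 2]   # post-intervention (hypothetical)

# Perceived norm: respondents' estimates of how many drinks *most* students have.
baseline_perceived = [8, 9, 7, 10, 8, 9, 7, 8, 9, 8]
posttest_perceived = [6, 6, 5, 7, 6, 5, 6, 6, 5, 6]

print(f"actual high-risk rate:    {high_risk_rate(baseline_actual):.0%} -> {high_risk_rate(posttest_actual):.0%}")
print(f"perceived typical drinks: {sum(baseline_perceived)/len(baseline_perceived):.1f} -> "
      f"{sum(posttest_perceived)/len(posttest_perceived):.1f}")
```

In the actual evaluation, the posttest data would come from the same survey administered after the campaign, so that changes in perceived norms (question 1), actual high-risk drinking (question 2), and alcohol-related injuries (question 3) can each be compared against the 1988 baseline.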
Single-Item Measures - Validity:

Benefits of Single-Item Measures:
- Practical Advantages:
  ○ They reduce respondent burden, shorten surveys, and prevent item repetition, making them useful in resource-intensive research (e.g., diary studies, experience sampling). This helps retain participants, minimize non-response bias, and reduce cognitive strain, leading to more reliable data.
- Construct Validity & Reduced Contamination:
  ○ Single-item measures can be designed to capture key aspects of a construct while avoiding criterion contamination (irrelevant characteristics). Though concerns exist about their validity and reliability, research suggests they can still be effective when properly developed.

Study 1 and Table 1:
- Study 1 focused on assessing content validity, an essential first step in construct validation. To evaluate this, the study measured definitional correspondence, or how well an item aligns with a construct's definition (a small rating-summary sketch follows the definitions below). Using a large sample of naïve raters (working adults) helped ensure the measures were evaluated by a representative group. Since the single-item measures were designed with clear definitions and relevant examples, they were expected to demonstrate strong content validity based on these evaluations.
- Identified Constructs:
  ○ Content Validity:
    § The degree to which a measure accurately represents the concept it is intended to assess. It ensures that an item or scale captures all relevant aspects of a construct.
  ○ Construct Validation:
    § A broader process that establishes whether a measure truly assesses the theoretical concept it claims to measure. Content validity is an initial step in this process.
  ○ Definitional Correspondence:
    § The extent to which a measure's items align with the definition of the construct being measured. A high definitional correspondence means the item accurately reflects the construct without unnecessary or misleading content.
    § Definitional correspondence refers to how closely a measure's items match the precise definition of a construct. If an item clearly reflects the key aspects of the concept it intends to measure (either by explicitly stating the definition or providing relevant examples), it is considered to have strong definitional correspondence. This is used to establish content validity by ensuring that each item remains faithful to the construct's meaning without including unrelated or misleading elements.
  ○ Naïve Raters:
    § Individuals (in this case, working adults) who evaluate the content validity of measures without prior exposure to the constructs. They serve as an unbiased, representative sample to assess how well items correspond to construct definitions.
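As a purely illustrative sketch (the item names, ratings, and the 4.0 cut-off below are invented, not Study 1's materials or analysis), definitional-correspondence ratings collected from naïve raters could be summarized by averaging each item's scores and flagging items whose mean falls below a chosen threshold.

```python
# Hypothetical definitional-correspondence summary, in the spirit of a
# content-validity check: naïve raters score how well each single item
# matches its construct definition (say on a 1-5 scale), and each item is
# summarized by its mean rating. All names and numbers are illustrative.

ratings = {
    "job_satisfaction_item": [5, 4, 5, 4, 5],
    "work_stress_item":      [3, 4, 3, 2, 3],
}

for item, scores in ratings.items():
    mean = sum(scores) / len(scores)
    verdict = "adequate correspondence" if mean >= 4.0 else "review wording"
    print(f"{item}: mean rating {mean:.2f} ({verdict})")
```

A real content-validation effort would follow the study's own rating task and analysis; this snippet only shows the kind of item-level summary such ratings make possible.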
Discussion:
- The study provides evidence-based support for the reliability and validity of single-item measures for many constructs. It emphasizes that researchers should evaluate their use contextually rather than relying on subjective biases. While not all constructs can be measured with single-item measures, their use does not inherently weaken research design or compromise validity for convenience.
- The research also offers a comprehensive review of single-item measures, providing a practical compendium for scholars to use while maintaining measurement validity. The findings highlight key applications, a structured process for developing and validating single-item measures, and future research directions.