Research Methods in Clinical Psychology 2019/20
78 Questions

Questions and Answers

Clinical significance can be assessed using Kendall's equivalency testing method.

False

Clinical significance is expressed when a client's post-intervention score represents abnormal functioning.

False

Clinical significance is expressed when a client's post-intervention score has moved outside the range of the dysfunctional population.

True

Clinical significance is reached when intervention and control groups differ significantly regarding the primary outcome.

True

Seligman (1995) identifies five properties of psychotherapy that are difficult to assess in controlled studies. Which of the following is one of these properties?

Psychotherapy in the field is self-correcting.

How can the results of small-n-studies be generalized across studies?

...using so-called clinical replication studies.

An audit in clinical research is the basis of quality assurance used to set up procedures in services.

True

An audit in clinical research is a concept that refers to treatment efficacy studies in outpatient services.

False

An audit in clinical research is an approach that is comparable (if not synonymous) to program evaluation.

True

An audit in clinical research describes a cyclical process, as is typically present in evaluation research.

True

In comparison to fMRI, EEG has a much higher spatial resolution.

False

PET is the only method to infer causal effects of differences in brain activity on behavioral outcomes.

False

Using TMS, it is not possible (nor advisable) to try to reach subcortical structures.

False

Neuro-scientific methods assess only functional but not structural malfunctioning.

False

Outcome evaluation in program evaluation research includes the assessment of positive outcomes and benefits for the clients.

True

Outcome evaluation in program evaluation research includes the assessment of negative outcomes.

True

Outcome evaluation in program evaluation research includes the assessment of client satisfaction (with the service).

True

Outcome evaluation in program evaluation research includes the assessment of outcomes of some individuals (case tracking).

True

Before calculating a combined effect size, we need to ensure that all individual effect sizes have the same metric.

True

Meta-analyses are highly powered interval-based estimates of parameters based on replication principles.

True

Meta-analyses can be computed to generalize data across a number of small-n-design studies.

True

“Random effects” meta-analyses are a way to address and model the heterogeneity between studies.

True

The validity of a measurement is computed for metric data using Cronbach's alpha (internal consistency).

False

The validity of a measurement can be assessed by means of face validity (as provided by means of qualitative expert ratings).

True

The validity of a measurement is always (numerically) larger than the reliability of the measurement.

False

The validity of a measurement assesses the meaning of measurement (as specified in its definition).

True

“Random effects” is a term that refers to approaches that assume that parameter estimations are the results of random experiments but remain stable across different studies/participants.

True

“Random effects” are evident in some types of non-overlapping pairs approaches in single case designs.

True

“Random effects” are computed in some multi-level models.

True

“Random effects” models are computed to generalize the results of meta-analyses to the whole population of studies.

True

SCED refers to “single case experimental designs”.

True

SCED designs rely on only a few participants.

True

SCEDs have the advantage over larger-scale clinical research studies of producing data that do not constitute a time series.

False

ABAB studies represent typical designs for SCED.

True

Using small-n pilot studies could lead to underestimations of statistical power in Randomized Controlled Trials.

True

ANCOVA models should be preferred over ANOVA models in Randomized Controlled Trials.

True

A PPF (pre-post-follow-up) design is the best way to assess clinical outcomes and to evaluate the efficacy of a treatment in Randomized Controlled Trials.

False

The term 'controlled' in Randomized Controlled Trials refers to the inclusion of a control group.

True

The number of studies that can be included in a meta-analysis is a limitation of meta-analyses.

False

Qualitative information is ignored in meta-analyses.

False

The quality of the results depends on the quality of the included studies in meta-analyses.

True

Effect sizes do not convey meaningful information in meta-analyses.

False

CONSORT stands for 'Consolidated Standards of Reporting Trials'.

True

CONSORT includes a flow diagram which displays the steps of statistical analyses to be conducted in RCTs.

False

CONSORT provides a checklist to avoid biased estimates in analyses of RCTs.

True

CONSORT provides guidelines on how to report RCTs.

True

Client survey methods are a typical example of assessments in therapy process research.

True

Audio-tape recordings of therapy sessions are a typical example of assessments in therapy process research.

True

Therapist self-report measures are a typical example of assessments in therapy process research.

True

Observational methods are a typical example of assessments in therapy process research.

True

ITSACORR is a specific kind of time series analysis.

True

ITSACORR is known to be (statistically) flawed but can be used in SCED studies with only minimal corrections in the error terms.

False

ITSACORR is the most current type of multi-level analysis for SCED studies.

False

ITSACORR is specifically designed to deal with short time series.

False

Multilevel models (Multilevel modeling, MLM) are recommended in cases of missing data and unbalanced designs.

True

Multilevel models (Multilevel modeling, MLM) represent the state of the art of statistical analyses in RCTs.

True

Multilevel models (Multilevel modeling, MLM) are sometimes just called mixed model regressions.

True

Multilevel models (Multilevel modeling, MLM) should not be applied in therapy process research.

False

Outcome domains in therapy process research are...

Change in couple communication (= environments)

Klaus Grawe identified the therapeutic bond as a main mechanism of change in psychotherapy.

True

Klaus Grawe identified 5 mechanisms of change particularly important in psychodynamic therapies (but not in cognitive behavioural psychotherapy).

True

Klaus Grawe identified symptom shifts as typical change mechanisms in psychotherapy.

True

Klaus Grawe conducted the first meta-analysis to identify typical change mechanisms in psychotherapy.

False

Small-n-studies are well known to lead to underpowered RCTs when they are used to estimate necessary sample sizes.

True

Small-n-studies are essential for establishing treatment efficacy in therapy research.

False

Small-n-studies can be reliably analyzed using graphs and diagrams (=visual data analysis).

True

Small-n-studies are best analyzed using multilevel random effects modeling.

True

In evaluation research, there is a distinction made in service evaluation between needs, demands, and supply. Explain what is meant by these concepts, and how they differ, using an example of your choice.

In the context of service evaluation, needs, demands, and supply represent distinct aspects of service provision and utilization. Needs embody the fundamental requirements or desires of individuals or communities. Demands reflect the expressed requirements or requests for services based on perceived needs. Supply refers to the availability and accessibility of resources and services to meet the expressed demands. For instance, in healthcare, the need might be the presence of a mental health condition. The demand would be individuals actively seeking treatment for that condition, while the supply represents the availability of mental health professionals and facilities to meet the demand. Needs, demands, and supply thus highlight the potential discrepancy between the fundamental requirements of individuals and the actual resources available to meet those requirements.

________ is a study which combines results from more than one quantitative empirical study into a single estimate of the parameter.

Meta-analysis

_________ represents a more powerful and precise statistical test than ANOVA when used for randomized treatment studies.

ANCOVA

Internal consistency (Cronbach's Alpha) is the standard way of assessing the ________ of a scale that is composed of multiple similar items. The assumption is that the items are equivalent or parallel, that is, that they all aim to tap the same underlying construct.

reliability

It is important that covariates are measured ________ treatment begins; otherwise, differences between conditions at the first assessment will be adjusted away, artificially equalizing the conditions.

before

In clinical research, external validity refers to the generalizability of the study's findings to other populations, settings, and interventions. Explain what is meant by external validity in the context of clinical research, and how is it defined in this case?

External validity in clinical research assesses how well the findings from a study can be generalized to other populations, settings, and interventions beyond the specific context of the original study. It addresses the question of whether the observed results from a study apply to individuals and situations that were not directly investigated. In the context of a clinical trial, external validity would examine whether the results of a particular intervention studied with a specific population in a specific setting are likely to hold true for individuals who received the intervention in different settings or for individuals with different characteristics.

When comparing different therapeutic interventions, what factors need to be considered to ensure a fair and accurate comparison? Describe some problems and pitfalls that need to be considered.

Comparing therapeutic interventions requires careful consideration to ensure a fair and unbiased analysis. Factors like the characteristics of the participants, the intervention delivery methods, and the outcome measures must be meticulously assessed. Potential problems and pitfalls include:

Participant characteristics: it is crucial to ensure that the intervention groups are comparable in terms of demographic factors, clinical characteristics, and baseline severity of the condition. Differences in these factors can skew the results and make it difficult to isolate the effects of the intervention.

Intervention delivery: the methods used to deliver the interventions should be consistent across groups. Variations in therapist experience, intervention intensity, or adherence to protocols can introduce bias and limit the comparability of findings.

Outcome measures: choosing reliable and validated outcome measures that are sensitive to changes in the target condition is essential. Using different measures or inconsistent measurement procedures can impede the comparison of results, and the measures must be relevant to the specific goals and interventions being compared.

Confounders: factors that can influence the outcome but are not directly related to the intervention must also be considered; failing to address confounders can lead to biased assessments of the interventions' effectiveness.

In summary, a comprehensive evaluation of therapeutic interventions requires a systematic approach that considers potential confounders and ensures that the groups being compared are similar in relevant aspects.

What are 3 different methods to assess the reliability of an observation? Which of these methods assumes nominal scales of the ratings?

Parallel forms reliability

What is “creaming”? At which step of a service evaluation could ‘creaming’ create a problem and for whom?

Creaming refers to a phenomenon in service evaluation where individuals with better or more favorable characteristics, often those with less severe needs or better resources, are disproportionately selected for or attracted to a particular program or service. This selective enrollment can create a bias in the evaluation process by making it difficult to assess the true effectiveness of the program for individuals with a broader spectrum of needs or characteristics. Creaming can create problems in the recruitment or selection phase of a service evaluation. It can lead to an overestimation of the service's effectiveness, as the individuals participating in the program may be more likely to experience positive outcomes due to their pre-existing advantages. This can create misleading conclusions about the overall effectiveness of the service, as it may not represent the true impact on individuals with more diverse or challenging needs. Therefore, addressing creaming is crucial for ensuring that the service evaluation is representative of the actual population and that the effectiveness of the service is accurately assessed.

In clinical research, there is a distinction made between efficacy and effectiveness studies. Explain in your own words what these terms refer to.

In clinical research, efficacy studies focus on determining whether a treatment intervention works under ideal conditions. They typically involve controlled settings with a specific population and well-defined protocols. Efficacy studies aim to answer the question: "Does the treatment work in controlled settings?" Effectiveness studies, on the other hand, evaluate an intervention in real-world settings with a more diverse population and factors that may not be controlled. Effectiveness studies address the question: "Does the intervention work in routine settings?" For instance, an efficacy study might evaluate a new medication for depression in a controlled trial with a carefully selected group of participants. An effectiveness study could assess the impact of the same medication in a community setting with a more heterogeneous group, taking into account real-world factors like medication adherence and concurrent interventions.

Name four reasons why clinical psychologists (who are practitioners) might not engage in research, and explain one of these reasons by your own words.

Clinical psychologists who are practitioners might not engage in research for several reasons, ranging from practical constraints to personal preferences:

Time constraints: clinical practice demands a significant time commitment, often leaving little room for research activities that require dedicated time, effort, and resources.

Lack of funding: research requires funding for data collection, analysis, and dissemination, which clinical psychologists may not have access to.

Limited knowledge and skills: engaging in research requires specific training in research methodology, design, and analysis; clinical psychologists may lack this training or feel unconfident in their research expertise.

Lack of interest or motivation: some clinical psychologists find satisfaction and fulfillment in their clinical work and prioritize direct patient care over research activities.

Taking time constraints as an example: clinical practice is extremely demanding, with extensive time dedicated to counseling, assessments, and other patient-related tasks. Finding the time and resources to plan a study, collect and analyze data, and write it up is a significant challenge for practitioners who already carry a demanding clinical workload, which can make research a daunting endeavor.

Study Notes

Exam Research Methods - 2019/20

  • Clinical Significance:

    • Can be assessed using Kendall's equivalency testing.
    • Expressed when a client's post-intervention score is outside the range of the dysfunctional population.
    • Reached when intervention and control groups differ significantly on primary outcome.
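As a concrete illustration (not part of the original exam material), the Jacobson-Truax reliable change index (RCI) and cutoff criterion are the classic way to operationalize clinical significance. The sketch below is minimal; the function names and all example values (scale means, SDs, reliability) are hypothetical, chosen only to show the mechanics:

```python
from math import sqrt

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax RCI: observed change scaled by the SE of the difference."""
    se_measurement = sd_pre * sqrt(1 - reliability)
    se_diff = sqrt(2) * se_measurement
    return (post - pre) / se_diff

def cutoff_c(mean_dys, sd_dys, mean_func, sd_func):
    """Criterion c: the variance-weighted point between the dysfunctional
    and functional population distributions."""
    return (sd_func * mean_dys + sd_dys * mean_func) / (sd_dys + sd_func)

# Hypothetical symptom scale where lower scores = healthier functioning
pre_score, post_score = 40, 25
rci = reliable_change_index(pre_score, post_score, sd_pre=10, reliability=0.84)
c = cutoff_c(mean_dys=50, sd_dys=10, mean_func=20, sd_func=10)
reliable = abs(rci) > 1.96                            # beyond measurement error
clinically_significant = reliable and post_score < c  # entered functional range
```

With |RCI| > 1.96 the change is unlikely to reflect measurement error alone; crossing c corresponds to the "moved outside the range of the dysfunctional population" criterion above.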
  • Seligman (1995) Psychotherapy Properties:

    • Psychotherapy in the field is self-correcting.
    • Therapy duration isn't fixed in the field.
    • Patients in field settings often have multiple problems.
    • Clinical equivalence is often a focus in field settings.
  • Generalizing Small-N Studies:

    • Clinical replication studies can help generalize results.
    • Multi-level methods can be used to generalize results.
    • Meta-analyses based on effect sizes are another method for generalization.
    • Small N studies may not be generalizable due to limited power.
  • Audit in Clinic Research:

    • Sets up procedures in services.
    • Refers to treatment efficacy studies in outpatient settings.
    • Comparable to program evaluation.
    • Illustrates a cyclical evaluation process.
  • Neuroscientific Methods:

    • EEG has much higher temporal, but lower spatial, resolution than fMRI.
    • TMS is not generally used/warranted for subcortical structures.
    • PET does not infer causal effects; causal inference about brain activity requires stimulation methods such as TMS.
    • Neuroscientific methods evaluate both brain function and structure.
  • Outcome Evaluation:

    • Includes client outcomes, satisfaction, and service benefits.
    • Outcome evaluation assesses positive and negative outcomes in service evaluations.
    • Outcome evaluations also include case tracking.
  • Meta-analyses:

    • Combined effect sizes are crucial for proper calculation.
    • Interval-based estimates of parameters are derived from replication principles.
    • Generalizing data from small-N studies is possible through meta-analyses.
    • Meta-analyses account for heterogeneity between studies.
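To make the "random effects" and heterogeneity points concrete, here is a minimal sketch of DerSimonian-Laird random-effects pooling, the classic estimator for between-study variance (tau-squared). Function names and the two-study example values are ours, not from the lecture; all effect sizes are assumed to share one metric, as required above:

```python
def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of effect sizes (same metric!)."""
    w = [1.0 / v for v in variances]            # inverse-variance (fixed) weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)               # between-study variance estimate
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# Two hypothetical studies with heterogeneous effects (d = 0.2 vs d = 0.8)
pooled, se, tau2 = random_effects_meta([0.2, 0.8], [0.04, 0.04])
```

When the studies are heterogeneous, tau2 > 0 widens the pooled confidence interval, which is exactly how a random-effects model "addresses and models the heterogeneity between studies".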
  • Measurement Validity:

    • Cronbach's alpha assesses internal consistency (for metric data).
    • Face validity (using expert ratings) can also be used.
    • Measurement validity pertains to the meaning of measurements.
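Cronbach's alpha (an internal-consistency, i.e. reliability, index) can be computed directly from item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with hypothetical data (3 items, 4 respondents); the function name is ours:

```python
from statistics import variance  # sample variance

def cronbach_alpha(items):
    """items: one list of scores per item (positions = respondents)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

# Hypothetical 3-item scale answered by 4 respondents
alpha = cronbach_alpha([[1, 2, 3, 4],
                        [1, 3, 3, 4],
                        [2, 2, 3, 4]])
```

The assumption behind alpha, as stated in the fill-in item above, is that the items are equivalent or parallel, all tapping the same underlying construct.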
  • "Random Effects" in Studies:

    • Parameter estimations remain stable across different studies.
    • Can be shown in non-overlapping pairs approaches in single-case designs.
    • Random-effects models generalize meta-analysis results to the whole population of studies.
  • SCED Methodology:

    • Refers to single-case experimental designs (SCEDs).
    • Relies on a few participants.
    • Advantages in clinical research include flexibility.
    • ABAB studies are one type of SCED.
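One of the "non-overlapping pairs" approaches mentioned in the random-effects block above is NAP (Non-overlap of All Pairs), a common effect-size index for SCED phase comparisons. A minimal sketch, with hypothetical AB-phase data and our own function name:

```python
def nap(phase_a, phase_b, higher_is_better=True):
    """Non-overlap of All Pairs: share of (baseline, treatment) pairs in which
    the treatment observation shows improvement (ties count as 0.5)."""
    score = 0.0
    for a in phase_a:
        for b in phase_b:
            if b == a:
                score += 0.5
            elif (b > a) == higher_is_better:
                score += 1.0
    return score / (len(phase_a) * len(phase_b))

# Hypothetical data: complete separation between phases -> NAP = 1.0
full = nap([2, 3, 2], [5, 6, 7])
chance = nap([1, 2], [1, 2])  # total overlap -> chance level 0.5
```

NAP of 1.0 means every treatment-phase observation beats every baseline observation; 0.5 is chance-level overlap.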
  • Randomized Controlled Trials (RCTs):

    • Small pilot studies may underestimate statistical power in RCTs.
    • ANCOVA may be used instead of ANOVA if covariates exist.
    • The inclusion of a control group is key for a controlled trial.
    • Pre-Post-Follow-up (PPF) design is suitable for evaluating clinical outcomes and treatment efficacy.
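The ANCOVA point above can be sketched numerically: regressing the post score on a group indicator plus the baseline covariate yields a baseline-adjusted treatment effect. This is a bare-bones ordinary-least-squares illustration via the normal equations (no statistics library); the data and function names are hypothetical:

```python
def _solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for i in range(3):
        pivot = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[pivot] = m[pivot], m[i]
        for r in range(i + 1, 3):
            factor = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= factor * m[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (m[i][3] - sum(m[i][c] * x[c] for c in range(i + 1, 3))) / m[i][i]
    return x

def ancova_fit(group, baseline, post):
    """OLS fit of the ANCOVA model: post = b0 + b1*group + b2*baseline."""
    rows = [[1.0, g, x] for g, x in zip(group, baseline)]
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, post)) for i in range(3)]
    return _solve3(xtx, xty)

# Hypothetical noise-free RCT data: post = 2 + 1.0*group + 0.5*baseline
group = [0, 0, 0, 0, 1, 1, 1, 1]
baseline = [1, 2, 3, 4, 1, 2, 3, 4]
post = [2.0 + 1.0 * g + 0.5 * x for g, x in zip(group, baseline)]
intercept, treatment_effect, slope = ancova_fit(group, baseline, post)
```

Because the baseline covariate absorbs pre-existing variance, the group coefficient is estimated more precisely than a plain post-score ANOVA, which is why ANCOVA is preferred, provided the covariate was measured before treatment began.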
  • Meta-analysis Limitations and Misconceptions:

    • The quality of the results depends on the quality of the included studies.
    • The number of studies that can be included is not in itself a limitation.
    • Qualitative information is not simply ignored in meta-analyses.
    • Effect sizes do convey meaningful information.
  • CONSORT:

    • Consolidated Standards of Reporting Trials.
    • The CONSORT flow diagram displays participant flow through the trial, not the steps of statistical analysis.
    • Guidelines ensure accurate reporting of RCTs.
    • Checklist aids in minimizing bias in RCT analyses.
  • Therapy Process Research Assessments:

    • Client surveys are methods used.
    • Audio-taped recordings and therapist self-reports are common methods.
    • Observational methods can also be useful.
  • ITSACORR:

    • A type of time-series analysis.
    • Known to be statistically flawed; its use is no longer recommended, even in SCED studies.
    • Not the most current multilevel approach for SCEDs and not specifically designed for short time series.
  • Multilevel Models (MLM/Multilevel Modeling):

    • Suitable for missing data and unbalanced designs.
    • Useful in statistical analyses of RCTs.
    • Often referred to as mixed model regressions.
    • Can also be applied in therapy process research.
  • Outcome Domains:

    • Diagnoses, symptoms, client experience, (e.g., consumer satisfaction) and change in couple communication.
  • Klaus Grawe (Mechanisms of Change):

    • Identified the therapeutic bond as a main change mechanism.
    • Identified 5 change mechanisms in psychodynamic therapy.
    • Found change mechanisms associated with symptom shifts in psychotherapy.
  • Small-N Studies:

    • Can lead to underpowered RCTs when determining sample sizes.
    • Not essential for establishing treatment efficacy in therapy research.
    • Data analysis using graphs and diagrams can be used.
    • Best analyzed using multilevel random effects modeling.
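The sample-size point can be illustrated with the standard normal-approximation formula for a two-sided, two-group comparison of means: n per group = 2 * ((z_alpha + z_beta) / d)^2. The sketch below (our function name, hypothetical effect sizes) shows how an effect size inflated by a small pilot leads to an underpowered trial; note the normal approximation runs slightly below the exact t-based value:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison at standardized effect size d."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = .05
    z_b = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

# Suppose a small pilot inflates the true effect d = 0.3 to d = 0.6:
planned = n_per_group(0.6)  # trial planned on the optimistic pilot estimate
needed = n_per_group(0.3)   # what the true effect would actually require
```

With the pilot-based plan recruiting roughly a quarter of the participants actually needed, the resulting RCT is severely underpowered, which is the mechanism behind the statement above.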
  • Service Evaluations (Needs, Demands and Supply):

    • Needs refer to the services needed.
    • Demands refer to the requested service.
    • Supply is the actual service availability.
  • Filling Missing Words (Quantitative Methods):

    • Meta-analysis combines results from multiple studies into one estimate.
    • ANCOVA represents a more powerful and precise statistical test than ANOVA for randomized treatment studies.
    • Internal consistency, assessed with Cronbach's alpha, is for items on a scale.
  • External Validity:

    • Refers to how findings can be generalized to other contexts.
    • In clinical research, it is defined as the degree to which a study's results hold for other populations, settings, and interventions.
  • Comparing Interventions:

    • Consideration of the measurement tools for both groups is essential.
    • Avoid methodological pitfalls and problems.
    • Potential bias exists if groups differ in terms of participants.
  • Reliability Assessments:

    • Inter-rater reliability, where multiple raters assess observations.
    • Agreement and consistency among raters (inter-rater reliability) can be measured to test reliability.
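For observations rated on nominal scales, Cohen's kappa is the standard chance-corrected agreement index for two raters. A minimal sketch; the rating data (diagnostic codes) are hypothetical and the function name is ours:

```python
from collections import Counter

def cohens_kappa(ratings_1, ratings_2):
    """Chance-corrected agreement between two raters on nominal categories:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    n = len(ratings_1)
    p_observed = sum(a == b for a, b in zip(ratings_1, ratings_2)) / n
    counts_1, counts_2 = Counter(ratings_1), Counter(ratings_2)
    p_chance = sum(counts_1[cat] * counts_2.get(cat, 0) for cat in counts_1) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical diagnostic codes assigned by two independent raters
kappa = cohens_kappa(["MDD", "GAD", "MDD", "PTSD", "GAD", "MDD"],
                     ["MDD", "GAD", "GAD", "PTSD", "GAD", "MDD"])
```

Unlike raw percent agreement, kappa subtracts the agreement expected by chance from the raters' marginal category frequencies, which is why it is the method that assumes nominal scales among those asked about above.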
  • Creaming:

    • Process where clients who would benefit the most from a service are sought.
    • Happens during service evaluation, causing potential issues.
    • Creaming can create problems by selecting clients who may not accurately represent the general population of clients in need.
  • Efficacy vs. Effectiveness Studies:

    • Efficacy studies examine whether an intervention works under ideal conditions.
    • Effectiveness studies determine whether an intervention works in routine clinical settings
  • Clinical Psychologist Research Participation Barriers:

    • Reasons include workload and time constraints.
    • Lack of resources/funding.
    • Lack of training/experience.
    • Concerns regarding experimental design implementation.
  • Clinical Significance Assessment:

    • Determining whether an intervention's effect is meaningful in a clinical setting.
    • Difficulty is in defining a meaningful effect size.
    • Statistical significance does not equal clinical significance.
  • Unpaired t-test for RCTs:

    • Not suitable for group comparison in RCTs.
    • Unpaired t-test can overestimate significance when comparing two non-independent groups.
  • Transference-Focused Therapy Evaluation:

    • Ethical considerations may influence control group selection.
    • The chosen control group should ideally have similar characteristics to the experimental group.
    • Participants for control groups must be adequately selected.
    • Using ANCOVA to account for baseline differences through covariates, in order to avoid bias and to generalize the data appropriately.

Description

This quiz covers essential concepts and methods used in clinical research, including clinical significance, psychotherapy properties, generalizing small-N studies, and audit procedures in clinic research. Test your understanding of these critical topics and their applications in the field of psychology.
