Calculating CVI in Research Methods

What is the purpose of the four-point Likert scale in content validity?

To determine the equivalence of items in an instrument

What is the minimum CVI value required for an item to be considered valid?

0.80

What type of validity involves correlating a test with a criterion variable?

Criterion-related validity

What is an example of predictive validity?

The Barthel Index predicting performance in ADL

What type of validity is assessed when a test is compared to a criterion variable at the same time?

Concurrent validity

What is the purpose of calculating the CVI value?

To assess the content validity of an instrument

What is the minimum factor loading value required in confirmatory factor analysis?

0.5

What is the purpose of confirmatory factor analysis?

To confirm if the items are measuring what they are supposed to measure

What is the recommended minimum value for the total variance explained in factor analysis?

50%

What is the term for the ability of a measure to consistently produce the same results?

Reliability

What is the recommended minimum value for the KMO value in factor analysis?

0.7

What is the term for the degree to which a measure is actually measuring what it claims to measure?

Validity

What is the recommended minimum number of factors to be extracted in factor analysis?

1

What is the term for the correlation between multiple constructs?

Discriminant validity

What is the term used to describe the consistency of ratings between two or more observers?

Interobserver reliability

What is the measure used to assess inter-observer reliability?

Kappa coefficient

What is the acceptable range for the corrected Item-Total correlation?

0.30 < r < 0.85

What is the term used to describe the correlation between the responses at two time points?

Correlation coefficient

What is the ideal Cronbach's alpha value?

> 0.70

What is the range for fair agreement according to the Kappa coefficient?

0.21–0.40

What is the scale used to interpret the Kappa coefficient?

The Landis and Koch benchmark scale (poor through almost perfect agreement)

What is the term used to describe the correlation between the items in a set of measures?

Interitem reliability

What does a test with 100% sensitivity indicate?

The test correctly identifies all patients with the disease

What is a true positive in a clinical test?

The patient has the disease and the test is positive

Which of the following strategies is NOT a way to increase the sensitivity of a clinical test?

Using a test that is 100% specific

What is the purpose of kappa testing in assessing the reliability of a clinical test?

To measure the agreement between raters or between repeated administrations of the test

What is the range of values for a correlation coefficient in measuring reliability?

-1 to +1

What does a test with 80% specificity indicate?

The test correctly identifies 80% of patients without the disease

What is a false positive in a clinical test?

The patient does not have the disease but the test is positive

What is the minimum test-retest reliability value considered to be quite good?

+0.6

What is reliability in research?

The ability to reproduce a consistent result in time and space, or from different observers

What is the purpose of test-retest reliability?

To measure the stability of a measuring instrument over time

What is the problem with too short an interval between tests in test-retest reliability?

Respondents may remember and simply repeat their previous responses

What is the purpose of kappa statistics of agreement (κ)?

To measure the agreement between raters

What is the value of kappa (κ) when the raters are in complete agreement?

1

What is the purpose of the intraclass correlation coefficient (ICC)?

To measure the agreement between raters

What is the purpose of correlating the scores in test-retest reliability?

To find the correlation between the responses at the two time points

What is the problem with too long an interval between tests in test-retest reliability?

Real change in behavior may have occurred between the two tests

Study Notes

Content Validity

  • Assessed using a four-point Likert scale that rates the equivalence of each item
  • Items are scored as:
    • 1: Non-equivalent item
    • 2: Item needs extensive revision
    • 3: Equivalent item, needs minor adjustments
    • 4: Totally equivalent item
  • CVI (Content Validity Index) is calculated by counting the ratings of 3 and 4 and dividing by the total number of ratings (see the sketch after this list)
  • CVI should be at least 0.80 and preferably higher than 0.90
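
A minimal sketch of the CVI calculation in Python; the expert ratings are invented for illustration:

    def content_validity_index(ratings):
        # Count the ratings of 3 or 4 (equivalent or near-equivalent items)
        agreeing = sum(1 for r in ratings if r >= 3)
        return agreeing / len(ratings)

    # Nine of ten experts rate the item 3 or 4, so CVI = 0.90 (valid, since >= 0.80)
    print(content_validity_index([4, 4, 3, 4, 2, 3, 4, 4, 3, 4]))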

Criterion-Related Validity

  • A measure of how well a test predicts what it is designed to predict
  • Involves correlation between the test and a criterion variable (or variables)
  • Two types:
    • Predictive validity: the test accurately predicts what it is supposed to predict (e.g., the Barthel Index predicting performance in ADL)
    • Concurrent validity: the predictor and criterion data are collected at the same time

Construct Validity

  • Concerns whether the items are measuring what they are supposed to measure
  • Confirmed using Confirmatory Factor Analysis (CFA) with AMOS software
  • Key factors to consider:
    • Model fit indices
    • Convergence: factor loadings (minimum 0.5), Average Variance Extracted (AVE), and Composite Reliability (CR); a calculation sketch follows this list
    • Discriminant validity (correlation between constructs)
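
The notes fit the CFA in AMOS; as a language-neutral illustration, here is a minimal Python sketch of the convergence checks, computing AVE and CR from standardized factor loadings (the loading values are hypothetical):

    def convergence_measures(loadings):
        # Average Variance Extracted: mean of the squared standardized loadings
        squared = [l ** 2 for l in loadings]
        ave = sum(squared) / len(squared)
        # Composite Reliability: (sum of loadings)^2 over itself plus the error variances
        total = sum(loadings)
        cr = total ** 2 / (total ** 2 + sum(1 - s for s in squared))
        return ave, cr

    # Four items loading on one construct, each above the 0.5 minimum loading
    ave, cr = convergence_measures([0.72, 0.68, 0.81, 0.75])
    print(f"AVE = {ave:.2f}, CR = {cr:.2f}")  # AVE >= 0.5 and CR >= 0.7 are common cut-offs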

Reliability

  • Refers to the consistency and dependability of a measure
  • Concerns whether the indicator measures something consistently and reproducibly
  • Types of reliability:
    • Test-retest reliability: Stability of measuring instruments over time
    • Interitem (internal consistency) reliability: Consistency of multiple items measuring the same concept
    • Interobserver/inter-rater reliability: Agreement between multiple observers using the same measure

Test-Retest Reliability

  • Measured using:
    • Kappa statistics of agreement (κ)
    • Intraclass correlation coefficient (ICC)
    • Correlation between responses at two time points (r value); see the sketch below
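
A minimal sketch of the r-value approach in Python, using SciPy; the scores at the two time points are made up:

    from scipy.stats import pearsonr

    # Hypothetical scores from the same seven respondents at two time points
    time1 = [12, 15, 9, 20, 14, 18, 11]
    time2 = [13, 14, 10, 19, 15, 17, 12]

    r, p = pearsonr(time1, time2)
    print(f"test-retest r = {r:.2f}")  # +0.6 or higher is considered quite good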

Interitem (Internal Consistency) Reliability

  • Measured using:
    • Correlation between items (0.30 < r < 0.85)
    • Corrected Item-Total correlation (>0.30)
    • Cronbach's Alpha (>0.7)
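
A minimal Cronbach's alpha sketch in Python using the classic variance formula; the response matrix is fabricated:

    import numpy as np

    def cronbach_alpha(scores):
        # scores: 2-D array, rows = respondents, columns = items on the same scale
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Five respondents answering a three-item scale
    responses = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]
    print(f"alpha = {cronbach_alpha(responses):.2f}")  # above 0.70 is acceptable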

Interobserver/Inter-rater Reliability

  • Measured using:
    • Kappa statistics of agreement (κ)
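
A minimal sketch using scikit-learn's Cohen's kappa; the ratings of the same ten cases by two observers are invented:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical positive/negative judgements by two observers on ten cases
    rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"kappa = {kappa:.2f}")  # 1 is complete agreement; 0.21-0.40 is fair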

Sensitivity and Specificity

  • Sensitivity: Ability of a test to correctly identify patients with the disease
  • Specificity: Ability of a test to correctly identify patients without the disease
  • Strategies for increasing sensitivity:
    • Improving understandability and cultural validity
    • Assuring that the measure covers the full range of the latent construct
    • Eliminating redundant items
    • Maximizing sensitivity of the device used to collect responses
    • Asking directly about change
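
A minimal sketch of the two definitions above in Python; the confusion-matrix counts are made up:

    def sensitivity_specificity(tp, fn, tn, fp):
        # Sensitivity: share of patients with the disease that the test flags positive
        sensitivity = tp / (tp + fn)
        # Specificity: share of patients without the disease that the test flags negative
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # 45 true positives, 5 false negatives, 80 true negatives, 20 false positives
    sens, spec = sensitivity_specificity(tp=45, fn=5, tn=80, fp=20)
    print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 90% and 80%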
