Document Details


Uploaded by SustainableNoseFlute


Tags

scientific method, research methods, communication, study guide

Summary

This document is a study guide for an exam. It covers topics such as the scientific method, theory, variables, and research design, and it includes questions with answers. This guide will be helpful for undergraduate students.

Full Transcript


COMD 6230 - F24 Exam 1 Study Guide

Scientific Method and Communication

1. What is the scientific method?
Systematically testing theory-based hypotheses.

2. Why is the scientific method also called hypothetico-deductive?
"Seeking knowledge through process of testing general concept through specific occurrences" (Scientific Method ppt, slide 24).

3. What does it mean that the scientific method is probabilistic?
We use strong evidence to support a probable outcome; we do not prove it. The conclusions and predictions made through scientific inquiry are based on probabilities rather than certainties.

4. Why does your instructor avoid the terms prove or proof in discussing research findings? What could you say instead of prove?
Proofs are unquestionable - we think of them as absolutely true AND they suggest predetermined answers. Science collects evidence for and against a claim and should not be conducted to "confirm" an idea or opinion. Say STRONG EVIDENCE and PROBABLY TRUE instead.

5. How does the attitude of research users affect how they make decisions based on the evidence?
A receptive attitude toward new or contradictory evidence can lead to better decision-making, while a closed or biased mindset may hinder the objective use of scientific findings. When researchers or users seek to "prove" something, they might selectively interpret evidence that supports their hypothesis and ignore or downplay evidence that contradicts it.

6. Your textbook says the method of intuition involves pure rationalism. What is the difference between intuition and rational thought?
Intuition is based on personal experience and can sometimes lead us astray (e.g., guessing what food is making you sick), whereas rationalism involves logical and critical thinking (e.g., keeping a food diary).

7. Identify four features that signal a claim is based on junk/pseudoscience (excluding poorly done or few studies, since that situation can also occur in new or hard-to-study areas).
Disregard for generally accepted information
Lack of transparency / vague methodology
Results can't be replicated
Reliance on personal experience and anecdotal claims
Claims that cling to conclusions

8. Why should we be explicit and detailed in reporting our treatment and testing procedures?
So a study can show how the treatment and testing were controlled for extraneous variables, that the treatment and testing were determined to be reliable through intra- or inter-rater reliability testing, and that the treatment was done with fidelity. By detailing this information, a study can show that its evidence is strong, which allows a stronger outcome statement to be made. It also makes the study easier to peer review and replicate.

9. What are two reasons why citations are provided for claims in scientific writing?
(1) Give credit to the contributing authors. (2) Show where you got the information and provide evidence for your claims.

10. What is peer review and how does it help assure quality of published studies?
Peers who are specialists in that area of research (at least two) review the paper and provide feedback, and the paper is revised until it is accepted or rejected. The editor-in-chief makes the final decision. Findings published in a reputable journal can be viewed with more confidence.

Question and Control

1. What are theories and why are they helpful?
Statements formulated to explain phenomena are called theories; theories establish a framework from which meaningful generalizations can be made.

2. What are three standards for judging the strength of a theory?
(ATP)
Accountability: A strong theory must be able to account for most, if not all, of the existing data in its domain.
Explanatory relevance: The explanation a theory offers for a particular phenomenon should give good grounds for believing that the phenomenon would occur under the conditions specified by the theory.
Testability: A theory must be testable, meaning it can be subjected to empirical testing and has the potential to be proven false if it is incorrect.
Predictive power: A strong theory should not only explain past and present phenomena but also be predictive.
Parsimony: According to Occam's razor, a simpler theory that explains the same set of data is generally preferred.

3. How do theories differ from hypotheses?
The theory is the framework from which generalizations can be made, and the hypothesis is a testable question based on an idea in the theory. Theories attempt to explain problems; a testable version of a theory is a hypothesis.

4. How does a study purpose differ from its research question?
The research purpose is a clear statement of what you are attempting to achieve through your research. The research question is a specific concern you will answer through your research.

7. What is a variable?
A variable is a measurable characteristic that can change or be changed. Types include extraneous, independent/dependent, attribute/active, confounding/controlled, predictor/predicted, and categorical (present or not) versus continuous (an infinite number of values within a range).

8. Why do we seek experimental control of variables?
To strengthen internal validity by showing that the IV probably caused the change in the DV.

9. What are each of the following types of variables?
a. Independent & dependent: The independent variable is the presumed cause of the dependent variable.
b. Extraneous: Extraneous variables may be responsible for the changes in the dependent variable, or may negate, moderate, or even enhance the effect of the independent variable on the dependent variable. When they affect the outcome they are confounding; when they are accounted for they become controlled.
c. Confounding & controlled: When extraneous variables are not controlled for in a study, they are confounding variables. When extraneous factors are recognized and kept constant to minimize their effects on the outcome, they are called control variables.
d. Predictor & predicted: Variables studied to determine the relationship between them (intentional variation occurs, but which causes which is not clear; it is possible another variable causes the joint variation). Examines how strongly and in what ways two or more variables are related or associated. Example: Does age predict height? Does height predict age?
e. Active & attribute: Attribute (also called organismic) variables can't be changed (e.g., male vs. female, ELL vs. ENGL). Active variables can be manipulated (e.g., intensity of treatment).
f. Categorical & continuous: Categorical variables are measured as present or absent; continuous variables are measured on a continuum of at least rank order. Example: stuttering vs. not stuttering. (See the illustrative sketch after question 12.)

11. What is internal validity for an experimental versus a descriptive study?
Internal validity for an experimental study: sufficient control of extraneous variables so that changes in the dependent variable can be attributed to the independent variable. Internal validity for a descriptive study: sufficient control of variables so the study presents a convincing description of what occurred.

12. What does it mean to say a study design/method is "tight" versus "full of holes"?
High internal validity versus low internal validity.
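To make the variable terminology in question 9 more concrete, here is a small illustrative Python sketch. It is not from the course materials; the study, variable names, and numbers are all invented for the example.

```python
# Hypothetical example (not from the study guide): labeling the variable types
# from question 9 in a made-up fluency-treatment study.

participants = [
    # treatment_minutes: ACTIVE independent variable (the researcher manipulates it)
    # age_group:         ATTRIBUTE variable (cannot be manipulated) and CATEGORICAL
    # noise_level_db:    EXTRANEOUS variable; if ignored it may CONFOUND the results,
    #                    if held constant it becomes a CONTROLLED variable
    # fluency_score:     DEPENDENT variable, CONTINUOUS (numeric scale)
    {"treatment_minutes": 0,  "age_group": "child", "noise_level_db": 40, "fluency_score": 62.5},
    {"treatment_minutes": 30, "age_group": "child", "noise_level_db": 40, "fluency_score": 71.0},
    {"treatment_minutes": 0,  "age_group": "adult", "noise_level_db": 40, "fluency_score": 65.0},
    {"treatment_minutes": 30, "age_group": "adult", "noise_level_db": 40, "fluency_score": 74.5},
]

# The IV is "active" because we set it; age_group is an "attribute" we can only observe.
for p in participants:
    print(p["age_group"], p["treatment_minutes"], "->", p["fluency_score"])
```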
13. What does it mean to say internal validity "rules out competing explanations"?
High internal validity shows that the study's design and execution support the conclusion that the observed effects are indeed due to the independent variables being manipulated or studied, rather than being the result of other factors or explanations.

14. Explain each of the ten potential threats to internal validity discussed in class (not #10 subtypes).
HMRISDATRE - Harry met Ron in science, discussing amazing topics regarding experiments.
1. History: events occurring between the first and second (or later) measurements (e.g., a traumatic event).
2. Maturation: changes in the subjects themselves that can't be controlled (e.g., how you change over time).
3. Reactive measures: subjects may react to a pretest when taking a subsequent test.
4. Instrumentation: changes in the calibration of a measuring instrument or changes in observers.
5. Statistical regression: when subjects' scores are extreme (e.g., very low), they tend to move toward the mean (e.g., test higher) on retesting.
6. Differential subject selection: differing characteristics of subjects, controlled by matching or randomization.
7. Attrition: people dropping out of the study.
8. Treatment order effects.
9. Researcher bias.
10. Effects of being in a study: Pygmalion or Rosenthal effect (rising to the occasion or to the expectation of the researcher); Hawthorne effect (acting differently when being watched); placebo effect (belief alone possibly producing results).

15. What are a priori vs. post hoc questions? Why are post hoc answers less certain?
A priori questions are the questions we ask before the study and try to answer with the research; post hoc questions arise from the results and usually do not have enough evidence to determine an outcome or answer without further research/another study.

Research Design

1. What are each of these research design features?
a. Single/group: Single-subject designs report data on individual participants; group designs report averages across multiple participants' data points.
b. Between/within: Within-subject designs collect all of the baselines and treatments from the same participants, who serve as their own controls (the order can also be varied, e.g., ABC or CBA). Between-group designs compare different conditions across groups (e.g., a control group compared with a group that gets the treatment).
c. Retrospective/prospective: Retrospective designs use past information; prospective designs gather information going forward.
d. Longitudinal/cross-sectional: Longitudinal designs test and observe the same people/group over time. Cross-sectional designs test different age groups at the same time, treating them as a representative sample of how people would grow and change.
e. Relations/differences: How you analyze the research design. Relational analyses examine associations among two or more variables; difference analyses examine dissimilarity among dependent variable results.
f. Descriptive/experimental: Experimental designs manipulate and control variables; descriptive designs use systematic observation.

3. What kind of design is suggested in the following research questions?
a. Does X cause Y? EXPERIMENTAL
b. Is X positively associated with Y? CORRELATION
c. How does X appear over time? DESCRIPTION

4. What is an interaction effect between two independent variables?
One independent variable has a different effect on the outcome depending on the level of another independent variable (e.g., the effect of diet on weight loss differs depending on exercise). A small worked sketch follows.
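Here is a minimal Python sketch of the interaction idea from question 4, using invented cell means for the diet/exercise example; the numbers are hypothetical and only illustrate how the effect of one IV can change across levels of the other.

```python
# Hypothetical cell means (invented numbers) for the diet/exercise weight-loss example.
# weight_loss[diet][exercise] = mean weight loss in kg for that combination.
weight_loss = {
    "no_diet": {"no_exercise": 0.5, "exercise": 2.0},
    "diet":    {"no_exercise": 2.5, "exercise": 7.0},
}

# Effect of diet WITHOUT exercise vs. WITH exercise:
diet_effect_no_exercise = weight_loss["diet"]["no_exercise"] - weight_loss["no_diet"]["no_exercise"]
diet_effect_exercise    = weight_loss["diet"]["exercise"]    - weight_loss["no_diet"]["exercise"]

print(diet_effect_no_exercise)  # 2.0 kg
print(diet_effect_exercise)     # 5.0 kg

# Because the effect of one IV (diet) differs depending on the level of the other
# IV (exercise), the two variables interact. If the two differences were equal,
# there would be no interaction (only main effects).
```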
5. What are the features of these main types of group designs? (I wrote descriptions.)
a. Developmental: how participants develop.
b. Correlational: whether variables correlate; examines the relationship (it cannot show which variable caused which).
c. Pre/post: manipulation and systematic data collection; compares before/after for many participants; no control group.
d. Comparative: compares two or more groups that differ on some characteristic.
e. Quasi-experimental: like experimental designs but lacking random assignment to treatment or control groups.
f. Experimental: uses randomized control groups.

6. What study design is most likely to use randomized-stratified participant selection?
Survey or epidemiological research study designs (clinical trial?).

8. Most of our field research involves convenience sampling. What is convenience sampling?
Taking participants from a readily available pool.

9. Does convenience sampling affect internal or external validity more?
External validity. Because it involves selecting participants who are easily accessible rather than randomly chosen from the broader population, the sample may not represent the larger population; it can also introduce bias that weakens internal validity.

10. How does participant assignment occur in the following procedures?
a. Pre-existing: grouped by a condition the participant already has.
b. Sequential: assigned in the order they enter the study.
c. Randomized: equal probability of being assigned to each group.
d. Balanced groups: groups are formed to have similar distributions of the extraneous variables.
e. Matched pairs: participants are paired on key extraneous variables, and one member of each pair is assigned to each group.

11. Why can purely randomized assignment be problematic in small sample studies?
It may not produce balanced groups.

16. What is a deferred treatment control condition and how does it address each of scientific control and ethics?
The control group receives the intervention, but it is delayed until after the initial assessment period. By delaying the treatment for the control group, researchers are able to establish a stable baseline of the participants' behavior or condition. This allows a clear comparison between the pre-treatment phase and the post-treatment phase. The deferred treatment design also avoids withholding treatment from the control group, which addresses the ethical concern.

17. How is a within-group repeated measures design used to compare two treatments? How does this give greater experimental control of participant variation than a between-group design?
Each participant receives both treatments, so participants serve as their own controls. This gives greater statistical power because the variability due to individual differences is minimized; it also requires fewer participants.

18. What major source of error can occur in a within-group design and how can it be controlled?
Treatment order (carryover) effects; they can be controlled by counterbalancing the order of conditions (e.g., ABC vs. CBA).

19. How do you know if a small sample study is single-subject vs. group design?
Answer in terms of data points and analysis, data graphs, structure of treatment and control, participant assignment, and how findings apply to the population.

23. What does it mean that a participant is their own control?
The participant is compared to themselves.

24. What is a time series experimental design?
Single-subject or small-group designs are often called time-series designs because they involve the systematic collection of a series of measurements of the dependent variable over a period of time (AB, ABA, ABAB).

25. How are time series phases the same as and different from conditions?
One part of the time series is treatment and another is removal of treatment, so each phase is like a "condition"; however, the removal of treatment does not act exactly like a control condition.

26. What is the minimum number of data points in a baseline phase and why is more better?
Three or more data points per phase; it is better if data are taken until the baseline is stable, and more points provide a clearer pattern across phases.

27. Explain what happens in the following time series single-subject experimental designs:
a. AB: baseline, then treatment.
b. ABAB: baseline, treatment, withdrawal of treatment (return to baseline), treatment again.
c. Alternating treatments: Treatment A and Treatment B alternate repeatedly.
d. Changing criterion: the criterion (or level of achievement) changes as each criterion is reached.
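As a rough illustration of questions 24-27, the sketch below organizes invented ABAB data by phase and prints a simple per-phase summary. The numbers are hypothetical; real single-subject analysis relies on graphical inspection of level, trend, and variability across the whole pattern, not just phase means.

```python
# Hypothetical single-subject ABAB data (invented numbers): A = baseline, B = treatment.
phases = {
    "A1 (baseline)":   [3, 4, 3, 4],
    "B1 (treatment)":  [6, 8, 9, 10],
    "A2 (withdrawal)": [5, 4, 4, 3],    # performance drops when treatment is removed
    "B2 (treatment)":  [8, 10, 11, 11], # and rises again when it is reinstated
}

# Each phase has at least 3 data points, per the guide's rule of thumb (question 26).
for name, scores in phases.items():
    mean = sum(scores) / len(scores)
    print(f"{name}: n={len(scores)} points, mean={mean:.1f}")
```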
28. In an ABA design, why does a researcher want to see performance drop and not maintain in the second A phase?
By removing treatment in the second A phase, one hopes to show a drop in performance, indicating that the treatment is what caused the better scores in the B phase.
What extra control does the B give in an ABAB design?
Having a second B phase in which scores increase again provides more evidence that the treatment is the cause of the increased scores.

29. What are multiple baseline across behaviors and multiple baseline across subjects designs?
Multiple baseline across subjects: one intervention is provided and the same target behavior is measured across several subjects who share common relevant characteristics; the intervention is introduced to the subjects sequentially.
Multiple baseline across behaviors: several different behaviors of the same subject are measured; the intervention is then introduced sequentially across these different behaviors.

31. In a multiple baseline across behaviors design, what can you conclude about treatment efficacy if the control behavior starts improving at the same time as the treatment behavior?
There could be many other factors contributing to the change in behavior other than the treatment. Was the design implemented correctly? Has the treatment generalized across behaviors? Could external factors (e.g., a change in environment, routine, or socialization) be influencing the behaviors?

32. Why do we do graphical analysis of the full pattern of the data in a single-subject study like Petersen et al. (2014), instead of just looking at the first baseline and last treatment data points?
So we can see growth (or no growth) over time across the whole pattern, not just a change between two isolated points.

33. Explain possibility versus probability in relation to single-subject and group designs.
Possibility is more related to single-subject designs: you can't really state a probability with just one subject. Although single-subject designs can have strong internal validity, they have less external validity.

Methods

1. What are the three principles of fair treatment for research participants?
a. Respect for persons: participants with diminished autonomy are protected; participants are able to leave the study at any time.
b. Beneficence: does the study benefit the whole population; what is the benefit/risk to participants.
c. (Social) justice: whether the benefits and burdens of research are spread across society in a fair and unbiased way.

3. What are the two opposing forces that affect the study participation explanation on a consent form?
It must be sufficiently detailed and precise to cover legal requirements, yet written in language appropriate to the participants' attention and comprehension level.

High attrition and internal validity: attrition can introduce selection bias, where the characteristics of participants who drop out differ systematically from those who remain in the study. The remaining sample may also no longer be representative of the initial sample, potentially leading to skewed results.
High attrition and external validity: limited generalizability. If a large portion of participants drop out, the remaining sample may not accurately represent the broader population from which it was drawn, which limits the ability to generalize findings to a larger population.

6. What are three factors that affect decisions about sample size in a group design?
Number of variables, population and sample characteristics, research design, and measurement procedures.

7. Why are homogeneous participant samples better than heterogeneous samples for internal validity?
A homogeneous sample is made up of similar participants (e.g., the same age); a heterogeneous sample is made up of different kinds of participants (e.g., a range of ages). A homogeneous sample reduces extraneous participant variability, which strengthens internal validity.
8. What are two ways to investigate heterogeneous populations with good experimental control?
(1) Use balanced groups. (2) Do a follow-up study.

9. What type of design uses extremely large sample sizes and why?
Survey and epidemiological designs. Surveys: the sample is composed to be representative of the population. Epidemiological studies: to detect a true effect or association and to look at sub-groups.

10. Why are many data points collected for each participant in a single-subject design?
It allows you to see the trend in the data and reduces the impact of outliers.

11. What is external validity?
How well the results generalize to the target population.

12. What is the trade-off often faced between internal and external validity?
Internal validity distinguishes a really tight study from a study full of holes; without good internal validity you can't have external validity. Sometimes, though, you need to give up some internal validity to make the study better equipped to relate to the general population.

13. Explain and give examples of ratio, ordinal, interval, and categorical data.
Categorical/nominal: mutually exclusive categories; named groupings; qualitative data that can be grouped into categories instead of being measured numerically. Examples: area code, hometown, age group, race, educational experience, type of disfluency, sex.
Ratio: equal intervals; a true, meaningful zero (zero means the attribute is absent); plus the properties of all other scales. Examples: age, weight, vowel duration, number correct.
Ordinal: ranked data; plus the properties of nominal. Examples: sweetest to sourest, tallest to shortest, severity.
Interval: equal intervals; no meaningful zero; plus the properties of ordinal and nominal. Examples: temperature, acidity, IQ and other standard scores.

14. What are objective versus subjective data? Which is more confirmable? More valid?
Objective: fully observable and measurable, with no dependence on inner or unmeasurable aspects; there is no disagreement on how to measure it.
Subjective: someone's opinion or how they perceive something; not fully measurable and open to interpretation. An opinion may not be measurable, but it may still be true.

17. What does it mean to calibrate an instrument?
Calibrating an instrument means checking and adjusting its response so the output accurately corresponds to its input.

18. What is observer calibration?
How accurate and how reliable you are as an observer who is measuring data.

19. What are independent and consensus scoring? Which one is a better measure of reliability of scoring?
Independent scoring is primarily a measure of reliability; its purpose is to assess the degree of agreement or consistency among raters. Consensus scoring is more about increasing accuracy than being a direct measure of reliability; it involves having multiple raters or observers discuss and collaborate to reach a collective agreement on the ratings.

20. For reliability scoring, what is point-to-point agreement and why is that better than total score agreement?
Point-by-point agreement: compare the two raters' scores on each individual section/question/point and compute the percentage of items on which they agree (this is what the 80% agreement score is based on). Point-by-point is preferred because it identifies specific areas of disagreement. Total score agreement only checks whether the two raters' total scores are similar, which can look high even when the raters disagreed on many individual items. (See the sketch after question 22.)

22. What is a rule-of-thumb for the minimum acceptable level of inter-rater agreement?
80%.
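As a rough illustration of questions 20 and 22, here is a small Python sketch with invented rater data that computes point-by-point percent agreement and contrasts it with total-score agreement and the 80% rule of thumb.

```python
# Hypothetical item-level scores from two raters (1 = behavior observed, 0 = not observed).
rater_1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
rater_2 = [1, 1, 1, 0, 0, 1, 1, 1, 0, 0]

# Point-by-point agreement: proportion of individual items scored identically.
agreements = sum(1 for a, b in zip(rater_1, rater_2) if a == b)
point_by_point = agreements / len(rater_1)
print(f"Point-by-point agreement: {point_by_point:.0%}")  # 60% here

# Total-score agreement only compares the summed scores, which can look perfect
# even when the raters disagreed on many individual items.
print(f"Rater 1 total: {sum(rater_1)}, Rater 2 total: {sum(rater_2)}")  # 6 vs 6

# Rule of thumb from question 22: aim for at least 80% point-by-point agreement;
# if lower, retrain raters or simplify the scoring system.
print("Meets 80% rule of thumb:", point_by_point >= 0.80)
```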
23. If scoring reliability is low, what can the researchers do to improve it?
If agreement is lower than 80%, retrain the raters and/or simplify the scoring system, or explain why the lower level is acceptable.

24. What is treatment fidelity and why is it important to report?
How closely a treatment is delivered or implemented as intended. It is important because it makes the scoring, data gathering, and assessment more reliable, and it supports replication.

*** THE RANDOMIZED CONTROLLED TRIAL IS THE GOLD STANDARD ***
Participant numbers: at least 30 total, at least 10 in each condition, at least 5 per cell.
