Sources of Knowledge and Ways of Knowing
Summary
This document covers sources of knowledge, including authority, common sense, intuition, rationalism, and empiricism. It also explores theories, hypotheses, data, variables, and research designs, with examples of different types of experimental design, and closes with ethical and research considerations.
Sources of Knowledge and Ways of Knowing 🤔
There are several sources of knowledge and ways of knowing, including:
Authority: Knowledge based on the opinions or statements of influential individuals or groups. "Authorities can often be biased, as they are human, and you need to ask how they came to know that."
Common Sense: Knowledge based on folk wisdom and personal experience. "Common sense is not always accurate, and people tend to accept things they think are true and ignore counter-examples."
Intuition: A feeling of knowing something without being sure where that knowledge came from.
Rationalism: A method of knowing based on logic and reasoning. "Rationalism involves using logic and reasoning to arrive at a conclusion."
Empiricism: Knowledge based on experience and observation. "Empiricism involves using evidence from observation to arrive at a conclusion."

Theories, Hypotheses, and Data 📊
Theory: A set of statements that describes how variables relate to each other. A theory is supported by evidence rather than being a collection of proven facts.
Hypothesis: A prediction, stated in terms of the study design, about the relationship between variables.
Data: A series of observations, usually in numerical form. "Data can either support or refute a hypothesis."

Variables 📈
Variable: Something that varies and has at least two levels or values.
Constant: Something that could vary but has only one level in a given study.
Manipulation: The researcher controls the variable and assigns participants to its different levels.
Measurement: The researcher observes and records the variable.

Claims 📢
Frequency Claim: A statement about the number of occurrences of a variable.
Association Claim: A statement about the relationship between two or more variables.
Causal Claim: A statement that changes in one variable are responsible for changes in another variable.

Validity and Reliability 📊
Validity: The correctness or accuracy of a study's conclusions.
Reliability: The consistency of a study's results.
Internal Validity: The extent to which a study's design supports confident conclusions about the relationship between its variables, ruling out alternative explanations.
External Validity: The extent to which a study's results can be generalized to other populations and settings.

Measurement Scales 📏

| Scale | Description | Example |
| --- | --- | --- |
| Nominal | A set of categories with different names | Car make: Ford, Toyota, BMW |
| Ordinal | Categories with different magnitudes | Gold, silver, bronze |
| Interval | Ordered categories with equal intervals but no true zero | Temperature |
| Ratio | Equal intervals with an absolute zero point | Weight |

Experimental Design 🎯
Independent Variable (IV): The variable that is manipulated by the researcher.
Dependent Variable (DV): The variable that is measured by the researcher.
Control Variable: A variable that is held constant to prevent it from influencing the outcome.
Quasi-Experimental Variable: A variable that cannot be assigned to participants but may influence the outcome.

Examples of Experimental Design 📊

| Study | IV | DV | Results |
| --- | --- | --- | --- |
| Mueller and Oppenheimer (2014) | Typing vs. taking notes by hand | Factual and conceptual knowledge | Handwritten notes better for conceptual knowledge |
| Yong (2017) | Babies watching adults persist at difficult tasks | Babies' persistence | Babies who watched adults persisting were more likely to persist at difficult tasks themselves |

Internal validity refers to the extent to which a study's design and methodology allow for confident conclusions about the relationship between variables.
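To make the IV, DV, and random-assignment vocabulary above concrete, here is a minimal Python sketch of a post-test only, independent-groups experiment. The condition names, group sizes, and score distributions are invented for illustration, not taken from the studies above.

```python
import random
import statistics

random.seed(42)

# Hypothetical post-test only, independent-groups experiment (invented data):
# IV = note-taking method ("laptop" vs. "longhand"), manipulated by the researcher
# DV = quiz score, measured once after the manipulation
participant_ids = list(range(40))
random.shuffle(participant_ids)          # random assignment guards against selection effects
laptop_group = participant_ids[:20]
longhand_group = participant_ids[20:]

def quiz_score(condition):
    # Invented population means, purely for illustration
    mean = 70.0 if condition == "laptop" else 75.0
    return random.gauss(mean, 10.0)

laptop_scores = [quiz_score("laptop") for _ in laptop_group]
longhand_scores = [quiz_score("longhand") for _ in longhand_group]

# Compare group means on the DV
print("laptop mean:", round(statistics.mean(laptop_scores), 1))
print("longhand mean:", round(statistics.mean(longhand_scores), 1))
```

Because assignment to conditions is random, any pre-existing participant differences are spread across both groups rather than piling up in one of them.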
Design Confounds 🤔
A design confound occurs when a study's manipulation influences more than one psychological construct, making it difficult to determine which construct is responsible for the observed effect.
Example: Flickering lights in an attention study may impair attention but also cause eye strain, making it unclear which factor is responsible for the observed effect. 📝

Operational Confounds
An operational confound occurs when a study's manipulation or measure is poorly defined, making it unclear what the independent variable (IV) actually is.
Example: Measuring love using arousal, even though arousal can also be caused by other factors, such as fear or excitement. 📊

Selection Effects
Selection effects occur when the levels of the independent variable (IV) differ because the participants at each level are different.
Example: In Lovaas's (1987) study on autism therapy, families who lived closer to the university were more likely to receive the experimental therapy, but they also had better access to education and services. 📈

Order Effects
Order effects occur when the order of presentation influences the outcome, as with fatigue or practice effects.
Example: In a within-groups design, where participants are measured more than once on the same dependent variable (DV), earlier measurements can influence later ones. 🎯

Experimental Designs
Independent Groups Design 📊
An independent groups design, also known as a between-groups or between-subjects design, involves different participants in each group.
Example: Group 1 vs. Group 2, with different participants in each group. 📝
Post-Test Only Design
A post-test only design, also known as an equivalent groups design or between-groups study, randomly assigns participants to one of at least two groups and tests them only once.
Example: A GRE study in which participants are randomly assigned to one of two groups and tested only once. 📊
Within-Groups Design
A within-groups design, also known as a repeated measures design, measures participants more than once on the same dependent variable (DV).
Example: Participants are measured on the DV twice, with a manipulation in between. 🔍

Causal Claims and Validity
Construct Validity 📊
Construct validity refers to the extent to which a study's variables are measured and manipulated accurately.
Example: In the note-taking study, the dependent variable (DV) is measured with a standardized test, and the independent variable (IV) is manipulated with a laptop vs. paper condition. 🌎
External Validity
External validity refers to the extent to which a study's findings can be generalized to other populations and settings.
Example: Lab settings are artificial, and generalizing to other people is hard, but replication with similar experiments and stimuli can help. 📊
Statistical Validity
Statistical validity refers to the extent to which a study's findings are statistically significant and meaningful.
Example: The effect size (d) and confidence interval help determine the magnitude and precision of an effect; standard formulas follow.
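For reference, these are the standard formulas behind the effect size (d) and the confidence interval mentioned above. They are textbook definitions rather than material from the original notes.

```latex
% Cohen's d: standardized difference between two group means
d = \frac{\bar{X}_1 - \bar{X}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

% Approximate 95% confidence interval for a sample mean
\bar{X} \pm 1.96\,\frac{s}{\sqrt{n}}
```

A larger d means a bigger standardized difference between groups; a narrower confidence interval means a more precise estimate of the true value.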
Threats to Internal Validity 🚨
Maturation 📈
Maturation refers to a change in behavior that emerges spontaneously as a result of time.
Example: In Experiment A, the participants may have settled in and gotten used to the camp environment. 📆
History
History refers to an event that occurs between the pre-test and post-test and influences the outcome.
Example: In Experiment B, the participants may have improved due to spontaneous remission. 📊
Regression
Regression refers to the tendency for extreme scores to return to the mean over time.
Example: Participants who are chosen because they are extreme also regress toward the mean at time 2. 📊
Attrition
Attrition refers to the loss of participants over time, which can affect the outcome.
Example: Participants who live further from the lab are more likely to drop out. 📊
Testing
Testing refers to the effect of taking a test on the outcome.
Example: Participants can get better at taking the test, or become bored or fatigued. 📊
Instrumentation
Instrumentation refers to a change in the measuring instrument over time.
Example: Coders may change how they judge behavior over time. 🚫

Controlling for Threats
Random Assignment 📊
Random assignment helps control for selection effects and maturation.
Example: Randomly assigning participants to one of at least two groups. 🔍
Double-Blind Design 💊
A double-blind design helps control for observer bias and demand characteristics.
Example: Neither the researchers nor the participants know which group each participant is in.
Placebo Control Study
A placebo control study helps control for placebo effects and reactivity effects.
Example: Some participants receive a placebo treatment, and the researcher measures the outcome in all groups. 🎯

Factorial Designs
Independent Groups Factorial Design 📊
An independent groups factorial design studies two or more independent variables (IVs) in a between-groups design.
Example: A 2x2 factorial design, with two levels of each of two different IVs. 📊
Within-Groups Factorial Design
A within-groups factorial design studies two or more independent variables (IVs) in a within-groups design.
Example: A 2x2 factorial design in which each participant experiences all four combinations of the two IVs. 📊
Mixed Factorial Design
A mixed factorial design studies one IV in a between-groups design and another IV in a within-groups design.
Example: A 2x2 mixed factorial design, where one IV is manipulated between groups and the other IV is manipulated within groups. 📈

Interactions
Main Effect 📊
A main effect is the overall effect of one IV on the DV, averaging over the levels of any other IVs.
Example: In a 2x2 factorial design, there are two possible main effects, one for each IV.
Interaction Effect 📈
An interaction effect occurs when the effect of one IV on the DV depends on the level of another IV.
Example: In a 2x2 factorial design, there may be an interaction effect between the two IVs; a worked example follows.
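To see how main effects and an interaction are read off a 2x2 design, here is a worked example with invented cell means. The numbers are hypothetical, not taken from the notes.

```latex
% Hypothetical cell means for the DV (rows = levels of IV A, columns = levels of IV B)
\begin{array}{c|cc|c}
 & B_1 & B_2 & \text{row mean} \\ \hline
A_1 & 80 & 70 & 75 \\
A_2 & 60 & 70 & 65 \\ \hline
\text{column mean} & 70 & 70 &
\end{array}

% Main effect of A: difference of row means, 75 - 65 = 10
% Main effect of B: difference of column means, 70 - 70 = 0
% Interaction: the simple effect of B differs across levels of A,
% (70 - 80) - (70 - 60) = -10 - 10 = -20, so A and B interact.
```

Here B has no overall main effect, yet it clearly matters: moving from B1 to B2 lowers scores at A1 but raises them at A2, which is exactly what an interaction captures.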
Ethical Guidelines
The Tuskegee Syphilis Study 📊
The Tuskegee Syphilis Study withheld treatment from African American men with syphilis, resulting in harm and exploitation.
Example: The study was unethical because it involved lying to participants, withholding treatment, and targeting a disadvantaged group. 🚫
Ethical Violations
Ethical violations occur when researchers fail to treat participants with respect and dignity, or when they withhold information or coerce participants.
Example: The Tuskegee Syphilis Study involved several ethical violations, including lying to participants and withholding treatment. 📚

Research Ethics
Core Ethical Principles in Research
The Belmont Report (1979) outlines three principles for guiding ethical decision-making in research:
Respect for Persons: Treating individuals as autonomous agents with the right to make informed decisions about their participation in research.
Beneficence: Protecting individuals from harm and ensuring their well-being.
Justice: Ensuring fair representation of the participants in the study and of those who benefit from the research.

Respect for Persons
"Respect for persons incorporates at least two ethical convictions: first, that individuals should be treated as autonomous agents, and second, that persons with diminished autonomy are entitled to protection."
Two provisions:
Informed consent: Participants must be fully informed about the risks and benefits of the research.
No coercion or undue influence: Participants must not be pressured or unduly influenced to participate in the research.

Beneficence
"Beneficence is an obligation to protect persons from harm by maximizing anticipated benefits and minimizing possible risks."
Researchers must:
Protect participants from harm
Ensure their well-being
Consider the potential impact of the research on the community

Justice
"Justice is the principle that the benefits and burdens of research should be distributed fairly."
Fair representation of the participants in the study and of those who benefit from the research
Avoiding exploitation of vulnerable populations 📊

Survey and Observation Research
Measuring Psychological Constructs
Converting feelings, attitudes, and thoughts into numbers
Assigning values to psychological states
Reporting results verbally or through observations
Construct Validity of Surveys and Polls
Association claims describe the relationship between two variables. Questions to ask:
Does the measure have good reliability?
Is it measuring what it intends to measure?
Does it have face validity?
Does it show discriminant and convergent validity?
Response Bias
Response bias occurs when the wording or format of questions, rather than respondents' true attitudes, shapes their answers. Solutions to the problem:
Reverse-worded items
Neutral language in questions
Normalizing questions
Stressing anonymity
Observer Bias
Observer bias occurs when expectations influence interpretations of behavior.
Example: Participants (psychotherapists) watched the same video of a person answering questions about work experiences and reached different interpretations based on their expectations. 📈

Correlation and Association
Bivariate Correlation/Association
An association between two variables. Three kinds: positive, negative, and zero.
Example: Couples who meet online have better marriages (Cacioppo et al., 2013).
The Correlation Coefficient (r)
A value between -1.0 and +1.0 that conveys both the strength of the correlation and its direction, positive or negative.
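For reference, here is the standard formula for the Pearson correlation coefficient; this is the textbook definition, not something stated in the original notes.

```latex
r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}
```

The numerator captures direction (do x and y move together or in opposite ways?), while the denominator rescales the result to lie between -1.0 and +1.0.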
Statistical Validity
How well does the data support the conclusion?
Effect size: the strength of the relationship between two or more variables
Confidence interval: a range of values within which the true value is likely to lie

Internal Validity
Can we make causal inferences from associations? Three causal criteria:
Covariance of cause and effect
Temporal precedence
Internal validity: ruling out alternative explanations, such as the directionality problem and third variables

Moderating Variables
A moderating variable changes the strength or direction of the relationship between two other variables.
Example: Fandom and ticket sales. 🚨

Threats to Internal Validity
Design Confounds
Problems with the design of the experiment.
Example: Testing students on weekends for the 4-hour sleep condition and on weekdays for the 8-hour sleep condition.
Confounding Variables
Variables that can influence the outcome of the experiment.
Example: The day of the week could influence academic performance due to different levels of stress.
Selection Effects
Individuals in different conditions may differ from one another before the manipulation.
Example: Participants in the 4-hour sleep condition may already be more sleep-deprived than those in the 8-hour sleep condition.
Order Effects
The order of the conditions can influence the outcome of the experiment.
Example: Participants who complete the 4-hour sleep condition first may perform worse on later tests because sleep deprivation carries over.

Experiment Limitations
Absence of a control group: improvement could reflect spontaneous remission.
Specific threats to internal validity: maturation threat, history threat. 🚨

Threats to Internal Validity (continued)
Regression Threat 📉
A threat to internal validity that occurs when extreme scores tend to normalize over time, resulting in regression toward the mean.
Example: In a stress study, employees score very high on a stress test (pretest) after a stressful project deadline. A week later, their scores drop closer to average, not due to any intervention, but because extreme scores tend to normalize over time. (A simulation sketch follows this list.)
Attrition Threat 👋
A threat to internal validity that occurs when there is systematic dropout of certain kinds of participants, resulting in biased results.
Example: In a weight-loss study, participants with less progress drop out over time, skewing the results and making the program appear more effective than it is.
Testing Threat 📝
A threat to internal validity that occurs when participants perform better on a test due to prior exposure to the test, rather than any real change in the variable being measured.
Example: In a memory study, participants take the same test twice. On the second test, scores improve simply because they remember the questions from the first time, not due to any real change in memory ability.
Instrumentation Threat 📊
A threat to internal validity that occurs when the measurement instrument changes over time, resulting in inconsistent or biased results.
Example: In a classroom study, teachers use different grading criteria on the pretest and posttest, making it hard to tell whether changes in student scores reflect actual learning or just the grading adjustments.
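The regression threat above lends itself to a quick simulation. This minimal Python sketch assumes a simple model in which each observed stress score is a stable trait plus random noise; the distributions and the cutoff of 70 are invented for illustration.

```python
import random
import statistics

random.seed(0)

# Model assumption: observed score = stable trait level + random measurement noise.
# Selecting people for extreme pretest scores partly selects extreme noise,
# so their retest average drifts back toward the mean with no intervention at all.
traits = [random.gauss(50, 10) for _ in range(10_000)]

def observe(trait):
    return trait + random.gauss(0, 10)  # one noisy measurement of the trait

pretest = [observe(t) for t in traits]
selected = [i for i, score in enumerate(pretest) if score > 70]  # "highly stressed" group
posttest = [observe(traits[i]) for i in selected]

print("selected pretest mean:", round(statistics.mean(pretest[i] for i in selected), 1))
print("selected posttest mean:", round(statistics.mean(posttest), 1))  # closer to 50
```

Without a control group, this purely statistical drop could be mistaken for the effect of an intervention.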
Observer and Participant Bias 👀
Observer Bias 🔍
A type of bias that occurs when researchers' expectations influence the results of a study.
Demand Characteristics 📣
A type of bias that occurs when participants guess the purpose of the study and change their behavior accordingly.
Solutions:
Double-Blind Design: A study design in which both the researchers and the participants are unaware of the treatment or condition being administered.
Masked Design: A study design in which either the researchers or the participants are unaware of the treatment or condition being administered.
Placebo Effects: A phenomenon in which participants experience a change in behavior or outcome due to their expectation of receiving a treatment, rather than the actual treatment itself. 📈

Experimental Design
Factorial Designs 📊
A study design that manipulates two or more independent variables to examine their individual and combined effects on a dependent variable.
Example (2x2 between-groups layout):

| Group | Treatment A | Treatment B |
| --- | --- | --- |
| Group 1 | + | + |
| Group 2 | + | - |
| Group 3 | - | + |
| Group 4 | - | - |

Main Effect 📈
The individual effect of an independent variable on a dependent variable.
Interaction Effect 🔄
The combined effect of two or more independent variables on a dependent variable, where the effect of one IV depends on the level of another.

Notable Studies 📚
Tuskegee Syphilis Study 🤕
A study that deliberately withheld treatment from African American men with syphilis, resulting in severe health consequences and widespread ethical criticism.
Milgram Study (1963) 📝
A study in which participants were led to believe they were administering electric shocks to another person, yielding landmark findings on obedience and highlighting the importance of informed consent.

Ethical Considerations 🤝
Respect for Persons 👥: The principle of respecting the autonomy and dignity of research participants.
Beneficence 🤝: The principle of promoting the well-being and safety of research participants.
Justice 🤝: The principle of ensuring that research is conducted in a fair and equitable manner.
Fidelity and Responsibility 📝: The principle of being honest and transparent in research practices and taking responsibility for one's actions.
Integrity 📊: The principle of maintaining the accuracy and reliability of research data and methods.

Statistical Validity 📊
Mean: A measure of central tendency that represents the average value of a dataset.
Standard Deviation (SD): A measure of variability that represents the spread of a dataset around the mean.
Variance: A measure of variability that represents the average of the squared differences from the mean.
Effect Size (d): A measure of the magnitude of the relationship between variables.
Confidence Interval: A range of values within which a population parameter is likely to lie.
Standard formulas for the mean, variance, and standard deviation appear after the summary table below; formulas for d and the confidence interval were given earlier.

Internal Validity Threats 🚨

| Threat | Definition | Example |
| --- | --- | --- |
| Regression Threat | Extreme scores tend to normalize over time. | Stress study |
| Attrition Threat | Systematic dropout of certain kinds of participants. | Weight-loss study |
| Testing Threat | Participants perform better on a test due to prior exposure. | Memory study |
| Instrumentation Threat | Measurement instrument changes over time. | Classroom study |
| Observer Bias | Researchers' expectations influence results. | |
| Demand Characteristics | Participants guess the purpose of the study and change behavior. | |
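For reference, here are the standard formulas for the sample statistics defined under Statistical Validity above. These are textbook definitions, not material from the original notes; Cohen's d and the confidence interval were given earlier.

```latex
% Sample mean
\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i

% Sample variance (squared deviations from the mean, with n - 1 in the denominator)
s^2 = \frac{1}{n - 1}\sum_{i=1}^{n} (x_i - \bar{x})^2

% Standard deviation
s = \sqrt{s^2}
```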