Research Methods Study Guide
Summary
This document provides a study guide on various research methods, including non-experimental, pre-experimental, quasi-experimental, and true experimental designs. It explains concepts like manipulation, random assignment, and control groups. The guide also covers different data collection techniques like surveys, questionnaires, and interviews.
Full Transcript
Type of Study:

1. Non-Experimental Design
Definition: In non-experimental designs, there is no manipulation of an independent variable by the researcher and no random assignment to conditions. Researchers simply observe and measure variables as they naturally occur, often to determine correlations between them.
Example: A researcher collects data on students' study habits and their exam scores to see if there is a relationship between the two. The researcher does not manipulate study habits or assign participants to different study conditions.
Structure:
Group 1 (Observation): [Study habits] —> [Exam scores]
No manipulation, just observation.

2. Pre-Experimental Design
Definition: In pre-experimental designs, a treatment or intervention is applied, but there is either no control group or no random assignment. This design often lacks rigor because it cannot definitively establish cause and effect.
Example: A teacher introduces a new teaching method to her class and measures the students' performance on a test afterward. There is no comparison group, so it is unclear whether the new method caused any improvement.
Types: One common pre-experimental design is the one-group posttest-only design.
Structure (one-group posttest-only):
Group 1 (Intervention): [New teaching method] —> [Test scores]

3. Quasi-Experimental Design
Definition: In quasi-experimental designs, the independent variable is manipulated, but participants are not randomly assigned to groups. Instead, pre-existing groups are used. This type of design provides more control than non-experimental or pre-experimental designs but still cannot definitively establish causality due to the lack of randomization.
Example: A researcher introduces a new exercise program to one school and uses another school without the program as a control group. However, the schools were not randomly assigned to receive the exercise program.
Types: One common quasi-experimental design is the nonequivalent control group pretest-posttest design, where two groups are compared but participants are not randomly assigned.
Structure (nonequivalent control group pretest-posttest design):
Group 1 (Pretest) —> [Intervention] —> [Posttest]
Group 2 (Pretest) —> [No intervention] —> [Posttest]

4. True Experimental Design
Definition: In true experimental designs, the researcher manipulates the independent variable and randomly assigns participants to different conditions or groups. This design allows for the most control over variables and is the gold standard for establishing cause-and-effect relationships.
Example: A researcher randomly assigns participants to either a group that receives a new drug or a placebo group and compares their health outcomes. Random assignment ensures that differences between the groups are due to the intervention, not to pre-existing differences.
Types: One common type of true experimental design is the randomized controlled trial (RCT), where participants are randomly assigned to an experimental group or a control group.
Structure (randomized controlled trial):
Random Assignment —> Group 1 (New drug) —> [Health outcomes]
Random Assignment —> Group 2 (Placebo) —> [Health outcomes]

Visual Diagrams:

1. Non-Experimental Design (no manipulation or random assignment)
Group 1 (Observation): [Variable A] —> [Variable B]

2. Pre-Experimental Design (intervention with no control group)
Group 1 (Intervention): [Treatment] —> [Outcome]

3. Quasi-Experimental Design (no random assignment, pre-existing groups)
Group 1: Pretest —> [Intervention] —> Posttest
Group 2: Pretest —> [No intervention] —> Posttest

4. True Experimental Design (random assignment to groups)
Random Assignment —> Group 1: [Intervention] —> Posttest
Random Assignment —> Group 2: [Control] —> Posttest

Summary of Key Differences:

Type               | Manipulation | Random Assignment | Control Group | Causality
Non-Experimental   | No           | No                | No            | None
Pre-Experimental   | Yes          | No                | No            | Weak
Quasi-Experimental | Yes          | No                | Yes           | Moderate
True Experimental  | Yes          | Yes               | Yes           | Strong

Conclusion:
- Non-Experimental designs are purely observational and cannot establish cause-and-effect relationships.
- Pre-Experimental designs include some manipulation but lack control groups and randomization.
- Quasi-Experimental designs improve control by adding comparison groups but lack random assignment, limiting causal conclusions.
- True Experimental designs are the most rigorous, using both random assignment and control groups to establish strong cause-and-effect conclusions.

Data Collection:

Surveys and Questionnaires
Example: Online surveys using platforms like SurveyMonkey or Google Forms to collect opinions on a new product.
Pros:
- Can reach a large audience quickly.
- Cost-effective and easy to administer.
- Data can be easily quantified and analyzed.
Cons:
- Responses may be biased or influenced by question wording.
- Limited depth of information (especially with closed-ended questions).
- Low response rates can affect data representativeness.

Interviews
Example: Conducting one-on-one interviews with participants to explore their experiences with a service.
Pros:
- Provides in-depth information and insights.
- Allows for clarification of responses and follow-up questions.
- Can build rapport, leading to more honest answers.
Cons:
- Time-consuming and potentially expensive.
- Data may be subject to interviewer bias.
- Analysis of qualitative data can be complex and subjective.

Focus Groups
Example: Gathering a group of participants to discuss their perceptions of a brand.
Pros:
- Encourages interaction and can generate rich qualitative data.
- Diverse perspectives can emerge from group dynamics.
Cons:
- Dominant voices may skew the discussion.
- Analysis can be complicated due to group dynamics.
- May not be generalizable to a larger population.

Observations
Example: Watching and recording behavior in a natural setting, like observing customer interactions in a store.
Pros:
- Provides real-time data and insights into behavior.
- Can capture context and non-verbal cues.
Cons:
- Observer bias can influence data interpretation.
- Time-intensive and may require a lot of resources.
- Ethical concerns regarding privacy may arise.

Experiments
Example: Conducting a controlled experiment to test the effects of a new drug on patients.
Pros:
- Can establish cause-and-effect relationships.
- Allows for control over variables.
Cons:
- May lack ecological validity if conducted in artificial settings.
- Ethical considerations may limit certain types of experiments.
- Requires careful design and implementation.

Case Studies
Example: An in-depth analysis of a single organization to explore its practices and outcomes.
Pros:
- Provides detailed, context-rich information.
- Useful for exploring complex issues in real-life contexts.
Cons:
- Findings may not be generalizable to larger populations.
- Potential for researcher bias in interpretation.
- Time-consuming and resource-intensive.

Secondary Data Analysis
Example: Analyzing existing datasets from government reports, academic research, or previous surveys.
Pros:
- Cost-effective and time-saving, as the data is already collected.
- Allows for analysis of large datasets and historical trends.
Cons:
- Data may not be specific to the current research question.
- Limited control over data quality and relevance.
- Potential issues with data consistency and comparability.

Ethnography
Example: A researcher immerses themselves in a community to study cultural practices and behaviors over an extended period.
Pros:
- Provides in-depth understanding of social dynamics and cultural contexts.
- Rich qualitative data from direct experience.
Cons:
- Highly time-consuming and requires significant commitment.
- Researcher presence may alter participant behavior (observer effect).
- Data analysis can be complex and subjective.

Longitudinal Studies
Example: Following the same group of individuals over several years to assess changes in health outcomes.
Pros:
- Can track changes over time and establish sequences of events.
- Useful for studying developmental and long-term effects.
Cons:
- Time-consuming and often expensive.
- Attrition can threaten the validity of findings.
- Changes in measurement tools over time can complicate data analysis.

Content Analysis
Example: Analyzing media coverage of a particular event to identify themes and patterns.
Pros:
- Allows for systematic analysis of text, images, or other media.
- Can reveal trends and shifts in public discourse.
Cons:
- Interpretation can be subjective and open to bias.
- Requires clear coding schemes to ensure reliability.
- Time-consuming, especially for large datasets.

Study Design:

1. Between-Subjects Design
Definition: In a between-subjects design, different participants are assigned to each condition of the experiment. Each participant experiences only one condition, meaning comparisons are made between different groups of people.
Example: Suppose you want to test the effect of music on concentration. You have two groups of participants: Group 1 listens to music while studying, and Group 2 studies in silence. You compare their performance on a test afterward. Each participant is exposed to only one condition, either music or no music.

2. Within-Subjects Design
Definition: In a within-subjects design, the same participants are exposed to all conditions of the experiment. The comparisons are made within the same group of people.
Example: Using the same example of music and concentration, you might have one group of participants study in both conditions: first in silence and then while listening to music (or vice versa).
You measure their performance on a test in both conditions, and each participant serves as their own control, reducing variability due to individual differences.

3. Mixed Design
Definition: A mixed design combines elements of both between-subjects and within-subjects designs. Some factors are tested between different groups, while others are tested within the same participants.
Example: In a study on concentration and music, you could have two groups of participants, one group of older adults and one group of younger adults (between-subjects factor: age). Within each group, participants might study both with music and in silence (within-subjects factor: music condition). This way, age is a between-subjects factor, while the presence or absence of music is a within-subjects factor.

Key Differences:
- Between-Subjects: Different people in each condition. Example: one group with music, one without.
- Within-Subjects: The same people experience all conditions. Example: the same people study with and without music.
- Mixed Design: Some factors use different groups (between-subjects), and other factors are tested within the same participants (within-subjects). Example: groups divided by age (between), and everyone experiences both music conditions (within).

Advantages and Disadvantages:
- Between-Subjects: Reduces the risk of carryover effects (where one condition affects performance in the next), but requires more participants to account for individual differences.
- Within-Subjects: Requires fewer participants and controls for individual differences, but risks carryover effects and fatigue.
- Mixed Design: Allows researchers to investigate both individual differences (e.g., age) and condition differences (e.g., music vs. silence), but is more complex to design and analyze.

Sampling Techniques:

Simple Random Sampling: A sampling technique where every individual in the population has an equal chance of being selected. This method ensures randomness and is free from bias.
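As a minimal Python sketch of simple random sampling (the participant IDs, sample size, and seed here are invented for illustration, not taken from the study guide):

```python
import random

# Hypothetical population of 100 participant IDs (illustrative only).
population = [f"P{i:03d}" for i in range(100)]

random.seed(42)  # fixed seed so the sketch is reproducible

# Simple random sampling: draw 10 individuals without replacement,
# so every member of the population is equally likely to be chosen.
sample = random.sample(population, k=10)
print(sample)
```

Because `random.sample` draws without replacement, the 10 selected IDs are distinct; a different seed yields a different, equally likely sample.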
Example: Drawing names from a hat to select participants for a study.

Stratified Random Sampling: The population is divided into distinct subgroups or strata (e.g., age, gender, income level), and random samples are taken from each stratum to ensure representation.
Example: A researcher might divide the population into age groups and randomly select participants from each age group.

Cluster Sampling: The population is divided into clusters (often based on geographic location or institutions), and entire clusters are randomly selected for the sample, rather than individuals.
Example: A researcher might select random schools (clusters) and survey all students within those selected schools.

Non-Random Sampling Methods:

Convenience Sampling (or Haphazard Sampling): A non-probability sampling technique where participants are selected based on their availability or ease of access.
Example: Surveying people at a local shopping mall because they are easily accessible.

Purposive Sampling: A non-random sampling technique where the researcher selects individuals based on specific characteristics or qualities relevant to the research study.
Example: A study on expert musicians may intentionally select participants who have a professional background in music.

Quota Sampling: A non-random technique where researchers divide the population into groups and then select participants in proportion to the population's characteristics (like age or gender), but the selection within each group is not random.
Example: If 60% of the population is female, a researcher using quota sampling would ensure that 60% of the sample is female, even though participants are not selected randomly within those quotas.

Control Measures:

Counterbalancing
Definition: A method used to control for the effects of the order of treatments by varying the order in which conditions are presented to participants.
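Counterbalancing can be sketched in a few lines of Python; the condition names and participant labels below are hypothetical, chosen only to illustrate assigning every possible presentation order equally often:

```python
import itertools

# Two hypothetical conditions (illustrative names).
conditions = ["Therapy A", "Therapy B"]

# All possible presentation orders: AB and BA.
orders = list(itertools.permutations(conditions))

# Rotate participants through the orders so each order is used
# equally often, controlling for order effects.
participants = ["p1", "p2", "p3", "p4"]
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for p, order in assignment.items():
    print(p, "->", " then ".join(order))
```

With four participants and two possible orders, two participants receive each order, so neither condition is systematically advantaged by always coming first.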
Example: In a study examining two types of therapy, one group experiences Therapy A followed by Therapy B, while another group experiences Therapy B followed by Therapy A. This helps control for any order effects.

Randomization
Definition: The process of randomly assigning participants to different groups or conditions to minimize biases and ensure that each participant has an equal chance of being placed in any group.
Example: In a drug trial, participants are randomly assigned to either the treatment group or the placebo group to control for pre-existing differences.

Control Groups
Definition: A group that does not receive the treatment or intervention being tested, providing a baseline for comparison.
Example: In an experiment to test a new educational program, one group receives the program (treatment group) while a similar group does not (control group).

Blinding
Definition: A technique where participants (single-blind) or both participants and researchers (double-blind) are unaware of the treatment assignments, to reduce bias.
Example: In a clinical trial for a new medication, neither the participants nor the researchers administering the treatment know who is receiving the actual medication versus a placebo.

Matching
Definition: Pairing participants in the experimental and control groups based on certain characteristics (e.g., age, gender) to control for those variables.
Example: In a study on the effects of exercise on mood, participants might be matched in pairs based on age and baseline mood scores, with one in each pair assigned to the exercise group and the other to the control group.

Intra-Group Counterbalancing
Definition: A technique where the order of conditions is varied among different groups of participants within a study, ensuring that each condition appears in each position equally often across groups.
Example: In a study with two treatment conditions (A and B), one group of participants might experience A first and then B, while another group experiences B first and then A. This helps control for potential order effects within the larger group of participants.

Intra-Subject Counterbalancing
Definition: A technique where the order of conditions is varied for each individual participant in a study, allowing each participant to experience all conditions in different orders.
Example: In an experiment testing two types of learning methods, one participant might first use method A and then method B, while another participant uses method B first and then method A. This controls for the influence of order on individual responses.

Intra-Group Randomization of Order
Definition: A method where the order of conditions is randomized for different groups of participants, ensuring that each condition is presented in a different order across the groups.
Example: In a study with three treatment conditions (A, B, and C), groups are formed and the order in which each group experiences these conditions is randomly assigned (e.g., Group 1: A, B, C; Group 2: C, A, B).

Intra-Subject Randomization of Order
Definition: A method where the order of conditions is randomized for each individual participant, allowing each participant to experience all conditions in a different, randomized sequence.
Example: In a cognitive task study involving three tasks (X, Y, Z), one participant may complete the tasks in the order Z, X, Y, while another participant might complete them in the order Y, Z, X. This randomization helps to mitigate order effects for each individual.

Latin Square Design
Definition: A Latin square is a type of experimental design used to control for two extraneous variables simultaneously while ensuring that every treatment appears exactly once in each row and column.
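One standard way to build such a square is the cyclic construction, in which each row shifts the treatment order by one position. A short sketch (the treatment labels A-D are illustrative):

```python
# Cyclic construction of an n x n Latin square: row i is the list of
# treatments rotated left by i positions, so every treatment appears
# exactly once in each row and exactly once in each column.
def latin_square(treatments):
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)]
            for row in range(n)]

for row in latin_square(["A", "B", "C", "D"]):
    print(" ".join(row))
# A B C D
# B C D A
# C D A B
# D A B C
```

Note that the cyclic square is only one of several valid arrangements; in practice, researchers often randomly select among valid Latin squares.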
This design is particularly useful when dealing with two blocking factors (e.g., time and participants) and helps reduce variability.

Structure: In a Latin square, the treatments are arranged in a grid where each treatment appears once per row and once per column. The size of the square is determined by the number of treatments, resulting in an n × n grid.

Example Scenario: Suppose a researcher wants to test the effectiveness of four different teaching methods (A, B, C, D) on student performance. To control for two extraneous variables, time (four teaching days) and classroom (Rooms 1-4), the researcher can use a 4 × 4 Latin square design, for example:

        Room 1  Room 2  Room 3  Room 4
Day 1:  A       B       C       D
Day 2:  B       C       D       A
Day 3:  C       D       A       B
Day 4:  D       A       B       C

Explanation of the Example:
- Rows: Each row represents a different day of teaching.
- Columns: Each column represents a different classroom.
- Treatments: Each teaching method (A, B, C, D) is assigned to a cell of the square, ensuring that each method is used exactly once per day (row) and is taught in each classroom exactly once over the four days (column).

Validity:

Validity of a study: the certainty of the conclusions of the study, given the research method applied.

1. Statistical Validity
Definition: Statistical validity refers to whether the statistical conclusions drawn from the data are accurate and reliable. It ensures that the study's results are not due to chance and that the correct statistical methods were applied.
Example: A study on the effectiveness of a new drug shows a statistically significant improvement in patient outcomes. Statistical validity would confirm that the sample size was large enough and that the proper tests (like t-tests or ANOVA) were used to support the conclusion.

2. Internal Validity
Definition: Internal validity refers to the degree to which a study establishes a cause-and-effect relationship between the independent and dependent variables, without being influenced by confounding variables.
It ensures that the observed changes in the dependent variable are directly caused by the manipulation of the independent variable.
Example: In an experiment testing whether sleep improves memory, internal validity ensures that the memory improvement is due to sleep and not to another factor, like caffeine consumption or prior knowledge of the test material.
In general, internal validity is:
- low in non-experimental research
- low in pre-experiments
- reasonable/moderate in quasi-experiments
- high in true experiments

3. Construct Validity
Definition: Construct validity refers to how well the study's measurements and procedures accurately represent the theoretical constructs they are supposed to measure. It ensures that the test measures what it claims to measure.
Example: A researcher developing a survey to measure "job satisfaction" should ensure that the questions truly capture the concept of job satisfaction and not something else, like work-life balance or general happiness. High construct validity would mean that the survey accurately reflects the specific idea of job satisfaction.

4. External Validity
Definition: External validity refers to the extent to which the results of a study can be generalized to other populations, settings, times, or contexts. It ensures that the findings can be applied beyond the specific conditions of the study.
Example: A study conducted on college students may find that regular exercise improves concentration. For high external validity, these results should be applicable to other populations, like older adults or non-students, in various environments (not just academic settings).
Required for external validity:
- Random selection of participants
- Random selection of situations

5. Criterion Validity
Definition: The extent to which a measure is related to an outcome or criterion that it should theoretically be related to.
It is often divided into predictive validity (how well a measure predicts future outcomes) and concurrent validity (how well it correlates with a measure taken at the same time).
Example: A new intelligence test is compared to an established IQ test. If the scores on both tests are highly correlated, the new test demonstrates good criterion validity.

6. Content Validity
Definition: Content validity refers to the extent to which a test or measure represents all facets of a given construct. It assesses whether the items included in a measure adequately capture the full range of the concept being studied.
Example: Consider a new test designed to measure mathematical ability in high school students. To establish content validity, the test developers would ensure that the test covers various areas of mathematics, such as algebra, geometry, and statistics, rather than focusing solely on one area. They might involve experts in mathematics education to review the test items, ensuring they represent the full scope of mathematical skills expected at that grade level. If the test includes a balanced mix of problem types that reflect the curriculum, it demonstrates strong content validity.

In summary:
- Statistical validity ensures results are not due to chance.
- Internal validity confirms a cause-and-effect relationship without confounding variables.
- Construct validity checks that the study measures what it is supposed to.
- External validity determines whether the results can be generalized to other populations or settings.
- Criterion validity evaluates how well a measure correlates with an established outcome.
- Content validity assesses whether a test or measure comprehensively captures all aspects of a given construct.

Reliability:

Reliability
Definition: The consistency or stability of a measure over time, across items, or across raters. A reliable measure produces similar results under consistent conditions.
Example: A personality test that yields the same score for a participant when taken multiple times over a short period is considered reliable.

Test-Retest Reliability
Definition: A measure of the stability of a test over time. It assesses whether the same individuals receive similar scores when tested at different points in time.
Example: If participants take a depression inventory and score 20 on the first administration, they should score similarly (e.g., 19 or 21) when retested a few weeks later, indicating good test-retest reliability.

Internal Consistency Reliability
Definition: A measure of how well the items on a test measure the same construct. It evaluates the consistency of responses across different items within the same test.
Example: In a survey measuring anxiety, if items designed to assess feelings of anxiety correlate highly with one another, the survey exhibits good internal consistency.

Cronbach's Alpha
Definition: A statistical measure of internal consistency reliability. It quantifies the degree to which a set of items measures a single construct, typically ranging from 0 to 1, with higher values indicating better reliability.
Example: A psychological scale assessing self-esteem may have a Cronbach's alpha of 0.85, suggesting high internal consistency among the items.

Interrater Reliability
Definition: The degree to which different raters or observers give consistent estimates or scores when assessing the same phenomenon.
Example: In a study where multiple therapists assess the severity of a client's symptoms, if their ratings are highly correlated, the interrater reliability is considered high.

Cohen's Kappa
Definition: A statistical measure of interrater reliability that accounts for agreement occurring by chance. It is used for categorical data.
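For two raters and categorical labels, kappa compares the observed proportion of agreement with the agreement expected by chance from each rater's marginal frequencies: kappa = (p_o - p_e) / (1 - p_e). A minimal Python sketch (the diagnosis data below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Observed agreement between two raters, corrected for chance."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of the raters' marginal proportions,
    # summed over all categories either rater used.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1.keys() | c2.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses from two raters (illustrative data).
r1 = ["disorder", "disorder", "none", "none", "disorder", "none"]
r2 = ["disorder", "none",     "none", "none", "disorder", "none"]
print(round(cohens_kappa(r1, r2), 3))  # 0.667
```

Here the raters agree on 5 of 6 cases (p_o = 0.833), but half that agreement would be expected by chance (p_e = 0.5), so kappa is 0.667 rather than 0.833.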
Example: If two psychologists independently diagnose a group of patients as either having a mental disorder or not, Cohen's Kappa can be calculated to assess how much their agreement exceeds what would be expected by chance.

Intraclass Correlation (ICC)
Definition: A statistic used to evaluate the reliability of ratings for continuous data among multiple raters. It assesses both consistency and agreement.
Example: If three judges rate the same set of therapy sessions on a scale from 1 to 10, the ICC would indicate the level of agreement among their scores.

Biases:

Sampling Bias
Definition: Occurs when a sample is not representative of the population from which it was drawn, leading to inaccurate or skewed results. Sampling bias can result from improper or non-random selection processes, causing certain groups to be overrepresented or underrepresented.
Example: If a researcher only surveys college students about their political opinions, the results may not represent the views of the general population.

Response Bias
Definition: Occurs when participants in a study provide inaccurate or misleading answers due to various factors, such as social desirability, memory issues, or misunderstanding the questions. Response bias can distort the findings of a survey or experiment.
Example: In a survey about illegal drug use, participants might underreport their behavior due to fear of judgment or legal consequences, resulting in response bias.

Selection Bias
Definition: Occurs when the sample is not representative of the population from which it is drawn.
Example: Conducting a survey about fitness habits only among gym members, excluding those who do not go to gyms.

Confirmation Bias
Definition: The tendency to search for, interpret, and remember information that confirms one's preexisting beliefs or hypotheses.
Example: A researcher only citing studies that support their theory while ignoring those that contradict it.
Publication Bias
Definition: The tendency for journals to publish positive results over negative or inconclusive ones.
Example: Studies showing that a new drug is effective are more likely to be published than studies showing it is ineffective.

Experimenter Bias
Definition: When a researcher's expectations or preferences influence the outcome of the research.
Example: A researcher unintentionally cues participants to respond in a way that supports their hypothesis.

Measurement Bias
Definition: Occurs when the tools or methods used to collect data are flawed, leading to inaccurate results.
Example: Using a faulty scale to measure participants' weight, resulting in systematic errors.

Recall Bias
Definition: When participants do not remember previous events accurately, often affecting retrospective studies.
Example: In a study on dietary habits, participants may misremember their food intake, skewing results.

Funding Bias
Definition: When research outcomes are influenced by the source of funding or sponsorship.
Example: A study funded by a pharmaceutical company may favor the drug being tested, resulting in biased conclusions.

Social Desirability Bias
Definition: When respondents provide answers they think are more socially acceptable rather than their true feelings.
Example: Participants may underreport smoking habits in a health survey.

Hawthorne Effect
Definition: When individuals alter their behavior simply because they know they are being observed.
Example: Workers improve productivity during a study simply because they are aware that researchers are watching.

Leading Question Bias
Definition: Occurs when the wording of a question suggests a certain response.
Example: Asking "How much do you enjoy our service?" instead of "What is your opinion of our service?"

Overgeneralization Bias
Definition: Making broad conclusions based on limited evidence or a small sample size.
Example: Concluding that all teenagers are irresponsible based on a study of a small group.
Biased Sampling
Definition: When certain groups are overrepresented or underrepresented in a sample.
Example: Conducting a survey in a wealthy neighborhood, leading to a skewed understanding of community opinions.

Interviewer Bias
Definition: When the interviewer's characteristics or behavior influence the responses of the interviewee.
Example: An interviewer's body language might suggest disinterest, affecting how candidly a participant responds.

Ethics:

Belmont Report:

1. Respect for Persons (Autonomy)
This principle emphasizes the importance of treating individuals as autonomous agents capable of making informed decisions, as well as offering additional protection to those with diminished autonomy.
Subtopics:
- Informed Consent: Participants must be given comprehensive information about the study, including its purpose, risks, and benefits, so they can voluntarily choose to participate.
- Voluntariness: Participation must be free from coercion or undue influence. Individuals should feel free to decline or withdraw from the study at any time without penalty.
- Protection for Vulnerable Populations: Special protections should be in place for populations that may have limited autonomy, such as children, prisoners, the elderly, or individuals with mental disabilities. This ensures their consent is obtained through legal representatives if needed.

2. Beneficence
The principle of beneficence refers to the obligation to maximize potential benefits while minimizing possible harms to participants. This requires a careful balance of risks and benefits in the study design.
Subtopics:
- Risk-Benefit Analysis: Researchers must assess the potential risks to participants and ensure that they are justified by the expected benefits to the participants or society. This is done by weighing possible harms against the scientific value of the study.
- Minimizing Harm: Efforts should be made to reduce physical, emotional, social, or psychological harm to participants through safe study procedures, confidentiality, and privacy protection.
- Maximizing Benefits: Researchers should strive to ensure that the potential benefits (to participants, society, or scientific knowledge) outweigh the risks involved in the study.

3. Justice
The principle of justice concerns fairness in the distribution of the benefits and burdens of research. It requires that no group be unfairly burdened by the risks of research or excluded from its potential benefits.
Subtopics:
- Fair Subject Selection: Research participants should be selected fairly, ensuring that the selection process is free of bias or exploitation. For example, vulnerable or disadvantaged groups should not be disproportionately selected for high-risk research unless the research specifically benefits them.
- Equitable Distribution of Risks and Benefits: The benefits of research should not be limited to privileged groups while less-advantaged groups bear the risks. This principle ensures that no group is unduly exposed to risks while others reap the rewards.
- Avoiding Exploitation: Care should be taken not to take advantage of vulnerable populations, particularly those with limited autonomy or resources, in ways that might expose them to unnecessary risk.

Summary of the Ethical Principles:
- Respect for Persons emphasizes autonomy and protection for vulnerable individuals.
- Beneficence seeks to maximize benefits and minimize harm.
- Justice ensures fair and equitable distribution of research risks and rewards.

The APA (American Psychological Association) Ethics Code outlines five general principles of ethics that provide a framework for ethical conduct in psychological practice, research, and education. These principles are aspirational, meaning they guide the behavior of psychologists but are not enforceable rules.
They are meant to inspire and encourage high standards of professional conduct. Here’s a breakdown of each principle and its significance:

1. Principle A: Beneficence and Nonmaleficence
This principle emphasizes the importance of promoting the welfare of clients, research participants, and the community, while also avoiding harm.
Beneficence means psychologists should strive to benefit those with whom they work.
Nonmaleficence refers to the duty to avoid causing harm. Psychologists are expected to assess the potential risks and benefits of their actions and strive to minimize harm. This principle applies to therapy, research, and other professional activities.
Example: A psychologist ensures that a treatment plan benefits the client without causing unnecessary distress. Similarly, in research, the psychologist takes care to prevent physical or emotional harm to participants.

2. Principle B: Fidelity and Responsibility
This principle highlights the importance of trustworthiness, accountability, and professional responsibility in relationships with clients, colleagues, and society.
Psychologists are expected to uphold professional standards of conduct and accept responsibility for their behavior. They should establish trust with those they work with, including clients, research participants, and colleagues. Psychologists must also be aware of their professional responsibilities toward society, ensuring that their work contributes positively to the public good.
Example: A therapist maintains professional boundaries with clients and follows through on commitments. A researcher reports accurate data and avoids misleading practices.

3. Principle C: Integrity
This principle stresses the importance of honesty, accuracy, and truthfulness in all professional activities.
Psychologists should avoid fraudulent activities, misrepresentation, and deception unless deception is ethically justified in specific research settings (with appropriate safeguards in place).
They should promote honesty in scientific reporting, clinical practice, and professional communication.
Example: A psychologist accurately presents research findings and does not fabricate or manipulate data. In therapy, a psychologist avoids making false promises about treatment outcomes.

4. Principle D: Justice
The principle of justice calls for fairness and equity in access to psychological services, benefits, and treatments.
Psychologists should ensure that all people have access to and can benefit from the contributions of psychology, regardless of their background, status, or identity. They must also be aware of their own biases and limits of competence, making sure they don’t inadvertently harm or exclude individuals.
Example: A psychologist ensures that services are available to underserved populations and actively works to avoid bias in diagnosis or treatment, ensuring equitable treatment for all clients.

5. Principle E: Respect for People's Rights and Dignity
This principle emphasizes the importance of respecting the dignity, autonomy, and rights of all individuals.
Psychologists should respect the privacy, confidentiality, and self-determination of clients and research participants. They should be aware of cultural, individual, and role differences, such as those based on age, gender, race, ethnicity, religion, disability, sexual orientation, and socioeconomic status. Psychologists must take steps to eliminate biases, avoid discrimination, and respect the rights of individuals to make their own choices.
Example: A psychologist obtains informed consent before conducting therapy or research, respecting the autonomy of the client or participant. The psychologist is also sensitive to cultural differences in working with diverse populations.

Summary of APA's Five Ethical Principles:
Beneficence and Nonmaleficence: Strive to benefit others and do no harm.
Fidelity and Responsibility: Be trustworthy, maintain professional standards, and take responsibility for your actions.
Integrity: Promote honesty and accuracy in all professional activities.
Justice: Ensure fairness and equity, and avoid biases in practice and research.
Respect for People's Rights and Dignity: Honor the dignity, privacy, and autonomy of individuals and groups, being mindful of diversity.

Generalization:
The quality of a sample is determined by its representativeness.
Representative sample: the characteristics of the sample match the characteristics of the population (except for size).
Example:
Population: men/women/other 45%/45%/10%; educational level (low/medium/high) 50%/30%/20%
Representative sample (also, approximately): men/women/other 45%/45%/10%; educational level (low/medium/high) 50%/30%/20%
Probability sampling is the only way to obtain a representative sample. In non-probability sampling, the sample is likely to be ‘biased’ (i.e. not representative).

Generalizing:
Replications:
- Exact replication
- Conceptual replication: the independent variable and/or dependent variable are operationalized in a different manner.
Based on multiple studies:
- Literature review
- Meta-analysis
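The claim that probability sampling yields an (approximately) representative sample, while non-probability sampling tends to bias it, can be illustrated with a short simulation. This is a minimal sketch using a hypothetical population whose proportions mirror the example above; all names and numbers are illustrative, not from the study guide:

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the simulation is reproducible

# Hypothetical population of 10,000 people mirroring the example above:
# gender 45% men / 45% women / 10% other,
# education 50% low / 30% medium / 20% high.
population = [
    (random.choices(["men", "women", "other"], weights=[45, 45, 10])[0],
     random.choices(["low", "medium", "high"], weights=[50, 30, 20])[0])
    for _ in range(10_000)
]

def proportions(people):
    """Relative frequencies of gender and education in a group."""
    n = len(people)
    gender = Counter(g for g, _ in people)
    edu = Counter(e for _, e in people)
    return ({k: round(v / n, 2) for k, v in gender.items()},
            {k: round(v / n, 2) for k, v in edu.items()})

# Probability sampling: every member has an equal, known chance of selection.
srs = random.sample(population, 500)

# Non-probability (convenience) sampling: over-represents highly educated
# respondents, like surveying only a wealthy neighborhood.
biased = [p for p in population if p[1] == "high"][:500]

print("Population:   ", proportions(population))
print("Random sample:", proportions(srs))     # close to population values
print("Biased sample:", proportions(biased))  # education badly skewed
```

Running this shows the simple random sample landing within a few percentage points of the population proportions, while the convenience sample's education distribution is entirely "high" — the kind of bias that blocks generalization to the population.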