Research Methods Notes
Summary
These notes detail the major types of research questions (descriptive, relational, and causal) and their importance in research design, along with the structure and components of research, reasoning approaches, philosophy of science, and measurement theory.
MODULE 1

Five Big Words in Social Research (Trochim, n.d.)

1. Theoretical:
- Refers to the development, exploration, or testing of theories about social phenomena.
- Involves understanding how the world operates based on theoretical frameworks.

2. Empirical:
- Based on observations and measurements of reality.
- Involves collecting data and evidence to support or refute theories.

3. Nomothetic:
- Pertains to general laws or rules that apply to groups or populations (the general case).
- Contrasts with the idiographic approach, which focuses on individual cases.

4. Probabilistic:
- Acknowledges that certainty is unattainable in social research.
- Findings are often expressed in terms of probabilities rather than definitive conclusions.

5. Causal:
- Involves examining cause-effect relationships within social phenomena.
- Important for understanding how different variables influence one another.

Types of Research Questions (Trochim, n.d.)

1. Descriptive Questions:
- Aim: To describe what is happening or what exists in a particular context.
- Example: Public opinion polls that measure the proportion of people holding various opinions (e.g., voting preferences).
- Characteristics: Focus on providing a snapshot of a situation without exploring relationships or causes.

2. Relational Questions:
- Aim: To examine the relationships between two or more variables.
- Example: Investigating whether there is a correlation between gender and voting preference.
- Characteristics: These studies look for patterns or associations but do not establish causation.

3. Causal Questions:
- Aim: To determine whether one or more variables cause or affect one or more outcome variables.
- Example: Analyzing whether a political advertising campaign influences voter preferences.
- Characteristics: Causal studies are more complex and require demonstrating that a relationship exists between cause and effect.
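The difference between a descriptive and a relational question can be made concrete with a small computation. The sketch below uses invented survey data (the counts and candidate labels are illustrative, not from Trochim) to answer a descriptive question with a simple proportion, and a relational question with a chi-square test of independence computed from first principles. Note that even a large chi-square value only indicates an association, not causation.

```python
# Hypothetical survey responses as (gender, preference) pairs.
from collections import Counter

survey = [("F", "A")] * 30 + [("F", "B")] * 20 + [("M", "A")] * 15 + [("M", "B")] * 35
n = len(survey)

# Descriptive question: what proportion of respondents prefers candidate A?
p_a = sum(1 for _, pref in survey if pref == "A") / n
print(f"Proportion preferring A: {p_a:.2f}")  # 45 of 100 -> 0.45

# Relational question: is preference associated with gender?
# Chi-square statistic for the 2x2 gender-by-preference table:
# sum over cells of (observed - expected)^2 / expected,
# where expected = (row total * column total) / n.
counts = Counter(survey)
genders, prefs = ("F", "M"), ("A", "B")
row = {g: sum(counts[(g, p)] for p in prefs) for g in genders}
col = {p: sum(counts[(g, p)] for g in genders) for p in prefs}
chi2 = sum(
    (counts[(g, p)] - row[g] * col[p] / n) ** 2 / (row[g] * col[p] / n)
    for g in genders for p in prefs
)
print(f"Chi-square: {chi2:.2f}")  # larger values suggest a stronger association
```

In practice a library routine such as SciPy's `chi2_contingency` would also supply a p-value; the hand computation is shown only to make the logic of a relational analysis visible.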
Cumulative Nature of Research Questions
- The three types of questions can be viewed as cumulative:
- Descriptive studies provide the foundation by measuring or observing variables.
- Relational studies build on this by exploring how these variables relate to each other.
- Causal studies assume both descriptive and relational understanding to establish cause-effect relationships.

Importance of Research Questions
- Clearly defined research questions guide the research design and methodology.
- They help researchers focus their studies and determine the appropriate analytical techniques.

Structure of Research (Trochim, n.d.)

1. General Structure:
- Research projects typically follow a structured process that can be visualized as an hourglass shape.
- The process begins broadly and narrows down to specific questions, then expands again during analysis and interpretation.

2. Initial Area of Interest:
- Research starts with a broad area of interest or a general problem that the researcher wishes to study.
- Example: Interest in how technology impacts student performance in mathematics.

3. Narrowing the Focus:
- The initial broad interest is refined into a more specific research question that can be feasibly studied.
- This often involves formulating a hypothesis that articulates what the researcher expects to find.
- Example: Hypothesizing that a specific method of computer instruction will improve math performance in a particular student group.

4. Direct Measurement:
- At the narrowest point of the research hourglass, the researcher engages in direct measurement or observation of the specific question of interest.
- This involves collecting data relevant to the hypothesis.

5. Data Analysis:
- After data collection, the researcher analyzes the data using various methods to understand the findings.
- This step may involve statistical analysis, thematic analysis, or other analytical techniques.

6. Returning to Theory:
- The research process often cycles back to theory, where findings may lead to new insights or modifications of existing theories.
- This iterative process allows researchers to refine their understanding of the phenomena being studied.

7. Importance of Structure:
- A well-defined structure helps ensure that the research is systematic, organized, and focused.
- It aids in maintaining clarity throughout the research process and facilitates effective communication of findings.

Components of a Study

1. Research Problem:
- The starting point of any research study, identifying a specific issue or area of interest that needs investigation.
- Example: Understanding the factors that contribute to unemployment among recent graduates.

2. Research Question:
- A clear, focused question that the study aims to answer, often derived from the research problem.
- It guides the direction of the research and is typically framed in the context of existing theories.
- Example: What programs are most effective in helping recent graduates secure employment?

3. Hypothesis:
- A specific, testable statement predicting the relationship between variables.
- It operationalizes the research question, providing a clear expectation of the study's outcomes.
- Example: The implementation of a mentorship program will significantly increase employment rates among recent graduates compared to those who do not participate.

4. Variables:
- In causal studies, at least two major variables are identified: the cause (independent variable) and the effect (dependent variable).
- The cause is often a program, treatment, or intervention, while the effect is the outcome being measured.
- Example: The mentorship program (cause) and the employment rate (effect).

5.
Units of Analysis:
- Refers to the entities being studied, which can include individuals, groups, organizations, or geographical areas.
- The choice of units is crucial for sampling and data collection.
- Example: Recent graduates from specific universities or regions.

6. Sampling:
- The process of selecting a representative subset of the population to participate in the study.
- A distinction is made between the theoretical population (the entire group of interest) and the actual sample used in the study.
- Example: Selecting a sample of graduates from multiple universities to ensure diversity.

7. Research Design:
- The overall strategy that outlines how the research will be conducted, including the methods for data collection and analysis.
- It determines how participants are assigned to different conditions (e.g., treatment vs. control groups).
- Example: A randomized controlled trial to assess the effectiveness of the mentorship program.

8. Data Collection Methods:
- Techniques used to gather information, which may include surveys, interviews, observations, or experiments.
- The choice of method depends on the research question and design.
- Example: Conducting surveys to measure employment outcomes and participant satisfaction with the mentorship program.

9. Data Analysis:
- The process of interpreting the collected data to draw conclusions and answer the research question.
- This may involve statistical analysis, thematic analysis, or other analytical techniques.
- Example: Using statistical tests to compare employment rates between participants and non-participants of the mentorship program.

Deduction & Induction (Trochim, n.d.)

1. Deductive Reasoning:
- Definition: A logical process that moves from general principles to specific conclusions. It is often referred to as a "top-down" approach.
- Process:
- Start with a general theory or hypothesis.
- Narrow down to specific predictions or hypotheses that can be tested.
- Collect observations or data to confirm or refute the hypotheses.
- Example (a classic syllogism):
- Premise: All humans are mortal.
- Premise: Socrates is a human.
- Conclusion: Therefore, Socrates is mortal.
- Characteristics:
- Deductive reasoning is more structured and focused on testing existing theories.
- It leads to definitive conclusions if the premises are true.

2. Inductive Reasoning:
- Definition: A logical process that moves from specific observations to broader generalizations and theories. It is often referred to as a "bottom-up" approach.
- Process:
- Begin with specific observations or data.
- Identify patterns or regularities in the data.
- Formulate tentative hypotheses or general conclusions based on the observed patterns.
- Example:
- Observation: The sun has risen in the east every day observed.
- Conclusion: The sun will rise in the east every day.
- Characteristics:
- Inductive reasoning is more exploratory and open-ended, especially at the beginning of research.
- It allows for the development of new theories based on empirical evidence.

3. Comparison of Deduction and Induction:
- Direction of Reasoning:
- Deduction: General to specific.
- Induction: Specific to general.
- Nature of Conclusions:
- Deduction: Conclusions are definitive if premises are true.
- Induction: Conclusions are probable and may require further testing.
- Use in Research:
- Deductive reasoning is often used in hypothesis testing and experimental research.
- Inductive reasoning is commonly used in exploratory research and qualitative studies.

4. Integration of Both Approaches:
- Most research involves a combination of both deductive and inductive reasoning.
- Researchers may start with inductive reasoning to explore a phenomenon and then use deductive reasoning to test specific hypotheses derived from their findings.
- This cyclical process enhances the robustness of research by allowing for theory development and testing.

Positivism and Post-Positivism (Trochim, n.d.)

1.
Positivism:
- Definition: A philosophical theory that asserts that knowledge is primarily derived from empirical evidence and observable phenomena. It emphasizes the use of the scientific method to uncover truths about the world.
- Key Characteristics:
- Empiricism: Knowledge is based on observable and measurable facts. The emphasis is on data collection through experiments and observations.
- Determinism: The belief that the world operates according to natural laws and that events can be predicted based on these laws.
- Objective Reality: Positivists hold that there is an objective reality that can be understood through scientific inquiry.
- Rejection of Metaphysics: Positivism dismisses metaphysical claims that cannot be tested or observed.
- Research Approach:
- Typically involves quantitative methods, such as experiments and surveys, to test hypotheses and establish causal relationships.
- Aims for generalizability and seeks to produce laws or theories that apply broadly.

2. Post-Positivism:
- Definition: A philosophical stance that emerged as a critique of positivism, acknowledging the limitations of scientific inquiry and the complexity of reality. It recognizes that while there is an objective reality, our understanding of it is always fallible and subject to revision.
- Key Characteristics:
- Critical Realism: Post-positivists believe in an independent reality but recognize that our observations and theories about it are imperfect and can contain errors.
- Fallibility of Knowledge: Emphasizes that all scientific knowledge is provisional and subject to change as new evidence emerges.
- Probabilistic Nature: Accepts that conclusions drawn from research are often probabilistic rather than certain, reflecting the complexities of social phenomena.
- Use of Multiple Methods: Encourages the use of both qualitative and quantitative methods to gain a more comprehensive understanding of the research problem.
- Research Approach:
- Involves a mix of deductive and inductive reasoning, allowing for theory development and testing.
- Focuses on understanding context, meaning, and the subjective experiences of individuals.

3. Comparison of Positivism and Post-Positivism:
- View of Reality:
- Positivism: Believes in a single, objective reality that can be discovered through scientific methods.
- Post-Positivism: Acknowledges an objective reality but emphasizes that our understanding is always limited and influenced by context.
- Approach to Knowledge:
- Positivism: Seeks to establish universal laws and generalizations.
- Post-Positivism: Accepts that knowledge is probabilistic and context-dependent.
- Methodological Implications:
- Positivism: Primarily quantitative methods, focusing on measurement and statistical analysis.
- Post-Positivism: A combination of qualitative and quantitative methods, valuing multiple perspectives and triangulation.

4. Implications for Research:
- Researchers adopting a positivist approach may prioritize experiments and surveys to test hypotheses and seek generalizable results.
- Those embracing post-positivism may focus on understanding the complexities of social phenomena, using diverse methods to capture different dimensions of reality.

Measurement Theory and Applications for the Social Sciences (Bandalos, 2018)

1. Historical Context of Testing:
○ Ancient Greece emphasized utilitarian testing focused on rhetoric and morality for citizenship (T1).
○ Testing was subjective, based on teacher or audience judgments, with informal scoring methods (T1).

2. Nature of Constructs:
○ Constructs are theoretical entities representing behaviors or characteristics (e.g., creativity, intelligence) that are not directly observable (T4).
○ Measurement relies on indirect methods, using samples of behavior to infer constructs (T4).

3.
Measurement Challenges:
○ Tests often represent limited samples of behavior; not all possible questions or behaviors can be observed (T2).
○ There is no single correct method for measuring constructs; various approaches can yield different results (T2).
○ Issues such as test anxiety, misunderstanding of questions, and response styles can lead to inaccurate measurements (T4).

4. Test Development:
○ Researchers must design tasks that effectively elicit the desired construct (T6).
○ Tasks should be standardized to ensure fairness (e.g., time limits, no outside help) (T2).

5. Examples of Measurement:
○ Alfred Binet focused on higher mental abilities and developed tests for various cognitive functions (T5).
○ Different methods (e.g., performance assessments, interviews) can be used to measure the same construct (T2).

6. Purpose of Measurement:
○ Understanding measurement pitfalls is crucial for developing, administering, and interpreting tests effectively (T4).

Problems in Social Science Measurement

1. Limited Samples of Behavior:
○ Tests are based on a limited sample of possible behaviors; not every question or behavior can be observed (T2).
○ This limitation can lead to incomplete or biased measurements of constructs.

2. No One Correct Method:
○ There is no universally accepted method for measuring a construct; different approaches can yield varying results (T2).
○ For example, the ability to apply knowledge can be assessed through multiple-choice tests, performance assessments, or interviews.

3. Measurement Errors:
○ Measurement errors are inherent in social science testing and can arise from various sources (T3). Common errors include:
Test Anxiety: Anxiety may prevent test takers from performing to their potential.
Language Proficiency: Limited English proficiency can lead to misinterpretation of tasks or instructions.
Socially Desirable Responding: Respondents may answer in a way they believe is more acceptable rather than truthfully.
Response Styles: Variations in how respondents choose answers (e.g., favoring neutral or extreme options) can skew results.
Malingering: Exaggerating symptoms to obtain a specific diagnosis can distort measurement in psychological assessments.

4. Indirect Measurement:
○ Constructs are measured indirectly, relying on behavior samples that may not accurately reflect the construct (T4).
○ The behavior sample may not elicit the intended construct due to insufficient understanding of the construct or measurement method.

5. Implications for Test Development:
○ Developers must be aware of potential errors and strive to create tests that minimize these issues (T3).
○ Continuous investigation into measurement error is essential for improving testing procedures and outcomes (T3).

Measurement Theory

1. Definition of Measurement Theory:
○ Measurement theory is the study of methods for measuring constructs and understanding the problems associated with these methods (T5).
○ It encompasses the development of tests that aim to minimize measurement error and yield accurate representations of the desired constructs.

2. Importance of Measurement Theory:
○ Good measurement is crucial for diagnosing learning disabilities, personality disorders, and studying individual differences (T5).
○ The validity of theories in social sciences relies heavily on the quality of the measurements used to test them (T5).

3. Measurement Error:
○ Measurement errors are inherent in social science assessments and can affect the accuracy of test results (T5).
○ Understanding the impact of these errors is a key focus of measurement theory, as it helps improve testing procedures.

4. Constructs and Their Measurement:
○ Constructs are theoretical entities that cannot be directly observed; measurement theory seeks to develop reliable methods to infer these constructs from observable behaviors (T4).
○ The challenge lies in ensuring that the tasks used in tests effectively elicit the intended constructs (T4).

5.
Role of Psychometrics:
○ Measurement theory is closely related to psychometrics, which focuses on the development and evaluation of psychological tests (T5).
○ Psychometricians work to create tests that are as free from error as possible and that provide meaningful measures of constructs.

6. Application of Measurement Theory:
○ Measurement theory informs the design, administration, and interpretation of tests in various fields, including education, psychology, and social research (T5).
○ It emphasizes the need for rigorous testing standards and methodologies to ensure the reliability and validity of measurements.

Four Levels of Measurement

1. Nominal Level:
○ Definition: The nominal level is the most basic level of measurement, where data is categorized without any order or ranking (T6).
○ Characteristics: Data is classified into distinct categories (e.g., gender, race, or types of animals). No mathematical operations can be performed on nominal data.
○ Example: Assigning numbers to different types of fruits (1 for apples, 2 for oranges) without implying any order.

2. Ordinal Level:
○ Definition: The ordinal level involves ordered categories, where the order of values matters, but the differences between them are not uniform (T6).
○ Characteristics: Data can be ranked (e.g., satisfaction ratings from 1 to 5). The exact distance between ranks is not known or consistent.
○ Example: Ranking students based on their performance (1st, 2nd, 3rd) without knowing the exact score differences.

3. Interval Level:
○ Definition: The interval level of measurement has ordered categories with equal intervals between values, but no true zero point (T6).
○ Characteristics: Allows for the calculation of differences between values (e.g., temperature in Celsius). Ratios are not meaningful because there is no true zero (e.g., 20°C is not twice as hot as 10°C).
○ Example: The IQ scale, where the difference between scores is meaningful, but a score of 0 does not indicate a complete absence of intelligence.

4. Ratio Level:
○ Definition: The ratio level is the highest level of measurement, featuring ordered categories, equal intervals, and a true zero point (T6).
○ Characteristics: All mathematical operations are permissible, including addition, subtraction, multiplication, and division. A true zero indicates the absence of the property being measured (e.g., weight, height).
○ Example: Measuring height in centimeters, where 0 cm means no height, and 180 cm is twice as tall as 90 cm.

Hierarchy: The levels of measurement are hierarchical, with nominal being the lowest and ratio being the highest.
Mathematical Operations: The ability to perform mathematical operations increases with the level of measurement, from none in nominal to all in ratio.
Application: Understanding these levels is crucial for selecting appropriate statistical analyses and interpreting data correctly in research.

Criticisms of Stevens’s Levels of Measurement

1. Operationalism:
○ Critics argue that Stevens’s definition of measurement is based on operationalism, which assumes that a measure defined at a particular level inherently possesses the properties of that level (T6).
○ This assumption lacks empirical verification, leading to concerns about the validity of the classifications.

2. Lack of Quantitative Structure:
○ Joel Michell (1986, 1997) contends that Stevens’s levels do not adhere to the rules of quantitative structure, which require that numeric relations (such as additivity or ratios) between scale points can be verified (T6).
○ Michell argues that without a method to test these properties, the levels of measurement may not be truly quantitative.

3.
Ambiguity in Measurement Properties:
○ Critics point out that Stevens did not provide clear methods for determining whether the properties associated with each level (e.g., equal intervals for interval measures) actually hold in practice (T6).
○ This ambiguity raises questions about the reliability of measurements classified under Stevens’s framework.

4. Inflexibility of the Hierarchy:
○ The strict hierarchical nature of Stevens’s levels may not accommodate the complexities of certain measures.

Brief History of Testing

Overview: Testing has a long and varied history, influenced by different cultures and purposes.
Key Influences: Major influences include civil service examinations in China, early psychological assessments in Europe and America, and educational testing practices.
Evolution: The methods and purposes of testing have evolved from simple assessments to complex evaluations of individual differences and abilities.

The Chinese Civil Service Examinations

Historical Context: Dating back to as early as 2200 B.C.E., these examinations were initially used to evaluate civil servants (T6).
Purpose: They were designed to assess candidates for government positions, ensuring that only qualified individuals were selected.
Longevity: The examinations continued until 1905, reflecting their importance in governance and administration.

Testing in Ancient Greece

Nature of Testing: Testing in ancient Greece could be rigorous and often involved public scrutiny.
Methods: Various forms of assessment were used, including oral examinations and competitions in rhetoric and philosophy.
Cultural Significance: Testing was tied to social status and intellectual achievement, influencing educational practices.

Early European Testing

Development: Early testing in Europe focused on assessing knowledge and skills, particularly in educational settings.
Influence of the Renaissance: The Renaissance period saw a renewed interest in standardized assessments as a means of evaluating learning.
Emergence of Written Exams: Written examinations became more common, with formal structures established for assessing student performance.

Beginnings of Psychological Measurement

Focus on Sensation: Early psychological measurement was concerned with physical sensations and reaction times (T5).
Influence of Physiology: Researchers trained in biology and physiology applied measurement techniques to study mental processes.
Shift in Focus: Over time, the emphasis shifted from physical measures to understanding individual differences in psychological traits.

Binet’s Contributions to Intelligence Testing

Development of the First Intelligence Test: In 1905, Alfred Binet developed the first standardized intelligence test to identify students needing special assistance (T5).
Concept of Mental Age: Binet introduced the concept of mental age, comparing a child's performance to that of peers.
Impact: Binet's work laid the foundation for future intelligence testing and the study of cognitive abilities.

Testing in the United States

Adoption of European Methods: The U.S. adopted and adapted testing methods from Europe, particularly in education and psychology.
Standardization: Efforts were made to standardize tests to ensure fairness and reliability in assessments.
Growth of Psychological Testing: The early 20th century saw a significant increase in the use of psychological tests in various fields.

The Stanford–Binet Scale

Revision of Binet’s Test: Lewis Terman revised Binet's original test, creating the Stanford–Binet Scale in 1916.
Focus on Intelligence Quotient (IQ): The scale introduced the concept of IQ, providing a numerical representation of intelligence.
Widespread Use: The Stanford–Binet Scale became one of the most widely used intelligence tests in the U.S.

Group Testing

Emergence of Group Tests: Group testing methods were developed to assess large numbers of individuals efficiently.
Advantages: Group tests allowed for quicker administration and scoring compared to individual assessments.
Applications: These tests were used in educational settings and military assessments.

The Army Alpha

Purpose: Developed during World War I, the Army Alpha test was designed to assess the cognitive abilities of military recruits.
Significance: It was one of the first large-scale group intelligence tests, influencing future testing practices.
Impact on the Military: The test helped identify suitable candidates for various military roles based on their cognitive abilities.

Group Achievement Tests

Assessment of Academic Skills: Group achievement tests were created to evaluate students' knowledge in specific subjects.
Standardization: These tests aimed to provide a standardized measure of academic performance across different populations.
Use in Education: They became common in schools to assess student learning and inform instructional practices.

Occupational Interest Inventories

Purpose: These inventories assess individuals' interests and preferences related to various occupations.
Development: They were developed to help individuals make informed career choices based on their interests.
Application: Used in career counseling and guidance, these inventories assist in matching individuals with suitable career paths.

Testing in Business and Industry

Employee Selection: Testing in business focuses on selecting and evaluating employees based on their skills and abilities.
Types of Tests: Various assessments, including cognitive ability tests and personality assessments, are used to inform hiring decisions.
Impact on Organizational Performance: Effective testing can enhance workforce productivity and job satisfaction.

Personality Assessment

Focus on Individual Differences: Personality assessments aim to measure traits, behaviors, and characteristics of individuals.
Methods: Various methods, including self-report questionnaires and projective tests, are used to assess personality.
Applications: These assessments are used in clinical psychology, counseling, and organizational settings to understand behavior and improve interpersonal dynamics.

MODULE 2

Why is it so hard to give good directions? (Stafford, 2012)

1. Psychological Challenges in Giving Directions
- Curse of Knowledge: Once we learn something, it becomes difficult to understand how it appears to someone who doesn't know it. This leads to assumptions that others have the same knowledge.
- Example: Describing a tent as "the blue one" is unhelpful if there are many blue tents.

2. Theory of Mind
- Definition: The ability to understand others' beliefs and desires, which is crucial for effective communication.
- Human Distinction: Unlike other species, humans are naturally inclined to think about how others perceive the world.
- Mental Simulation: We often simulate what we would want in someone else's position, which can lead to misunderstandings.

3. Improving Direction-Giving Skills
- Deliberate Strategies:
- Check for jargon that may not be understood by others.
- Specify what can be ignored in directions (e.g., "There’s a pink door, but that’s not it").
- Practice: Developing "mind-mindedness" can enhance our ability to give clear directions and improve interpersonal relationships.

4. Importance of Perspective
- Understanding that everyone has different thoughts and beliefs is essential for effective communication.
- Good direction-giving requires a proper understanding of what the other person knows and needs.

5. Humor and Misunderstanding
- Example: A joke can illustrate how the same instruction admits different interpretations.
- This highlights the need to appreciate the knowledge and desires of others to avoid miscommunication.

Definitions of poverty: Twelve clusters of meaning (Spicker, 2010)

1.
POVERTY AS A MATERIAL CONCEPT: Views poverty primarily as a lack of essential material goods and services necessary for survival and well-being. It emphasizes the importance of basic needs, such as food, clothing, shelter, and healthcare, and recognizes that poverty manifests through a pattern of deprivation over time due to limited resources. Individuals experiencing this form of poverty are unable to secure the necessities that allow for a decent standard of living.

NEED: This aspect emphasizes the fundamental requirements for survival and well-being, such as food, clothing, shelter, and healthcare. Poverty is understood as the inability to meet these basic needs.

A PATTERN OF DEPRIVATION: This refers to the recurring lack of essential goods and services over time. It highlights that poverty is not just a one-time event but a continuous state that affects individuals and families, leading to chronic deprivation.

LIMITED RESOURCES: This concept focuses on the insufficient financial and material resources available to individuals or households. It underscores that poverty arises from a lack of access to the means necessary to secure a decent standard of living.

2. POVERTY AS ECONOMIC CIRCUMSTANCES: Frames poverty in terms of an individual's or household's economic status and position within the broader economic system. It highlights the relationship between poverty and factors such as standard of living, income inequality, and overall economic position. Poverty is understood as a condition resulting from insufficient financial resources, which limits access to opportunities and contributes to a lower quality of life compared to societal norms.

STANDARD OF LIVING: This refers to the overall quality of life and access to goods and services that individuals or families can afford. Poverty is viewed as living below a certain standard that is considered acceptable in society.
INEQUALITY: This aspect highlights the disparities in wealth and income distribution within a society. Poverty is seen as a result of systemic inequalities that prevent equitable access to resources and opportunities.

ECONOMIC POSITION: This focuses on an individual's or household's financial status in relation to the broader economy. It considers factors such as income level, employment status, and economic mobility, which contribute to the experience of poverty.

3. SOCIAL CIRCUMSTANCES: Emphasizes the social dimensions of poverty, focusing on how social class, dependency, and exclusion contribute to the experience of being poor. It recognizes that poverty is not only an economic issue but also a social one, where individuals may face barriers to participation in society due to their social status. Key elements include the lack of basic security and entitlements, which can perpetuate cycles of poverty and marginalization.

SOCIAL CLASS: This concept examines how an individual's or group's social class affects their access to resources and opportunities. Poverty is often linked to lower social class status, which can perpetuate cycles of disadvantage.

DEPENDENCY: This refers to reliance on social welfare systems or external assistance for survival. It highlights how poverty can create a cycle of dependency, where individuals may struggle to achieve self-sufficiency.

LACK OF BASIC SECURITY: This aspect emphasizes the absence of essential protections and guarantees, such as stable housing, employment, and access to healthcare. It underscores how insecurity contributes to the experience of poverty.

LACK OF ENTITLEMENT: This refers to the absence of rights or claims to resources and services that can help individuals escape poverty. It highlights how social policies and structures can create barriers to accessing necessary support.

EXCLUSION: This concept focuses on the social and economic exclusion of individuals or groups from participating fully in society.
Poverty is viewed as a condition that isolates people from social networks, opportunities, and resources.
4. POVERTY AS A MORAL JUDGEMENT:
- Considers poverty a moral issue, in which the hardships faced by people living in poverty are viewed as unacceptable by societal standards.
- Emphasizes the ethical responsibility of society to address the injustices and suffering associated with poverty.
- Calls for collective action and policy interventions to alleviate the conditions of those affected, framing poverty as a societal failure to provide for all members of the community.
- UNACCEPTABLE HARDSHIP: Frames poverty as a moral issue, where the suffering and deprivation experienced by individuals are seen as unacceptable by societal standards, calling for ethical consideration and collective responsibility to improve the conditions of those affected.

2.2: Concepts, Constructs, and Variables (Libretexts, 2021)
1. Definitions
- Constructs: Abstract concepts specifically chosen to explain phenomena. They can be simple (e.g., weight) or multi-dimensional (e.g., communication skills).
- Variables: Measurable representations of constructs. They can vary and are used to operationalize constructs in research.
2. Types of Definitions
- Dictionary Definitions: Circular and not useful for scientific research (e.g., defining "attitude" as "disposition").
- Operational Definitions: Specify how constructs will be measured (e.g., defining "income" as monthly or annual, before-tax or after-tax).
3. Levels of Abstraction
- Concepts can range from precise and objective (e.g., weight) to abstract and complex (e.g., personality).
- Multi-dimensional constructs consist of several related concepts, while unidimensional constructs are simpler.
4. Research Framework
- Research operates on two planes:
  - Theoretical Plane: Where constructs are conceptualized.
  - Empirical Plane: Where variables are operationalized and measured.
5. Types of Variables
- Independent Variables: Explain or influence other variables.
- Dependent Variables: Explained or influenced by independent variables.
- Mediating Variables: Explain the relationship between independent and dependent variables.
- Moderating Variables: Influence the strength or direction of the relationship between independent and dependent variables.
- Control Variables: Extraneous variables that must be controlled in a study.
6. Example Application
- If intelligence (independent variable) influences academic achievement (dependent variable), effort can be a moderating variable.
- The relationships among constructs can be visualized in a nomological network, illustrating how they interact.
7. Concept Development
- Concepts can be borrowed from other disciplines (e.g., "gravitation" in business) or newly created (e.g., "technostress").
- The development of concepts is essential for explaining observed phenomena in research.

Levels of Measurement (Trochim, n.d.)
Definition:
- The level of measurement refers to the relationship among the values assigned to the attributes of a variable (e.g., party affiliation).
Types of Measurement:
1. Nominal: Values are simply unique names for attributes (e.g., party affiliation: 1 = Republican, 2 = Democrat, 3 = Independent). No ordering or ranking is implied. Example: jersey numbers in sports.
2. Ordinal: Attributes can be rank-ordered, but the distances between them are not meaningful. Example: educational attainment levels (0 = less than high school, 1 = some high school, etc.).
3. Interval: Distances between attributes are meaningful, but there is no true zero. Example: temperature in Fahrenheit (the difference between 30 and 40 degrees is the same as between 70 and 80).
4. Ratio: Has a meaningful absolute zero, allowing the construction of meaningful ratios. Example: weight, or the number of clients (0 clients means none, and one group can be said to have twice as many clients as another).
Importance of Level of Measurement:
- Helps in interpreting data correctly.
- Determines appropriate statistical analyses (e.g., you wouldn't average nominal data).
- Higher levels of measurement (interval, ratio) are generally preferred because they support more sensitive analyses.

Conceptualizing Research (Trochim, n.d.)
Research Idea Development:
- Formulating good research problems is crucial but often overlooked in training.
- Professional researchers typically generate ideas through structured approaches.
Methods for Idea Generation:
- Concept mapping: Helps clarify and map out key research issues.
- Other methods include brainstorming, the nominal group technique, focus groups, and Delphi methods.
Goal:
- To enhance the ability of students and researchers to formulate effective research problems and projects.
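The moderating-variable idea from the Types of Variables section (effort moderating the effect of intelligence on academic achievement) can be sketched numerically: in a linear model, moderation appears as an interaction term. Everything below is hypothetical — the function name and all coefficient values are invented for illustration only, not taken from the notes.

```python
# Toy moderation sketch (all coefficients are made-up illustration values):
# achievement depends on intelligence, but the STRENGTH of that effect
# depends on effort. Moderation = the interaction term b_interact.
def predicted_achievement(intelligence, effort,
                          b0=10.0, b_int=0.5, b_eff=2.0, b_interact=0.3):
    # b_interact makes the slope of intelligence grow with effort.
    return (b0 + b_int * intelligence + b_eff * effort
            + b_interact * intelligence * effort)

# Effect of one extra intelligence point at low vs. high effort:
low_effort_gain = predicted_achievement(101, 1) - predicted_achievement(100, 1)
high_effort_gain = predicted_achievement(101, 5) - predicted_achievement(100, 5)
print(low_effort_gain, high_effort_gain)  # roughly 0.8 vs. 2.0

# The same one-point gain matters more when effort is high — that changing
# strength of the relationship is what "moderating variable" means.
```

A mediating variable, by contrast, would sit on the causal path itself (intelligence → study habits → achievement) rather than changing the slope.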
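The four levels of measurement described above also determine which summary statistics are defensible. A minimal sketch, using made-up data that mirrors the notes' own coding schemes (party affiliation, educational attainment, Fahrenheit temperature, client counts):

```python
# Which statistics are meaningful at each level of measurement.
# All data values are invented for illustration.
from statistics import mode, median, mean

# Nominal: party affiliation (1 = Republican, 2 = Democrat, 3 = Independent).
# Only counting/mode is meaningful; averaging the codes would be nonsense.
party = [1, 2, 2, 3, 2, 1]
print("Nominal -> mode:", mode(party))

# Ordinal: educational attainment (0 = less than high school, 1 = some
# high school, ...). Ranking is meaningful, so the median is defensible;
# the distances between codes are not.
education = [0, 1, 1, 2, 3, 2]
print("Ordinal -> median:", median(education))

# Interval: Fahrenheit temperature. Differences are meaningful, ratios are
# not (80 °F is not "twice as hot" as 40 °F; 0 °F is not a true zero).
temps = [30, 40, 70, 80]
print("Interval -> mean:", mean(temps), "| difference:", temps[1] - temps[0])

# Ratio: number of clients. A true zero exists, so ratios are meaningful.
clients_a, clients_b = 40, 20
print("Ratio -> A served", clients_a / clients_b, "times as many as B")
```

This is why the notes warn that you wouldn't average nominal data: the arithmetic runs, but the result has no interpretation.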