Document Details

Uploaded by CalmRationality2733

The University of Texas at Austin

Tags

research methods, study guide, social science, exam preparation

Summary

This document is a study guide for an HDO 310 exam covering research concepts. The guide includes key points, questions, and definitions related to research design and analysis. Topics covered are research methods, logical reasoning, and sampling.

Full Transcript

Concepts to Review

Week 1: Introduction to the Course

- Why do we do research?
  - Research minimizes bias, ensures decisions are based on facts, and follows rigorous methods that allow findings to be applied to other situations.

The Benefits of Research

- What makes an approach "scientific"?
  - The existence of an interplay between theory and data.
- Scientific process: four major features
  - Theory, data, hypothesis, conclusion.
- Theory and data: definitions? Examples?
  - Theory: a story about what is going on in the world.
  - Data: observations of what is happening.
- Hypothesis
  - A tentative, unconfirmed expectation about the relationship between two or more phenomena.
- Data: what does it mean to be verifiable and systematically collected?
  - It must be observable to the researcher and others in the scientific community.
  - Scientific collection and analysis of verifiable data follows a typical process or series of steps.
- Empirical pattern
  - Represents general tendencies based on data.
- Logical reasoning: inductive vs. deductive
  - Logical reasoning: the process of moving from information to conclusions.
  - Inductive reasoning: start with specific observations in a setting, recognize patterns, and form a general conclusion that goes beyond the specific case (data → patterns → theory; "bottom-up").
  - Deductive reasoning: progress from general ideas to specific conclusions (theory → hypothesis → data; "top-down").
- When is a scientific approach especially useful in an organizational setting? What are some limitations of taking a scientific approach to understanding people-centered problems in organizations?
  - Useful when the answer to the question is especially important, when you come across conflicting information, or when you need to prioritize what is most useful.
  - Limitations: science takes time, money, and resources.
- What is an example of a question that cannot be answered using scientific research?
  - "Are UT students getting enough sleep?" (the question is too broad).
- Why does inductive logical reasoning lead to weaker conclusions than deductive?
  - The conclusion remains uncertain even if the evidence is true, because the conclusion goes beyond the evidence.
- Benefits of deductive vs. inductive research
  - Deductive: stronger conclusions backed by evidence; theory focuses the research.
  - Inductive: development of new theories.

Week 2: Research Design: Setting the Stage

- Unit of analysis
  - Definition: the level of social life on which a research question is focused (what or whom is being studied).
  - Examples: individuals, groups, organizations, social interactions, and social artifacts.
- The ecological fallacy
  - Definition: an error in reasoning in which incorrect conclusions about individual-level processes are drawn from group-level data.
  - Example: "The crime rate in Suzy's neighborhood is 3x higher than the rest of the city. Therefore, Suzy is 3x more likely to commit crime than someone who doesn't live in her neighborhood."
- Cross-sectional research design
  - Definition: one sample, one moment in time.
  - Example: one sample drawn at one time.
- Repeated cross-sectional research design
  - Definition: a sample at time 1, a new sample at time 2 (longitudinal = observations over time).
  - Example: at least two samples, drawn at two or more different times.
- Panel data research design
  - Definition: a sample at time 1, then an attempt to follow the same people for the sample at time 2.
- Cross-sectional vs. longitudinal research designs
  - Strengths of longitudinal: more power to detect causal effects than cross-sectional (especially panel designs).
  - Limitations: plagued by the attrition problem.
  - Why choose cross-sectional? e.g., employee satisfaction.
  - Why choose longitudinal? e.g., placebo/medication trials.
- Qualitative research
  - Definition: deepens understanding by providing in-depth knowledge; questions ask about social processes or the meaning and cultural significance of people's actions.
  - Aim: establish context-specific meaning or contribute to the development of theory.
  - Example: how do nurses make sense of unexpected patient deaths?
- Quantitative research
  - Definition: uncovers general relationships; questions ask about the empirical relationship between two or more variables.
  - Aim: test hypotheses or establish causality.
  - Example: do real estate agents who send birthday cards get more repeat business?
- Variable
  - Definition: a measured concept; something that varies across cases or over time.
  - Example: given a placebo vs. less medication in a trial.
- Independent or predictor variable
  - Definition: the factor that is manipulated or controlled in an experiment.
  - Example: perfect attendance.
- Dependent or outcome variable
  - Definition: the factor that depends on the value of another, controlled factor.
  - Example: grade in a class.
- What kind of evidence is necessary to infer that one variable causes another? Association, direction of influence, and non-spuriousness (be ready to provide an example of each criterion).
  - Association: a change in one variable is associated with a change in the other. Example: perfect attendance → A+ in class.
  - Direction of influence: if X is predicting Y, then changes in X need to bring about changes in Y, and not the other way around. Example: perfect attendance → A+ in class (not the reverse).
  - Non-spuriousness: the association between two variables should have no common cause. Example: notebook sales ↔ autumn season.
- Antecedent variables
  - Definition: common causes that come before the relationship you care about assessing.
  - Examples: age, gender, socio-economic status.
- Intervening variables
  - Definition: a variable that is both an effect of the predictor (IV) and a cause of the outcome (DV).
  - Example: attendance → better lecture notes → A+ in class.
- Is qualitative data systematically collected?
  - Yes, though not as standardized as quantitative data.

Week 3: Information Literacy and Presentation from Liberal Arts Career Services

- You have already been assessed with your Evaluating Sources assignment; this info won't show up on the test.

Research Design: Developing Research Questions

- Qualities of an appropriate research question for social science
  - About the who, what, where, when, why, or how of social life.
  - Requires data that can be obtained through the senses using methods of social research.
  - Important to examine.
  - Can be answered in ways that are ethical.
  - Sensitive to the characteristics of the person doing the research.
  - Can be explored given practicalities.
- Properties of a good (or bad) research question
  - Is the question right for me?
  - Is the question right for the field?
  - Is the question well articulated?
  - Is the question doable?
  - Does the question get the tick of approval from those in the know?
- Two most basic rules of scientific research questions
  - You have to use question words and a question mark.
  - It must be possible to answer your question through systematic empirical observation.

Week 4: Research Ethics

- The Belmont Report and the three principles for ethical research (define each, be ready to provide an example of each, and be ready to look at an example and pick the specific principle it violates)
  - Respect for Persons: acknowledge the autonomy of people.
  - Beneficence: the responsibility to do good.
  - Justice: distribute the benefits and risks of research fairly.
- Four major concerns researchers must consider in the treatment of human subjects
  - Potential harm: informed consent, screening out those most likely to be harmed, resources to ameliorate harm.
  - Informed consent: freedom of choice matters; no coercion.
  - Deception: researchers are transparent about what they are studying.
  - Invasion of privacy: anonymity and confidentiality.
- What ethical issues are most likely to come up in research in HDO?
  - Deception and informed consent.
- Ethics in the practice of research (not just in protecting research subjects): the ethics of invalid research
  - A study should be designed in a way that will get an understandable answer to an important research question.
  - Invalid research is unethical because it wastes resources and exposes people to risk for no purpose; it can lead to bad policy and bad science.
- Rosenhan experiment: what happened (broadly speaking), and why was it unethical?
  - Rosenhan was a psychology professor who convinced some of his graduate students to admit themselves into a psychiatric hospital, where they were told to say that they had schizophrenia but otherwise act as themselves. One ethical issue is the possible harm to the graduate students, who had to take pills for a mental health issue they didn't have. Another is that Rosenhan left out crucial research data to suit the conclusion he wanted to push, which is a form of deception toward the public.

Measurement and Measurement in Action

- Conceptualization
  - Definition: defining and clarifying the meaning of a given concept. ("What do you mean by...?")
  - Example: "employee success" = accomplishment of duties outlined in the job description.
- Operationalization
  - Definition: the process of identifying the empirical indicators and procedures for applying them to measure a concept. ("How are we going to capture this concept?")
  - Example: employee engagement → measured via survey responses, attendance records, and participation in initiatives.
- Operational definition
  - A specification of the exact activities of the researcher in measuring or manipulating a variable.
- Indicator
  - Definition: something that points to, provides evidence of, or otherwise measures a concept.
  - Example: job satisfaction → indicator = employee turnover rate (higher turnover may indicate lower job satisfaction).
- Level of measurement (be able to explain how the level of measurement of a variable might affect what you can say in a given study)
  - The level at which you measure a variable determines what you can and can't say about a relationship, and how you can interpret your results and conclusions.
- How do we know if a measure is a "good measure"? What are the two principles researchers agree on?
  - Reliability (consistency): can the results be reproduced under the same conditions?
    - Over time (test-retest)
    - Across items (internal consistency)
    - Across different researchers (inter-rater reliability)
  - Validity (accuracy): do the results really measure what they are supposed to measure?
- What is the US Census?
  - A data set that represents the demographics of the US.
- The dangers of not including enough, or the right, categories in a measure
  - An entire population can be ignored, and inaccurate data can be used in legislation.
- (Hansi Lo Wang article) What's the problem?
  - People from North Africa and the Middle East were categorized as racially white, which is not accurate to their social-cultural identity in the US. Categorization mistakes like this misrepresent the true demographics of the US.

Week 5: Sampling

- Population vs. sample
- When do you need to take a sample? When don't you? What shapes this decision?
  - Take a sample when researching the entire population is not feasible.
  - Money, time, and resources shape this decision.
- What is random selection (especially compared to examples of non-random selection)? Why is it the best strategy for selecting a representative sample?
  - Random selection: each element in a set has an equal chance of being selected, and the chance of selection is independent of the selection of other elements.
  - It is the best strategy because, done right, it is completely unbiased.
- Sampling frame
  - Definition: the list of units composing a population from which a sample is selected.
  - Example: "people who live in Jacksonville, Florida." The frame would name all of those people, from Adrian Abbott to Felicity Zappa.
- Why is there still some uncertainty involved in drawing conclusions about a population from a sample, even with a randomly selected sample?
  - Sampling error, chance variation, measurement error, nonresponse bias, confounding variables.
- Why is a bigger sample size better?
  - The larger the sample size, the less chance for outliers to affect the estimate (decreased standard error).
- Standard error: definition? Is a big standard error bad or good? How does sample size affect the standard error?
  - Standard error = the average distance between sample values and population values.
  - The bigger the sample size, the lower the standard error.
- Simple random sample
  - Definition: every case and combination of cases has an equal probability of being sampled.
  - Example: choose the names of 25 of 250 employees out of a hat.
- Stratified random sample (and pros/cons versus a simple random sample)
  - Definition: some people are selected at random from each group, so every group is sure to have some people selected.
  - Pros: cheaper, and can reduce standard error.
  - Cons: time-consuming, complicated.
- Proportionate stratified random sample
  - Definition: the sample size of each stratum/group is proportional to the stratum's share of the population.
  - Example: a fruit basket of 200 fruits, of which 30% (60) are apples. In a sample of 50, apples should again make up 30%: 15 apples.
- Disproportionate stratified random sample
  - Definition: some groups may be more or less present in the sample than they are in the population.
- Multistage cluster sampling
  - Definition: intentionally divide the total population into clusters, each of which should look like a mini-population; then randomly select clusters to sample.

Sampling in Action

- Conflict over sampling at the Census: what happened (broadly speaking), and how does this connect to what we have learned about probability sampling? (to be further discussed on Mon)
  - The census was experiencing mass non-response, and the government proposed taking samples of non-responsive populations to estimate their data and fill in the blanks.
  - This connects to what we have learned because the census is meant to count everyone, not to randomly sample the population and make generalizations.
- Non-probability sample: what is it, what does it involve, and how does it affect what you can conclude from your study? (to be further discussed on Mon)
  - Methods that do not use random selection.
  - Why use it? Sometimes we have no choice; it is also useful when the aim is to develop a holistic understanding of complex social units.
- Purposive sample
  - Sampling guided by researcher expertise on which units are best suited to provide information on particular topics.
- Snowball sample
  - Members of the target population are located, and they then refer other members of that population.
- Theoretical sample
  - The researcher continually recruits participants to test emerging theories based on new data and emerging theory.
- Statistical vs. theoretical inference (to be discussed on Mon)
  - Non-probability samples only provide a basis for theoretical inference.
- Saturation (to be discussed on Mon)
  - When new materials fail to yield new insights and simply reinforce what the researcher already knows.
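The Week 5 sampling ideas can be sketched in a few lines of Python. This is an illustrative sketch only (the function names and the numbers are not from the course): it draws a simple random sample like the 25-of-250 employees example, allocates a proportionate stratified sample using the fruit-basket numbers, and shows that a bigger sample gives a smaller standard error.

```python
import random
import statistics

def standard_error(sample):
    """Estimated standard error of the mean: sample std dev / sqrt(n)."""
    return statistics.stdev(sample) / len(sample) ** 0.5

def proportionate_strata_sizes(stratum_counts, sample_size):
    """Allocate a sample across strata in proportion to each
    stratum's share of the population (rounded)."""
    total = sum(stratum_counts.values())
    return {stratum: round(sample_size * count / total)
            for stratum, count in stratum_counts.items()}

random.seed(0)

# Simple random sample: every case has an equal, independent chance
# of selection -- e.g., 25 of 250 employees "out of a hat".
employees = list(range(250))
chosen = random.sample(employees, 25)

# Proportionate stratified sample, using the guide's fruit basket:
# 200 fruits, 30% apples, so a sample of 50 should contain 15 apples.
basket = {"apples": 60, "oranges": 80, "bananas": 60}
allocation = proportionate_strata_sizes(basket, 50)
# allocation == {"apples": 15, "oranges": 20, "bananas": 15}

# Bigger samples shrink the standard error, so the sample mean is a
# more precise estimate of the population mean: compare n=25 vs n=2500.
population = [random.gauss(50, 10) for _ in range(100_000)]
se_small = standard_error(random.sample(population, 25))
se_large = standard_error(random.sample(population, 2500))
# se_large is much smaller than se_small
```

Note how the proportionate allocation reproduces the 30%-apples arithmetic from the guide, and how the standard-error comparison makes the "bigger sample size is better" point concrete.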
