People and Organization - Lecture 1
Summary
These lecture notes discuss how people make decisions and introduce the concepts of bounded rationality, predictable irrationality, and biases. They explore system 1 and system 2 thinking, heuristics, and their implications for decision making.
How are decisions made?
A "rational" decision is made as follows:
1. Define the problem
2. Identify all decision criteria
3. Allocate weights to the criteria
4. Identify all the alternatives
5. Evaluate the alternatives (based on the decision criteria you identified and the weights these criteria have)
6. Choose the best alternative
This sequence of steps is how economists traditionally modelled decision making, because they thought of people as rational human beings. A small worked sketch of steps 5 and 6 follows below.
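A minimal sketch of steps 5 and 6 of the rational model, scoring each alternative as a weighted sum over the decision criteria. The criteria, weights, and scores are invented for illustration; the lecture gives no concrete numbers.

```python
# Hypothetical decision: choosing a supplier on three weighted criteria.
# All names and numbers are made up for illustration.

weights = {"price": 0.5, "quality": 0.3, "delivery_time": 0.2}

# Step 5 input: scores per alternative on each criterion (0-10, higher = better).
alternatives = {
    "supplier_A": {"price": 8, "quality": 6, "delivery_time": 9},
    "supplier_B": {"price": 5, "quality": 9, "delivery_time": 7},
    "supplier_C": {"price": 7, "quality": 7, "delivery_time": 6},
}

def weighted_score(scores, weights):
    """Step 5: evaluate one alternative as a weighted sum of its criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Step 6: choose the alternative with the highest weighted score.
for name, scores in alternatives.items():
    print(name, round(weighted_score(scores, weights), 2))
best = max(alternatives, key=lambda a: weighted_score(alternatives[a], weights))
print("choose:", best)  # supplier_A (7.6) beats B (6.6) and C (6.8)
```

Even this toy version presupposes that all alternatives and criteria have already been enumerated; bounded rationality, introduced next, says that this enumeration is exactly what real decision makers cannot afford.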
→ In reality, this is not how humans make decisions, for two reasons:
Bounded rationality: resources are limited (time, cognitive capacity, etc.). Therefore, we often do not optimize but satisfice: we consider only a few alternatives and settle for one that is good enough.
Predictable irrationality: we do not just make suboptimal choices / we are not just often "wrong". We are systematically wrong, because we are biased. Because the errors are systematic, we can study how we decide.

System 1: automatic, fast, not cognitively demanding → the intuitive answer
System 2: controlled, slow, cognitively demanding → questions the intuitive answer = the "rational" answer
(The compromise effect, covered at the end of these notes, occurs even when system 2 is working.)
The idea of system 1 and system 2 is a model we can use to help us understand how we make decisions.

Heuristics and biases
How do system 1 and system 2 operate? System 1 thinking tends to be (though NOT ALWAYS) the result of heuristics.
Heuristics = mental shortcuts to satisfactory solutions
Why are heuristics useful?
- They are efficient (they require few resources)
- They work most of the time
- It is not true that relying on heuristics will always result in inaccurate judgments
What if relying on a heuristic does not work?
- It leads to biases
Biases = systematic deviations from "rationality"
Systematic = predictable
Two notes of caution:
1. "What is rational" is a debated question
2. Not all "biases" are due to heuristics/system 1 thinking

Probability estimation
Probability estimation = the process of quantifying the likelihood of an event occurring, based on available data, evidence, a probabilistic model, or heuristics.
When do we rely on probability estimation? Some examples:
- When deciding where to go on holiday (Will we like the location? Will the weather be nice?)
- When we buy consumer products (Will we like the restaurant we go to? Will we like a new product we could not test beforehand?)
All these judgments rely on our estimation of a probability: Will we like this? Will event X happen if we do this? How do we make these kinds of judgments, and which heuristics do we use?

Heuristics and probability estimation
Heuristics we use in probability estimation:
- Representativeness heuristic
- Availability heuristic

Heuristics and probability estimation: Representativeness
Representativeness heuristic = the more X resembles Y, the more likely X is judged to be Y.
Example: "When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck."
Relying on the representativeness heuristic is more problematic when one has less information about the resemblance.
Problems with this way of thinking:
- Insensitivity to prior probability
- Insensitivity to sample size
- Misconceptions of chance
- Insensitivity to predictability
- The illusion of validity
- Misconceptions of regression to the mean
  o After seeing something better than the mean, we expect something equally great again. We fail to realize that an extreme observation is usually partly luck, so the next observation tends to fall closer to the mean.
Relying on this heuristic means you may fail to answer two questions:
1. How confident are you in this resemblance?
- People often resort to stereotypes. Accurate stereotypes are still stereotypes, and stereotypes are an oversimplification of real life (example: the CEO riddle, where an accurate stereotype nevertheless biases our thinking and makes us unable to solve a simple riddle).
- People are generally insensitive to sample size; the available information is often not enough (example: the hospital sample-size problem).
2. How likely is Y in the first place? Representativeness ignores the base rate.
- Conjunction fallacy = people fail to see that, by definition, a conjunction cannot be more probable than either of its parts: P(A and B) ≤ P(A). In the bank teller example, the background description makes people judge "bank teller and feminist" as more likely than "bank teller" alone, even though being in one category is always at least as likely as being in both.
- False positives (antibodies example): only ten percent of people who do not have antibodies are misclassified as having them, so a positive test seems unlikely to be a false positive. In reality, because most people do not have antibodies, the chance that a positive test is a false positive can be about 66 percent; the sketch below works out the calculation.
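The 66 percent figure only follows once a base rate is fixed, and the transcript does not state one. A minimal sketch of the calculation, assuming a 5% prevalence and a test that catches every true case (both assumptions chosen so the result lands near the lecture's 66 percent); only the 10% false-positive rate comes from the example:

```python
# Base-rate sketch for the antibodies example.
# Assumptions (not in the transcript): 5% prevalence, perfect sensitivity.
# From the example: 10% of people without antibodies test positive anyway.

prevalence = 0.05           # assumed share of people who have antibodies
sensitivity = 1.00          # assumed: every true case tests positive
false_positive_rate = 0.10  # stated: 10% of healthy people are misclassified

# Total probability of testing positive (true positives + false positives).
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive_rate

# Bayes' rule: P(no antibodies | positive test).
p_false_given_positive = (1 - prevalence) * false_positive_rate / p_positive

print(f"P(false positive | positive test) = {p_false_given_positive:.2f}")  # ~0.66
```

The healthy majority is so large that its 10% error rate produces more positives than the small infected group does; that majority is exactly the base-rate information representativeness ignores.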
Heuristics and probability estimation: Availability
Availability heuristic: the more easily examples of X come to mind, the more likely X is judged to be.
This would be fine if your mind and memory were unbiased; however, several factors influence what we do and do not recall.
Factors that influence how easily we recall examples:
1. Familiarity: you judge X as more likely if you are familiar with it, because familiar things are easier to recall.
2. Recency: the more recent your last encounter with the kind of event you are estimating, the more you inflate the probability that it will happen again, even if its real probability is much smaller.
3. Salience: events that are vivid, dramatic, or emotionally striking are more likely to be recalled, which makes us overestimate their likelihood. For example, media coverage of airplane crashes makes them seem more frequent than they actually are, even though car accidents are statistically far more common.
Problems with the availability heuristic:
- Biases due to the retrievability of instances
- Biases due to the effectiveness of a search set
- Biases of imaginability
- Illusory correlation

Anchoring
Anchoring and (insufficient) adjustment = people make estimates by starting from an initial value that is then adjusted to yield the final answer. This means the first pieces of information matter disproportionately.
Anchoring is a heuristic: a cognitive shortcut where individuals rely heavily on an initial piece of information (the "anchor") when making decisions or estimates, even if the anchor is irrelevant or arbitrary. This starting point disproportionately influences subsequent judgments, leading people to adjust insufficiently from the anchor. Anchoring is widely studied in behavioral economics and psychology as a bias that can affect reasoning and decision-making.
Problems with this heuristic:
- Insufficient adjustment: people rely on an initial anchor and fail to adjust enough when estimating probabilities.
- Biases in the evaluation of conjunctive and disjunctive events (a numeric sketch appears at the end of these notes):
  o Conjunctive events: when multiple events must all happen together (e.g., the success of a complex project), people tend to overestimate the likelihood of the conjunction. They anchor on the high probability of each individual step and underestimate the improbability of all events occurring simultaneously.
  o Disjunctive events: when only one of many events needs to occur (e.g., at least one system failing in a large network), people tend to underestimate the likelihood of the disjunction. They anchor on the low probability of each individual failure and fail to account for the way many small risks add up.
- Anchoring in subjective probability distributions: when people state a range of plausible values (a confidence interval), they anchor on their best estimate and adjust too little, so the stated intervals end up too narrow.

Choice context matters
Biases do not only occur when you use your system 1 thinking; you can still be biased when using your system 2 thinking.
Attraction effect = also known as the decoy effect: the presence of a third, less desirable option (the "decoy") makes one of the original two options more attractive by comparison. The decoy is asymmetrically dominated: it is worse than one option (the "target") in all aspects but only slightly worse than, or comparable to, the other option (the "competitor"). This shifts people's preference toward the target. For example, if you are choosing between two phones (one with better battery life, the other with a better camera) and a third phone is introduced with worse battery life than both and a camera similar to the camera phone, people tend to shift toward the original phone with the better camera.
Compromise effect = people are more likely to choose an option that represents a middle ground between two extremes, as it feels like a safer or more reasonable choice. For example, when offered a small, medium, and large drink, people often pick the medium size because it avoids the extremes of too little or too much.
In the case of these context effects, people tend to think deliberately when making the decision; system 2 is engaged, and the extra deliberation is what produces the bias. System 2 thinking can therefore also lead to biases.
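To close, the conjunctive/disjunctive point from the anchoring section is easy to check numerically. A minimal sketch with assumed probabilities (the lecture gives no concrete numbers; the 95% per-event figure and ten events are invented):

```python
# Numeric sketch of the conjunctive/disjunctive biases from the anchoring
# section. The 0.95 per-event probability and n = 10 are assumptions.

p_event = 0.95  # assumed probability that one step succeeds / one component holds
n = 10

# Conjunctive event: ALL steps must succeed. Anchoring on the high
# per-step probability (0.95) makes the true joint probability feel too low.
p_all_succeed = p_event ** n
print(f"P(all {n} steps succeed) = {p_all_succeed:.2f}")  # ~0.60

# Disjunctive event: AT LEAST ONE component fails. Anchoring on the low
# per-component risk (0.05) hides how quickly small risks accumulate.
p_at_least_one_fails = 1 - p_event ** n
print(f"P(at least one of {n} components fails) = {p_at_least_one_fails:.2f}")  # ~0.40
```

Ten steps that each succeed 95% of the time give only about a 60% chance of overall success, while ten components that each fail only 5% of the time give about a 40% chance of at least one failure: exactly the pattern of overestimated conjunctions and underestimated disjunctions.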