Task 4 - The Sleep of Reason Produces Monsters

Summary

This document covers reasoning and hypothesis testing, focusing on Wason's 2-4-6 and selection tasks, confirmation bias, conditional and syllogistic reasoning, dual-process theories, and informal reasoning. It reviews research in psychology and cognitive science on how individuals and scientists test hypotheses.

Full Transcript

Task 4 - The sleep of reason produces monsters?

Learning goals
What are different types of reasoning? What are different theories of reasoning? Why does our reasoning fail sometimes?

Reasoning and hypothesis testing - Eysenck chapter

Hypothesis testing
Confirmation - the attempt to obtain evidence confirming one's hypothesis.
Falsification - the attempt to falsify hypotheses by experimental tests.

Wason's 2-4-6 task
Wason's 2-4-6 task - participants are told that the three numbers 2-4-6 conform to a simple relational rule (known by the experimenter). Their task is to generate sets of 3 numbers and provide reasons for generating each set. After each choice, the experimenter indicates whether the set of numbers conforms to the experimenter's rule. The rule is "three numbers in ascending order". The participants can announce what they think the rule is on any trial and are told whether it is correct.
Only 21% of university students were correct on their first attempt; 72% of participants eventually solved it.
Wason argued that the reason for the low performance is confirmation bias (seeking information confirming one's original hypothesis). However, participants also sometimes produced disconfirmatory tests (e.g. when their hypothesis is that each number is twice the previous one, generating a triple such as 1-4-9 that should not fit the hypothesized rule).
In Wason's task, participants tend to preserve as much of the information contained in the example triple (2-4-6) as possible in their initial hypothesis, making the hypothesis much more specific than the general rule.
There is much less confirmation bias and more evidence of falsification tests when testing someone else's hypothesis: people generate more falsifying tests when they are told that the hypothesis is someone else's rather than their own. This is consistent with scientists' behavior.
The correct hypothesis in the 2-4-6 task (3 ascending numbers) is very general because it applies to a high proportion of sets of 3 numbers. However, most hypotheses apply only to a small proportion of possible objects/events. While positive testing works poorly on the 2-4-6 task, this might not be the case for other forms of hypothesis testing.
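To see why positive testing fails here, the following minimal Python sketch (an illustration, not from the chapter; the specific over-narrow hypothesis is a hypothetical example) simulates a participant who only proposes triples that fit their own hypothesis:

```python
import random

def true_rule(triple):
    """Experimenter's rule: three numbers in strictly ascending order."""
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    """A typical over-specific initial hypothesis: successive numbers increase by 2."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

random.seed(1)
# Positive testing: only propose triples that fit the hypothesis.
# Every such triple also satisfies the broader true rule, so the
# experimenter always answers "Yes" and the hypothesis is never challenged.
for _ in range(5):
    start = random.randint(1, 50)
    triple = (start, start + 2, start + 4)
    print(triple, "->", "Yes" if true_rule(triple) else "No")

# A negative (disconfirmatory) test: a triple the hypothesis forbids.
# The experimenter's "Yes" reveals that the hypothesis is too specific.
print((1, 4, 9), "->", "Yes" if true_rule((1, 4, 9)) else "No")
```

Because every triple the narrow hypothesis generates also satisfies the broader true rule, positive tests alone can never expose the mismatch; only a triple the hypothesis forbids can.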
Hypothesis testing: simulated and real research environments
Scientists generally adopt a confirmatory approach during hypothesis testing. Study: numerous research articles in psychology were analyzed. 77% sought confirmation by testing the hypothesis favored by the researcher(s) and 91% supported an existing theory. Only 22% discussed other hypotheses.
Absolute hypotheses - hypotheses claiming that a given phenomenon always occurs.
Non-absolute hypotheses - hypotheses claiming that a phenomenon occurs only in some conditions.
Falsification is the optimal approach for scientific progress only for absolute hypotheses (because a single negative observation disproves an absolute theory), not for non-absolute ones. Study: 96% of researchers in psychology indicated that their research was mostly driven by non-absolute hypotheses. With absolute hypotheses, 81% of researchers would use a disconfirmatory approach; with non-absolute hypotheses, only 9% would.
Confirm early - disconfirm late heuristic - initially seeking confirmatory evidence for a theory, and then focusing more on disconfirming the theory to discover its breadth of application. 83% of scientists were most likely to use a confirmatory approach early in a research project and 87% were most likely to use a disconfirmatory approach subsequently.
Therefore, describing the typical approach of scientists as confirmation bias is misleading. Karl Popper's view that falsification is always the best option is too black-and-white and does not reflect how science is actually done.
Confirmation bias is also observed to some extent in the analysis and interpretation of data. Study: psychologists were asked anonymously about their questionable research practices. 78% of respondents had selectively reported studies that "worked"; 62% had excluded data (typically data inconsistent with their hypotheses); and 36% had stopped data collection after achieving the desired result.
Study: statistical analyses in research journals were analyzed. Over 10% of p values were incorrect, and in the great majority of cases such errors changed the result from non-significant to significant.
Researchers often expect their meta-analyses to support their existing hypotheses, which influences e.g. their decisions about which studies to include.

Deductive reasoning
Inductive reasoning - forming generalizations from examples or sample phenomena; the conclusions are not necessarily true.
Deductive reasoning - drawing conclusions that are definitely true provided the other statements are assumed to be true.

Conditional reasoning
Conditional reasoning - a form of deductive reasoning based on if...then propositions. Symbols represent sentences and logical operators are applied to them to reach conclusions. For example: P = "It is raining", Q = "Nancy gets wet", if P then Q. Propositions are either true or false; propositional logic does not allow for uncertainty (e.g. "It is raining a bit" being neither true nor false).
Affirmation of the consequent - a logical error in which someone incorrectly assumes that if Q = true and "If P, then Q", then P = true. This is wrong because Q could be true for other reasons, not just because P = true.
Denial of the antecedent - a logical error in which someone incorrectly assumes that if P = false and "If P, then Q", then Q = false. This is wrong because P does not need to be true for Q to be true. However, in natural language "If P, then Q" often means "If and only if P, then Q" - e.g. "If you mow the lawn, I will give you $5" usually implies "If you don't mow the lawn, I won't give you $5."
Modus ponens - a rule of inference: given P = true and "If P, then Q", we can infer Q = true.
Modus tollens - a rule of inference: given Q = false and "If P, then Q", we can infer P = false.
People consistently perform much better with modus ponens than with modus tollens; they often argue that the modus tollens conclusion is invalid. (The four inference patterns are checked mechanically in the truth-table sketch below.)
Klauer et al.'s dual-source model of conditional reasoning - when people reason from conditionals, 2 processes are at play:
Knowledge-based process - influenced by premise content; the subjective probability of the conclusion depends on individuals' relevant knowledge (in simple words, the decision is influenced by what you already know - someone who knows a lot about a topic may decide differently from someone who doesn't).
Form-based process - influenced only by the form of the premises (in simple words, depending not on what you know but just on how the information is presented).
In simple words, when we reason from conditionals, both what we know and how the information is presented play a role in our conclusions.
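The validity of the four inference patterns above can be checked mechanically by enumerating truth values. This minimal Python sketch (an illustration, not part of the chapter) treats "If P, then Q" as the material conditional:

```python
from itertools import product

# An inference pattern is valid iff the conclusion is true in every
# truth assignment that makes all the premises true.
def valid(premises, conclusion):
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

conditional = lambda p, q: (not p) or q  # material reading of "If P, then Q"

print("modus ponens:", valid([conditional, lambda p, q: p], lambda p, q: q))                    # True
print("modus tollens:", valid([conditional, lambda p, q: not q], lambda p, q: not p))           # True
print("affirming the consequent:", valid([conditional, lambda p, q: q], lambda p, q: p))        # False
print("denying the antecedent:", valid([conditional, lambda p, q: not p], lambda p, q: not q))  # False
```

The two fallacies fail because a single counterexample row (P false, Q true) makes the premises true and the conclusion false.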
Verschueren et al.'s dual-process model - focuses more on how different people reason from conditionals:
Counterexample strategy - finding an example that goes against a conclusion in order to reject it as invalid (e.g. if someone says "All birds can fly", someone using a counterexample strategy will think of penguins and refute the statement).
Intuitive statistical strategy - relying on general knowledge and likelihoods (e.g. knowing that most birds can fly, one assumes that an encountered bird can fly unless there is a specific reason to think otherwise).
Study: problems involving affirmation of the consequent were presented to participants. Examples: (1) If a rock is thrown at a window, then the window will break. A window is broken. Therefore, a rock was thrown at the window. (2) If a finger is cut, then it will bleed. A finger is bleeding. Therefore, the finger was cut.
Reasoners using the statistical strategy accepted the invalid conclusions more often in problem (2) than in (1), because the subjective probability that "If a finger is bleeding, it was cut" is greater than the probability that "If a window is broken, it was broken by a rock."
Reasoners using the counterexample strategy accepted a conclusion if no counterexample came to mind. They also accepted the invalid conclusion more often in problem (2), because it was easier to find counterexamples to (1) than to (2).
Study: the counterexample strategy is used less when participants have limited time (because it is more cognitively demanding).
Study: when presented with modus ponens (always valid) inferences, with additional information indicating the relative strength of evidence supporting the inference (55%, 75%, 95% or 100%), statistical reasoners were strongly influenced by relative strength, whereas counterexample reasoners showed a sharp reduction in acceptance when some evidence failed to support the inference.

Wason selection task
Wason selection task - 4 cards (R, G, 2, 7) lie on a table. Each card has a letter on one side and a number on the other, and there is a rule applying to the 4 cards ("If there is an R on one side of the card, then there is a 2 on the other side of the card"). Participants must select only those cards that need to be turned over to decide whether or not the rule is correct. Most people select the R and 2 cards, but that is wrong: you need to see whether any cards fail to obey the rule, so the correct answer is to select the R and 7 cards. Only 10% of university students give the correct answer. (A brute-force check of which cards could falsify the rule follows below.)
Performance is worse with abstract versions of the task (as above) than with concrete versions referring to everyday events (e.g. "Every time I travel to Manchester, I travel by train."). Meta-analysis: the percentage of correct answers increased from 7% with abstract versions to 21% with concrete versions.
Matching bias - the tendency to select cards matching the items named in the rule.
Our everyday way of thinking does not always match formal logic; probabilistic reasoning is often more appropriate. When testing "If p, then q", one gains the most information by picking q cards when q is rare and not-q cards when q is common. People indeed tend to pick more q cards when the chance of q is low (17%) and fewer when it is high (83%). => Sometimes we make choices based on what is most likely to give us useful information, even if it is not strictly logical.
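The correct selection logic can be rendered by brute force: a card needs turning only if some possible hidden side would falsify the rule. A small Python sketch (illustrative; the card encoding is an assumption):

```python
# Each card has a letter side and a number side; we see only one side.
# Rule under test: "If there is an R on one side, then there is a 2 on the other."
visible = ["R", "G", "2", "7"]
letters, numbers = ["R", "G"], ["2", "7"]

def could_falsify(card):
    """A card is worth turning iff some possible hidden side violates the rule."""
    hidden_options = numbers if card in letters else letters
    for hidden in hidden_options:
        pair = {card, hidden}
        # The rule is violated only by an R paired with a non-2.
        if "R" in pair and "2" not in pair:
            return True
    return False

print([c for c in visible if could_falsify(c)])  # ['R', '7']
```

Only R (a hidden 7 would falsify the rule) and 7 (a hidden R would falsify it) can disconfirm the rule; the 2 card can never falsify it, so selecting it reflects matching or confirmation rather than logic.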
Motivation to disprove the rule improves performance in the Wason selection task. Study: participants were given the rule "Individuals high in emotional lability experience an early death", and some participants were led to believe that they were high in emotional lability. The 4 cards showed high emotional lability, low emotional lability, early death and late death. 38% of the participants led to believe they had high emotional lability solved the problem, versus only 9% of control participants.
Motivation is also involved with deontic rules (rules concerned with obligation or permission). Meta-analysis: 68% of participants who were motivated to detect cheating (deontic versions) solved a Wason selection task, compared to only 7% with abstract task versions.
A general motivational approach was proposed: people concerned about potential costs focus on disconfirming evidence, while those concerned about potential benefits focus on confirming evidence.
Johnson-Laird's mental-model theory - assumes that selections on Wason's selection task depend on 2 processes:
Intuitive process - produces selections matching the reasoner's hypothesis (e.g. selection of R in the version described above).
Deliberate process - produces selections of potential counterexamples to the hypothesis (e.g. selection of 7 in the same version).

Syllogistic reasoning
Syllogism - consists of 2 premises followed by a conclusion (e.g. "All A are B; all B are C. Therefore, all A are C"). It contains 3 items (A, B, C), with one (B) occurring in both premises. The quantifiers all, some, no and some...not can be used. When presented with a syllogism, one must decide whether the conclusion is valid assuming the premises are true.
Belief bias - in syllogistic reasoning, the tendency to accept invalid but believable conclusions and reject valid but unbelievable ones.
Study: syllogisms were presented to participants. Half of the syllogisms were believable and half unbelievable; half were valid and half invalid. Some participants were told that only 1/6 of the syllogisms were valid, whereas others were told 5/6 were. There was a base-rate effect: syllogistic reasoning performance was influenced by the perceived probability of syllogisms being valid. There was also strong evidence for belief bias and a belief-by-logic interaction: performance on syllogisms with valid conclusions was better when those conclusions were believable, while performance on syllogisms with invalid conclusions was worse when those conclusions were believable.
Study: unbelievable premises were processed more slowly than believable ones => people experienced conflict between their beliefs and what they were asked to assume, and resolving this conflict was time-consuming.
Some problems in syllogistic reasoning occur because of differences between the meanings of expressions in formal logic and in everyday life. For example, we often assume "All As are Bs" means "All Bs are As" and "Some As are not Bs" means "Some Bs are not As." Study: such premises were spelled out unambiguously (e.g. "All As are Bs, but some Bs are not As"), which greatly enhanced reasoning performance.
Study: syllogistic reasoning improved when the formal-logic meaning of "some" (which in everyday life can mean "some but not all") was made explicit ("at least one and possibly all").
People's syllogistic reasoning is influenced by whether the conclusion matches the premises in surface or superficial features. Example (matching): no A are not B; no B are not C; therefore, no C are not A. Example (non-matching): all A are B; all B are C; therefore no A are not C. Although matching vs non-matching is irrelevant to formal logic, people are more likely to accept conclusions matching the premises. (A mechanical validity check for such syllogisms is sketched below.)
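Syllogistic validity can also be checked mechanically. The Python sketch below (illustrative, not from the chapter) enumerates which of the 8 possible element "types" (in/out of A, B, C) are inhabited; for the quantifiers all/some/no, only the inhabited types matter, so this covers every possible situation:

```python
from itertools import product

# Each element of a "world" has one of 8 types: (in A?, in B?, in C?).
TYPES = list(product([False, True], repeat=3))

def holds(stmt, world):
    """Does 'q X are Y' hold in this world (a list of inhabited types)?"""
    q, x, y = stmt
    i, j = "ABC".index(x), "ABC".index(y)
    if q == "all":
        return all(t[j] for t in world if t[i])
    if q == "no":
        return not any(t[i] and t[j] for t in world)
    return any(t[i] and t[j] for t in world)  # "some"

def valid(premise1, premise2, conclusion):
    # Valid iff no world makes both premises true and the conclusion false.
    for bits in product([False, True], repeat=len(TYPES)):
        world = [t for t, b in zip(TYPES, bits) if b]
        if holds(premise1, world) and holds(premise2, world) and not holds(conclusion, world):
            return False  # counterexample world found
    return True

print(valid(("all", "A", "B"), ("all", "B", "C"), ("all", "A", "C")))  # True: valid
print(valid(("all", "A", "B"), ("all", "C", "B"), ("all", "A", "C")))  # False: invalid
```

The second syllogism looks superficially similar to the first but fails on a counterexample world (e.g. an element in A and B plus an element in C and B), which is exactly the counterexample search that, according to mental model theory below, people often do not perform.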
Theories of "deductive" reasoning

Mental models
Mental model theory - reasoning involves constructing mental models.
Mental model - an internal representation of some possible situation/event in the world, having the same structure as that situation or event.
Example of a mental model (here the conclusion that the clock is to the left of the vase clearly follows from the mental model):
Premises: The lamp is on the right of the pad. The book is on the left of the pad. The clock is in front of the book. The vase is in front of the lamp.
The mental model:
  book    pad    lamp
  clock          vase
Conclusion: The clock is to the left of the vase.
When faced with a problem or information to process, people construct mental models and generate the conclusions that follow from them. An attempt is made to construct alternative models to falsify a conclusion by finding counterexamples to it. If no counterexample model is found, the conclusion is deemed valid.
Reasoning problems requiring the construction of several mental models are harder than those requiring only one, because the former impose greater demands on working memory.
Principle of truth - mental models represent what is true, but not what is false; this minimizes demands on working memory.
20 studies used various reasoning tasks in which individuals failing to represent what is false in their mental models would produce illusory inferences. Numerous illusory inferences were drawn. In contrast, performance was very good on similar problems where adherence to the principle of truth was sufficient to produce the correct answer. An example problem of the first type:
Only one of the following premises is true about a particular hand of cards:
□ There is a king in the hand or there is an ace, or both.
□ There is a queen in the hand or there is an ace, or both.
□ There is a jack in the hand or there is a 10, or both.
Is it possible there is an ace in the hand? Nearly everyone says yes, but that is wrong: if there were an ace in the hand, both of the first 2 premises would be true. (This is checked by brute force below.)
People make illusory inferences because they ignore what is false. If they are instructed to falsify the premises of reasoning problems, they are less susceptible to such inferences.
The central executive and visuo-spatial sketchpad components of the working memory system are heavily involved in constructing mental models. Working memory capacity is correlated (+.42) with syllogistic reasoning performance.
The mental model theory compares well against other theories of reasoning and predicts performance on Wason's selection task well. However, even though the theory says that people search for counterexamples after generating a conclusion, studies find that people generate relatively few counterexamples. The theory also often fails to predict people's answers on ambiguous reasoning problems.
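The card problem above can be settled exhaustively. A short Python sketch (illustrative) enumerates every possible hand over the mentioned cards and keeps those in which exactly one premise is true:

```python
from itertools import product

cards = ["king", "ace", "queen", "jack", "10"]

def premises(hand):
    return [
        "king" in hand or "ace" in hand,
        "queen" in hand or "ace" in hand,
        "jack" in hand or "10" in hand,
    ]

possible = False
for bits in product([False, True], repeat=len(cards)):
    hand = {c for c, b in zip(cards, bits) if b}
    # Keep only hands consistent with "exactly one premise is true".
    if sum(premises(hand)) == 1 and "ace" in hand:
        possible = True
print(possible)  # False: an ace would make the first two premises both true
```

No consistent hand contains an ace, confirming that the intuitive "yes" answer is an illusory inference produced by ignoring what the false premises imply.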
Dual-process theories
Dual-process/dual-system theories - a family of theories that distinguish between 2 types of mental processing.
Type 1 intuitive processing - characterized by autonomy (it is triggered automatically when the appropriate stimuli are encountered) and lack of involvement of working memory. These processes are often fast, high-capacity, parallel, unconscious, automatic and independent of cognitive ability.
Type 2 reflective processing - characterized by involvement of working memory and cognitive decoupling (a.k.a. mental simulation) - hypothetical reasoning not constrained by the immediate environment. These processes are often slow, capacity-limited, serial, conscious, controlled and correlated with cognitive ability.
Default-interventionist model - when faced with reasoning problems, people first use Type 1 processes to generate a rapid heuristic answer, which may be corrected by a subsequent, more deliberate answer produced by slow Type 2 processing. It assumes that reasoning performance will generally be superior when Type 2 processes are involved in addition to Type 1 processes. However, this assumption is not always true: sometimes Type 2 processes do not help in solving a problem, and sometimes we get things right using habit or intuition (Type 1 processes).
Early dual-process theories predict that various factors increase reasoners' use of Type 2 processes: (1) high intelligence; (2) sufficient time available; (3) no simultaneous demanding secondary task.
Belief bias during syllogistic reasoning should therefore be reduced when people use Type 2 processing:
More intelligent reasoners exhibit less belief bias than less intelligent ones. This can be due to their high cognitive ability or to their choice to adopt an analytic cognitive style.
There is less belief bias when time is not restricted, because restricting thinking time reduces reasoners' ability to use Type 2 processes.
Reasoners exhibit more belief bias when they have to perform a simultaneous secondary task.
Later research suggested that the assumptions that Type 1 processes are "dumber" and always occur serially before Type 2 processes are oversimplified. 3 models of how these processes combine were identified:
Serial model - the traditional theoretical approach described above.
Parallel model - Type 1 and Type 2 processes occur at the same time. This model is wasteful of cognitive resources because effortful processes are always used.
Logical intuition model - 2 types of intuitive responses (a logical one and a heuristic one) are activated in parallel. If these 2 responses conflict, deliberate (Type 2) processes resolve the conflict.
There is empirical support for the logical intuition model in syllogistic reasoning research:
Study: participants were presented with a syllogistic reasoning task. In one condition, there was a conflict between the logical validity and the believability of the conclusions. There was strong belief bias - accuracy was only 52% on conflict trials compared to 89% on non-conflict trials. Participants showed greater physiological arousal on conflict trials, suggesting the conflict was registered within the processing system below the conscious level => some logical processing can be intuitive rather than analytic.
Study: participants were presented with syllogistic reasoning problems involving conflict between the believability and validity of the conclusion. Participants provided 2 responses to each problem: (1) a fast, intuitive response and (2) a slower, deliberate response.
While the default-interventionist model predicts low levels of accurate fast responses and a much higher level of correct deliberate responses, that did not happen: 44% of the fast responses were accurate, and only 7% of the initially inaccurate fast responses were followed by accurate slow responses.
Study: the same task as in the previous study was used, and participants had to perform a demanding secondary task at the same time to reduce their engagement in Type 2 processing. 49% of fast responses on conflict problems were correct.
Study: participants gave a fast and a slow response to conditional-reasoning problems involving belief bias. Fast responses were often correct based on logical validity, and slow responses were often incorrect and exhibited belief bias.
Study: participants had to solve conditional-reasoning problems involving a conflict between logical validity and believability. They were sometimes told to answer based on their beliefs rather than logic, and sometimes the opposite. Response times were comparable on belief-based and logic-based trials, and the logical validity of the conclusion interfered with belief-based processing. These findings are not consistent with the default-interventionist model, but are consistent with parallel-processing theories.
Study: participants differing in cognitive ability received incongruent reasoning problems involving a conflict between belief and logic and had to respond on the basis of belief or logic. The more intelligent reasoners had greater difficulty resolving conflict when providing belief-based responses rather than logic-based responses; the less intelligent reasoners exhibited the opposite pattern. The findings suggest that more intelligent individuals generate logic-based responses faster than belief-based ones, whereas less intelligent individuals generate belief-based responses faster. Therefore, rather than assuming belief-based responses involve Type 1 processing and logic-based responses involve Type 2 processing, individual differences need to be considered. If both belief-based and logic-based responses can be generated both quickly and slowly, then perhaps they differ on a single dimension of complexity.
Meta-reasoning - the processes that monitor the progress of our reasoning and problem-solving activities and regulate the time and effort devoted to them. Only when the feeling of rightness (the degree to which the first solution that comes to mind feels right) is weak do reasoners engage in substantial Type 2 processing.
Study: participants provided an initial answer immediately after reading a syllogistic or conditional-reasoning task, then assessed that answer's correctness (feeling of rightness), and then had unlimited time to reconsider their initial answer and provide a final Type 2 answer. Participants spent longer reconsidering their intuitive answer and were more likely to change it when they had low feelings of rightness. Feeling-of-rightness ratings were higher when the first response was produced rapidly rather than slowly.

Brain systems in reasoning
A meta-analysis suggests that a core brain system centered in the left hemisphere, involving frontal and parietal areas, is heavily involved in deductive reasoning. Specific areas include the inferior frontal gyrus, the medial frontal gyrus, the precentral gyrus and the basal ganglia.
Study: fMRI was used while people performed deductive-reasoning tasks.
Core regions (left rostrolateral cortex, medial PFC) were more strongly activated with complex deductive reasoning. The main language areas in the left hemisphere had very little involvement in deductive reasoning; their main role was in encoding problems presented verbally.
Study: patients with damage to the left parietal cortex performed worse on reasoning tasks than patients with right-side damage.
Study: patients with damage to the right frontal cortex had intact reasoning, while patients with left frontal damage showed deficits.
There is a hypothesis that there is a single conscious system based in the left hemisphere (the interpreter), which tries to make coherent sense of the information available to it. The core brain system found in the left hemisphere may depend at least in part on this interpreter.

Individual differences
The ability to inhibit incorrect responses produced by Type 1 thinking might be the reason why more intelligent individuals exhibit less belief bias.
Study: participants had to solve a reasoning task accompanied by a secondary task involving low or high cognitive load. High performance accuracy (low belief bias) was associated with activation in the right inferior frontal cortex, regardless of the secondary task. The right inferior frontal cortex was less activated under high- than low-load conditions, and there was much more belief bias in the high-load condition.
Study: participants performed conditional reasoning tasks (modus ponens) under MEG. They engaged in anticipatory processing before the second premise and conclusion were presented. There was enhanced brain activity 300 ms after presentation of the second premise when it failed to match the first one => participants expected the second premise to match the first one (e.g. P to follow "If P, then Q"). When the second premise matched the first one, participants generated the inference that followed validly from the first 2 premises (activation in the parieto-frontal network at 400 ms), slightly before the conclusion was presented.

Informal reasoning
Informal reasoning - a form of reasoning based on one's knowledge and experience rather than logic. It is a form of inductive reasoning that resembles our everyday reasoning.
Contemporary research has transitioned from the artificial domain of deductive reasoning to a new paradigm, which views reasoning not as relying on binary truth and classical logic, but instead focuses on probabilities and degrees of belief, and depends on knowledge and experience.
The content of an argument is generally important in informal reasoning but irrelevant in formal deductive reasoning. For example: (a) Ghosts exist because no one has proved they do not. (b) The drug is safe because we have found no evidence that it is not. People find (a) much less persuasive than (b) due to the implausibility of ghosts.
The reasoner's motives also differ: in deductive reasoning the motive is to reason accurately and logically, whereas in informal reasoning it is to produce arguments that persuade other people.

Findings: motivation
Motivational factors play a major role in informal reasoning. In the case of climate change, people's views are often much more strongly influenced by their notions of the kind of person they regard themselves as being than by the available research evidence.
Solution aversion - a bias in reasoning in which individuals deny the existence of a problem (e.g. climate change) because they dislike the proposed solution (e.g. restricting damaging emissions).
Study: Americans were asked to indicate whether they believed in climate change in the context of 2 possible solutions (restrictive emission policies or green technology). Far fewer indicated a belief in climate change when the proposed solution was undesirable (restrictive emissions) rather than desirable: 22% vs 55%.
Study: cultural values are more important than scientific literacy for predicting climate change awareness.
Myside bias - in informal reasoning, the tendency to select and interpret information in terms of one's own beliefs, or to generate reasons or arguments supporting those beliefs.
Study: students were asked to rate the accuracy of controversial (but factually correct) propositions such as: (1) College students who drink alcohol while in college are more likely to become alcoholic in later life. (2) The gap in salary between men and women generally disappears when they are employed in the same position. Students who regularly drank alcohol rated the accuracy of (1) lower than those who did not; women rated the accuracy of (2) lower than men. The extent of myside bias was unrelated to cognitive ability, suggesting participants made little use of analytical thinking.
Study: participants with pro-life or pro-choice views on abortion were presented with abortion-relevant syllogisms. They were instructed to assume the 2 premises were true and decide whether the conclusion followed logically from them. Participants found it hard to accept logically valid conclusions conflicting with their beliefs and to reject invalid conclusions coinciding with their beliefs.
Perspective taking (e.g. adopting the perspective of a climate scientist when rating arguments concerning climate change) can produce a modest decrease in myside bias.
Mercier's argumentative theory - claims that reason evolved so that humans can exchange justifications and arguments with each other, the function of those arguments being to convince people. Argument production should therefore be marked by a strong myside bias. Study: participants solved syllogistic reasoning problems and produced arguments for their answers. Then participants evaluated other people's arguments and also some of their own arguments, while being led to believe these were someone else's. Participants who did not detect this deception were highly critical of their own arguments, rejecting them 56% of the time.

Findings: probabilities
Bayesian approach to reasoning - prior beliefs have subjective probabilities associated with them based on our knowledge and experience. These probabilities are updated as we encounter new evidence. 3 factors influence the perceived strength of a conclusion: the strength of previous belief, positive arguments having more impact than negative ones, and the strength of evidence. Bayesian models account well for the perceived strength of arguments in fallacies (errors in reasoning or mistaken beliefs that weaken an argument's validity or persuasiveness).
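A worked example of such an update, applied to the earlier "the drug is safe" argument. All the numbers below are made-up assumptions for illustration, not data from the chapter:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# H = "the drug is safe"; E = "no adverse event observed in a trial".
prior = 0.7                  # assumed prior belief that the drug is safe
p_evidence_if_safe = 0.95    # assumed chance of seeing no adverse event if safe
p_evidence_if_unsafe = 0.40  # assumed chance of seeing no adverse event if unsafe

p_evidence = p_evidence_if_safe * prior + p_evidence_if_unsafe * (1 - prior)
posterior = p_evidence_if_safe * prior / p_evidence
print(round(posterior, 3))  # 0.847
```

Absence of adverse evidence raises belief in safety (0.7 to about 0.85) without producing certainty, matching the intuition that argument (b) is persuasive yet not conclusive, while for ghosts a very low prior keeps the posterior low.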
Are humans rational?

Bounded rationality
Bounded rationality - the idea that people are as rational as the environment (e.g. information costs) and their limited processing capacity (e.g. limited attention) permit. This allows us to produce workable solutions to problems despite limited processing ability by using heuristics. Many "errors" in human thinking reflect limited processing capacity and environmental limitations rather than irrationality.

Instrumental vs. broad rationality
Normativism - the idea that human thinking should be regarded as "correct" or "incorrect" depending on how closely it follows certain norms or standards (e.g. those of classical logic). Deductive reasoning is not a proper normative system for evaluating human thinking, because everyday problems rarely have a deductive or "correct" solution.
Instrumental (thin) rationality - maximizing the utility (subjective value) of one's choices or decisions with respect to achieving task-related goals.
Broad rationality - also considering the individual's personal goals and contextual factors (especially social ones) in addition to the immediate task-related goals.
Example situation: a parent has to decide whether to have their child vaccinated against a disease. Selecting the option with the lower probability of harm would be based on instrumental rationality. However, such decisions are often strongly influenced by anticipated regret (the option involving less regret is selected). Thus, the option having less instrumental rationality may be selected because it is associated with less anticipated regret. Such a decision is inconsistent with thin rationality, but consistent with broad rationality.

Limitations of human rationality
The standard dual-process theories of reasoning (where the irrational Type 1 process is followed by a rational Type 2 process) are not well supported empirically, which has prompted a radical new theoretical approach in which System 1 is rational and System 2 leads to error. This is based on the following arguments: (1) other species rely primarily on System 1 but seem less prone than humans to serious cognitive biases; (2) System 2 depends heavily on working memory, which is limited in its functioning; (3) System 2 involves language, which often lacks precision (e.g. people mean different things by words such as "probable" and "likely").
Dunning-Kruger effect - the finding that less skilled individuals overestimate their abilities more than more skilled individuals do. Possibly, those showing the effect lack the knowledge and expertise to evaluate the correctness of their own thinking.

Individual differences: intelligence
Intelligence is only modestly related to performance on judgment tasks.
Stanovich's tripartite model of reasoning - a dual-process model which distinguishes between 2 forms of Type 2 processing:
Algorithmic mind - contains information about rules, strategies and procedures that a person can retrieve from memory to aid decision making and problem solving. It can override the heuristic responses generated by the autonomous mind.
Reflective mind - makes use of an individual's goals, beliefs and general knowledge. It decides whether to use Type 2 processes.
Individuals high in fluid intelligence possess more Type 2 processes and use them more efficiently.
According to the tripartite model, incorrect, intuition-based answers can occur for 3 reasons:
Individuals may lack the appropriate mindware within the algorithmic mind to override incorrect responses.
Individuals with the appropriate mindware may have insufficient processing capacity to override incorrect Type 1 processing using the algorithmic mind.
Individuals may have the appropriate mindware but fail to use it because its use is not triggered by the reflective mind.

Article Summary - Religion, cognitive style, and rational thinking
Article summary here.

Article Summary - Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning
Article summary here.
