Confirmation Bias and Critical Thinking: Learn to Reduce Cognitive Biases



Summary

This document explores confirmation bias, a psychological tendency that affects our thinking, and its impact on decision-making and reasoning. It also covers the concept of worldviews and how they shape our beliefs, decisions, and interactions, and it offers practical techniques, such as the Veil of Ignorance, to improve critical thinking, counteract cognitive biases, identify assumptions, and build better arguments.

Full Transcript

Confirmation bias

A psychological bias is a tendency to think in certain ways that can lead to systematic deviations from a standard of rationality or good judgment; it is also known as a cognitive bias. These biases include Confirmation Bias (seeing the world so as to confirm what we already believe), the Availability Heuristic (remembering information because it's easy, rather than relevant or accurate), the Framing Effect (preferring information when it's presented in one form or context over another), and Anchoring & Adjustment (fixating on an arbitrary value, rather than looking at the true value of an action or good). We can never completely overcome these biases, but we can learn to recognise them and reduce their effects.

Biases tend to be part of our heuristics (patterns of thought that can lead us to make good or good-enough decisions without using good reasons). For example, most of us judge a person when we first meet them by their clothing. This is obviously unreliable, but it's quick, and it's sometimes useful.

"The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate." – F. Bacon (1620)

Confirmation bias is a psychological bias towards seeking only supporting evidence, interpreting evidence as supporting your views when it doesn't, or ignoring evidence that contradicts them.

Assumptions: Worldviews🎬

Everyone has a worldview, a core set of beliefs and a filter through which we see the world. Your worldview might be that of an evangelical Christian, an anarcho-syndicalist Leninist, a conservative neo-liberal capitalist, a dude-bro, a surfer-hippie, a Sufi mystic, a green feminist, a dewy-eyed romanticist, a bitter cynical misanthrope, a brand-aware fashionista, a foodie hipster, or whatever.

A worldview is a personal set of assumptions, defaults, heuristics, priorities, and values that constrains and constructs everyone's critical thinking.

Religion, culture, ethnicity, gender, and family upbringing often define much of your worldview. But these influences aren't fully deterministic. For example, someone's immediate family might include a pragmatic scientific environmentalist, an entrepreneurial image-conscious materialist, a musical intellectual greenie, a Buddhist social activist, a rebellious academic philosopher, and a cynical corporate burnout. Everyone in the family thinks differently, makes different assumptions, and has different types of reasons for their actions. They often don't understand each other very well.

Our worldview can affect every belief, decision, and action we take. It limits our ability to be critical with ourselves, and to identify and assess an appropriate range of options. It also limits our ability to communicate and negotiate with other people. When people have different values, motivations, and priorities, they will have trouble communicating.

He rangi tā matawhāiti. He rangi tā matawhānui. (A person with narrow vision sees a narrow horizon; a person with wide vision sees a plentiful horizon.)

Sometimes differing worldviews lead to irresolvable differences that we call deep disagreements (more about this in a later module). But on most topics, different worldviews don't usually lead to this reasoning chasm.

Instead, if we make an effort we can follow other people's reasoning, even when we don't agree with some of their assumptions, or don't find the topic particularly relevant or interesting. We can even identify some reasoning mistakes, and charitably help them improve their arguments. For example, we can engage with people with different religious beliefs, because we usually share common standards of reasoning.

To help us reason across the gap of our different worldviews, here are two suggestions. First, we can avoid some bias by broadening our perspective, through techniques such as the Veil of Ignorance. And second, we can learn to identify when some of our assumptions aren't held in common with other people, and how that affects our reasoning.

1. The Veil of Ignorance

To evaluate an argument we often need to look at it impartially – with no preference for one group of people over another. Humans aren't naturally impartial. We (almost) always prefer our own group, our own interests, our own beliefs over those of others. Whenever we come across an argument, we always have some kind of pre-conceptions that influence us. We all have a lot of ideas about how the world works and what is good or bad – even if we aren't aware of it.

To become better Critical Thinkers we need to train ourselves to evaluate arguments in an impartial manner. A classic way to do this is to imagine how a neutral third party would evaluate the argument – someone who is a good critical thinker, but who has no particular prejudices or preferences about the subject matter of the argument they are evaluating.

Imagine a person who is extremely good at critical thinking and also smart and kind, but has amnesia, making them forget everything about who they are. They don't know their gender, ethnicity, or nationality; their political opinions, religious beliefs, or affiliations; their wealth, skills, or friends; what kind of things they like or don't like; or their interests and hobbies. They know nothing about themselves, which makes them impartial. Now imagine how this person would evaluate the argument you are trying to evaluate.

The Veil of Ignorance originally came from an American philosopher named John Rawls, who ironically used it to push his particular political worldview. It takes practice and effort to use the Veil of Ignorance to help you evaluate an argument. Your worldview will still influence your evaluation to a degree, even with the Veil. But with practice and reflection, you can reduce its effect.

The Veil of Ignorance is a useful tool for evaluating arguments where you have some preference for one conclusion over another, such as whether you should be allowed to borrow your parents' car. But it's hard to learn, and to apply consistently, so the Veil of Ignorance must be used together with the other tools for good critical thinking we teach in this course – it is not a panacea.

2. Assumptions

Here's a sample of the ranges of fundamental assumptions that will shape your reasoning: Do you assume that fairness is about having equal opportunity, or equal resources, or equal outcomes, or equal assistance? Do you think that modern cultural institutions are tilted heavily in favour of men? Or women? The wealthy? Or the lazy and malingering? Is racism rare, or common but manageable, or ubiquitous and institutionalised? Which is more important: happiness or efficiency; money or time; love or power? Are these even comparable?

Is God (or the gods) central to a fulfilling life, important in some areas, or a joke imposed by the deluded? Does hard work matter for judging someone's success, or just the end results? Are there many incompatible but important ways of understanding an issue, or only one (yours)? Is expertise or experience more important? Which carries more weight: your own perceptions, the latest theory, or historical precedents? Who should you trust: almost everyone, most of your local community, your own family, or only those you can control? Is prison primarily for justice, retribution, rehabilitation, isolation from society, or some other purpose? Is the environment ours to plunder as we wish, something we hold in stewardship, or something we share equally with all other life?

These are primarily questions of values, many of which you may not have thought a lot about. Some have political connections; others moral or spiritual. They are part of your worldview, and mean you will evaluate arguments differently from other people. If you don't identify your assumptions, you'll be trapped inside your worldview, wondering why everyone else is always "wrong".

What are Dispositions?

You might now have some questions, like: Why didn't Tim mention dispositions? Isn't critical thinking a set of skills, not a moral code? What is this "moral indoctrination" doing in a course about good thinking?

Critical thinking is a set of epistemic skills and attitudes ('epistemic' refers to knowledge). These attitudes to knowledge acquisition and curation are important, because they make it more likely that you will apply the skills in the right way, at the right times, and directed to the right aims. And here, by 'right', we mean where critical thinking will be most effective, rather than counter-productive. But attitudes come and go, and are affected by your mood, how busy you are, and the topic under discussion.

Dispositions are the result of developing certain attitudes through repetition and habituation. A disposition is an underlying tendency to have an attitude; a part of your character or personality. For example, to be intellectually curious is to be disposed to display curiosity when exposed to new ideas.

Some of the dispositions we'll cover in this course might sound moralistic, but they are intended to help you become a better thinker, not a moral saint. More technically, they are epistemic virtues, not moral virtues. Tim suggests that there are two types of reasons why dispositions help you with critical thinking – the instrumental and the moral. But in this course, we will only consider how they make you a better critical thinker. You might want to become a better critical thinker as a step on the way to being a better person; but equally well, it is an essential step to becoming a more effective supervillain.

Dispositions aren't Capacities

With a little effort, I could learn basic Russian. One might say that I have the capacity to learn basic Russian. To have a capacity to do something does not require a person to actually ever do that thing. Perhaps one day I'll live in Russia and won't actually learn Russian – maybe I just could not be bothered. I would still have the capacity to learn it. To take another example, an irascible and impulsive person may have the capacity to control their fits of anger, even though they may typically fail to do so. Contrast this with qualities such as generosity, kindness, or patience.

When we attribute these characteristics to people, we do not merely mean that they could possibly act kindly, generously, or patiently. Rather, we mean that, under the appropriate circumstances, the generous person will show generosity, the patient person patience, and the kind person kindness. These qualities are not mere capacities, but rather dispositions. Both the impulsive person and the patient one have the capacity to keep calm. What the impulsive person lacks is the disposition to do so when appropriate.

Dispositions are things you do automatically when appropriate, even in difficult circumstances. We all have the potential to develop the dispositions we want. However, it takes practice and a little self-awareness. The best approach is to decide on a couple of dispositions that you want to develop, and work on them steadily for some time. After a couple of months or even years, you might be happy with one disposition, and ready to improve another.

Open-mindedness

Our first epistemic disposition is relatively uncontroversial. To be a good critical thinker, it helps to be open-minded. Having an open mind means that you are prepared to consider new ideas and alternative points of view, and that you are (in principle at least) open to changing your mind on most topics.

Open-mindedness is the disposition to consider different ideas, beliefs, points of view, approaches, and assumptions.

However, it is possible to have a mind that is open in the wrong way. For instance, you might be overly attracted to novelty, accepting new ideas simply because they are new. Or you might be prepared to contemplate anything, no matter how incoherent it is. Or perhaps you are prepared to consider only ideas that share some common assumption, such as being open to all kinds of psychic phenomena, but not to psychological explanations of these events. Or only open to ideas from science (we'll look at that one much later), or to any idea involving conspiracies (another topic we'll return to). Like all dispositions, there are several ways you can have too much or too little Open-mindedness, or use it in the wrong way, at the wrong time, or in the wrong areas.

The Availability Heuristic is a psychological bias that conflates how easy something is to recall with its frequency of occurrence and relevance to the current argument. The availability heuristic is also associated with stereotyping and defaults, but it's not quite the same. So don't blame the availability heuristic for all humanity's bigotry or hasty generalisations.

Here are some things that you can do to reduce the deleterious effects of the availability heuristic: avoid snap judgements; question your default behaviour; get a wide range of information; mix with different types of people; notice when other people have different assumptions than you; learn to spot your own assumptions. Collectively, these will help to reduce some of the problems of relying too much on the availability heuristic. However, it seems fundamental to human psychology, and to our use of language.

Fallacies: of Appeal

A Fallacy is a flaw in an argument that is psychologically convincing, and follows a well-known reasoning pattern. There are hundreds of named fallacies, and thousands of unnamed ways that arguments go wrong. The fallacies that we include in this course are quite common and easy to describe, but they are not particularly special.

However, even just reading a description of a fallacy is often enough to raise awareness of the reasoning error, and help people to avoid it in the future.

A fallacy is a commonly used, psychologically convincing but unreliable, pattern of reasoning.

One reason why fallacies can seem convincing is that they are often quite reliable ways of reasoning. Most of the fallacies we will consider are sometimes appropriate ways of reasoning; just not as often as we'd like. You can see our list of fallacy videos here. Here are some good sources, if you want to find out more about fallacies: An Illustrated Book of Bad Arguments; A Taxonomy of Fallacies; Wikipedia's List.

The first class of fallacies we will look at are Fallacies of Appeal. Fallacies of Appeal offer reasons to believe a claim or conclusion by appealing to external sources (rather than facts) when they are irrelevant to the reasons for believing the conclusion.

A fallacy of appeal is an unreasonable appeal to an external factor, such as emotion or popularity.

1. Appeal to Authority

Description: Relying upon the view of apparent (as opposed to genuine) authorities to settle the truth of a statement or argument.

Example: Richard Long, a respected retired New Zealand newsreader, featured in advertising campaigns for Hanover Finance. Long had no financial expertise. Newsreaders look well informed, but they are essentially presenters. They are well known because they're on the news, not because they know about investments. If we rely upon a newsreader's endorsement to settle which investment fund we should trust, we would be accepting a claim without adequate evidence. That would be a fallacious appeal to authority.

Appeals to authority also conflict with the basic tenet of good logical and critical thinking which calls upon us to take responsibility for evaluating the grounds for our beliefs. Adopting a belief merely because someone else told us it was true is a way of avoiding good logical and critical thinking. Sometimes, however, good logical and critical thinking will itself lead us to rely on genuine authorities. If I can't assess the investment option for myself, I might reason that I should trust the advice of a genuine investment advisor. That's not avoiding logical and critical thinking: it's reasoning about a matter related indirectly to the question I'm trying to settle.

When I consider whether I should rely on an authority, I should consider the following questions: Is the authority a genuine authority: are they an expert? Are they giving advice in the areas within which they are a genuine authority? (We should listen to actors about acting; not so much about investing or medicine.) Is there a broad consensus among authorities in the area? If not, we should not decide to believe X solely because an authority says X is true, since other genuine authorities say that X isn't true. Is the authority speaking sincerely (they might be giving an endorsement because they're paid to do so), and are they free of obvious bias?

Only if the answer to all four of these questions is "yes" might we accept a claim because an authority endorses it, and even then, we should only do so if we are not in a position to evaluate the evidence for the claim ourselves.

2. Appeal to Ignorance

Description: The arguer asserts that a claim must be true because no one has proven it false, or that a claim must be false because no one has proven it to be true.

Examples: There must be intelligent life on other planets, as no one has proven there isn't. There isn't any intelligent life on other planets, as no one has proven there is.

Both claims assume that the lack of evidence for (or against) a claim is good reason to believe that the claim is false (or true). Ignorance – in the sense of a lack of knowledge – features as part of the proof of the conclusion. But in general, the mere fact that a claim has not yet been proven is not enough reason to think that claim is false.

However, there are some non-fallacious appeals to ignorance. If qualified researchers have used well-designed methods to search for something for a long time, without success, and it's the kind of thing people ought to be able to find, then the fact that they haven't found it is evidence that it doesn't exist. For example, many decades of research into ESP, mind-reading, spiritual healing, and other psychic powers have found no convincing cases. That's a strong, if not conclusive, case that these psychic powers are rare to non-existent.

3. Appeal to Consequences

Description: An argument is considered good if good consequences will follow from the conclusion; similarly, an argument is considered bad if undesirable consequences follow.

Example: If there were Anthropogenic Global Warming, then we'd have to change how we drive our cars. But we like driving our cars fast, and have paid lots of hard-earned money for them. So Global Warming is rubbish. Here, the conclusion being true would be inconvenient for us, so we claim that it is false.

One place this argument form may not be fallacious is in certain types of moral argument, where the moral consequences of a decision are part of the decision-making process. For instance, if you lie to an assassin, you might prevent her from killing her target; here you might be able to justify the moral harm of lying because of the greater moral harm caused by the consequences of not lying.

4. Appeal to Popularity

Description: Arguing that a claim must be true because lots of people believe it.

Examples: Essential Bible Blog's "Top 10 Reasons the Bible is True": Reason 8. Leader Acceptance. A majority of the greatest leaders and thinkers in history have affirmed the truth and impact of the Bible. Reason 9. Global Influence. The Bible has had a greater influence on the laws, art, ethics, music and literature of world civilization than any other book in history.

Perhaps the Bible is true, but the fact that lots of people believe it to be so is irrelevant to whether it is or not. We should investigate and evaluate their reasons for believing it, rather than taking the mere fact that they believe it as a reason to do so. But … sometimes a consensus among properly informed people may be a fairly good guide to the truth of a claim: see the circumstances in which an appeal to authority might not be fallacious.

5. Appeals to Tradition

Description: Like appeals to popularity, except the appeal is to how long something has been believed, rather than to the number of people who have believed it.

Examples: People have believed in astrology for a very long time, therefore it must be true. We've always eaten large hot roast meals for Christmas, so we will keep doing this even though we live in the southern hemisphere now. But all of the objections to arguments from majority belief apply here, too.

6. Appeals to Emotion (e.g., pity, love, anger)

Description: An attempt to evoke feelings or emotions, when such feelings are not logically relevant to the conclusion.

Examples:

Student to Lecturer: I know I missed most of the lectures and all of my tutorials. But my family will be really upset if I fail this course. Can't you find a few more marks?

That is an appeal to pity. Even though what the student says is true, it is irrelevant to the marks they should be given. The University has strict rules designed to guarantee fair treatment for all students, which do not allow exceptions to be made in cases like this.

Daughter: Can we get a puppy?
Father: No.
Daughter: If you loved me, we'd get a puppy.

That would be an appeal to emotion, in this case love. Note that the persistent child might continue by appealing to other emotions:

Daughter: A puppy would grow up and protect us. Can't we get a puppy?
Father: No.
Daughter: If you wanted to keep us safe you'd get a puppy! You don't care about us!

Being able to spot common fallacies can be very useful in the home. But pointing them out in the heat of argument is less useful.

7. Appeals to Aspiration

Description: Evidence, information, or advice is presented by someone the audience aspires to be like, or admires.

Examples: All influencer posts. All advertisements with celebrities. Many political speeches.

We choose to follow their advice, and often believe it, because we want to be like the celebrity, be seen to be like them, or even be liked by them. It's a variation on the appeal to popularity – it's not that we want to be popular/desirable, it's that we are following someone because they are popular/desirable. All celebrity endorsements, and the entire career of influencing, prey on people through this fallacy. It's ubiquitous, and evil.

Although critical thinking is often studied as an individualistic academic discipline, one of the most important dispositions for a good critical thinker to develop is the ability to discuss ideas and think as part of a greater whole, even if you disagree with some or all of the ideas. This is something that employers, committees, group projects, relationships, etc., all require:

Constructive contributions make everyone better thinkers.

Being Constructive is the disposition to engage with people and build on their ideas, rather than treat them as obstacles or targets.

Constructive criticism is perfectly coherent. You can disagree with someone, and even point out their mistakes, as long as you are doing it at their side, in a way that helps them. There is also the chance that you are wrong, and being constructively helped is better than being simply corrected. The markers in this course try to be constructive in our feedback, even if we don't always succeed (let us know if we are cruel or unhelpful!).

Being constructive – that is, making our critical thinking interactions useful or beneficial – takes practice. You'll get a chance every week in this course, as you can comment on each other's discussion posts. It's harder to do when you feel strongly about a topic, or have a complicated relationship with the other people. Family members are great for practising this disposition; many siblings seem to exist simply for the purpose of getting under our skin.

People are not objects to score points off so you can look good or to make them look bad. Finding a mutually agreeable resolution is more important than winning or displaying power. It's easier to explain why being constructive is important by considering the alternative:

Don't be an arsehole.

Arseholery is the disposition to be an 'arsehole' – to score points off people to look good or make them look bad.

For 'arseholes', winning, dominating, or displaying power is more important than truth or finding a resolution. But why should we care how critical thinkers behave?

1. Critical thinking is partially a community endeavour, and certainly this critical thinking course involves a community of inquirers. Alienating parts of a community reduces their chance to learn, and your chance to learn from them. It raises emotions and reduces rationality. It increases the chance that neither you nor they are engaging in good arguments. It makes some people defensive and others triumphant, and both will fail to consider the quality of the arguments on offer.

2. No one should lose when everyone uses Critical Thinking. This isn't because it's soft and fluffy and non-judgmental, and everyone gets a prize. It's because we judge arguments and reasons, but we don't judge the arguers. As critical thinkers, we should believe the conclusions of the best arguments. So if your argument 'loses', then it's not your argument any more, and you've improved your thinking – you win! Critical Thinking is not about your ego, or being right, or making someone else look bad. It's about everyone working together to improve everyone's beliefs and their reasons for those beliefs. The only real way to lose when you are thinking critically is if (a) someone deceives you, convincing you that a bad argument is good, or that a good argument is bad, or (b) someone engages in belittling or abusive behaviour, rather than critical thinking. The former problem can be reduced by becoming more skilled. This disposition addresses the latter problem.

3. Arseholes try to win, and if they can't win, they make other people lose. They keep score, make personal attacks, gloat, preen, and lie to look good. Truth and facts are often not their friends. Even people with excellent critical thinking skills but a disposition towards arseholery tend to cause bad critical thinking in a group or team environment. So really, you are doing them a favour as well as everyone else when you ask them politely to curb their behaviour. Just don't expect them to thank you.

4. The term's very crudity serves as a reminder that we can be blunt and truthful, but not offensive. We don't need to sugar-coat or use euphemisms. We try to be clear, concise, simple, and direct when we try to construct arguments together. We avoid both loaded (thick) terms and vacuous niceties – but we don't attack or belittle people, because we are focused on argument quality, not ownership or victory.

In this course, I expect us all to be constructive when engaging with each other, even though we may not agree with many of each other's ideas. Any instance of belittling or abusive behaviour is unacceptable, because it reduces the critical thinking capabilities of the abuser, the target, and me. And frankly, we all need all the help we can get. If you feel that a student or staff member has been an arsehole, let the lecturer know. Even if it's the lecturer.

Dispositions: Systematicity🎬

Being systematic improves your ability to learn, which is advantageous for critical thinking. But far more importantly, it improves your chance of reasoning well consistently, and of minimising reasoning errors.

Systematicity is thinking in a structured manner.

Sure, systematic people may not be the most exciting to be around. But ask yourself: who completes their tasks on time, without last-minute panics or missing important elements?

Systematicity is the disposition to approach learning, problem solving, and similar cognitive activities in an orderly and focused way.

The major tool we teach in this course is the standard form. It may be boring, and suck the life out of arguments, but it allows you to construct, analyse, and evaluate arguments in a systematic manner. Everything is labelled, and the interaction between elements is clear and easy to check (see the sketch at the end of this section). There are algorithms (step-by-step instructions) for each step of analysing and evaluating complex arguments. If you can reliably follow this procedure for one argument, you should (with practice) be able to do this for any argument. And by following the same method, you get consistent results. If there's a flaw in your procedure, it's a consistent flaw, and once it's spotted and fixed, it's fixed for all arguments. Using the same methods, in the same way, with the same steps for all arguments is initially more cumbersome; but the repetition leads to more rapid and sustained improvement, and consistency.

"Organized approaches to problem-solving and decision-making are hallmarks of a thoughtful person regardless of the problem domain being addressed. The inclination to approach problems in an orderly and focused way is an indispensable part of competent clinical (accountancy, managerial, psychological, scientific) practice, and deficits in systematicity might particularly predispose a nurse (CPA, pharmacist, attorney, physician) to the possibility of negligence in practice." (Facione et al. 1995)

You need to be able to apply every method or technique that you learn in all relevant situations. People who aren't systematic apply whichever tools first come to mind (availability heuristic), or try the trick that solved the last problem and hope it works again, or compare a situation to another (argument by analogy) and hope it will work the same way.

Being organised, diligent, reliable, and consistent isn't likely to win you friends at a party. But it will help you to complete many kinds of tasks. Successful application of systematicity requires the development of several skills including categorisation, analysis, and interpretation, and the mastery of various algorithms or processes. Over time, these form single units of action (that is, you don't need to follow all the steps – you just do the whole complex series as a single action, like sending a text). You know that if you use a complex strategy, it will be done properly, in the same way whether it is the first or hundredth problem you face today.

This allows you to be cognitively flexible, where you can draw upon several different problem-solving strategies when challenged by a difficult task. Each strategy is reliable, and doesn't require thought to be done properly and carefully. You can start using that brain power to be imaginative and creative in your problem solving. If a strategy fails, you know it was unlikely to be due to a mistake, and you can move on, rather than trying again and again, hoping that doing the same thing over and over might lead to different results.

The point of developing systematicity is to be reliable at following the right steps, and so free your brain up for higher-level thinking. If you spend all your effort and concentration on following a procedure, or you follow a procedure in a systematic but flawed manner, you've gained no advantage, and forsaken the intuitions and inspiration that often lead to novel and effective solutions.

So, as with all virtuous dispositions, being systematic isn't enough. You need to do it in the right way, at the right time, and in combination with other appropriate skills and attributes. Blindly following lists of rules like an assembly-line robot isn't a great way to become a Critical Thinker.

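To make the idea of a labelled, checkable structure concrete, here is a minimal sketch (in Python) of one way an argument in standard form might be represented and walked through step by step. The representation, the labels P1, P2 and C, and the checklist questions are illustrative assumptions only – this is not the course's official notation or evaluation procedure.

    from dataclasses import dataclass, field

    @dataclass
    class StandardFormArgument:
        """A labelled argument: numbered premises offered in support of one conclusion."""
        premises: dict = field(default_factory=dict)  # e.g. {"P1": "..."} (labels are assumed, not official)
        conclusion: str = ""

        def checklist(self) -> list:
            """The same fixed sequence of checks, run in the same order on any argument."""
            steps = []
            for label, text in self.premises.items():
                steps.append(f"Is {label} acceptable? ({text})")
            steps.append("Are the premises relevant to the conclusion?")
            steps.append(f"Do the premises together support the conclusion: {self.conclusion!r}?")
            return steps

    arg = StandardFormArgument(
        premises={
            "P1": "Most apples, when dropped in a gravitational field, fall.",
            "P2": "This apple was dropped in a gravitational field.",
        },
        conclusion="This apple fell.",
    )
    for step in arg.checklist():
        print(step)

The payoff is the one described above: because every element is labelled and the checks always run in the same order, a flaw in the procedure shows up consistently and, once fixed, stays fixed for every argument.
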
Assumptions: Deduction

Most people, most of the time, assume their arguments are Deductive. This is usually an unreasonably high standard to hold them to. People are simply wrong about the amount of support their premises can provide. Note that this is not just a matter of lack of education or awareness; most philosophers state their own arguments in a Deductive form, and treat them as valid. Law is often taught as being Deductive, as is much of Science.

Deduction is the assumption that one's arguments are usually Deductive.

Good arguments about reality are very rarely Deductive. But recall what we said in our discussion of counter-examples: "an argument is valid if it's impossible for the premises to be true and the conclusion false"; that is, if there is no possible counter-example. Impossibility is a really high standard. Avoiding this assumption is yet another excellent reason to get really good at creating counter-examples.

The Problem with Reasoning about the World

There's a disconnect between Deductive reasoning and the world: things go wrong. There are exceptions. Surprises happen. Some 'rules' conflict with others. Life is messy, and so is the universe.

If we drop an apple, it will fall. Except when we are orbiting, or it's rocket-powered, or magnetic, or suspended by a string, or...

If we chop off an animal's head, it will die. Except that there are scientific studies of chickens living for days or even weeks after being beheaded, and they are still integral members of the chicken community, wandering around, trying to peck grain, engaging in status fights, taking dust baths, and so on – all without a head.

Adult NZ citizens can vote. Except if you have been overseas for more than 3 years (except as a diplomat or soldier), or are in prison (for more than 3 years), or detained in an institution under the Mental Health (1992) or Intellectual Disability (2003) Acts for a period of more than 3 years, or are on the Corrupt Practices List... [Electoral Act 1993, §80.1]

Liquid stays in a bowl in a gravity field. Except if it's a superfluid, or it's siphoned, or the liquid evaporates or combusts, or the air is moving fast, or the bowl has a hole or crack, or is magnetised or spun, or is submerged in another liquid...

The Deductive standard permits no exceptions, ever. Under any, even extraordinary or crazy, circumstances. None. No matter what other information may be true, or whatever premises may be added. But life is unpredictable. For example, a friend who is always on time may be late today for any number of perfectly reasonable, normal, and unpredictable reasons.

Empirical Generalisations

We have seen that empirical generalisations, those about physical reality, are best couched in tentative or defeasible forms:

1. Most apples, when dropped in a gravitational field, fall.
2. Most animals die within minutes of having their heads cut off.
3. In a gravitational field, liquids typically stay in a bowl.

Our empirical generalisations have to allow exceptions to ensure they are true. And this matters when we use them in arguments, because we want our premises to be true, and our conclusion to follow from our premises. But allowing for exceptions also allows background information to become relevant, such as where the apple is, or how fast the bowl is spinning. And the potential for background information that is not part of the argument to affect the truth of the conclusion is what makes most arguments Non-deductive.

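To make the Deductive standard concrete, here is a small illustrative sketch (in Python) of hunting for counter-examples. The toy argument – affirming the consequent – and its encoding are assumptions chosen for illustration, not an example from the course: we check every possible combination of truth values, and the argument counts as valid only if no combination makes all the premises true and the conclusion false.

    from itertools import product

    # Toy argument (affirming the consequent):
    #   P1: If it rains, the ground is wet.
    #   P2: The ground is wet.
    #   C:  It rains.

    def premises_true(rain: bool, wet: bool) -> bool:
        return ((not rain) or wet) and wet    # P1 (rain -> wet) and P2 (wet)

    def conclusion_true(rain: bool, wet: bool) -> bool:
        return rain                           # C

    # A counter-example is any possible situation where every premise is true
    # but the conclusion is false. Valid means there is no possible counter-example.
    counter_examples = [
        (rain, wet)
        for rain, wet in product([True, False], repeat=2)
        if premises_true(rain, wet) and not conclusion_true(rain, wet)
    ]

    print("valid" if not counter_examples else f"invalid, counter-examples: {counter_examples}")
    # -> invalid, counter-examples: [(False, True)]  (wet ground without any rain)

A single possible counter-example, however far-fetched, is enough to fail the Deductive standard – which is exactly why that standard is so rarely achievable for arguments about the messy world.
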
Deductive Arguments

So when can we use Deduction? And if it's not often useful, why do we make such a big deal about it?

To answer the second question first, often we can treat arguments as Deductive because the counter-examples to the argument (the exceptions) probably aren't important or relevant. So the resulting conclusion is not guaranteed to be true, but it's good enough for the current circumstances, because we believe nothing weird is going on in the background. The other place where Deductive reasoning is useful – and in this case it is completely reliable – is when we reason with conceptual definitions. This happens in almost all of mathematics, as we saw with Pythagoras' theorem.

If an argument is dealing with a complex situation where you don't have all the information, or there are some premises that use terms like 'usually' or 'mostly', or broad generalisations, then use Non-deductive reasoning. Also, if an argument is invalid when treated as Deductive, but strong when treated as Non-deductive, then use Non-deductive reasoning. Don't let your confidence in the strength of your argument betray you into deciding that it's perfect; being Deductive is usually not an achievable standard in everyday life.

Fallacies: of Relevance

Fallacies of relevance offer reasons to believe a claim or conclusion that turn out to be irrelevant to the claim. We also have videos on most of these fallacies.

A fallacy of relevance is a pattern of reasoning that appears to offer support, but which is actually irrelevant to the argument.

1. The Ad Hominem (Attacking the Person) Fallacy

Description: Rejecting someone's argument by attacking the person rather than evaluating their argument on its merits.

Example: "Dear Editor, The current campaign against combining drinking with driving is terrorising law-abiding people. Many law-abiding people are cutting their alcohol consumption because they are afraid of being caught by random breath testing. But research shows that the average drink-driver in a fatal accident has an average blood alcohol level of more than twice the legal limit. The current campaign against drinking and driving is failing to achieve what should be our top priority: getting the heavy and hardened drinkers off the road." – Douglas Myers, CEO, Dominion Breweries.

"Dear Editor, I read Doug Myers' letter yesterday, but he is the CEO of a major brewing company! He has a vested interest in keeping alcohol sales up, and the anti-drink-driving campaign threatens to reduce alcohol sales. We shouldn't take any notice of his views about drinking and driving."

But if Myers has given arguments in favour of his view, we should evaluate them like any other argument, rather than writing them off because of facts about him. However, facts about the arguer can be a good reason to examine their arguments more carefully than we might otherwise. The following does not appear fallacious: "Burton Wexler, spokesperson for the American Tobacco Growers Association, has argued that there is no credible scientific evidence that cigarette smoking causes cancer. Given Wexler's obvious bias in the matter, his arguments should be treated with care."

2. Equivocation

Description: A key word is used in two or more senses in the same argument, and the apparent success of the argument depends on the shift in meaning.

Example: Any law can be repealed by the proper legal authority. The law of gravity is a law. Therefore the law of gravity can be repealed by the proper legal authority.

When the two senses of 'law' (laws regulating human conduct vs. uniformities of nature) are made explicit, it is apparent that the first premise is irrelevant, and hence the argument is fallacious. And, again showing that famous philosophers are not immune, here is John Stuart Mill arguing that happiness is desirable: "The only proof capable of being given that an object is visible, is that people actually see it. The only proof that a sound is audible, is that people hear it… In like manner, I apprehend, the sole evidence it is possible to produce that anything is desirable, is that people do actually desire it… [T]his being a fact, we have not only all the proof which the case admits of, but all which it is possible to require, that happiness is a good." [John Stuart Mill, Utilitarianism]

But 'desirable' is used in two different ways in this passage, to mean 'able to be desired' (just like 'visible' means 'able to be seen') and 'worthy of being desired'.

3. The Hypocrisy Fallacy

Description: Rejecting an argument because the person advancing it fails to practise what they preach.

Example: Doctor: You should quit smoking. It's a serious health risk. Patient: Look who's talking! I'll quit when you quit.

Responses like that probably sound familiar. But the doctor's failure to look after her own health is irrelevant to the argument – which rests on a concern for the patient's health – that the patient should quit smoking.

4. The Red Herring Fallacy

Description: An arguer tries to side-track their audience by raising an irrelevant issue and then claims that the original issue has effectively been settled by the irrelevant diversion.

There are a number of stories about where the term "red herring" comes from. One is that in fox hunting, cooked herring was dragged across the trail of the fox when the hunt was over: the smell made the hounds lose the trail, so they could be caught and brought home. Another is that prison escapees rubbed rotten herring on their bodies because the smell made it impossible for dogs to follow their trails. The general idea is clear: red herring fallacies lead you off the scent.

Example: "There is a good deal of talk these days about the need to eliminate pesticides from our fruits and vegetables. But many of these foods are essential to our health. Carrots are an excellent source of vitamin A, broccoli is rich in iron, and oranges and grapefruits have lots of Vitamin C."

Plans to eliminate or reduce pesticides probably don't entail stopping the production of common vegetables: the suggestion that they do is an irrelevant red herring.

5. The Strawman Fallacy

Description: Someone distorts or caricatures an opponent's arguments or views, and then attacks the weakened version rather than the real argument.

Example: Margaret: "We have to do something about greenhouse gases. The government should raise vehicle fuel efficiency standards to cut down the amount of CO2 we release over the next 20 years." Roger: "Margaret's solution would be a disaster. It would kill the economy. How would people get to work without cars?" (Roger claims that Margaret is proposing measures that would eliminate cars. Margaret has not said anything equivalent to that. It's a strawman.)

There is a positive message from the Strawman Fallacy: the importance of being charitable. Showing that a strawman version of a position is flawed may help us win a debate, but it is unlikely to move us toward the truth. If we can show that even the strongest version of a position we oppose is flawed, we may make progress. So our goals for critical thinking lead us to adopt the principle of charity. When representing an argument that you do not agree with, it is important to represent that argument in a way that is reasonably faithful to the original argument, while being as strong as possible.

Charity🎬

We have just learned about the Principle of Charity, which tells you to treat arguments as they should have been presented, not as they actually were. As well as mastering the technical skill of applying the principle of Charity, it's also important to internalise Charity as a disposition, as we tend to need to apply charity at exactly the times we are most loath to consider it.

Be nice to flawed but repairable arguments.

But how nice? There's got to be a limit to this, of course. You don't want to make some drongo look like Einstein. So there's a limit to what you should be prepared to put into their arguments. You're trying to work out what their argument is, not produce the best possible argument for their conclusion.

Charity is the disposition to apply the Principle of Charity regardless of how you feel about the arguer or conclusion.

Here's an analogy: if you are looking after someone's child for a few weeks, you want to help the child grow and improve, and hopefully return a slightly improved human being to their parents when you are done. But giving them a different, better-behaved child instead is frowned upon. So is replacing someone's argument with an unrelated, better one. In both cases, you are being a little too charitable.

Still, there are several reasons to be charitable. For one, if you actually believe the conclusion of the argument, you want the argument to make a good case for it. If you like the argument, then you'll benefit from giving it a strong interpretation. But more importantly – especially if you don't believe the conclusion – you are better off attacking a stronger version of the argument. If you're in a debate with someone and you attack a version of their argument which isn't as strong as it could be, the person will just say: "That wasn't what I meant. You're not attacking my actual argument; you're caricaturing my argument." So you won't have got anywhere. By contrast, if you can show that even the best version of your opponent's argument fails, then you've made some progress.

Quite often, what you see in good argumentation is an opponent actually improving the position they are going to attack. Then once you've made the argument as good as it can be – bearing in mind that you're trying to evaluate their argument rather than your own re-creation – you can point out the ways in which, even repaired charitably, the argument is flawed. Now you've really shown something useful: that even the best version of their argument won't work. It is pointless to show that a distorted version of an argument is bad; but it is highly valuable to show that an argument isn't repairable.

Being charitable, by the way, is the opposite of applying the Strawman fallacy (which we just introduced). The Strawman fallacy consists of distorting or misinterpreting someone's view so that it can easily be attacked.

Committing the Strawman fallacy doesn't move us toward truth, because it involves rebutting an argument that nobody was putting forward in the first place. Sometimes being charitable is called "steel-manning" an argument, and described as the opposite of the Strawman fallacy. But I've always thought that the strawman just lacked a brain, while the steel-man (or at least tin-man) was searching desperately for a heart, trying to be as kind as possible within the strict mechanical rules of reason. So being Charitable is a very different and much harder mental process.

The Framing Effect is another psychological bias from our friends Tversky & Kahneman. The framing effect is a psychological bias where our preferences are influenced by whether options are presented (framed) in a positive or negative manner.

Being Wrong🎬

Sometimes we get things wrong. It happens to everyone. And it should: even if we could always be perfectly rational and reasonable (which no one is!), we deal with complex situations, incomplete information, and Non-deductive reasoning, so unforeseen circumstances can mean that we end up believing or doing the wrong thing. So why do some people have a really hard time admitting that they are wrong?

But I'm never Wrong (about Important Stuff)

If you have trouble admitting you are wrong, have a think about why. There are several common causes: insecurity and anxiety; a large but fragile ego; social embarrassment or shame; fear of punishment for errors; arrogance; an inability to question your own assumptions; privilege and entitlement.

None of these characteristics are easy to admit. And that's part of the problem. To learn to admit you can sometimes be wrong when it's not natural for you to do so, you need to admit that you have a significant character flaw, which is a harder version of the same problem. And then you need to start changing yourself.

In this module, we also talk about Epistemic Humility, which is the disposition of having the right degree of confidence in your beliefs, not too much nor too little. We'll discuss how to change your attitude. But for the moment, it's enough to acknowledge that you sometimes need to admit you were wrong. It's simply part of being rational in a complex, exception-riddled world.

Fallacies: of Unacceptable Premises

Fallacies of Unacceptable Premises attempt to introduce premises that, while they may be relevant, we shouldn't accept as part of the argument. We also have videos on most of these fallacies.

Fallacies of unacceptable premises occur when an argument contains an unacceptable premise, or a Weak sub-argument containing such premises, that makes it psychologically compelling – it appears to be a Strong argument, but is not.

1. Circular Reasoning

Description: Circular reasoning (sometimes known as 'begging the question') is using, as a premise or assumption, the very claim you are trying to prove as a conclusion.

Example: Arthur: God exists. Barbara: How do you know? Arthur: Because it says so in the Bible. Barbara: How do you know what the Bible says is true? Arthur: Because the Bible is divinely inspired by God.

The Bible could only be divinely inspired if God existed. So Arthur's appeal to the Bible to prove the existence of God assumes the very thing he's trying to prove.

2. The Slippery Slope Fallacy

Description: Arguers say that an innocent-looking first step should not be taken because once taken, it will be impossible not to take the next, and the next, and so on, until you end up in a position you don't want to be in.

Example: Don't get a credit card. If you do, you'll be tempted to spend money you don't have. Then you'll max out your card. Then you'll be in real debt. You'll have to start gambling in the hope of getting a big win. But you'll normally lose. Then you'll have to steal money to cover your losses. Then your partner will leave you. And you won't be able to feed the dog, and it'll die. And it would be bad if the dog died. So you mustn't get a credit card.

Slippery Slope arguments are fallacious if it is possible to stop at one of the steps: couldn't I get a credit card with a spending limit, or exercise a bit of control, or get the local animal protection society to help me feed the dog?

3. Decision Point Fallacy or the Sorites Paradox

Description: Sometimes the conditions that make the use of a term appropriate vary along a continuum, and there is no sharp cut-off between circumstances in which the term is correctly applied and those in which it is not. If an arguer claims that because we cannot identify a precise cut-off or decision point, we cannot distinguish between correct and incorrect uses of the term, they are arguing fallaciously.

Example: One grain of wheat doesn't make a heap. Suppose 1 million does. Take one away. Surely we still have a heap: if a million makes a heap, surely 999,999 does too. One grain can't turn a heap into a non-heap. Take another away. We still have a heap: if 999,999 does, surely 999,998 does too. One grain... etc. Take another away. We still have a heap... etc. But if one grain doesn't make a difference, then it seems that we will be forced to conclude that 1 grain does make a heap. But that means we can't talk about heaps of wheat at all: we don't know when we can describe a collection of grains of wheat as a heap and when we can't.

Example: At conception an embryo is not a person. At birth, a baby is a person. There is no non-arbitrary way of determining exactly when the embryo became a person. Therefore, there is no moral difference between the embryo and the baby at birth.

But we can tell the difference between people who are bald and not bald, between heaps and non-heaps, and between embryos and babies, even if we can't tell exactly when something stopped being one thing and became the other.

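As a small illustration of why the missing cut-off doesn't destroy the distinction, here is a toy sketch in Python. The threshold value is a deliberately arbitrary assumption (that arbitrariness is the point): wherever we draw the line, a single grain flips the verdict, which seems absurd, yet the clear cases remain clear.

    HEAP_THRESHOLD = 10_000   # deliberately arbitrary: any sharp line we pick feels wrong

    def is_heap(grains: int) -> bool:
        """A vague everyday term forced into an artificially precise definition."""
        return grains >= HEAP_THRESHOLD

    print(is_heap(10_000))                 # True
    print(is_heap(9_999))                  # False: one grain 'decides' heap-hood
    print(is_heap(1_000_000), is_heap(1))  # True False: the clear cases are still clear

The fallacy lies in sliding from "there is no precise, non-arbitrary threshold" to "there is no distinction at all".
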
4. False Dilemma or False Dichotomy

Description: Occurs when an argument presents two options and gives the impression that there are no other possible options: one of them must be true.

Examples: Either Shakespeare wrote all the plays attributed to him, or Bacon did. There's good reason to think Shakespeare didn't write all the plays attributed to him. Therefore Bacon wrote all the plays attributed to Shakespeare.

It's possible that Shakespeare didn't write all of the plays attributed to him, but that doesn't mean Bacon did: there are other possibilities. In the Shakespeare/Bacon case the false dilemma was explicit (either Shakespeare wrote all the plays... or Bacon did), but often the dilemma is implicit. If I spend all of the week partying, I won't have time to study and I'll fail. If I spend all week studying, I'll be over-prepared and stressed and I'll fail. So I'm going to fail either way. I might as well spend the week partying. Here the dilemma is unstated – "The only options are to spend all week studying or to spend all week partying" – and once stated it surely isn't plausible: the student could spend some of the week studying and some of the week partying.

5. Hasty Generalisations

Description: The arguer draws a general conclusion from a sample that is biased or too small.

Example: The oldest woman in the world, Jeanne Calment (122 years, 164 days), smoked until her early 110s. Therefore smoking isn't really bad for you.

Example: Andrew Wakefield claimed to have shown a correlation between the MMR vaccine, bowel disorders and autism, but – among other flaws – his research focused on children already thought to have the conditions he claimed were caused by the vaccine.

The claim that smoking carries significant health risks isn't falsified by a single case, and trials drawing population-wide conclusions must recruit representative study populations.

6. Faulty Analogies

Description: The conclusion of an argument depends upon a comparison between two (or more) things that are not actually similar in relevant respects, or is drawn without pointing out how the two differ and why it does or does not matter. (See our later discussion of reasoning by analogy.)

Example: I need a new car. My last three cars have all been reliable, and they were blue. So I'm going to buy a blue car. Colour is usually not a relevant aspect of car reliability.

Example: A letter to the editor following a report that someone had been turned away from an after-hours medical clinic because she couldn't pay for treatment for her feverish, vomiting child: "Why do people attend private clinics for medical treatment with insufficient funds to cover fees? Do these same people go to the petrol station, fill up, toss $5 out the window and say "I'll be back with the rest later," or perhaps after dining out one evening, pay for the meal and promise to return next week, month or year to pay for the wine? I think not. The answer is simple – don't go to a private clinic." There is a relevant dissimilarity between the need for urgent medical attention and the desire for fine dining.

Example: Carrots and people are both living organisms. You shouldn't eat humans. So you shouldn't eat carrots either. The relevant differences between carrots and humans include that we can't hear carrots scream; they don't have big eyes to look at us knowingly; we don't name and befriend them; and they cannot talk or plead their case. But these probably aren't the most relevant differences, morally speaking.

Carefully specify the similarities and dissimilarities between visits to an after-hours medical clinic with a sick child and visits to a fine restaurant. Can you tweak the analogy slightly to make it much stronger, or much weaker?

7. The Fallacy Fallacy!

Description: The fallacy of inferring that merely because an argument contains a fallacy, its conclusion must be false.

Example: Bob told me that I shouldn't steal because everyone knows that stealing is wrong, but I recognised immediately that argument contained an Appeal to Popularity fallacy, so I concluded that it was OK to steal the apple.

The conclusion of an argument may be true, even if the argument contains a fallacy. Finding a fallacy just means that the arguer needs to look for other, better reasons in support of their conclusion.

Dispositions: Epistemic Humility🎬

We all 'know' a lot of facts. Many of these are wrong. The less we know about a subject, the less information we have to help us determine whether we are wrong. And even if we know quite a bit about a subject, there's usually someone else who knows more. And they still could be wrong.

Assume you might be wrong. Especially when you are sure you are right.

Epistemic humility includes admitting that many of your strongly held beliefs are likely to be false.

And that they probably aren't the ones you think are suspect.

Epistemic humility is the disposition or attitude that you could always be wrong, and so are open to correction or further learning, even when you are confident of your beliefs.

Your blind spots often aren't in the places you are looking. Epistemic humility is a more subtle form of being open-minded, one that takes into account relative knowledge, and the possibility that you have poorly evaluated your own relative knowledge. Epistemic humility doesn't require that you listen to idiots, or give everyone the same amount of time and attention; but it does require you to allow for the possibility that it's you who is the idiot this time.

We can't reliably know about a topic just by skimming an article on Wikipedia or half-remembering some conversation from last year. We can't assume that because we are a third-year med student, or look older than the other person, or have an uncle who used to do something related, that we won't be spouting a pile of crap.

In fact, there is a disturbing correlation between a lack of epistemic humility and a lack of knowledge. There are several explanations for this. One is that people who don't question themselves tend not to correct themselves, so they stick with whatever they first learned, which is likely to be an incorrect, misunderstood, out-of-date oversimplification. Another is that it often takes a certain amount of knowledge in an area to realise that your 'common sense' gives the wrong answers, and that you don't know very much at all.

Another disturbing correlation is between the level of education in one area and confidence in completely unrelated areas. That is, the more you know about one topic, the more you assume you know about everything else. Put that way, it sounds pretty stupid. But it's very common.

The same doubt and scepticism should apply to pronouncements by other people, and to exactly the same extent. Yes, some people are experts on a subject, and their statements on that subject should carry more weight, but there are very few people who are experts on more than one subject. Expertise is really difficult to acquire, and takes a lot of time, and if someone has taken the time to become an expert on, say, evolutionary biology, they are extremely unlikely to have had the time, energy, and talent to devote to also becoming an expert on, say, creationist theology. Or vice versa. Also, there are different standards of expertise, such that by one standard you are an expert and by another you aren't. You never know whether someone actually knows a whole lot more than you in the relevant field, even if you think you are an expert, unless you listen and are open to changing your mind.

Several aspects of epistemic humility are emphasised in the current philosophical literature. They include:

Having proper beliefs about your own beliefs (higher-order beliefs).

Having an emotional insensitivity or indifference to social importance or status (i.e. loss of face).

Owning your intellectual and dispositional limitations, and so revising your beliefs or quarantining the bad effects of your beliefs, while regretting your current limitations and trying to reduce or overcome them.

Preferring slow, careful, systematic (System 2) thinking over the intuitive, heuristic-based, snap judgements of rapid (System 1) thinking.

Dispositions: Discretion🎬

There are times when we should think critically, and times when we should not. And there are times when we should argue, and times we should not.
Some of us are naturally argumentative, and some shy away from confrontation. Discretion is the disposition to argue only when it is appropriate, productive, and likely to help, not hurt, other people. Developing the disposition to stand your ground and argue, but only when it will be productive for all, is important for the effective critical thinker.

Know when not to use Critical Thinking

Here are some circumstances where we believe you shouldn't express the "criticism" part of your critical thinking skills:

When you are angry, frustrated, or upset. An angry thinker tends to have less awareness of their biases, and poorer control over their behaviour and dispositions, particularly when it comes to being constructive when they disagree. They also make far stronger claims than are justified, and forget the nuances and subtleties. Finally, angry people often want to cause pain, and will say things, justified or not, that they will regret later.

When the other person isn't interested in changing their mind. If you can't convince the other person, and there is no neutral audience to be swayed, you are wasting your time talking.

When you are not interested in changing your own mind. If you won't be convinced by the other person, you are wasting your time listening.

When feelings and relationships are more important than reasoning. Never argue with your grandmother; even if she's wrong, she won't change her mind, but she might change her will :). Never argue at a funeral – especially that there is no Heaven, or that the deceased will probably go to Hell; an argument for these claims isn't what people need to hear right then. Never publicly humiliate someone or cause them to lose face in front of people whose opinion they value. It's not (just) that this is rude – they are also likely to hold on to their positions more strongly, and soon neither of you will be reasoning well.

When you really want to win, rather than to discover the truth. If your real aim is to win, you should fight dirty – using fallacies, heuristics, etc., to convince the person – rather than building the best argument you can. While you might win, you will have no idea whether the best argument won. But being convinced of your argument, or of your righteousness, before you begin is no guarantee of your argument's strength; in fact it's a negative indicator. You can be passionate about a topic, and still be glad that the best argument won, whether it was yours or another's.

When you want to hurt the other party. It's easy to hurt people when you argue. This critical thinking course gives you a couple of extra tools that can cause damage, but the harshest strikes are often emotional. If you want to hurt someone, bringing critical thinking into the picture will usually just slow you down; it's ineffective. Also, as a side thought, if you want to hurt someone, maybe your goal might not be the best one?

When arguing with an idiot. Sometimes the other person is an idiot. Or just ignorant and naive, such as a five-year-old. But before you decide you are the expert in a situation, remember: "When you are arguing with an idiot, make sure the other person isn't doing the same thing".

When getting them to accept your conclusion may do more harm than good. Sometimes knowing the truth isn't useful.
If a friend is sick and treating themselves with fakery (e.g. taking massive doses of vitamin C, most of which the body simply excretes), but is getting better due to a placebo effect, maybe don't point out that the treatment doesn't work, unless you can suggest a more effective treatment that they are likely to take. If someone is on a ledge threatening to jump, don't argue that all life is pointless, or that most people have no impact on the world and so won't be missed. If someone is failing an easy paper, don't point out that it's because they are too dumb; give constructive advice about study habits and realistic goals instead. Truth can hurt, but when it actually harms, think twice, then shut your mouth.

When your social relationships are likely to be significantly harmed. Don't try to win every battle with your friends and relatives; this will mean you lose the war. No one likes someone who thinks they are right all the time, or argues all the time. And if you have this combative attitude, you are probably failing in several important critical thinking areas – starting with a lack of epistemic humility and a strong confirmation bias.

Fallacies: Miscellaneous

These are some of our favourite fallacies from the hundreds that have been studied. You can see our list of fallacy videos here. Here are some good sources, if you want to discover more fallacies:

An Illustrated Book of Bad Arguments

A Taxonomy of Fallacies

Wikipedia's List

1. Sunk Cost Fallacy

Description: Taking into account the amount of time, effort, or money already spent on a task or goal when deciding whether to continue with it. The relevant information is the amount of time, effort, or money required to complete it, and whether we still desire the outcome.

Example: I've spent 4 semesters already on my BA in Philosophy. Even though I have only passed three Philosophy papers, and I hate Philosophy, I'm going to complete my major – I've spent too much time and money already.

The time and money are already gone. And you don't want the degree, so stop. Although this is fallacious reasoning, there can be other reasons for indirectly taking into account the effort expended so far. For instance, you might be emotionally committed to a task once it is underway, or its accomplishment might now be part of your personal identity; or perhaps stopping carries a significant penalty (loss of face, prestige, or political power).

2. Special Pleading

Description: Claiming that the current case is an exception to a generally accepted rule, without providing relevant justification.

Example: "I know that the deadline was three days ago, but please, please, pleeease can you let me hand it in late". *flutters his eyelashes*

This is not a particularly interesting fallacy by itself, but it is one that students appear particularly prone to when asking for extensions, re-marks, or other special treatment. Sadly, introducing this fallacy early in the course did not decrease the number of students who asked for special treatment; it appears they thought they were exceptions to the Special Pleading Fallacy as well.

3. No True Scotsman

Description: Exceptions and counter-examples are excluded from a claim by continually shifting the relevant criteria until only the desired class can satisfy them.

Example: "That Englishman lied. No Scotsman would ever lie." "What about Judith Campbell?" "The Campbells don't count, they betrayed us all long ago." "And Angus McHaggis?" "He's a low-lander; he doesn't count." "What about Aileen McGraw?"
"She lied to the London police, that's not a lie for a Scot". "And Titch McGregor?" "He lied alright, so he's not a true Scotsman – no true Scot would tell a lie". It's tempting to have a universal rule, particularly when judging others against a standard. But if there are exceptions, the rule isn't universal. Sure, there might be reasons why each exception occurred, but rather than justifying each one, just switch to a non- deductive argument. It might be more plausible to claim that most Scots don't lie, while most English do (good luck with that one!). 4. Etymological Fallacy Description: The claim that the original meaning or history of a concept or word is required to understand its 'true meaning'. Example: Someone might say: "When you said 'I literally died laughing', you were wrong. You are still alive. 'Literally' means 'this actually happened with no metaphors'." But words change their meaning over time. Since at least the time of Charlotte Bronte, 'literally' has also meant 'figuratively'; there's nothing wrong with the original sentence 'I literally died laughing'. Similarly, 'cleave' used to mean to split apart, but now it also means 'to join together'. We know what it means via context. We don't need to be faithful to the initial meaning of a word to use it appropriately. For example, 'History' comes from the Greek historia, "to inquire", while 'Philosophy' comes from philosophía, "lover of wisdom". But we don't need to know either of those to understand how we use the words today. This is different from the Entomological Fallacy, which is where you confuse insects with arguments. Assumptions: Deep Disagreement🎬 Some arguments aren't worth having. If someone is massively misinformed, biased, or just not interested in alternative views, don't bother. In these cases there usually is a rational solution to the argument; just not a solution both sides are prepared to see. However, sometimes both sides might be reflective, patient, and intelligent, yet disagree about the quality of an argument. This indicates a more substantial barrier. If our judgement of what counts as evidence – or which evidence is significant – diverges, we might rationally disagree with each other. This rationally unresolvable, deep disagreement can occur when the disagreement is built on divergent worldviews Deep disagreement occurs when we lack the common ground of shared principles of reasoning to start constructing arguments that both parties can reasonably engage with. Deep disagreements can occur in many contexts, including the political, religious, philosophical, familial, moral, and scientific. But what makes them unresolvable, rather than just difficult or annoying? It's not like we think the truth about God's existence, the harms of child vaccinations, the existence of free will, or the rate of climate change are merely matters of taste; there are objective answers to all these questions. Nor is the question just "will this disagreement be resolved?". For example, all the evidence might point to my mother being a murderer, but my bias might stop me from seeing this. This disagreement is not 'deep'. There needs to be some reason that both sides are barred from understanding each other's arguments, given the evidence at hand. In the video, Tim talks about how Moskowitz and Last come from different worldviews. They have disparate pictures of how humans and nature relate together. But the disagreement can't be resolved simply by assessing which worldview is better. 
Because we all assess from within a worldview.

The basic idea of a deep disagreement is that while we can check the truth of a claim, and check whether our methods of checking are likely to work, two people can have acquired different methods of doing both. When this happens, the current beliefs of both parties can make it irrational for them to accept each other's reasoning. If my method of checking a claim, and my method of checking my method, diverge from yours, then there might be no way for either of us to reason about the other's position, no matter whether either of our claims is actually true.

Deep disagreements are different from disagreements due to stubbornness or bias. Someone who deeply disagrees with us can still engage with our views in ways that help us test and refine them, and engaging with someone we deeply disagree with can give us insight into how other people view the world. Additionally, we can only discover which disagreements are deep by investigating what reasons someone has for holding their view. Sometimes, there really is a resolution, even if it's hard to find. So don't be too quick to refuse discussion on the basis of divergent worldviews!

Psych: Intentionality Bias🎬

Everything that happens to us, happens for a reason. Or so we like to believe.

Let's put aside any theological implications. Perhaps everything we do is part of God's ineffable plan, or has some deep cosmic purpose. But we shouldn't presume we can reliably identify this purpose, or reason accurately about it. Nor can we reliably intuit what is the result of other human intentions, and what happens by chance. It's this very human type of intentionality, not divine guidance, that we are interested in here.

Intentionality bias is the bias towards believing that events happen for a discernible purpose, or with intent, rather than by accident.

Children attribute intentions and deliberate purpose to many things that in modern times most adults regard as non-intentional, or due to natural laws. But the movement of shadows is due to the rotation of the Earth relative to the Sun, not the talons of darkness reaching for them. The traffic accident that makes them late for their sports event is just an accident, not the universe being mad at them for not tidying their room. Adults (usually) forget events because they are busy and tired, not because they want to punish the child, and so on.

But adults are still susceptible to this bias. If someone's on their cellphone while driving, and cuts you off or forces you to swerve, it's usually not a personal affront – they just didn't see you there. It's as impersonal as another human's actions can get; they didn't register that you existed. But most people take it personally, attributing malignant intentions to the other driver. Or if someone seems to ignore you at a party, they probably just didn't see you, or failed to recognise you. That's why I don't often wave back if we meet on the street. *Sorry*.

When you are watching this video, you can see a story being told, and assign motivations and intentions to each of the characters in the story. Of course, you can also see that these are just geometric shapes, not people. But still, every time I watch the video I learn more about each of the characters.

This bias towards perceiving events as intended by someone (or something), rather than happening by chance or via intentions not directed at the observer, seems to be evolutionarily advantageous.
Understanding a rare, random event as intentional means you can take some actions to prevent it – and lo! it is averted. You gain a small benefit from feeling you have some control over your life, and thus self-determination. But if an intentional action is dismissed as chance, you won't predict that the event will recur, and you have less chance of either taking advantage of it, or avoiding it, next time. The tendency to perceive intentions also allows mutual activities with others by recognising our common intentions, and thus the potential for cooperation. Human civilisation, and even social behaviour among mammals, seems to depend on having some degree of intentionality bias.

Intentionality bias often occurs with, and reinforces, our egoistic narrative - the belief that "it's all about me". When nature acts directly to assist or thwart you, you must matter. And there's something or someone to blame for your misfortune. You become the protagonist, the hero, of your life story. We can see the psychological benefits of this, as opposed to thinking of ourselves as mere flotsam to whom things happen without rhyme or reason.

Most cultures have myths or religions about animate natural forces, such as the Sun or Storms. These give form and purpose, and a sense of understanding or predictability, to random natural forces. Assuming that these various anima and spirits don't exist, the myths are another instance of the Intentionality bias. Note that believing that God (or nature) has a plan for everyone and is actively willing events to bring about this plan isn't Intentionality bias. It's an intentionality belief that coheres with the rest of that religious worldview. Sceptics might think the worldview was formed partially from Intentionality bias, but that's just another form of the claim that if God didn't exist, we'd have to invent him.

Science & Intentionality

There's an interesting sort of tension between science and intentionality. Most of the time, we regard teleology (the attribution of 'purpose') as inappropriate in scientific theories. But that doesn't stop us from using metaphorical and often teleological explanations. We'll talk about objects 'taking the path of least resistance' as though they were making a choice, or say that evolution 'designed' or 'adapted' humans in certain ways, even though evolution is in direct opposition to intelligent design.

Sometimes, this linguistic intentionality bias has harmful effects. Consider, for instance, the case of the science of the egg and the sperm. Emily Martin (1991) highlights asymmetries in how scientific explanations attribute agency and intentionality to males and females. For example, females shed eggs, while males produce sperm. Menstruation is construed as a passive 'chaotic disintegration of form', in which the female body is ceasing, or expelling. Fertilisation is described in terms of the sperm carrying out a perilous journey and assaulting the egg. Although scientists don't believe that gametes have intentions, the commonly used metaphors of intentionality ascribe gendered dynamics to these entities. This language choice reveals certain unjustified presuppositions in the research. For instance, it had been assumed that the sperm actively interacts with a passive, receiving egg (which, once someone thought to look, turns out to be false). It may also have harmful consequences for how the scope of the theory is interpreted - is the behaviour of egg and sperm indicative of innate, gendered natures?
Does the 'humanisation' of gametes have implications for policy, such as the permissibility of abortion? Having intentions is one of the criteria for consciousness, self-awareness, and moral status. Perhaps it isn't always possible to give simple explanations of phenomena without appeal to some kind of intentionality. But it might be worth considering what sorts of factors influence us to choose those representations.

A downside of Empathy

Having higher cognitive empathy, the ability to predict how someone is feeling, leads to a greater Intentionality bias. It seems that the better we are at reading emotions and intentions from behaviour, the more we do this, whether the "agent" is a storm, a tree branch, a cat, or another human being.

Empathy is not just the ability to "predict how someone is feeling"; it is a cognitive ability to recognise and replicate another's feelings within oneself. It does, to a certain extent, entail a process of prediction or assumption, but what makes these predictions strong enough to act on (by way of an intentionality bias) is the very embodiment of the feelings we see in others. Empathy's mirroring of feelings means its possessor becomes accustomed to absorbing many emotions that are not their own. Thus, the prolonged practice of empathy leads one not simply to attribute intention to other people and events, but to actively search for emotional intent. This hunt for intent is necessary because, when synthesising life through what others feel, one becomes dependent on finding meaning in one's life through external perspectives. Once an empath has found an emotional explanation, regardless of its reliability, they feel they can respond accordingly. In this way, empaths unknowingly rely on intentionality bias to have the confidence to respond to events and arguments.

The opposite form of this bias is the disinclination to attribute intention to other human beings: the failure to develop or apply a Theory of Mind. Some people – including many on the autism spectrum – have this opposite bias, which often improves their objective analysis by avoiding attributing purpose when there is none. The downside is the failure to attribute intentions to other humans when we should.

When we are evaluating explanations we need to be aware that we tend to over-emphasise intentional explanations, particularly hostile intentions. In extreme cases, this can contribute to feelings of persecution, social isolation or alienation, conspiracy thinking, or paranoia. Instead of being caught up by this bias, dance like no one is watching. Accept that accidents happen. And if you can't bring yourself to accept that something bad was accidental, forgive the person or let it go, for your sake (not theirs). This will avoid much of the damage that Intentionality bias is doing to your thinking.

Assumptions: Fact & Opinion🎬

We often classify statements into "fact" and "just an opinion". However, there is a lot of middle ground, and most interesting judgements fall somewhere in between these two poles. A reasonable judgement is the result of a cogent argument based on publicly accessible premises and the best available information. It may not be true, but it is not mere opinion.

Fact

Twice two will always be four, but the current prime minister of New Zealand might not always be female. With respect to reasoning, not all facts are equal (实事求是 – seek truth from facts).
"Truth" connotes "unchanging", but we want our reasoning to capture the dynamics of investigations, plus the simple observation that with the introduction of new evidence we often change our minds. In short, a system open to change is best built on a vocabulary that doesn't suggest answers that transcend time and context. So we will not limit ourselves to absolute assessments, such as "true" or "fact". By also including terms like "reasonable", "plausible", and "justified" we allow for shades of better, not-so-good, unlikely, and so forth. Opinions Opinions are distinguished by an absence of publicly accessible and assessable evidence. Without shared evidence, there can be no reasoning. Mere opinions are not supported by argument. Of course, should you want to convert an opinion to an argument, you can add some evidence, or some explanation (in Module 8, we will talk about how explanations can feature as part of an argument. But beware: many opinions do not survive the scrutiny of a well-structured argumentFor example, if I say "I really like your shoes", that is my opinion; I have offered nothing more than a preference. I have given you no reason to think anything about those shoes, except that I like them. You might be tempted to think I could add: "I really like your shoes because I like red shoes", and then I would be giving reasons to think something. Notice here we are still missing publicly accessible evidence, and that's what we are looking for in arguments. After all, others may need to check our evidence before they are convinced. In contrast, if I say "Those are the wrong shoes for walking around the zoo", I can show you evidence of what happens to (other) people's feet after walking all day in shoes like that. This evidence might give you reason to think that boots-made-for-walking would be superior to furry slippers on a full day out. Reasonable Judgements This distinction between "reasonable" and "true" can make this part of the course seem very subjective. However, our judgements of informal arguments are not capricious. There are standards against which we can judge arguments, and by and large, this process resembles judgements we make in our ordinary lives quite effortlessly. Surprisingly often, we can't get access to the truth, or if we do, we have no way to determine that it is the truth. This most clearly happens when we only have partial evidence. In these circumstances we concern ourselves not with truth but with reasonable judgements and best available information. When the reasons given in an inference are good, and better than those for alternatives, it is reasonable to accept a conclusion as the best currently available or the most reliable given the current evidence. The best conclusion from current evidence is not necessarily the truth, though sometimes it might be so good that we say it describes what trulyhappened. This is a colloquial use of "truth", different from the kinds of assessments we make about present facts. Our use of conclusions should be modest. We need to allow for the possibility that another conclusion might eventually supplant our preferred conclusion, should new information be found, or should circumstances change. This opens up the possibilities of both exercising judgements and adjusting beliefs as further evidence arises. In the arguments we've already looked at, a conclusion follows from a set of premises. 
We can see that the premises and the conclusion are related in the right sort of way, so that the relationships between the premises (assuming the premises are true) license us to assert that the conclusion is probably true. On the other hand, when we release ourselves from the burden of truth, we allow ourselves to assess nuances and alternatives. Our task becomes to clearly present good reasoning that licenses us to assert that one of the many answers to an investigative question is the best given what we currently know.

Dispositions: Bullshit Sensitivity🎬

'Bullshit', as well as being a slang term, is also a philosophical one. In this course, we follow Harry Frankfurt, and distinguish it from lying or deceit:

It is impossible for someone to lie unless they think they know the truth. Producing bullshit requires no such conviction. Frankfurt (2005)

Bullshitting is an indifference to whether statements are true or false.

Our concern as Critical Thinkers is to be able to identify and manage our own tendency towards bullshit. Developing this attitude will also help you to spot bullshit from others, but it is important to avoid adding to the piles of bullshit out there, as well as avoiding stepping in someone else's steaming pile. Detect and reduce your own bullshit: your use of unreliable statements.

Here's a quote from pp. 29-31 of Frankfurt, which describes both the beauty and the danger of bullshitting:

The liar is inescapably concerned with truth-values. In order to invent a lie at all, he must think he knows what is true. And in order to invent an effective lie, he must design his falsehood under the guidance of the truth. On the other hand, a person who undertakes to bullshit his way through has much more freedom. His focus is panoramic rather than particular. He does not limit himself to inserting a particular falsehood at a specific point, and thus he is not constrained by the truths surrounding that point or intersecting it. He is prepared, so far as required, to fake the context as well. This freedom from the constraints to which the liar must submit does not necessarily mean, of course, that his task is easier than that of the liar. But the mode of creativity upon which it relies is less analytical and less deliberative than that which is mobilised in lying. It is more expansive and independent, with more spacious opportunities for improvisation, colour, and imaginative play. This is less a matter of craft than of art. Hence the familiar notion of the "bullshit artist".

Here are five ways that, with the best will in the world, we produce bullshit:

We believe overheard, 'found', or inherited views, and those based on conventional wisdom, more than those acquired through investigation, when we should believe them less. In our arguments, this is when we fail to identify that our statements are not obvious and may require referencing.

We fail to find sources to support dubious statements. In our arguments, this takes the form of failing to reference premises that we know our audience is likely to doubt.

We fail to distinguish between reliable and non-reliable sources. In our arguments, this takes the form of referencing media sources, blogs, Wikipedia, etc. (Note: I do reference Wikipedia articles several times in the course material, but always to give you a gentle overview of a historical topic, and where I have already checked the source for reliability against higher-quality sources.)
We rely on 'experts' who are spouting off outside their areas of true expertise - which are almost always much narrower than you might initially expect. Or we rely on other people distorting the expert views via reporting, popularisation, or simple word of mouth, even with the best of intentions. In our arguments, this is citing secondary sources, or people who are not authorities on the topic.

We favour the salacious, exciting, scandalous, or interesting over the mundane. In our arguments, this can be cherry-picking the most extreme or misleading information or statistics, rather than representing the content as a balanced whole.

Bullshitting, or making unfounded claims, is a form of bluff. As critical thinkers, it's our role to call that bluff and seek appropriate evidence. Demanding the evidence from the bullshitter is inherently confrontational; this is an archetypical response from someone who has yet to master the disposition of Respecting people. This is why we ask you to start with detecting and reducing your own bullshit. Once you've developed this mind-set, you will easily be able to spot others engaged in this behaviour, and yet have less overwhelming need to call them out on it.

Bullshit Sensitivity is a disposition to detect and reduce your own bullshit, or reliance on unreliable statements.

The desire to identify and reject bullshit in ourselves and others is predicated on the desire for truth. This is a related disposition. Not only must we value truth, but we must be motivated to seek out truth, even at considerable effort and discomfort. Critical thinking, as you have discovered, is not an easy or natural process, and it requires continual striving. To seek truth, and to be willing to expunge bullshit, we need the enthusiasm to face challenges to truth head on, to gain the unvarnished and unadulterated truth, even the hard truths that no one likes to hear. To do this, we need to remain impartial, rather than focused on the desirability of outcomes, and diligently objective throughout the process. This may not be possible in some contexts, or on some topics, but it is an aspirational goal - one worth striving for, even though we will fall short, because it will reduce our epistemic harms: the amount of bullshit and misinformation that we come to believe. Without this need to act, to seek, to suffer for truth, there is no need to be sensitive to bullshit.

There's also a practical aspect to truth-seeking. A Swedish study of nurses showed that newly graduated nurses who scored lowest on a truth-seeking scale also lacked adaptability to new and changing medical practices, independent of their test scores or other clinical ratings. They also showed a steady, though minor, relative drop in performance over their initial year of work compared with more truth-seeking practitioners. If these results generalise, then truth-seeking is a useful disposition for anyone who expects their skills, circumstances, or employment to change over their career - that is, pretty much everyone.
