Life Tool Post Remainders - Critical Thinking

Summary

This document delves into critical thinking concepts, exploring epistemic courage, biases in decision-making, moral reasoning, and fallacies. It covers topics such as pattern seeking, the nature of arguments, and recognizing cognitive biases. The text also highlights the importance of dispositions like tolerance and helpfulness.

Full Transcript


Dispositions: Epistemic Courage🎬

Courageous thinkers take risks with ideas and reasoning, fail, and admit their failure so they can learn. Repeatedly, publicly, failing to reason well takes courage. Or recklessness and blind narcissistic ego. The most courageous act may be to express your own thoughts in public, as you may be scorned or threatened. Epistemic courage is persisting in an inquiry, or reasoning process, despite (perceived) risk of harm or adversity. Reminder: "Epistemic" means to do with beliefs and knowledge, rather than physical action.

Epistemic Courage is the disposition to take intellectual and emotional risks and leave yourself vulnerable to discomfort, criticism, or scorn in order to improve your knowledge.

There are many aspects of epistemic courage. All of them seek a middle ground between not risking failure and humiliation, and being rash or indifferent to consequences. In this course, we ask you to post to discussions every week, under your own name, sometimes in a language you don't know very well, to be publicly critiqued by tutors and fellow students. To do so, when you are scared and uncertain, takes epistemic courage. Well done.

Thinking while Threatened

We should react to intellectual challenges and threatening ideas by not rashly rejecting (or accepting) them, but also not being too fearful to examine them, their justifications, their history and context, and their consequences. This is not to say that we should accept or tolerate people who threaten us. But if we encounter an idea that challenges or even repulses us, we shouldn't immediately reject it without further thought (at least the first time we meet it). We should be cautious to a degree, so that we don't expose too much of our core identity to change or examination too often. And we should be much more cautious in some environments. Some places and people are safer to think around, and some are less so. But we owe it to ourselves as critical thinkers to – at a minimum – wrap up any new idea and pocket it, with the intention of considering it later in a relaxed, less hostile environment. Another aspect of courage under threat is to publicly explore a novel, confronting idea, inquiring without interrogating, and engaging with the idea without accepting or scorning it. The more emotionally or foundationally challenging or alien the idea, the harder this is to do. If you like the intellectual challenge of exploring ideas, but aren't moved or emotionally confronted by them, you may be technically able, but you aren't (needing to be) courageous.

Public Dissent

Epistemic Courage can also be a willingness to state a dissenting view when you are likely to be criticised for it, or risk a degree of social opprobrium or isolation. Of course, risking physical harm, imprisonment, or death for your opinion requires a kind of physical rather than epistemic courage (or probably foolhardiness, except perhaps in special circumstances), although some civil rights protests require both sorts of courage. And stating a dissenting view that's obnoxious, hateful, or intolerant isn't courageous – it's spiteful. A similar situation occurs when you have access to some important, restricted knowledge, and there is social or institutional pressure not to share it. Being prepared to speak truth to power, to be a whistle-blower, to transmit information rather than merely seek it for your own sake, and having the prudence to know when and how to do so, is a socially important form of epistemic courage.

Autonomy

An oft-overlooked aspect of epistemic courage is its interaction with the moral virtue of autonomy (self-determination). Blindly following someone else's teachings is not intellectually courageous, even if sharing their ideas under threat may be physically courageous. Whether you follow the Third Reich's blood-soaked vision or Te Whiti o Rongomai's principles of non-violence, parroting ideas isn't being intellectually courageous. Thinking for yourself is.

Inquiring into the Forbidden

A final kind of epistemic courage is investigating the proscribed or taboo, asking questions that shall not be asked, looking into unspeakable tragedies, the unmentionable, and the shameful – with the intention of understanding and helping. Issues such as child abuse, domestic violence, and elder abuse are often not discussed because they are too shameful – and so the harm continues.

Epistemic Recklessness & Cowardice

If you have no fear, you aren't being courageous – there is either very low risk, so you are merely being competent and prudent, or there is a high risk, and you are being reckless or rash. If your fear paralyses you, when most people would act, you may be disposed towards cowardice.

Psych: Pattern Seeking

Francis Bacon observed that the human understanding is prone to "suppose the existence of more order and regularity in the world than it finds". This element of our psychology is both essential for science, and highly dangerous if not tempered and refined.

We are adept at telling stories about patterns, whether they exist or not.

Patterns are the result of finding the same, or similar, explanations for several events in similar circumstances; they are used to make predictions. In science, the patterns are often described as hypotheses or even laws, and the predictions are how we test these hypotheses. To come up with patterns, we need to find an explanation that fits the evidence (data) across a number of different experiments; indeed, across almost all experiments of the same class. And that means we need to create a natural grouping of phenomena to analyse. It turns out that humans are very good at finding similarities and grouping things, but not necessarily in the same way that nature seems to. Or, as Michael Shermer says, "Humans are pattern-seeking, story-telling animals, and we are quite adept at telling stories about patterns, whether they exist or not".

Pattern-seeking is a bias to find human-intelligible order and regularity in the world.

Consider Aristotle, one of the most successful pattern seekers we know (he is the father of western biology and logic, and step-father of physics). He thought that the correct similarity grouping for analysing patterns of falling objects was objects of the same weight. That is, he claimed that objects fall at a speed relative to their weight. Certainly a feather falls slower than a hammer. QED, right? But 2000 years later, Galileo thought otherwise. He conducted a famous thought experiment showing that falling speed couldn't be proportional to weight, and decided that in ideal circumstances, all objects fall at the same rate. In fact, gravity (the downwards pull) accelerates all objects (including feathers, bricks, air, and even light) equally, but air resistance (the other significant force on most falling objects in Earth's atmosphere) is affected by the object's shape and size, and the relative size of these two forces is thus affected by the object's (weight ÷ size), or density.
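To make the comparison of the two forces concrete, here is a minimal numerical sketch. It is my own illustration, not part of the course material, and the masses and drag figures for the "hammer" and "feather" are invented; it simply shows that with air resistance the dense object lands much sooner, while with drag switched off both take the same time.

```python
# A minimal sketch (not from the course) of the two forces discussed above,
# using rough, made-up values and simple quadratic drag: F = 0.5*rho*Cd*A*v^2.

G = 9.81        # gravitational acceleration, m/s^2 (same for every object)
RHO_AIR = 1.2   # density of air, kg/m^3

def fall_time(mass, drag_area, height=10.0, dt=0.001, drag_on=True):
    """Time (s) to fall `height` metres, by simple Euler integration."""
    v = dropped = t = 0.0
    while dropped < height:
        drag = 0.5 * RHO_AIR * drag_area * v * v if drag_on else 0.0
        a = G - drag / mass          # net downward acceleration
        v += a * dt
        dropped += v * dt
        t += dt
    return t

# Invented but plausible numbers; Cd*A is folded into one "drag_area" figure.
hammer  = dict(mass=1.0,   drag_area=0.01)   # dense: weight dominates drag
feather = dict(mass=0.005, drag_area=0.02)   # light and fluffy: drag dominates

print("With air resistance:")
print("  hammer :", round(fall_time(**hammer), 2), "s")
print("  feather:", round(fall_time(**feather), 2), "s")
print("In a vacuum (drag off), both take the same time:")
print("  hammer :", round(fall_time(**hammer, drag_on=False), 2), "s")
print("  feather:", round(fall_time(**feather, drag_on=False), 2), "s")
```

Note that doubling the feather's weight while keeping its shape barely changes the outcome; what matters is the balance between weight and drag, i.e. density and shape.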
So, a better similarity grouping for determining the correct pattern about objects falling on Earth is objects of the same shape and density. After Galileo discovered this new pattern, Newton broadened the pattern to include planets falling around the sun, and then Einstein tweaked the pattern for very rapidly moving objects. Having the right similarity grouping is like asking the right question, or collecting the right type of data. We can often find a pattern to explain particular data. But if we can't find the correct pattern, we usually can still find some pattern. The skill to carve nature at the joints, so that it can be dissected cleanly and accurately, is something we've valued since at least Plato discussed it in the Phaedrus. But we still have trouble teaching or even describing it.

So now we know that pattern seeking is important, and that it can be hard to know when we've found the real pattern. But why might pattern seeking be dangerous when used carelessly? Here are two reasons:

First, a strong attraction to complete patterns and dislike of deviance or flaws in patterns is strongly linked with prejudice against minority groups, particularly those easily labelled 'deviant' rather than 'special'. Both types of variation from the norm are described using similar loaded terms, e.g. 'weird', 'flawed', 'mistake', 'deviant', and 'fixable'. Unsurprisingly, this is also highly correlated with a conservative mindset (here, 'conservative' is a psychological, not political, term indicating a preference for constancy, similarity, and predictability; there is weak correlation between psychological and political conservatism).

Second, psychologists have shown that all humans have some degree of Apophenia:

Apophenia is the tendency to perceive meaningful patterns within random data.

We tend to see patterns in random data as well as where there really are patterns. And the patterns we tend to see are based on our existing beliefs and desires. For instance, seeing a portrait of the standard western representation of Jesus in toast is quite common. It's common enough that there is a raft of psychology papers on this phenomenon (e.g. Seeing Jesus in Toast, or Seeing Jesus in Toast is Normal). This is not to say that no images of Jesus on toast could be genuine, but the vast majority are certainly examples of pareidolia. A similar experience occurs when we look at clouds and see a rabbit, or sheep, or other shape. We know the cloud isn't really the same shape as an animal, but it is easy to see some likeness.

So, we have a psychological bias towards seeing patterns; this helps us to detect them more rapidly when they are there, but also means that we get many "false positives"; that is, we also detect patterns that don't exist. And we also judge people and events that fail to conform to our chosen patterns. It is still an open question in philosophy, psychology, and the foundations of science whether any of the regularities that we see and agree on about the outside world are actually part of objective reality, or whether they are simply imposed on our perceptions by our pattern-seeking minds. After all, how could we tell, in any particular instance? Perhaps we should think of Science as the study of the patterns we perceive, not the underlying truths of the universe.
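One way to feel the pull of apophenia is to generate genuinely random data and see how often it contains runs that look meaningful. The sketch below is my own illustration (the streak length and sequence length are arbitrary choices, not from the course): it estimates how often a purely random sequence of 100 fifty-fifty "shots" contains at least six hits in a row.

```python
# A small simulation of how easily "patterns" appear in purely random data.
# Each trial is 100 independent 50:50 shots; we count how many trials contain
# a run of at least six consecutive hits -- a run that feels like a "hot streak".

import random

def longest_run(seq):
    """Length of the longest run of consecutive True values in seq."""
    best = current = 0
    for hit in seq:
        current = current + 1 if hit else 0
        best = max(best, current)
    return best

random.seed(1)
trials = 10_000
with_streak = sum(
    longest_run(random.random() < 0.5 for _ in range(100)) >= 6
    for _ in range(trials)
)
print(f"{100 * with_streak / trials:.0f}% of purely random sequences "
      "contain a 'hot streak' of 6 or more hits in a row")
```

Roughly half of such sequences do, which is far more often than most people guess, and is one reason apparent streaks and clusters in random data feel so meaningful.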

Moral Arguments

Some people think that once an argument involves some reference to a moral judgement or principle, we must stop our critical thinking argument analysis work – because it is either too subjective or uncertain. This is a mistake. Most of critical thinking is about examining reasoning (our own and others') in a careful and systematic manner, and we can't simply stop doing that when some reference to morality pops up. I will illustrate how we can approach moral content in a careful and systematic way, using part of a real argument:

The government just has no right restricting the relatively harmless pleasures of consenting adults. Even if marijuana is harmful – and that is by no means clear – it is the right of every individual to decide whether to take it. Smoking weed is a "victimless crime" where only the user is taking any risk. It is immoral to tell people how they can, or cannot enjoy themselves.

Charity

We first identify the moral content. This paragraph seems to refer to moral values or principles, but it is unclear what principle. The last sentence ("It is immoral to tell people how they can, or cannot enjoy themselves") looks like it might be a moral principle – but it is not likely that the author intends for it to be. Why? Because it would be a (stupidly) bad moral principle. Why would it be stupidly bad? Because it would mean that if someone enjoyed murdering people or setting fire to kittens, it would be immoral for anyone to tell them that they shouldn't do that. And so it would not be a reasonable – or charitable – interpretation of the argument.

So, some interpretation work is needed to identify the moral principle of the paragraph. By combining the first and last sentence, I formulate a rough draft of the moral principle that the passage appeals to. It would be something like this:

(*) It is wrong for governments/anyone to hinder/restrict consenting adults from doing something that isn't harmful to (themselves or) other people.

Notice that I have changed "enjoying themselves" to the vaguer "doing something". This is because the argument mentions people choosing (deciding) to do something, and I interpret this choice as the morally important thing that the author is gesturing towards. I could have interpreted it as meaning that it's only wrong to hinder people from doing things they enjoy, and that if it's something they feel neutral about (e.g., going to work) it would be completely morally okay to hinder them. But that would not be charitable.

There are at least 4 ways to interpret this moral principle (*):

1. It is wrong for anyone to restrict consenting adults from doing something that isn't harmful to other people.
2. It is wrong for governments to restrict consenting adults from doing something that isn't harmful to other people.
3. It is wrong for anyone to restrict consenting adults from doing something that isn't harmful to themselves or other people.
4. It is wrong for governments to restrict consenting adults from doing something that isn't harmful to themselves or other people.

#1 is the strongest, most expansive moral principle here because it identifies more stuff as wrong/immoral than the others. A possible exception to both #1 and #2 would be things like suicide prevention. If we think that it is sometimes acceptable to restrict people from harming themselves, then we'd probably go with #3 or #4.
And a possible exception to #3 could be to say that sometimes it is okay for a private person to restrict what others can do when they are in that person's house (e.g. "no eating chocolate under my roof"). So #4 is probably the best interpretation of those alternatives. There is no guarantee that this is really the principle the author intended, but it is a reasonable and charitable interpretation. People (even moral philosophers) often use references to moral principles in vague ways without knowing exactly what they mean, so we need to be patient and cooperative when trying to identify what the argument really means.

Evaluation of Moral Principles

Now, assuming that #4 is the moral principle that the author has in mind in the original text, how do we evaluate whether it is acceptable? (Truth is a much harder standard, so it's lucky we only need premises to be acceptable!) We don't just say "apparently this is the moral principle they are appealing to, and there's nothing we can say about that because it's all subjective." Instead we test whether it is acceptable by asking two questions:

1. Is this moral principle consistent with other moral principles that the author/arguer appeals to elsewhere? Is the author/arguer saying something that contradicts this principle elsewhere?
2. What are the consequences of this principle? Are the consequences acceptable to the author/arguer and/or to other people?

For example, a consequence of the principle in #4 might be that the government should not stop people from walking around naked, or having sex, in public (this depends on what we interpret 'harm' to mean, but let's say discomfort/shock/embarrassment don't qualify, and that's all the public would feel). Does the arguer find that to be an acceptable implication of the principle? If not, they may have to change the principle – if they want to be consistent. They may not care about being consistent of course; but if they don't, we need not consider their argument.

Suppose the author is happy to accept the consequences of the moral principle, and that it is consistent with their other views. At that point we can either accept or reject the principle the author is asserting based on our own beliefs, or we can try to have a new discussion about the principle itself and why it should/shouldn't be accepted.

If you want to be able to have good arguments that involve some moral content, you will have to be prepared to look at the moral principles or values you are referring to and consider what consequences they have, and whether they clash with other moral principles or values you assert elsewhere. Quite often you'll find that there are inconsistencies in your position, and then you will have to go back and refine the moral principles you are using by making them more precise, or by changing some of them to be consistent with other, more important principles. Once you have a clearly formulated moral principle or value, and you understand its consequences, some people may not agree with it, or your argument might be weak. But you can't know that until you have done the work to figure out what principles or values you actually hold.

Assumptions: Conspiracy Thinking🎬

Conspiracies happen. Sometimes governments and other powerful shady forces conspire against each other, and against us. So conspiracy theories are a viable rival explanation for many types of events. But they are relatively rarely the best explanation.

Conspiracy Thinking is a set of assumptions that bias people towards conspiracies as explanations for a range of slightly mysterious, or even straightforward, events.

There is a lot in common between the conspiracy, sceptic, and critical thinking communities. We all like to challenge conventional wisdom, take critical attitudes towards media reports, and question the basis behind default opinions. We all loudly proclaim "think for yourselves". We also tend to pick up collections of 'factoids' and stories of our techniques being successful. That is, we all think that we've found a way of analysing the world that tends to provide insights, and is a somewhat reliable guide to truth.

Here's one key difference: A well-woven conspiracy theory appears to be supported by all the premises of an inference to the best explanation. Evidence that appears to support a different explanation will also support the conspiracy. Evidence that weakens support for another explanation strengthens the conspiracy theory. And evidence against the conspiracy is actually evidence of a cover-up by the conspirators. Conspiracy theories often explain a much more diverse and wide-ranging set of evidence than their rival explanations. There is also some evidence that will support the conspiracy theory, and no other explanations (such as "conspiracies happen"). Therefore a conspiracy is the best explanation. For almost everything.

So what's gone wrong? Maybe nothing. It's possible that almost everything, from chemtrails, to the fake moon landing, to the JFK assassination, to the UFO cover-up in Roswell, to the lizard people running the secret world government for the Illuminati, is true. But it's not very likely. There are three primary reasons for this:

1. Most conspiracies have an easy answer for either "Why do they conspire?" or "How did they succeed?", but very rarely both. Without both a good explanation of why they would bother conspiring, and a good plan for how they could succeed, the conspiracy theory is simply not a good rival theory to consider. Try to provide reasonable answers to both these questions for any conspiracy theory you favour.

2. The standards for explanations include being consistent – fitting all the (indispensable) evidence; uncomplicated – lacking many unlikely details; and competitive – addressing the same set of data as their rivals. Conspiracy thinkers tend to instead concentrate on being comprehensive – explaining vast swathes of seemingly unrelated data by a single, complex explanation. They also tend to claim some evidence is indispensable which those investigating other explanations simply dismiss, and dismiss other evidence as "plants". This means that their theories are not competitive – they aren't trying to explain the same evidence. The theories also tend to lack internal coherence. They don't make sense even on their own terms. Again, most conspiracy theories are not up to our standards, and so not a good rival theory to consider.

3. The standards required of the conspirators' actions to succeed are often too high. A common and troubling theme of conspiracy theories, and one that ties into the common feelings of alienation and lack of control by conspiracy thinkers, is the assumption of phenomenal competence, discipline, and technological expertise by the conspirators. Three examples:
i) The alien spaceships in the North Head tunnels in Devonport, buried there in WWII, can't be spotted with scans because of their alien technology.
ii) The mind control drugs in the water can't be detected because the government controls the university, and so we only get taught the "false" chemistry that can't detect it.
iii) The New World Order runs a parallel secret global government, with no leaks, no signs of infrastructure, and no disgruntled or drunk or loose-lipped employees (although most large organisations can't even manage decent parking).

In general, the larger scale conspiracies often require many, many people to be uncommonly competent, reliable, close-mouthed, and cooperative, often including unconnected scientists and bureaucrats from dozens of countries. This is unlikely; consider instead the aphorism: "three people can keep a secret if two of them are dead". These three reasons follow directly from our understanding of good hypotheses (IBEs) and good actions.

Psych: Anchoring and Adjustment🎬

We have already seen, in our discussion of the Framing Effect, that the way a problem or question is framed can have a dramatic impact on the answer that someone arrives at. We will now consider how exposure to information, even if that information is entirely arbitrary, can heavily influence a person's decision-making. This occurs through what is formally known as the Anchoring and Adjustment heuristic.

Anchoring & Adjustment is the psychological bias to fixate on any arbitrary but specific number as an anchor, and vary your actions based on that anchor.

This heuristic, discovered by the psychologists Amos Tversky and Daniel Kahneman, posits that in many scenarios, such as when estimating a number, people tend to start from an arbitrary but readily available value known as an 'anchor', and then adjust towards their final answer from that starting point. As we will see, an anchor can be anything from the sticker price on a car on sale in a yard, to the opening figure put forward during a salary negotiation. As a result of this heuristic, the final answers arrived at by individuals are more often than not heavily influenced or 'contaminated' by the anchor they started with. The Anchoring & Adjustment heuristic has been tested extensively. Experiments used to demonstrate the heuristic typically involve exposing people to an anchor, either explicitly or implicitly, and then having them answer some form of question.

Assumptions: Scientism and Reductionism🎬

Science has been very successful at describing and predicting a wide range of phenomena. Over the last 300 years, the list of fields that it is successful in has expanded from Astronomy, Optics and Mechanics to Physics, Chemistry, Biology, Geology, Psychology, and large parts of Medicine. Engineering has used the discoveries of science to design and construct many wonderful technologies and devices. Meanwhile, Theology, the Arts and Humanities, and other areas of study appear to have made little progress. Science is (usually) considered to be reliable, politically neutral, and factual. This has led to two harmful but common assumptions: Scientism (that all knowledge is science, and only science can provide knowledge), and Reductionism (that each scientific area can be reduced to a more fundamental science, so these 'fundamental' sciences are all that matter).

Scientism

It seems a sensible assumption – based on this historical trend – that the methodology of science should be extended to other areas, such as the social sciences, business, the arts, religion, etc.
As a consequence, any knowledge gained other than by scientific methods should be considered suspect, and little more than "folk theories" or mythology. This position is called Scientism (rhymes with 'racism').

Scientism is a set of erroneous notions including: that science identifies truth; that science is the only reliable way to find truth; that science will eventually replace or 'improve' most other subjects; that those subjects not included in science are worthless.

Scientism regards only empirical, scientific methodology as a reliable source of knowledge (with the exception of mathematics). Scientism blinds you to many ways of thinking and forms of knowledge.

"The health of science is in fact jeopardized by scientism, not promoted by it. At the very least, scientism provokes a defensive, immunological, aggressive response from other intellectual communities, in return for its own arrogance and intellectual bullying. It taints science itself by association." – Ian Hutchinson, physicist

Scientists are usually experts in one small corner of one narrow field, and are competent judges of information and methods appropriate to their field. But outside that field, scientists tend not to express authoritative views, because they can see the limitations of their approaches even within their fields; and also because they are aware of the effort required to gain the expertise in their small corner, and thus have some humility about the effort it might take to become competent in another field.

But a growing proportion of followers of science, and even a few practicing scientists, seem to believe that the poor track record of many non-scientific fields is purely a result of not applying scientific methods, and that eventually science should, and maybe will, cover all of human knowledge (with the usual caveat of mathematics – and sometimes religion to avoid controversy). Sure, perhaps. But do you have any empirical evidence for your extraordinary claims? Or a reasonable time-frame for their success? Have you considered the changes and failures of science when adapting to psychology or the social sciences? Both economic 'science' and political 'science' have been attempting to use scientific methods for at least a century, and they have not made much progress. In the meantime, we have other methods that are producing workable theories, with different strengths and weaknesses than Science.

One caveat: 'scientism' is also sometimes used by anti-science folks as a derogatory term for people using scientific methodology in an area of science that they reject, such as evolution, genetic modification, or climate change. If you are practicing science, or reporting scientific results, you aren't guilty of scientism.

Reductionism

One reason why some people think that science is powerful is that they see Psychology and Medicine as just specialist forms of Biology; Biology as just organic Chemistry; and Chemistry as just arranging atoms into molecules, which is Physics. Sufficiently advanced Physics will, under this story, eventually be able to 'solve' these other disciplines, with a level of rigour that the other 'sciences' can't hope to emulate. This view that some sciences can be reduced to a more fundamental science is Reductionism.

Reductionism is the erroneous notion that every complex phenomenon, especially in biology or psychology, can be completely explained by analysing its simplest, most basic physical mechanisms.

Reductionism is believing everything is a nail because you can hit it.
This is true in one sense – the objects and events studied in each science are made up of objects and events studied in other sciences (ontological reduction). And sometimes discoveries in one science inform discoveries in others (for instance, quantum mechanics helps explain how some birds can navigate vast distances). But that doesn't mean that we can always understand these new objects and events purely by understanding the smaller ones (epistemic reduction). For example, we can't understand what a cat is purely by understanding how quarks and leptons work. This is because new properties and objects can sometimes emerge through the complex interactions of simpler components. Emergence is a counter-example to Reductionism.

Dispositions: Inquisitiveness🎬

Inquisitiveness is the disposition to inquire. Not to ask a series of pointless or confrontational questions, or to show people up by asking tricky or point-scoring questions, but to genuinely inquire into areas with the goal of seeking truth. This disposition requires an enthusiasm for knowledge, and for explanations and connections, rather than just pub quiz trivia and factoids; to ask "why", "how", and "what-if" questions of yourself, of others, of the situations you find yourself in; and the skills to find answers, or to recognise when there aren't answers, and what significance that might have.

Inquisitiveness is the disposition to continually seek answers, and to be imaginative and curious in one's thinking.

Inquisitiveness encompasses several other useful dispositions, including being truth-seeking, persistent, imaginative, and curious. Another associated disposition is the desire to share your questions (and less importantly answers) with like-minded people, for the joy of shared inquiry rather than to impress, indoctrinate, or intimidate.

Being inquisitive at the wrong time, in the wrong way, or with the wrong people, can easily go awry. There are times to temper your enthusiasm for inquiry, such as at funerals and movie theatres. These are not good times to ask incisive, assumption-threatening questions. And in general, it's best to primarily direct these questions at yourself, or at experts who have time to talk about their subject. This helps reduce the accidental collateral damage that truth-seeking and assumption-challenging can cause.

For most people, it is easy to simply accept the surface description of a situation or problem, and work within the stated or implicit constraints. This ruffles few feathers, and promotes good teamwork. It respects existing decisions and authority. It leads to expected and predictable outcomes. It leads to stasis, and stultification. If we value challenge, innovation, creativity, progress, insight, and novelty, we should sometimes question the assumptions in our practice.

Of course, asking the wrong questions is as problematic as not asking any at all. Thomas Pynchon suggests: "If they can get you asking the wrong questions, they don't have to worry about the wrong answers". This misdirection is a standard skill for sales & marketing folk, illusionists, politicians, and hucksters and shysters of all kinds. And asking questions of the wrong people gives you misleading or uninformative answers, or can confuse and alienate them.

Two important aspects of Inquisitiveness are imagination and curiosity. Curiosity drives us to investigate both the strange and the mundane, and imagination helps us to come up with interesting alternatives to consider.
They combine to improve our capacity to question by offering positive alternatives. To oversimplify: If you aren't curious, you won't seek alternatives. If you aren't imaginative, you won't create a good range of alternatives. Curiosity and imagination are the drivers of all but the most incremental change and innovation. However, change isn't necessarily good, even though the potential for change, or at least re-assessment, is.

Similarly, having ample curiosity or imagination isn't necessarily always helpful for Critical Thinking – it requires some discipline to put them to good use. We probably all know people who have very active imaginations, so continually spout a range of good, bad, and irrelevant ideas, but lack the capacity (or interest) to refine them. Or people who are interested in everything, and so learn nothing. Or people who are curious about one or two obscure narrow topics, and so focus on details to the loss of the big picture. These examples (and more) are why I don't suggest that Imagination and Curiosity are important Critical Thinking dispositions in their own right – it's only when they are harnessed and applied that they become valuable, rather than simply enjoyable frivolities.

As with all our dispositions, Inquisitiveness can be taken too far, or used at inappropriate times and in inappropriate settings. We shouldn't question everything; at least not verbally. We should have doubts, and check suspicious claims, but only share the results when it's likely to be productive. But if we never question, we will never change, grow, or think.

Psych: Survivorship Bias🎬

If you ask a successful athlete, or entrepreneur, or artist what made them successful, they'll often say something like "Believe in yourself. Never give up. Sometimes things get hard. Just ignore the nay-sayers, and you will succeed". This is terrible advice. Well, it's not bad advice. If you give up, you will usually fail. But if you believe in yourself, and dedicate your life to being a success in a highly competitive environment, you will usually fail, too. One of the few good things that American Idol has produced is the many auditions of terrible singers who believe they are great, despite what we can all hear and what the judges tell them. They have bought into the idea that because the successful have self-belief, that's all they need. This dramatically demonstrates the horrible reasoning flaw which is survivorship bias.

Survivorship Bias is the bias of looking only at the 'survivors' to determine why they survived.

You might think that houses were better built in the old days. Look at any of the very old houses in Auckland. The craftsmanship and care in building is much higher than in most of today's houses. But don't forget that almost all the houses from that period have fallen down, rotted, been demolished, or rebuilt. Mostly, the very best built (and luckiest) houses survive. Judging all houses from 1920 by the standards of the surviving few is a good example of survivorship bias.

We see the same with great music. Almost all the music of today is forgettable trash. Think about the music you liked last year. How much of it will you still be listening to in 10 or 20 years' time? Not much. But the classic rock & easy listening stations are chock-full of unforgettable classics. We might think that music was much better in the 1960s, '70s, '80s, or whenever (your parents probably do). But the music that survived (a) is the best of the best from entire decades, and (b) helped to shape our standards of good music.
The best songs from the last ten years are as good as those from any decade. And time will filter out most of the dross.

Here's a different, and rather well-studied, case of survivorship bias. When Britain and America were carpet-bombing German civilians in World War Two, their long-range bombers were shot at a lot. The military analysed the bombers after they landed to see which parts needed better armour (they couldn't put armour plating on the whole plane without reducing their ability to level towns efficiently). Looking at the diagram of damage on the returned bombers, where would you put the armour plating? Where the holes aren't. The planes that returned to base could still fly with this damage. The planes that didn't return were presumably shot elsewhere. These are the places that need to be protected.

Perhaps you have a great idea, and you are thinking of dropping out of University to pursue your dream of becoming an entrepreneur. Fair enough. But saying that Bill Gates, or Oprah, or Owen G Glenn, or Mark Zuckerberg dropped out, and built their empires is missing a key fact. Millions of others dropped out too, and we don't know about them, because most failed terribly, many eventually made reasonable livings, some succeeded, while only a lucky (and talented, and hard-working, etc) few created empires and became known. Looking at the few who succeeded without noting how many – potentially as hard-working, inspired, and worthy – failed, is survivorship bias.

Another good example of survivorship bias is how people present themselves on social media. Instagram is full of people doing exciting things, looking glamorous, or having just won awards (and pictures of their food, for some unknown reason – we get it, you eat!). It's success, success, success. You need to ask yourself what they were up to on the days they didn't post.

A more controversial example is prayer. Many people recover from life-threatening illnesses after prayer. Unless you want to say that everyone else who prayed and died was not worthy in God's eyes, you might want to re-think attributing their recovery solely to prayer. (Note that we may still live according to God's ineffable plan, but it is irrational to only note the times that prayer worked, and not the many, very similar, times when it did not). A less confronting version of this bias is when athletes say that their faith in God helped them win. Generally the losing side also has devout players who put their trust in God, but we don't ask the losing side how their faith failed to help them during the game, or whether they doubted God, so lost.

Fallacies: Statistical and Causal

Statistical fallacies occur even among the highly mathematically literate. Our intuitions and heuristics for reasoning with statistics tend to be highly unreliable. Causal Fallacies are also commonplace, as we easily confuse the perception of patterns with there being an underlying reason why these (apparent) patterns occur.

A statistical fallacy is a fallacy based on using intuition rather than mathematical reasoning when dealing with repeated events. A causal fallacy is a fallacy based on the reasoning gap between correlation and causation.

1. Base Rate Fallacy
Description: Selecting the incorrect population to assess the probability of an event.
Example: Given in the Covid testing section of the Probability page.

2A. Gambler's Fallacy
Description: Taking past results of random events into account when predicting future events. In particular, assuming that recent events are less likely to reoccur.
Example: If you've just rolled a 6 four times in a row on a fair die, then the chances of rolling another 6 might feel much lower, as it's bound to even out soon (by "the law of averages"). However, it's still 1 chance in 6. Similarly, if you are on a losing streak at any game of chance, the odds of winning the next round are neither higher nor lower than they were at the beginning of the night.

The most famous example of this fallacy occurred in a Monte Carlo casino in 1913. A roulette wheel came up black (a roughly 50:50 chance) 26 times in a row. By about the 15th time, gamblers were flocking from the rest of the casino and betting large sums on red, knowing that "it had to come up now, after so long". But there was still the same chance of red occurring for each spin of the roulette wheel. The wheel can't "remember" previous spins.

If the random events are not independent, then this may not be fallacious reasoning. For instance, if you are betting whether the next card to be turned over from a pack of 52 cards is an Ace, and no Ace has turned up in the first 40 cards, then your odds of winning are now 4/12 or one-third, not the original one-thirteenth. However, if the revealed card were to be shuffled into the pack each time, the odds would stay at 1/13 no matter how many times or how recently an Ace had appeared. A real-world example is that many slot machines ('pokies') are programmed to increase the chance of large payouts the longer they have gone since the last big payout.

2B. Hot Hand Fallacy
This fallacy is the other side of the Gambler's fallacy. The most general form of this fallacy is that people on a winning streak think that they are more likely to win. It was initially described in terms of basketball, but applies to most forms of sports, gambling, stock market trading, career progression, etc. In basketball, shooters feel that they have streaks where almost every shot they take goes in. Their teammates then pass them the ball more, until their shooting streak ends. Statistical analysis shows that these streaks occur about as often, and last about as long, as would happen if shooting success was purely random, at least in the NBA. That is, there are very few actual hot shooting streaks in (NBA) basketball. The players reject this analysis out of hand.

3. The Correlation Fallacy
Description: Two events seem to always occur together, one preceding the other. Therefore, the first event causes the second. (Also known as the causation fallacy, or Post Hoc Ergo Propter Hoc).
Example: Whenever I see lightning, I will hear thunder shortly afterwards. So the lightning causes the thunder (this seems reasonable). Whenever I watch them play, the All Blacks lose unless I wear my lucky scarf (much less likely). Whenever I forget my umbrella, it rains (but it rains a lot). The more the US imports oil from Norway, the more drivers are killed by railway trains.
Humans naturally look for common causes for events, and when we find something, whether it is the actual cause, is somewhat related, or is just a coincidence, we usually believe that it is linked. This is also the basis of many superstitions. Causation is much more complex than continual correlation, and our observation of correlation is strongly affected by all sorts of heuristics and biases.

4. Single Cause Fallacy
Description: We assume there is a single cause for an event when it is actually caused by a complex combination of factors, each of which is insufficient by itself.
Example: Most of our more realistic example IBEs commit the Single Cause fallacy. In practice, many of them have multiple partial, overlapping explanations that collectively contribute to the overall explanation. Why do we teach you a fallacious argument method? Well, the IBE is as simple as we can make it, and a multi-factor IBE requires horrendously technical statistical analysis, but uses the same basic principles of Critical Thinking and good reasoning practice as our overly simplistic single-factor IBE.

5. Over-determination Fallacy
Description: We assume there is a single cause for an event when it is actually caused by several different factors, any of which would have been sufficient. This is the opposite to the Single Cause Fallacy.
Example: Grigori Rasputin was an immensely vigorous, charismatic, lustful priest who was the spiritual adviser (and lover) to most of the Russian royalty just before the Communist Revolution. In 1916, at a dinner in his honour, he was poisoned, shot, stabbed, clubbed, castrated, and finally drowned. Rasputin's death was over-determined. If we had prevented any one (or several) of his causes of death, he would still have died; in this sense, none of them was a sole cause; yet any of them could have been the cause of his death. Over-determination is quite common when there are several possible causes for an event.

Psych: Moral Biases🎬

There are several biases that specifically affect how we make moral judgements, in addition to the more general biases that affect all our reasoning. They are linked to some fundamental aspects of our moral reasoning, including who we feel most empathic towards, and the principles that we use to judge people. The five principles that are foundations of our moral judgements (according to psychologists Jonathan Haidt and Jesse Graham) are: harm avoidance, fairness, purity, respect for authority, and loyalty. The first two principles are universal – the last three vary more by individual and culture. Each principle has its own associated cognitive biases. Here are some of the cognitive biases related to the seemingly good notions of empathy, purity and fairness:

Empathy

Empathy is feeling what the other person feels. Compassion is feeling for their situation. Empathy introduces a lot more bias than Compassion.

Purity & Disgust

Our physiological processing of physically and morally disgusting behaviour is essentially identical. This means that our feelings of physical and moral disgust can become interwoven, with either triggering the other.

Purity & Cleanliness

Another piece of evidence indicating there is conflation between the perception of physical and moral cleanliness is that "clean" smells, such as the lemon scents associated with cleaning products, also promote morally virtuous behaviour such as giving money to charity, and sharing money fairly with strangers. Our own sense of safety and comfort affects our moral condemnation and our willingness to help. Our degree of altruism is linked to our level of emotional security. A specific example is that smells with pleasant associations, such as gentle perfumes or the aroma of baking bread, appear to make people more charitable, and more willing to help strangers. Similarly, when there are unpleasant smells, people are more hostile, more judgemental, and less willing to give people the benefit of the doubt.

Fairness

If we think society is essentially fair and based on merit, and that people can overcome most obstacles based on hard work and good character, then we also tend to believe that people usually get what they deserve. And then we believe they usually deserve what they get. This is known as the "Just World Hypothesis". Believers in the Just World Hypothesis, and similar promoters of fairness, are typically more judgmental about the unsuccessful or unlucky. For instance, if someone is struck by a car when crossing a road, or contracts a rare form of cancer, the "fair" people will more often regard this as a reflection on the person's character, and judge the person negatively for their misfortune. Worse still, the better off a person is, the more they tend to think that life is fair, and that people get what they deserve (why might that be?). And the harsher they tend to be on those who have been unlucky. This is just a general tendency, of course – there are people from all walks of life who judge too harshly, and those who don't judge harshly enough. But in general, the moral judgement of a rich person who says life is fair is likely to be highly suspect.

Fallacies: Moral

Moral or Normative Fallacies are common mistakes that are made when making normative or moral judgements. It's good to be aware of them, even if you agree with the conclusion of the argument.

Fallacies of Morality are a class of common, convincing reasoning errors involving evaluating or judging people or actions.

One reason why moral fallacies are common and basic is that we tend to be self-righteous when judging other people, and thus prone to extreme, black-and-white thinking. In addition, any criticism or reflection on our judgement of other people is often perceived as a criticism of our capacity for good judgement, that is, whether we ourselves are good. So we are not practiced at reflecting on how we reason when judging people. Even those of us who know we often make bad judgements in the heat of the moment don't often know why. And it's easier to "forgive" someone for a small trespass than to admit to ourselves we shouldn't have condemned them so strongly in the first place. Bearing all this in mind – and the several psychological biases that affect our moral reasoning that we've outlined in another article – you won't be surprised that this is a very partial list of common moral reasoning errors.

1. The Is/Ought Fallacy
Description: An argument with a normative conclusion, but no normative supporting premises. One of the simplest and more common forms is where the premises say 'is', while the conclusion says 'ought'.
Example: Jane got a low mark in her test. Working hard usually increases test marks. So Jane should work harder.
We've assumed that getting a high mark on a test has value, and moreover that it has more value than whatever Jane would give up to work harder. These assumptions need to be stated explicitly as premises. In general, moving from an 'is' (descriptive) to 'ought' (normative) statement requires an explicit, agreed set of normative standards (such as the moral principles we used in some of our examples). Moral arguments are a type of normative argument, and so this fallacy can apply to any moral argument where the moral principles aren't stated in the premises. You can read the Is/Ought article for more details.

2. Partial Judgement
Description: An action, object, or person has an aspect that is good (or bad). Therefore it is good (bad).
This fallacy is similar to Hasty Generalisation, but we generalise from an aspect to a whole, rather than from an individual to a group.
Example: Avocados are high in fat. All fat is unhealthy. Therefore, probably, avocados are unhealthy.
We are told that there is an aspect of avocados which is unhealthy, but we are not told that there are not other aspects of avocados which are healthy. This means we cannot rule out that avocados are healthy all things considered. It is easy to evaluate part of an action, object, or person and then apply that judgement to the whole. But most things have a mix of good and bad elements, and learning to weigh these elements appropriately is an important part of developing mature, nuanced judgement.

3. Fundamental Attribution Error
Description: The tendency to assume that character flaws drive other people's poor decisions, while extenuating circumstances explain away one's own poor decisions.
Example: You sped when driving, so you are a thoughtless and reckless person. Meanwhile, my speeding is justified because I'm running late for graduation.

4. The End justifies the Means
Description: Whatever you do for a good cause is good. The religious version of this is known as Pious Fraud, and is preached against in many Holy texts, including Romans 3:5-8 in the Bible.
Example: Lucy got money for the orphanage by embezzling it from the company where she worked. She was proud of her embezzlement, as it helped so many children.
While helping the orphans was (presumably) good, Lucy's theft of money was bad and should be condemned. It is faintly possible that this fraud was the least wrong way to help the orphans; but even then, her action was wrong. Sometimes there are only bad choices, such as either taking a morally dubious action (the 'means') to achieve a morally important goal (the 'end') or not achieving that goal. Different ethical systems may have different advice – for example Consequentialist views will tend to support your action, while Deontological views will condemn it. But in most non-extreme situations this choice is a False Dichotomy, and there are other choices that can achieve the end via a better means.

5. The 'Natural' fallacy
Description: Reasoning that assumes natural things are good, or that man-made things are bad.
Example: This shampoo contains natural plant extracts such as coconut and aloe vera, and none of those chemicals that other brands use. It's so good for your hair and scalp, naturally.
Coconut and aloe vera are plants. So are hemlock and poison ivy. Urushiol oil is one of many natural plant extracts that blisters and burns the skin. On the other hand, everything is a chemical. The only thing that is chemical-free is the inter-galactic void, and I don't suggest exposing your scalp to that! It is true that many man-made things are harmful, and that sometimes industrial processing damages or reduces the benefits of products. But the reverse can also be true. Also, some natural things are harmful and some artificial things are helpful. Also, if humans are part of nature, isn't whatever we do 'natural'? Or perhaps honey, produced by bees on an industrial scale, is 'artificial'? This is not the same as the 'Naturalistic fallacy' – which is a type of Is/Ought Fallacy.

6. Moral Self-licensing
Description: Moral self-licensing is a psychological bias, but it's often used explicitly as a premise in moral reasoning, so we'll include it here. Moral self-licensing results from an 'accumulative' theory of moral judgement.
That is, people accumulate "moral capital", and then can spend it.
Example: I paid for a stranger's meal at the cafe, so I'm allowed to take a chocolate bar from the office fridge.
Each deed is judged in isolation; that you've been good recently doesn't make your action acceptable. We don't add up your good and bad deeds, and zero them out like debts. Otherwise, doctors who regularly save lives would be allowed the occasional murder to blow off steam. Once someone's convinced themselves that they are a 'good person', or that they've done a good deed, they are more inclined to do bad deeds. For example, in one study people lied and cheated more after they made the decision to purchase products that were good for the environment. We tend not to apply the same latitude to other people; in fact we often judge 'good' people more harshly when they transgress. That's a fairly reliable sign that we are fooling ourselves with moral self-licensing.

Dispositions: Tolerance🎬

Tolerance is a disposition we need in order to live together in a world where people and groups of people have different moral convictions. However, Tolerance isn't Relativism. Tolerance does not mean we are allowed to say "well, it's all relative" or "it's all subjective" or something like that.

Tolerance is the disposition to live with some (moral) differences, and pick your battles wisely.

The disposition of Tolerance is a combination of three things: compromise, picking your battles, and (reasonable) humility. Here are three considerations that will help foster the disposition of Tolerance:

1. Know when to leave well enough alone!
We have to live together with a lot of other people who have different moral convictions than we do, and we are even dependent on a lot of people who have different moral convictions than we do. Our community includes family members, friends, fellow students, teachers, workmates, employers, the bus driver, the shop assistants at Pak'n'Save, teammates, and so on. It is likely that many of these people you deal with every day – and that you have to be able to interact with in a civil and cooperative manner – have some moral convictions that are different from yours. We can live with other people because we can leave well enough alone. We can say to ourselves: "I don't agree with that, but I can live with it" or, "this isn't the time and place to have a debate about fundamental moral convictions".

2. Be (reasonably) humble
Even if we believe that there are some universal moral truths, that doesn't mean that we should think that we ourselves know them – or that we can't get things wrong! To be a good critical thinker we need to be able to listen to other people and fairly consider other viewpoints. That goes for morality as well. Ethics is hard. People have been trying to figure it out for thousands of years, and although there has been some progress in some areas, there are still deep disagreements (probably because it involves people, and people are super-complicated). Considering another viewpoint fairly could undermine your own moral position, but it could also strengthen it, since you could find better arguments for your position. However, there are some viewpoints that it is fair enough to dismiss out of hand: no one is saying that you need to be humble and seriously consider the Nazis' views. That debate is over.

3. Realise that moral convictions are fundamental to people's identity
You can't expect people to be dispassionate about their moral convictions.
And you probably shouldn't be dispassionate about your own moral convictions. For example, if you are committed to the moral view that all human beings have an inalienable value as human beings, then you might become upset if someone argues against that. In fact, it would be a bit strange if you did not become at least a little bit upset. Moral convictions are unlike other beliefs. They are a fundamental part of your personal identity; in a way we could say that what you value makes you the person you are. We can change our moral beliefs, and we do that a lot in our lives. Some moral beliefs are changed more easily than others when we encounter new information or a good argument because they weren't as deeply held. But changing our moral convictions on some matters takes more time and can be more painful because changing that conviction would mean that we would have to think about our entire way of life, how we see ourselves and maybe change a lot as a person. This might also be a reason not to expect other people to change their moral convictions easily or rapidly. Example: imagine you were to seriously consider changing your moral conviction on whether or not it was right to eat meat. Changing that moral stance might be very painful, because a) it would mean that you would have to change your way of life, and b) if you did change, it would mean that you would have to think that what you were doing before was in fact morally wrong. And people just don't like thinking that they are or have been acting in a morally bad way.