Intellectual Virtues and Vices


Summary

This chapter explores intellectual virtues and vices, arguing that good thinking involves more than just following rules. It emphasizes the importance of intellectual virtues like curiosity and honesty and how these shape our judgments. The chapter also highlights potential intellectual vices and cautions against using the recognition of virtues and vices as weapons.



4: INTELLECTUAL VIRTUES AND VICES
Andrew Lavin, Butte College

CHAPTER OVERVIEW

Michael Fitzpatrick contributed quite a bit to this chapter, so Chapter 4 should be seen as a collaboration between Lavin and Fitzpatrick.

Note for Instructors

Starting in edition 4, Chapter 4 on fallacies has been renamed and somewhat rewritten into a chapter on "Intellectual Virtues and Vices," incorporating material from the conclusion and the previous version of the chapter. Thinking about the traditional fallacies under a model of virtue epistemology seemed more in line with the values of this textbook, but they can still be taught as traditional fallacies if that is your preference. The most significant change is that the "Fallacies of Induction," which were 4.3, have all been moved to the new 8.5, forming part of the chapter on inductive reasoning. This allows those fallacies to be taught in the context in which they are mistakes in reasoning, as teaching fallacies in the context of their positive counterparts seems more pedagogically useful. Also, the fallacy of equivocation has been moved to 2.2 "Fallacy of Equivocation," since it is most naturally taught alongside the discussion of the role of language in critical thinking. All the other fallacies (Relevance and Presumption) remain here, described as intellectual vices.

4.1: What are Virtues and Vices?
4.2: Some Intellectual Virtues
4.3: Some Intellectual Vices
4.E: Chapter Four (Exercises)

This page titled 4: Intellectual Virtues and Vices is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Andrew Lavin via source content that was edited to the style and standards of the LibreTexts platform.

4.1: What are Virtues and Vices?

It can be tempting to suppose that thinking well is simply a matter of following the right rules.
As long as we evaluate our evidence in the right way, make sure our premises support our conclusions, and don't make mistakes, then we'll be good thinkers. But this actually isn't the case. As much as we might wish, good thinking is not ultimately about following rules. It can seem that way, especially when we get to our later chapters on deductive logic, but in reality only a small number of situations are appropriate for deductive reasoning. We learn rules of reasoning primarily to train our brains how to think clearly and carefully, not because the rules will always be applicable. In most situations thinking well is a matter of good judgment, where we have to decide what makes the most sense to believe given these particular facts and values in this particular situation. How we reason in one context might not make sense in the next context when new information or methods of investigation arise. We learn how to map arguments and understand validity to discover basic patterns that generally lead towards truthful beliefs. Thinking well means learning when particular patterns apply and when they don't.

So how do we become people of good judgment? Good judgment requires combining our reasoning abilities and the techniques we learn in this class with a practice of building up in ourselves intellectual virtues while avoiding intellectual vices. Virtues are character traits or dispositions of a person that help them be a good person overall. Artistic virtues make one a good artist; social virtues make us likeable to others; and ethical virtues help us to promote flourishing in our own lives and the lives of others. The intellectual virtues are like these—they help us be better thinkers and to think well with others. It's not just how we think that matters; it also matters what kind of person we are.

Examples can help, so let's take a quick glance at some artistic virtues to help us understand what we're talking about. One artistic virtue is probably creativity.
Artists must be creative people, who can take familiar representational materials and imagine new, purposeful ways to work with those materials and present their creations as art. It's difficult to flourish as an artist if one lacks creativity. Good desires are also important virtues; an artist who does not desire to create will find it difficult to employ their creativity. It's not enough to have a creative mind; you also have to be motivated to use it. Finally, creating a piece of art, whether a painting or a collage or a sculpture or a theatre production, is hard work and takes an enormous amount of patience and perseverance. Creative people with a desire to create can still fall short of their artistic ambitions if they don't have the patience and perseverance to see their project all the way through.

I mentioned there are social virtues (sometimes called "social graces") and ethical virtues. Can you think of what some of these might be, using the artistic examples as a guide?

Of course, our textbook is about thinking well, and our focus will be on the intellectual virtues and their vices. Thinking in terms of intellectual virtues uses our character as thinkers to explain when we think well and when we don't. To keep our topic manageable, we're only going to focus on four central intellectual virtues, even though there are many, many more. Then, we'll discuss some ways people can lack virtue in their thinking, what we'll call intellectual vices. Vices are character traits or dispositions which inhibit our flourishing—so intellectual vices are those that make us think worse, rather than well.

A Word of Caution

The skills you'll pick up in this chapter—skills in identifying virtues and vices—can often be used as weapons. Especially online, where the goal is often to win and even humiliate rather than to connect and understand, charging someone with lacking virtue can be treated as a way of shutting someone out of a conversation. Don't use them this way.
The primary goal of learning how reasoning goes wrong is always to learn to think more clearly and to better yourself. When these tools are used to make you seem more worthy of having your voice heard, they are being misused. So instead of being on the lookout for bad reasoning in others and being quick to shout "VICES!!" when someone missteps in their reasoning, be sure that you're having the discussion you're having because you want to understand the viewpoint of another, and take great care with how you treat those who haven't had the privilege of taking a class like logic and critical thinking with an amazing teacher like yours ;). In short, focus on your own reasoning, but when you feel you must educate someone else, do so gently and in a spirit of mutual understanding.

4.2: Some Intellectual Virtues

Curiosity

As we mentioned in 1.1 "On Truth", truth matters because it satisfies our curiosity. But that means we need to be curious in the first place! Good thinkers have a disposition to pursue the truth for its own sake, solely out of a desire to know and not simply for the potential advantages it might afford us. Having a curious disposition means you are motivated by a desire to learn about the world around you. You genuinely care about, say, the experience of black folk in urban housing, or whether a messenger RNA (mRNA) molecule can be used in vaccines to produce an immune response. Of course, in both of these examples what we learn might turn out to be quite useful for improving the lives of minority communities or protecting human communities generally from harmful viral infections.
But if we're curious, we will value learning about these things even if they don't lead to useful outcomes, simply because discovering the truth about our world enriches our lives all by itself. Learning more about how different communities struggle to create spaces to call home, or how various biochemical molecules work, enriches our lives whether or not we can develop policies or technologies as a result.

Being motivated to learn for its own sake is important for how we think well with others. If all I care about is what my learning will get me, I will be tempted or biased to prefer evidence that favors the outcomes I want, or to discount evidence for the outcomes I oppose. Being curious helps to guard us against caring about truth only for its instrumental value, rather than its intrinsic value. Curiosity places our focus on the question or puzzle we're interested in, and not on preserving a pre-determined position we already believe. The same goes for listening to other people. If I'm curious about how others see the world, it helps me to listen to their ideas and perspectives regardless of whether I can use what I learn to further my career or take advantage of them. Curiosity also helps us guard ourselves against manipulation; if we genuinely want to understand current events, we will be less interested in partisan or ideological news media and political commentary that "spins" events to serve a particular agenda.

Finally, curiosity is important as a virtue because it reminds us that making mistakes and being wrong is okay. The British-American philosopher Alfred North Whitehead wrote, "panic of error is the death of progress; and love of truth is its safeguard." Making mistakes and judging falsely are not bad as long as we see them as part of the larger learning process. Mistakes give us an opportunity to figure out why we were mistaken, and make corrections accordingly.
If we are curious, that is, if we are people motivated by a love of truth, then we will care more about the discovery process than the fact that we believed the wrong things along the way.

Intellectual Honesty

Intellectual honesty is the disposition to be truthful and sober in your assessment of your own knowledge. It's easy to claim that we know things and even to have confidence in what we know, but often we find that on reflection we shouldn't have as much confidence as we do. Confidence is cheap. What is of higher worth is the ability and disposition to recognize the things we don't know or shouldn't be confident in, and the things that we do know and do have reason to be confident in. Much of what we think we know, we think we know really because we read a headline while scrolling through Facebook or Twitter, or because someone told us once sort of off-handedly. These, when we think about it, aren't very good sources of knowledge. They aren't really grounds or justifications for our beliefs—or at any rate aren't very good justifications for our beliefs. Intellectual honesty is the disposition to take a beat, think about why it is that we feel confident in a belief and feel ready to assert it, and then proceed with a more honest assessment of what we know and why we think we know it.

Intellectual Humility

Intellectual humility goes hand in hand with intellectual honesty. What it means to be intellectually humble, though, is slightly different from being honest. Intellectual humility is a disposition to recognize that even when we have good grounds for knowing something, there might always be something that upsets that understanding or set of beliefs. To be intellectually humble is to remember that human beings have been very confident many times in the past, often for very good reason, but have turned out to be wrong due to some false assumption somewhere in their thinking.
It's the disposition to say "even if I have really good reason to believe what I believe, I still might be wrong."

The Search for Vulcan

In the early 1800s, several astronomers including Alexis Bouvard noticed that the planet Uranus was not orbiting the sun in a manner consistent with current mathematical models about the laws of nature governing how planets move. This led Bouvard and another astronomer, Urbain Le Verrier, to postulate that there must be another planet in the vicinity whose gravitational pull was affecting the motion of Uranus. In 1846, using predictions sent to him by Le Verrier, Johann Gottfried Galle was able to spot a planet from the Berlin Observatory, which would become known as the planet Neptune.

Shortly after the discovery, Le Verrier turned his attention to Mercury, another planet that astronomers had trouble applying current physical predictions to. Le Verrier made a complete model of Mercury's motion with predictions to be tested when Mercury was next scheduled to transit across the face of the Sun in 1848. Mercury failed to move in accordance with Le Verrier's predictions. Rather than give up, Le Verrier spent the next decade creating some of the most rigorous and detailed calculations of the motion of Mercury to date, yet he could not come up with predictions that matched observation. Using inspiration from the successful prediction of Neptune as a planetary body affecting the motion of Uranus, Le Verrier predicted that there must be a planetary body affecting Mercury, and he postulated the existence of the planet Vulcan (the same word that is used in some of the Star Trek stories!). Over the rest of his life, Le Verrier worked with observatories to confirm the existence of Vulcan, and while many alleged sightings were reported, none came in that could be confirmed. He died in 1877, firmly believing that Vulcan was out there.
It wouldn't be until 1915 that astrophysicists would finally be able to show that, in fact, there is no planet Vulcan—the behavior of Mercury can be explained by the curvature of spacetime, which was Albert Einstein's new way to account for the effects of gravity. Einstein's new theory correctly predicted the orbit of Mercury.

Notice that Le Verrier was motivated in his postulates of Neptune and Vulcan to satisfy his curiosity. The existence of a new planet at that time would have almost no technological significance. He simply wanted to know why there were small deviations in the Mercurial orbit where there should not have been. Using classical Newtonian mechanics, a nearby object's gravitational pull seemed like the most likely hypothesis. Since this hypothesis worked for deviations in the orbit of Uranus and was confirmed by the discovery of a new planet, Le Verrier had good reasons to think it would work in the case of Mercury as well.

The planet Vulcan had many supporters long after Le Verrier's death, but the discovery of a new way to think about spacetime and gravity put this support to rest. This required intellectual honesty, for as much as people wanted to find the planet which would explain Mercury's orbit, they had to admit their search had not been successful. It's hard to devote your life to a hypothesis that turns out to be wrong. This means it also required some intellectual humility to admit that they were wrong, and that the 50-year quest for Vulcan had been in vain. But this does not mean the end of curiosity! Curiosity is such that we can be mistaken in what we thought was true, and use our mistake as fuel to start moving in a new direction to see what we can discover. Our curiosity should never be based on being right, but on wanting to figure things out. Le Verrier was equally virtuous in his search for Vulcan as he was in his search for Neptune, even though he was right in one case and wrong in the other.
Charity

All of the aforementioned virtues are worth cultivating. But there is one more worth reminding ourselves is a virtue: charity. Recall that in section 1.1 "The Principle of Charity" we discussed the Principle of Charity. Review that section for a slightly more complete discussion of the virtue of charity. To be charitable is to attribute the best intentions and strongest justifications to someone else. To interpret a set of actions charitably is to try to see those actions in terms of the most reasonable set of motivations or intentions behind them. To interpret someone's beliefs charitably is to attribute moral innocence to them as a person as far as is possible, so as to give them the strongest possible benefit of the doubt. Only when you have really good reasons for doing so might you think of someone else as irrational, vicious (in the sense meaning the opposite of virtuous), or petty. Charity, then, is a habit of interpreting actions and beliefs in a good light—a rational and moral light.

All of these dispositions have their appropriate limits, of course: many beliefs and actions are just wrongheaded or irrational or bigoted, and we needn't bend ourselves in pretzel knots trying to interpret them charitably. Many of our own beliefs are things we have really good reason for believing, so we don't need to be so humble that we refuse to believe anything. Some of us, furthermore, are really in a better position to know things and to reason about them. A false sense of humility stops being honest at a certain point.

Alfred North Whitehead, Modes of Thought. New York: The Macmillan Co., 1938, p. 16.
4.3: Some Intellectual Vices

A helpful way to practice these intellectual virtues is to see ways that we fail to practice them, so that we can learn from those failures and avoid them. Learning from how we think badly is a great way to learn how to think well. "Vice" is the traditional name for lacking in virtue. Typically vices are themselves character traits or dispositions, defects in who we are that cause us to act poorly. We're going to focus on vices that have traditionally been called "fallacies," a term that is less helpful because it suggests that such actions are always wrong. This is not true. With all virtues and vices, context matters for whether an action expresses virtue or vice in a particular situation. So in general, we prefer the term "vice" rather than "fallacy". However, since some of the content that follows is excerpted from more traditional textbooks, you will see the language of fallacies used, but know that we're always talking about intellectual vices, ways we fall short of being intellectually virtuous.

Most of the following vices are not themselves character traits, but expressions of character traits. Using the term "vice" more expansively is not contrary to virtues and vices being fundamentally character traits. Philosopher Quassim Cassam notes that a vice is anything in us that is "likely to impede effective and responsible inquiry" (165), and this includes both bad character traits like dishonesty or arrogance, and their expression in behaviors like wishful thinking or stubbornly ignoring evidence contrary to one's beliefs. The vices below are all ways of not thinking well because of failures in our intellectual character.
Since we noted that these vices are also labeled fallacies, it's important to realize that these are not the same kinds of fallacies we will encounter later in Chapter 7 when we discuss "logical fallacies." Logical fallacies describe rules of logic in which an inference always leads to a conclusion that does not follow from the premises. Logical fallacies are always bad. The intellectual vices are not logical fallacies. They have to do with how we behave badly in our thinking or in our conversations with other people. This is another reason why labeling both types of mistakes "fallacies" is unhelpful, because it suggests they're both the same kinds of things. Some textbooks call the logical fallacies "formal fallacies" and the vices in this chapter "informal fallacies," to show that the first kind are mistakes of logical form and the second kind are not. We think it is most helpful to simply call them something else: intellectual vices.

It's worth noting that the presence of intellectual vices means we haven't gone about reasoning with virtue. It does not mean that our conclusions, the things we believe, are in fact wrong. Vice describes our justification for what we believe, not the truth of what we believe. Even a broken clock is right twice a day, and thinking in an intellectually vicious manner doesn't mean our conclusions are false. What it does mean is that we have not thought well enough to be justified in believing our conclusions. Make sure you don't use the vices to ignore the possible truth of a claim. The vices are reasons to reject a particular argument, but you should always ask yourself, "What would this argument look like if it were virtuously argued?"

Okay, how will we progress from here? There are two sorts of vices we'll discuss in this chapter: Vices of Relevance and Vices of Presumption. We'll go through each vice and offer some examples. Most of the rest of this chapter is pulled from Van Cleave and Knachel.
There are also fallacies in Chapter 2 and many more in Chapter 8.

Vices of Relevance

Vices of relevance are ways of making arguments or critiquing arguments that have no relevance to the arguments themselves. When we are not intellectually honest or humble, or when we lack curiosity and charity, we tend to be more focused on winning arguments or proving our friends wrong than on seeking which conclusions actually have the strongest justification. That tempts us to make arguments that are psychologically or socially successful, but not actually good arguments (because they depend on irrelevant details). Keep in mind that not every topic shift in an argument is a vice of relevance, because sometimes the new topic is relevant to the argument. We'll see an excellent example of this in our first vice of relevance.

Ad Hominem Attack/Argument Against the Person and Genetic Fallacy

The vice of an Ad Hominem Attack occurs when someone unfairly attacks the character and motives of the arguer instead of their argument. Recall that as charitable thinkers, we are trying to separate the arguer from their argument and address the latter on its own merits. If even the most loathsome person makes a good argument, that argument remains valid or strong regardless of the failings of the person making it. But not every Ad Hominem is a vice; sometimes people put forward bad arguments because of a lack of virtue. For example, if a person makes an argument, not for the sake of truth but to prove that they are smarter than others, then it is an appropriate response to set aside the argument and instead criticize their lack of honesty or curiosity. Notice that this does not render their argument bad; it simply avoids addressing the merits of the argument until the arguer is prepared to debate those merits for the right reasons.
Because intellectual virtue is an important part of thinking well, an ad hominem critique is appropriate when someone (including ourselves!) is not thinking virtuously. As Quassim Cassam writes, "The evaluation of the justificational status of a particular belief is closely related to the evaluation of the believer" (2016: 175). If we think someone has made a good argument, we're saying they are thinking well, and this means they are thinking virtuously. Cassam elaborates, "A justified belief is characteristically one which arises through the exercise of intellectual virtue. In evaluating a belief as justified we are in effect commending the believer" (2016: 176).

Okay, so when is an ad hominem attack a vice? If we think about Cassam's proposal above, then ad hominem is a vice whenever we attack someone's character (instead of their argument) for reasons other than their lack of intellectual virtue. If I say of someone, "I don't think your argument is honest about the reasons against your position," I'm fairly criticizing a lack of virtue. But if I say of someone, "I don't think your argument is good because you're a loan shark," I have exemplified a vice; I have made an argument without virtue.

There are three main types of ad hominem attack:
1. Abusive: you simply attack the character or rationality of your opponent (or a group to which the opponent belongs, like "liberals" or "pro-lifers").
2. Circumstantial: as above in our cartoon, you point to circumstances which make your opponent untrustworthy or suspect.
3. Tu Quoque: Latin meaning "you too!" You point to similar faults in your opponent when your actions or character are called into question. More generally: you point to a fault elsewhere to draw attention away from the fault being discussed.

From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.
"Ad hominem" is a Latin phrase that can be translated into English as "against the man." In an ad hominem fallacy, instead of responding to (or attacking) the argument a person has made, one attacks the person him or herself. In short, one attacks the person making the argument rather than the argument itself. Here is an anecdote that reveals an ad hominem fallacy (and that has actually occurred in my ethics class before). A philosopher named Peter Singer had made an argument that it is morally wrong to spend money on luxuries for oneself rather than give all of the money that you don't strictly need away to charity. The argument is actually an argument from analogy (whose details I discussed in section 3.3), but the essence of the argument is this: every day in this world there are children who die preventable deaths, and there are charities that could save the lives of those children if they were funded by individuals from wealthy countries like our own. Since there are things that we all regularly buy that we don't need (e.g., Starbucks lattes, beer, movie tickets, or extra clothes or shoes), if we continue to purchase those things rather than using that money to save the lives of children in need, then we are essentially contributing to the deaths of those children. In response to Singer's argument, one student in the class asked: "Does Peter Singer give his money to charity? Does he do what he says we are all morally required to do?" The implication of this student's question (which I confirmed by following up with her) was that if Peter Singer himself doesn't donate all his extra money to charities, then his argument isn't any good and can be dismissed. But that would be to commit an ad hominem fallacy.
Instead of responding to the argument that Singer had made, this student attacked Singer himself. That is, they wanted to know how Singer lived and whether he was a hypocrite or not. Was he the kind of person who would tell us all that we had to live a certain way but fail to live that way himself? But all of this is irrelevant to assessing Singer's argument. Suppose that Singer didn't donate his excess money to charity and instead spent it on luxurious things for himself. Still, the argument that Singer has given can be assessed on its own merits. Even if it were true that Peter Singer was a total hypocrite, his argument may nevertheless be rationally compelling. And it is the quality of the argument that we are interested in, not Peter Singer's personal life and whether or not he is hypocritical. Whether Singer is or isn't a hypocrite is irrelevant to whether the argument he has put forward is strong or weak, valid or invalid. The argument stands on its own, and it is that argument rather than Peter Singer himself that we need to assess.

Nonetheless, there is something psychologically compelling about the question: does Peter Singer practice what he preaches? I think what makes this question seem compelling is that humans are very interested in finding "cheaters" or hypocrites—those who say one thing and then do another. Evolutionarily, our concern with cheaters makes sense because cheaters can't be trusted and it is essential for us (as a group) to be able to pick out those who can't be trusted. That said, whether or not a person giving an argument is a hypocrite is irrelevant to whether that person's argument is good or bad. So there may be psychological reasons why humans are prone to find certain kinds of ad hominem fallacies psychologically compelling, even though ad hominem fallacies are not rationally compelling.

Not every instance in which someone attacks a person's character is an ad hominem fallacy.
Suppose a witness is on the stand testifying against a defendant in a court of law. When the witness is cross-examined by the defense lawyer, the defense lawyer tries to undermine the witness's credibility, perhaps by digging up things about the witness's past. For example, the defense lawyer may find out that the witness cheated on her taxes five years ago or that the witness failed to pay her parking tickets. The reason this isn't an ad hominem fallacy is that in this case the lawyer is trying to establish whether what the witness is saying is true or false, and in order to determine that we have to know whether the witness is trustworthy. These facts about the witness's past may be relevant to determining whether we can trust the witness's word. In this case, the witness is making claims that are either true or false rather than giving an argument. In contrast, when we are assessing someone's argument, the argument stands on its own in a way the witness's testimony doesn't. In assessing an argument, we want to know whether the argument is strong or weak, and we can evaluate the argument using the logical techniques surveyed in this text. In contrast, when a witness is giving testimony, they aren't trying to argue anything. Rather, they are simply making a claim about what did or didn't happen. So although it may seem that a lawyer is committing an ad hominem fallacy in bringing up things about the witness's past, these things are actually relevant to establishing the witness's credibility. In contrast, when considering an argument that has been given, we don't have to establish the arguer's credibility because we can assess the argument they have given on its own merits. The arguer's personal life is irrelevant.
Figure 4.3.1: "Look," says the bird, "if we start walking around on the ground all the time, the cat will get us. I know I have a personal stake in this because I'd prefer not to be eaten, but my argument would stand even if I myself were a cat!" (Image Credit: Otto Speckter in Picture Fables)

Tu Quoque

Tu Quoque is a version of the Ad Hominem fallacy. Here's Van Cleave again.

"Tu quoque" is a Latin phrase that can be translated into English as "you too" or "you, also." The tu quoque fallacy is a way of avoiding answering a criticism by bringing up a criticism of your opponent rather than answering the criticism. For example, suppose that two political candidates, A and B, are discussing their policies and A brings up a criticism of B's policy. In response, B brings up her own criticism of A's policy rather than respond to A's criticism of her policy. B has here committed the tu quoque fallacy. The fallacy is best understood as a way of avoiding having to answer a tough criticism that one may not have a good answer to. This kind of thing happens all the time in political discourse.

Tu quoque, as I have presented it, is fallacious when the criticism one raises is raised simply in order to avoid having to answer a difficult objection to one's argument or view. However, there are circumstances in which a tu quoque kind of response is not fallacious. If the criticism that A brings toward B is a criticism that applies equally not only to A's position but to any position, then B is right to point this fact out. For example, suppose that A criticizes B for taking money from special interest groups. In this case, B would be totally right (and there would be no tu quoque fallacy committed) to respond that not only does A take money from special interest groups, but every political candidate running for office does. That is just a fact of life in American politics today. So A really has no criticism at all of B, since everyone does what B is doing and it is in many ways unavoidable.
Thus, B could (and should) respond with a "you too" rebuttal, and in this case that rebuttal is not a tu quoque fallacy.

Attacking causes for belief rather than reasons for belief (Genetic Fallacy)

The vice of attacking the causes for belief, sometimes called the Genetic Fallacy, requires learning the difference between causes and reasons. Perhaps I trust my physician because my best friend goes to the same physician. The 'because' here is an explanation for why I trust her. But if you were to ask me why I trust my physician, I might say, "Because she is the most highly-rated general practitioner in my area." Now I have given you a reason for trusting her. Both can be true descriptions of my trust: the cause of my trust is that my best friend trusts her, and the reason I think my trust is justified is that she is so highly rated. When criticizing an argument, we want to criticize the reasons for belief, not the causes. The genetic fallacy occurs when, for example, instead of looking at your beliefs as they stand on their own, I look at the role those beliefs play in your psychology or the psychological origins of those beliefs. I might say that you only believe in the free market because your father believes in the free market. That's not an attack against the belief itself. At best it amounts to the claim that you don't have any justification for believing it, only an explanation for how you came to believe it. That'd be like critiquing a particular golf club because it was made by a brand with a bad reputation. It's still a perfectly good golf club no matter who made it. We should critique the golf club on the basis of its usefulness as a golf club, not on the basis of where it was made. Note that it might be reasonable not to trust a bad brand when making a purchase, but if the reviews come in and it's a fine golf club, then its origin is irrelevant.

From Matthew J.
Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

The genetic fallacy occurs when one argues (or, more commonly, implies) that the origin of something (e.g., a theory, idea, policy, etc.) is a reason for rejecting (or accepting) it. For example, suppose that Jack is arguing that we should allow physician-assisted suicide and Jill responds that the idea was first used in Nazi Germany. Jill has just committed a genetic fallacy because she is implying that because the idea is associated with Nazi Germany, there must be something wrong with the idea itself. What she should have done instead is explain what, exactly, is wrong with the idea rather than simply assuming that there must be something wrong with it since it has a negative origin. The origin of an idea has nothing inherently to do with its truth or plausibility. Suppose that Hitler constructed a mathematical proof in his early adulthood (he didn't, but just suppose). The validity of that mathematical proof stands on its own; the fact that Hitler was a horrible person has nothing to do with whether the proof is good. Likewise with any other idea: ideas must be assessed on their own merits, and the origin of an idea is neither a merit nor a demerit of the idea. Although genetic fallacies are most often committed when one associates an idea with a negative origin, the fallacy can also go the other way: one can imply that because an idea has a positive origin, the idea must be true or more plausible. For example, suppose that Jill argues that the Golden Rule is a good way to live one's life because the Golden Rule originated with Jesus in the Sermon on the Mount (it didn't, actually, even though Jesus does state a version of the Golden Rule). Jill has committed the genetic fallacy in assuming that the (presumed) fact that Jesus is the origin of the Golden Rule has anything to do with whether the Golden Rule is a good idea.
I'll end with an example from William James's seminal work, The Varieties of Religious Experience. In that book (originally a set of lectures), James considers the idea that if religious experiences could be explained in terms of neurological causes, then the legitimacy of the religious experience is undermined. James, a materialist who thinks that all mental states are physical states (ultimately a matter of complex brain chemistry), says that the fact that a religious experience has a physical cause does not undermine the veracity of that experience. Although he doesn't use the term explicitly, James holds that inferring from the physical origin of an experience that the experience lacks veracity is a genetic fallacy. Origin is irrelevant for assessing the veracity of an experience, James thinks. In fact, he thinks that religious dogmatists who take the origin of the Bible to be the word of God are making exactly the same mistake as those who think that a physical explanation of a religious experience would undermine its veracity. We must assess ideas for their merits, James thinks, not their origins.

How do intellectually virtuous thinkers avoid making ad hominem attacks when they're inappropriate? Well, if we're intellectually honest, we will emphasize substance over motives. We will be slow to question someone's motives behind an argument, and instead start by charitably focusing on the substance of what they have to say. Of course, "slow" does not mean never, and sometimes a person's behavior and manner of argument will convince us that bad motives are a factor, but we should not start from a place of assuming ill intent.

In general, we should be slow to cast aspersions on another person's character or intelligence. Just because we think they have made a bad argument does not mean we should attribute this to a lack of ability or integrity on their part. Some people (ourselves most of all!)
simply make mistakes. By focusing on their argument, we continue to treat them as an equal dialogue partner, someone whose views are worthy of our curiosity and our charity. This often makes the dialogue proceed better and with more insight. Again, "slow" does not mean never, and sometimes a person who is behaving belligerently needs to be told that their conduct makes them unfit for continued dialogue. But if we resort to such "last measures," it should always be in the hope of helping a person become more intellectually virtuous so that they can rejoin the conversation, and certainly not with the secret motive of getting them to agree with us or winning the debate.

Mansplaining

Sometimes we are so confident we're right, we begin to explain why we're right in a manner and tone that is aggressive and domineering and that keeps the other person from contributing. This has become known as Mansplaining, a term coined because many women have experienced their ideas being ignored or discredited by men who speak at them in a condescending manner. But mansplaining can be practiced by persons of any gender towards persons of any other gender. It often involves telling the other person how they feel or should feel, what they believe, and why their perspective doesn't matter. Consider Hanuni. Hanuni shares with a friend her anxieties concerning the Russian invasion of Ukraine. She tells her friend, "I have a hard time focusing on my daily responsibilities because I feel overwhelmed at the thought of Ukrainians right now fighting and dying just to be free enough to carry out their daily responsibilities." Her friend, Hiari, replies, "C'mon, that's not how you feel. You don't know what it's like to be a Ukrainian, and you've never been to war, so you really have no business assuming you know what they're going through.
You really should be counting your own blessings rather than worrying about things that are not your problem." Think about how Hiari's response shuts Hanuni down and makes her feel that she's wrong to care about the situation in Ukraine or to empathize with people in other situations. Most significantly, Hiari's comment erases Hanuni's voice and contribution.

A somewhat high-profile instance of mansplaining occurred during the 2017 U.S. Senate debate on the confirmation of Senator Jeff Sessions (Alabama) to the office of Attorney General of the United States. The confirmation process was contentious in part because of concerns about Senator Sessions' record on civil rights. To speak to this issue, fellow Senator Elizabeth Warren (Massachusetts) reminded the Senate of former Senator Ted Kennedy's (also of Massachusetts) objections back in 1986 to Sessions being appointed to a judgeship because of concerns over suppression of black votes in his area of authority. She then proceeded to read a letter on the Senate floor authored by Coretta Scott King, the widow of civil rights leader Martin Luther King, Jr., written to the Senate Judiciary Committee in 1986 opposing Sessions' confirmation to a judgeship. As Senator Warren was reading from King's letter, the presiding Senate Chair, Steve Daines, interrupted her twice to remind her that Senate rules prohibit casting aspersions on other Senators. After some back and forth, he permitted her to continue reading King's letter. However, shortly after resuming her reading, then Senate Majority Leader Mitch McConnell interrupted her, insisting that she was slandering Senator Sessions' character from the floor. He called for a vote on whether she would be allowed to continue her speech, and the Senate voted to terminate her speaking time. Later in the Senate debate, another male Senator read the letter by King without objection.
Shortly after Elizabeth Warren was told to sit down, Majority Leader McConnell explained the events in the following manner: "Here is what transpired. Senator Warren was giving a lengthy speech. She had appeared to violate the rule. She was warned. She was given an explanation. Nevertheless, she persisted." McConnell's interruptions of Warren's speech, and his domineering chastisement of her, lecturing her on why she was not permitted to continue, were more focused on explaining at her why her voice wasn't going to be included than on dialoguing with her about what she had to say.

Mansplaining is never good, but it's important that we do not label everyone who criticizes what we believe as engaging in mansplaining. Intellectually virtuous people are teachable and allow others to help them see their mistakes. Sometimes people will dismiss the arguments of others as mansplaining when in fact they're only voicing disagreement. Dismissing a reasonable counterargument as mansplaining is in fact itself a type of mansplaining, another way to shut down someone's voice. So it's important to correctly identify cases of mansplaining and not use the concept as a means to avoid having to listen to anyone who challenges our thinking.

Straw Argument

Figure 4.3.2: Do you want to build a snowman? And then critique his position on global warming? (Image Credit: Otto Speckter in Picture Fables)

The vice of constructing a straw argument happens when someone (willfully or mistakenly) misinterprets someone else's argument or position. We also might call it creating a Straw Argument, Straw Figure, Straw Person, or Straw Man. The opponent's argument or position is characterized uncharitably so as to make it seem ridiculous or indefensible. It is a failure of charity because the person is attacking an irrelevant argument rather than the argument their opponent actually gave. Imagine someone building a straw doll and fighting that instead of their actual opponent.
No one would think they had won the fight.

From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

Suppose that my opponent has argued for a position, call it position A, and in response to his argument, I give a rationally compelling argument against position B, which is related to position A but is much less plausible (and thus much easier to refute). What I have just done is attack a straw man—a position that "looks like" the target position, but is actually not that position. When one attacks a straw man, one commits the straw man fallacy. The straw man fallacy misrepresents one's opponent's argument and is thus a kind of irrelevance. Here is an example. Two candidates for political office in Colorado, Tom and Fred, are having an exchange in a debate in which Tom has laid out his plan for putting more money into health care and education and Fred has laid out his plan, which includes earmarking more state money for building more prisons, which will create more jobs and, thus, strengthen Colorado's economy. Fred responds to Tom's argument that we need to increase funding to health care and education as follows: "I am surprised, Tom, that you are willing to put our state's economic future at risk by sinking money into these programs that do not help to create jobs. You see, folks, Tom's plan will risk sending our economy into a tailspin, risking harm to thousands of Coloradans. On the other hand, my plan supports a healthy and strong Colorado and would never bet our state's economic security on idealistic notions that simply don't work when the rubber meets the road." Fred has committed the straw man fallacy. Just because Tom wants to increase funding to health care and education does not mean he does not want to help the economy. Furthermore, increasing funding to health care and education does not entail that fewer jobs will be created.
Fred has attacked a position that is not the position that Tom holds but is in fact a much less plausible, easier-to-refute position. However, it would be silly for any political candidate to run on a platform that included "harming the economy." Presumably no political candidate would run on such a platform. Nonetheless, this exact kind of straw man is ubiquitous in political discourse in our country. Here is another example.

Example 4.3.1

Nancy has just argued that we should provide middle schoolers with sex education classes, including how to use contraceptives so that they can practice safe sex should they end up in the situation where they are having sex. Fran responds: "Proponents of sex education try to encourage our children toward a sex-with-no-strings-attached mentality, which is harmful to our children and to our society." Fran has committed the straw man (or straw woman) fallacy by misrepresenting Nancy's position. Nancy's position is not that we should encourage children to have sex, but that we should make sure that they are fully informed about sex so that if they do have sex, they go into it at least a little less blindly and are able to make better decisions regarding sex.

As with other fallacies of relevance, straw man fallacies can be compelling on some level, even though they are irrelevant. It may be that part of the reason we are taken in by straw man fallacies is that humans are prone to "demonize" the "other," including those who hold a moral or political position different from our own. It is easy to think bad things about those with whom we do not regularly interact. And it is easy to forget that people who are different from us are still people just like us in all the important respects. Many years ago, atheists were commonly thought of as highly immoral people, and stories about the horrible things that atheists did in secret circulated widely.
People believed that these strange "others" were capable of the most horrible savagery. After all, they may have reasoned, if you don't believe there is a God holding us accountable, why be moral? The Jewish philosopher Baruch Spinoza was an atheist who lived in the Netherlands in the 17th century. He was accused of all sorts of things that were commonly believed about atheists. But he was in fact as upstanding and moral as any person you could imagine. The people who knew Spinoza knew better, but how could so many people be so wrong about Spinoza? I suspect that part of the reason is that since at that time there were very few atheists (or at least very few people who actually admitted to being one), very few people ever knowingly encountered an atheist. Because of this, the stories about atheists could proliferate without being put in check by the facts. I suspect the same kind of phenomenon explains why certain kinds of straw man fallacies proliferate. If you are a conservative and mostly only interact with other conservatives, you might be prone to holding lots of false beliefs about liberals. And so maybe you are less prone to notice straw man fallacies targeted at liberals because the false beliefs you hold about them incline you to see the straw man fallacies as true.

Thinking with virtue means that when others explicitly deny a view, we should be slow to attribute that view to them. This does not mean we never do so; again, if someone is acting in bad faith and we think they are pretending to hold a view different from the one they assert, we might need to make clear their hidden agenda. But notice this is no longer a straw argument, if we're right in our suspicion. Nonetheless, we start from a place of being slow to do this, wanting to take people at face value first before assuming they don't believe what they are claiming.
A related practice in virtue is to be slow to attribute to others views that are clearly false, implausible, or lie at the extremes of human belief. Again, sometimes we have to do this because there are people who believe false, implausible, or extremist views. But we start from a place of charitably assuming rationality and truth in people, being slow to change our assumption. A really useful way to assist with this is to summarize the other person's views and arguments back to them before making a critique. If we stop ourselves and explain to someone else what we think they are arguing, it (a) gives them an opportunity to clarify first before we make objections, and (b) shows them we are acting in good faith and that they can trust us not to construct straw arguments out of what they said.

Red Herring

Figure 4.3.3: Even the goodest boiz get distracted easily. SQUIRREL! (Image Credit: Otto Speckter in Picture Fables)

A herring is a pungent fish, and it was especially so in the days before refrigeration. William Cobbett claimed to have used this fact as a boy to lure hounds and their unsuspecting hunters away from their intended prey. Cobbett wanted the rabbit for himself, so he dragged a herring on the ground to make a stench trail, drawing the hound away from the rabbit's hole. Interesting trick! But what does this have to do with reasoning well? Simple: one way that people reason improperly is by not staying on topic. If you start talking about one thing but end up talking about another thing, chances are either you or your conversation partner has committed the vice of a red herring. This is where you intentionally or unintentionally change the subject. Often it happens when a politician doesn't want to answer a question. "I don't want to talk about jobs, I want to talk about the brave men and women who serve in our nation's proud military…" It's a great way to get around having to answer a question.
A Red Herring is sometimes hard to distinguish from a Straw Figure, so let's focus on the key difference for a moment. In a straw figure, the offender is attacking an irrelevant argument instead of the actual argument of their opponent. In a red herring, the offender is introducing an irrelevant topic and discussing that instead of the topic at hand. We don't change topics in a straw figure; we just start talking about a different argument on the same topic.

From: Knachel, Matthew, "Fundamental Methods of Logic" (2017). Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1 Creative Commons Attribution 4.0 International License.

A fictional example can illustrate the technique. Consider Frank, who, after a hard day at work, heads to the tavern to unwind. He has far too much to drink and, unwisely, decides to drive home. Well, he's swerving all over the road, and he gets pulled over by the police. Let's suppose that Frank has been pulled over in a posh suburb where there's not a lot of crime. When the police officer tells him he's going to be arrested for drunk driving, Frank becomes belligerent: "Where do you get off? You're barely even real cops out here in the 'burbs. All you do is sit around all day and pull people over for speeding and stuff. Why don't you go investigate some real crimes? There's probably some unsolved murders in the inner city they could use some help with. Why do you have to bother a hard-working citizen like me who just wants to go home and go to bed?" Frank is committing the red herring fallacy (and not very subtly). The issue at hand is whether or not he deserves to be arrested for driving drunk. He clearly does. Frank is not comfortable arguing against that position on the merits. So he changes the subject—to one about which he feels like he can score some debating points.
He talks about the police out here in the suburbs, who, not having much serious crime to deal with, spend most of their time issuing traffic violations. Yes, maybe that's not as taxing a job as policing in the city. Sure, there are lots of serious crimes in other jurisdictions that go unsolved. But that's beside the point! It's a distraction from the real issue of whether Frank should get a DUI. Politicians use the red herring fallacy all the time. Consider a debate about Social Security—a retirement stipend paid to all workers by the federal government. Suppose a politician makes the following argument: We need to cut Social Security benefits, raise the retirement age, or both. As the baby boom generation reaches retirement age, the amount of money set aside for their benefits will not be enough to cover them while ensuring the same standard of living for future generations when they retire. The status quo will put enormous strains on the federal budget going forward, and we are already dealing with large, economically dangerous budget deficits now. We must reform Social Security. Now imagine an opponent of the proposed reforms offering the following reply: Social Security is a sacred trust, instituted during the Great Depression by FDR to ensure that no hard-working American would have to spend their retirement years in poverty. I stand by that principle. Every citizen deserves a dignified retirement. Social Security is a more important part of that than ever these days, since the downturn in the stock market has left many retirees with very little investment income to supplement government support. The second speaker makes some good points, but notice that they do not speak to the assertion made by the first: Social Security is economically unsustainable in its current form.
It's possible to address that point head on, either by making the case that in fact the economic problems are exaggerated or non-existent, or by making the case that a tax increase could fix the problems. The respondent does neither of those things, though; he changes the subject and talks about the importance of dignity in retirement. I'm sure he's more comfortable talking about that subject than the economic questions raised by the first speaker, but it's a distraction from that issue—a red herring. Perhaps the most blatant kind of red herring is evasive: used especially by politicians, this is the refusal to answer a direct question by changing the subject. Examples are almost too numerous to cite; to some degree, no politician ever answers difficult questions straightforwardly (there's an old axiom in politics, put nicely by Robert McNamara: "Never answer the question that is asked of you. Answer the question that you wish had been asked of you."). A particularly egregious example of this occurred in 2009 on CNN's Larry King Live. Michele Bachmann, Republican Congresswoman from Minnesota, was the guest. The topic was "birtherism," the (false) belief among some that Barack Obama was not in fact born in America and was therefore not constitutionally eligible for the presidency. After playing a clip of Senator Lindsey Graham (R, South Carolina) denouncing the myth and those who spread it, King asked Bachmann whether she agreed with Senator Graham. She responded thus: "You know, it's so interesting, this whole birther issue hasn't even been one that's ever been brought up to me by my constituents. They continually ask me, where's the jobs? That's what they want to know, where are the jobs?" Bachmann doesn't want to respond directly to the question. If she outright declares that the "birthers" are right, she looks crazy for endorsing a clearly false belief.
But if she denounces them, she alienates a lot of her potential voters who believe the falsehood. Tough bind. So she blatantly, and rather desperately, tries to change the subject. Jobs! Let's talk about those instead. Please?

Irrelevant Appeals

Any appeal to a factor, consideration, or reason that isn't relevant to the argument at hand, but is offered as a reason rather than as a mere distraction, is called an Irrelevant Appeal. (A Red Herring is a distraction; an Irrelevant Appeal is an irrelevant reason.) The premises aren't relevant to the truth or falsity of the conclusion because whether or not the conclusion is true doesn't depend at all on whether or not the premises are true. The core Irrelevant Appeals to know:

Appeal to Unqualified/False Authority
Appeal to Force
Appeal to Popularity/to the People/Bandwagon
Appeal to Consequences

Appeal to Unqualified Authority

Note that this is sometimes called the "Appeal to Authority," but we trust authorities all the time about lots of things, and we're right to do so. The fallacy occurs when we trust an authority on one subject (or perhaps someone who is not an authority on anything at all) to speak on another subject.

Figure 4.3.4: No matter the fact that you're my elder, Mr. Turkey, you're no expert on Quantum Physics! (Image Credit: Otto Speckter in Picture Fables)

From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

In a society like ours, we have to rely on authorities to get on in life. For example, the things I believe about electrons are not things that I have ever verified for myself. Rather, I have to rely on the testimony and authority of physicists to tell me what electrons are like. Likewise, when there is something wrong with my car, I have to rely on a mechanic (since I lack that expertise) to tell me what is wrong with it. Such is modern life.
So there is nothing wrong with needing to rely on authority figures in certain fields (people with the relevant expertise in that field)—it is inescapable. The problem comes when we invoke someone whose expertise is not relevant to the issue for which we are invoking it. For example, suppose that a group of doctors sign a petition to prohibit abortions, claiming that abortions are morally wrong. If Bob cites the fact that these doctors are against abortion as showing that abortion must be morally wrong, then Bob has committed the appeal to authority fallacy. The problem is that doctors are not authorities on what is morally right or wrong. Even if they are authorities on how the body works and how to perform certain procedures (such as abortion), it doesn't follow that they are authorities on whether or not these procedures should be performed—the ethical status of these procedures. It would be just as much an appeal to authority fallacy if Melissa were to argue that since some other group of doctors supported abortion, that shows that it must be morally acceptable. In either case, since doctors are not authorities on moral issues, their opinions on a moral issue like abortion are irrelevant. In general, an appeal to authority fallacy occurs when someone takes what an individual says as evidence for some claim, when that individual has no particular expertise in the relevant domain (even if they do have expertise in some other, unrelated, domain).

Appeal to Force

An appeal to force is an irrelevant appeal because it apparently argues that some proposition is true, but uses as justification for that claim a threat against the listener. If you don't believe this, then you will suffer bad consequences. But that's not a reason to believe the proposition. That's a reason to make yourself believe it or to act as if you believe it.
A good argument actually gives you reason to believe the conclusion, and an appeal to force does no such thing!

The following is from: Knachel, Matthew, "Fundamental Methods of Logic" (2017). Philosophy Faculty Books. 1. http://dc.uwm.edu/phil_facbooks/1 Creative Commons Attribution 4.0 International License.

Perhaps the least subtle of the fallacies is the appeal to force, in which you attempt to convince your interlocutor to believe something by threatening him. Threats pretty clearly distract one from the business of dispassionately appraising premises' support for conclusions, so it's natural to classify this technique as a Fallacy of Distraction. There are many examples of this technique throughout history. In totalitarian regimes, there are often severe consequences for those who don't toe the party line (see George Orwell's 1984 for a vivid, though fictional, depiction of the phenomenon). The Catholic Church used this technique during the infamous Spanish Inquisition: the goal was to get non-believers to accept Christianity; the method was to torture them until they did. An example from much more recent history: when it became clear in 2016 that Donald Trump would be the Republican nominee for president, despite the fact that many rank-and-file Republicans thought he would be a disaster, the Chairman of the Republican National Committee (allegedly) sent a message to staffers informing them that they could either support Trump or leave their jobs. Not a threat of physical force, but a threat of being fired; same technique. Again, the appeal to force is not usually subtle. But there is a very common, very effective debating technique that belongs under this heading, one that is a bit less overt than explicitly threatening someone who fails to share your opinions. It involves the sub-conscious, rather than conscious, perception of a threat.
Here's what you do: during the course of a debate, make yourself physically imposing; sit up in your chair, move closer to your opponent, use hand gestures, like pointing right in their face; cut them off in the middle of a sentence, shout them down, be angry and combative. If you do these things, you're likely to make your opponent very uncomfortable—physically and emotionally. They might start sweating a bit; their heart may beat a little faster. They'll get flustered and maybe trip over their words. They may lose their train of thought; winning points they may have made in the debate will come out wrong or not at all. You'll look like the more effective debater, and the audience's perception will be that you made the better argument. But you didn't. You came off better because your opponent was uncomfortable. The discomfort was not caused by an actual threat of violence; on a conscious level, they never believed you were going to attack them physically. But you behaved in a way that triggered, at the sub-conscious level, the types of physical/emotional reactions that occur in the presence of an actual physical threat. This is the more subtle version of the appeal to force. It's very effective and quite common (watch cable news talk shows and you'll see it; Bill O'Reilly is the master).

Ad Populum

Figure 4.3.5: I don't care how popular bear jousting is, it's just wrong! (Image Credit: Otto Speckter in Picture Fables)

Appeal to the People, Appeal to Popularity, Nose-Counting Fallacy, Bandwagon Fallacy, and argumentum ad populum are all names for the same thing: appealing to the popularity of a thing, idea, or practice in order to justify that thing, idea, or practice. In an argument, one appeals to the popularity of a conclusion and then uses that popularity as a basis for inferring that the conclusion is true. The popularity of a new smartphone or computer might be used to justify its status as the best available.
The popularity of a politician might be used to justify the claim that they should be President. The popularity of a person might be used to attempt to exonerate them from a crime or protect them from criticism. In each case, mere popularity doesn't mean we should believe something is good or worthy of special consideration. The popularity of belief in God might be used as evidence that God exists. After all, that many people can't be wrong, right? Alternatively, the popularity among scientists of belief in an atheistic universe might be used as evidence that God doesn't exist. After all, that many scientists can't be wrong, can they? In reality, the popularity of a belief doesn't give us reason to think that belief is true. After all, there have been lots of popular ideas in the past that turned out to be not only false, but morally abhorrent!

Appeal to Consequences

Appeal to consequences is yet another "irrelevant appeal" vice. Again, something that isn't relevant to the truth or falsity of the conclusion is appealed to in arguing for that conclusion. It won't help, though, since it's not relevant!

From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

The appeal to consequences fallacy is like the reverse of the genetic fallacy: whereas the genetic fallacy consists in the mistake of trying to assess the truth or reasonableness of an idea based on the origin of the idea, the appeal to consequences fallacy consists in the mistake of trying to assess the truth or reasonableness of an idea based on the (typically negative) consequences of accepting that idea. For example, suppose that the results of a study revealed that there are IQ differences between different races (this is a fictitious example; there is no such study that I know of).
In debating the results of this study, one researcher claims that if we were to accept these results, it would lead to increased racism in our society, which is not tolerable. Therefore, these results must not be right since if they were accepted, it would lead to increased racism. The researcher who responded in this way has committed the appeal to consequences fallacy. Again, we must assess the study on its own merits. If there is something wrong with the study, some flaw in its design, for example, then that would be a relevant criticism of the study. However, the fact that the results of the study, if widely circulated, would have a negative effect on society is not a reason for rejecting these results as false. The consequences of some idea (good or bad) are irrelevant to the truth or reasonableness of that idea.

Notice that the researchers, being convinced of the negative consequences of the study on society, might rationally choose not to publish the study (for fear of the negative consequences). This is totally fine and is not a fallacy. The fallacy consists not in choosing not to publish something that could have adverse consequences, but in claiming that the results themselves are undermined by the negative consequences they could have. The fact is, sometimes truth can have negative consequences and falsehoods can have positive consequences. This just goes to show that the consequences of an idea are irrelevant to the truth or reasonableness of an idea.

The Fallacy Fallacy

Perhaps the most important vice to be aware of goes by the name: the Fallacy Fallacy! Remember that most other textbooks call these vices "fallacies" and remember that at the beginning of the chapter we said that whether or not one's opponent argues virtuously is irrelevant to whether or not one's opponent is in fact correct in their conclusion.
They might believe the right thing for wrong reasons, or they might have good reasons that just don't come through clearly when they try to explain their beliefs. Here's an example of the fallacy fallacy:

Example 4.3.2

Person E: My opponent has argued that we should lower taxes because it would stimulate commerce. I think we should be focusing on the war we've been fighting at great cost instead of arguing about whether or not lower taxes would stimulate the economy.

Person F: Well clearly my opponent has never taken a logic and critical thinking class, because they have just committed a grievous sin against reasoning: the red herring fallacy. I, therefore, conclude that we should lower taxes.

Person E is indeed guilty of a red herring: they changed the subject to something irrelevant to the original topic. We started talking about an inference from "lowering taxes would stimulate the economy" to "we should lower taxes." But by the end of Person E's speech, we were talking about something different: a costly war our nation is fighting. The topic has changed. That being true, though, doesn't mean that Person E is wrong about their conclusion. If Person E wants to cut spending on wars or raise taxes to pay for them, the fact that they reasoned badly in one particular instance does not mean that their position is wrong. It may well be that we should raise taxes. Person E just isn't the best representative of the view.

Person F doesn't get my vote either, though, because they don't understand a basic truth of reasoning: just because an argument for a position is bad doesn't mean that position is wrongheaded or incorrect. The Fallacy Fallacy happens when someone uses the fact that a fallacy was committed to justify rejecting the conclusion of the fallacious argument. Avoid this sort of thinking. The fallacy fallacy might count as a vice of relevance, so we'll include it in that category for our purposes here.
Vices of Presumption

The vices in the previous section were all various examples of failing to make arguments that are relevant to the topic or argument at hand. The vices in this section have a similar unifying theme, in which something is being presumed in the premises that allows the conclusion to be inferred. That something—the presumption of the argument—is in each case not warranted. If we sneak in an assumption without actually justifying that assumption, then we're creating the illusion that we've given good reasons for what we believe, when in fact we have only presumed what we believe. Try not to presume!

Vices of presumption are all shortfalls in thinking which problematically presume their conclusion to be true in the setup or assumptions of the argument. A funny example of presumption (my classmates did this as a joke when I was in elementary school) is the complex question. For instance, you could ask "does your mom know that you do drugs?" You would be presuming that the recipient of the question does drugs, because you're only asking about their mother's knowledge. Other examples are "when are you going to stop stealing my food?" and "how do you justify to yourself that you lie to everyone all the time?" In each case, facts are being presumed that have not been agreed on as facts! This helps us get a sense of what presumption is and why it might be a problem.

Inequity in Evaluating Evidence

Consider someone who thinks whole milk ice cream is superior to frozen yogurt. Whenever someone presents evidence of the health benefits or excellent flavor of frozen yogurt, they scrutinize the evidence with great skepticism, looking for every little reason to reject the evidence. They demand near-scientific thresholds to make the case for frozen yogurt. But when it comes to evidence for their own love of whole milk ice cream, they are willing to accept even anecdotal testimony or hasty statistics as bolstering their argument.
What has gone wrong in this situation? The ice cream lover is someone who applies one standard of evidence to evidence against their position, and another standard of evidence to evidence that favors their position. This is a way of presuming one is right before the evidence has been heard, such that the evaluation of the evidence serves to make sure the "right" conclusion results. This is not how an intellectually honest and humble thinker approaches matters. They want the truth, even if it requires admitting their mistake. Thinkers disposed to virtue will be even-handed when assessing evidence, especially evidence supporting views different from their own. They will not favor evidence that supports their belief simply because it supports their belief, nor will they discount evidence that undermines their belief simply because it undermines it.

Inequity in evaluating evidence is typically an expression of a deeper character vice in humans: confirmation bias. We will learn more about confirmation bias in Chapter 8.1 "Confirmation Bias". Confirmation bias is a psychological tendency in humans: once we believe something, it is easier for us to keep believing it than to change our minds. Thus we evaluate evidence unequally because our brains are predisposed to hold on to what we already believe rather than give credence to possibilities that would require us to change our minds.

Also in Chapter 8.5 "Texas Sharpshooter" we'll learn about a fallacy of inductive reasoning nicknamed after a tall tale about a Texas sharpshooter. This fallacy is related to inequity in evaluating evidence, but the two vices are subtly different. Inequity in evaluating evidence is primarily about how we presume the evidence should be judged—evidence against us should be judged more stringently, while evidence in our favor should be judged more leniently.
As we'll see when we learn about the Texas sharpshooter fallacy, that vice more describes a pattern of (fallacious) inductive reasoning in which we start from our conclusion and select evidence that supports it (rather than virtuous induction, where we start from our evidence and infer a conclusion). A virtuous thinker allows new evidence to dictate how they understand what conclusion is the most reasonable one. But you should see all these vices as a family: they're different ways of not thinking well about evidence. They are also different ways of not displaying the virtues of curiosity and honesty.

False Dilemma/Black and White

From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

Suppose I were to argue as follows: Raising taxes on the wealthy will either hurt the economy or it will help it. But it won't help the economy. Therefore it will hurt the economy.

The standard form of this argument is:

1. Either raising taxes on the wealthy will hurt the economy or it will help it.
2. Raising taxes on the wealthy won't help the economy.
3. Therefore, raising taxes on the wealthy will hurt the economy.

This argument contains a fallacy called a "false dichotomy." A false dichotomy is simply a disjunction that does not exhaust all of the possible options. In this case, the problematic disjunction is the first premise: either raising the taxes on the wealthy will hurt the economy or it will help it. But these aren't the only options. Another option is that raising taxes on the wealthy will have no effect on the economy.

Notice that the argument above has the form of a disjunctive syllogism:

A ∨ B
∼A
∴ B

However, since the first premise presents two options as if they were the only two options, when in fact they aren't, the first premise is false and the argument fails. Notice that the form of the argument is perfectly good—the argument is valid.
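The validity of the disjunctive syllogism form can be checked mechanically with a brute-force truth table. The sketch below (Python; the function name is mine, not from the text) confirms that no assignment of truth values makes both premises true and the conclusion false, which is exactly what validity requires. The false dichotomy problem is a matter of soundness (a false first premise), not of form.

```python
from itertools import product

def disjunctive_syllogism_is_valid():
    """Brute-force truth-table check of A ∨ B, ∼A ∴ B:
    the form is valid iff no row makes both premises true
    while the conclusion is false."""
    for a, b in product([True, False], repeat=2):
        premise1 = a or b   # A ∨ B
        premise2 = not a    # ∼A
        conclusion = b      # B
        if premise1 and premise2 and not conclusion:
            return False    # counterexample row found: invalid
    return True

print(disjunctive_syllogism_is_valid())  # prints True: the form is valid
```

Validity guarantees only that true premises would force a true conclusion; if the disjunctive first premise is false, the argument can still lead us astray.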
The problem is that this argument isn't sound, because the first premise of the argument commits the false dichotomy fallacy. False dichotomies are commonly encountered in the context of a disjunctive syllogism or constructive dilemma (see chapter 2). In a speech made on April 5, 2004, President Bush made the following remarks about the causes of the Iraq war:

Saddam Hussein once again defied the demands of the world. And so I had a choice: Do I take the word of a madman, do I trust a person who had used weapons of mass destruction on his own people, plus people in the neighborhood, or do I take the steps necessary to defend the country? Given that choice, I will defend America every time.

The false dichotomy here is the claim that: Either I trust the word of a madman or I defend America (by going to war against Saddam Hussein's regime). The problem is that these aren't the only options. Other options include ongoing diplomacy and economic sanctions. Thus, even if it is true that Bush shouldn't have trusted the word of Hussein, it doesn't follow that the only other option is going to war against Hussein's regime. (Furthermore, it isn't clear in what sense this was needed to defend America.) That is a false dichotomy. As with all the previous informal fallacies we've considered, the false dichotomy fallacy requires an understanding of the concepts involved. Thus, we have to use our understanding of the world in order to assess whether a false dichotomy fallacy is being committed or not.

Begging the Question

From Matthew J. Van Cleave's Introduction to Logic and Critical Thinking, version 1.4, pp. 189-195. Creative Commons Attribution 4.0 International License.

Consider the following argument: Capital punishment is justified for crimes such as rape and murder because it is quite legitimate and appropriate for the state to put to death someone who has committed such heinous and inhuman acts.
The premise indicator "because" marks the premise and (derivatively) the conclusion of this argument. In standard form, the argument is this:

1. It is legitimate and appropriate for the state to put to death someone who commits rape or murder.
2. Therefore, capital punishment is justified for crimes such as rape and murder.

You should notice something peculiar about this argument: the premise is essentially the same claim as the conclusion. The only difference is that the premise spells out what capital punishment means (the state putting criminals to death) whereas the conclusion just refers to capital punishment by name, and the premise uses terms like "legitimate" and "appropriate" whereas the conclusion uses the related term, "justified." But these differences don't add up to any real differences in meaning. Thus, the premise is essentially saying the same thing as the conclusion. This is a problem: we want our premise to provide a reason for accepting the conclusion. But if the premise is the same claim as the conclusion, then it can't possibly provide a reason for accepting the conclusion!

Begging the question occurs when one (either explicitly or implicitly) assumes the truth of the conclusion in one or more of the premises. Begging the question is thus a kind of circular reasoning. One interesting feature of this fallacy is that formally there is nothing wrong with arguments of this form. Here is what I mean. Consider an argument that explicitly commits the fallacy of begging the question. For example:

1. Capital punishment is morally permissible.
2. Therefore, capital punishment is morally permissible.

Now, apply any method of assessing validity to this argument and you will see that it is valid by any method.
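This formal point can be verified mechanically. Here is a minimal truth-table validity checker (Python; illustrative only, not part of the original text): when the single premise is the very same sentence as the conclusion, no row can make the premise true and the conclusion false, so the question-begging argument counts as valid.

```python
from itertools import product

def is_valid(premises, conclusion, num_atoms):
    """Truth-table test of validity: an argument is valid iff no
    assignment of truth values makes every premise true while the
    conclusion is false."""
    for row in product([True, False], repeat=num_atoms):
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # counterexample row found: invalid
    return True

# Let the atom P stand for "Capital punishment is morally permissible."
P = lambda row: row[0]

# Premise and conclusion are the same statement: valid, yet question-begging.
print(is_valid([P], P, num_atoms=1))  # prints True
```

The checker would happily report this argument as valid; what it cannot detect is that the argument gives no independent reason for the conclusion, which is why begging the question is an informal rather than formal fallacy.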
If we use the informal test (by trying to imagine that the premises are true while the conclusion is false), then the argument passes the test, since any time the premise is true, the conclusion will have to be true as well (since it is the exact same statement). Likewise, the argument is valid by our formal test of validity, truth tables. But while this argument is technically valid, it is still a really bad argument. Why? Because the point of giving an argument in the first place is to provide some reason for thinking the conclusion is true for those who don't already accept the conclusion. But if one doesn't already accept the conclusion, then simply restating the conclusion in a different way isn't going to convince them. Rather, a good argument will provide some reason for accepting the conclusion that is sufficiently independent of that conclusion itself. Begging the question utterly fails to do this, and this is why it counts as an informal fallacy, even though there is absolutely nothing wrong with the argument formally.

Figure 4.3.6: C'mon dog, you should trust me, my friend Rosco will tell you I'm trustworthy. I can vouch for Rosco. He's a good guy. (Image Credit: Otto Speckter in Picture Fables)

Whether or not an argument begs the question is not always an easy matter to sort out. As with all informal fallacies, detecting it requires a careful understanding of the meaning of the statements involved in the argument.
Here is an example of an argument where it is not as clear whether there is a fallacy of begging the question: Christian belief is warranted because according to Christianity there exists a being called "the Holy Spirit" which reliably guides Christians towards the truth regarding the central claims of Christianity.1

One might think that there is a kind of circularity (or begging the question) involved in this argument, since the argument appears to assume the truth of Christianity in justifying the claim that Christianity is true. But whether or not this argument really does beg the question is something on which there is much debate within the sub-field of philosophy called epistemology ("study of knowledge"). The philosopher Alvin Plantinga argues persuasively that the argument does not beg the question, but being able to assess that argument takes years of patient study in the field of epistemology (not to mention a careful engagement with Plantinga's work). As this example illustrates, the issue of whether an argument begs the question requires us to draw on our general knowledge of the world. This is the mark of an informal, rather than formal, fallacy.

Burden of Proof Shifting

Sometimes we have a responsibility to offer evidence or proof for a claim we believe in. If I believe in dragons, then most people would think I'm responsible for proving that they exist if I expect anyone else to join me in believing in them. Alternatively, if I believe that drivers must obey the rules of the road, most people wouldn't think I'd have to offer any justification for that belief if I brought it up in normal conversation. Sometimes we have the burden of proof, but other times we do not. Here's a conversation:

Aisha: I think an alien spacecraft came and kidnapped my dog last night.
Rashid: What makes you think that?
Aisha: Well, can you prove that they didn't?

Figure 4.3.7: (Image Credit: Otto Speckter in Picture Fables)

Something has gone wrong here, right?
Aisha is making a sort of mistake: she's making an outlandish claim, but refuses to defend it or offer evidence or reasons for believing it. The vice of Burden Shifting is when one decides that someone else must prove them wrong when in reality they are the person with the burden of proof: one should prove oneself right! As a general rule, whenever someone makes a positive claim about the world (like "aliens kidnapped my dog"), they should offer evidence or reason for believing that claim. When one makes a negative claim (like "aliens didn't kidnap your dog"), they usually don't seem to be in the same position. It seems like they don't have to prove the negative claim unless there's already some good reason to believe the positive claim. This rule isn't perfect, since sometimes a belief is so commonsense that it need not be proved, but it seems to be a good general norm for where the burden of proof lies.

Figure 4.3.8: (Credit: Phil Stilwell CC-License)

Alternatively, as a general rule the least plausible claim has the highest burden of proof. Since the plausibility of a claim depends on all of our other beliefs, though, this is sometimes hard to adjudicate. That is fancy speak for the following idea: whoever is making the wilder claim, or the claim that we're less likely to believe right away, is the one with the burden of proof. This is a matter, though, of the norms of the culture we live in. In a racist society, egalitarian ideals are the ones which are "less plausible" to the elites, so they would demand more proof from someone making a claim that to us is obviously correct: that human beings are essentially equal regardless of their race. This presents a bit of a problem for those who want to use "plausibility" to decide who has the burden of proof. Suffice it to say, for now, that this is simply complex and difficult to figure out.

Quassim Cassam, Vice Epistemology, The Monist, Vol. 99, No.
2, Virtues (April, 2016), pp. 159-180

This page titled 4.3: Some Intellectual Vices is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Andrew Lavin via source content that was edited to the style and standards of the LibreTexts platform.

4.E: Chapter Four (Exercises)

Exercise 4.E.1: Vices of Relevance

Identify the vice of relevance being illustrated by each argument. Remember that "no vice present" is always an option. Try to decide whether this is virtuous reasoning or not.

A. You've attacked me instead of my argument, so clearly you don't know how to reason and I'm correct after all.
B. You've claimed that it's wrong to use animals and so we should all become vegans, so you're claiming that all living things are things we can't eat? Does that mean that we can't eat plants even? We have to eat something!!!
C. She's from Kentucky, so clearly she doesn't know a danged thing about sailing!
D. Look, you might as well just admit you're wrong. Everyone will shun you if you don't.
E. My opponent has argued that the death penalty is costly and so should be abolished, but she also supports cutting taxes! We can't cut taxes in the middle of a budget crisis.
F. You've mentioned before that you reject the tenets of capitalism, but you went to a public school, so you're not exactly an impartial judge of whether or not socialism is a good thing!

Exercise 4.E.2: Vices of Presumption

Identify the vice of presumption being illustrated by each argument. Remember that "no vice present" is always an option. Try to decide whether this is virtuous reasoning or not.

A. I believe that the government is poisoning us through breakfast cereals. If you want me to eat that, you're going to have to prove to me that it's safe.
B. There's either no reason to go to space, or we should put billions into technologies which allow us to go into space. All or nothing.
C.
You're either a Raiders fan or you're not a Raiders fan. Those are the only two options.
D. Look, if it's bad to steal things, then it's wrong to take food that doesn't belong to you. It is bad to steal things, so it follows that you shouldn't take food from that vendor at the market.
E. The Republicans haven't championed a single non-cynical or moral policy in decades. I invite you to come up with a single example.
F. We need to bolster our space travel infrastructure, because we need to have easy and cheap access to space in the next forty years. Look, there's going to be an increased need for space travel in the near future, so we'll need cheaper access to space. A space elevator would fit the bill, and we should build one since we need to have more robust space travel infrastructure.
G. Nobody likes you. I asked everyone on the playground and not a single person said they wanted to be friends with you.
H. There will always be income inequality since there will always be rich and poor no matter what we do.
I. We shouldn't invade Iran since we shouldn't pre-emptively attack a relatively non-violent sovereign nation.

Exercise 4.E.3: General Vices

Try to decide whether this is virtuous reasoning or not. If not, try to diagnose what specifically is going wrong in your own words. Then, identify the vice illustrated by each argument (can be vices of relevance or presumption). Remember that "no vice present" is always an option—it could be an example of basically virtuous reasoning!

A. You can't be a half-hearted vegetarian. You have to choose sides: either you're a vegan and an abolitionist or you're a murderer and an enslaver.
B. Eating meat is wrong because it's wrong to consume the flesh of another sentient (feeling, experiencing) being.
C. Written on a park table in Portland: "My bus costs $2.50. Does that mean I own it now?"

4.E.1 https://human.libretexts.org/@go/page/223857

D. I saw some young folks at the park yesterday and they seemed to be on drugs.
Isn't it terrible what is happening to our youth these days?
E. I'm pretty sure we shouldn't go to war, so that evidence that Assad is using nerve gas against his own citizens must be met with extreme suspicion.
F. We have the lowest prices since we always have lower prices than our competitors. You can be sure we always have lower prices than our competitors because we have the lowest prices available.
G. Andrew Lavin is the best textbook author because he wrote the best textbook and the author of the best textbook must be the best textbook author.
H. I won't be manipulated into believing that Area 51 isn't a storage facility for alien artifacts and specimens; you'll have to prove it to me using evidence and reasons.
I. I wouldn't want you to lose the next election, and I would know how to make that happen, so I expect you'll be agreeing with our policy proposal.
J. I want to go to North Korea on vacation. You'll have to prove to me it's a bad idea if you don't want me to go.
K. You want to watch the new Transformers movie? You know Michael Bay directed it, right? It's going to be terrible.
L. That cheese comes from Turkey, where they don't require pasteurization. I wouldn't recommend eating it while pregnant since listeria and other bacterial infections can be deadly to a developing fetus.
M. I understand you're frustrated with my habits, but you have some bad habits too, you know?
N. I understand that you don't want me to go on this vacation, and I respect that, but remember when you went on that vacation to visit your nephew last summer? That was a good time, right? I'm so glad you got to go on that vacation. Good times.
O. That's a slippery slope. I don't think your position can possibly be correct with reasoning like that behind it!
P. I understand you have a history of mental illness, so tell me, how are we to trust your reasoning when you argue based on evidence and reasons that the Democratic Party is hopelessly corrupt and must be dissolved?
Q.
I don't know. Lots of people seem pretty convinced that marriage is a love-based bond between two consenting adults, so it seems like that's what marriage is.
R. Bieber can't be the best musician. He's from Canada! They don't make good music in Canada.
S. Alanis Morissette didn't understand the concept of irony when she wrote "Ironic". She's clearly not the most astute student of the linguistic arts.
T. That car won't run well. It was built in Russia. Cars from Russia don't tend to run well.
U. Which color do you want your car to be? Black or Gray?
V. If everyone starts believing in the tooth fairy, we'll have folks ripping out their teeth for money, so we can't encourage people to start believing in the tooth fairy.
W. Miley Cyrus said that D'Addario strings are the best guitar strings. She's a famous guitar player and musician, so I suppose D'Addario strings are really the best.
X. Rambo wasn't the greatest movie of all time. Did you know that Sylvester Stallone had a role in creating the characters and story for Creed? It was Ryan Coogler's breakout film and he later went on to direct Black Panther.
Y. Veronica: I think I saw something out of the corner of my eye right now that may have been a ghost. Hypatia: Are you saying there was definitely a ghost over there? Do you have any idea how implausible that is?
Z. Franz: There may be some reason to suspect that the threat from global warming has been overblown. Valeria: Are you kidding me? You're a climate denier? All of the evidence points to the fact that humans have played the decisive role in warming the global climate. I can't believe you'd deny that!
AA. You've seen a ghost? That's pretty spooky. But you take anti-depressants, right? So I guess you're not that reliable [note: there is no known connection between anti-depressants and hallucinations].
BB.
The CEO of Exxon Mobil has recently admitted that because of the overwhelming consensus among climate experts, we have to admit that global warming is real. But obviously they're not an impartial person, so we can reject their position. They're probably doing this just for good press.
CC. My biology teacher says global warming was caused by humans burning fossil fuels and their deforestation practices. She's a scientist, so she must be right about this.

This page titled 4.E: Chapter Four (Exercises) is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Andrew Lavin via source content that was edited to the style and standards of the LibreTexts platform.
