Lecture 1 - Introduction & Substance Dualism


Summary

This lecture introduces the philosophy of mind, focusing on substance dualism and the mind-body problem. It explores conceptual analysis, the relationship between the manifest and scientific world views and various positions on how the mind fits into the physical world. The document also covers Descartes' substance dualism and the interaction problem.


Lecture 1 - Introduction & Substance Dualism

Philosophy for psychologists is:

1- Conceptual analysis
Ø Manifest world view = everyday world view (Wilfrid Sellars)
Ø Scientific world view = provided by scientific research
Ø What do you mean by concept X? E.g. mind or intelligence
Ø Philosophy can tell us about the relation between the manifest & scientific world views = philosophers ask how everyday, subjective experience (manifest) fits with the more objective, scientific understanding (scientific)

2- Conceptual clarification
Ø Like conceptual analysis: ask what someone means by their concept of e.g. memory
Ø BUT goes further: science tells us more about the concept in question
Ø E.g. the distinction between short- and long-term memory

3- Science of validity
Ø Scientists often use concepts like causality without questioning them
Ø BUT are these concepts well applied? Is this a valid use of the concepts? Can we draw these inferences? These concepts must be applied correctly and validly in scientific contexts.
Ø E.g. David Hume, Judea Pearl

4- Search for truth
Ø Conception of philosophy in Ancient Greece
Ø Sophists: trained to win an argument
Ø Socrates: rejection of this practice

5- Changing perspectives
Ø High-school conception of philosophy:
1. Philosophy can train us to take others' perspectives into account and to change our own perspective
2. Training for a systematic change in perspective
3. A necessary tool to take part in debates

Philosophy is all of this:
- Philosophers want to know what is meant by our concepts -> we want to apply them correctly in the relevant contexts and to draw valid inferences
- It often involves an exercise in changing our perspective and considering that of others
- Sometimes this is done with the aim of approximating objective truth

Philosophy is not:
1. Just chatting
2. Fact-free
3. Requiring radical skepticism
4. Requiring pure relativism

Philosophy for psychologists
Ø Academic critical thinking = engaging in meta-science by asking questions about your field (psychology)
Ø Ethical questions: am I allowed to do this?
Ø Philosophy (theory) of science - reflecting on the scientific methods you use: is this supported by my evidence?
Ø Foundational concepts used in the discipline (philosophical thinking helps to clarify and question concepts)

Thinking critically about foundational concepts:
Ø What is the mind/psyche?
Ø What is consciousness/the conscious mind?
Ø How does consciousness fit in the physical world?
Ø What is the role of the body in consciousness and cognition?

THE MIND-BODY PROBLEM
We are conscious beings with a mental life (experiences, thoughts, emotions) and physical creatures with bodies. The mental and physical aspects are deeply intertwined and influence each other.
Example: a mental desire for food (hunger) motivates you to eat; you eat and your body physically responds: you feel full.
Philosophy of mind: mind and body seem to work together… but how?
=> Philosophers have been thinking about this for a while: we are studying their suggestions.

Question 1: What is the conscious mind?
THE CONSCIOUS MIND (an initial taxonomy or classification):
1. Conscious experiences
Ø Thomas Nagel: what-it-is-likeness
Ø There is something it is like to be a bat (echolocation)
Ø Consider our experiences of taste, color, …
Ø Qualia (sing.: quale): all conscious experiences have a qualitative aspect
2. Cognitive states
Ø States with intentionality = aboutness
Ø Propositional attitudes (PAs) = mental states that involve thinking about a proposition: belief, desire, knowledge…
Ø E.g. John believes that it is raining
Ø Propositional attitudes (PAs) = discrete entities
3. Emotions
Ø Mental states that have: 1. qualitative character AND 2. intentionality
Ø E.g. being mad at a bad driver

The mind-body problem: how does the conscious mind, constituted by these states, fit in with the body and in the physical world?
SUBPROBLEMS:
1. How do conscious experiences fit in the physical world?
2. How do cognitive states fit in the physical world?
3. How do emotions fit in the physical world?
è The 3 subproblems can be reduced to two problems:
1. How do qualia fit in the physical world?
2. How does intentionality fit in the physical world?

OVERVIEW of the positions discussed in Part 1:
- SUBSTANCE DUALISM = mind and body are independent from each other
- IDEALISM = the physical world depends upon the mental world
- BEHAVIORISM = the mind is behavior
- REDUCTIONISM / IDENTITY THEORY = mental states are identical to brain states
- ELIMINATIVISM = there is no mind
- FUNCTIONALISM = mental states are realized by brain states
- CONNECTIONISM = mental states are states in a neural network
- EMBODIED, EMBEDDED & EXTENDED MIND = there is more mind than the brain

1- Substance Dualism (SD)
René Descartes: 17th-century French philosopher, proponent of SUBSTANCE DUALISM, a rationalist saying the source of knowledge is your thinking.

Question 2: Can the mind function separately from the brain?
Substance = something that can exist on its own (≠ properties)
Ø E.g. a wooden ball
Substance dualism: there are 2 substances:
1. Res cogitans = thinking substance; the essential property of res cogitans is thinking
2. Res extensa = extended (physical) substance; the essential property of res extensa is having extension
è To be extended is to take up a place in space
è Movement is the result of collisions between extended objects

How does Descartes arrive at this view? = 2 methods

Descartes' 1st method: destructive method: radical doubt
- Mathematics as the prototype of science: a foundation to build on
- What is a foundation you cannot doubt?
Ø Teachers?
Ø Observation (the senses)?
What is something you cannot doubt?
- Are you awake?
- Do you have a body?
- Does 2+2=4?
- What if there is a malign demon trying to fool me?
ð Cogito ergo sum = I think, therefore I am.
The only thing you cannot doubt is your own existence, because the act of doubting proves you exist.

Descartes' 2nd method: constructive method: reliance on clear & distinct insights
Descartes is… but what is he?
è He is a res cogitans, a thinking substance,
è with the essential property of thinking.
How does Descartes know this?
è This is perceived clearly and distinctly.
Clear & distinct truths:
Ø God exists
Ø God is good
Ø Consequently, God does not deceive me (or at least not all the time)
Ø Given my clear and distinct perceptions, I am also a body
è A res extensa: a physical substance…
è with the essential property of being extended

The interaction problem: how do the mind and the body interact?
"Given that the soul of a human being is only a thinking substance, how can it affect the body, in order to bring about voluntary actions? The question arises because it seems that how a thing moves depends solely on how much it is pushed, the manner in which it is pushed, or the surface-texture and shape of the thing that pushes it." Letter of Princess Elisabeth of Bohemia to Descartes, 06.05.1643

Ø Causal Closure (CC) of the physical world: no energy (= no mass) gets into OR out of the system
Ø Every physical event has a physical cause: e.g. a ball that moves was pushed by something physical
Ø If CC applies: non-physical (mental) causes are impossible to understand
Ø The "Patrick Swayze" problem (BOOK): how can a non-physical substance collide with a physical substance?

Descartes' answer: he doesn't provide satisfactory answers
Ø On the one hand: we are (a) clearly two substances
Ø On the other hand: the relationship between mind and body is not like a sailor on a ship, where the mind (the sailor) could be independent of the body (the ship)
ð Instead, (b) mind and body are closely connected
ð We cannot think of (a) and (b) together: (a) and (b) are inconsistent

Descartes' answer: mind and brain are connected via the pineal gland => this is not a satisfactory explanation!
Another suggestion by Descartes: maybe it's God!
Two ideas that developed this suggestion: occasionalism and parallelism
Ø Perhaps God takes care of the interaction
è God could have made us in a way that stepping on a nail would not cause us pain, but a pleasant experience (e.g. the taste of chocolate)
Ø How does this work: occasionalism & parallelism

Occasionalism: God as the true cause of events
=> It seems to me that my thought causes me to act
Ø It seems to me that my thought about raising my arm causes my arm to rise
Ø INSTEAD, my thought is the occasion for God to raise my arm
Ø Every mental event becomes an opportunity for God to intervene in the world
Ø E.g. you are hungry -> now God can push you to the pizza place

Parallelism (defended by Spinoza)
Ø Two parallel series of events: mental & physical
Ø They run in synchrony like two clocks (they have been made that way)
Ø God is the organizer behind the order of the world: he ensures that these series of events occur in synchrony

BUT occasionalism & parallelism face the same problem: how does God do it?
One problem (how do mind and body interact?) is replaced by another problem (how does God intervene?)
Ø Our minds are no longer interacting with our bodies, because it is God that mediates

A fatal challenge for Substance Dualism:
Princess Elisabeth questioned substance dualism & replied to Descartes:
"I must admit that it would be easier for me to attribute matter and extension to the soul, than to attribute to an immaterial being the capacity to move and be moved by a body."
è She is suggesting accepting a kind of monism: conceiving the mind as the same material thing as the body - maybe she is right
è Our normal conception of a soul is that of an extended thing

SUMMARY lecture 1
Question: how does the mind fit in the physical world?
Dualism: the mind can function separately from the brain
ð Dualism is not a promising position
The mind-body debate is concerned with the questions:
Ø how the mind fits into the material world
Ø how our consciousness interacts with the physical world
Philosophy:
Ø is concerned with these questions
Ø has a less scientific approach
Ø is important
Ø can tell us about the relation between different world views
Ø focuses on the validity aspects of science: whether concepts & methods are well applied

What is consciousness?
1. CONSCIOUS EXPERIENCE: what-it-is-likeness/qualia, e.g. what it's like to be Mark Lee
2. COGNITION: propositional attitudes; cognitive states have intentionality (= aboutness), e.g. the belief that it is raining
3. EMOTION: qualitative character and intentionality, e.g. being mad at a bad driver
=> Therefore, we only have two mind-body problems:
1. How do qualia fit into the physical world?
2. How does intentionality fit into the physical world?
The (hard) problem of consciousness:
Ø Mind and body are two fundamentally different concepts, but they are heavily intertwined (= connected)
è Asking about consciousness is tricky

The position of the dualist:
Ø Believes in a physical world that is separate from the mind
Ø Interaction problem: dualists can't explain how mind and body interact, yet they do not deny interaction (Descartes tries the pineal gland, ends up relying on God)
Ø Two substances: 1. res cogitans = thinking substance; 2. res extensa = physical substance
Ø René Descartes: radical doubt -> clear & distinct truths
Ø Takes the mind but not science seriously: Mind: YES – Science: NO

Lecture 2 - IDEALISM AND BEHAVIOURISM

IDEALISM
- Defends a form of monism
Question 3: Is there only mind?
Substance = something that can exist on its own
The interaction problem faced by substance dualism: how can a non-physical substance interact with a physical substance?

2- Berkeley's monism
G. Berkeley is a monist; he defends a form of monism: he says everything is mental.
On the interaction problem: there is no interaction problem, because there is only mind.
There is only mind = there is no material substance, only properties
E.g. we don't see the color blue walking around - we only see blue things
To be is to be perceived: Esse est percipi = to exist is to be perceived
è Material things couldn't exist if there were no mind that perceives them
è This shows a dependence between the mind and the body: the material world cannot exist without the mind ≠ Descartes

Dialogue between Philonous (the man who loves the mind) & Hylas (the matter-man)
è Philonous defends the thesis that there is no material substance
è Note: he doesn't deny the existence of matter
è For the material world to exist, we need a mind to perceive it

Berkeley's argument
DESCARTES = RATIONALIST, while BERKELEY (like HUME & LOCKE) IS AN EMPIRICIST: KNOWLEDGE COMES FROM EXPERIENCE
Empiricism: knowledge comes from sensory experience (e.g. observation)
What is observable?
Substances: are NOT observable
Properties: ARE observable
E.g. I don't see blue walking around; if I were a colorless object, you couldn't see me
E.g. perceiving orange juice: we can see its properties: temperature, position, size, freshness, taste (sour, sweet)…

John Locke proposed a distinction between primary and secondary properties:
1. Primary properties = independent from an observer, e.g. temperature, position, size...
2. Secondary properties = NOT independent from an observer, e.g. warm/cold, taste, …
Berkeley denied that there are primary properties
ð We must accept that the mind is the fundamental substance
E.g. if a tree falls and we don't hear it, did it fall?
E.g. temperature depends on your mind to be perceived

Colors as secondary properties
High-school physics: "colors are the wavelengths of light" = we learn that colors are primary properties
Consider 2 colors:
Color 1 (bluish purple) = 390 nm is one wavelength
Color 2 (purple) = 380 nm is another wavelength
è Berkeley says they are secondary properties
Is this how we experience these colors?
Reductio ad absurdum = reduction to absurdity
Color 1 = 390 nm - Color 2 = 380 nm
è We can perceive the two colors in the same way
è If we can show that we experience 390 nm as color 2, it follows that color 2 is both 380 and 390 nm
è BUT that would be absurd: color 2 cannot be both 380 and 390 nm.
è Berkeley says: what really matters is how we perceive the color; this is what we get from our senses = so colors are secondary properties

The illusion of consciousness
"The physical world certainly contains electromagnetic radiation, air pressure waves, and chemicals dissolved in air or water, but not a single sound or smell or taste exists without the emergent properties of our conscious brain. Our conscious world is a grand illusion!"
è The same case can be made about all primary properties (e.g. size, figure, motion)
è The physical world depends on the mind to exist

Colors as secondary properties: let's accept that colors only exist because there is an observer.
BUT aren't there other primary properties or qualities? = properties that exist independently of the observer
è E.g. height, size
è John Locke, Galileo Galilei, Robert Boyle would not accept this idea

Primary properties according to Berkeley: ALL properties are dependent on the observer
è Whether something is big depends on the observer: e.g. for an insect
è In consequence: size is also a secondary property

Berkeley's error
Berkeley: whether something is big or small does depend on the observer, meaning size = secondary property; the material world depends on the kind of mind that is observing it.
But there is an error here: whether something is 2 cm or 4 m tall does not depend on me: its height does not depend on the observer!
E.g. the temperature of water is independent of any observer:
after a sauna -> cold water feels good
after a cold walk -> cold water feels super cold
but the water and its temperature didn't change

Persistence of the physical world:
Berkeley: the existence of the physical world depends on the existence of a mind -> the physical cannot exist without the mind
BUT what happens if there is no observer?
E.g. what happens when we close our fridge? Does our food just disappear?
=> There is no one observing it, so there must be an observer who guarantees that the physical world exists = GOD
Berkeley needs God to stop the physical world from disappearing
è This doesn't leave us any closer to an explanation!
è We substitute one problem (how does the mind interact with the physical world?) with another problem (how does the physical world persist?)
è Idealism doesn't take science seriously either …
è It's difficult to make idealism compatible with science or research

BEHAVIORISM
Question 4: Is there only behavior?
If psychology is to be scientific, then it cannot accept non-observable mental entities nor use mentalistic terms that refer to such entities.
What can we observe? Input: stimuli - Output: behavior
Ø We cannot see what happens inside the black box
Ø The behaviorist cannot say anything about this
There are 2 versions of behaviorism:
1. Philosophical behaviorism
2. Psychological behaviorism (methodological behaviorism)
è Psychologists and philosophers have different motives for defending the view

Psychological behaviorism
"Psychology as the behaviorist views it is a purely objective experimental branch of natural science. Its theoretical goal is the prediction and control of behavior. Introspection forms no essential part of its methods, nor is the scientific value of its data dependent upon the readiness with which they lend themselves to interpretation in terms of consciousness." (Watson, 1913).
"Human thought is human behavior" (Skinner)

Methodological reasons
Psychological behaviorism = also known as methodological behaviorism
As a science, psychology should be objective.
Its method should be aimed at observing & documenting stimulus-response correlations.
You do not talk about the black box.

Watson's Little Albert experiment
Conditioned association between stimulus & response:
1. Out of Albert's sight: produce a loud noise that scares him
2. Show him an animal of which he is not afraid
3. Show him the animal while producing the loud noise
= Result: Albert is afraid of the animal even in the absence of the noise
è Watson's conclusion: Albert's fear is the result of conditioning
Emotions can thus be understood in terms of stimulus-response.

What about the mind?
Behaviorists: interested in revising the scope, methods & goals of psychology
Not interested in metaphysical questions - e.g. what is in the mind?
However, they should be committed to the claim that there is no mind beyond behavior.
Behaviorists face a dilemma:
"Either a behaviorist psychology studied at least the fundamental aspects of the human psyche (of mental life) or it did not. If it did not study the fundamentals of the human psyche, then it could hardly claim to be introducing a new way of doing psychology."
è Behaviorists claim: there is no mind beyond behavior
è If you are doing psychology, then we have some questions that need to be answered.

A refusal to talk about the mind…
"But having once been a part of this major school, I confess it was not really what it seemed. Behaviorism was only a refusal to talk about consciousness. Nobody really believed he was not conscious." (Jaynes, 1977)
è Behaviorists are doing psychology in a scientific way, but the problem is that there is a lot in our mental life that they cannot describe in this way.

Philosophical behaviorism = also known as analytical/linguistic behaviorism
The motivation for philosophical behaviorism: philosophers have logical and linguistic reasons to defend behaviorism.
To make sense of this, we need to go back to the problems with dualism and idealism, and to the ideas of logical positivism.

Back to dualism and idealism (and their problems)
Dualism is highly unscientific, but it gives a very good analysis of what the mind is: it can very easily explain that the mind has intentionality, that our thoughts are about something, and that we have experiences.
But we just attached those things to what the mind is.
Regardless of this, Cartesian views of the mind marked the way we understand our mental states as inner things and inner states = Descartes' huge influence (introspection).
Dualism takes the mind very seriously… but it is conceptually incoherent and goes against what scientific data show.
è Something similar is the case with idealism.
è The problem: we struggle to understand mentalistic talk.

Gilbert Ryle (classified as a philosophical behaviorist) argues against dualism; he calls it the doctrine of the ghost in the machine:
è For Descartes, animals are mindless machines: it's difficult to attribute a mind to them because we have different experiences
è This is not the case with human beings
è BUT no one can observe your mind but yourself
è If this is so, then we cannot establish whether anyone (animal or human) has a mind!
è Attributing a mind to someone does not explain anything
è Our understanding of the mind is like that of a ghost in the machine (= body): the phrase "ghost in the machine" is a metaphor coined by the philosopher Gilbert Ryle in 1949 to describe the idea that the mind and body are separate entities: the mind (= "ghost") exists independently of the physical body (= "machine"). It implies that our mental processes are like a ghost inhabiting a body, functioning separately from the physical mechanisms of the body.

The mind as behavioral dispositions
The distinction between conscious & non-conscious phenomena is based on the way we behave.
Then why not study behavior instead of an immaterial mind?
The mind should be analyzed as behavioral dispositions.
Asking where the mind is = making a category mistake.

A category mistake - example
Imagine someone asks to see Tilburg University.
You show them the library, the lecture halls, the canteen, etc.
What if they then ask: "I see the buildings, but where is the university?"
It would seem like they haven't understood what the university is.
= The same with the mind: asking where the mind is / what it is independently of our behavior = a pseudo-problem
Mistake: categorizing something (university, mind…) in the wrong category
è It generates strange questions
è The mind is all this behavior

Dispositions
Definition: a behavioral pattern that something displays under certain circumstances
Non-psychological example: being soluble
Psychological example: the hypocrite => someone who believes something but expresses the opposite - BUT the belief is a state in the black box!
-> E.g. someone who says they are not religious, but goes to church every Sunday
-> They have inconsistent behavior: actions vs. claims

Back to the mind-body problem
For Ryle, the mind-body problem is a pseudo-problem that arises from the category mistake of thinking that the mind is something beyond behavioral dispositions.
Logical positivism & behaviorism
Ryle's ideas are similar to those of the logical positivists: "While definitely not a card-carrying member of logical positivism, Ryle was very sympathetic to their movement."
Logical positivism:
- Aimed at distinguishing between meaningful & meaningless statements
- Scientific statements are meaningful
- Committed to empiricism: meaningful statements are related to observations
ð Consequence: psychology cannot be subjective

The scientific world conception: the Vienna Circle (1929 manifesto)
"The attempt of behaviorist psychology to grasp the psychic through the behavior of bodies is, in its principal attitude, close to the scientific world-conception."
"All sentences of psychology are about physical processes, namely, about the physical behavior of humans and other animals."

Philosophical behaviorism
"In its strongest and most straightforward form, philosophical behaviorism claims that any sentence about a mental state can be paraphrased, without loss of meaning, into a long and complex sentence about what observable behavior would result if the person in question were in this, that, or the other observable circumstance."
In bullet points:
- Philosophical behaviorism focuses on how mental states are linked to observable behavior
- It argues that any statement about a mental state (like "feeling happy" or "being angry") can be reworded into a description of the behavior that would occur in that state
- The reworded sentence describes what behavior would be seen if the person experienced a specific mental state (e.g., crying when sad, smiling when happy)
- No meaning is lost in this paraphrasing; it's just an alternative way of expressing the same idea using observable actions

Pattern of behavior: what behavior follows from mental state X?
Ex: sentence about a mental state: "Ann wants to go on a holiday"
è What is it equivalent to?
è If we offered Ann a trip, she would say "yes"
è If we gave her a travel brochure, she would take it
ð The desire to take a trip = equivalent to certain behavioral dispositions
Ex: sentence about a mental state: "John has a toothache"
è What is it equivalent to?
è If we offered John painkillers, he would accept them
è If we offered to take him to the dentist, he would accept
è If he had a free hand, he would rub his cheek

The problems of behaviorism
Ryle's own problem: what is the thinker doing?
2 main problems:
1. It is impossible to define the disposition: the description will be too long & might leave things out
2. Hurt or pain will be left out
ð In both cases the description is not equivalent in meaning.
The 2nd problem is fatal for behaviorism:
ð Takes science seriously: YES - takes the mind seriously: NO
ð From a specific point of view it is far better than SD or idealism, so philosophers were reluctant to abandon it

SUMMARY – lecture 2
How does the mind fit into the physical world?
SD and idealism: don't argue properly (don't take science seriously)
Behaviorism: doesn't take the mind seriously
SD, idealism, and behaviorism are not very promising positions
Dualism & idealism: take the mind very seriously BUT don't argue properly (= aren't compatible with the scientific method)
Behaviorism: takes science very seriously BUT doesn't take the mind seriously
The idealist position in the mind-body debate:
è Since we only have one substance (mental), we technically have no interaction problem
The behaviourist position in the mind-body debate:
è Since we only have one substance (physical), we technically have no interaction problem; it is a pseudo-problem that arises from a category mistake

Knowledge Clip: Idealism
René Descartes: believed humans consist of two independent substances: the body and the mind.
Interaction problem: how these substances interact was unclear, leading to the failure of dualism. A new theory was needed.
Berkeley's Idealism:
- Claim: there is only one substance, not two.
- Mind-body relationship: the physical world may exist, but it depends on the mind for its existence.
- "To be is to be perceived": the material world depends on the mental world for existence.
è Example: objects only exist because they are perceived by a mind.
John Locke: differentiated between primary and secondary qualities:
1. Primary qualities: properties an object has independently of the observer (e.g., temperature).
2. Secondary qualities: properties that depend on how the observer perceives them (e.g., a temperature felt as hot or cold).
Berkeley's view: agreed with Locke's distinction but argued that what Locke considered "primary qualities" (e.g., size, shape) are actually secondary qualities. Without an observer, objects like water are neither hot nor cold, and a die is neither big nor small.
Criticism of Berkeley's idealism:
1. Improper reasoning: the size of a die doesn't depend on the observer. It's independent of perception (e.g., 1 cm³).
2. Absurd conclusion: if "to be is to be perceived", what happens to objects when not observed?
è Berkeley's answer: God perceives everything, even when no one else does, meaning God must exist to maintain the existence of unperceived objects.
Conclusion: idealism fails to provide a coherent explanation of the mind-body relationship. A different theory is needed to address the interaction between mind and body.

Knowledge Clip: Behaviorism
Relationship between mind and body: behaviorism offers a scientific approach to the mind-body problem, emphasizing observable behavior.
Previous views didn't take science seriously:
o Substance Dualism (Descartes): implied mind and body couldn't interact, which contradicts observable interaction. Did not align with scientific understanding.
o Berkeley's Idealism: monistic view ("to be is to be perceived"), meaning the physical world depends on the mental world. Did not consider science effectively.
Behaviorism's scientific approach: science should be objective, avoiding subjective concepts.
Gilbert Ryle (behaviorist): the mind is a collection of behavioral dispositions (tendencies to act in specific situations).
o Example: a sugar cube dissolves when placed in water: no "mysterious" dissolving power, just a dispositional property.
o Similarly, mental states are reflected in observable behaviors in certain contexts. Belief in an additional "mind" is a category error.
Category error: a category error happens when you incorrectly assign something to the wrong category.
o Example: believing the "mind" is separate from behaviors is like assuming the university is separate from the campus.
Behaviorism's explanation of mental states: if behaviorism is right, we can rewrite sentences containing mental states into descriptions of observable behaviors.
o Example: "Mary has a headache" → "Mary sits still, accepts a painkiller, asks for the stereo to be turned off."
Problems with behaviorism:
1. Hard to describe all behavioral dispositions: it's impossible to list all behaviors to fully explain mental states. Paraphrasing is incomplete.
o Example: describing Mary's behavior in all scenarios (shopping, walking slowly) doesn't fully capture the mental state.
2. Leaves out the mind: paraphrasing mental states omits the subjective experience. For instance, a headache's pain is central, not just the behaviors related to it.
o Question: is it a category error to view pain as distinct from behaviors?
3. Thinking doesn't fit into behavioral terms: thinking can occur without observable behavior, so how can it be reduced to behaviors? Behaviorism struggled to explain this.
Conclusion: while behaviorism took science seriously by focusing on observable behavior, it left the mind out of the theory. We need a theory that considers both the mind and science.

Lecture 3 – The identity theory
(overview of the positions)

1. Another type of monism
Idealism's call for monism: to solve the interaction problem, Berkeley turns to monism: there is only one substance; for idealism there is only mind (res cogitans). This fails because Berkeley needs God to keep the physical world from disappearing.
But there is another kind of monism: materialism or physicalism (using the terms interchangeably).
The only substance is the physical: there are only extended physical things
è The only things that fundamentally exist are material things
Advantages:
+ It avoids the interaction problem
+ You might not need God for the persistence of the physical
è The mind-body identity theory (MBIT) is one version of materialism

Discussing the materialist view of the mind:
2. Mind-body supervenience
The general assumption of all materialist theories.
Jaegwon Kim develops the notion of supervenience.
Minimal demand on materialism: commitment to mind-body supervenience.
There are various approaches to what supervenience is – 2 ways of understanding the thesis:
1. Identity
2. Realization (lecture 4)
Supervenience in general:
Supervenience relation thesis: one set of properties determines another set of properties
Supervenience base: the properties of the Lego bricks
Supervenient properties: the properties of the tower
> We are trying to establish the relation between the mind and the body
è A tower built with Lego bricks: the shape the tower gets (the properties of the tower) depends on the properties of the Lego bricks
è Bricks of another color would give a tower with different properties

The Star Trek assumption
Consider the teletransporter case: a machine that allows you to beam yourself from location A to location B by disassembling & reassembling you.
You are destroyed in one location & rebuilt, physically as you are, in another location (lecture 11).
BUT what if something goes wrong and two versions of yourself appear? Will these two versions have the same mental states?
Because the physical determines the mental: if two things are physically the same, then they are mentally the same.
Mind-body supervenience tells us: any two things exactly alike in all physical properties cannot differ in mental properties (they must be mentally alike too)
è Physical indiscernibility entails psychological indiscernibility – if I cannot distinguish your brains, I would not be able to distinguish your mental states
è Supervene on = depends on: A supervenes on B = A depends on B
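A minimal way to write the supervenience thesis formally (a sketch; reading "exactly alike in all physical properties" as quantification over a set PHYS of physical properties and MENT of mental properties is the usual way the slogan is formalized, not something stated in the lecture):

\forall x\,\forall y\;\big[\,(\forall P \in \mathrm{PHYS}:\ P(x) \leftrightarrow P(y)) \;\rightarrow\; (\forall M \in \mathrm{MENT}:\ M(x) \leftrightarrow M(y))\,\big]

In words: physical indiscernibility entails psychological indiscernibility, which is exactly the Star Trek point above.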
Question 5: Is the conscious mind part of the brain?
MBIT = mind-body identity theory: it tells us that the mind is identical to certain brain states.

3. What is identity?
We can understand identity in various ways:
1. Personal identity = who am I? -> we leave this concept aside, because this is not how identity is used in MBIT
2. Qualitative identity = two things share their properties
-> identical twins
-> two qualitatively similar brands of coffee
-> this is also NOT how identity is used in MBIT
3. Quantitative identity = two "things" are actually one and the same
-> my neighbor is the winner of today's lottery
-> this is how identity is used in MBIT – the identity type we are interested in

Leibniz's law: x and y are identical IF all properties of x are properties of y, and all properties of y are properties of x.

So, what is the mind identical to?
According to the MBIT, the mind (mental states) is identical to certain brain states.
For Ullin T. Place (a defender of the MBIT), this is an empirical matter
è something to be discovered
è as things stand, a hypothesis
ð To clarify the nature of the identity between mental states and certain brain states, we need to make some distinctions.

Contingent vs necessary truths (technical matters of MBIT)
A contingent truth can be denied without the denial resulting in a contradiction (cf. the questions on p. 82).
What happens if we deny these two claims?
1. "A triangle has three sides" – its denial is a contradiction
è because it is wrong if you say a triangle has 4 sides = necessary truth
è Something that is necessarily true could not have been otherwise
2. "Traffic signs are triangular" – its denial is not a contradiction
è there are circular traffic signs = contingent truth
è Something that is contingently true could have been otherwise

A priori vs a posteriori truths
A priori: the truth of a claim can be established by mere thinking
è Descartes…
A posteriori: the truth of a claim can only be established by looking at the world
è E.g. empirical research = knowledge we have by looking at the world, by discovering things empirically

The relation between these distinctions
A priori truths were traditionally thought to be necessary, and a posteriori truths contingent. BUT a posteriori truths are not necessarily contingent!
The philosopher Saul Kripke proposes thinking of this relation differently: there are some a posteriori truths that are necessary.
This means that if we have discovered that a = b, then this is a necessary truth.
Is water necessarily H2O? It could not have been otherwise.
Note: this is a conditional*
* Kripke's idea of necessary a posteriori truths means that some truths, though discovered through experience, are necessary once discovered. For example, we learned through science that water is H2O, and this is not just a coincidence. Once we know water is H2O, it must always be H2O: it couldn't have been something else. This necessity comes from what we've discovered about the world, not from the definition of "water" or "H2O". So it's a conditional: if we know water is H2O, then it's necessarily true that water is H2O in all cases.

Examples:
1. Venus
Ø We used to think that Venus was two stars: the morning star & the evening star
Ø We discovered that they are one & the same: just Venus
Ø So even though we discovered this a posteriori, it could not have been otherwise (it is necessary) - learned empirically
2. Water and H2O
Ø If water had had a different molecular structure, it might not have the features it has
Ø Saying that water can also be XYZ (= have a different molecular structure) involves changing the way we use language

Back to the MBIT
The identity between mental states & certain brain states is like the cases of Venus and water.
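Behind "once we discover that a = b, it is necessary that a = b" lies a standard short argument (the necessity of identity). The modal rendering below is a sketch of the usual reconstruction attributed to Kripke, not something spelled out in the lecture:

1.\; a = b \quad \text{(discovered a posteriori, e.g. water} = \text{H}_2\text{O)}
2.\; \Box\,(a = a) \quad \text{(everything is necessarily self-identical)}
3.\; \Box\,(a = b) \quad \text{(from 1 and 2, substituting } b \text{ for one occurrence of } a \text{ in 2)}

The necessity in step 3 is conditional on the empirical discovery in step 1, which is what the asterisked note above means by "this is a conditional".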
For the MBIT, saying that mental states could have been something other than certain brain states involves changing the language.
Identity claims in MBIT are a posteriori NECESSARY truths.

4. Mind-Body Identity Theory (MBIT)
MBIT in a nutshell
Claim: all mental states are identical to certain brain states.
Recall, there are different kinds of mental states (lecture 1):
1. Conscious experiences
2. Cognitive states (e.g. propositional attitudes)
3. Emotions
But what are these mental states? They are states of the brain: e.g., propositional attitudes are structured states of the central nervous system (see Armstrong).
Note: the relation between mental states & brain states isn't symmetrical: not every brain state is a mental state, BUT every mental state is a certain brain state.
MBIT is a version of reductive materialism.

Is MBIT committed to the elimination of mental states?
No. Mental states are reduced to brain states, just as talk about water is reduced to talk about H2O.
If brain states are real, then mental states are real too.
MBIT is not an eliminativist* view of the mind.
* An eliminativist view is one that says some of the concepts we use to understand the mind (like "thoughts" or "feelings") don't actually refer to anything real, and we should stop using them. It's like saying, "There's no such thing as a 'belief'; we should get rid of that idea entirely." So, when it's said that MBIT is not an eliminativist view, it means that MBIT still accepts the idea that things like thoughts, feelings, and beliefs are real and useful for understanding the mind.
If we can reduce a mental state to a brain state, then we can identify the mental state with the brain state.
We haven't eliminated the mental state: if we say that A = B and A exists, then B must exist.
Nonetheless, terms that refer to mental states are no longer necessary: e.g. pain
è If we have the description of the brain state (C-fibers), we can eliminate the mental term that we used (pain)
è By eliminating the term, we don't eliminate the state

Eliminativism & Reductionism
Eliminativism:
> Eliminativism eliminates mental states
> Consequently, we don't need mental terms either! - because there is nothing more than whatever happens in the brain
Reductionism:
> A mental state is reduced to or analyzed as a brain state
> It can also lead to the elimination of mental states: if it's not possible to reduce a mental state to a brain state, then it doesn't exist
> An empirical matter

Witches & viruses – elimination:
Imagine that we used to believe that the cause of some diseases was witches.
Witches were part of our ontology.
BUT we discovered that viruses cause these diseases
ð Witches are then eliminated from our ontology.

Water – reduction:
We discovered that water is H2O.
Instead of talking about water, we talk about its molecular structure.
In this case water is not eliminated but reduced - it is still part of our ontology.

Eliminativism and reductionism (Churchland):
"(T)hese are the end points of a smooth spectrum of possible outcomes, between which there are mixed cases of partial elimination and partial reduction. Our empirical research (…) can tell us where on that spectrum our own case will fall." - Churchland

Another feature of MBIT: type physicalism
- MBIT is typically interpreted as type physicalism.
- Types of mental states are identical with (certain) types of brain states.
- Every token (individual) state of a type of mental state is identical to a token of a type of brain state.
Let's assume we've discovered that pain is the firing of C-fibers
ð Every token state of pain is identical to a token of C-fiber firing
ð Ex: the experience of seeing blue = activity X in brain area V4
ð So we can bring together two sorts of classifications
Other examples of the type/token distinction: type: psychology students – tokens: each one of us; particular samples of a substance (e.g. water) are tokens of that type of substance; a particular dog is a token of the type dog.

5. Arguments in favor of MBIT
Mind-brain correlations
Consider the empirical data (this is not an argument; it is data):
we have already found many correlations between types of mental states and types of brain states
è E.g. activation in V4 is correlated with experiences of color
3 arguments that explain the strong correlation between V4 and the experience of color:
1. The simplest explanation
Ø How can we explain the correlation between V4 & the experience of color?
Ø In the same way we explain the correlation between the appearance of Superman & the appearance of Clark Kent: they must be the same person!
2. Ockham's razor
Ø We should follow a principle of parsimony* in our explanations
Ø Realism and non-reductionism are ontologically not parsimonious
Ø It's more parsimonious to accept that mental states are just certain brain states
* Principle of parsimony = simplicity: choosing the explanation on which fewer (or less weird) entities exist (cf. the witches example)
3. Causal role analysis
Ø The case of genes:
1. Genes play a certain role in a causal chain = a structure that carries information = it has a function
2. Eventually we realized that DNA plays that role
3. So we can say that genes are identical to DNA: genes = DNA
Ø The same analogy applies to mental states:
1. Mental states play a certain role in a causal chain: pain alerts us about damage
2. If we discover that a certain brain state fulfills that role, we can say pain is identical to that brain state
3. We've discovered that brain state X plays that role: pain = brain state X

A strong case FOR MBIT: Paul Churchland says that each of these 3 arguments taken separately is not convincing, BUT together they make a strong case in favor of MBIT.

6. Arguments AGAINST MBIT
Back to identity: how could we show that MBIT fails?
Consider Leibniz's law: x and y are identical IF:
1. all properties of x are properties of y
2. all properties of y are properties of x
If x has a property that y doesn't have (or vice versa), x is not identical to y – so this gives us an argument against MBIT.

A strategy against MBIT
If brain states have a property that mental states don't have (or vice versa), then brain states are not identical to mental states.
Strategy against MBIT: find differentiating properties = properties that aren't shared (= the first limitation of MBIT!)
Some candidates:
1. Epistemic properties = properties that concern what we know about something and how we come to know it
Are there epistemic properties that mental states have but brain states don't?
E.g. pain (known directly) vs neural activity (known indirectly)
BUT water and H2O have different epistemic properties too,
so epistemic properties cannot be differentiating properties.
2. Spatial properties (e.g. location)
= Brain states have a location, but mental states don't have a location (maybe that is one of the properties they don't share).
While some theories accept the latter, MBIT denies it.
So we cannot use this argument, because the reason given against MBIT is just an affirmation of an alternative theory.
What is at stake is precisely whether mental states are such that they have properties like location.
3. Semantic properties = properties related to the intentionality, meaning, or content of a state
Mental states have content, meaning & intentionality, but brain states don't.
But as in the case of spatial properties, we cannot use this argument:
what is at stake is whether mental states are such that they have semantic properties.

Second limitation of MBIT: Multiple realizability (MR) (the argument that is going to break MBIT)
= Something is multiply realizable if it can exist as different kinds of things
è E.g. clocks = 2 time-measuring devices: a sand clock & a wristwatch
è E.g. liquidity
Water, liquidity & multiple realizability:
Water = H2O -> water has to be H2O
But liquidity is multiply realizable – it is not realized only by H2O
The multiple realizability thesis tells us:
ð Something is multiply realizable when it can exist as different kinds of things or in systems of multiple kinds
ð It is possible to establish that there is a realization relation between one kind of properties (mental) and properties of different kinds (e.g. organic material and non-organic material)
So, is the mind multiply realizable? Is it like liquidity or like water?

Who is in pain?
When humans are in pain, there is a certain kind of activity in the neocortex.
MBIT: pain is activity in the neocortex.
How about animals that don't have a neocortex? -> E.g. fish
If MBIT is right, then fish cannot be in pain. BUT is this plausible? That's not likely!
If fish experience pain, and they have a different brain from ours, then pain must be MR.
Do fish experience pain?
Fish behave as if responding to pain (but Descartes…)
Evolutionary considerations:
> the experience of pain is useful (alertness, avoidant behaviors)
> it is unlikely that pain evolved only recently, with us
=> It is plausible to think that pain, like other mental states, is multiply realizable.

A response from MBIT
It's not about the differences, but about the similarities: whatever is similar between these brains, that is pain.
Can we find those similarities?
MBIT can weaken the view: it's not about type identity, but about token identity.
But this makes it unclear why something belongs to a certain category!
In consequence, a science of the mind becomes untenable – doing a science of the mind based on this weakened idea is difficult.

SUMMARY
+ MBIT is the first theory that takes science seriously AND the mind seriously.
This is a theory that we can accept as scientists who want a scientific theory about the mind.
- Even though many arguments against MBIT can be rejected, the view has one big problem: the mind seems to be MR.
How does the mind fit into the physical world?
Materialism avoids the interaction problem by stating that the only substance in the world is physical (monism).
X is said to supervene on Y if and only if some difference in Y is necessary for any difference in X to be possible.
Mind-body supervenience: any two things exactly alike physically cannot differ mentally (the Star Trek assumption).
The identity theory
è a type of materialism
The position of the identity theorist in the mind-body debate:
è Focus on quantitative identity (things are one and the same)
è The mind is identical to certain brain states
è Kripke: identity statements are necessary truths
è Reductive materialism: reduces mental states to brain states
è Identity A=B (brain states = mental states): A exists (we measure brain activation), so B must exist (the mental state)
è Typically interpreted as type physicalism: a token mental state is identical to a token brain state
è Highly empirical, takes the mind seriously, parsimonious
è We have some problems with Leibniz's law, since there seem to be differentiating properties
è Multiple realizability also poses a problem (fish have no neocortex, yet plausibly feel pain)

The identity theory: knowledge clip
Identity Theory = Reductive Materialism
Ø Focuses on explaining the mind-body connection by grounding mental states in physical brain states.
Ø Rejects unscientific views like Descartes' dualism, Berkeley's idealism, and behaviorism.
Core beliefs of the identity theory:
Ø Physicalism: claims that only physical things exist, and any mental states are entirely determined by the physical world (the brain).
Ø Mental states and brain states: mental states (like thoughts, feelings, pain) are identical to specific brain states. This means that mental states do not exist separately from the brain's physical processes.
What is meant by "identity"?
Ø Numerical (quantitative) identity:
o This refers to one thing being the same as itself under two different descriptions or labels. In this context, a mental state (e.g., a thought or pain) is numerically identical to a brain state: they are the same thing described in different terms.
Ø Not qualitative identity:
o Qualitative identity refers to things that are the same in quality but numerically distinct (e.g., two cups of the same coffee, or two brands of coffee with the same taste). The identity theory, however, focuses on numerical identity: one thing (the mental state) is the same as another "thing" (the brain state).
Arguments for the identity theory:
Ø Jack Smart's argument:
o Smart argues that it is more parsimonious (simpler) to postulate that mental states are identical to brain states than to invent separate "mental substances" or divine beings (like God) to explain mental phenomena.
o Following William of Ockham's razor: never postulate unnecessary entities. It's simpler to say mental states are brain states than to say mental states are separate entities.
Reductive materialism:
Ø The identity theory is a version of reductive materialism, meaning it reduces mental states to physical brain states.
Ø Mental states exist, but they are reduced to, or identical to, brain states.
Misconception:
Ø Reductionism ≠ Eliminativism:
o Reductionism in the identity theory doesn't mean we eliminate mental states; instead, we explain them in terms of brain states.
o Eliminativism would only occur if a mental state couldn't be reduced to any physical process, which is not what the identity theory claims.
Conclusion: the identity theory is the first theory that seriously addresses both the mind and science. It argues that mental states are not separate from physical states but are identical to them, grounding the mind firmly in the physical world.

Lecture 4 – Functionalism

Hilary Putnam is among the philosophers who formulated functionalism: "We could be made of Swiss cheese, and it wouldn't matter." (Putnam, 1975)
Recall multiple realizability: something is multiply realizable when it can exist as different kinds of things.
MR is often regarded as fatal for MBIT.
Against MBIT: functionalism argues that mental states are more like liquidity than they are like water: they are MR.
- Negative role of MR: against MBIT
+ Positive role of MR: in favor of functionalism

Question 6: Can machines have conscious minds?
Mental states are computational: Hilary Putnam draws on Alan Turing to formulate his view.
Turing argues that machines (like human minds) can think
è he develops the notion of computation based on that idea
= Computation is using rules to manipulate symbols
Consider:
è 2 + 3 = a simple computation
è How can we instruct (program) a machine to perform this computation?
è We would give it the appropriate instructions (Turing)
è How would this look?
Turing imagines a machine that is capable of performing all kinds of computations.

The Turing Machine
A Turing Machine has the following parts:
1. A tape divided into separate compartments, which can move from left to right and from right to left
2. A machine head that can read or write
3. A set of internal states (configurations): q1 – qn
4. An alphabet b1 – bm: each compartment on the tape can hold just one symbol of the alphabet
è (Athena said that we just need to know the names of the parts, not the details of what they do.)
At any given time, the TM is in a certain internal state (q1 – qn) and the head reads a cell of the tape.
What the TM will do depends on: a. the internal state & b. the symbol on the tape.
A Turing Machine:
1. can replace the symbol by another (or the same) symbol
2. can move the tape to the left or the right (unless the computation has finished, in which case it stops)
3. goes to a new internal state (which can be the same as the previous state)
Any mathematical computation can be performed by a Turing Machine. If it's computational, the TM can do it. This is rule-based manipulation of symbols.
It does not matter what the TM is made of, as long as it does what the TM's rule table specifies
ð This is the multiple realizability of computation (symbol manipulation).

The mind is like a Turing Machine
"What it is for an organism, or system, to have a psychology, that is, what it is for that organism to have mentality – is for it to realize an appropriate Turing Machine. [...] In short, our brain is our mind because it is a computer, not because it is the kind of organic, biological structure it is." (Kim, 1996, p. 91)
The mind is multiply realizable: it can exist as a physical computer or as a brain. THINKING IS COMPUTING.
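To make the rule-based picture concrete, here is a minimal sketch of a Turing machine in Python. The function name, the tape contents, and the example rule table (a unary "add one": append an extra '1') are illustrative assumptions, not part of the lecture:

```python
# Minimal Turing machine sketch.
# A rule table maps (state, symbol) -> (new_symbol, move, new_state).
# The concrete table below (unary increment) is an invented example.

def run_turing_machine(tape, table, state="q0", halt="halt", blank="_"):
    tape = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)             # read the current cell
        new_symbol, move, state = table[(state, symbol)]
        tape[head] = new_symbol                    # write
        head += 1 if move == "R" else -1           # move the head
    left, right = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(left, right + 1))

# Example: append one '1' to a block of 1s (unary "add one").
table = {
    ("q0", "1"): ("1", "R", "q0"),    # scan right over the 1s
    ("q0", "_"): ("1", "R", "halt"),  # write an extra 1 at the end, stop
}

print(run_turing_machine("111", table))  # -> "1111"
```

Nothing here depends on what the machine is made of: any system that implements the same rule table computes the same function, which is the multiple-realizability point.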
Mental holism
So, how does functionalism think of the mind and of different kinds of mental states?
Functionalism considers the mind as a whole.
Mental states are states that play a causal role in relation to: (1) input; (2) output; and (3) other mental states
è Sensory input -> belief -> desire -> action
è Example, pain: input: tissue damage (something damages your foot) -> you enter a state of alert and stress -> output: you raise your foot because it feels bad ("ouch")

Mental states as discrete entities
For functionalism, mental states are discrete.
We are interested, then, in the relations between the different mental states.
These relations result in behavior and in new mental states.
This assumes that mental states (propositional attitudes) are "discrete, reoccurring, entities with causal powers" (Sprevak)
è Functionalism tells us to focus on the relations between those states
è E.g. I form the belief that it is cloudy; emotion: I form some negative emotions about that (winter); reasoning: I might choose not to wear warm clothes; desire: I have a desire for sun, I want to travel to a warm country …

1. Functionalism with respect to what?
Qualia and intentionality:
è Functionalism about qualia (conscious experiences): No (we come back to this in Part II)
è Functionalism about intentionality: Yes

Intentionality and folk psychology
Jerry Fodor combines computationalism (a new idea) with an old idea (recall lecture 1): intentionality.
Some mental states are about something; these mental states reflect what they are about.
Folk psychology (FP) = the theory we use to predict and explain the behavior of others
ð Other people have mental states with intentionality.
Folk psychology (FP):
è Works: we successfully predict the behavior of others by attributing mental states
è How can we explain this?
è According to Jerry Fodor: FP works because these mental states are real!

2. Classical AI
Machines that think
How is it possible to determine whether some machine has a psychology or a mind?
The Turing test: a player must determine, based on a conversation, whether the other party is a person or a machine. The machine passes the test if the participant cannot tell whether it is a machine or a person.

The first robots - classical AI
If thinking is computing, we can implement this idea in a real robot: the robot called Shakey.
"In the late sixties and early seventies, the blocks world became a popular domain for AI research. The key to success was to represent the state of the world completely and explicitly." (Brooks)

Google Maps
Classical AI is still widely used, e.g. in path-finding algorithms used in navigation programs like Google Maps (Edsger W. Dijkstra's algorithm). You can see an explanation of one version of the algorithm here: https://www.youtube.com/watch?v=GazC3A4OQTE (WATCH)
è We solve the task of finding the best path (e.g. the shortest route) by manipulating symbolic information following a set of rules or instructions (see the sketch after the definitions below).

Functionalism, classical AI and cognitivism:
Functionalism = the theory that defines mental states in terms of their causal role between input, output, and other mental states
Cognitivism/machine functionalism = accepts functionalism and adds that cognition is computation (i.e., the processing of symbolic representations)
Classical AI = committed to cognitivism, implements these ideas in artificial systems
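As an illustration of this classical, rule-based style of computation, here is a minimal sketch of Dijkstra's shortest-path idea in Python. The function name and the toy road network are invented for the example; real navigation systems use far more elaborate variants:

```python
import heapq

# Minimal Dijkstra sketch: repeatedly expand the cheapest unvisited node.
# The road network below is a made-up toy example.
def shortest_path_cost(graph, start, goal):
    queue = [(0, start)]          # (cost so far, node)
    best = {start: 0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue              # stale queue entry
        for neighbor, weight in graph[node]:
            new_cost = cost + weight
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return None                   # goal unreachable

roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(shortest_path_cost(roads, "A", "D"))  # -> 4 (A -> B -> C -> D)
```

The point for the lecture is only that the route is found purely by rule-governed manipulation of symbols (node names and numbers), which is the kind of processing classical AI takes cognition to be.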
3. Functionalism vs behaviourism
Sophisticated behaviourism
è Is functionalism a sophisticated version of behaviorism (as was sometimes claimed)?
è Answer: No - functionalism answers what's in the black box.
Mental realism
For functionalism, mental states are internal states with causal powers.
The behaviorist does not want to talk about internal states!
è Remember, this was one of their problems.
Functionalism accepts mental realism.

4. Functionalism vs MBIT
The model of MBIT - reduction:
1. What is the causal role analysis of X?
2. Whatever fulfils that role is X.
è This is MBIT's interpretation of supervenience.
The model of functionalism - reductive explanation:
1. What is the causal role analysis of X?
2. Whatever fulfils that role realizes X.
è This is an alternative interpretation of supervenience.
The case of pain
What is the function of pain?
è It is a response to tissue damage
è How this function is realized is irrelevant

5. Evaluation of functionalism
Problems with functionalism:
1. The frame problem
If something changes in the environment, Shakey needs to check all its representations (its map) again to see what relevant changes have occurred.
2. The filing cabinet method is biologically unrealistic
Machine functionalism follows the filing cabinet method: you put all the information into the system (the TM) before you let the system interact with the world.
This method is biologically unrealistic: we are not born that way.
3. The Chinese room
According to John Searle, we can distinguish between:
1. Weak AI: the computer or AI is a tool in the study of the mind
2. Strong AI: the appropriately programmed computer really is a mind
"Partisans of strong AI claim that (…) the machine is not only simulating a human ability but also: 1. that the machine can literally be said to understand the story and provide the answers to questions, and 2. that the machine and its program do explain the human ability to understand the story and answer questions about it" (Searle)
The person inside the room passes the Chinese Turing test.
Does the person inside the Chinese room understand Chinese?
She passes the test only because she has access to the rules of the language
è BUT this doesn't mean that she understands these sentences.
For Searle: syntax does not lead to semantics.
Functionalism doesn't help us explain how we learn meaning; it just gives us symbols – we humans do learn meaning – and that is another reason why functionalism is biologically unrealistic.
Serial processing
Information is processed in a serial fashion.
There is a low tolerance to damage.
It is also biologically unrealistic.

SUMMARY
Functionalism is NOT biologically realistic.
Functionalism canNOT explain semantics.
How does the mind fit into the physical world?
Serial processing
Information is processed in a serial fashion
There is a low tolerance to damage
It is also biologically unrealistic

SUMMARY
Functionalism is NOT biologically realistic
Functionalism can NOT explain semantics
How does the mind fit into the physical world?
è Functionalism is an alternative to MBIT that takes MR seriously
è Now we need a theory that does that, without the problems of functionalism

The position of the functionalist in the mind-body debate:
è MR (multiple realizability) counts in favour of functionalism
è Better than behaviourism: functionalism tells us what's in the black box
è Mental states = internal states with causal powers
è Accepts mental realism
è Does not eliminate PAs
è MBIT: reduction (whatever fulfils the causal role of X is X) -> functionalism: reductive explanation (whatever fulfils the causal role of X realizes X)
è Not about qualia but about intentionality
è Fodor combines FP and intentionality => other people have mental states that have intentionality
è Problems: serial processing and the frame problem (Shakey the robot) make it biologically unrealistic
è Chinese room argument: applying a set of rules that had to be programmed (written down somewhere) first doesn't prove that the machine understands

è Our quest: How does the mind fit in the physical world?
è Dualism & idealism: don't take science seriously.
è Behaviorism: doesn't take the mind seriously.
è MBIT: mental states are certain brain states.
è But the mind might be MR!
è Functionalism: takes the mind, science & MR seriously.
è But it is biologically not realistic & the Chinese Room Argument counts against it

Functionalism Knowledge Clip - Notes:
Mind-Body Relationship: The debate centers on how the mind and body are connected.
Physicalism: Only position in the mind-body debate that integrates both science and the mind.
Types of Physicalism: There are different versions of physicalism.
o Identity Theory (1st version): Mental states = certain brain states.
§ Problem: Multiple Realizability – mental states can be realized in different ways.
§ Example: A clock can have different materials or forms but still serve the same function (telling time).
§ Mental states are similar: pain might be associated with brain activity (neocortex), but animals without a neocortex (e.g., fish) still show signs of pain.
§ Conclusion: Pain can't be solely identified with activity in the neocortex.
Functionalism: Mental states are defined by the functions they serve, not by a specific brain state
Example: Pain is caused by tissue damage (like stepping on a nail), and its function is to signal discomfort (e.g., "aua")
Functions are multiply realizable, meaning they can be achieved in different ways or with different structures (see the sketch below)
Proponent of Functionalism: Hilary Putnam
We are Turing machines, similar to the computers described by Alan Turing
Turing machines can, in theory, process all imaginable input
If Putnam's theory is correct, understanding the mind is similar to understanding a Turing machine
Problems with Functionalism:
1. John Searle's Chinese Room Argument:
§ Symbol manipulation based on rules doesn't lead to true understanding. Syntax alone doesn't produce semantics (meaning).
2. Information Processing Model:
§ Functionalism assumes serial processing, where every step must be perfect. This is biologically unrealistic, as small errors (e.g., a neuron dying) don't always disrupt overall function
ð Conclusion: Functionalism's strict model isn't biologically realistic for modeling the mind
ð Need for a More Biological Theory: If we want a physicalist theory that accounts for multiple realizability, we need a more biologically accurate model of the mind.
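The clip's point about multiple realizability, referred to above, can be illustrated with a short sketch (the class and method names are invented for illustration): two very different "realizers" occupy the same causal role between input and output.

```python
# Two different realizers of the same causal role: detect tissue damage
# (input), enter an internal alarm state, and produce avoidance behaviour
# (output). Functionalism identifies the mental state with the role itself.

class NeocortexPain:
    """One way the pain role might be realized (mammal-style)."""
    def react(self, tissue_damage: bool) -> str:
        return "withdraw limb and say 'ouch'" if tissue_damage else "carry on"

class FishPain:
    """A physically different realization of the very same role."""
    def react(self, tissue_damage: bool) -> str:
        return "dart away from the stimulus" if tissue_damage else "carry on"

# The role, not the realizer, is what matters to the functionalist:
for creature in (NeocortexPain(), FishPain()):
    print(type(creature).__name__, "->", creature.react(tissue_damage=True))
```

On the functionalist reading, both objects count as being "in pain" when they respond to tissue damage, because both occupy the same role; what physically realizes the role is irrelevant.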
Lecture 5 – Connectionism
An overview of the positions
An alternative to functionalism: Connectionism = a version of physicalism that takes MR seriously (at least at first sight) and that claims to be biologically realistic

1. Connectionism
It is an alternative to the classical serial approach to intelligence and information processing
It is conceived as a computational approach to the mind: the difference lies in the way in which the mind computes information
Other terms used for this way of modeling are:
Ø parallel distributed processing (PDP)
Ø (artificial) neural network theory

Philosophical connectionism
For philosophical connectionism, the right way to model and explain mental states is as a network of units that processes information in a parallel fashion
"Intelligence emerges from the interactions of large numbers of simple processing units" (Rumelhart & McClelland)
Mechanisms involved in cognition are "precisely the mechanisms embodied in a large and highly trained recurrent neural network"
Note: this is a restriction of MR but not a rejection of it!

Biological networks
In a biological neural network, it is important to understand how neurons receive and transmit information to other neurons through synapses
The way the network works as a whole is central
Artificial neural networks are inspired by this, despite differences

Artificial networks
An artificial network is composed of several units that are connected in a parallel fashion and "organized into layers"
è units are the artificial "neurons" that receive input and send output
è units are connected to each other in a parallel manner
è the strength of this output is different for each unit and depends on the unit the output is sent to (this is the weight, which can be amplifying or reducing)
Activation value depends on:
a. degree of activation of units in the previous layer
b. weight of the connection
c. an activation function
è The most important information used by the system to go from input to output are the connection weights
è Training involves the adjustment of weights

2. From early connectionism to the golden age
Logical neurons – early connectionism
It started with the logical neurons or perceptrons of McCulloch & Pitts
These are shallow neural networks of three units organized in two layers
Units have a threshold: a summed (weighted) input of a certain value is needed for activation
According to McCulloch & Pitts, every function of propositional logic (like "and" and "or") can be implemented in a neural network
Example: "and", "or"
Conjunction "AND"
It's a simple network: two input units I1 & I2 and an output unit O
The input units both send a signal to the output unit, which has a threshold of 2.0 (no variable weights for the input units: if the input is 1, the signal passed on is 1 as well)
O only becomes active if it gets input from both I1 & I2
è But if, e.g., I2 is not activated, O will not be activated
Disjunction "OR"
Also a simple network: two input units I1 & I2 and an output unit O
The input units both send a signal to the output unit, which has a threshold of 1.0
O becomes active if it gets input from I1 or from I2 or from both
Exclusive disjunction: a challenge for early connectionism
Neural networks can plausibly compute the operations that classical AI is very good at, i.e. logical operations
But these shallow neural networks cannot compute non-linearly separable functions
è e.g. exclusive disjunction (XOR)
Solution: add layers to compute such functions!
But how are these hidden layers trained?
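Before turning to how hidden layers are trained, here is a minimal sketch of the threshold units just described, using the thresholds from the example (the XOR wiring with an extra layer is an added assumption for illustration, since the slides only state that extra layers are the solution):

```python
def unit(inputs, weights, threshold):
    """A McCulloch-Pitts style unit: fire (1) iff the weighted input
    reaches the threshold, otherwise stay silent (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(i1, i2):          # threshold 2.0, as in the lecture example
    return unit((i1, i2), (1.0, 1.0), threshold=2.0)

def OR(i1, i2):           # threshold 1.0, as in the lecture example
    return unit((i1, i2), (1.0, 1.0), threshold=1.0)

def XOR(i1, i2):
    """Exclusive disjunction needs an extra (hidden) layer:
    XOR(a, b) = OR(a, b) AND NOT AND(a, b)."""
    hidden_or  = OR(i1, i2)
    hidden_and = AND(i1, i2)
    # Output unit: +1 from the OR unit, -1 from the AND unit, threshold 1.
    return unit((hidden_or, hidden_and), (1.0, -1.0), threshold=1.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b))
```

The weights and thresholds for XOR are set by hand here; how such weights can be found automatically is exactly the training question above.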
è In the 80s, they came up with the answer: backpropagation

A neural network that can detect mines
How to recognize a mine from within a submarine? By radar? That doesn't work well under water.
Paul Gorman & Terry Sejnowski developed a neural network that solves this problem from sonar echoes
Gorman & Sejnowski were inspired by the auditory system
Three ranges of frequencies: low, medium and high, to analyze the volume of a sound across these
E.g. a powerful echo of a mine at low frequencies, none at all at medium frequencies, so-so at high frequencies -> represented as a pattern of input activations
Recall: the activation value of hidden and output units depends on: (a) degree of activation of units in the previous layer; (b) weight of the connection; (c) an activation function
è How does the network know how to solve the problem?

Training a network
A neural network is not programmed but trained
The training process "involves presenting the network with a sample input and then adjusting the connection weights in such a way that the network´s response to the input is improved. This is repeated over and over, with more samples presented and more adjustments made."

Backpropagation
We need to adjust the weights: increase the weights of connections that pushed towards the correct output and decrease the weights of connections that activated the wrong output
This involves tracing the errors from the output layer back to the input layer
Backpropagation = the learning algorithm that allows us to reduce the error, i.e. the difference between the current output and the desired output
Definition backpropagation: a supervised algorithm that allows the adjustment of the weights of the connections between the units of a network
"Such learning algorithms can discover solutions we had not imagined." (Andy Clark)
è Difficulty figuring out how neural networks achieve their results
è They are opaque (non-transparent)

The success of connectionism – image recognition
Neural networks have been able to perform well in tasks in which classical AI performs poorly
è E.g. complex pattern recognition like written text

The challenge of snapshot reasoning
One problem with these first-generation models was that they could only do "snapshot reasoning": time was not incorporated in the model

Recurrent networks
The brain features both feedforward and feedback connections (descending pathways)
è these pathways give info about past activity & make it relevant for current activity
Neural networks are like a pipeline: information sampled from farther down the pipeline tells us about the past
è by connecting this information back to units in the hidden layer, information about the past is incorporated (like a short-term memory)
Recurrent pathways = make information about recent past processing available for current processing
Recurrent networks = architectures that incorporate recurrent pathways

3. Post-connectionism
Although biologically inspired, there is a significant difference between the performance of connectionist networks and the brain
Particularly in connection to the models of learning of connectionist networks:
1. Lack of anatomical evidence for e.g. backpropagation
2. The number of repetitions required for learning differs dramatically
3. Catastrophic interference: solutions found for new cases interfere with the solutions found for the old cases (a toy demonstration follows below)
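Point 3 can be given a toy demonstration (the network size, task split and hyperparameters below are invented for illustration, and the exact numbers depend on the random seed): a small network is trained with backpropagation on one half of the XOR mapping and then only on the other half; without rehearsal of the old cases, its error on the old task typically rises sharply.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny 2-3-1 network trained with plain backpropagation.
W1 = rng.normal(size=(2, 3)); b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1)); b2 = np.zeros(1)

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

def train(X, y, epochs=5000, lr=1.0):
    """Adjust the weights by backpropagating the output error."""
    global W1, b1, W2, b2
    for _ in range(epochs):
        h, out = forward(X)
        err = out - y                        # error at the output layer
        d_out = err * out * (1 - out)        # gradient through the output sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)   # error traced back to the hidden layer
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

def error_on(X, y):
    return float(np.mean((forward(X)[1] - y) ** 2))

# Old task (A) and new task (B): two halves of the XOR mapping.
X_A = np.array([[0, 0], [0, 1]]); y_A = np.array([[0], [1]])
X_B = np.array([[1, 0], [1, 1]]); y_B = np.array([[1], [0]])

train(X_A, y_A)
print("error on old task after learning it:     ", round(error_on(X_A, y_A), 3))
train(X_B, y_B)   # no rehearsal of the old cases
print("error on old task after learning new one:", round(error_on(X_A, y_A), 3))
# Typically the second number is much larger: the solution found for the
# new cases has interfered with the solution found for the old ones.
```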
-> More recent developments aim at addressing these limitations or challenges by, for instance:
(for 1) developing alternative learning algorithms
(for 3) developing training strategies or architectures that tackle catastrophic interference
ð Notice that these are engineering problems with engineering solutions: the core ideas on what neural networks are and do remain the same

Deep learning
Shallow neural networks:
1. Shallow: 3 to 4 hidden layers
2. Uniform: nodes deploy only one kind of function
3. Sparse: nodes of hidden layers are connected only to neighboring nodes
Deep neural networks:
1. Deep: containing more than 5 and up to hundreds of layers
2. Heterogeneous: nodes deploy different kinds of functions
3. Fully connected
4. New architectures: e.g. the transformer architecture (LLMs)

4. Evaluation of connectionism
Is connectionism biologically realistic?
Connectionism (artificial neural networks) has some advantages over cognitivism (classical AI) that also show that it is a biologically more realistic model (see the sketch below):
1. An economical manner of representing
Stacking of representations: the same units and connections are used to store/activate different representations
This is a very economical manner of storing data: you do not need new units to store new representations
The representations are distributed
2. Tolerance to damage
Since the system is parallel and not serial, damage to just one unit will not have large implications -> the system will still work
The system gradually breaks down when units are damaged: graceful degradation
The system has a high damage tolerance
3. Pattern completion
Neural networks can perform well even when they have incomplete input
Even in cases with incomplete input, the output might be correct
In that sense, patterns are completed
4. Free generalization
If you have an input (e.g. an image of a panther) that looks like another input you already know how to respond to (e.g. an image of a tiger), then you can act adequately, even though the situation is new
This has implications for how we think of meaning!
Neural networks accidentally get semantics right:
"The upshot is that semantically related items are represented by syntactically related (partially overlapping) patterns of activation."
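Two of the advantages above, overlapping distributed representations (free generalization) and graceful degradation, can be seen in a very small sketch (the "concept" patterns, the untrained random weights and all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# A small fixed (untrained) network, just to show how distributed
# representations behave. 12 input features -> 20 hidden units -> 3 outputs.
W1 = rng.normal(size=(12, 20))
W2 = rng.normal(size=(20, 3))

def forward(x, dead_units=()):
    h = np.tanh(x @ W1)
    h[list(dead_units)] = 0.0      # "damaged" hidden units are silenced
    return h, np.tanh(h @ W2)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

tiger   = np.array([1,1,1,1,1,1,1,1, 0,0,0,0], dtype=float)  # shares features with panther
panther = np.array([1,1,1,1,1,1,0,0, 1,1,0,0], dtype=float)
teapot  = np.array([0,0,0,0,0,0,0,0, 0,0,1,1], dtype=float)  # shares none with tiger

h_tiger, out_tiger = forward(tiger)
h_panther, _ = forward(panther)
h_teapot, _ = forward(teapot)

# Overlapping inputs end up with overlapping (similar) hidden activation patterns.
print("tiger~panther hidden similarity:", round(cosine(h_tiger, h_panther), 2))
print("tiger~teapot  hidden similarity:", round(cosine(h_tiger, h_teapot), 2))

# Graceful degradation: silencing more and more hidden units shifts the
# output gradually rather than making the system fail all at once.
for k in (0, 2, 4, 8):
    _, out = forward(tiger, dead_units=range(k))
    print(k, "damaged units -> output drift:",
          round(float(np.linalg.norm(out - out_tiger)), 2))
```

Nothing here was trained; the point is only how distributed patterns of activation behave. Pattern completion works on the same principle: because information is spread over many units and connections, a partial input still activates much of the right pattern.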
5. Implications of connectionism
Theoretical implications: representations in the brain
If representations in the brain are distributed, then we can no longer think of representations as distinct entities
è this means that propositional attitudes do not exist, in the classical sense (in the classical sense, PAs are regarded as distinct entities – recall lecture 4)
If you learn something new, that means you also change what you had already learned

Practical implications: biases in training sets
The network is trained rather than programmed
What it learns is only as good as the training data
For instance, neural networks tasked with and trained for face recognition face problems with biases related to gender and race
Neural networks inherit biases from us, based on the biased information on which they are trained

SUMMARY
Some conclusions about connectionism:
Connectionism is an alternative to functionalism
ð that takes MR seriously (even when it restricts it)
ð and it is biologically more realistic than classical AI/cognitivism/functionalism

Position of the connectionist in the mind-body debate
Ø Biologically more realistic: damage tolerance, economical manner of representing, pattern completion, free generalisation
Ø Problem: that representations in the brain are distributed means that we can no longer think of representations as distinct -> PAs do not exist -> learning something new is changing something you already learned
Connectionism is a biologically more realistic alternative to the functionalist view of the mind
Mental states are states distributed across a network: the processing of the network is determined by the connection weights
è neural networks are trained (via learning algorithms like backpropagation) to adjust weights
è accepts multiple realizability (MR) but restricts it

Refresher definitions:
Functionalism: the theory that defines mental states based on the function they fulfill, that is: in terms of their causal role between input, output and other mental states
Computationalism: the idea that mental processes are computations (rule-based manipulations of symbols)
Cognitivism/machine functionalism: accepts functionalism and computationalism. The mind is a (multiply realizable) Turing machine. Cognition/thinking consists of the rule-based manipulation of propositions.
Classical AI: committed to cognitivism, implements these ideas in artificial systems (that is: programming machines to have cognitive states by having them do rule-based manipulations of representations, like Shakey and Dijkstra´s path-finding algorithm)
Connectionism: should be seen as a biologically more realistic theory of the mind as a computing system than cognitivism (that is: neural networks are a better model of the mind than classical AI/Turing machines are)

Functionalism Knowledge Clip - Structured Notes:
Mind-Body Relationship
What is the relation between the mind and the body?
Physicalism in the Mind-Body Debate
- Physicalism is the only position that takes both science and the mind seriously
- There are different versions of physicalism.
Identity Theory (First Version of Physicalism)
Mental states = specific brain states.
- Problem with Identity Theory: Multiple Realizability of Mental States
§ Mental states can be realized in different ways and by different structures.
§ Example: A clock can be made of various materials but still functions to tell time.
§ In the case of mental states:
§ Pain might be linked to brain activity in the neocortex.
§ However, animals without a neocortex (e.g., fish) still show behaviors indicating pain.
§ Conclusion: Pain cannot be identified solely with brain activity in the neocortex.
Functionalism
Mental states are characterized by the functions they perform.
o Example of Functionalism:
§ Pain is caused by tissue damage (e.g., stepping on a nail).
§ The output is the response (e.g., saying "aua").
§ These functions can be realized in multiple ways (multiply realizable)
Proponent of Functionalism: Hilary Putnam
- Putnam argues we are like Turing machines, as described by Alan Turing.
- A Turing machine can process any imaginable input.
- Conclusion: Understanding the mind is similar to understanding how a Turing machine works.
Problems with Functionalism
- Problem 1: John Searle's Chinese Room Argument
è Symbol manipulation based on rules does not lead to true understanding.
è Syntax (form) does not result in semantics (meaning).
- Problem 2: Serial Information Processing
è Functionalism assumes information is processed step-by-step, with each step needing to be perfect.
è This doesn't align with biological reality, where small errors (e.g., a single neuron dying) often don't disrupt function.
Conclusion: The functionalist model is biologically unrealistic as a model for the mind
Conclusion: If we want a physicalist theory that accepts multiple realizability, we should look for a more biologically realistic theory.

Lecture 6 - Embodied, embedded, and extended mind
In the mind-body debate, only the materialist views take both the mind and science seriously
The embodied, embedded, and extended mind theories form a movement that argues against these traditional views (MBIT, functionalism, connectionism)

The Frankenstein Hypothesis
ð We only need to consider the brain to understand the mind
ð Hypothesis of brain-centred views of the mind
MBIT, functionalism & connectionism = brain-centred views of the mind
è They don't pay attention to the rest of the body or the environment to understand the mind

The alternative
Connectionism seems to be a good alternative to functionalism
The suggestion of embodied, embedded and extended theories is that the supervenience base might be wider than just the brain. That is sth we will take a look at now
Question 7: is Google Maps part of the conscious mind?

1. The embodied and embedded mind
The body and the environment play a more important role in consciousness and cognition than was previously recognized: the mind is embodied and embedded
Usually this is a claim about cognition, but it also works for consciousness
Later we will see that some go further (like Chalmers): the mind is also extended

Embodied cognition = your cognitive capacities are determined, or at least heavily influenced, by the type of body you have
è Ex: the concepts "in front of" and "in the back of" are determined by the type of body we have and where our eyes are positioned
è what if we were spherical and looked in every direction? Would we have the concepts of "in front of" or "behind"?
= Some cognitive states are embodied
è Everyday examples:
- Thinking with our hands: consider the relevance of gestures to thinking
- It has been thought that gestures are relevant to various cognitive tasks, including mathematical thinking and verbal expression
è More generally, our physical condition influences how we execute cognitive tasks (e.g. focus and concentration under the influence of alcohol or when ill)

Embedded cognition = your cognitive capacities are determined, or at least heavily influenced, by the type of environment you live in
è E.g. termite nest building
How do termites build nests?
Ø The basic building block is a mud ball
Ø They make these themselves
Ø They leave a chemical trace in the ball
Ø A termite leaves a mud ball where the chemical trace is the strongest
Ø The pheromones near the bottom evaporate, so they start piling
Ø This is called a stigmergic routine (stigma = sign, ergon = work => a sign to do more work)
Ø Piles and arches are the basic building strategies for a termite nest
"The moral is that apparently complex problem solving need not always involve the use of heavy-duty individual reasoning engines, and that coordinated activity need not be controlled by a central plan or blueprint, nor by a designated 'leader'."
Conclusion from this example:
No central supervisor is necessary
Relatively simple systems can display complex behaviour
Some cognitive behaviour is clearly embedded
Does this challenge our ideas of cognition? Clearly, if there is no plan, then there is no intentionality – do we have cognition without intentionality?

Embodied consciousness = your conscious experiences are determined, or at least heavily influenced, by the type of body you have.
Example: serial killer Arthur Shawcross
His liver produced an LSD-like chemical, kryptopyrrole -> this binds to vitamin B6
Vitamin B6 is needed for producing the neurotransmitter serotonin
He also had lower levels of serotonin in his brain
Furthermore: he had an extra Y-chromosome. Men with an extra Y-chromosome produce extra testosterone
è the combination of high testosterone and low serotonin is dangerous: many of these men are very aggressive
è His mental states were affected by the atypical features of his body: his consciousness was embodied
è We can ask ourselves: what if a brain that didn't have these features was transplanted into Shawcross´ body?

Reading and smiling
Consider the facial feedback hypothesis
Study by Strack, F., Martin, L. L., & Stepper, S. (1988) on how facial activity influences someone´s affective states
Participants were asked to watch funny cartoons while holding a pencil between their teeth (similar muscles used when smiling) or between their lips (similar muscles used when frowning)
Those who held the pencil between their teeth found the cartoons funnier
Note: problems with replication of the study

Embedded consciousness = your conscious experiences are determined, or at least heavily influenced, by the type of environment you live in -> e.g. colors
Recall: secondary qualities – we saw in lecture 2 that colors are secondary qualities & that we often experience the same color as if it were different ones
What changed when we shifted one tile to another place? The surrounding colors
Consciousness is embedded as well

SUMMARY: on the embodied and embedded mind
Ø consciousness and cognition are embodied and embedded
Ø That's not surprising if you look at it from a biological/evolutionary perspective: all living things evolved in certain ecological environments
Ø DO WE NEED TO ADJUST OUR ANALYSIS OF MENTAL STATES?
Intentionality
It looks like we make progress with respect to intentionality: functionalism -> connectionism -> EET
Do we need to add parts of the body or environment to the mind?
Conclusion
We saw that cognition can be embedded and embodied
è E.g.
making a jigsaw puzzle
The body and the environment allow for different actions
By manipulating the world, the cognitive problem gets easier
ð This seems like an uncontroversial claim
To solve cognitive problems, we might use our environment
This seems pretty straightforward & unproblematic
More interesting: "where does the mind stop and the rest of the world begin?"
Two answers:
1. Internalism: the mind stops where the brain/body stops
2. Active externalism: the environment plays an active role in cognitive processes

Knowledge clip: The Embodied and Embedded Theory of the Mind
Frankenstein Hypothesis (Movie Example)
Ø In Frankenstein, a criminal's brain is used to create a new person, and the monster becomes "bad," assuming the mind is entirely shaped by the brain.
Ø This assumption ("you are your brain") underlies many physicalist theories.
Brain-Centric Theories
Ø These theories focus on the brain to explain mental states:
1. Identity Theory
2. Functionalism
3. Connectionism
Ø These theories often ignore the role of the body and environment in shaping mental states.
Embodied and Embedded Mind Theories
Ø Embodied Cognition: Mental states are influenced by the body type of the organism
Example: A spherical organism with eyes all around wouldn't have concepts like front and back, as it lacks a distinct "front."
Ø Embedded Cognition: Mental states are influenced by the environment in which the organism lives
Example: Termites build mounds without plans; their environment guides their actions (environmental signals influence behavior)
Embodied Experiences
Ø Embodied Consciousness: An organism's experiences are influenced by its body
-> Example: Arthur Shawcross, a serial killer, had high levels of kryptopyrrole, a chemical that mimics the effects of LSD (visual hallucinations)
-> Frankenstein Hypothesis Question: If the monster had a normal brain but Shawcross's liver, could it affect his behavior similarly?
-> Conclusion: Other body parts besides the brain also play a role in shaping experiences
Embedded Consciousness
Ø Embedded Experience: Experiences are influenced by the environment
-> Example: John Locke's view on colors: colors are secondary qualities dependent on the environment. Experiences of color change in different environments (e.g., orange can appear more brown depending on surroundings)
Ø Darwinian Perspective: The idea of embodied and embedded cognition fits with evolutionary theory, where organisms evolve as a whole and adapt to their environment
Conclusion: The idea that the mind is embodied and embedded is an extension of physicalism, not a rejection. Physicalists have always believed the mind is shaped by both body and environment.

2.
The extended mind = active externalism = the extended mind hypothesis (EMH)
The extended mind = a thesis about cognitive states, not about conscious experience
è The cognitive mind does not stop where the brain or body stops
è The cognitive mind extends into the world
"the central idea is that human thought may depend on a much broader focus than to which cognitive science has become most accustomed, one that includes not just body, brain and the natural world, but the props and aids (pens, papers, PCs, institutions) in which our biological brains 'learn', mature and operate"

Extended mind checklist – the "trust and glue" conditions
"it is quite proper to restrict the props and aids that can count as my mental machinery to those that are, at the very least, reliably available when needed and used (accessed) as automatically as biological processing and memory. Such simple criteria may again allow the incorporation of the artist´s sketchpad and the blind-person´s cane while blocking the dusty encyclopedia left in the garage."
1. Parity: if done in the head, would we categorize it as cognition?
2. Does it work? Is it reliably available and typically invoked? è is it a reliable device? Does the device work?
3. Trustworthy: is it trustworthy? è is the information correct?
4. Easy to use: is it easily accessible? è is it easy to use?

Is an encyclopedia at home part of my mind?
1. Parity: Yes – if it was all stored in your brain, this would all be beliefs
2. Availability: No – it is not available right now.
3. Trustworthy: Yes – we usually trust the contents of encyclopedias
4. Easy to use: Yes – it is not hard to find anything in an encyclopedia
=> parity alone is not enough, for then the encyclopedia that is on the bookshelf at home would be part of the extended mind

Arguments in favor (+) of the extended mind
Otto and Inga
"Now consider Otto. Otto suffers from Alzheimer's disease, and like many Alzheimer's patients, he relies on information in the environment to help structure his life. Otto carries a notebook around with him everywhere he goes. When he learns new information, he writes it down. When he needs some old information, he looks it up. For Otto, his notebook plays the role usually played by a biological memory." (Clark & Chalmers, 1998, p. 12)
Does Otto know where the MoMA is located?
"It seems reasonable to say that Otto believed the museum was on 53rd Street even before consulting his notebook."
How about the differences with Inga?
Inga does not have Alzheimer's
Inga also wants to go to the MoMA
Inga thinks for a moment & then recalls it´s on 53rd Street
For Clark & Chalmers, these are shallow differences:
"The various small differences between Otto´s and Inga´s (who does not suffer from Alzheimer's) cases are all shallow differences. To focus on them would be to miss the way in which for Otto, notebook entries play just the sort of role that beliefs play in guiding people´s lives"
Is Otto´s notebook part of his mind?
1. Parity: Yes – it would be a memory or belief
2. Availability: Yes – Otto takes it with him all the time
3. Trustworthy: Yes – the information is correct
4. Easy to use: Yes – notebooks are easy to use