Summary

Lecture notes covering the unfolding argument and exploring the debate about whether consciousness is the result of computation. The lecture discusses arguments for and against implementationism in consciousness science.

Full Transcript

Lecture 2: The unfolding argument

Today’s lecture
◼ Summarizing the paper
◼ Analyzing the argument
◼ Preview: illusionism or non-computationalism

Summary
1. Functions can be implemented on any universal function approximator
2. Functions determine behavior, implementation of functions does not
3. If implementation = experiences, then experience is independent of behavior
→ Science of consciousness is impossible

How is this similar to Lecture 1?

Epiphenomenalism
◼ Consciousness algorithm
◼ Algorithm causes all behavior and introspection (in response to external stimuli) →
◼ Consciousness is epiphenomenal for introspection and behavior [consciousness in others, and in yourself, is undetectable]

Implementationism
◼ Consciousness function [implementation also matters]
◼ Function causes all behavior and introspection (in response to external stimuli) →
◼ Consciousness (and implementation) is epiphenomenal for introspection and behavior [consciousness in others, and in yourself, is undetectable]

Why would implementation matter?

Implementation matters!
◼ Recurrent processing theory of consciousness
◼ Integrated Information Theory

Implementation matters!
◼ Recurrent processing theory (RPT): consciousness only arises during recurrent processing
◼ If F1 and F2 have the same input/output properties, but only F1 employs recurrent processing, then only F1 produces consciousness
◼ IIT: integration of information (φ) equates to consciousness
◼ If F1 and F2 are identical regarding i/o properties, but only F1 has high φ, then only F1 produces consciousness

RPT and IIT are leading theories

Waste of money...
◼ However, these implementationist theories are fundamentally flawed...
◼ Behavioral and introspective tests are used to test the theories
◼ However, per the theories themselves, unconfounded tests will falsify the theory

Consciousness science in crisis

1. Functions independent of implementation
[Figure: an Apple MacBook and an HP laptop]
◼ Both Apple laptops and HP laptops are Turing complete → both types of laptop can instantiate any i/o function
◼ Similarly, both feedback and feedforward networks are universal function approximators: nearly every function can be instantiated by both

1. Functions independent of implementation
◼ Recurrent networks: you can “unfold” the feedback to higher layers (see the sketch below)
◼ E.g. instead of V1 – V5 – V1 you go V1 – V5 – Va1

1. Functions independent of implementation
◼ Similarly, both a highly parallel, integrated machine and a serial machine can be Turing complete
◼ Both can instantiate all possible functions, yet the parallel machine does so with high phi, and the serial one with low phi
[Figure: parallel integrated machine (high phi) vs. serial machine (low phi)]

Doerig et al., pg. 53
◼ If there is a recurrent network (RN) that performs image recognition, there is an equivalent feedforward network (FN) that does it equally well
◼ If an RN exhibits the characteristics of binocular rivalry, an equivalent FN exists that does so too
◼ If an RN has one collection of spike trains as input and another as output, there is an equivalent FN that does exactly the same thing

Doerig et al., pg. 53
◼ In fact, any input-output function can be implemented in infinitely many networks that are universal function approximators
◼ Input can be spike trains, stimuli, a TMS pulse, etc.
◼ Output can be neural input to later brain areas, verbal reports, overt behavior, etc.
◼ Universal Turing machines: RN network, FN network, laptop, cellular automata, cyclic tag system, etc.
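To make the unfolding idea concrete, here is a minimal sketch, not taken from the paper: a small recurrent network run for a fixed number of time steps is rewritten as a deeper feedforward network whose layers are copies of the recurrent step. The weight matrices and helper names (run_recurrent, run_unfolded) are illustrative assumptions, not anything from Doerig et al.

```python
# A minimal sketch (illustrative names, not from Doerig et al.): unfolding a
# recurrent network into a feedforward one with identical input-output behavior.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(4, 3))   # input -> hidden weights
W_rec = rng.normal(size=(4, 4))  # hidden -> hidden weights (the feedback loop)
W_out = rng.normal(size=(2, 4))  # hidden -> output weights

def run_recurrent(xs):
    """Recurrent implementation: one layer, applied repeatedly via feedback."""
    h = np.zeros(4)
    for x in xs:                             # h is fed back into the same layer
        h = np.tanh(W_rec @ h + W_in @ x)
    return W_out @ h

def run_unfolded(xs):
    """Feedforward implementation: one separate layer per time step, each holding
    its own copy of the weights; activity flows only forward (layer t -> t+1)."""
    layers = [(W_rec.copy(), W_in.copy()) for _ in xs]   # distinct layer parameters
    h = np.zeros(4)
    for (Wr, Wi), x in zip(layers, xs):
        h = np.tanh(Wr @ h + Wi @ x)
    return W_out @ h

xs = [rng.normal(size=3) for _ in range(5)]              # a 5-step input sequence
assert np.allclose(run_recurrent(xs), run_unfolded(xs))
print("Same i/o function, different causal structure (feedback loop vs. forward chain).")
```

Note that this trick only works for a fixed number of time steps, which is exactly the technical caveat raised later in the lecture.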
◼ You can make an RN that produces the opposite of the behavior we actually observe
◼ You can make an FN that produces the exact same behavior as the behavior we normally observe

a) Mask stimulus (input) → verbal report/introspection indicates invisibility (output)
b) Binocular rivalry (BR): brain activity varies (input) → different reported/introspected percepts (output)
c) Anaesthesia/background masks a figure (input) → behavioral report indicates invisibility (output)
d) Different internal cycles (input) → sleep or awake behavior (output)

a) Mask stimulus or not (input) → verbal report/introspection indicates (in)visibility (output)
◼ Measured: masking reduces recurrent processing
◼ Possible: a network showing the exact same input-output relationship, i.e. the same output, in which masking increases recurrent processing

b) BR: brain activity varies (input) → different reported/introspected percepts (output)
◼ Measured: less phi/RP for a picture, subject less likely to report it as the percept
◼ Possible: more phi/RP for a picture, subject less likely to report it as the percept

c) Anaesthesia/background masks a figure (input) → behavioral report indicates invisibility (output)
◼ Measured: anaesthesia reduces RP
◼ Possible: anaesthesia does not affect RP or phi

d) Different internal cycles (input) → sleep or awake behavior (output)
◼ Measured: less phi/RP when people sleep
◼ Possible: awake behavior with low phi, sleep with high phi

Implementation does not matter
◼ We could make a human/monkey in whom the implementation is reversed
◼ Input-output functions that are now implemented with RP/high phi would be instantiated with feedforward processing/low phi (and vice versa)
◼ Input-output behavior would be unchanged in this monkey/human
◼ She would indicate not seeing a stimulus when it is masked, act unconsciously under anaesthesia, and have a normal wake/sleep cycle
◼ However, in this subject phi is high when she indicates unconsciousness, and phi is low when she seems to be completely conscious

1. Functions independent of implementation
◼ Examples

Example I, pg. 53
◼ Replace the auditory cortex by a feedforward system

The implant takes the same collection of spike trains as inputs, and outputs the same collection of spike trains as the native brain areas. We know that such implants exist in principle because of the previously mentioned unfolding theorems. Even though the causal structure in the new implant is completely different, the rest of the brain does not notice any difference.

The brain can do its normal job. This means that all subjective reports by the person are identical before and after the surgery. The person will claim all the same things about sound as before the implant was placed, such as “I hear the drizzle of the rain, it is music to my ears”, or “I understand what you are saying”, etc. In particular, any experiment about which sounds are consciously perceived will yield exactly the same results as with the native brain area.

Therefore, we end up with the dilemma mentioned earlier: either causal structure theories are wrong (if they accept that there is still auditory consciousness with the implant), or they are outside the realm of science (if they claim that consciousness is different with and without the implant even though there are no empirical differences).
Example II
◼ A completely feedforward brain

Since anything that can be done with a recurrent network can also be done with a feedforward network, there could be «feedforward brains» that behave exactly like human brains. Such systems would have all the same functional characteristics as a normal human brain, but completely different causal structure. They behave exactly like a human in all respects, passing the Turing test seamlessly. However, according to causal structure theories, they are not conscious because they do not have the “right” kind of causal structure.

Crucially, these systems respond to any empirical experiment exactly like humans. For example, they identically describe what it is like for them to see red, hear sounds, have memories, and so on. They respond to all scientific paradigms (such as masking, binocular rivalry, figure-ground segmentation, etc.) in exactly the same way. They exhibit the same wakefulness characteristics and the same sleep characteristics. In summary, no behavioural experiment can distinguish between human brains and feedforward brains in principle. Therefore, either causal structure theories are wrong or they are outside the realm of science.

1. Functions independent of implementation

Basic argument
1. Functions can be implemented on any universal function approximator
2. Functions determine behavior, implementation of functions does not
3. If implementation = experiences, then experience is independent of behavior
→ Science of consciousness is impossible

Effects of implementation
◼ Implementation affects low-level features (size, friction, boiling point, etc.)
◼ Implementation does not affect higher-level features such as overt behavior and spike trains of other brain areas

Effects of implementation
Imagine replacing your eye by a machine that exactly copies its function, i.e. given light input the machine gives exactly the same output to the brain as your eyes would.

Effects of implementation
◼ We know that your neural activity will be identical with biological and non-biological eyes
◼ Will your visual experiences be the same?
◼ Crucially, the motor cortex determines your overt behavior, so your overt behavior will be unaltered
◼ The implementation has changed, but the i/o function (i = external stimuli, o = overt behavior) has not!

Effects of implementation
Now replace your visual cortex by a man following a rule-book, producing the same output to motor and frontal cortex as is normally generated. Still okay?

Effects of implementation
◼ Replacing the eye by a camera seems intuitively uncontroversial → consciousness unaffected
◼ Replacing the visual cortex by a man following a rule-book may be more challenging intuitively
◼ Yet the reasoning is the same in both cases: the i/o function (i = external stimulus, o = overt behavior) is unaffected in both cases

Effects of implementation
◼ Input = external stimulus
◼ Output = overt behavior, determined by activity in the motor cortex
◼ Any Turing complete device connecting input to output [electrical activity of eyes/ears to electrical activity in the motor cortex] can instantiate any i/o function (a toy sketch follows after the Basic argument below)

Basic argument
1. Functions can be implemented on any universal function approximator
2. Functions determine behavior, implementation of functions does not
3. If implementation = experiences, then experience is independent of behavior
→ Science of consciousness is impossible
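As a toy illustration of premise 2 (my own example, not from the paper): two systems with different internal organization but the same input-output function produce identical results in every behavioral experiment. The "dot counting" task, the stimuli, and the function names are all made up for the sketch.

```python
# Toy illustration (hypothetical task and names): same i/o function,
# different implementation, behaviorally indistinguishable.

def report_iterative(stimulus: int) -> str:
    """Implementation A: builds the answer step by step in a loop
    (stand-in for a recurrent / high-phi implementation)."""
    total = 0
    for i in range(1, stimulus + 1):
        total += i                      # state is carried over from step to step
    return f"I counted {total} dots."

# Implementation B: a precomputed lookup table with no internal loop at all
# (stand-in for an unfolded / feedforward / low-phi implementation).
_TABLE = {s: s * (s + 1) // 2 for s in range(101)}

def report_lookup(stimulus: int) -> str:
    return f"I counted {_TABLE[stimulus]} dots."

# "Behavioral experiment": present every stimulus to both systems and compare reports.
assert all(report_iterative(s) == report_lookup(s) for s in range(101))
print("Every stimulus elicits identical reports from both implementations.")
```

Whatever a theory says about which of the two implementations is "really" conscious, no report-based measurement in this setup distinguishes them.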
Implementation does not affect behavior
◼ Your visual cortex is rewired such that it always has low phi
◼ The neural cells in your visual cortex are replaced by electronic transistors
◼ Crucially, the i/o function is exactly the same as it currently is
◼ This is possible because both low-phi and electronic devices can be made Turing complete

Implementation does not affect behavior
◼ What will your overt behavior be like if we only change the implementation but not the i/o function?
◼ Exactly the same under all circumstances
◼ So, in every behavioral experiment and every situation your behavior will be exactly the same

[Cartoon: an RN robot and an FN robot each say “I don’t know about the other guy, but I’m definitely conscious!”]

Basic argument
1. Functions can be implemented on any universal function approximator
2. Functions determine behavior, implementation of functions does not
3. If implementation = experiences, then experience is independent of behavior
→ Science of consciousness is impossible

Basic argument
◼ In science we rely on measurements
◼ Feedforward robots can yield the exact same measurements, under all circumstances, as recurrent robots →
◼ Either implementation (feedforward, recurrent, etc.) is not important for consciousness OR measurements to probe consciousness are uninformative

Today’s lecture
◼ Summarizing the paper
◼ Analyzing the argument
◼ Preview: illusionism or non-computationalism

Analyzing the argument
◼ Technical problems
◼ What is identified as relevant i/o here?
◼ What is the argument exactly, and is it sound?

Technical problems
◼ The relevant notion is not universal function approximation, but Turing completeness
◼ RNs are Turing complete, FNs (probably) not
◼ Think of a closed, infinite loop: that is two layers in an RN, but an infinite number of layers in an FN

Technical problems
◼ When would you ever need an infinite loop?
◼ One suggestion: keep checking for the occurrence of a feature over multiple loops
◼ E.g. you know that A or B will occur after some time, but not which one, or when
◼ Two infinite loops, one checking for A, one checking for B
◼ It is not clear whether this is really biologically relevant, but the claim that an FN can instantiate any function is false (see the sketch below)

Technical problems
◼ More generally, Turing completeness implies that any function can be instantiated; universal function approximation does not
◼ Functionality is ill-defined: IIT is also a functional theory, with I = sensory input and O = level of phi
◼ This is not essentially different from (some versions of) Global Workspace Theory, where I = sensory input and O = connection to the Global Workspace

Technical problems
◼ Neither which input nor which output is relevant is ever specified in the paper
◼ Why certain input/output functions are crucial for science is not spelled out either
◼ It seems that the only focus is on behavioral output. However, we do not only experience consciousness by watching others; we also experience it directly (from the inside)
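A minimal sketch of the Turing-completeness point raised above (my own illustration, with a made-up stream-checking task and function names): a recurrent computation can loop for an unbounded, input-dependent number of steps, whereas an unrolled feedforward version must fix its depth in advance and fails on inputs that need more steps.

```python
# Illustrative sketch of the "closed loop" point: unbounded recurrence vs. fixed-depth unrolling.

def recurrent_wait_for(stream, targets=("A", "B")):
    """Recurrent style: keep checking the stream until A or B occurs (unbounded loop)."""
    for t, symbol in enumerate(stream):
        if symbol in targets:
            return t, symbol
    return None                           # only reached if the stream is finite

def unrolled_wait_for(stream, targets=("A", "B"), depth=10):
    """Feedforward style: the loop is unrolled into a fixed number of 'layers';
    anything that takes more than `depth` steps is simply not representable."""
    it = iter(stream)
    for t in range(depth):                # exactly `depth` copies of the checking step
        symbol = next(it)
        if symbol in targets:
            return t, symbol
    return None                           # gives up once the fixed depth is exhausted

short = ["x"] * 3 + ["A"]
long = ["x"] * 50 + ["B"]
assert recurrent_wait_for(short) == unrolled_wait_for(short) == (3, "A")
assert recurrent_wait_for(long) == (50, "B")
assert unrolled_wait_for(long) is None    # the bounded-depth version cannot reproduce this
```

Making the feedforward depth large enough for any particular experiment restores the equivalence in practice, which is why the lecture treats this as a solvable technicality rather than a refutation.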
Analyzing the argument
◼ Okay, these technicalities are solvable
◼ But what is the broader argument here? What is Doerig et al.’s argument?

Doerig et al.’s argument
[Figure: stimulus “TREE” (input) → brain → “I see a tree!” (output)]

Doerig et al.’s argument
◼ Input = external stimuli
◼ Output = overt behavior
1. Every Turing complete system can instantiate any i/o function
2. If a theory claims that only certain Turing complete systems create consciousness (e.g. biological systems), then no behavioral data can ever be acquired to falsify this theory

Doerig et al.’s argument
◼ Is it sound? The question mark hangs over step 2: does it really follow that no behavioral data can ever falsify such a theory?

Epiphenomenal implementations
◼ The implementation is epiphenomenal with regard to the i/o function IFF the being is entirely computational
◼ All overt behavior of the being is caused by a well-defined set of rules (as computationally defined)
◼ That is, it is a finite, deterministic (mechanical) being
◼ If the implementation is epiphenomenal, and consciousness depends on implementation, then consciousness is epiphenomenal

Epiphenomenal implementations
◼ Overt behavior is completely caused by the i/o function
◼ Consciousness is epiphenomenal with regard to this function →
◼ Based on overt behavior you can never know whether another being is conscious

This checks out!
◼ No?
◼ Consider two beings that instantiate the exact same input (= external stimulus) / output (= overt behavior) function, but with different implementations
◼ All their behavior is caused by this function
◼ Sketch an experiment that gives a different result for the two beings

This checks out!
◼ This challenge cannot be met
◼ Therefore, no experiment with overt behavior as the dependent measure can falsify the claim that one implementation does, and the other does not, instantiate consciousness

Barry versus robot-Barry

A wide variety of tasks
◼ Behavioral tasks
◼ Respond to stimuli on a computer screen
◼ Respond to real-life stimuli
◼ Avoid objects
◼ Indicate confidence in performance
◼ Self-reports
◼ Indicate consciousness
◼ Describe experiences

A wide variety of tasks
◼ Both will have the same performance on all tasks
◼ Both have the same confidence and metacognition
◼ Both will describe their consciousness in the same way

Belief science
◼ You can still believe that robo-Barry (or bio-Barry!) is unconscious
◼ However, there is no behavioral evidence that can test this belief

Introspection
◼ We do not only know about consciousness through overt behavior
◼ We also directly know our own experiences through introspection

Introspection paradox
◼ If implementation causes consciousness,
◼ yet the function causes overt behavior AND introspection →
◼ consciousness is epiphenomenal with regard to overt behavior AND introspection
“If it isn’t literally true that my wanting is causally responsible for my reaching, and my itching is causally responsible for my scratching, and my believing is causally responsible for my saying..., if none of that is literally true, then practically everything I believe about anything is false and it’s the end of the world.”
◼ Fodor, J. (1990). A theory of content and other essays. Cambridge: MIT Press, p. 137

Example
◼ Change the implementation, but keep the i/o function constant
◼ i = external stimuli; o = introspection and overt behavior

Example
[Figure: stimulus “TREE” (input) → brain → introspection “I see a tree!” and overt report “I see a tree!” (outputs)]

What is the problem here?
We take a test subject and replace the occipital cortex by a functionally equivalent smartphone.
◼ All visual experiences disappear
◼ Yet nothing in the subject’s internal/external report can change

Phone replaces occipital cortex: what is the impact on visual experiences?
◼ Highly motivated, honest subject
◼ Can you report that your experiences disappear?

Unreportable experiences
◼ Inaccurate thoughts pop up, yet you uncritically believe these thoughts
◼ If this is always the case, then your beliefs are always unreliable
◼ I.e. even now your beliefs about your own visual experiences are unreliable

Spooky experiences
◼ If the invocation of “spooky” experiences is allowed, then no theory can be empirically falsified
◼ If spooky experiences are not allowed → IIT, and all implementationist theories, are incoherent

Speculation
Implementationism violates a core assumption of empirical science
◼ Healthy adults can, if they decide to, reliably ground cognition and overt behavior in empirical data (= sensory experiences)
◼ Without this ability, theories cannot be empirically falsified

Today’s lecture
◼ Summarizing the paper
◼ Analyzing the argument
◼ Preview: illusionism or non-computationalism

Illusionism or non-computationalism
◼ Doerig’s argument can be sharpened for clarification
◼ Moreover, we can broaden the output: input = external stimuli; output = overt behavior AND cognition (thoughts and beliefs)
◼ We will focus on sensory experiences (= empirical data)

Illusionism or non-computationalism
◼ Sensory experiences are NOT deducible from the i/o function (i = external stimulus, o = beliefs/reports)
◼ All beliefs/reports are caused by the i/o function →
◼ Sensory experiences are epiphenomenal with regard to beliefs

Illusionism or non-computationalism
◼ This means: sensory experiences are decoupled from thoughts/behavior, a problem first for consciousness science, and then for science in general

Illusionism or non-computationalism
◼ Not only can you not detect sensory consciousness in others
◼ You cannot form accurate beliefs/thoughts about your own experiences
◼ You cannot accurately know that you have visual experiences

Illusionism or non-computationalism
◼ If sensory experiences have no causal impact on thoughts/behavior,
◼ then theories, reasoning, etc. cannot be based on sensory experiences
◼ This would make any empirical science impossible
◼ The idea of empirical science is to couple solid reasoning to empirical observations, i.e. sensory experiences

Illusionism or non-computationalism
◼ Something has got to give!

Illusionism or non-computationalism
◼ Sensory experiences are NOT deducible from the i/o function (i = external stimulus, o = beliefs/reports)
◼ All beliefs/reports are caused by the i/o function →
◼ Sensory experiences are epiphenomenal with regard to beliefs

Illusionism
◼ Sensory experiences can be deduced from the i/o function [i = stimulus, o = belief/report]

Non-computationalism
◼ Beliefs and overt behavior are NOT caused by a mechanical i/o function
