Lecture 9 - Consciousness
Summary
This lecture covers different perspectives and definitions of consciousness. It outlines the "hard problem" of consciousness, discussing Chalmers' work on the relationship between physical processes in the brain and subjective experiences. The lecture also touches on modern approaches and theories in measuring consciousness.
Full Transcript
Lecture 9 - Consciousness
- Problems, definitions and measurements of consciousness
- Some modern theories on the nature of consciousness
- Underlying brain mechanisms supporting consciousness
- Some evidence from patients with disorders of consciousness

The Problem
"Consciousness poses the most baffling problem in the science of the mind. There is nothing that we know more intimately than conscious experience, but there is nothing that is harder to explain."
From 'The Character of Consciousness' by David Chalmers (2010), Oxford University Press
Consciousness has no generally accepted definition in either science or philosophy, despite the many attempts to define it.

Definition(s) of Consciousness
The word 'consciousness' is common in everyday language, but it is used in different ways:
- Conscious, as opposed to unconscious: equivalent to 'responsive' or 'awake'
- Conscious as 'aware' of something
- Conscious as 'subjective', personal experience

The Problem (2)
Chalmers (1995) coined the phrase "the hard problem of consciousness":
- Easy problems are ones that can be solved (in time) using cognitive models or neuroscience
- Cognitive functions (the easy problems) are accompanied by subjective experiences (the hard problem)
The hard problem of consciousness is also called the mind-body problem: how does a physical process give rise to an accompanying mental process like subjective experience? How does the mental interact with the physical?

"Easy problems" according to Chalmers:
- ability to discriminate, categorise, and react to environmental stimuli
- integration of information by a cognitive system
- reportability of mental states
- focus of attention
- ability of a system to access its own internal states
- deliberate control of behaviour
- difference between wakefulness and sleep

Not everyone agrees that there is a hard problem; some suggest we overestimate its difficulty (and underestimate the "easy problems").

Thomas Nagel (1974): What is it like to be a bat?
"To the extent that I could look and behave like a wasp or a bat without changing my fundamental structure, my experiences would not be anything like the experiences of those animals."
- We can take perspective and imagine, but we can never truly obtain the phenomenology
- Therefore, the felt contents of consciousness (qualia) are beyond scientific explanation (e.g., Chalmers, 1996)

There is no problem, really
Daniel Dennett: the features of any creature's consciousness that are worth knowing are amenable to third-person observation (and scientific study)
Kathleen Akins (1993), "What it's like to be boring and myopic": the subjectivity of bats can be described neuroscientifically; the details remain to be discovered
- For example, "bistable percepts" are the subject of scientific study to describe the neural correlates of visual consciousness (see Koch, 2004)
- We make predictions (hypotheses) about our own mental states just as much as we do about others'
- Even our own first-person subjective experiences need to be accessed (and some can't be)
- Bodily self-processing (the experiencing / observed self) occurs automatically, mostly unconsciously, and is only brought into awareness (the observing self) when things go wrong or our body needs to tell us something (e.g. pain, thirst)
- Otherwise we are strangers to our own selves; strangers to the root causes of our experiences
- Evolution did not give us access to the underlying processes of perception, emotion and action to turn them on or off at will

But wait... that does not mean it's easy
We still need to describe consciousness scientifically:
- Define it in its various forms (awake, aware, specific phenomenology)
- Find its neural correlates in the brain and explain how they relate to experiences
- Identify its purpose

Definition(s) of Consciousness
We can try to define aspects of it (Dehaene et al., 2014):
- Conscious content = what I'm aware of at any moment
- Conscious access = the process of becoming conscious content
- Conscious processing = operations that can be applied to conscious contents
- Conscious report = the process by which conscious content can be described
- State of consciousness = the brain's ability to entertain a stream of conscious contents
Some states of (un)consciousness are described in Laureys (2005).

Modern approaches to measure consciousness
Aim: to identify the neural correlates of consciousness (NCCs)

Methods for investigating consciousness
- Compare different states of consciousness, e.g. asleep vs. awake, anaesthetised vs. awake, with vs. without disorders. How does the brain's activity differ?
  Disorders of consciousness:
  - Vegetative state: sleep-wake cycle (eyes open and some non-specific motor activity; eyes close and sleep; reflexes but no communication)
  - Minimally conscious state: some responsiveness and communication (different from the locked-in state: conscious and cognitively intact but paralysed except for the eyes, unless total LIS)
- Compare different contents of consciousness, e.g. hallucinations and delusions from drugs or psychosis vs. none. How does the brain's activity differ in each of those?
- Compare different access to consciousness for the same content; e.g.
masking studies, where some content is 'masked' by other content: processed to an extent, but without becoming conscious. How does the brain's activity differ in each of those?

Selectively "remove" cognitive functions or abilities:
- Movement? - locked-in syndrome / paralysis
- Feeling / emotion? - PTSD / depression / depersonalisation etc.
- Attention? - often coincides with consciousness, but is not necessary
- Language? - prelinguistic children, aphasic patients
- Memories? - amnesic patients
- Reflexive self-consciousness / inner voice (narrative self)? - immersion / flow

Selectively "remove" structures of the nervous system. Do you need...
- Sensory input organs? - No: dreams, imagery
- Cerebellum (containing ~80% of the brain's neurons)? - No: acerebellar patients
- Primary sensory cortex? - Vegetative patients show activity without awareness; experimental evidence through masking studies, but see blindsight patients... so maybe
- Higher-level sensory association areas like the Fusiform Face Area? - Probably! Prosopagnosic patients; direct stimulation of the FFA distorts the conscious experience of faces

Interim summary
Definition of consciousness? Waking/alert awareness and subjective personal experience (of a present and continuous/extended self within a continuous world)
Daniel Dennett: "the brain's user interface for itself"
Despite the lack of an agreed definition, aspects of it can be measured / described individually
- E.g. conscious state can be measured along a bi-dimensional awake/aware continuum

What do we know?
- What have all of these approaches taught us about the neural basis of consciousness?
- What are some of the modern theories of consciousness that are informed by these approaches and research findings?
(see Northoff & Lamme, 2020, for a review; and Koch et al., 2016, for a critical assessment)

Modern approaches to consciousness
Modern neuronal theories consider consciousness a working part of the cognitive system, residing within relevant functional networks
- Although only certain parts of the brain will reflect conscious experiences (frontal, parietal, superior temporal, insular cortex; closely associated structures like the thalamus and basal ganglia) - most processing in the brain is unconscious - modern theories are less about brain regions...
- ...and more about the way our neurons work and interact / share information with one another in large-scale global networks:
  - Integration and differentiation of neural activity
  - Re-entrant processing and predictive coding
- Consciousness is distributed in the brain, not fixed: "anything" can be conscious if it is part of the current working circuit
- Long-distance axons and synchronised neural activity are needed to make a circuit
- Re-entrant processing may be needed to fully establish the integrated representations that are subjectively experienced
- (There is no mind-body problem: qualia are not caused by neural activity, they are entailed by neural activity)

Modern neuronal theories of consciousness:
- Stanislas Dehaene (2006, 2014): Neuronal Global Workspace Theory, based in part on Bernard Baars' (1988, 1997) GWT and Giulio Tononi's (2004, 2008) Information Integration Theory
- Gerald Edelman (2000, 2011) and Lawrence Ward (2011): Neural Darwinism / dynamic core theory; thalamic dynamic core theory
- These are not opposing; there is a fair amount of overlap (see Edelman, Gally & Baars, 2011) - the emphasis on the relative importance of areas differs
- Northoff & Lamme (2020) review even more theories and propose some convergence; Koch et al.
(2016) identify empirical progress and problems

Neuronal Global Workspace Theory
Stanislas Dehaene (2009; 2014):
- What we subjectively experience as a conscious state is the global availability of the corresponding information
- Once in the global workspace (GWS), that information can be 'broadcast' to specialised processors
  - This makes verbal and other reports possible
  - It changes automatic responses into deliberate ones
- 5 main types of processors are connected to the GWS: perceptual systems, motor systems, the attentional system, evaluative systems, and long-term memory
- Consciousness relies upon the integrity of long-distance connections exchanging information (cortico-cortical circuits; long-axon pyramidal neurons)
- Especially the dorsolateral prefrontal cortex (dlPFC) and its connections are important in allowing access to consciousness
  - The dlPFC is highly deactivated during REM sleep (dream states) vs. during waking self-awareness
- Cortico-cortical exchange of information allows flexible routing, which in turn allows the slow, serial performance of novel or arbitrary tasks (the ultimate function of consciousness)

Neural Darwinism / Dynamic core theory
Gerald Edelman (e.g., 2000, 2011):
- The way the brain is structured (its connectivity) is fundamental to understanding consciousness
- Neural structures that fire in synchrony form the basic operating units of the brain (if they do not, they are pruned in development = Neural Darwinism)
- Therefore, wirings organise into similar maps of neural structures in all of us, although no two people are wired in exactly the same way
- The nature of consciousness depends on the structural complexity of the networks (i.e.
what it's like to be a...)
- Thalamo-cortical loops are important
  - The thalamus gates sensory input to the cortex
  - It is a "miniature map" of the cortex
  - It is well placed to integrate cortical computations, sensory input and subcortical activity (see also Lawrence Ward's thalamic dynamic core theory)
  - The thalamo-cortical system in humans is larger than in other animals
  - Disruption of this system can result in disruption of consciousness (vegetative state, anaesthesia)
- Dynamic core = re-entrant connections in thalamo-cortical loops
- Neural processes that form the dynamic core of synchronised neural activity contribute to what is conscious (the same processes can also not contribute, when they do not form the dynamic core)

Re-entrant processing
- Modern theories assume that most conscious presence requires more than feedforward processing
- Cortico-cortical (or thalamo-cortical) connections transfer information from lower-level to higher-level areas (feedforward) and then back to lower-level areas (feedback)
  - E.g. central to visual perception, attention and visual awareness in visual cortex
- Some (predictive coding theories) go so far as to suggest that the prediction of perceptual content (feedback) and its comparison with the actual content (feedforward information) is the critical component of consciousness
- Disrupted re-entrant processing is investigated in visual masking paradigms, e.g. object substitution masking (Enns & Di Lollo, 1997)
- Thalamo-cortical re-entrant connections allow continuous re-categorisation / recognition of information
  - Consciousness is just such a recognition
  - First-order conscious activity (concepts) = the brain re-categorising its own activity
  - Built on first-order concepts, you get secondary consciousness (concepts about concepts), language and a concept of the self

What have we learned from conscious states?
How much damage can there be without a loss of consciousness? Quite a lot!
- Acerebellar patients, or those who lose part of their cerebellum, show many movement-related problems (ataxia, slurred speech, unsteady gait)
- But they do not complain of a loss or diminution of consciousness
- Philippi et al. (2012): a patient with severe amnesia affecting his "autobiographical self"
  - Core physical self-awareness (basic self-recognition, sense of self-agency) was preserved
  - Extended self-awareness (a stable self-concept and intact higher-order metacognitive abilities) was also largely intact
  - So the insular cortex, ACC and mPFC are not required
  - Self-awareness is likely to emerge from more distributed interactions among brain networks, including those in the brainstem, thalamus, and posteromedial cortices

Disorders of consciousness
- Are patients 'conscious' (e.g. do they 'feel' pain, like MCS patients, or just reflexively respond to it, like VS patients)? How do we know, and how can we communicate with them?
- Consciousness is assessed and classified along e.g. the Coma Recovery Scale or the Glasgow Coma Scale
- The Coma Recovery Scale - Revised (CRS-R) consists of 23 items, grouped into 6 sub-scales (Auditory, Visual, Motor, Oromotor, Communication, Arousal)
  - Total score between 0 (worst) and 23 (best); a score of 10 distinguishes vegetative from minimally conscious states
- Vegetative state: GCS subscores show eye opening [E > 1] but absence of verbalisation [V < 3] and absence of localisation of pain [M < 5]
- Other scales show that some of these patients may actually be minimally conscious (they show visual fixation and visual tracking)

Awareness in disorders of consciousness?
- EEG measures in vegetative vs. minimally conscious vs. conscious patients show information sharing in the brain as an index of consciousness (e.g. King et al., 2013)
- To quantify global information sharing, King et al. measured the extent to which neural activity (EEG signals from different locations on the head) shows similar fluctuations
- Information sharing increases with consciousness, especially over centro-posterior regions

Relative importance of (pre)frontal vs. posterior regions?
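Returning to the coma scales described above: the cut-offs can be sketched in a few lines of code. This is a toy teaching aid only, not a clinical tool; the function names are invented here, and the handling of the boundary score of 10 is an assumption based on the lecture's wording ("a score of 10 distinguishes" the two states).

```python
# Toy illustration of the coma-scale cut-offs from the lecture.
# classify_crs_r and matches_vegetative_gcs are hypothetical helpers,
# NOT clinical instruments.

def classify_crs_r(total: int) -> str:
    """Classify a CRS-R total score (0 = worst, 23 = best).

    Treating scores of 10 and above as minimally conscious is an
    assumption; the lecture only states that 10 is the dividing score.
    """
    if not 0 <= total <= 23:
        raise ValueError("CRS-R total must be between 0 and 23")
    return "minimally conscious" if total >= 10 else "vegetative"

def matches_vegetative_gcs(eye: int, verbal: int, motor: int) -> bool:
    """GCS subscore pattern for the vegetative state: eye opening (E > 1)
    but no verbalisation (V < 3) and no localisation of pain (M < 5)."""
    return eye > 1 and verbal < 3 and motor < 5

print(classify_crs_r(6))                # a low total suggests a vegetative state
print(matches_vegetative_gcs(3, 1, 4))  # eyes open, no speech, no pain localisation
```

Note that, as the lecture stresses, such cut-offs can misclassify: patients scored as vegetative on the GCS pattern may show visual fixation or tracking on finer-grained scales.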
- The dlPFC is important in Dehaene's GWT, but brain damage studies show posterior parietal regions may be more critical
- Northoff & Lamme (2020):
  - Posterior regions mediate the phenomenal / experiential aspects of sensory contents (→ physical self-awareness / conscious presence)
  - (Pre)frontal regions mediate additional cognitive processing of the same sensory content (accessing, reporting, knowing, meta-cognition) (→ narrative self-awareness)
- In disorders of consciousness there is often also brain damage... another angle is to look at conscious vs. unconscious states in healthy brains: anaesthesia and (deep, non-REM) sleep, when we are regularly unconscious
- E.g. Massimini et al. (2005, 2007): the brain responds differently to stimulation when asleep compared to when awake → information is not processed as widely across regions / is not "broadcast", "shared" or "integrated"
- Sleep reduces the normally "integrated and differentiated" processing of information
- Both integration and differentiation are important for consciousness
  - Integration is measured as neural synchronicity across areas (networks), and differentiation as the complexity (or irregularity) of the EEG signals ("EEG entropy")
  - Both integration and differentiation are reduced in sleep
- Using TMS and EEG in a "perturb-and-measure" approach:
  - Non-integrated system = localised effects
  - Non-differentiated system = homogeneous effects over time
  - Integrated and differentiated (conscious?) system = complex spatio-temporal patterns of effects
- Massimini et al. (2005, 2007): stimulation of premotor cortex (PMC); measure the TMS-pulse-evoked EEG response, the peaks of which move around the cortex when awake but stay local when asleep
- Perturbational complexity index (PCI) - the best indicator of consciousness?
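The "integration and differentiation" idea above can be made concrete with crude stand-ins on toy signals: mean pairwise correlation across channels for integration, and Lempel-Ziv-style complexity of a binarised signal for differentiation. This is a sketch of the general idea only, not the actual measures used by Massimini et al. or the PCI; all function names and the synthetic signals are illustrative assumptions.

```python
# Crude stand-ins for "integration" and "differentiation" on toy signals.
# NOT the published measures - a minimal sketch of the concepts only.
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def integration(channels):
    """'Integration' as mean absolute pairwise correlation across channels."""
    pairs = [(i, j) for i in range(len(channels))
             for j in range(i + 1, len(channels))]
    return sum(abs(pearson(channels[i], channels[j])) for i, j in pairs) / len(pairs)

def binarise(x):
    """Threshold a signal at its median, giving a bit string."""
    m = sorted(x)[len(x) // 2]
    return "".join("1" if v > m else "0" for v in x)

def lz_complexity(bits):
    """'Differentiation' as the number of new phrases in a simple
    Lempel-Ziv-style parse of the bit string."""
    phrases, i = set(), 0
    while i < len(bits):
        j = i + 1
        while bits[i:j] in phrases and j <= len(bits):
            j += 1
        phrases.add(bits[i:j])
        i = j
    return len(phrases)

random.seed(0)
shared = [math.sin(t / 5) for t in range(200)]  # one slow shared oscillation
sleep_like = [[v + 0.01 * random.gauss(0, 1) for v in shared] for _ in range(4)]
noise_like = [[random.gauss(0, 1) for _ in range(200)] for _ in range(4)]

# The shared oscillation is highly integrated but barely differentiated;
# independent noise is the opposite. A conscious-like regime would need
# to score high on BOTH axes - which neither toy signal set does.
print("integration: sleep-like", round(integration(sleep_like), 3),
      "vs noise-like", round(integration(noise_like), 3))
print("complexity: sleep-like", lz_complexity(binarise(sleep_like[0])),
      "vs noise-like", lz_complexity(binarise(noise_like[0])))
```

The design mirrors the perturb-and-measure logic in miniature: a non-integrated system shows low cross-channel correlation, and a non-differentiated system shows low signal complexity; only the combination of both suggests the complex spatio-temporal patterns associated with consciousness.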
Summary
- Consciousness arises from the brain's multiple cognitive processes
- Consciousness requires a large repertoire of neural activity patterns:
  - Large-scale brain networks (long-range functional interactions between disparate regions, both cortico-cortical and thalamo-cortical)
  - Integration of neural activity (synchronisation of activity across regions)
  - Differentiation of neural activity (complexity)
  - Re-entrant processes (feedback projections in particular; feedforward signalling may not be enough)
  - Prediction of / top-down attention to, and comparison with, actual input
- The quest for the neural correlates of consciousness should consider causal evidence (perturbation: brain stimulation, lesion studies) as well as correlational evidence (EEG, MRI, PET etc.)
- ...and should look at healthy brains in different states, as well as at disorders of consciousness

#PS495