Full Transcript

Week 1 Day 1 What is cognitive science? Short answer: the interdisciplinary study of the mind Long answer: the interdisciplinary study of the mind as an information processor Different disciplines Philosophy Psychology Linguistics AI Anthropology Neuroscience Education A taste of what we’ll talk about What happens when minds break? Ex. alien hand syndrome How your mind creates the colors you see → inferences How your mind creates the shapes and motion you see How your mind keeps track of objects that change over time How your mind creates the words that you hear How hard it is to pay attention (change blindness) How your mind learns a language How that’s different from how AI currently learns a language Why artificial minds are now much better than humans at some tasks And still amazingly stupid at many other tasks you find easy What does your brain have to do with it? And how do we get those pretty pictures? The big questions Morality Free will Consciousness What’s nice about cogs Not only does it bring together research from all of these different disciplines, it offers a unified way of answering questions about how our minds work Cogs argues that all of these aspects of our minds can be explained by thinking carefully about how our minds process information Shared questions, many methods What to expect in the next few classes Foundations of cogs Language in the Brain Well-established, left-lateralized language network Validated across: multiple modalities, multiple methods, lifespan, atypical populations, and a range of languages But what do those languages offer typologically? Leveraging diversity of the world’s languages to inform our understanding of how linguistic information is stored and processed 24W Ling 11: Language Acquisition 24S COGS 50: Aphasia and the neurobiology of language Contact prof about language research Syllabus Week 1 Day 2: Computation in COGS How does your mind do what it does? How do you see an image? But who is seeing that image? 
Conscious self? The problem of the homunculus (the little guy) We cannot explain how our minds work by positing another equally intelligent being who does our minds’ work for us Unilluminating Leads to an infinite regress, need to keep positing more and more little guys There aren’t many things we can be 100% sure of in cognitive science, but here is one of them: There is not a little guy inside your head, with another little guy in his head, with another… This isn’t really a problem that’s about cat gifs… You see things You understand language You remember what you had for lunch You decide what’s right and wrong In the past, we approached the human mind with the assumption that we understood how it worked (ie we succumbed to instinct blindness) Foundation of cogs is to approach the human mind without the assumption that you understand how it does what it does Instead, we try to figure out how the complicated things we do, seeing, understanding, remembering, etc. could be achieved by a set of somewhat simpler steps Four Foundational Principles in COGS Decomposition Taking a complicated (mental) process and breaking it into component pieces Materialism/Naturalism The claim that the only things that exist are material parts of the natural world Reduction Your mind is ultimately the result of purely physical processes that it “reduces to” Computation We can explain your mind and how it relates to the underlying physical processes in terms of information processing The bet: we can explain all of these without anything mysterious. 
These are just the result of somewhat complex mental processes, which can be broken down into simpler and simpler steps (computations), that ultimately reduce to basic physical changes in your brain The idealized cognitive science process: Take an ability that minds do Explain how the ability may arise from a set of somewhat more simple processes Explain each of those processes in the same way Stop when you get to something so simple that we can get a machine to do it by following a series of specific, executable rules Ex: you learn things These commitments have real implications! There is no spooky stuff (naturalism) Your mind must be a product of physical stuff occurring (materialism) Your brain is the obvious candidate (your mind reduces to your brain) So, now consider what it means to learn some new fact So where does this leave us? Some physical part of your brains must be changing – and staying changed each time you learn something (reductionism) But each of you has a different brain, so how could you all have really learned the same thing? How could this possibly work, even in principle?? Reduction → level to level (mental to physical) Decomposition → processes into component parts An incredibly brief history of cognitive science → how we arrived at this problem as a discipline We’ve been thinking about how our minds came to understand things for a long, long time Rationalism Direct perception of the truth Rationalists vs. Empiricists For the empiricists, you had to take perception and ask how it could guide you to the truth Much of this was captured in terms of association between different perceptions, and between perceptions and action… These guys were really sold on Naturalism None of that spooky stuff! Many many years later, during the beginning of psychology as an empirical science, this same impulse gave rise to behaviorism, which explicitly argued that we focus solely on the physical stuff! 
We can go from physical stimulus input to physical behavioral output and just skip the middle part The brain is just a mapping-machine: from physical inputs to physical outputs Reinforcement Learning - BF Skinner - Operant Conditioning Pecking the key behavior was reinforced by food Operant response → pecking the key Reinforcer → food pellet reward, increases likelihood of the response Response Differentiation: reward only delivered when the green light is on This is called stimulus discrimination The response is said to be under the control of the stimulus The benefit of behaviorism is that it didn’t seem mysterious We didn’t know how one could directly perceive the truth or how one could have innate knowledge But we knew what the (objective) external stimulus was and we could reliably produce a behavior upon presentation of a stimulus And so maybe all your minds are also just a scaled-up version of this Behaviorism seemed sciencey! Real benefit of this view was that everything was clearly physical, and so we felt like we were at least satisfying Materialism The trouble with Behaviorism was it turned out we were too smart Here’s how we realized that this wasn’t going to work: The exact same objective stimulus (physical thing) can give rise to different perceptions and behaviors (physical things) And the exact same behavior can be produced by infinitely different stimuli So there doesn’t seem to be a nice mapping between physical things and other physical things This is what was argued for in the rise of the “cognitive psychologists” Related to what is called the “poverty of the stimulus” argument Chomsky’s review of Skinner, optional reading There’s too little stimulus in the world to learn all of what we know This isn’t just theoretical, you can see it yourself Ex. visual illusions about light sources/shadows What does this show? 
Square B is always the same physical stimulus, but it elicits a different response so there is not going to be a direct simple mapping from input to output It’s not just the physical properties of the stimulus that matter Your brain is clearly a purely physical system, there’s nothing else! The inputs have to be physical (ex light waves producing nerve excitation) Everything that happens based on those inputs has to be physical (ex more neuron excitation producing more neuron excitation) The result of all of this (your perception) can only be the result of these physical changes …so there’s an objectively physical system, but the operation of that system is independent of the objective physical properties of its input? Computation: the cognitive scientist’s answer to our puzzle Alan Turing: perhaps the most important person in the history of cognitive science A detector Turing machine… read/write/movement What are we doing??? This is just meaningless… Kinda true, until you know what the symbols on the tape represent If the numbers on the Turing tape represent binary numbers… What is the Turing machine detecting? In this example it’s detecting odd numbers! And it can do this for any arbitrarily large number! A machine with this kind of ability inspired a new way of thinking about the mind Why? For this machine, there is no nice, neat mapping from a finite set of stimuli to behavior! Why? There are infinitely many odd numbers, but the machine will respond to each of them the same way Turing showed how we could maintain Naturalism and Materialism, but allow for independence between the physical inputs and outputs of a physical system. He showed how intelligent systems need not be mysterious The real key, however, was the universal Turing machine A Turing machine that can simulate any other arbitrary Turing machine Ex. 
it could first run the Turing machine we built to check for odd numbers and then, only if the number is odd, go into a new state and follow a new set of instructions This is what solved the puzzle! It is a physical system whose operation is caused not just by the physical input, but by the information it represents given that input Context! Proof of concept for a physical system where the info represented INCOMPLETE Still not convinced? Dog example Every single brain is different No one shares a single neuron, all have utterly unique sets of connections Despite massive physical differences in brains, every person likely responded to every physical input by saying “dog” There is no physical fact which provides a good explanation of the mapping We appeal to computation of the information → multiple realizability Multiple Realizability What we’ve shown is that the same thought–one about dogs–can be realized by many highly varied physical systems Now reconsider again the example of determining whether a number is odd What we’ve learned is that yet another kind of physical system (something made out of tape and markers) can solve the same problem. 
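The odd-number detector described above can be sketched in a few lines of code. This is a toy reconstruction, not the exact machine from lecture: the state names (q0, q1, "odd", "even") and rule table are my own illustrative choices, and the head is assumed to start on the leftmost digit.

```python
# A minimal Turing-machine simulator and an odd-number detector.
# Illustrative sketch only; the rule table below is an assumption,
# not the specific machine presented in class.

def run_tm(tape, rules, state="q0"):
    """Run a Turing machine until no rule applies; return the final state.

    tape  -- list of tape symbols (head starts at index 0)
    rules -- dict mapping (state, symbol) -> (write, move, next_state),
             where move is "L" or "R"
    """
    pos = 0
    while (state, tape[pos]) in rules:
        write, move, state = rules[(state, tape[pos])]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return state

# Odd-number detector: scan right past the binary digits, then step back
# and inspect the last digit. A binary number is odd iff its last bit is 1.
ODD_RULES = {
    ("q0", "0"): ("0", "R", "q0"),    # skip over digits
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "#"): ("#", "L", "q1"),    # hit the blank after the number; back up
    ("q1", "1"): ("1", "L", "odd"),   # last digit 1 -> odd
    ("q1", "0"): ("0", "L", "even"),  # last digit 0 -> even
}

print(run_tm(list("1011#"), ODD_RULES))  # 1011 is 11 in decimal -> odd
print(run_tm(list("1010#"), ODD_RULES))  # 1010 is 10 in decimal -> even
```

Note that `run_tm` is generic: hand it a different rule table and the same physical mechanism computes something else entirely. That is the universality point in miniature, and the machine classifies infinitely many inputs (every odd number) the same way despite their physical differences.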
At a computational level, all of these physical systems are doing the same intelligent behavior Computation lies at the heart of cognitive science Most do not think we can explain behavior without appealing to the representational system that we call a mind It’s why Cogs focuses on the mind as an entity separate from the brain It’s why we focus on building minds in non-human physical machines Keeps you honest about what you know about what you are actually doing The idealized COGS process Take an ability that minds do Explain how that ability may arise from a set of somewhat more simple processes Explain each of those processes in the same way Stop when you get to something so simple that we can get a machine to do it by following a series of specific, executable rules Implement the rules in a different physical system and show that it can also do the same thing Stepping back: are you really trying to say our minds are just Turing machines? Is that all there is to a mind? The Turing test for detecting other minds…but we aren’t actually that good at detecting other minds Some obvious dis-analogies: how is a Turing machine not like a human mind? Turing machines in principle exceed the scope of humans since we have finite computational power Turing machines have a central processor that operates serially, one digit at a time. We process information in parallel Turing computation is deterministic, human thought is not In what sense are we actually computers? Cognitive scientists mean that minds are physical systems whose operations are explained better at a representational level rather than a physical one Correct analysis of the rules is given in terms of the information represented in the system, not in the physical structure Returning to where we began: No spooky stuff (naturalism) Your mind must be a product of physical stuff occurring (materialism) Your brain is the obvious candidate Where does this leave us? 
Some physical part of your brains must be changing–and staying changed each time you learn something (reductionism) How could this possibly work, even in principle? Week 1 Day 3: What Happens When Minds Break? They often tell us a lot about how they were put together Symptoms of broken things = clues about how things work normally What’s broken? What works? What does it tell us? Broca’s aphasia Intact: recalling the past, understanding the question Double Dissociation Process A is intact while B is disrupted B is intact while A is disrupted Good evidence these processes are independent Our question Can disrupted minds help us understand how they work normally? If you want to understand how the mind does what it does, it’s extremely helpful to look at cases where part of it is no longer working We can learn at least 2 kinds of information from rare cases of brain damage 1) by examining which parts of the brain are lesioned, we can learn what part of the brain typically performs the function that is impaired Psyc 38 covers more about mind-brain connection 2) by examining patterns of symptoms, we can learn more about how the mind works normally, and what cognitive functions are separate from other functions Mind or Brain? Is there a difference between the two? 
Oliver Sacks “Speed” illustrates this principle beautifully Take something as fundamental as how we perceive time Wonder about how it works… Sacks asks what we can learn from various disruptions: Drugs Tourette’s Syndrome Parkinson’s disease Neurons and dopamine and adrenaline NB: Sacks was writing for a popular audience and so focused on telling a compelling story We’ll focus instead on the scientific details Plan for the day: Three types of case studies Cases of visual agnosia and related phenomena Like “the man who mistook his wife for a hat” Cases of disrupted proprioception Awareness of one’s own body and movement Cases of “value related” (moral behavior) brain damage Will be posted later as a mini-lecture so we have time for questions and discussion Visual Agnosia reveals incredibly rich structure Color and shape are separate from object recognition Object recognition is necessary for reading music and words Object recognition seems to be separate from your semantic knowledge Can be totally damaged without other deficits - he has no trouble talking He can use information he can get, like shape and color or size, and then use semantic knowledge to make a guess about what object he’s seeing …and it’s separate from face perception! We use object perception to read music or words, but we don’t use it to see other’s faces Not something we could have known a priori? Face perception is holistic - he sees the entire face, not just parts of it He doesn’t see whole objects, just parts of them If face perception is separate from object perception, does anyone have the opposite problem? (ie can they recognize objects but not faces?) Prosopagnosia: what’s disrupted and what’s preserved? 
Face recognition is impaired, but object recognition is preserved Opposite of the previous patient Knowledge about people is also preserved Terry didn’t forget everything about her mother, she still knows who she is She can recognize her mother from her voice, and even from other visual cues like her clothes or her walk Double dissociation: object recognition and face recognition are separable, and both can be impaired independently Who knew?! Decomposition, cognitive scientists’ guiding principle Take an ability that minds do Explain how that ability may arise from a set of somewhat more simple processes Disorders can give us a hint about how such a decomposition might work! A different kind of visual agnosia: Balint’s Syndrome Movement and perception It’s a visual disorder, but object recognition remains intact Patients are not able to see more than one object at a time (simultagnosia) They often also suffer from optic ataxia They can’t correctly reach for an external object that they see No difficulty when they are reaching for a part of their own body or something they’re touching Suggests a dissociation in the systems we use to guide our reaching behavior They have difficulty fixating their eyes Insights from Balint’s Syndrome No trouble with visual acuity: for one object at a time, they see it well Suggests difficulty arises from something after primary visual processing The current best understanding of this is that it’s a deficit of attention Suggests that the perception of objects is not something we voluntarily control – rather they appear to us by grabbing our attention from the “bottom up” Patients can’t control which objects they see; some just grab their visual attention We attend to objects as a whole, not just their various parts They can tell if a single object has unequal sides, but not whether two objects are of unequal length And perhaps most surprising… Coslett & Saffran (1991) presented a patient with pairs of words or pictures briefly and 
asked her to read or name them When the two stimuli were not related semantically, the patient usually saw only one of them, but when they were related, she was more likely to see them both This suggests that there is some (nonconscious) way in which participants actually do see the objects they are not aware of! How else could their attention be affected by the semantic relatedness of the objects/pictures? So the processing of objects, up to determining what semantic categories they fall in, happens at least partially without any conscious awareness Despite this, we need conscious awareness of the object to interact with them In many cases, decomposition is not at all intuitive! Helps us escape instinct blindness ~cases of disrupted proprioception~ Phantom Limb Syndrome Points on the face of a patient that elicit precisely localized, modality-specific referral in the phantom limb 4 weeks after amputation of the left arm below the elbow. Sensations were felt simultaneously on the face and phantom limb Sensory cortical homunculus Insights from Phantom Limb Syndrome Reflects the topography of sensory cortex And the contralaterality of the brain (ex right side of the brain controls left side of the body) Suggests that this functional topography can be changed even in fully developed adults Neural reuse, neuroplasticity Demonstrates the critical role of visual and tactile feedback in sensory perception! Again, this wasn’t obvious! From sensory to motor cortex: Alien Limb Syndrome This can have a sudden onset Can last several days to several years Proposed that lesions in the parietal cortex result in isolated activation of the contralateral primary motor area due to its release from the intentional planning systems Damage to the parietal cortex can also cause lack of awareness of movements due to loss of proprioceptive feedback or left hemineglect. 
The combination of these factors results in initiation of spontaneous movements without the patient’s knowledge or will Insights from alien limb syndrome Reflects the separability between motor cortex (which directly controls movement) and intentional planning (frontal cortex) Also the separability of the motor cortex from proprioceptive feedback, which is important for knowing that your limbs are moving! These are the kinds of pieces we don’t realize were all coordinating and creating what seemed so simple and intuitive to us: just moving our arms Another example of removing instinct blindness Reflections Learning about how damage to the brain leads to specific changes in cognition teaches us new things about the brain and about cognition Similarly, we can use non-invasive neuroimaging techniques (ex. fMRI) both to learn how the brain works, and to gain new insights about cognition from observing and Values → Phineas Gage Lesion of his prefrontal cortex → personality changed, became mean towards others Not himself anymore Damage to the prefrontal cortex increases utilitarian moral judgments What is good and moral is what results in the best outcome for the greatest number of people A wild demonstration: lesions leading to pedophilia Brain damage and functional deficits, a tumor Also had a number of other deficits Visual cortex After removal of tumor, his visual functioning was restored and his pedophilia disappeared Week 2 Day 1 Susan Carey Harvard psychologist Language acquisition, fast mapping Representational primitives Mecca Bullchild Sabrina Chu Maeve Kenney Sarah Parigela Flora Marie Roberts Decomposition vs. reduction Decomposition: breaking down a process into steps Reduction: going between levels (mental to physical) TURING MACHINES Given a tape and a machine, how would the tape get altered? The output? 
Start from the first square of the tape on the left Always assume you’re starting in q0
input:  # # 1 0 0 1 1 1 # #
output: # # 0 1 1 0 0 # # #
Rule format: if you read x, you write y, and move z. Ex. 1, 0, R = when you read 1, you write 0, and move right
Exercise: write the instructions for a Turing machine that turns every 0 into a #, but only if the sequence of digits starts with a 1
Week 2 Day 2: Modularity and Evolution Or, are we really just a collection of tiny computers? On Where We’ve Been One of the guiding principles for cognitive science is decomposition We want to explain complex phenomena in terms of the less complex parts that give rise to them Ex. brain injuries This helps us avoid the problem of the homunculus (and infinite regress) But, intuitively, it doesn’t seem like the mind is made up of a bunch of separable parts–it seems like one whole thing How do we know that the mind has parts? The way that minds break can give you an idea of what some of the dissociable parts are Today: So if the mind has parts… How might we characterize a part of the mind? Why does it have parts, anyway? Computational question Information-processing devices are designed to solve problems They solve problems by virtue of their structure Hence to explain the structure of a device, you need to know: What problem it was designed to solve Why it was designed to solve that problem and not some other one Physical Objects Have Parts Jerry A. Fodor, The Modularity of Mind Faculty Psychology Phrenology was the practice of evaluating bumps and irregularities that were thought to be the result of pressure caused by underlying faculties This is a psychology-like, highly non-empirical pseudo-science with a nugget of truth Modularity (ex. 
Python) Package of information that can be added or removed from the main processor Jerry Fodor (1935-2017) and The Modularity of Mind Faculty Psychology and Phrenology The idea that mental functions are localized to different parts of the brain Self esteem, secretiveness, parental love, mirthfulness, etc. Franz Joseph Gall (1758-1828), German physiologist Phrenology was the practice of evaluating bumps and irregularities that were thought to be the result of pressure caused by underlying faculties This is a psychology-like, highly non-empirical pseudo-science Fodorian Modules Modules are domain specific They only act on certain types of input Modules have mandatory processing If that kind of input is present, the module operates automatically They are informationally encapsulated Information from outside the module, including thoughts and desires, doesn’t affect processing Aka “cognitive impenetrability” Modularity may be evolutionarily beneficial “Perception is above all concerned with keeping track of the state of the organism’s local spatiotemporal environment. Not the distant past, not the distant future, and not…what is very far away. Perception is built to detect what is right here, right now–what is available, for example, for eating or being eaten by” “[Therefore,] it is understandable that perception should be performed by fast, mandatory, encapsulated…systems that are prepared to trade false positives for high gain. It is, no doubt, important to attend to the eternally beautiful and to believe the eternally true. But it is more important not to be eaten” Fodor 1985 Does the Brain have Fodorian Modules? 
Modular processing is domain specific Example of domain specificity Yin (1969) showed people pictures of various objects (faces, houses, airplanes) Tested recognition of the objects presented in different orientations People were especially bad at re-identifying upside-down faces Illustrates that the parts of our minds that recognize faces are only able to properly compute a certain kind of domain-specific structured input Face processing happens in the canonical (right side up) format, and is disrupted otherwise Not the case with processing normal objects. Ex you can identify an airplane in any orientation Modular processing is mandatory and automatic Ex: mandatory face processing Evidence for mandatory word processing: Stroop Effect Visual words get automatically processed Informationally encapsulated Evidence from depth perception: The Müller-Lyer Illusion edge/corner perception A modular architecture of perceptual systems “It is understandable that perception should be performed by fast, mandatory, encapsulated….systems that are prepared to trade false positives for high gain. It is, no doubt, important to attend to the eternally beautiful and to believe the eternally true. But it is more important not to be eaten” - Fodor The Benefits of Modular Architecture Fast and efficient processing (for routine tasks) Encapsulation protects against just seeing what we want to see (Mostly) reliable delivery of perceptual information about the world Biased towards false-positives, just to be safe Why modules? David Marr’s AI Perspective: fully holistic systems are “extremely difficult to debug or to improve, whether by a human designer or in the course of natural evolution, because a small change to improve one part has to be accompanied by many simultaneous compensating changes elsewhere” Marr, 1976 Marr’s levels of analysis Computational algorithmic/representational Implementation What is the problem being solved? What is the functional explanation? 
Challenges to this model of the mind/brain: It’s modules all the way down (and up?) The parts aren’t modular It’s modules all the way down (and up?) What kind of computer is the mind? “The human cognitive architecture is far more likely to resemble a confederation of hundreds or thousands of functionally dedicated computers” - Cosmides and Tooby Theory of Evolution (in a slide) heritability Intra-species variation different members of a species have different traits This variation is characteristically generated by random mutations Competition (for resources, mates,...) Principle of variation in fitness some organisms have traits that make them more likely to survive and reproduce - they fit the environment better Implies the principle of natural selection The traits that make organisms more likely to survive and reproduce will become more common in the later generations Evolutionary Psychology “Nothing in biology makes sense except in the light of evolution” - Theodosius Dobzhansky Nothing in psychology makes sense except in the light of evolution - Cosmides and Tooby Modularity in Evolutionary Psychology “The mind is probably more like a Swiss army knife than an all-purpose blade: competent in so many situations because it has a large number of components…each of which is well designed for solving a different problem” - Cosmides and Tooby Computational Question Information-processing devices are designed to solve problems They solve problems by virtue of their structure Hence to explain the structure of a device, you need to know: What problem it was designed to solve Why it was designed to solve that problem and not some other one Marr’s levels of analysis Computational algorithmic/representational Implementation What is the problem being solved? What is the functional explanation? 
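Marr’s three levels can be made concrete with a toy example (my own, not from the lecture): the computational level fixes the problem, mapping a list to its sorted order; the algorithmic level is one of many procedures that compute that mapping; and the implementational level is whatever physical machine runs the procedure.

```python
# Toy illustration of Marr's levels (assumed example, not from class).
# Computational level: the problem "sort a list".
# Algorithmic level: two different procedures that solve it.
# Implementational level: the physical machine executing the code.

def insertion_sort(xs):
    """One algorithm for the computational problem 'sort'."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)  # place x among the already-sorted elements
    return out

def selection_sort(xs):
    """A different algorithm solving the same computational problem."""
    xs = list(xs)
    out = []
    while xs:
        m = min(xs)       # repeatedly pull out the smallest remaining item
        xs.remove(m)
        out.append(m)
    return out

data = [3, 1, 4, 1, 5]
# Identical computational-level behavior, different algorithmic-level stories:
print(insertion_sort(data) == selection_sort(data))  # True
```

The same input-output mapping is realized by two different algorithms, and each algorithm could run on radically different hardware, which is why a computational-level description can be correct even when the algorithmic and implementational details vary.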
An example of a social cognition module: Cheater detection Surviving in social groups requires trust Which gives rise to the free-rider problem, when people use something they haven’t contributed to Example: private jets account for 17% of all flights the Federal Aviation Administration is responsible for, but contribute only 2% of the taxes that fund it How do we know that others are following the rules? i.e. automated cheater detection systems? “Cheater detection” module Card color/number vs. age/drinking test People perform better when the task is mapped onto a pre-existing social rule In studies using these kinds of tasks, most participants have difficulty with the abstract card version (~10% get it right), while many find the task easier (65-85% accuracy) when the examples are about social rules Ex. drinking age; or “if you borrow my car, then you have to fill it up with gas afterwards” In their Social Contract Theory, Cosmides & Tooby propose a domain-specific cheater detection module; when engaged, it automatically looks for those breaking social norms Are there other possible explanations for these results? Evolutionary Psychology Summary The view from Evolutionary Psychology argues for even more modularity than Fodor does They argue against the evolution of a general-purpose processor How could that evolve? And for what purpose? Instead: high-level mental computations (ex. dealing with social problems) are likely to have dedicated modular functioning Skepticism about Natural Selection Explanations Explanations are too easy, and sometimes non-falsifiable Just-so stories Why do humans have two breasts instead of one? In case of twins? 
Or: bilateral symmetry exists across species Getting stuck on function/problem-solving rather than identifying larger evolutionary structures that we carry Vestigial structures not serving a current purpose Skepticism about Natural Selection / EvoPsych Explanations Not all “modules” are genetically hard-wired The Fallacy of Biological Determinism: there’s no good evidence suggesting that IQ is inherited Lewontin: many have argued over the years that intelligence is inherited and therefore determined by genetics, however, such conclusions were: Based on flawed data that under-samples variation in environments Used to justify social disparities (ex. racist “eugenics” theories) Is IQ hard-wired? Intelligence quotient = IQ Quantifying reasoning skills, NOT truly “intelligence” “People who boast about their IQ are losers” - Stephen Hawking There’s ample evidence that environment plays a large role in IQ Turkheimer et al. (2012) Nisbett et al. (2012) Flynn Effect Turkheimer et al. (2012) 95 pairs of twins among 1000 at-risk infants ⅓ randomly assigned to treatment group Treatment = home visits from a clinically-trained educator, access to a child development center for ages 1-3 Cognitive battery administered at age 8 Heritability for IQ? No! Treatment group > Control group Nisbett et al. (2012) In lower SES groups, environment accounts for most or all of the variance in IQ In higher SES groups, genetic heritability accounts for most or all of the variance in IQ Interpretation: in low-SES groups, the early environment prevents children from developing their full potential How much of IQ comes from one’s environment? Flynn Effect: Social Environment. IQs have been steadily rising, too quickly for it to be evolution Change across all “modules” Happening too fast for an evolutionary explanation Challenge 2: Maybe there aren’t modules after all? McGurk Effect Perception depends on top-down mechanisms Surrounding context affects perception Top-down vs. 
bottom-up Bottom-up: data driven, low level properties Top-down: category-based experience and expectations We will return to this question many times in this course! How does the same perceptual input = different perception? Even face perception is experience-dependent? Face perception in monkeys reared with no exposure to faces Even if we give up on Fodorian modularity and Evolutionary Psychology in their strongest forms… Evolution still has important consequences for thinking in Cogs Our capacities must have evolved, even if they are more general and not as hard-wired or encapsulated The brain may still be modular in important ways Hierarchical nature of development can permit changes at certain stages (ex. Monkey face study; influence of childhood environment on IQ) Changes more likely not to have deleterious effects if they are somehow confined General principles that work may be iterated How much modularity there is remains an ongoing debate More to come about it this term Week 2 Day 3: How You See the World A good place to start is understanding the physical mechanisms that allow us to see Many aspects of visual experience can be explained just by the physical properties of your eyes Perception of Light It didn’t have to be this way though, other species have different sensitivities S-cone: most activated by blue/violet M-cone: most activated by green L-cone: most activated by yellow/red So what’s with the color wheel then? Neural Adaptation Typically work: sensitive to a particular stimulus Response to a continuous stimulus decays over time Upon offset, there’s a dip below baseline Two Kinds of Photoreceptors Rods and cones Rods: good at detecting low light levels, movement. 
Predominant in the periphery
Cones: concentrated in the fovea, good at color detection
This same principle applies to how you see motion (motion adaptation)
Filling in the periphery, pattern detection
Eye-movement-contingent display control in studying reading
Reminder: you're all partially blind
Motion-Induced Blindness
New & Scholl paper
We seem to be seeing all too well
We experience much richer information than we actually get from the light that hits our eyes
Poverty of the stimulus example
Perception seems altogether too smart
What's going on?
By now, you should have given up on fully explaining what you see simply in terms of the activation of the photoreceptors in your retina…
Whatever is happening, it clearly involves some kind of inference, or as we like to say, computation
You're not directly perceiving the world, and you're not even directly perceiving your retinas' response to the world
After all, you don't see the world upside down, even if your retinas do
What you're perceiving is your visual system's representation of the world, which is its interpretation of your retinas' response to the external world
"Trying to understand perception by studying only neurons is like trying to understand bird flight by studying only feathers: it just cannot be done" - David Marr
Constructivism: you're really just making up much of what you see
Your visual system is trying to solve an impossible problem
Ex: x * y = 16 (an inverse problem)
x = the "true" color of the object
y = the lighting conditions
16 = the light hitting your retina
Many different (x, y) pairs produce the same product, so the answer is underdetermined
Not everyone makes the same guesses
Your visual system uses every trick it can to make a good guess
A general way to think about what it's doing: coincidence avoidance
The shading on a shape doesn't just happen to align with the shading in the background
You don't just happen to have random black spots that move around the world depending on where you are looking
Coincidence avoidance
The general lesson
Your visual system gets too little information to answer all the questions you have about the world
So it makes a lot of educated guesses for you
"In the theory of visual processes, the underlying task is to reliably derive properties of the world from images of it" - David Marr
DEPTH PERCEPTION
Many cues you use for depth perception
Binocular disparity
Relative size
Absolute size
Texture gradient
Motion parallax
Linear perspective
Interposition
Aerial perspective
Shading and lighting
Etc.
Kinematic depth perception (wiggle wiggle wiggle)
You're constantly trying to fit together two different pictures of the world
Random dot stereograms
Solution → unfocus so that the "two images" are different enough to overlap properly
One Final Example
Week 3 Day 1: Attention and Memory
Do you think you're more likely to notice something novel or something expected?
Do you think subliminal messaging is real?
Our visual system gives us a sophisticated way of representing the world
Kind of like how you think you see everything in color
Today's Plan
Are there different kinds of attention?
What are the major theories of attention?
How is attention related to awareness/consciousness?
What is attention?
"The taking possession by the mind, in clear and vivid form…it implies withdrawal from some things in order to deal effectively with others" - William James
This is a lecture about decomposition: kinds of attention
Exogenous attention
Attention capture
Bottom-up
Passive
Endogenous attention
Volitional shifts of attention
Top-down
Active searching for things
Top-down vs. bottom-up
Bottom-up: data-driven, low-level properties
Top-down: category-based experience and expectations
Why do most of the change blindness displays have that flashing black screen?
It disrupts any exogenous cues that may draw your attention, and instead limits you to only using endogenous attention
Endogenous attention is truly bad at noticing changes, resulting in a kind of change blindness when only endogenous attention can be used
Cocktail party effect
What does attention do?
"My experience is what I agree to attend to. Only those items which I notice shape my mind–without selective interest, experience is an utter chaos" - William James
Are there any good theories of attention?
Limited capacity theories
Due to a bottleneck
Attentional mechanisms (shifts of attention) determine what gets through the bottleneck (what gets through = what one attends to)
Early selection
Semantic properties/identity recognized only after passing through the bottleneck
No control over low-level stimuli
Unattended semantic features unrepresented (and thus inert - cannot grab attention)
Smaller types of processing all flowing into semantic processing → Modularity
Visual processing, auditory processing, olfactory processing, etc. modules
Parallel processing streams. You can narrow/widen the bottleneck for each of them using endogenous attention
Semantic processing without awareness
Late selection
Automatic processing of all features
Consciousness of that which passes the bottleneck and enters working memory
Semantic processing without awareness (ex. mushing together two different streams into one coherent sentence)
Anne Treisman
Another challenge: high-level content without attention/awareness
Reaction times
Ex. lexical decision task ("is this a real word?")
You can still experience priming effects even if the priming stimulus was flashed so quickly that it never reached conscious awareness
Gender/sexual-orientation-dependent spatial attentional effect of invisible images
What was their hypothesis? How did they test it?
Showed erotic stimuli so briefly that they don't reach conscious awareness
Then show a Gabor patch in one visual hemifield
If the patch and the erotic image are on the same side, people are faster to respond, since their attention was already drawn there
Modulated by people's sexual orientation
You can learn new things without conscious memory
Ex. Clive has amnesia but can still remember how to play piano. He was able to improve performance on a song even though he couldn't recognize the sheet music each time he saw it
Do you need to attend to something to process it?
We know early selection theories are wrong because semantic and high-level information is processed in the absence of awareness
A challenge: attention affects low-level processing
Prof Stormer :)!
Changing contrast on faces
Tend to rank faces as more attractive if 1) the contrast is high and 2) there was an attentional cue before seeing the face
Hm…
So we know early selection theories are wrong because semantic and high-level information is processed in the absence of awareness
But late selection theories also seem wrong because attention does affect low-level processing, like contrast
There's much more that's been said about this, but the short answer is we still don't have a perfect general theory of attention
How is attention related to awareness/consciousness?
Unicycling Clown Experiment
People talking on cell phones were less likely to have noticed
We're often unaware of changes, even mid-conversation (ex. door experiment)
Relation between attention and consciousness
A few things to take away
Difference between endogenous and exogenous attention
Attention can be driven by post-perceptual properties (ex. meaning) even without conscious awareness
Attention can also enhance processing of low-level features
Attention bears an interesting but still unknown relation to conscious awareness
Week 3 Day 2: Quiz and Operationalizing the Mind Numerically
Explain how color perception is a form of an inverse problem
Color is determined by two things: lighting conditions and the object's innate color
You have to make inferences, a best guess
Briefly explain what Marr's three levels are in the context of visual perception
Computational: highest level; what problem are you trying to solve? How to use light to determine the structure of the physical world
The structure of the problem itself; isn't necessarily being executed in any way
Algorithmic: what process are you using to solve the problem? Light into neural firing rates into interpretation/construction by the brain
Implementational: how the algorithm gets physically implemented (hardware): nerves, cones/rods, etc.
What is the difference between endogenous and exogenous attention?
Endogenous: top-down
Exogenous: bottom-up
What does it mean that a module has "mandatory processing"?
Whenever input is present, it'll be processed automatically
What is evolutionary psychology?
Massive modularity: lots of tiny independent modules
Fodorian modularity is different: everything feeds up into general processing
Week 3 Day 3: Conceptual Development
Today's Agenda
Number and quantity
Analog magnitude representation
Parallel individuation of small sets
Is number hard-wired?
Paper + presentation
Our Case Study: how do we learn about numbers?
One traditional view: need formal training
Explicit learning
Jean Piaget, psychologist focused on child development
Conservation task (same volume of liquid in differently shaped glasses)
Child could count, but not compare counts of different sets
Lacks object permanence
Graham crackers: going by count rather than surface area
There's something right about that, but…
Xu & Spelke
Habituate infants to seeing 8 dots in different sizes/configurations
Then show them 16 dots
Track where the infants are looking → babies look longer at the 16 dots and sense that it's "more"
Do the task backwards (habituate with 16 and then show 8) → look longer at the 8 dots
Representations Underlying Infants' Choice of More (Feigenson, Carey, Hauser)
What is the hypothesis? How was it tested?
Graham crackers going into boxes
What did they find?
Evidence for Core Number Systems
One kind of system: Analog Magnitude Representation
Weber's law: a measure of discriminability. Works for all kinds of perceptual quantities
Weber fraction changes
1:2 ratio in previous example
Ex. you double 1 to get 2. 7 and 8 have the same absolute difference, but proportionally it's a much smaller difference
Discrimination governed by Weber fractions, improves with age
6 months: discriminate a 2:1 ratio but fail with a 3:2 ratio
9 months: discriminate a 3:2 ratio but fail with a 4:3 ratio
Their approximation skill continues to improve
Several animals also have Analog Magnitude Representations!
Another Kind of System: Parallel Individuation of Small Sets (Object Files)
Multiple Object Tracking (MOT) task
Back to the Carey paper
Do infants have representations of "more" or "less"?
Infants crawl toward containers of crackers of various quantities and sizes
Total volume/surface area guides behavior
Babies consistently failed comparing 3 vs 4, functionally capped at 3 vs 3. Only three conceptual slots for object tracking
Can do 2:1, but not 4:2!
→ can't be magnitude estimation since these are identical ratios!
Results
Children succeeded at 1:2, 2:3
Failed at 3:4, 2:4, 3:6
Most surprising: 4:1 failure
Infants can discriminate among sets of 1, 2, and 3, but NOT between sets of 4+
Number is only implicitly represented and is bound to individual objects in the set
This system is separate from Analog Magnitude Representation
NOT consistent with Weber's fraction effects
Stages of Number Learning
Core systems provide limited number representation
Parallel individuation of small sets provides the basis for the first few numerals
"One" is mapped to sets of 1: {|}
"Two" is mapped to sets of 2: {||}
"Three" is mapped to sets of 3: {|||}
"Four" is mapped to sets of 4: {||||}
So how do we learn to represent number explicitly?
Children learn the "placeholder" counting routine ("one, two, three,...")
Counting proceeds over individual items
Bootstrapping involves realizing that "three" from the counting routine is the same word as "three" for the set of three
Sometimes this is called Quinian bootstrapping
Using the mapping between object files and words, you first become a one-knower, then a two-knower, then…
Magic! You bootstrap yourself into a "Cardinal Principle" knower
The Cardinality Principle (CP) refers to the understanding that the last count word in the counting sequence represents the total number of items in the collection
The word "five" is learned in a different way than the word "four"!
Stages of Number Learning
He knows number words but not number concepts!
Doesn't realize that "if I count to six, there are six objects here"
They lack the Cardinality Principle
Is this just normal human development? Will we all pass through stages automatically?
Is there any counter-evidence that shows we don't?
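The knower-level story above can be sketched as a toy model (my own illustration, not from the papers discussed): a subset-knower can only name set sizes that fit in object files, while a Cardinal-Principle knower runs the counting routine and reports the last count word.

```python
# Toy model of number-knower levels (illustrative only, not from
# the papers discussed above).

COUNT_LIST = ["one", "two", "three", "four", "five", "six", "seven"]

def subset_knower(items, known_up_to=3):
    """Maps number words to sets only via object files (sets of 1-3);
    anything larger has no exact label."""
    n = len(items)
    return COUNT_LIST[n - 1] if n <= known_up_to else "many"

def cp_knower(items):
    """Runs the counting routine, tagging one item per count word;
    the LAST count word names the set size (the Cardinality Principle)."""
    last_word = None
    for last_word, _ in zip(COUNT_LIST, items):
        pass  # "one, two, three, ..."
    return last_word

toys = ["duck", "car", "ball", "block", "top"]
print(subset_knower(toys))  # 'many'  (beyond parallel individuation's limit)
print(cp_knower(toys))      # 'five'  (last count word = cardinality)
```

The point of the contrast: both "knowers" have the same count list memorized, but only the CP-knower connects the routine's final word to the size of the whole set.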
Quantity in an anumeric language
Pirahã (Hi'aiti'ihi) speakers → a language without numbers
Exact with 3 or fewer
Inexact beyond 3
No specific labels for each number, but can distinguish
Evidence for us innately having 3 or 4 slots, even without labels
What's the picture of conceptual development we have been arguing for?
Using the structures that appear in development to bootstrap our way up to more complex concepts
Ex. conceptualizing 1 billion doesn't come naturally
Analog and Parallel systems end up operating together
Paper + Presentation
Response paper - October 7
~1000 words
Three options: 1) a theoretical argument for or against a position we've covered in class; 2) a proposal for an experiment that would significantly advance our understanding of a topic covered in class; 3) an application of an explanation we've covered in class to an aspect of our minds it has not yet been applied to
Ex. modularity and moral decision making
Connection to course material, but with novelty
RWiT, reading AND writing guides on Canvas
Presentation Oct 5
Presenting to group members
Meet BY THE FIFTH and present your idea to your group mates
Record on Zoom
Slides NOT necessary, and shouldn't be submitted if you choose to make them
May choose to have an outline instead
