COG SCI FINAL PDF
Summary
This document contains study notes on cognitive science. It covers topics such as attention, memory, and consciousness, including various theories, concepts, and examples related to these topics. The document also discusses some of the challenges with computational theories of consciousness.
Full Transcript
ATTENTION
- We have limited awareness of what's happening
- There is a difference between the sensory input and our representation of it
- We take in more than we can process
- To pay attention: to be fully aware of something

Bottleneck theories
- Parts of our mind have a capacity limit, so we need to filter out most of the environment
- e.g., the face-processing system only works on single faces: we must select one

Early vs. late selection theories
- Where does the filtering happen?
- Early: in the sensory systems (e.g., a specific part of the visual field)
- Late: based on semantics - we recognize/think first, then filter

Selection-for-action theories
- Mental capacity is not too small, but actually too large
- Processing everything would lead to interference, making it difficult to respond to specific stimuli
- So we filter out the irrelevant

Feature Integration Theory - Treisman (most intuitive)
- We need attention to bind together the multiple features of a stimulus
- Without attention, features in working memory become jumbled
- Integrating, not filtering - attention makes perception coherent
- Detection task: lots of distractors, two simultaneous properties to identify in one item
- Illusory conjunctions for unattended shapes: people report incorrect feature combinations

Filtering vs. attenuation - Treisman
- Unattended features are not fully filtered out, just attenuated (suppressed)
- Dichotic listening experiments: report what's coming into the left ear; what's in the right ear sometimes breaks into awareness
- We group information based on semantic meaning

Change blindness
- Swap-out experiment: scientists swap out a person mid-conversation to see if the switch is noticed
- Shows that significant visual changes can be ignored if attention is diverted
- The scope of attention is very limited in real-world situations
- We overestimate how much information we are attending to - a failure of our "meta-cognition" (like texting while driving)

Taxonomy of attention
- Things coming into the sensory system; we can control attention to particular stimulus features
- Internal attention: mental processes / internal representations
  - Task rules: it takes time to mentally redirect attention to a new task
  - Long-term memory: selecting relevant memories / attenuating related but irrelevant ones
  - Working memory: choosing which information to maintain and which to discard; strategically managing working memory
- External attention: attentional filters over external stimuli / representations
  - Features, objects, spatial locations, sensory modality, time points
  - Modality-specific (visual vs. auditory)
  - Spatial (visual / auditory)
  - Visual feature attention (colors, shapes)
  - Visual object attention (a familiar face)
  - Auditory feature attention ("green needle" vs. "brainstorm")
- Attention to location improves memory for location-related details
  - Priming task: manipulate attention to different aspects of a story; story-listening task; then a memory task

Stimulus-driven vs. goal-directed attention
- Flow - goal-directed: top-down (internal to external)
- Stimulus-driven examples:
  - Posner cueing task: measures the ability to shift attention (scored as in the sketch below)
  - Dichotic listening: hearing one's own name can pull attention to the unattended ear during the task
  - Jiang et al.: unconsciously presented an erotic image on the left or right side while participants made a tilt judgment; the image itself was kept out of consciousness
  - Heterosexual observers: an image of the opposite sex pulled attention, improving tilt judgments when the image was on the same side
  - Attention can be influenced by something we're not conscious of
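The Posner cueing task mentioned above is usually scored as the reaction-time cost of an invalid cue. A minimal Python sketch of that calculation, with invented trial data and placeholder names (not from the notes):

```python
# Posner cueing: compare reaction times (RT) to validly vs. invalidly cued targets.
# The trial values below are made up; the gap between the two means indexes
# the cost of having to re-shift attention.

def mean(xs):
    return sum(xs) / len(xs)

def validity_effect(trials):
    """trials: list of (cue_was_valid, rt_ms). Returns mean invalid RT minus mean valid RT."""
    valid = [rt for ok, rt in trials if ok]
    invalid = [rt for ok, rt in trials if not ok]
    return mean(invalid) - mean(valid)

trials = [(True, 310), (True, 295), (True, 305),
          (False, 360), (False, 372), (False, 355)]
print(validity_effect(trials))  # a positive number = attention had to be shifted
```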
Attention Schema Theory of Consciousness
- We can effectively control a system only if we have a schema of how that system works
- Motor control: the body schema
- To control attention, we need a schema of the attentional system
- The internal model of attention is consciousness
- We can't attend to conjunctions automatically, and we can't switch quickly
- Predicts that we can sometimes have attention without awareness (the model is not perfect)
- Attention is less well controlled without awareness (we cannot use the model)
- Any creature with an attentional model has consciousness
- Attention is not synonymous with consciousness

MEMORY

Operant conditioning
- A reward is introduced to increase a behavior
- A punishment (penalty) is introduced to decrease a behavior

Classical conditioning
- Shaped by past experiences
- Transfers a natural response to another stimulus - pairs an old response with a new stimulus

Skills and habits
- Conditioned, or practiced enough times
- Learned gradually from practice; complex sequences of motions; contain a trigger
- H.M.: lost the ability to form new memories and helped invent modern memory neuroscience - he couldn't learn anything new (explicitly)
- Mirror-tracing task: tracing out a figure seen in a mirror; H.M.'s improvement proves there are multiple memory systems
- Motor sequence learning: learning to press buttons, with some sequences repeated
- Reactive to predictive processing
- Implicit learning: people are not aware they are learning, yet they get better at the repeated sequences

Sensory memory
- Iconic memory: a brief snapshot of a visual stimulus, tied to perception
- Sperling: when a tone cued which row (or target) of letters to report, performance was high - there is a brief memory with a high capacity

Working memory
- Reading span task: a verbal working memory task that measures a person's ability to process and store information simultaneously
- Brown-Peterson task: studies the duration of short-term memory; recall three-consonant syllables after a delay
- Visual working memory task: the ability to identify a change depends on the number of items - limited to about four objects
- Swap errors: a feature is reported at the wrong location
- Brady/Alvarez: investigated how the meaningfulness of objects impacts how well people can remember them; statistical (ensemble) averaging

Semantic memory
- Sentence-like facts
- Conceptual associative priming: related concepts (semantically related)
- Perceptual associative priming: similar in form

Episodic memory
- Reliving a particular event - can reinforce behavior from the mind alone
- Birds can store lots of information
- Hyperthymesia: remembering every moment of one's life (William James)
- Childhood amnesia: few autobiographical memories before age 3, relatively few from ages 3-7
- Those memories either aren't formed, or aren't retrieved
- Retrieval: there seems to be long-term recall in children as young as 9 months - they have most of the abilities required for memory
- Reminiscence bump: we remember the most from ages 10-30

Storage and retrieval
- Random-access memory: addresses can be systematic; information is looked up by its address (phone book analogy)
- Content-addressable memory: the content itself serves as the address (see the sketch below)
- Computational models: entries can point to other items; more specific cues allow for highly structured memory
- State-dependent retrieval
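A minimal Python sketch of the storage-and-retrieval contrast above: random-access lookup needs the address (index), while content-addressable retrieval uses a fragment of the stored content as the cue. The stored strings and function name are invented for illustration.

```python
# Random-access style: retrieve by knowing the address (index).
memories = ["met Alice at the cafe", "lost keys at the gym", "exam in Room 12"]
print(memories[1])  # works only if you already know the address

# Content-addressable style: a fragment of the content cues the whole memory.
def recall(cue, store):
    """Return every stored memory containing the cue."""
    return [m for m in store if cue in m]

print(recall("keys", memories))  # cue a fragment, get the full memory back
print(recall("gym", memories))   # many different cues can address the same item
```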
Probabilistic reasoning

Sources of uncertainty
- Perception
- Memory
- Testimony
- The future

Probability
- The probability of an event is a number between 0 (impossible) and 1 (necessary)
- Subjective probabilities are the probabilities that we assign in our reasoning
- Favorable possibilities / total possibilities

Bayes' theorem (see the formula sketch at the end of this section)

Bayesian optimality
- Weighted average: more weight goes to the more reliable cue
- Optimal approach to cue combination: for discrepant cues, take a weighted average in which each cue is weighted by its reliability
- Optimizes decision making by applying an objective function

Bayesian suboptimality
- Representativeness: when asked the probability that A belongs to a class B, people often rely on the degree to which A resembles a paradigmatic example of B - they rely on heuristics
- Failures that result from representativeness:
  - Base rate neglect: a cognitive error in which people give more weight to specific information than to the prior probability (base rate)
  - Insensitivity to sample size
  - Misconceptions of chance
  - Insensitivity to predictability
  - Misconceptions of regression
  - Illusory correlation

Metacognition
- Assignment of certainty
- Confidence varies independently of accuracy
- Overconfidence tends to apply across domains
- More information/expertise tends to mean more overconfidence
- We are bad at predicting which problems we can solve

Testing metacognition
- Using probabilities is abstract - you have to have some theory of your own mind
- Is the stimulus A or B? Opt-out paradigm: subjects have some sense of how likely their categorization is to be right
- Behaviorally, they have to make a choice or decline (opt out) for a guaranteed reward rather than find out if they are right
- Studied in animals: give them a choice to classify a stimulus and to indicate high or low confidence
- Mouse categorization task: an A-or-B smell; choose the dominant smell; the reward arrives after a random delay
- In cases where the animal was wrong on a clear (easy) trial, will it wait a long time for the reward, or not wait at all?
- Argument: the mouse is having a thought about its own decision making and weighing how likely it is that it made the right decision
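Since the Bayes' theorem heading above has no formula in the notes, here is the standard statement, plus one common way of writing the reliability-weighted cue combination described under Bayesian optimality (reliability taken as inverse variance). The symbols are generic placeholders, not from the course.

```latex
% Bayes' theorem: posterior = likelihood x prior / evidence
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

% Reliability-weighted cue combination: two cue estimates s_1, s_2 with
% reliabilities r_i = 1/\sigma_i^2 are combined as a weighted average.
\hat{s} = \frac{r_1 s_1 + r_2 s_2}{r_1 + r_2}
```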
Robotics and Embodied Cognition
- Robots: it is difficult for them to sense things
- Acting in the physical world feels easy for humans and is hard for machines
- Moravec's paradox: sensorimotor and perception skills require enormously more computational resources than abstract reasoning
- AI can easily learn tasks that are difficult for humans, but has difficulty doing what is easy for humans
- Having a mind involves taking action in the physical world

Embodied cognition
- Traditional cognitive approach: the (natural or artificial) mind is separate from the environment, which provides inputs and outputs
- Embodied cognition: mind + environment form a single cognitive system, existing in a single environment - the mind is made to exist in a physical world
- Spectrum:
  - Simple: cognitive systems are made to operate in a physical world
  - Medium: states of the world and of our bodies shape thought in a fundamental way
  - Radical: the mind cannot be meaningfully studied in isolation from the world

Modal representations
- Most or all semantic representations are modal: tied to sensory/motor information and derived from physical experience
- Tucker & Ellis 1998: objects with handles aligned to the left or right evoke an inherent motor response on the side of the handle
  - If the mind operated separately from the motor system, the responding hand shouldn't matter
  - Objects are linked to action-based representations - mental representations tied to potential actions
- Time-as-space representation: the passing of time has no spatial component, but it automatically activates a spatial metaphor for time - time is inherently mapped onto a spatial dimension

Environment as part of the cognitive ecosystem
- Robots: program a robot to move forward whenever possible; if stuck, back up and turn a random amount - eventually it creates piles of blocks
  - In this specific environment, the robot has the function of helping to pile up blocks
- Box robot: if an obstacle is at the left rear or right front, turn left; if an obstacle is at the right rear or left front, turn right (see the sketch below)
  - Function: in a maze, this robot will follow walls and thereby solve the maze
  - Highest Marr level: the description of what the robot is doing involves the physical setting - it is only possible when considering robot and environment together
- Consistent with "radical" embodied cognition - we can't study cognitive agents in isolation
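A minimal sketch of the purely reactive "box" controller described above: no map, no plan, no memory, just sensor-to-action rules. The sensor names and action strings are invented placeholders; the wall-following function only appears once the robot is placed in a maze-like environment.

```python
# Reactive controller: each tick, map the current obstacle readings to an action.
def step(obstacle):
    """obstacle: dict with boolean 'left_rear', 'right_rear', 'left_front', 'right_front'."""
    if obstacle["left_rear"] or obstacle["right_front"]:
        return "turn_left"
    if obstacle["right_rear"] or obstacle["left_front"]:
        return "turn_right"
    return "move_forward"

# A short stream of simulated sensor readings.
readings = [
    {"left_rear": False, "right_rear": True,  "left_front": False, "right_front": False},
    {"left_rear": False, "right_rear": False, "left_front": True,  "right_front": False},
    {"left_rear": False, "right_rear": False, "left_front": False, "right_front": False},
]
for r in readings:
    print(step(r))  # turn_right, turn_right, move_forward
```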
Offloading cognition
- The robots get away with very limited on-board cognition
- Humans use their environments strategically in order to carry out cognitive processes
- We use properties of the physical world to keep track of things
- The robots are not keeping track of blocks; they follow an algorithm and let the environment keep track
- Techniques: take advantage of the contextual aspect of memory
  - Memorizing a list of butterflies: connect each piece of information to a physical object
  - Australian Aboriginal memory technique: associate information with physical locations, then walk back through those locations to remember
  - The physical world as a memory scaffold

Dynamic coupling
- Embodied cognition: ongoing interactions between agent and environment - something in the physical world prompts a response in the mind
- A "representation-hungry" system maintains complex internal states and computes offline
- Observation: systems can instead repeatedly make simple computations by observing the world

Rodney Brooks - action-based / situated / behavioral robotics
- Argues against maintaining representational states inside robots
- No internal models or multi-step plans; instead, interacting behavioral units that react to the environment
- "Intelligence without representation"
- The robot maintains only a small bit of internal state (e.g., once it is in the box); otherwise it only makes measurements of the world - choose strategies that make things easier

Social cognition
- We can think of other people and robots as part of the environment in which we are situated
- The "basic unit" of cognition may consist of groups of minds
- Consensus synchronization: synchronize multiple agents to form a cognitive system
- Cars: autonomous driving requires modeling decision-making at the multi-car level

Morality for robots
- Robots can cause physical harm
- They may need to make fast decisions without time for human confirmation
- Asimov's laws of robotics

Reinforcement learning
- Unsupervised learning: detecting patterns in the world without a specific goal
- Supervised learning: being taught the correct response to a stimulus
- Reinforcement learning is useful for thinking about an agent making repeated decisions in an environment to achieve goals - a framework for solving problems

Problem solving
- We have one or more goal/reward states that we want the world to be in
- Reaching them requires multiple steps/actions, with no obvious next step
- Model-free: use prior experience to choose the action that has worked out before
- Model-based: explicitly make a multi-step plan - from the current state, consider the possible next states (there are multiple); the more actions, the more recursive the search for a chain of actions that leads to a solution

Q-learning (see the code sketch after this section)
- The Q of an action is the sum of future rewards that we will get (on average) if we take that action in this state
- If we know the Q of every action in every state, we can make good decisions by just picking the action with the highest Q (quality)
- Actions are evaluated by considering the results of past actions
- Recording experiences to learn Q values is model-free: it does not require us to know how our actions take us to new states
- Creating a model: if there were a model that showed the results of actions, we could make a plan for how to get into a "good" state rather than relying on trial and error

Model-free
- Learn purely from experience which actions end up working out well
- Doesn't require knowing how states change over time, or how actions impact states - just what goes well in the long run
- Fast to make decisions

Model-based
- Use knowledge about the domain to mentally simulate possible action choices
- Estimate the quality of novel actions
- Flexibly update decisions if there is a change in the way the world works

Heuristic search
- Use experience to guess which actions will have high quality in the current state (model-free)
- Improve these guesses using some planning (model-based)

Solving real problems
- Difficulties: state spaces and sets of actions can be enormous or continuous; rewards may be many steps away; learning Q values or a model may require a huge number of attempts

Expertise in problem solving
- An expert (AI or human) in a domain is able to:
  - Identify the most important aspects of a state
  - Estimate Q without rolling out future possibilities
  - Rely on cached, automatic sequences of actions rather than making conscious decisions
  - Or not make a decision at all
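A minimal tabular Q-learning sketch of the model-free idea above: Q values are learned purely from experienced (state, action, reward, next state) transitions, with no model of how actions change the world. The toy corridor environment, learning rate, and discount factor are illustrative assumptions, not from the notes.

```python
import random

# Toy corridor: states 0..4, actions 0 = left / 1 = right, reward 1 for reaching state 4.
def env_step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward, next_state == 4          # (next state, reward, done)

Q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}     # one Q value per (state, action)
alpha, gamma, epsilon = 0.5, 0.9, 0.1                   # learning rate, discount, exploration

def greedy(state):
    """Pick the action with the highest Q, breaking ties at random."""
    best = max(Q[(state, a)] for a in (0, 1))
    return random.choice([a for a in (0, 1) if Q[(state, a)] == best])

for episode in range(200):
    state, done = 0, False
    while not done:
        action = random.choice((0, 1)) if random.random() < epsilon else greedy(state)
        next_state, reward, done = env_step(state, action)
        # Model-free update: nudge Q toward observed reward + discounted best future Q.
        target = reward + gamma * max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state

# "Just pick the action with the highest Q": the learned policy moves right.
print({s: greedy(s) for s in range(5)})
```

A model-based solver would instead use something like env_step itself as a model to plan a path before acting; here the agent only learns from the transitions it happens to experience.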
CONSCIOUSNESS
- The feeling of pain, tasting, seeing red, sadness, thinking, thinking about oneself (self-consciousness)

Materialism
- Mental states are the result of physical structures and processes; the brain causes all aspects of the mind
- A close connection between mind and body
- Problems:
  - Explanatory gap: the difficulty of describing the connection between physical properties of the brain and the subjective experiences of consciousness
  - Knowledge argument: Mary

Dualism
- Mind and body are wholly separate domains
- Body-to-mind causation: a change in the visual system produces a change in the immaterial mind; mind-to-body: the mind causes the body to perform an action
- Mind vs. body: the mind doesn't have parts, the body does

Epiphenomenalism
- Mental events are caused by physical events in the brain, but do not cause physical events in return

Russellian monism
- Proposes that consciousness and physical properties are both grounded in a single set of underlying properties, often called "intrinsic" or "proto-phenomenal" properties

Testing
- Consciousness without report: change blindness, visual/spatial neglect
- Behavior without consciousness:
  - Blindsight: people claiming to be blind still respond to visual information (perhaps actually conscious to some extent)
  - Subliminal priming: being affected by something beyond conscious awareness

Theories of consciousness
- Global workspace theory: consciousness is the result of perceptual contents being broadcast across the brain to other processors; this process allows multiple networks to work together and compete to solve problems
- Higher-order theory: a mental state becomes conscious when the subject has a higher-order thought (a thought about that mental state) - lower-order representation vs. meta-representation
- Re-entry and predictive processing theories: re-entry is the continuous feedback between brain regions that updates representations; predictive processing involves the brain generating and adjusting predictions based on sensory input to minimize error
- Integrated information theory: consciousness arises from the degree of integrated information within a system

Split brain
- Measures how unified the brain is
- If a split-brain patient is shown the word "key" in their left visual field (processed by the right hemisphere) and "ring" in their right visual field (processed by the left hemisphere), they will verbally report seeing only "ring" (because the left hemisphere controls speech) but will pick up a key with their left hand (controlled by the right hemisphere), demonstrating how each hemisphere can independently perceive and act on information, even if the responses seem contradictory when the patient explains their actions verbally
- Are both hemispheres conscious? If not, which hemisphere? Are they different people? Can people have two streams of consciousness? In what sense is consciousness ordinarily "unified"?

Hard cases

LSE criteria for consciousness
1. Nociception: receptors sensitive to noxious stimuli
2. Sensory integration: integrative brain regions that combine information from different sensory sources, including pain
3. Integrated nociception: neural pathways connecting nociceptors to integrative brain regions - connects pain detectors to other parts of the brain
4. Analgesia: the behavioral response to noxious stimuli is modulated by chemical compounds affecting the nervous system (e.g., opioids or other transmitters)
5. Motivational trade-offs: shows motivational trade-offs, weighing noxious stimuli against other needs
6. Flexible self-protection: shows flexible self-protective behavior, likely involving a representation of the bodily location of a noxious stimulus
7. Associative learning: noxious stimuli become associated with neutral stimuli; novel ways of avoiding noxious stimuli are learned through reinforcement
8. Analgesia preference: values a putative analgesic (pain relief) or anesthetic when injured

Animals
- Octopods, cuttlefish, squid, nautiloids

Examples
- Capacity to play / intrinsic pleasure: evidence that animals play suggests they are conscious
- Bees can get faster by watching other bees - social learning by observing demonstrators
- Capacity for tool use
- Experiment with bees and shapes: trained when they could see but not touch the shapes; tested when they could touch but not see the shapes
- Bees trained to tell cubes from spheres in darkness could also later identify the correct shapes when seeing but not touching them, indicating a form of mental image that can be accessed with more than one sense; bees can also solve problems in a manner that indicates they understand the desired goal
Consciousness and free will
- Researchers can predict when a person will push a button before the person is aware of the decision - the decisions are made before consciousness
- We mostly make decisions with principled reasoning or from past experiences

BELIEFS

Folk psychology: the mental states "the folk" use to explain behavior
- To explain: hoping, expecting, desiring - understanding basic mental concepts; we need language to explain them

Dan Dennett vs. Jerry Fodor
- Dennett: we attribute mental states to ourselves and others on the basis of their behavior; mental states are merely helpful tools for predicting future behavior; there are no objective facts about which systems have mental states
- Fodor: we attribute mental states to ourselves and others on the basis of their behavior; behavior is caused by representations in the mind, and different mental states cause different behaviors; there are objective facts about which systems have mental states

Not belief
- Deceiving others or oneself about your beliefs
- Uncertain beliefs
- Forgetting what you believe

Belief vs. alief
- Beliefs are sensitive to evidence; aliefs are not
- Beliefs involve acceptance; aliefs do not
- Chain of meanness: when there is a chain of meanness, you are more likely to help the first person - you like him because he was mean to someone who was mean to you
- Depressant/stimulant pill (placebo): believing that your anxiety is actually caused by the "stimulant" pill, while the "depressant" pill made people more stressed

Beliefs and OCD
- A vicious cycle
- Obsessive checking: unsure whether the stove is on, but saying otherwise
- Believing the stove is off, but alieving it's on
- One part of the person believes it might be on, another part believes it's off - a fracturing of the mind

Beliefs and delusions

Beliefs and technology
- Does where a belief is stored matter? What level of knowing counts?

Groups
- For: groups are complicated and goal-directed, have goals (without any individual in charge), and have parts for memory, perception, and decision making (modular specialization)
- Against: groups are not conscious, alive, or sufficiently integrated

PHILOSOPHICAL CHALLENGES

Computer analogy - understanding

Chinese Room (see the sketch at the end of this section)
- The man correctly manipulates symbols but does not understand Chinese
- So correctly manipulating symbols is not sufficient for understanding Chinese
- Computers are just devices for manipulating symbols, so they do not understand

Meaning
- Replies: the man doesn't understand, but "the system does"
- The man doesn't understand because he doesn't perceive or act
- The system doesn't understand because the symbol manipulations are too simple
- Understanding requires symbol manipulation in a biological mechanism
- A computer can't have genuine intelligence
- The computer could be the man plus the book, i.e., the whole system
- It only matches input to output - it does not understand the structure of human language
- Next-word prediction: the system lacks the ability to learn and update

Consciousness
- A computer isn't necessarily conscious
- We are conscious
- Therefore we are not merely computers

Free will
- A computer's decisions are never free
- Our decisions are sometimes free
- Therefore, we are not computers
- Note the difference between being random and being free - freedom is not just unpredictability

Creativity
- A computer is never creative
- We are creative
- Therefore, we are not computers

Dynamical systems theory
- Dynamical systems are not computers
- We are dynamical systems
- So we are not computers
- Studies how systems change over time
- There is no dynamical-systems account of long-term memory, which is searchable, has unpredictable delays, and draws on multiple sources of evidence
- Adapting to a fast-changing environment; hypothesis forming and testing; abstract thought
- We cannot understand the mind unless it is in a body situated in a world

Neural networks
- If it's helpful, we can choose to interpret neural networks as using an algorithm
- Neural networks use algorithms, but we shouldn't look for parts corresponding to each step
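To make the "devices for manipulating symbols" point from the Chinese Room section concrete, here is a toy rule-follower: it pairs input symbols with output symbols by looking them up in a rulebook, producing fluent-looking replies with no understanding of what the symbols mean. The rulebook entries are invented placeholders.

```python
# A toy "Chinese Room": the program only matches input symbols to output symbols.
RULEBOOK = {
    "你好": "你好！",               # a greeting maps to a greeting
    "你吃了吗？": "吃了，谢谢。",     # "have you eaten?" maps to "yes, thanks"
}

def room(symbols):
    """Return whatever output the rulebook pairs with the input symbols."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "please say that again"

print(room("你好"))          # fluent-looking output...
print(room("你吃了吗？"))     # ...but nothing here understands Chinese
```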
AI
- Agents that think, or take actions in the real world
- Definitions of AI: whether a system is acting like a human or acting rationally (making good decisions)
- Cases in which humans act irrationally complicate the "acting humanly" criterion (it would suggest that the more irrational something is, the more intelligent)
- Thinking about AI "humanly": systems that have some kind of cognitive process / information processing inside them, are human-like in some way, emulate human tasks, and might think like a human

Physical symbol system
- Any system exhibiting intelligence must operate through symbol manipulation
- Simply manipulating/translating words produces really weird results - no understanding at all
- Intelligent systems manipulate symbols

Expertise
- A store of knowledge with a set of rules
- If a computer acquires it, then the computer can make decisions

ImageNet
- Train on big data: set distributed neural network processing models to work on a very large dataset

Cognitive science and AI
- Can cognitive theories be encoded into computational theories?
- Can we analyze AI agents using the tools of cognitive science?
- The next generation of AI models could be inspired and pushed by cognitive scientists - going beyond Turing machine models
- Current AI systems don't have a memory system that can memorize information

Language prediction models (see the sketch at the end of these notes)
- Would someone actually use this sentence? That is not the same as being grammatical
- Some sentences don't make semantic sense even if they are grammatically correct

False belief task
- Models are getting more advanced
- Examples: no-belief task, centration, model-based/model-free learning, Wason selection task, alief

What is AI?
- Multiple approaches to AI: rule-based symbolic systems have been replaced by connectionist systems trained on large datasets
- Cognitive science can give us tools for evaluating or improving the human-ness of AI systems

Society
- Bias: racist chatbots, YouTube recommendations, prison sentences

Vicious feedback loops
- Identify an area for extra policing
- More criminal activity is found in that area
- Identify it as in need of still more policing, and so on

Misalignment
- Goal satisfaction: the worry that AI systems will not have a similar balance of values - they will optimize only one value

Social harms
- AI could become superintelligent and outsmart us
- It could be used for bad purposes
- It could be put in charge of important tasks and make mistakes
- Employers may hire fewer employees
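A minimal sketch of the "would someone actually use this sentence" idea from the Language prediction models bullets above: a bigram model scores a sentence by how often its word pairs occur in a corpus, which tracks plausibility of usage rather than grammaticality. The tiny corpus and test sentences are invented for illustration.

```python
from collections import Counter

# A tiny invented corpus standing in for "what people actually say".
corpus = "the dog chased the ball . the dog ate the food . the cat ate the food .".split()

bigrams = Counter(zip(corpus, corpus[1:]))   # counts of adjacent word pairs
unigrams = Counter(corpus)                   # counts of single words

def score(sentence):
    """Product of conditional bigram probabilities; 0 if a pair was never seen."""
    words = sentence.split()
    p = 1.0
    for prev, nxt in zip(words, words[1:]):
        p *= bigrams[(prev, nxt)] / unigrams[prev] if unigrams[prev] else 0.0
    return p

# Both sentences are grammatical, but only one matches how the corpus talks.
print(score("the dog ate the food"))   # relatively high
print(score("the food ate the dog"))   # ~0: grammatical but implausible as usage
```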