Summary

These notes cover attention (bottleneck theories, feature integration theory, attentional filters, external vs. internal attention), memory, probabilistic reasoning, robotics and embodied cognition, reinforcement learning, consciousness and belief, challenges to cognitive science, and artificial intelligence.

Full Transcript

Part 5: Psychology

10/30: Attention

Theories of Attention
Bottleneck Theories:
○ Parts of our mind have capacity limitations, so we need to filter out most of the environment (e.g. if our face-processing system only works on single faces, we need to select just one face at a time to attend to).
○ Early vs. late selection theories – does this filtering happen early on (in sensory systems), or can it occur later (e.g. filtering based on semantics)?
  **Early selection: filtering based on simple sensory properties.
  **Late selection: as information moves through our sensory systems, it is all still there until we filter based on complex properties of a stimulus.
Selection-for-Action Theories:
○ Argue that the issue isn't that we have a small mental capacity but that it is too large – if we try to process too many different stimuli at a time, it becomes harder to take action or make decisions. Processing everything would lead to interference between stimuli and make it hard to respond appropriately to any specific one.
**Feature Integration Theory (Anne Treisman):
○ We need attention to bind together the multiple features of a stimulus. Without attention, many features can make it into working memory, but they are jumbled.
  **More about organizing sensory properties than about filtering things out – whatever you're paying attention to is what you can perform feature integration on.
  Red-O flashing experiment → without paying attention, we can tell that there were red letters and O-shaped letters, but it's hard to tell where those features are bound together. So if you have to look for a particular color in a particular shape, it will be harder to identify if you aren't attending to that specific location. Evidence for Feature Integration Theory: if you're not attending to something, its features don't get bound together into a single object.
○ Illusory conjunctions: for unattended shapes, people will report incorrect feature combinations (e.g. if you quickly flash a picture with different shapes, they might incorrectly conjoin features of two different objects: "large unfilled red circle").
○ **Another major theory of Treisman's: unattended features are not fully filtered out, just attenuated.
  **Dichotic listening experiments → playing two different streams of words in the left and right ears. Even though our attention is actively trying to suppress the sounds of the right ear, making us less aware of them (attenuation), those words still have the opportunity to break into awareness – they are not fully filtered out. Subjects report "Mice Eat Cheese" even though "Mice 5 Cheese" was played in the attended (left) ear; our minds conjoin the two streams because "eat" makes more sense.
Change Blindness – the scope of our attention is very limited, even in real-world situations (not just flashed images in the lab). We dramatically overestimate how much information we are attending to in the world. This is a failure of our "meta-cognition".

A Taxonomy of Attention
External Attention → features, objects, spatial locations, sensory modality, time points
Attentional "filters" we can use:
○ Modality-specific attention (visual vs. auditory)
○ Spatial attention (visual or auditory)
○ Visual feature attention (e.g. colors, shapes)
○ Visual object attention (e.g. a familiar face)
○ Auditory feature attention
  **Green Needle vs. Brainstorm – we can hear the audio in two different ways because we attend to the features that confirm our expectation about what it's going to say.
We have the ability to set attentional filters that expect certain phonemes, and we can try to listen for those particular ones.
Internal Attention → task rules (internal goals), responses, long-term memory, working memory
○ Task rules: it takes time to mentally redirect attention to a new task
○ Long-term memory: selecting relevant memories and attenuating related but irrelevant ones
○ Working memory: choosing which information to maintain and which to discard
To some extent, these two forms of attention are in competition with one another.
**Priming Story Task: if you prime subjects by telling them what types of questions they'll be asked (i.e. about the proposal or the restaurant), they show good memory for aspects relevant to their goals but cannot remember many other details of the story.
External attention is stimulus-driven, while internal attention focuses on goals.

Stimulus-Driven vs. Goal-Directed
In addition to being goal-directed ("top-down"), attention can also be pulled by stimulus features.
○ Posner cueing → shows that our visual attention can be automatically pulled towards things in our environment even when they are not relevant to our goals.
○ Hearing one's own name can pull attention to the unattended ear during dichotic listening (in conflict with early bottleneck theories of attention).
○ **Jiang et al. → unconsciously present an erotic image on the left or right side, then present a tilt judgement on the left or right side. For heterosexual observers, an image of the opposite sex pulled attention to that side of the screen, improving judgements on the tilt task when it appeared on the same side. Shows that our attention can be influenced by factors we're not conscious of.

Attention and Consciousness
What we're attending to changes something about how we experience stimuli. Attention is one of the factors that influence what we're conscious of.
Attention Schema Theory of Consciousness – we can most effectively control a system if we have a mental model (schema) of how that system works: essentially a simple mental model that is generally well-synchronized with reality, but still different from it.
○ Ex: for motor control we use a body schema, a simplified representation of our physical body.
○ **To effectively control our attention, we should have a model (schema) of our attentional system – this internal model of attention is consciousness.
○ Predicts that we can sometimes have attention without awareness (since our model is not perfect), and that attention is less well-controlled without awareness (since we cannot use our model in that case).
○ Any cognitive system that has a model of its own attention is having some kind of conscious experience, and the degree of conscious experience depends on the complexity of the model.
**Attention Theory of Cinematic Continuity – editing provides cues that seamlessly transfer our attention across cuts (scene from Despicable Me with 11 separate shots in 28 seconds).

Conclusion
Competing theories of attention give different explanations of why attention is limited and of the consequences of not paying attention.
Attention can be directed based on external stimuli or internal goals.
Attention is not synonymous with consciousness, but there are debates about their relationship.

11/6: Memory

**Conditioning (simplest kind of memory)
○ Operant conditioning – behavior that is encouraged or discouraged through a reward or punishment.
○ Classical conditioning – a learning process that occurs when two stimuli are repeatedly paired: a response that is at first elicited only by the second stimulus is eventually elicited by the first stimulus alone.
Skills and Habits
○ Skills are things we learn gradually over time, and they typically take practice. They involve some kind of memory of past experience that makes you able to perform these complex tasks and get better at them.
○ Habits have a connection with conditioning, since there might be some kind of reinforcement. But habits can also emerge in the absence of reinforcement – if you just do a specific activity enough, it becomes a habit.
○ **Patient HM – had his hippocampus removed and could not form new memories, but still had intact memories of the past.
  Mirror tracing task – example of implicit learning (HM improved with practice despite having no explicit memory of the sessions).
  Motor sequence learning – another example of implicit learning (repetition and key presses).
Sensory/Iconic Memory – extremely brief and tied to perception; it involves fast-decaying visual information.
○ **Sperling (1960): participants were briefly shown a grid of letters and then asked to recall them. Despite only seeing the grid for a fraction of a second, many could recall a significant portion of the letters. This experiment highlighted the fleeting but high-capacity nature of our iconic memory.
  When subjects were cued with different tones to report just one particular row (top, middle, or bottom), they could correctly remember 3–4 characters from that row, with an accuracy of 75–100%. Since the cue came after the display and any row could have been probed, roughly 3 letters × 3 rows ≈ 9 letters must have been available for recall.
  If subjects were told to report all of the characters they could remember (whole report), they reported 4–5 characters, with an accuracy of 33%.
  Showed that there is a brief memory store with high capacity, and that only part of it makes it into working memory.
  **There can be more information in early sensory processing than can be preserved in working memory, and it seems to have a duration of about one second. We can probe it by playing different tones that indicate which row to report.
Working Memory – the small amount of information that can be held in mind and used in the execution of cognitive tasks.
○ **Reading Span Task: shows that there is a limit on how many words you can keep in mind at a time. Though working memory lasts longer than sensory memory, you need to keep rehearsing information to keep it accessible and move it into long-term memory.
○ **Luck & Vogel (1997) found that visual working memory has a limited capacity of about 3–4 items, with performance depending on the number of objects rather than their complexity. This supports the idea that working memory stores integrated object representations rather than individual features, highlighting its capacity constraints. Subjects looked at a scene, then another, and were asked to verify whether there was a change; the ability to do so depends on the number of things you can pay attention to.
Studies demonstrating errors in working memory:
○ **Bays, Catalao & Husain (2009) discovered "swap errors" – a specific type of memory error where an individual mistakenly reports a feature from a different item in the memory set, essentially "swapping" features between items rather than recalling the feature associated with the target item.
○ **Brady & Alvarez (2022): experiment in which subjects adjusted the size of a circle to match the circle remembered at a certain location.
Shows us that computations are being used in working memory – we store information about each item along with information about other items of the same type. We're storing information about some statistical average of all the circles.
**Semantic Memory – a type of long-term memory that refers to our store of general knowledge about the world, including the meanings of words, objects, places, and people.
○ Conceptual associative priming: the phenomenon where a concept or idea is primed by presenting a related concept – the activation of a concept in memory influences how quickly you process a related concept, based on their meaning.
○ Perceptual associative priming: occurs when a stimulus is primed by presenting a visually similar stimulus, focusing on physical characteristics rather than the meaning of the stimuli.
Episodic Memory – a type of long-term memory that involves conscious recollection of previous experiences together with their context in terms of time, place, associated emotions, etc.
○ **Scrub jays experiment: scrub jays could remember 'when' and 'where' food items were stored. They were allowed to recover perishable wax worms (wax-moth larvae) and non-perishable peanuts which they had previously cached in visuospatially distinct sites. Jays searched preferentially for fresh wax worms, their favoured food, when allowed to recover them shortly after caching. However, they rapidly learned to avoid searching for worms after a longer interval during which the worms had decayed.
○ Hyperthymesia – a rare condition that gives people an exceptional ability to recall details of their life experiences.
○ **Childhood Amnesia – the difficulty adults have in remembering events from their early years. Two explanations:
  Autobiographical memories aren't formed before age 3 (and fewer are formed from ages 3–7).
  Autobiographical memories aren't retained from before age 3 (and fewer are retained from ages 3–7).
○ **Reminiscence Bump – people remember the most from when they were 10–30 years old. This seems to explain their preferences.

Storage and Retrieval: focus on computational memories
**Random Access Memory – essentially, you have addresses, and each of those addresses has content. You look up an address and get the relevant content (phonebook analogy).
○ Addresses can be systematic and point to other memory stores, so this is a way of coordinating lots of memory.
○ If you want to know what happened at a particular time, you go through the addresses, find the relevant time/location, input it, and get the output. The pointers allow for highly structured memories – you can have one address that picks out a bunch of other relevant addresses, which you can then look up to retrieve more information (i.e. accessing an address/memory you have about a restaurant; that address can point to other addresses that store info about cost, experience, food, etc.).
**Content-Addressable Memory – in this form of memory, you don't have a separate address and content; they are combined into one. You provide a partial or slightly noisy input, and the memory gives you the closest stored match (i.e. inputting a few pixels and getting back the full picture that was initially remembered). (A minimal sketch follows this list.)
○ **Godden and Baddeley (1975): scuba divers were asked to memorize several words, then given another set of words and asked to distinguish whether the list was the same one as before. If they studied the words on land and were then tested underwater, they did worse at recall, and vice versa.
  Evidence for content-addressable memory: it's not as if they have a separate storage slot for each of the words; the way the words are stored/coded is mixed together with their total experience at the time. If you put them back in the same environment, it helps them recall the memories they formed before. State-dependent retrieval.
  A similar experiment was run with marijuana, testing recall sober vs. stoned. Shows CAM, since accessing one part of the memory makes it easier to remember the rest.
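To make the contrast above concrete, here is a minimal Python sketch (not from the lecture; the stored patterns and the recall helper are illustrative assumptions): a random-access store retrieves content by exact address, while a content-addressable store retrieves whichever stored pattern best matches a partial cue.

```python
import numpy as np

# Random-access memory: content is retrieved via a separate, exact address.
ram = {
    "restaurant_visit": "dinner downtown, pricey but great pasta",
    "museum_trip": "went with a friend, saw the Monet room",
}
print(ram["restaurant_visit"])  # you must already know the exact address

# Content-addressable memory: address and content are the same thing.
# Memories are stored as patterns; a partial/noisy cue retrieves the closest match.
stored_patterns = np.array([
    [1, 1, 0, 0, 1, 0, 1, 0],   # e.g. the "word list learned underwater" experience
    [0, 1, 1, 1, 0, 0, 0, 1],   # e.g. the "word list learned on land" experience
])

def recall(cue):
    """Return the stored pattern most similar to the (partial) cue."""
    overlaps = stored_patterns @ cue           # similarity of the cue to each memory
    return stored_patterns[np.argmax(overlaps)]

partial_cue = np.array([1, 1, 0, 0, 0, 0, 0, 0])   # only a fragment of the original context
print(recall(partial_cue))      # completes to the full "underwater" pattern
```

On this view, the Godden and Baddeley result is what you would expect: the diving environment is part of the stored pattern, so reinstating that environment strengthens the retrieval cue.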
Part 6: Problem Solving

11/11: Probabilistic Reasoning

Probability theories are mathematical theories that help us reason and make decisions under uncertainty. The probability of an event is a number between 0 (impossible) and 1 (certain).
○ Subjective probabilities are the probabilities that we assign in our own reasoning.
**Probability: P(X) = X possibilities / total possibilities
**Conditional probability: P(X|Y) = X-and-Y possibilities / Y possibilities
**Bayes' Theorem: P(H|E) = P(E|H) × P(H) / P(E)
○ P(E|H) – likelihood
○ P(H) – prior; how likely the hypothesis is before any evidence
○ P(E) – normalizing constant
○ P(H|E) – posterior; how likely you think the hypothesis is given the evidence
○ Consequence 1: if the prior probability of H is sufficiently low, the posterior P(H|E) will be low regardless of the evidence. Basically, if you believe that H is almost impossible, new evidence won't make much difference.
○ Consequence 2: if the likelihood P(E|H) is about the same as P(E|not-H), the posterior P(H|E) will be close to the prior P(H). Basically, if your evidence E is worthless, you stick with what you already believed.
○ (A small numerical sketch of Bayes' rule and cue combination appears at the end of this subsection.)
Bayesian Optimality
○ **Optimal approach to cue combination – for discrepant cues, you should take their weighted average, where each cue is weighted by its reliability. Ex: averaging weather forecasts – an 80%-reliable weatherman says it'll be 60 degrees, and a 60%-reliable weatherman says it'll be 50.
○ When combining multiple sources of evidence, you should take all the evidence into account, but place more weight on the more reliable evidence.
○ **Fetsch (2010): monkeys are placed on a moving platform and shown dots that suggest a direction of motion. What they ideally do is take into account both their vestibular information (the platform motion) and the visual information (the dots), and average them together in a way that weights the visual vs. vestibular cues by their reliability.
Bayesian Suboptimality
○ Representativeness – when asked the probability that A belongs to a class B, people often rely on the degree to which A resembles a paradigmatic example of B.
○ **Failures that result from representativeness:
  Base rate neglect; the conjunction fallacy (Linda is a bank teller vs. a feminist bank teller)
  Insensitivity to sample size
  Misconceptions of chance (fair coin flips, the gambler's fallacy)
  Misconceptions of regression
○ **Availability – when asked the probability of A, people often rely on the ease with which instances of A can be brought to mind.
○ Failures that result from availability:
  Influence of familiarity
  Influence of ease of search
  Influence of salience
  Influence of recency
  Bias of imaginability
  Illusory correlation
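To make the formulas above concrete, here is a minimal Python sketch (all numbers are invented for illustration, not lecture data). The first part applies Bayes' rule and reproduces the two consequences; the second computes a reliability-weighted average of the two forecasts, reading "weight by reliability" simply as using the stated reliabilities as relative weights (a fuller Bayesian treatment would weight each cue by its inverse variance).

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded over H and not-H.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)   # normalizing constant
    return p_e_given_h * prior / p_e

# Consequence 1: a very low prior keeps the posterior low even with strong evidence.
print(posterior(prior=0.001, p_e_given_h=0.9, p_e_given_not_h=0.1))   # ~0.009

# Consequence 2: worthless evidence (equal likelihoods) leaves the prior unchanged.
print(posterior(prior=0.3, p_e_given_h=0.5, p_e_given_not_h=0.5))     # 0.3

# Cue combination: weight each discrepant estimate by its reliability.
def combine(estimates, reliabilities):
    total = sum(reliabilities)
    return sum(e * r for e, r in zip(estimates, reliabilities)) / total

# 80%-reliable forecaster says 60 degrees, 60%-reliable forecaster says 50 degrees.
print(combine([60, 50], [0.8, 0.6]))   # ~55.7, pulled toward the more reliable forecaster
```

The Fetsch monkey result is the same computation with visual and vestibular cues in place of the two forecasters.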
**Metacognition – we're using probabilities, which are themselves abstract, and we're also applying theories of mind and assigning probabilities to our own mental states, which makes this complex. Essentially, the study of how probable you think it is that you are right. How confident you are in an answer dissociates from how reliable you are at producing the right answer.
○ Important results about human metacognition (the assignment of certainty or uncertainty to our own decision making and, more generally, our own mental states):
  Confidence often varies independently of accuracy
  Overconfidence tends to apply across domains
  More information can lead to overconfidence
  Expertise can lead to overconfidence
  We're bad at predicting which problems we can solve
○ Metacognition in animals:
  **Kepecs and Mainen (2012): used an opt-out paradigm as evidence of metacognition. Shows that the animal has some sense of how likely it is that its categorization is correct, and can take that into account and weigh it against a guaranteed reward.
  Waiting-time paradigm: in a study with mice, Kepecs had them perform a classic categorization task. After choosing the correct A route, there is a random delay and then they are given a reward (water). If the mouse detected the wrong dominant smell and chose the B route, there would be a random wait time and nothing would happen until it went back to the initial starting place to do another trial. If the mouse is very confident that it got the right answer, it will wait as long as it takes, since it expects a reward is coming (this happens even when the mouse is incorrect but confident). If the mouse is not confident, Kepecs found it would give up relatively easily and go back to the beginning to start another trial. This demonstrates metacognition because it's a way of testing how confident an animal is that it gave the right response. When the A and B smells were about equal, the mice would only wait around 7 seconds before heading back to the start, because they were uncertain.

11/13: Robotics and Embodied Cognition

What problems require cognition or intelligence? Acting in the physical world feels easy for humans, but is surprisingly hard for machines.
**Moravec's Paradox – sensorimotor and perception skills require enormously more computational resources than abstract reasoning.
○ Essentially a mismatch between what seems easy versus hard for humans as compared to cognitive systems in general.
○ We often give reasoning tasks to people to test how good their cognitive abilities are (i.e. chess); however, this paradox argues that such tasks are only the thinnest veneer of what our minds are doing. As humans we need to adapt to changes in the world, deal with uncertainty, take motor actions, etc., which are actually really challenging computational problems.
**Embodied Cognition – the branch of cognitive science interested in how cognitive systems exist in physical worlds.
○ **As opposed to the traditional cognitive science approach – in which we think of the (natural or artificial) mind as separate from the environment, which provides inputs and outputs – embodied cognition involves thinking of the mind and environment as a single cognitive system with relations running back and forth.
○ Refers to a whole spectrum of ideas, from:
  "Simple" embodied cognition – our cognitive systems are made to operate in a physical world.
  "Medium" embodied cognition – states of the world and our bodies shape our thoughts in a fundamental way.
  "Radical" embodied cognition – the mind cannot be meaningfully studied in isolation from the world.
**Modal Representations – one branch of embodied cognition argues that most or all of our semantic representations are modal: they are tied to the sensory and motor information associated with that concept.
○ Even relatively abstract kinds of representations are tied to the particular sensory features through which we learn them in the world. All of these concepts are ultimately derived from some physical experience that we have, and they still retain some of those physical properties.
○ To study this in humans, we can look for cases where this kind of embodied information is present even when it is not necessary for a task.
  Tucker & Ellis (1998): subjects were shown images of objects and asked to distinguish whether they were right-side up or upside down by pressing a button with their right or left hand. They found a consistent bias: having the handle of an object pointed towards one of your hands speeds up responses with that hand, even though the action you're taking has nothing to do with the handle. This shows that our motor systems are more responsive when we represent the image in our mind and see that there is something on the object that we could interact with using that hand – we are preparing a motor response.
  Fuhrman (2010): subjects were shown images and asked to judge whether each happened earlier or later in a sequence. When the button for "earlier" was placed on the right, people took longer to answer, because people inherently map time onto a spatial dimension where earlier things are more to the left. So if people are presented with a set of buttons that breaks that spatial relationship, it takes them a little extra time to translate the motor action. People whose native language is written from right to left did better at this task.
Why should we think of the environment as part of a cognitive system? We cannot determine what the goal of a cognitive system is unless we know something about its physical characteristics.
**Ex: programming a robot to (1) move forward whenever possible and (2) if stuck, back up a bit and turn a random amount. The result is that it collects dots into a pile → understanding something in the context of its larger environment can help us understand why it's doing what it's doing.
○ In this specific environment (with this specific robot body and blocks of this specific size), this robot has the function of helping to pile up blocks.
○ If we want to study this system at Marr's highest (functional/computational) level, that is only possible when considering the robot + environment together.
○ Consistent with "radical" embodied cognition: we can't study these cognitive agents in isolation.
**Off-Loading Cognition – in some cases, we can offload our cognition to some extent onto the environment. Instead of keeping track of lots of representations inside our own minds, we can use properties of the physical world outside the mind to keep track of some things for us.
○ Humans can use their environment strategically in order to carry out cognitive processes (i.e. taking notes and referring back to them when needed).
○ Australian Aboriginal memory technique – associate information with physical locations, then walk back through these locations to remember.
**Dynamical Coupling – embodied cognition can be an ongoing interplay between an agent and the environment: an ongoing back-and-forth in which something in the world causes a response inside our own mind, we take action, and vice versa.
○ As opposed to a "representation-hungry" system that maintains complex internal states and computes offline, systems can repeatedly make simple computations by observing the world.
○ **Ex: strategies for catching a baseball: measure and represent its position and velocity and run a physics simulation to determine where it will land → vs. constantly adjust your running direction so that the ball always looks like it is moving upward in the same direction.
○ Rodney Brooks argues that this kind of dynamical coupling – constantly measuring things in the world and using those measurements to directly update your actions – is the right way to do robotics.
  "Actionist" / "situated" / "behavioral" robotics – argues against maintaining representational states inside robots. Rather than creating detailed internal models and making multi-step plans, robots should be composed of many interacting behavioral units that react to the environment. Essentially, we should only keep internal representations of what is critical to keep track of and not available in the world (i.e. someone giving us an instruction for a task).
**Social Cognition – we can also think of other people/robots as part of the environment in which we are situated.
○ For some kinds of collective behaviors, we may need to think about the "basic unit" of cognition as consisting of groups of minds.
○ Ex: programming a robot to increase its counter every time it sees a flash nearby. By the end, all robots are flashing at the same time. This type of algorithm provides a way for a group to come to a consensus synchronization. The "cognitive system" here is the whole group, not the individual.
○ Ex: programming a robot to align its direction with the average direction of nearby robots → leads to flocks heading in the same direction (a tiny simulation sketch appears at the end of this section).
Morality for Robots – since robots can take actions in the physical world, they have the potential to cause or prevent physical harm to people, animals, property, etc.
○ Asimov's Laws of Robotics:
  First Law: A robot may not injure a human being, or through inaction, allow a human being to come to harm.
  Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
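Referring back to the flocking example above, here is a tiny self-contained simulation (the robot count, neighborhood radius, and step count are arbitrary illustrative choices): each robot repeatedly adopts the average heading of its neighbors, and the group typically converges on a shared direction that no individual robot chose.

```python
import math, random

N, RADIUS, STEPS = 20, 0.3, 50
# Random starting positions (unit square) and headings (radians).
pos = [(random.random(), random.random()) for _ in range(N)]
heading = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def neighbors(i):
    xi, yi = pos[i]
    return [j for j, (xj, yj) in enumerate(pos) if math.hypot(xj - xi, yj - yi) < RADIUS]

for _ in range(STEPS):
    # Each robot aligns with the circular mean heading of its neighbors (itself included).
    new_heading = []
    for i in range(N):
        nbrs = neighbors(i)
        sx = sum(math.cos(heading[j]) for j in nbrs)
        sy = sum(math.sin(heading[j]) for j in nbrs)
        new_heading.append(math.atan2(sy, sx))
    heading = new_heading
    # (A fuller simulation would also move each robot a small step along its heading.)

# Alignment measure: length of the mean heading vector (1.0 = perfectly aligned).
mx = sum(math.cos(h) for h in heading) / N
my = sum(math.sin(h) for h in heading) / N
print(f"alignment after {STEPS} steps: {math.hypot(mx, my):.3f}")
```

The flash-counter synchronization example has the same flavor: a purely local update rule, iterated across the group, produces group-level behavior that no individual robot represents.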
11/18: Reinforcement Learning

What kinds of strategies can people or AIs use to plan actions?
**Unsupervised learning – detecting patterns in the world without a specific goal.
**Supervised learning – being taught the correct response to a stimulus.
In CogSci, the mind "solving a problem" usually means that:
○ There are one or more "goal"/"reward" states that we want the world to be in.
○ It will require multiple steps/actions to get from where we are to where we want to be.
○ It isn't obvious what the right next step is.
To decide on a next action, we need to either:
○ Use prior experience to choose an action that has tended to eventually get us to the goal ("model-free"), or
○ Explicitly make a multi-step plan ("model-based") – thinking forward to the future.
**Reinforcement Learning – useful for thinking about an agent that is making repeated decisions in an environment to achieve goals. RL algorithms can be practically useful (for AI systems) and also useful as explanations of human/animal behavior.
○ We have a current state of the world → think about possible actions that lead to the next state → keep considering and taking the actions that lead us toward the end goal we want.
○ Essentially, we have multiple steps, and when we find that things aren't working out, we go and identify the step where the mistake took place.
**Q(uality) Learning – the quality (Q) of an action is the sum of future rewards that we'll get on average if we take that action in this state. If we know the Q of every action in every state, we can make good decisions by just picking the action with the highest Q. (A minimal code sketch appears at the end of this section.)
○ One way to learn Q values is through experience: when we've taken this action in this state in the past, how did things tend to turn out?
○ Ex: when playing Tic-Tac-Toe, we have to consider the rewards, states, and actions. How often have I won in the past starting with X in the top-left? This is an example of the model-free approach, since we're not using any knowledge about how the world of the game works and we're not planning into the future – we're just considering which actions have worked for us in the past.
**Creating a Model
○ The approach of just recording experiences to learn Q values is model-free – it doesn't require us to know how our actions take us to new states.
○ But if we have a model of how our actions impact the state, we can make a plan for how to get into a "good" state rather than using trial and error. This is useful when we find ourselves in new situations and can't fall back on Q learning because we don't have enough experience.
  Ex: Model-free: choosing the '70s Gold station turned out great since it played Taylor Swift, so we should choose it again. Model-based: I should choose the Today's Hits station since it is more likely to play Taylor Swift, which is what I liked. Demonstrates that even though a certain action worked out well in the past, it might not be the best decision to make again if we know the model of the world.
○ **Model-based systems: can use new information to update their models and then update their plans. Helpful when you haven't experienced something before but can develop an understanding, and thus a plan, for future steps.
  Can estimate the quality of novel actions.
  Can flexibly update decisions if there is a change in the way the world works.
○ **Model-free systems: learn purely from experience which actions end up working out well.
  Don't require knowing how states change over time or how actions impact states.
  Very fast at making decisions.
○ Most humans and AIs use heuristic search (a combination of the two systems):
  Use experience to guess which actions will have high quality for the current state (model-free).
  Improve these guesses using some planning (model-based).
Solving real problems
○ Such strategies can be hard to use for several reasons:
  State spaces and sets of actions can be enormous and/or continuous.
  Rewards may be many steps away.
  Learning Q values or a model of the world may require a huge number of attempts.
○ **An expert in a domain is able to:
  Identify the most important aspects of a state.
  Quickly estimate the Q values of an action (even for novel states).
  Rely on cached, automatic sequences of actions rather than making conscious decisions.
○ **Chase and Simon (1973): experts in chess were tasked with memorizing either a real game board or a randomly generated board. The study found that experts showed a memory benefit only for real boards – this is because they are keeping track of the current state of the game. They have an internal representation that is really sensitive to the rules of the game (i.e. they can remember better when they can interpret that a knight is threatening the queen, etc.).
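Returning to the Q-learning idea flagged above, here is a minimal model-free sketch (the Tic-Tac-Toe state and action strings, and the running-average update, are illustrative placeholders): Q values are just running averages of the outcomes that followed each (state, action) pair, and decisions pick the highest-Q action.

```python
from collections import defaultdict

Q = defaultdict(float)   # Q[(state, action)] ≈ average outcome after taking action in state
N = defaultdict(int)     # how many times we've tried each (state, action)

def update(state, action, observed_return):
    """Model-free learning: nudge Q toward the return we actually experienced."""
    key = (state, action)
    N[key] += 1
    Q[key] += (observed_return - Q[key]) / N[key]   # running average of outcomes

def choose(state, actions):
    """Model-free decision: pick the action with the highest learned Q value."""
    return max(actions, key=lambda a: Q[(state, a)])

# e.g. after a few games of Tic-Tac-Toe (win = +1, loss = -1):
update("empty board", "X in top-left", +1)
update("empty board", "X in top-left", -1)
update("empty board", "X in center", +1)
print(choose("empty board", ["X in top-left", "X in center"]))   # -> "X in center"
```

A model-based system would add a dynamics function mapping (state, action) to a predicted next state and plan over it – roughly what the "AI learning" recipe below combines with this kind of cached value estimate.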
AI learning:
○ "Embedding" function: extract the relevant features of the state (how to define the current state in a meaningful way).
○ Value function: estimate Q.
○ Dynamics function: approximate model (an estimate of the possible states that could come up next).
○ Combines a model-free value estimate with some model-based lookahead.
SUMMARY: we can use the framework of Reinforcement Learning to describe different strategies for problem solving and multi-step planning.
○ Model-free strategies: use cached knowledge about which actions led to goals/rewards.
○ Model-based strategies: explicitly plan out the actions that will take you to goals/rewards.

Part 7: Philosophy

11/20: Consciousness

Some examples of consciousness include: the feeling of pain, the feeling of tasting, the feeling of thinking, the feeling of thinking about oneself, etc.
○ In principle, there can be cases where consciousness changes but our internal representation of what's happening doesn't – so there are differences between the feeling and the representation of an event.
Materialism and its Alternatives
○ Cognitive Science is materialistic, meaning that it studies the mind as a physical object (i.e. a process of the brain).
○ **Materialism – involves thinking about the mind and body as one thing. A popular view, since there are several instances in which damage to the brain leads to damage to the mind. However, there are some problems:
  **The Explanatory Gap – what is it about the physical activity in the brain that gives rise to consciousness (i.e. why does the firing of some neurons lead to consciousness while the firing of others doesn't)?
  **The Knowledge Argument – suppose Mary knows all the physical facts about the brain; when she finally leaves her black-and-white room, she learns a new fact: what it's like to see red. Therefore, the argument goes, facts about consciousness are not included among the physical facts (she learned/felt something new that was not part of her neuroscientific understanding of the brain).
○ Dualism – there are two substances, the body and the mind: the mind is an immaterial thing that lacks parts, and the body is a material thing that has parts. There is a causal interaction between the mind and the body.
○ Epiphenomenalism – rejects the idea that the mind interacts with the body and determines what it will do. States that we are passive observers of what our bodies are doing, and our minds try to take credit for it since we are egotistical. Our behavior comes before consciousness.
○ Russellian Monism – states that fundamentally all activities/changes originate from the mind, so there is no real competition between physical and mental causation. Also states that every particle is conscious.
**Testing for Consciousness: one problem is that people can seem to have consciousness without the ability to report it (i.e. change blindness, visual/hemispatial neglect, blindsight, subliminal priming).
Scientific Theories of Consciousness:
○ **Global Workspace Theory – in cases where you report not being conscious of a stimulus (i.e. in subliminal priming experiments), the activity in the visual cortex stays below some threshold.
Information in the visual cortex becomes conscious when it reaches a certain threshold of activity and broadcasts a signal to the rest of the brain, so that it becomes available to the memory systems, motor systems, etc. Consciousness is the global availability of information in the brain.
○ **Higher-Order Theory – consciousness occurs when a later/higher part of the brain forms a representation of what's happening in an earlier part of the brain (i.e. the visual cortex). Consciousness comes from this subsequent representation of the earlier state, and that is sufficient.
○ **Re-Entry and Predictive Processing Theory – there is a sequence of stages throughout the brain, with later stages predicting what the earlier stages are going to do. When new information comes in and a prediction is wrong, the system updates to make different predictions in the future.
○ **Integrated Information Theory – look at how integrated information is in the brain; that gives you a conceptual structure, measured by phi. The higher the value of phi, the more consciousness there is. Phi measures how connected the parts of the brain are – how much information flows back and forth. Phi is what gives rise to consciousness.
Split Brain
○ **Experiment: when the brain is split, the part responsible for speech (the left hemisphere) gets visual information from the right half of the visual field, so the patient will say they saw a ring. However, the left hand is controlled by the right hemisphere, which saw the word "key", so they'll pick up a key. This raises questions about consciousness – in what sense is consciousness ordinarily "unified"?
Consciousness in Animals and Machines
○ LSE criteria for consciousness: to test these, an experiment with octopuses – they could go to three places, each with a different pattern (stripes or dots), and they tended to go to one pattern. The experimenters then conditioned them to avoid certain places, demonstrating associative learning and thus (by these criteria) consciousness.
○ **Studies have recently found that bees may have consciousness, since they have been shown to play with balls, and playing is a form of intrinsic pleasure. Thus, they may be aware and conscious of reward/pleasure.
  Another study found that bees can be taught to use tools (i.e. pulling out a disk to reach rewards) and that they show social learning by observing other bees do it first.
  Another study had bees fly up to colored objects, either balls or cubes, with the balls holding a reward. The bees very quickly learned to go to the spherical objects; importantly, during this training they could not use touch to tell the two shapes apart. Yet when the lights were turned off, they still went to the spherical objects by touch alone. They had perhaps created a sophisticated map of the world and integrated multiple sensory modalities.

11/25: Belief

**Consciousness and Free Will: Libet's experiment – subjects observe a clock, note the time at which they consciously decided to push a button, then push it. Found that EEG can predict when somebody is about to push the button before they report the conscious intention.
○ Controversial conclusion: almost all decisions are made before consciousness, and consciousness just provides the opportunity to veto those decisions.
**Beliefs and Folk Psychology: the mental states "the folk" use to explain behavior – includes believing, knowing, expecting, doubting, desiring, etc.
○ Dan Dennett's view:
  We attribute these mental states to ourselves and others on the basis of their behavior.
  Mental states are merely helpful tools for predicting future behavior.
  There are no objective facts about which systems have mental states.
○ Jerry Fodor's view:
  We attribute these mental states to ourselves and others on the basis of their behavior.
  Behavior is caused by representations in the mind, and different mental states cause different behaviors.
  There are objective facts about which systems have mental states.
Beliefs and Aliefs
○ Here we explore the complexity of our behavior even when we believe/know something is true – for instance, we may be afraid to walk on a glass bridge even though we know that it is stable. Aliefs question how comprehensive folk psychology is.
○ According to Gendler, aliefs are:
  Associative
  Automatic
  Arational
  Shared with animals
  Antecedent to other cognitive attitudes
  Action-generating (they activate behavior)
  Affect-laden
○ **Fundamental differences between beliefs and aliefs:
  Beliefs are sensitive to evidence; aliefs are not.
  Beliefs involve acceptance; aliefs do not.
  Ex: we see sugar being poured into two shakers, but one is labeled "sugar" while the other is labeled "not cyanide". Even though we know that both contain the same stuff, people prefer the one labeled "sugar".
○ Arguments against aliefs: our reactions to things can involve more complex reasoning – e.g. "the enemy of your enemy" is someone you'll like.
Beliefs and Psychiatry
○ OCD → what's the best explanation for obsessively checking that the stove is off? Possibly that the person believes the stove is off but alieves that it's on.
○ Delusions → a man has visual experiences of seeing portals and believes that they exist, although he knows rationally that they don't.
**Beliefs and Groups
○ Group behavior can be complicated.
○ Group behavior can be goal-directed without being directed by any particular member.
○ Groups can have goals that aren't the goal of any particular member.
○ Groups can have parts for memory, perception, and decision-making.
**Beliefs and Technology
○ Ex: Inga believes that MoMA is on 53rd Street because that information is stored in her memory system and she can access it when necessary. Otto stores the information about the museum's location in his notebook, and thus he also believes that the museum is on 53rd Street. Both need to search for the information, have reliable access to it, and can lose access to it. This is similar to how information is stored in technologies like our phones, and we believe the information provided to us through these media. So at what point is the stored information too loosely integrated with us to count as belief?

12/2: Challenges to Cognitive Science

Objections to the classic computer analogy:
○ Understanding: a computer doesn't understand its inputs → we understand our inputs → thus we are not computers.
  **Chinese Room thought experiment: just because you can respond to inputs or symbols in the right way doesn't mean that you have genuine understanding. The man in the Chinese room correctly manipulates symbols → the man does not understand Chinese → therefore correctly manipulating symbols is not sufficient for understanding Chinese → computers are just devices for manipulating symbols → thus computers do not understand.
○ Consciousness: a computer isn't necessarily conscious, but we are, therefore we are not merely computers.
○ Free will: a computer's decisions are never free, but ours sometimes are, therefore we are not computers.
○ Creativity: a computer is never creative, but we are, therefore we are not computers.
○ The claim is that computers don't have the right kind of thought process to produce creative work (like the work AI produces); they rely on human instructions.
**Dynamical Systems Theory: computers are not dynamical systems, but we are, so therefore we are not computers. We should use the language of calculus and physics to understand the brain as just a complex physical system, rather than understanding it as doing computations like a traditional computer.
○ Ex: the Watt governor was invented to keep steam engines running at a steady pace by controlling the amount of steam that comes out of a valve, which is a dynamical process. We can give a computational description of the Watt governor, and the claim is that the brain is similar to this machine: it controls lots of processes that balance each other. Describing what the brain is doing in terms of computations over-intellectualizes it → the claim is that CogSci overcomplicates the process.
○ Reply: there is no real dynamical-systems account of things like long-term memory, but CogSci can explain them.
Neural Network Theory: neural networks use algorithms, but we shouldn't look for parts corresponding to each step.

Part 8: Artificial Intelligence

12/4: Artificial Intelligence

There are different definitions of artificial intelligence; we have to distinguish whether AIs are agents that think and have cognitive processes or agents that take actions in the real world (thinking vs. acting, humanly vs. rationally).
○ CogSci focuses more on "thinking humanly" – the processes by which an AI thinks like humans. Defined as "the automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning…".
Newell & Simon's Physical Symbol System hypothesis: any system exhibiting intelligence must operate through symbol manipulation – it has to work by manipulating discrete symbols that represent something in the world.
○ This turned out to be difficult; for instance, translating and manipulating symbols does not explain the complex behavior of language.
Rule-Based Expert Systems: you take an expert in some domain and try to write down their store of knowledge/rules about the world.
○ Problem: a lot of our behavior and knowledge doesn't fit easily into a strict set of rules.
**The current cycle of AI started in the 2010s, and the big change was an innovation in the data sets people were using to create models. Instead of building a rule-based system with knowledge hard-coded into it, they built really flexible systems with many parameters that don't need to be explicitly set (a minimal sketch of this contrast appears below).
○ Fit the model to some data set and have it figure out how to program itself so that it performs well on a specific task.
○ ImageNet database.
**Misalignment – the problem where what the AI is trying to optimize is not the same as what we're trying to optimize.
○ Paperclip example: an AI trying to maximize the number of paperclips could lead to the destruction of the human race, because all it's trying to do is optimize that one objective.
How does ChatGPT's cognition compare to human cognition?
○ Initially, in 2022, AI could not solve the False-Belief Task or the Ebbinghaus Illusion. However, within the next two years it was able to solve them accurately, demonstrating how quickly AI is improving.
○ ChatGPT can now solve problems related to centration, apply model-based reasoning, handle the Wason selection task and aliefs scenarios, etc.
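To illustrate the rule-based vs. learned-parameters contrast above, here is a minimal sketch (the toy brightness task, the data, and the update rule are all invented for illustration): the first classifier encodes a hand-written rule, while the second has a free parameter that is fit to labeled examples.

```python
# Toy task: decide whether an image is "bright" from its average pixel value (0-1).

# Rule-based approach: an expert hard-codes the knowledge.
def rule_based(avg_pixel):
    return avg_pixel > 0.5              # the threshold itself is the hand-written rule

# Learned approach: start with an arbitrary parameter and fit it to labeled data.
data = [(0.9, True), (0.8, True), (0.35, False), (0.2, False), (0.6, True), (0.3, False)]
threshold = 0.0                          # parameter not set by hand
for _ in range(200):                     # crude error-driven fitting loop
    for avg_pixel, label in data:
        predicted = avg_pixel > threshold
        if predicted and not label:      # false alarm: raise the threshold a little
            threshold += 0.01
        elif label and not predicted:    # miss: lower the threshold a little
            threshold -= 0.01

def learned(avg_pixel):
    return avg_pixel > threshold

print(f"learned threshold ≈ {threshold:.2f}")   # settles at a value that separates the two classes
```

The systems trained on ImageNet are this idea at scale: millions of parameters, none set by hand, all adjusted to reduce error on the training data.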
Multiple approaches to AI have been attempted, with rule-based symbolic systems now largely replaced by connectionist systems trained on large datasets. Cognitive Science can give us tools for evaluating or improving the human-ness of AI systems.
AI and Society:
○ "Garbage in, garbage out" bias: racist chatbots, racist YouTube recommendations, and racially biased prison sentences.
○ Vicious feedback loops: identify an area for extra policing → more criminal activity found in that area → identify it as needing even more policing → etc.
○ Social harms:
  AI could become super-intelligent and outsmart humans.
  AI could be used for bad purposes.
  AI could be put in charge of important tasks and make mistakes.
  Employers might hire fewer employees.
