Summary

These lecture notes cover cognitive psychology, including methods of studying it (introspection, experiments, neuroscience, neuropsychology, and computational modeling). The notes discuss various methods and their limitations, and explore recurring themes such as computer models of information processing, serial vs. interactive processing, and bottom-up vs. top-down processing.

Full Transcript


Lecture 1c ========== - What is cognitive psychology - The branch of psychology that is interested in the processes involved in acquiring, storing, and transforming information - Cognitive psychologists are interested in how human information processing occurs, and the mental representations involved in these processes - Cognition is not something you can directly see - We need to find alternative methods to study it - Methods of studying cognitive psychology: - Introspection - Systematic examination of one's own thoughts and mental state - An early method used by scholars of psychology - Limitations - Mental processes occur too quickly for us to access them - E.g. it takes 200 milliseconds to identify a string of letters as a word - There is no way to validate whose interpretation is correct - Overconfident but ultimately wrong intuitions - E.g. when asked to pick the correct Apple logo from several similar-looking pictures, people are confident but often choose wrongly - We still use introspection often in modern-day research - Surveys, questionnaires, think-aloud procedures that ask us to report our thoughts and feelings - Experiments - Use of "behavioral evidence" to infer cognitive processes - Trying to infer, from your performance of the task, what the underlying cognitive process is - Behavioural evidence usually obtained from experiments with healthy participants (usually undergraduate students) - Behavioral measures: speed and accuracy - Tightly controlled conditions and clever designs - Limitations - Not ecologically valid - E.g. how useful is this study in our lives? - We sacrifice making things naturalistic and interesting in order to control for everything, so that the behavior can be attributed to the experimental manipulation - Neuroscience - Functional neuroimaging methods: fMRI, EEG, MEG, fNIRS - Brain activity + behavioral measures - fMRI has higher spatial than temporal resolution - EEG has higher temporal than spatial resolution - Limitations - Brain activity is hard to interpret - E.g. 
the coloured spots in brain-imaging figures are hard to interpret - E.g. seeing if they are different from baseline brain activity - Just because you can measure brain activity doesn't mean that you will understand the underlying cognition - Indicative of associations between brain areas and behavior, but does not tell us exactly how the processing occurred - Assumption of "functional specialisation" is suspect: brain areas interact with each other in highly complex ways to produce behavior - It is not that each part of the brain has one function. There is a huge amount of integration and interdependence in brain function - E.g. the use of network science to see how brain areas interact with each other - Neuropsychology - Looks at individuals with brain lesions due to injury and disease - Looks at their behavior, inferring that if a certain part of the brain is impaired, certain kinds of cognitive functioning will be impaired as well - We can learn a lot about the cognitive system when it is not working as expected - Dissociation: comparison with a healthy group - Double dissociation: a patient that can do X but not Y, another patient that can do Y but not X - Limitations - Limited number of patients with unique patterns of impairment - Can the pattern of one individual generalise? - Not necessarily, because we cannot control where injuries occur, so no two patients are injured in exactly the same area of the brain - E.g. of neuropsychology being studied: Henry Molaison - Computational modelling - Computer programs to implement different models of human cognitive functioning - Computer programs written by psychologists to implement different models of how humans think - AI is NOT a computational model of human cognition: it produces intelligent outcomes using processes that are very non-human-like - E.g. 
AI can read all the text on Wikipedia; no human could ever do that - Symbolic vs. connectionist approach - Symbolic: subprogrammes operating on symbols - Connectionist approach: neural networks of interconnected nodes - Limitations: - Requires highly specialised skill sets that are not often taught to psychologists - E.g. coding and programming - Does not consider the role of emotion and motivation - Models are highly abstract attempts to capture a human process - Recurring themes - Computer model of information processing metaphor (input -\> internal processes -\> output) - Serial vs. interactive processing (one thing at a time, or can multiple sources of information interact during processing?) - Bottom-up vs. top-down processing (externally or internally driven? Influence of the environment vs. our own knowledge) - The two are distinct, but they interact with each other

Reading 1 -- what is cognitive psychology
=========================================

- **Cognitive psychology** is concerned with the processes involved in acquiring, storing, and transforming information. - cognitive psychology tries to understand the code on which human information processing relies and the processes operating on that code. - The term **mental representation** is often used to refer to the code.

What methods have been proposed to study the human mind
-------------------------------------------------------

### Introspection

- to look inside and systematically examine one's own thoughts. - Introspection and reasoning were the ways to access this true, cosmic information. - it is no longer the most important way of studying human cognition. - many processes happen so fast that we do not have conscious access to them. - 4 limitations to introspection - We are largely unaware of many of the processes influencing our behaviour. 
We are generally consciously aware of the *outcome* of our cognitive processes rather than of those *processes* themselves - The reports of our conscious experience may be distorted (deliberately or otherwise) - There is a delay between having a conscious experience and reporting its existence. As a result, we may sometimes forget part of our conscious experience before reporting it. - Introspection has no way of ascertaining what is going on when two (groups of) persons differ in their introspection.

### Observation and manipulation

- People, events, and things are linked and remembered on the basis of three principles: contiguity or closeness, similarity, and contrast. - Behaviorism: an approach to psychology that emphasises a rigorous experimental approach and the role of conditioning in learning - Learning occurs when an association is formed between a stimulus and a response. - the behaviourists' emphasis on *external* stimuli and responses was accompanied by a virtual ignoring of *internal* mental and physiological processes.

### Theory building, verification, and falsification

- The use of intellect to understand observations is called theory building. - Theory usually refers to a time-honoured framework that explains a wide range of empirical findings; a model is more modest and refers to the initial explanation of a more limited issue. - Two further aspects are important in theory or model building. - The first is verification: Researchers verify systematically that the understanding (the theory) is in line with all observations. - The second principle is falsification: If a phenomenon is understood, it is possible to make predictions of what must happen if the theory is correct -- and the theory must be revised when those predictions fail.

### Information-processing approach

- Processing directly affected by the stimulus input is often described as **bottom-up processing**. - Serial processing: only one process occurs at any moment in time. 
- With our limited processing capacity, there is generally too much information in the sensory stores for us to attend to all of it. - we attend to only *some* of the available information, which then proceeds to the short-term store.

### Top-down processing

- processing is not exclusively bottom-up but also involves top-down processing. - **Top-down processing** is processing influenced by the individual's expectations and knowledge rather than simply by the stimulus itself. - if attentional processes determine which fraction of the information available in the sensory stores enters short-term memory, top-down processes such as our goals and expectations are likely to influence what we attend to. - cognition involves both bottom-up and top-down processing. - This is called *interactive processing*. - Information coming from the senses (bottom-up) is combined with expectations on the basis of the context (top-down) to optimise processing efficiency.

### Parallel processing

- some of the processes involved in a cognitive task occur at the same time -- this is **parallel processing**.

### Contemporary Cognitive Psychology

- four major approaches to cognitive psychology - *Experimental cognitive psychology:* This approach involves carrying out experiments on healthy individuals (often psychology undergraduates). Behavioural evidence (e.g., participants' level of performance) is used to shed light on internal cognitive processes. - *Cognitive neuroscience:* This approach also involves carrying out experiments. However, it extends experimental cognitive psychology by using evidence from brain activity (as well as from behaviour) to understand human cognition. - *Cognitive neuropsychology:* This approach also involves carrying out experiments. Although the participants are brain-damaged patients, it is hoped the findings will increase our understanding of cognition in healthy individuals as well. 
Cognitive neuropsychology was originally closely linked to cognitive psychology but now is also linked to cognitive neuroscience. - *Computational cognitive science:* This approach involves developing computer models based in part on experimental findings to explain human cognition.

#### Experimental cognitive psychology

- Such experiments are typically tightly controlled and "scientific." - the findings of experimental cognitive psychologists have played a major role in the development and subsequent testing of most theories in cognitive psychology. - Experimental cognitive psychologists typically obtain measures of the speed and accuracy of task performance. - An important phenomenon in cognitive psychology is the **Stroop effect** (Stroop, 1935) - The finding that naming the colours in which words are printed takes longer when the words are conflicting colour words (e.g., the word RED printed in blue ink) - A concern sometimes raised against cognitive psychology is that how people behave in the laboratory may differ from how they behave in everyday life. - In other words, laboratory research may be low in **ecological validity** -- the extent to which the findings of laboratory studies are applicable to everyday life. - First, it is far better to carry out well-controlled experiments under laboratory conditions than poorly controlled experiments under naturalistic conditions - Second, to our knowledge, there are no studies in cognitive psychology that report *opposite* results in the real world than in the lab (things may be different for social psychology)

#### Cognitive neuroscience

- provides us with information about brain activity during performance of cognitive tasks as well as behavioural evidence. - Carving up the brain into different areas ![](media/image2.png) - The brain is more than the cerebrum - Underneath the cerebrum, in the middle of the head, there are a number of small structures, called subcortical structures. 
- Three subcortical structures will be important for the rest of the book: the hippocampus, the amygdala, and the thalamus (Figure 1.9). - The **hippocampus** is a subcortical structure particularly important for **memory encoding and spatial knowledge** (e.g., orienting yourself and finding a way to a target). - There is one part on the left side of the brain and one on the right. - The **amygdala** consists of a left and a right structure in front of the hippocampus. This structure is particularly active in situations that provoke fear and that are **emotionally arousing**. - There is evidence that stimuli (in particular dangerous ones) can directly activate the amygdala without elaboration by the cortex. - The **thalamus** is at the very centre of the brain and functions as the brain's relay station connecting the various parts (although there are also many direct connections between different parts of the brain). - It is a structure involved in regulating the state of consciousness (e.g., asleep, awake, in a coma \...). - The **cerebellum** ("little brain") is a structure situated at the back of the head underneath the cerebrum. It is crucial for motor control (such as maintaining balance and performing movements), but also critically involved in the fluent running of cognitive processes. - Functional magnetic resonance imaging (fMRI) - MRI scans can be obtained from numerous different angles but only tell us about the *structure* of the brain rather than about its *functions.* - What is measured in fMRI is based on assessing brain areas in which there is an accumulation of oxygenated red blood cells suggestive of activity - Second, it allows us to find out whether two tasks involve the same parts of the brain in the same way, or whether there are important differences. 
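The idea that fMRI lets us ask whether two tasks engage the same areas "in the same way" can be sketched numerically. This is a minimal toy illustration, not a real fMRI pipeline: the voxel values, the effect sizes, and the activation threshold are all simulated/invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "activation maps": one % signal change value per voxel, for two
# tasks, relative to a resting baseline. All numbers are simulated;
# real fMRI analysis is far more involved.
n_voxels = 1000
task_a = rng.normal(0.0, 0.5, n_voxels)
task_b = rng.normal(0.0, 0.5, n_voxels)
task_a[:100] += 2.0     # voxels 0-99 respond strongly to task A
task_b[50:150] += 2.0   # voxels 50-149 respond strongly to task B

threshold = 1.0         # arbitrary cutoff for calling a voxel "active"
active_a = task_a > threshold
active_b = task_b > threshold

shared = np.flatnonzero(active_a & active_b)   # same parts, same way
only_a = np.flatnonzero(active_a & ~active_b)  # where the tasks differ
print(shared.size, only_a.size)
```

Even this toy version shows the interpretive point made in the notes: the comparison only reveals *associations* between regions and tasks, and everything hinges on the (arbitrary) baseline and threshold.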
- Event-related potentials - Basic cognitive processes last less than 1 second and, therefore, must be measured with a resolution of 1 millisecond (usually abbreviated as ms) or a few milliseconds. - An EEG is based on recordings of electrical brain activity measured at the surface of the scalp - Very small changes in electrical activity within the brain are picked up by scalp electrodes. - However, spontaneous or background brain activity sometimes obscures the impact of stimulus processing on the EEG recording. - This problem can be solved by presenting the same type of stimulus several times. - This method produces **event-related potentials (ERPs)** from EEG recordings and allows us to distinguish genuine effects of stimulation from background brain activity. - there are limitations with the use of ERPs. - Remember that a given stimulus needs to be presented several times in order to produce consistent ERPs. - That works well when participants process the stimulus in the same way on each trial, but is inappropriate when processing differs over trials. - Evaluation - ERPs and brain-imaging techniques provide useful information about the timing and location of brain activation during performance on an enormous range of cognitive tasks. - it is often hard to *interpret* the findings from brain-imaging studies. - Baseline conditions - when researchers argue that a given brain region is active during the performance of a task, they mean it is active relative to some baseline. - We might argue that the resting state (e.g., participant sits with his/her eyes shut) is a suitable baseline condition. However, the brain is very active even in the resting state and performing a task increases brain activity by 5% or less. - most brain activity we observe reflects basic brain functioning. 
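The trial-averaging logic behind ERPs described above can be sketched in a few lines. This is a hedged toy simulation: the signal shape, noise level, and trial count are invented for illustration only, not taken from any real recording.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated EEG: 500 trials, 600 samples at 1 sample per ms.
n_trials, n_samples = 500, 600
t = np.arange(n_samples)

# A small stimulus-evoked component peaking around 300 ms,
# buried in much larger spontaneous background activity.
evoked = 2.0 * np.exp(-((t - 300.0) ** 2) / (2 * 40.0 ** 2))
background = rng.normal(0.0, 10.0, (n_trials, n_samples))
eeg = background + evoked   # each single trial looks like pure noise

# Averaging across trials cancels the background (it is random from
# trial to trial) but preserves the time-locked evoked response.
erp = eeg.mean(axis=0)

print(int(np.argmax(erp)))  # peak of the average lies near 300 ms
```

This also makes the stated limitation concrete: the average is only meaningful if the evoked component is the same on every trial; if processing differs across trials, the average smears or cancels the very effect of interest.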
- researchers are particularly attentive to *increased* brain activity as a result of task performance reflecting task demands - however, there is often *decreased* brain activity in some brain regions - brain functioning is much more complex than often assumed. - brain-imaging techniques only indicate there are *associations* between patterns of brain activation and behaviour - We can't be certain that involvement of the prefrontal cortex is necessary or essential for performance of the task. - most brain-imaging research is based on the assumption of **functional specialisation** -- the notion that each brain region is specialised for a different function. In fact, matters are often much more complex. - The performance of a given task is often associated with activity in several brain regions at the same time, and this activity is often integrated and coordinated. - It is harder to identify brain networks involving coordinated brain activity than to pinpoint specific regions active during task performance (Ramsey et al., 2010). - knowing when and where in the brain processing takes place does not answer the more important question of "how" the processing is done.

#### Cognitive neuropsychology

- **lesion** -- structural damage to the brain caused by injury or disease. - Psychologists discovered that they can learn useful things about the workings of the brain by studying its malfunctioning. - Major assumptions - One such assumption is that of **modularity**, meaning that the cognitive system consists of numerous modules or processors operating relatively independently of each other. - It is assumed these modules respond to only *one* particular class of stimuli. - the way the modules or processors are organised is very similar across people. - If this assumption is correct, then we can generalise the information obtained from one brain-damaged patient to draw conclusions about the organisation of brain systems in other people. 
- If the assumption is incorrect, then the findings from a single brain-damaged patient won't generalise. - Subtractivity - brain damage impairs one or more modules, but can't lead to the development of any new ones or to the use of new processing strategies. - As a result, the contribution of brain areas to a particular function may be underestimated if the patient manages to perform reasonably well on the basis of alternative strategies. - One way cognitive neuropsychologists understand how the cognitive system works is by searching for dissociations. A **dissociation** occurs when a patient performs at the same level as healthy individuals on one task but is severely impaired on a second one. - A single dissociation may, however, simply reflect one task being harder than the other. To avoid this problem, cognitive neuropsychologists are particularly interested in a **double dissociation**. - An important issue cognitive neuropsychologists have addressed is whether to focus on individuals or groups in their research. - In research on healthy individuals, we can have more confidence in our findings if they are based on fairly large groups of participants. - However, the group-based approach is problematic when applied to brain-damaged patients, because patients with apparently the same condition typically differ in the pattern of impairment.

#### Computational cognitive science

- A more detailed (and challenging) way to study cognition is by writing computer programs that do the same things as brains -- this is computational cognitive science. - **Computational modelling** involves programming computers to implement a theory or model of human cognitive functioning. - artificial intelligence involves constructing computer systems that produce intelligent outcomes, but the processes involved need not be the same as those in humans. 
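To give a concrete taste of what "implementing a model" means in the symbolic, if-then style discussed in these notes, here is a hypothetical miniature production system. The task (single-digit addition by counting up) and every rule in it are invented purely for illustration; this is not how ACT-R or any real model is actually implemented.

```python
# Toy production system: condition-action (if-then) rules operating on
# a symbolic working memory, in the spirit of symbolic models.
# Task and rules are invented for illustration only.

def run(goal):
    memory = {"goal": goal, "count": 0}   # goal is ("add", a, b)
    rules = [
        # IF we have not yet counted up b times, THEN count up by one,
        # incrementing the running total held in the goal.
        (lambda m: m["count"] < m["goal"][2],
         lambda m: m.update(count=m["count"] + 1,
                            goal=(m["goal"][0],
                                  m["goal"][1] + 1,
                                  m["goal"][2]))),
        # IF we have counted up b times, THEN report the running total.
        (lambda m: m["count"] == m["goal"][2],
         lambda m: m.update(answer=m["goal"][1])),
    ]
    # Fire the first rule whose condition matches, until an answer exists.
    while "answer" not in memory:
        for condition, action in rules:
            if condition(memory):
                action(memory)
                break
    return memory["answer"]

print(run(("add", 3, 4)))  # counts up from 3 four times -> 7
```

Even this toy shows the key design choice of the symbolic approach: mental representations are explicit symbols, and processing is the repeated firing of if-then rules over them.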
- computational models require good mathematical knowledge and computer programming skills - symbolic models - Symbolic language consists of symbols (referring to the mental representations) on which logic operations are performed (in particular, if-then rules) - the most ambitious symbolic model built was the *Adaptive Control of Thought -- Rational (ACT-R) model* by Anderson. The model is ambitious because it aims to be potentially applicable to all different human functions. - There are four main modules, each of which can be used across numerous tasks: - 1\. *Retrieval module:* Maintains the retrieval cues needed to access stored information. - 2\. *Goal module:* Keeps track of an individual's intentions and controls information processing. - 3\. *Imaginal module:* Changes problem representations to facilitate problem solution. - 4\. *Procedural module:* Uses various rules to determine what action will be taken next; it also communicates with the other modules. - What is especially exciting about Anderson et al.'s (2008) version of ACT-R is that it combines computational cognitive science with cognitive neuroscience. What that means in practice is that Anderson identified the brain areas associated with each module of the program ![](media/image4.png) - Connectionist models - networks of interconnected nodes could solve many tasks thought to require symbolic programming - connectionist networks consist of units or nodes that are connected in various layers, with no direct connection from stimulus to response - We will focus here on *two* key features on which connectionist networks differ: whether their representations are localist or distributed, and whether the models can learn new information. - Localist vs. distributed representations - In a distributed representation, the number 1 would not activate one node in the input and output layer, but would activate all the nodes to a certain extent. 
The number 2 would also activate all the nodes, but with a slightly different pattern, so that it can be discerned from the number 1, and so on. - In a localist representation, in contrast, information is localised in a few nodes - Distributed representations are harder to grasp, but they have the great advantage that the network can process new stimuli resembling previously learned stimuli - Whether a model can learn new information - Localist models tend to be end-state models; that is, models that contain all the required information and are fully trained. - Distributed models, in contrast, tend to be learning models. For this reason, models with distributed representations are likely to be the future of computational modelling (just as they already are in artificial intelligence), even though they are more difficult to understand and integrate into a general theory of the human mind.

### Growing interest in individual differences

- The study of individual differences is rapidly gaining in importance. - There are two main reasons for this. - First, individual differences are likely to have practical consequences. - A second reason why individual differences are interesting is that they provide information about the underlying processes.

### Meta-analysis

- A **meta-analysis** involves **combining the data from a large number of similar studies into one very large analysis.** - To make the findings of the different studies comparable, the effects are translated into a standardised effect size. Usually, Cohen's d is used. - Although meta-analyses are a good way to summarise findings of tens of studies, they are not a cure-all. Sharpe (1997) identifies three problems with meta-analyses: - The "Apples and Oranges" problem: Studies that aren't very similar to each other may nevertheless be included within a single meta-analysis. - The "File Drawer" problem: It is generally harder for researchers to publish studies with nonsignificant findings. 
Since meta-analyses have little access to unpublished findings, the studies included may not be representative of the studies on a given topic. - The "Garbage in--Garbage out" problem: Many psychologists carrying out meta-analyses include all the relevant studies they can find. This means that very poor and inadequate studies are often included along with high-quality ones.

Week 2
======

Lecture
-------

Lecture objectives:

1. the difference between sensation and perception
2. fundamental principles of perceptual organisation from the Gestalt psychologists
3. theoretical frameworks and experimental findings that explain the processes involved in pattern recognition, object recognition, and face recognition

Gain appreciation for:

- object and face recognition abilities, in contrast to those with perception disorders
- why we find visual illusions and ambiguous images compelling, surprising, and funny

**Sensation vs perception** - Sensation: **the intake of information** by receptors and the translation of this information into signals that the brain can **process as images, sounds, smells, tastes, etc.** - Perception: **the interpretation and understanding** of these sensations **How do we transition from sensation 
to perception** 1. Usually the process is so fast we are unable to notice it 2. Ambiguous or degraded stimuli, or visual illusions, like the previous example, illustrate that perception and sensation are not the same process **Law of perceptual organisation** - The Gestalt psychologists' key contribution to psychology is the law of Pragnanz - The percept is **more than a sum of sensations** - What we **perceive** is the **simplest possible organisation of the visual environment** **The Gestalt laws** - Law of proximity - When things are close together, we tend to group them together - Law of similarity - When things look similar, we group them together - Law of continuation - We see the two lines as smooth continuous curves rather than as two V shapes joined together - Law of closure - We perceive a circle rather than a set of disconnected curved lines **Figure-ground segregation** - The visual environment is separated into the figure, which has a distinct form, and the ground, which lacks form - The figure often seems to be more salient - The figure is also perceived to be "more important", and to be in front of the ground - The figure is the people, the ground is the vase - Which part is figure and which is ground can be very subjective **Figure-ground segregation vs. 
object recognition** - Figure-ground segregation was traditionally assumed to occur *before* object recognition - recent studies suggest that this is not necessarily the case: if you recognise the object, knowledge of the object helps you with figure-ground segmentation **Evaluation of Gestalt psychologists** - **Positive aspects** - Discovered important underlying principles of perceptual organisation - **Negative aspects** - De-emphasised the **contribution of experience and top-down knowledge** in perception - Not able to explain the **underlying processes of perception** **Lecture discussion (what can we learn)** ![A cat lying on a wood floor](media/image7.png) - Law of similarity (colour) - Law of continuation/closure (lines on the floor and on the cat) - Law of proximity (cat is on the floor) **Pattern recognition** - Refers to the identification of two-dimensional patterns - **Necessary for object recognition** - Requires the **matching of the stimulus to a category** of objects stored in memory **Theories of pattern recognition** - Template theories - A pattern is recognised when it **closely matches a template stored in memory** - A template is a form or pattern that is stored in long-term memory - Feature theories - A pattern consists of a **set of features or attributes** - Patterns are **matched** when they share the **same set of features** - E.g. 
template theory here is the ability to **recognise the letter by matching it to stored knowledge of what "A" looks like.** Feature theory is the ability to recognise that "A" has three line segments and a pointy tip - Limitations of template theories - **Unrealistic**, especially when the stimulus can take on **different forms** or be **viewed from different perspectives** - Limitations of feature theories - Incorrectly assumed that local processing occurs before global processing **Hubel and Wiesel's** discovery of feature detectors - if we process the basic features of a visual stimulus, we should be able to detect brain cells that process these basic features - Developed a technique for recording from individual brain cells that detect such basic features - **Simple cells**: signal the presence or absence of very **simple features like horizontal or vertical lines** at a particular location in the visual field - **Complex cells**: **responsive to features independent of location and to combinations of features** - Transition from features to objects **Contribution of top-down processes** - Object superiority effect: the finding that a feature is **easier to process** when it is part of a **meaningful object** (than when it is part of an unknown form) - It is easier to see the face than the man playing the sax because we can identify the face more easily, and we see faces every day **Summary of pattern recognition** - both bottom-up and top-down processes matter and interact for pattern recognition to occur - When we are recognising patterns, there is an **involvement of both bottom-up and top-down processes** - Top-down effects become more apparent when the stimulus is ambiguous - Template or feature theory? - Template theory provides **fast recognition for familiar stimuli**. 
However, it may become rather inflexible after some time - Feature theory assumes that **processing of features occurs before holistic processing**, but de-emphasises top-down or environmental influences **Object recognition** - Occurs when the stimulus pattern fits a mental representation closely enough - A key theory of object recognition is Biederman's recognition-by-components theory - Perceptual organisation and pattern recognition work together to lead to **object recognition**, which occurs when a **mental representation is activated strongly enough** to be selected as the most likely interpretation of the stimulus **Biederman's geons: recognition-by-components** - Edges are extracted from the stimulus and combined into basic shapes or components called geons (geometric ions) - Geons are **object primitives**, the building blocks of object recognition. Object recognition relies on the **identification of geons** ![A diagram of a cup](media/image10.png) - In this example, when lines come together, object recognition is easier because we can identify the geons **Counter-evidence: what if objects do not have geons** - **Some objects do not have geons but silhouettes**: clouds, fire etc. 
- geons require lines, but somehow we can still **identify the objects based on their silhouettes** **Foster & Gilson (2002): object recognition depends on the "view"** - participants are shown a set of images, each with a **pair of objects that may be the same but placed at different angles** - **SAME (different viewpoint)** - **DIFFERENT (different object, same viewpoint)** - **DIFFERENT (different object, different viewpoint)** - **RT\_c is slower than RT\_b** **Evaluation of recognition-by-components theory** - the classic theory assumes that invariant geon-like components are involved in object recognition, and hence that recognition is **viewpoint-invariant** - counter-evidence from Foster and Gilson (2002): we use both components and viewpoint information for object recognition - the degree of viewpoint-invariance depends on the **familiarity of the object** - the theory may not work for objects that do not have "geons" **Face recognition** - we are good with familiar faces, and bad with unfamiliar faces - holistic - we look at a face and we do not break it down into its features; we process all of it holistically and globally - part-whole effect memory study - composite face effect - the top half of the face is the same, but the bottom part is not - when the face is aligned, it is difficult to convince yourself that the top half of the face is the same - when the face is misaligned, that's when you realise the top halves are actually the same - face inversion - holistic processing of an upside-down face - when all the parts of the face are there, we treat it as if everything is normal - but when the face is upright and we zoom in on the details, that's when we realise that something might be wrong ![](media/image12.png) **Are you a super recogniser, or...?** - face blindness, also known as prosopagnosia, is a condition where **recognition of faces is severely impaired**, 
but with little to no impairment of object recognition

**Disorders of perception**

- apperceptive agnosia: object recognition is impaired due to **deficits in perceptual processing**
- associative agnosia: perceptual processes are intact but **problems accessing knowledge about objects from memory**
- early vs late stages of object recognition

Reading
-------

### Introduction

- According to Sekuler and Blake (2002, p. 621), perception is "the acquisition and processing of sensory information in order to see, hear, taste, or feel objects in the world; it also guides an organism's actions with respect to those objects."

### From sensation to perception

#### Sensation vs. perception

- **Sensation** refers to the intake of information by means of receptors and the translation of this information to signals that the brain can process as images, sounds, smells, tastes, and so on.
- **Perception** in addition involves the interpretation and understanding of sensations.

### Using illusions to discover underlying processes

- Another way to study the transition from sensation to perception is by means of *visual illusions*.
- In an **illusion** you experience **(perceive) something other than what is physically presented**.
- Illusions reveal the processes involved in perception.
- It is important to keep in mind that the input your brain receives is sensory input coming from a flat page.
- However, your brain does not simply record the sensations. It tries to interpret them.
- For a start, it will assume that very **few objects in the world are flat.**
- So, it will try to project depth into the flat stimulus that comes from the back of your eyes (notice that all information coming from the eyes is two-dimensional, as the eyes do not register how far the light has travelled before it reaches them).
- A second assumption the brain seems to make is that **light usually comes from above**.
- These two assumptions lead us to interpret the figure as having bulging disks on the diagonals and receding disks in-between.
- If the light comes from above, then a bulging disk will look brighter at the top than at the bottom, and the other way around for a receding disk.
- When the figure is turned upside down, the same assumptions will lead to the perception of receding disks on the diagonals and bulging disks in-between.

#### Perception as an active interface between reality and action

- Perception involves more than the passive registration of sensations.
- It requires the active regrouping and restructuring of the input on the basis of innate structures and previous experiences.
- For this reason, Hoffman et al. (2015) argued that we should not compare perception with stimulus recording (filming).

### Perceptual organisation

- One of the main challenges the perceiving brain has to solve is to decide which parts of the environment go together and which belong to different objects.
- The first systematic attempt to study perceptual segregation (and the perceptual organisation to which it gives rise) was made by the *Gestalt psychologists*.
- Gestalt psychologists were named as such because they claimed that the percept (the Gestalt, which is the German word for "figure") was more than the sum of the parts (as indeed we saw earlier: perception is more than the sum of sensations).
- The law of Prägnanz is the notion that the simplest possible organisation of the visual environment is what is perceived.

##### The Gestalt laws

![](media/image15.png)

- The law of proximity: the fact that three horizontal arrays of dots rather than vertical groups are seen in Figure 2.4(a) indicates that visual elements tend to be grouped together if they are close to each other.
- The law of similarity: elements will be grouped together perceptually if they are similar.
Vertical columns rather than horizontal rows are seen because the elements in the vertical column are the same whereas those in the horizontal rows are not.

- Law of continuation: elements are grouped so as to require the fewest changes or interruptions in straight or smoothly curving lines.
- Law of closure: missing parts of a figure are filled in to complete the figure. Thus, a circle is seen even though it is incomplete.

#### Figure-ground segregation

- Perceptual organisation results in **figure--ground segregation**.
- One part of the visual field is identified as the figure, whereas the rest is less important and forms the ground.
- The Gestaltists claimed that the figure is perceived as having distinct form or shape, whereas the **ground lacks form**. In addition, the figure is perceived in front of the ground, and the contour separating the figure from the ground belongs to the figure.

#### Findings

- Proximity and closeness were very powerful cues when deciding which contours belonged to which objects. In addition, the cue of good continuation made a positive contribution.
- Past experience with the figure has a big influence on figure--ground segregation.
- Figure--ground segregation occurs very *early* in visual processing and so **precedes object recognition.**
- The findings of Grill-Spector and Kanwisher (2005) suggest that the processes involved in figure--ground segregation and object recognition are the same.
- Figure--ground segregation seems to work in conjunction with the processes involved in object recognition, so that for familiar objects, object knowledge typically contributes to figure--ground segregation.

### Pattern recognition

- **Pattern recognition** refers to the identification of two-dimensional patterns.
- It is an essential step in object recognition and is achieved by matching the input to category information stored in memory.
- Pattern recognition involves matching the stimulus to a category of objects (recognising an object as "a" newspaper) rather than a single, specific object ("the" newspaper I read yesterday).
- A key issue in pattern recognition is *flexibility*. It is important that the **input need not fully match the stored category information.**
- In theory, pattern recognition can be based on two types of information.
- On the one hand, stimuli may be stored in memory as whole percepts (Gestalts); on the other hand, they may be stored as **lists of features**.
- The former are called template theories, the latter feature theories.
- In practice, both types of information are used to **store and access information about stimuli in memory**.

#### Template theories

- We have *templates* (forms or patterns stored in long-term memory) corresponding to each of the visual patterns we know.
- A pattern is recognised on the basis of which template provides the closest match to the stimulus input.
- A modest improvement to the basic template theory is to assume that the visual stimulus undergoes a normalisation process.
- This process produces an internal representation of the visual stimulus in a standard position (e.g., upright), size, and so on *before* the search for a matching template begins.
- Another way of improving template theory would be to assume that **there is more than one template for each stimulus** (e.g., a face in frontal view, in profile view, midway in-between, and so on).
- Template theories are ill-equipped to account for the flexibility shown by people when recognising patterns.
- The limitations of template theories are especially obvious when the stimulus belongs to a category that can take many different forms (e.g., letters written in different cases and fonts).
- On the other hand, template theory offers a good explanation for the fast recognition of **well-known stimuli.**
- One of the reasons why we become more efficient in processing familiar visual input may be that **we develop templates for this input** (Goldstone, 1998).

#### Feature theories

- Feature theorists might argue that the **key features of the capital letter "A" are two straight lines and a connected cross-bar.**
- This kind of theoretical approach has the advantage that visual stimuli varying greatly in size, orientation, and minor details can be identified as instances of the same pattern.
- Neisser (1964) compared the time taken to detect the letter "Z" when the distractor letters consisted of straight lines (e.g., W, V) or contained rounded features (e.g., O, G).
- Performance was faster in the latter condition because the **distractors shared fewer features with the target letter Z.**
- Most feature theories assume that pattern recognition involves local processing followed by more global or general processing to integrate information from the features.
- However, global processing can *precede* more specific processing: we often see the forest (global structure) before the trees (features) rather than the other way round.
- The level processed first depends on the ease with which the features can be discerned.
- Attention allocation (which part of the visual stimulus is attended to) is another factor influencing whether global processing precedes local processing.

##### Feature detectors

- Hubel and Wiesel studied cells in parts of the occipital cortex (at the back of the brain) associated with the early stages of visual processing.
- They discovered two types of neurons in primary visual cortex: simple cells and complex cells.
- **Simple cells** have **"on" and "off" regions**, with each region being **rectangular in shape.**
- These cells respond most to dark bars in a light field, light bars in a dark field, or straight edges between areas of light and dark.
- Any given simple cell only responds to stimuli of a particular orientation at a particular position in the visual field.
- **Complex cells** differ from simple cells in that **they respond to the presence of features independent of their position in the visual field and to combinations of features.**
- There are **many more complex cells than simple cells**, distributed over various layers of increasing complexity.
- At the end, the complex cells feed into cells that fire when the feature combination corresponds to a previously encountered part of a meaningful object.

### Top-down processes

#### The object superiority effect

- Stimulus features play an important role in pattern recognition. However, feature theories neglect the importance of context and expectations.
- According to feature theorists, the target line should *always* activate the same feature detectors.
- As a result, the coherence of the form in which it is embedded shouldn't affect detection.
- In fact, target detection was best when the **target line was part of a three-dimensional, meaningful form.** Weisstein and Harris called this the **object superiority effect**.
- In real life, features may belong to different objects with different interpretations, and the brain has to figure out which parts belong together and how to interpret them.
- Because the stimulus is degraded, perception is more effortful than usual but still reflects the processes normally taking place.
- Pattern recognition **doesn't depend solely on bottom-up processing** involving features or other aspects of visual stimuli. **Top-down processes** are also involved.
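The template vs feature distinction discussed above can be made concrete with a toy sketch. This is my own illustration, not a model from the reading: the tiny 3x3 letter grids and the feature labels are invented for demonstration purposes only.

```python
# Illustrative sketch (not from the text): two toy ways to recognise a letter,
# mirroring template theories vs feature theories.
# The 3x3 letter grids and feature labels below are invented for illustration.

TEMPLATES = {
    "T": ["###",
          ".#.",
          ".#."],
    "L": ["#..",
          "#..",
          "###"],
}

# Feature theory: each letter is stored as a list of features, not a whole image.
FEATURES = {
    "T": {"horizontal_bar", "vertical_bar"},
    "L": {"vertical_bar", "horizontal_bar_bottom"},
}

def template_match(stimulus):
    """Pick the stored template with the most matching cells (template theory)."""
    def overlap(a, b):
        return sum(ca == cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))
    return max(TEMPLATES, key=lambda letter: overlap(TEMPLATES[letter], stimulus))

def feature_match(stimulus_features):
    """Pick the letter sharing the most features (feature theory)."""
    return max(FEATURES, key=lambda letter: len(FEATURES[letter] & stimulus_features))

# A slightly degraded "T" (one cell flipped) is still matched correctly,
# hinting at the flexibility problem pure template theories face with
# stimuli that deviate from the stored form.
noisy_t = ["##.",
           ".#.",
           ".#."]
print(template_match(noisy_t))                            # T
print(feature_match({"horizontal_bar", "vertical_bar"}))  # T
```

Both toy recognisers tolerate some mismatch, which is the "flexibility" issue the text raises; a real account would also need normalisation (size, orientation) before matching.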
### Visual object recognition

- Perceptual organisation and pattern recognition lead to object recognition when a mental representation is activated strongly enough to be selected as the most likely interpretation of the stimulus.
- This can be illustrated with different types of stimuli, which are thought to involve both general and stimulus-specific processes.

#### Recognition-by-components theory

- According to Biederman, features are most important. Edges are extracted from the stimulus, which correspond to differences in surface characteristics such as **luminance, texture, or colour**.
- The edges provide a line-drawing description of the object, and they are combined into basic **shapes or components called geons** (geometric ions).
- Geons are blocks, cylinders, spheres, arcs, and wedges.
- There are 36 different geons.
- The richness of the object descriptions provided by geons stems from the different possible spatial relationships among them.
- For example, a cup can be described by an arc connected to the side of a cylinder.
- According to Biederman, geons form the building blocks on which object recognition is based, and geon-based information about common objects is stored in long-term memory.
- As a result, object recognition depends crucially on the **identification of geons**.
- The geons of an object can be **identified from different viewpoints**, making object recognition viewpoint-invariant (unless the viewpoint hides geons that are crucial for recognition).
- **Naming an object is faster when the object has been named before** rather than when it is named for the first time, a phenomenon known as **repetition priming**.
- We are most sensitive to those visual features of an object directly relevant to identifying its geons.
- What seems to matter is exposure to a great variety of naturally occurring objects in the world around us.
- According to Biederman (1987), the concavities (hollows) in an object's contour provide especially useful information.
- Object recognition was harder to achieve when parts of the contour providing information about concavities were omitted than when other parts of the contour were deleted.
- Recognition-by-components theory strongly emphasises **bottom-up processes in object recognition.**
- However, top-down processes depending on factors such as expectation and knowledge are also important, especially when object recognition is difficult.
- Recognition-by-components theory also ignores the contribution that object templates can make.
- Evidence for this comes from the finding that the overall shape of an object contributes to object recognition (Hollis et al., 2021).
- Morgenstern et al. (2021) showed that a connectionist model trained on silhouettes only (Figure 2.16) was very good at processing objects, even novel objects.
- Some objects (e.g., clouds) don't have a consistent pattern of identifiable geons.
- A final limitation of the recognition-by-components theory is that it **accounts only for fairly unsubtle perceptual discriminations**. It explains in part how we decide whether the animal in front of us is a dog

![](media/image22.png)

### Does viewpoint affect object recognition?

- Biederman (1987) claimed that object recognition is equally rapid and easy regardless of the angle from which an object is viewed, as long as the same number of geons are visible to the observer.
- In other words, he assumed that object recognition is viewpoint-invariant.
- However, other theorists (e.g., Friedman et al., 2005) argue that object recognition is generally **faster and easier when objects are seen from certain angles** (especially those with which we are most familiar).
- We make use of all available information in object recognition rather than confining ourselves to only some of the information.
- Object recognition **involves both viewpoint-dependent and viewpoint-invariant representations**.
- Viewpoint-invariant mechanisms are typically used when object recognition involves making easy discriminations.
- Viewpoint-dependent mechanisms are more important when the task requires difficult within-category discriminations or identifications.
- Object recognition is typically not influenced by the object's orientation when *categorisation* is required.
- However, object recognition is significantly slower when an object's orientation differs from its canonical or typical viewpoint when *identification* is required. Thus, it is viewer-centred with identification.
- Viewpoint-invariant representations follow from feature-list theories of pattern recognition, and viewpoint-dependent representations are more in line with template theories.
- The fact that evidence for both is found again suggests that both feature lists and templates are involved in object recognition.

#### Cognitive neuroscience

- The claim that object recognition involves both viewpoint-invariant and viewpoint-dependent aspects has received further support from research in cognitive neuroscience.
- Neurons vary in invariance or tolerance (Ison & Quiroga, 2008).
- Neurons responding almost equally strongly to a given object regardless of its orientation, size, and so on possess high invariance or tolerance.
- In contrast, neurons responding most strongly to an object in a specific orientation or size have low invariance. Viewpoint-invariant theories would expect to find many cells of the former type, whereas viewpoint-dependent theories would predict many cells of the latter type.
- Both types of neurons are prominent in the inferotemporal cortex, with more viewpoint-dependent cells in the posterior part of the inferotemporal cortex (close to the visual cortex) and more viewpoint-invariant cells in the anterior part.
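The invariance/tolerance idea above can be illustrated with a toy calculation. This is my own sketch, not a measure taken from Ison & Quiroga (2008): given a neuron's firing rates to one object seen at several orientations, one simple score of invariance is how flat the response profile is (minimum response divided by maximum response). The firing-rate numbers are hypothetical.

```python
# Illustrative sketch (invented here, not from Ison & Quiroga, 2008):
# score a neuron's viewpoint invariance as the ratio of its minimum to its
# maximum firing rate across orientations of the same object.
# 1.0 = perfectly invariant (same response at every view);
# near 0 = highly viewpoint-dependent (fires mainly for one view).

def invariance_score(rates):
    """rates: firing rates (spikes/s) to one object at different orientations."""
    return min(rates) / max(rates)

# Hypothetical neurons:
invariant_cell = [40, 38, 42, 39]   # responds similarly at all views
dependent_cell = [45, 6, 4, 5]      # responds mainly to one view

print(round(invariance_score(invariant_cell), 2))  # 0.9
print(round(invariance_score(dependent_cell), 2))  # 0.09
```

On this toy score, the anterior inferotemporal cells described in the text would cluster near 1.0, and the posterior, view-dependent cells near 0.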
### Disorders of object recognition

- Visual agnosia: a condition in which there are great problems in recognising visual objects even though visual sensations still reach the brain and the person still possesses much knowledge about the objects.
- 1\. *Apperceptive agnosia*: object recognition is impaired because of deficits in perceptual processing.
- 2\. *Associative agnosia*: perceptual processes are essentially intact, but there are **difficulties in accessing relevant knowledge about objects** from long-term memory on the basis of visual input.
- Apperceptive agnosia refers to the inability to go from sensation to perception (see p. 44), whereas **associative agnosia** is related to problems in **activating the meaning of the input even though the pattern has been recognised.**
- Apperceptive agnosia: great difficulties in shape discrimination (e.g., discriminating between rectangles and squares) and in copying drawings.

### Face recognition

- Face-recognition performance was above chance even with only one fixation, and was as good with two fixations as with three or unlimited fixations.
- The fixations are predominantly on the eyes and the nose.
- Most first fixations fall on the eyes and the nose, with more fixations on the eyes for European faces than for Asian faces.

### Face recognition by eyewitnesses

- Although face recognition is rapid, recognition of unfamiliar faces is difficult.
- Burton (2013) argued that face perception is so difficult because there is much variability in the appearance of a face.
- We are blind to this variability, because we easily recognise familiar people even under extremely degraded circumstances.
- According to Burton (2013), this difference in recognition performance is due to the fact that we have seen a familiar face many times under various circumstances.
- As a result, we have somehow stored the variability present in the appearance as part of our memory of that person's face.
- Because we do not have the same experience with a new face, we can only recognise the face when the viewing conditions are very similar to the ones we originally experienced.
- Therefore, it is very easy for us to recognise a photograph of a previously seen unfamiliar face when exactly the same photo is used at test.

### Face vs. object recognition

#### Holistic processing

- Holistic processing: processing that involves integrating information from an entire object (especially faces).
- Information about specific features of a face can be unreliable because different individuals share similar facial features (e.g., eye colour) or because an individual's features can change (e.g., skin shade; mouth shape).
- This makes it desirable for us to process faces holistically.
- In the **part-whole effect**, the memory of a part of a face is more accurate when the part is **presented within a complete face than on its own.**
- The second finding pointing to holistic processing of faces is the **composite face illusion**: the finding that the top half of a face **looks different when combined with bottom halves** of other faces.
- **Face inversion effect**: the final indication that faces are processed holistically is the observation that we find it much more difficult to recognise faces shown upside down, or even to notice that something is wrong with these faces.
- However, the inversion, part-whole, and composite effects reflect different perceptual mechanisms and are not assessing a single skill of holistic face processing.

### Face blindness: prosopagnosia

- A condition of severe impairment in face recognition with little or no impairment of object recognition; popularly known as "face blindness."
- Prosopagnosia can also be a developmental deficit.
Some 2% of the general population has very bad face recognition skills, referred to as *developmental prosopagnosia*.

- Prosopagnosics have lower object recognition skills than people with typical abilities.
- Students with developmental prosopagnosia on average performed worse on a car recognition task.
- Although some of the processes in face and object recognition are shared, there is plenty of evidence that specific processes are also involved, so that people can be much worse at face recognition than at object recognition, and vice versa.
- These findings suggest that partly different processes (and brain areas) underlie face and object recognition.

### Face recognition network in the brain

- Fusiform face area: an area in the fusiform gyrus that is associated with face processing; the term is somewhat misleading given that the area is also associated with the processing of other categories of visual objects.
- Face perception differs from many other forms of object processing, because we have far more expertise in recognising individual faces than individual members of other categories. Gauthier and Tarr called this the *expertise hypothesis*.
- According to Gauthier and Tarr (2002), the fusiform face area is NOT specific to face processing, but is used for processing *any* object category for which the observer possesses special knowledge.
- The fusiform face area is definitely involved in face processing and face recognition. However, the notion that the processing is *localised* in this area is incorrect.
- Face processing involves a network of brain areas including the fusiform area.
- In addition, the involvement of the fusiform area is not limited to face perception. It is also involved in the processing of other types of objects, in particular when participants have extensive experience with these objects.
- The activity is greater in the right hemisphere.
### Models of face recognition

#### The Duchaine and Nakayama model

![](media/image24.png)

- Initially, observers decide whether the stimulus they are looking at is a face (face detection).
- This is followed by processing of the face's structure (structural encoding), which is then matched to a memory representation (face memory).
- The structural encoding of the face can also be used for recognition of facial expression and gender discrimination.
- First, the initial stage of processing involves deciding whether the stimulus at which we are looking is a face (face detection).
- Second, ***separate* processing routes** are assumed to be involved in the processing of facial identity (who is the person?) and facial expression (what is he/she feeling?).
- As a result, some individuals should show good performance on facial identity but poor performance on identifying facial expression, whereas others should show the opposite pattern.
- Third, it is assumed that we retrieve personal information about a person *before* recalling their name. The person's name can *only* be recalled provided that some other information about him/her has already been recalled.
- Brédart et al. (2005), however, reported evidence against the idea that name retrieval *always* occurs after activation of person information. They found that members of a Cognitive Science Department could name the faces of their close colleagues faster than they could retrieve personal information about them. This was because the participants had been exposed much more often to the names of their colleagues than to other personal information.

#### The face-space model

- Memories for faces can be thought of as places in a multi-dimensional space.
- Each dimension represents a feature of a face; for example, the distance between the eyes, the face's length-to-width ratio, the position of the eyes, the length of the nose, the position of the mouth, and so on.
- For each dimension, many faces have values in the middle (typical) range.
- This part of the face-space is densely populated. Some faces, however, have extreme values on one or more dimensions.
- They are in an area of the space where few other faces are to be found (because few faces have such extreme characteristics).
- It is more difficult to memorise typical faces with average values on the dimensions than faces with extreme values on one or more dimensions.
- Faces are easier to recognise when their deviations from the mean are exaggerated.
- These are so-called caricature faces. In contrast, faces are more difficult to recognise if the features are shifted toward the mean value.

### Deep learning

- Ideas about face recognition (and visual perception in general) are strongly influenced by dramatic changes in artificial intelligence.

### Super-recognisers

- Some individuals have extremely poor face-recognition skills.
- There is also evidence of individuals with exceptional face-recognition abilities.
- Genetic factors likely help explain the existence of super-recognisers.
- The face-recognition performance of identical twins was much more similar than that of fraternal twins.
- Extroverts are better at recognising faces than introverts. According to Lander and Poyarekar (2015), this is because extroverts are more interested in people and, therefore, have more practice at recognising faces. So, both genetic factors and practice mean that some people are better at recognising faces than others.

![](media/image26.png)

### Perception and action

- How has the human species survived given that our visual perceptual processes are apparently so prone to error?
- Part of the answer is that most visual illusions involve artificial figures, probing extremes where the usual perceptual processes no longer work properly.

### Two visual systems: perception and action

- According to Milner and Goodale (2008), we have *two* visual systems.
- There is a vision-for-perception system used to identify objects (e.g., to decide whether we are confronted by a cat or a buffalo).
  - It is the system used when we look at visual illusions.
- There is a vision-for-action system used for visually guided action.
  - It provides accurate information about our position with respect to objects.
  - It is the system we generally use when avoiding a speeding car or grasping an object.
- There is also a "where" or "how" pathway (the dorsal pathway) going to the parietal cortex (Figure 2.31) corresponding to the vision-for-action system. Note, however, that these two pathways aren't separated neatly and tidily, and there is considerable interchange of information between them (Rossetti et al., 2017).

### Findings

- The mean illusion effect was greater with the vision-for-perception system than with the vision-for-action system.
- The findings of Skervin et al. (2021) are consistent with the *planning-control model*.
- According to this model, perception and action are not completely separate, as assumed by the two-system theory.
- In the planning phase, actions are influenced by visual illusions. In the execution phase, however, kinaesthetic feedback mechanisms correct the initial miscalculation.
- The correction can occur during the actual movement (e.g., when the opening of a grasping movement is corrected for an initial overestimation of the size of the object) or just after the movement (as in the stair-climbing study).

### In sight but out of mind

#### Inattentional blindness

- An **unexpected object** (i.e., the gorilla) **attracts more attention** and is more likely to be detected when it is similar to the task-relevant stimuli; when it is different from the task-relevant stimuli, it tends to be ignored.
- As before, the gorilla's presence was detected by only 42% of observers when the attended team was the one dressed in white.
- However, the gorilla's presence was detected by 83% of observers when the attended team was dressed in black.
- The observation that we can be **blind to something major happening** before our eyes because it is not in the centre of our attention is called **inattentional blindness**.

#### Change blindness

- The failure to detect that an object has moved, changed, or disappeared (e.g., one stranger being replaced with a different one) is called **change blindness**.
- Levin et al. used the term **change blindness blindness** to describe our wildly optimistic beliefs about our ability to detect visual changes.
- Galpin et al. (2009) found that drivers viewing a complex driving scene often exhibited change blindness to items that *seemed* relatively unimportant to them.

#### When is change blindness found

- Observers are much more likely to detect a change when told in advance to expect one (intentional approach).
- Beck et al. (2007) found that observers detected visual changes 90% of the time using the intentional approach but only 40% using the incidental approach.
- Our long-term memory for complex scenes can be much less impressive than we believe to be the case.
- Changes were much more likely to be detected when the changed object had received attention (been fixated) before the change occurred.
- Second, **change detection was much better when there was a change in the type of object rather than merely swapping one member of a category for another** (token change).

#### What causes change blindness

- Change blindness (and its opposite, change detection) depends on attentional processes.
- We detect changes when we are attending to an object that changes, and we show change blindness when not attending to that object.
- Observers are more likely to detect changes in an object when it has been fixated prior to the change.
- Jensen et al. (2011) enumerated the processes required for successful performance in change blindness tasks. There are *five* of them:
1. Attention must be paid to the change location.
2.
The pre-change visual stimulus at the change location must be encoded into memory.
3. The post-change visual stimulus at the change location must be encoded into memory.
4. The pre- and post-change representations must be compared.
5. The discrepancy between the pre- and post-change representations must be recognised at the conscious level.

- Change blindness occurs because we sacrifice perceptual accuracy to some extent, so that we can have continuous, stable perception of our visual environment.

![](media/image29.png)

### Does perception require conscious awareness

- Unconscious perception or **subliminal perception: perceptual processing occurring below the level of conscious awareness that can nevertheless influence behaviour.**
- The case for subliminal perception apparently received support from the notorious "research" carried out in 1957 by James Vicary, who was a struggling market researcher. He claimed to have flashed the words HUNGRY? EAT POPCORN and DRINK COCA-COLA for 1/300th of a second (well below the threshold of conscious awareness) numerous times during showings of a movie called *Picnic* at a cinema in Fort Lee, New Jersey. Vicary claimed there was an increase of 18% in the cinema sales of Coca-Cola and a 58% increase in popcorn sales.
- It has since become clear that unconscious perception exists, but does not have the strong effects Vicary attributed to it.

### Empirical evidence of unconscious processing

- To investigate unconscious visual perception in people with intact vision, we can present stimuli very briefly.
- The procedure works best if the stimulus is immediately followed by another stimulus, called a *mask*.
- The mask overwrites the stimulus and interrupts its processing.

#### Brief stimulus presentation

#### Eye movements

#### Neuropsychological findings

- There is evidence that some patients with prosopagnosia can **identify faces without consciously recognising the faces.** This evidence comes from the *galvanic skin response*.
- When a person becomes aroused by activation of the sympathetic nervous system, this can be measured at the level of the skin. It forms the basis of the lie detector, which measures the person's arousal when confronted with a (threatening) question.
- **Blindsight:** an apparently paradoxical condition, often produced by brain damage to the early visual cortex, in which there is behavioural evidence of visual perception in the absence of conscious awareness.

#### Issues

- There is good evidence that human perception requires little consciousness. However, is it really unconscious, or is it degraded conscious perception?
- Objective thresholds are also less convincing than is claimed by proponents of unconscious perception.
- Researchers who are not interested in finding an effect tend to test more sloppily and may transfer their low expectations to the participants.
- Blindsight patients differ in the degree to which they perceive in the blind parts of their visual field. Is the vision truly absent, or degraded to such an extent that useful perception is no longer possible while some gross perception remains?

#### Evaluation

- Unconscious perception is very similar but inferior to conscious perception.
- Unconscious perception is limited to the processing of simple stimuli and the activation of directly related responses.
- Perception operates largely outside of consciousness, informing us about the environment.
- It is to our advantage if our brain automatically processes the incoming sensory input and provides our working memory with meaningful end products.
- According to this view, human information processing involves numerous specialised unconscious processors working in parallel. These processors are distributed across brain regions, with each processor performing specialised functions (e.g., colour processing, motion processing). This processing is very similar regardless of whether a stimulus is consciously perceived or not.
- A combination of bottom-up processing and top-down control procedures produces synchronised activity in large parts of the brain.
- Information in consciousness is globally available to all special processors, which can make use of it to optimise their functioning, just as all support teams in a theatre must have access to what is happening on stage to synchronise their activities.
- Unconscious processes are there to help conscious thought.

Week 3
======

Lecture
-------

Lecture objectives:
-------------------

- what attention is and how it is different from perception
- the difference between goal-directed and stimulus-driven attentional systems
- discoveries about what characterises auditory and visual selective attention

Gain an appreciation for:
-------------------------

- how multitasking has detrimental impacts on various activities, such as driving
- the ubiquity of visual search in our daily lives
- the limited amount of attention we each have, and a deeper appreciation for why it is important to allocate it carefully

**Language Experiment**

- Congruent conditions (small H in big H) should be **faster than incongruent conditions** (small H in big S) -> supported
- Incongruent condition when identifying local letters should be much slower due to greater influence from the global letter (big H) -> not supported
- Navon reported that in the global condition there is little to no influence of the local letter (small S in big H)

**Significance of Navon's study**

- Showed the dominance of global processing
- Provided
important counter-evidence to the claim that perception is necessarily piecemeal and built up from features/components
- The class results suggest that the influence of global/local (big/small) information can be flexibly adapted

**What is attention**

- Attention (William James, 1890)
- Basically, attention is what you are focusing on in your mind: it is the "focalisation" of consciousness

**Types of attention**

1. **Top-down**, active: attention is controlled by the **individual** based on their **goals or expectations** (internal)
   - E.g. listening in lecture, watching the traffic light as you drive
2. **Bottom-up**, passive: attention is controlled by **external stimuli** (external, environmental factors)
   - E.g. a sudden noise to your side

**Two attentional systems in the brain**

- Goal-directed system (top-down)
  - **Influenced by expectations, knowledge, and current goals**: what you are trying to do at a given moment
  - Also known as **endogenous** attention control
- Stimulus-driven system (bottom-up)
  - Invoked when an **unexpected and potentially important stimulus** appears
  - Acts like a "circuit-breaker" to redirect your attention to the stimulus because it is surprising

**Auditory selective attention**

- Early studies of dichotic listening and shadowing (Cherry) showed that listeners were poor at reporting information from the second, unattended ear
- Listeners used physical information (e.g.
gender of speaker, voice features) as a cue to maintain attention to the attended message
- **Listeners were especially poor** when the **messages presented to both ears were from the same speaker**
- Dichotic listening: two different messages are played, one to each ear, and the listener shadows (repeats aloud) one of them
  - People are not great at this task
  - They can **repeat** the words presented to the ear they are **attending** to
  - From the ear they are not attending to, they are **unable to report the words**

**Where is the bottleneck?**

- Different theories about where selection occurs:
  - Early -- **Broadbent's** early selection theory
    - Unattended input is briefly stored in a **sensory buffer**
    - Input is quickly lost **unless attended to quickly**
  - Flexible -- Treisman's **attenuation theory**
    - Processing of input **begins with physical/acoustic properties** and **continues to its meaning**
    - How far processing proceeds depends on the availability of **processing capacity**
    - Has the most support from the overall literature
    - Whenever you devote attention to anything, it is an effortful process
  - Late -- Deutsch and Deutsch's late selection theory
    - We **fully analyse all stimuli** that reach us
    - The **input that is most relevant to the task is reported**

**What factors help us pay attention to an auditory message**

- Bottom-up and top-down systems interact

**Bottom-up factors**

- **Temporal coherence**: track the similarity of the auditory signal over time (distinctive features)
- Location of the auditory signal
- Enhance the attended message and suppress the unattended message

**Top-down factors**

- Familiarity with the speaker
- Expectations about the **meaning of the message** (sentences are easier than random words)
- Integrate other sources of information to **help maintain and track the message**

**Posner cueing task (1980) demo**

- Different cross-to-target presentation times are used to prevent an expectancy from forming among participants
- Neutral trials are neutral because you cannot use the cue to predict where the target is going to be

**Prediction from Posner's cueing task**

- Reaction time (RT): RT valid (fastest) < RT neutral < RT invalid (slowest)
- The cueing benefit also spreads within a cued object -> evidence of object-based attention

![A graph of different sizes and a different object Description automatically generated with medium confidence](media/image35.png)

**What happens to the unattended stimuli**

- Unattended stimuli receive some processing
- We can be distracted by stimuli because, evolutionarily, we have **evolved to be vigilant to threats in our environment**
- Our bottom-up attention **gets directed to sudden and salient stimuli** beyond our top-down control
- How effective the distraction is depends on the following factors:
  - Features of the stimulus itself: especially **salient and distracting** stimuli
    - E.g.
handphone
  - Situational factors: **task load and relevance**
  - Individual differences in distractibility and personality
- We can also be distracted by internal stimuli
  - E.g. our own thoughts

**Multitasking: real-world implications**

- Defined **as doing 2 or more things at the same time**, so that attention is divided among tasks
- E.g. distracted driving, texting while walking

**Costs of multitasking**

- You are not necessarily more efficient when multitasking (even if it seems like it)
- Attentional resources are divided, leading to poorer performance
- Even though practice can improve dual-task performance, there is almost always some evidence **of interference effects** (Maquestiaux & Ruthruff, 2021)

**Guided search model (Wolfe, 2021)**

- Attention plays an important role in 2 ways:
  - Attention selects items to "**bind**" their features into recognisable objects
  - Attention "**guides**" the search to process scene information efficiently
- A spatial, dynamic "priority map" is updated as the search unfolds

**Why is the second red T harder to find**

- Salience: a single feature (e.g. colour) can be detected in parallel throughout the priority map
  - A "**pop-out**" effect
- **When 2 or more features** are needed (e.g. colour + letter identity), attention is needed to **bind them together** for search
  - Resource intensive; search has to be serial, not parallel

Reading
-------

- James (1890) distinguished between "active" and "passive" modes of attention.
  - Attention is active when controlled in a top-down way by the individual's goals or expectations.
  - It is passive when controlled in a bottom-up way by external stimuli.
- **Selective attention** (or focused attention) is studied by presenting people with two or more stimulus inputs at the same time and instructing them to respond to only one.
- **Divided attention** is studied by presenting at least two stimulus inputs at the same time, which participants must simultaneously pay attention to (and respond to).
### Selective auditory attention

#### Where is the bottleneck?

- A bottleneck in the processing system can seriously limit our ability to process two or more simultaneous inputs.
- At one extreme was Broadbent (1958). He argued there is a filter (bottleneck) *early* in processing that allows information from one input or message through it on the basis of its physical characteristics. The other input remains briefly in a sensory buffer and is rejected unless attended to rapidly.
- Treisman argued that the location of the bottleneck is more *flexible* than Broadbent had suggested.
  - Treisman's *attenuation theory* proposes that listeners start with processing based on physical cues, syllable pattern, and specific words, and move on to processes based on grammatical structure and meaning.
  - If there is insufficient processing capacity to permit full stimulus analysis, later processes are omitted.
- Deutsch and Deutsch's late selection theory: *all* stimuli are fully analysed, with the most important or relevant stimulus determining the response. This *late selection theory* places the bottleneck very close to the response end of the processing system.

#### Recent developments

- Corbetta and Shulman (2011) made a distinction between two attentional systems in the brain.
  - The first is a goal-directed attentional system **influenced by expectations, knowledge, and current intentions.**
    - This system makes use of **top-down processes.**
  - Second, there is a stimulus-driven attentional system which uses bottom-up information. This system takes effect when an unexpected and potentially important stimulus occurs (e.g., a noise to your left side).
- It has a **"circuit-breaking" function**, meaning that attention is redirected from its current focus.
- Subsequent research has shown that both systems closely interact with each other (Moore & Zirnsak, 2017).

#### Bottom-up processes

- Shamma et al. (2011) pointed out that the sound features of a given source typically show *temporal coherence*.
  - Auditory signals retain similarity over time, which can be used to distinguish the source from distractors.
  - If listeners can identify a distinctive feature of the target voice, they can follow it and pick up other accompanying audio features via temporal coherence.
  - It will be easier to distinguish the inputs of children's voices when listening to a man than when listening to another child.
- Location is a third important bottom-up signal, a finding confirmed in subsequent research (Lewald et al., 2018).
  - Task-irrelevant stimuli close in space to task stimuli are more distracting than those further away.
- Bottom-up selection seems to work by the *enhancement* of the attended message combined with *suppression* of the unattended message (Bronkhorst, 2015; Tóth et al., 2020).

#### Top-down processes

- These processes are possible because of the existence of extensive descending pathways from the auditory cortex to brain areas involved in early auditory processing (Robinson & McAlpine, 2009).
- Top-down factors depend on listeners' knowledge and/or expectations, and these have been shown to facilitate the segregation of speech messages.
  - For instance, it is easier to perceive a target message accurately if the words form sentences rather than consisting of random sequences of words (McDermott, 2009).
- Familiarity with the target voice is also important.
  - Accuracy of perceiving what one speaker is saying in the context of several other voices is higher if listeners have previously listened to the speaker's voice in isolation (McDermott, 2009).
- Listeners also use *visual* information to follow what a given speaker is saying.
  - Processing of the attended message was enhanced when participants could see a movie of the speaker talking while listening to the message. This probably occurred because the visual input made it easier to attend to the speaker's message.

### Selective visual attention

- After the initial studies on auditory attention, researchers became more interested in visual attention. Why is this?
  - One reason is that vision is our most important sense modality, with more of the cortex devoted to it than to any other sense.
  - Another reason is that humans make finer distinctions between locations in the visual field than in the auditory surroundings.
  - Finally, it is easier to control the presentation times of visual than of auditory stimuli.

#### Posner's paradigm

- People cannot move their eyes without first shifting their visual attention. Attention goes first, followed by the eyes (Deubel & Schneider, 1996). The fact that visual attention can shift without moving the eyes is called **covert attention**.
- Posner (1980) further discovered that there are two ways to redirect attention. One makes use of arrows, as described earlier. In this case, attention is driven top-down by the participant's intentions.
  - Posner called this *endogenous* attention control.
- Another way to shift attention is by presenting a salient stimulus in the periphery. The attention is then captured bottom-up by the new stimulus, a phenomenon Posner called *exogenous* attention control.

#### Spotlight, zoom lens, or split

- Posner's (1980) findings suggest that visual attention works like a spotlight. A spotlight illuminates a relatively small area and can be redirected to focus on any given object.
- Other psychologists, however, have compared visual attention to a zoom lens (e.g., Eriksen & St. James, 1986).
They argued that we can increase or decrease the area of focal attention at will, just as a zoom lens can be moved in or out to alter the visual area it covers.
- Awh and Pashler's (2000) findings show **split attention**, in which attention is directed to two regions of space not adjacent to each other. This suggests that attention can resemble a double spotlight.

#### What is selected

- The spotlight and the zoom-lens models imply that we selectively attend to an area or region of space. This is space-based attention.
- Alternatively, we may attend to a given object; this is object-based attention.
- Object-based attention seems likely since visual perception is mainly concerned with objects of interest to us.

![A screenshot of a book Description automatically generated](media/image38.png)

#### What happens to unattended stimuli

- Unattended visual stimuli are processed less thoroughly than attended ones.
- **Neglect:** a disorder of visual attention in which stimuli or parts of stimuli presented to the side opposite the brain damage are undetected and not responded to; the condition resembles **extinction** but is more severe.

#### Distraction effects

- That unattended stimuli receive some processing means they can distract us.
- This is the main reason why some salient stimuli can catch our attention bottom-up, irrespective of the top-down control we are exerting at that moment.
- What factors determine how distracted we are by task-irrelevant stimuli?
  - One factor is stimulus novelty.
    - Our attention is easily captured by new stimuli (Bendixen et al., 2010).
  - Another factor is stimulus valence.
    - We are especially likely to be distracted by stimuli associated with potential danger.
    - Positive stimuli are also more likely to attract attention.
    - Anderson et al. (2011) reported that stimuli previously associated with large rewards were more distracting than stimuli associated with small rewards.
  - A third factor determining the distraction power of unattended stimuli is how relevant they are to the current task goals.
    - Sometimes, this factor can even override the effects of salience and distinctiveness.
- The extent to which we are distracted by irrelevant stimuli also depends on the demands of the current task.
  - Tasks vary in terms of perceptual load: some tasks (high load) require nearly all our perceptual capacity whereas others (low load) do not.
  - Forster and Lavie predicted that people would be less susceptible to distraction while performing a high-load task than a low-load one -- a high-load task leaves little spare capacity for processing distracting stimuli.
- We can also be distracted by *internal* stimuli (e.g., task-irrelevant thoughts or "mind wandering").
  - There is evidence for two competing attention-capturing networks in our brain: one directed to the outside world (the dorsal attention network) and one directed to the inside world (the default mode network), involved in self-reflection, autobiographical thinking, and daydreaming (Spreng et al., 2013).
  - Participants have significantly more task-irrelevant thoughts while performing a low-load task than a high-load task.
- Individuals with anxious personalities are more distractible than those with non-anxious personalities.
  - Distractibility can be considered a personality trait, with some people being worse at maintaining attentional focus than others.
  - There is more evidence for individual differences in mind wandering (Welhaf et al., 2020; Meier, 2021), although the origins of these differences are not yet well understood (Robison et al., 2020).
- In sum, the degree to which attention is diverted to task-irrelevant stimuli depends on several factors.
  - Some stimuli are more distracting than others, stimuli are more distracting in some situations than in others (task relevance, task load), and some individuals are more easily distracted than others.
### Cross-modal effects

- In the real world, we very often encounter visual and auditory stimuli at the same time, or visual and tactile stimuli.
- One possibility is that attentional processes in each sensory modality (e.g., vision; hearing) operate *independently* of those in all other modalities. In fact, that is incorrect.
- We typically combine or **integrate information from different sense modalities at the same time** -- this is **cross-modal attention**.
- Visual selective attention is supported by a congruent sound and hindered by an incongruent sound, in line with what is predicted by cross-modal attention.

#### Ventriloquist illusion

- Ventriloquists try to speak without moving their lips while at the same time manipulating the mouth movements of a dummy.
  - It seems as if the dummy rather than the ventriloquist is speaking.
- Something similar happens at the movies. We look at the actors and actresses on the screen and see their lips moving. The sounds of their voices actually come from loudspeakers to the side of the screen, but we hear those voices coming from their mouths.
- For the ventriloquist illusion to occur:
  - First, the visual and auditory stimuli must occur close together in time.
  - Second, the sound must match *expectations* raised by the visual stimulus (e.g., a high-pitched sound apparently coming from a small object).
  - Third, the sources of the visual and auditory stimuli should be close together in space.
- *Why* does vision capture sound in the ventriloquist illusion?
  - The main reason is that the visual modality typically provides more precise information about spatial location.
  - However, when visual stimuli are severely blurred and poorly localised, sound captures vision (Alais & Burr, 2004).
  - Thus, we combine visual and auditory information effectively, attaching more weight to the more informative sense modality.
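The last point, weighting each modality by how informative it is, can be illustrated with an inverse-variance (reliability-weighted) averaging sketch in Python. The function name and the variance values below are invented for illustration; they are not data from Alais and Burr (2004):

```python
def combine_cues(loc_visual, var_visual, loc_audio, var_audio):
    """Combine two location estimates, weighting each cue by its
    reliability (the inverse of its variance). Illustrative sketch only."""
    w_visual = (1 / var_visual) / (1 / var_visual + 1 / var_audio)
    w_audio = 1 - w_visual
    return w_visual * loc_visual + w_audio * loc_audio

# Normal viewing: vision is far more precise, so the combined estimate
# sits near the visual location (vision "captures" sound: ventriloquism).
print(combine_cues(loc_visual=0.0, var_visual=1.0, loc_audio=10.0, var_audio=100.0))

# Severely blurred vision: the weighting reverses and sound captures vision.
print(combine_cues(loc_visual=0.0, var_visual=100.0, loc_audio=10.0, var_audio=1.0))
```

With precise vision the combined estimate lands close to the visual location; with blurred vision it lands close to the auditory location, matching the Alais and Burr (2004) pattern described above.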
##### Rubber-hand illusion

- In this illusion, the participant sees a rubber hand that appears to extend from his/her arm while the real hand is hidden from view. Then the rubber hand and the real hand are both stroked at the same time. Participants perceive that the rubber hand is their own hand.
- Some researchers argue that the rubber-hand illusion is primarily an example of **demand characteristics**: the desire of participants to please the experimenter by performing the behaviour they think is expected.

![A close-up of a text Description automatically generated](media/image40.png)

### Visual search

- Some visual searches are much easier than others. Usually, these are searches in which targets differ from distractors on a salient characteristic.
- A moving target is also conspicuous.
  - Observers indeed rapidly detected a moving target among stationary distractors. In contrast, it took much longer to detect a stationary target among moving distractors.

#### The guided search model

- The guided search model is a widely used model to understand and predict findings in visual search.
- According to the model, when we see a scene, we can only recognise a few objects around where we are looking. Attention is needed in two ways.
  - First, it is needed to select items so that their features can be "bound" into recognisable objects.
  - Second, it is needed to "guide" the search so that items in the scene are **processed in an intelligent order.**
- Guidance draws on five sources of information: (1) bottom-up and (2) top-down feature guidance, (3) prior history, (4) reward, and (5) scene information.
- Selective attention is guided to the most active location in the priority map approximately 20 times per second.
- To be identified as targets or rejected as distractors, items must be compared to target templates held in memory.
#### Bottom-up feature guidance

- Treisman and Gelade (1980) asked observers to detect a target in a visual display of between one and 30 items.
  - The target was defined by a single feature (e.g., a red letter **S** among blue letters) or by a combination of features (a red letter **S** among blue letter **S**s and red letter **T**s).
  - In the latter case, all non-target letters shared one feature with the target (either the colour red or the S shape) and the target was defined by a *conjunction* of the features (red + S).
- When the target was defined by a single feature, observers detected the target about as quickly regardless of the number of distractors.
- In contrast, observers found it much harder when the target was defined by a conjunction of features: search time increased with the number of items in the display.
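The contrast above between parallel "pop-out" search and serial conjunction search can be caricatured in a toy Python simulation. The function `simulated_search_rt` and its timing constants are invented for illustration and are not Treisman and Gelade's actual data:

```python
import random

def simulated_search_rt(n_items, conjunction, base=400, per_item=50):
    """Toy model of visual search reaction time (ms).

    Feature search: the target pops out, so RT is roughly flat
    regardless of display size. Conjunction search: attention must
    bind features item by item, so RT grows with display size.
    Constants are illustrative, not empirical values.
    """
    if not conjunction:
        return base  # parallel search: display size has no effect
    # Serial self-terminating search: on average about half the items
    # are inspected before the target is found.
    items_checked = random.randint(1, n_items)
    return base + per_item * items_checked

random.seed(0)
for n in (5, 15, 30):
    print(n, simulated_search_rt(n, conjunction=False),
          simulated_search_rt(n, conjunction=True))
```

Running this shows the signature pattern: flat RTs for single-feature (pop-out) search, and RTs that scale with the number of items when a conjunction of features must be bound by serial attention.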
