WK 2 Lectures Combined PDF Psychological Foundations of Mental Health
King's College London
Dr Charlotte Russell
Summary
These are lecture notes on cognitive processes and representations in psychology, focusing on perception. The notes explain the challenges of perception, including the interpretation of sensory information, dealing with incomplete information, and extracting important data from the environment. Examples relating to daily life and potential dangers are included.
Full Transcript
Module: Psychological Foundations of Mental Health
Week 2 Cognitive processes and representations
Topic 1 Perception - Part 1 of 2
Dr Charlotte Russell, Senior Lecturer, Department of Psychology, King's College London

Lecture transcript

Slide 3
Let's start by looking at what the challenges are for perception. The crucial point is that the incoming sensory information from our eyes, ears, and touch receptors must be interpreted. It's not possible to passively absorb all of this information and thereby perceive it. Why must it be interpreted? First, the incoming information arriving at our senses does not contain all the information that we need in order to perceive accurately. For example, a cup partly behind a jug is still a cup, and we immediately interpret it as such rather than thinking, look, there's a part of a cup with no handle behind that jug. Secondly, although information is often incomplete, there's a huge amount of it, and we don't need all of it to function well or achieve our goals. For example, when we sit down in our clothes on a chair, the touch receptors all over our body, wherever our clothes or the chair are in contact with us, are constantly being activated, and this information is going into our brains. However, it doesn't make sense for us to continuously perceive all of this sensory information. Thirdly, from this incomplete and yet overwhelming input, we must extract what's important to us. Some of this information is extremely important indeed, and this could be for simple goal-directed reasons like wanting a cup of tea and looking for a cup in the earlier example. But a crucial task is also to consciously perceive, as soon as we can, what might be dangerous to us.
For example, we must become aware if something is becoming too hot, for example the chair you're sitting on, or whether a smell has entered the environment and could be toxic, or perhaps whether a loud bang nearby is part of ongoing background work or something that's imminently dangerous. You can get a sense yourself of the fact that our brain is constructing and interpreting our sensory input into this conscious perception. If you think of a time when fear or worry could have caused you to perceive something incorrectly, think about waking up from a nightmare in a dark bedroom. You might perceive a mound of clothes as a scary figure, or a squirrel in a bush might sound like a much larger entity if you're rushing through a park late at night. So our brain must select the input of greatest importance and allow us to consciously perceive this as accurately as possible in spite of the poor quality of information in our environment and the vast amount of unnecessary and irrelevant stimulation.

Transcripts by 3Playmedia. © King's College London 2017

Slide 4
All our senses are important to us, and they all provide unique information that enables us to function well and enjoy our lives. However, I've chosen to focus on vision for this topic. There are a number of really important reasons for this. Firstly, a large amount of the brain is concerned with vision. In fact, it's the only sense to have an entire lobe of the brain dedicated to it, known as the occipital lobe or occipital cortex, and I'll talk about this more in a moment. Secondly, seeing is actually a really difficult task for our brains, and studying vision can give us great insight into the fact that perception is not passive but involves the construction and interpretation that I was talking about a moment ago. Thirdly, interacting with our environment is crucial to us as human beings.
And arguably, vision is the sense that provides us with the best source of information to enable this interaction. For example, it provides us with excellent information about what is out there. Is there a person smiling at us as we walk into the room? What do they look like? Is that the friend that we've come to meet? But it also tells us where these important things are. Is the person that we're looking at to the right or the left of a waiter carrying a huge tray of drinks that we should avoid? And vision, using this what and where information, also provides us with evidence to enable us to prepare for action. This could be simply walking towards our friend, waving back at them, or maybe running away if we've smiled at the wrong person.

Slide 5
Before we move on to the neural pathways that are involved in vision, let's have a look at some common misconceptions about how visual perception works. Often, similar misconceptions are held about the other senses too, and so it's useful for you to think about those senses as we discuss vision. Firstly, we often hear people describe vision as taking place as if the visual world is projected into our eyes and brains like in a cinema. This and similar descriptions suggest that casual observers might believe visual perception to be an automatic or effortless process. Leading on from this misconception is the suggestion that our eyes send an exact copy of what's in front of us at any moment directly to our brains. For example, if you're looking at a chair, the idea would be that it arrives in your occipital cortex as an intact mental image of a chair. Thirdly, we experience, or feel that we experience, a rich and continuous visual environment in which we are fully perceiving everything around us at any moment. But all of these misconceptions are false. Vision must interpret the visual input and construct a representation.
And much of how we think that we're seeing the world is actually an illusion, and this will become clear by examining visual processing within the brain.

Slide 6
Understanding visual perception means understanding the route of visual processing from the eyes to and through the brain. Learning about visual cortex, which as I've mentioned is called the occipital lobe or occipital cortex, will demonstrate to you the complexity of vision and reveal how it's not simply a passive projection of the outside world into the brain. First, let's look at these figures of the visual field, eyes, and the main tracts for the passage of information from the retina to the brain. In the part of the figure labelled visual field, the lighter colours on each side are where information is only going into one eye, the eye on that side of space. The darker shaded areas moving more centrally show where the visual information is entering both eyes. This means that only the middle section of the visual field is perceived binocularly, that is, in 3D. The coloured sections you can see in the back of the eyes represent the retinas, and these are where the rods and cones, the light-receptor cells that we need in order to see, are found. Rods work when we have very little light available in the visual field. They're not sensitive to colour information, and so vision in the dark is black and white. The cones, which are sensitive to colour information, work only in well-lit conditions and enable colour vision. They are found only on the part of the retina labelled fovea in the figure, right at the back of the eye in the middle. If you look at the central arrows, you'll see that directly fixating an item, so that it's central to the visual field, means that this information is what falls directly onto the fovea.
One of the implications of these first two points, that binocular vision is only possible for the middle part of the visual field and that colour vision only occurs for the items that we're directly fixating, is that our mental representation of seeing all the world around us in rich, three-dimensional colour and detail is not, in fact, the case. Our brains fill in this information for us. More about this filling in will come later. After the retina and fovea, the visual input leaves both eyes via the optic nerve. And you can see in the figure that these two sources of input then meet at the optic chiasm. At this point, the information from each side of the visual field, the left and the right, is brought together as separate streams and directed to opposite sides of the brain. Information from the left visual field goes to the right hemisphere, and from the right visual field it goes to the left hemisphere. Look at the colour-coded streams from each side of the visual field in the figure. The separate sources of information then go to a subcortical area of the brain called the thalamus. The thalamus is analogous to a hub for sensory information entering the brain, relaying incoming input to the relevant parts of the cortex for more detailed processing. The part of the thalamus specifically concerned with visual information is known as the lateral geniculate nucleus, or LGN. The LGN directs most of this visual information via the optic radiations that you can see in the figure to an area of the occipital lobes known as primary visual cortex, or V1, or sometimes striate cortex. These three terms all refer to the same thing.

Slide 7
Two important principles of vision are useful to understand in order to help the organisation and functions of the occipital lobes make sense to you. The first is that vision is organised hierarchically.
The brain starts by processing the simplest properties of the visual input, and then more complex properties are added in as neural processing continues. That is, dots and lines are extracted first from the input. Edges are then added, then the dots, lines, and edges are formed into objects. The movement of the objects is then added, and so forth. So this makes it clear there's no representation of, for example, a chair or any such thing arriving in V1 from the visual input in front of you. Second, vision is modular: that means specific parts, or modules, in visual cortex deal with specific types of visual information. V1 deals with the fundamental elements of our visual input, for example, the lines making up everything falling within our current fixation. If V1 is damaged, we become cortically blind, or if part of it is damaged, we're blind to input from the relevant part of the visual field. V1 is crucial for the most fundamental extraction of visual input from the incoming information, and without this extraction, no further analysis of the information is possible; hence the blindness. Further examples of specific modules are V4, which deals with colour information, and so damage to V4 in both hemispheres leads to loss of colour vision. But as V1 and so on are intact, you would have preserved vision for the forms of objects and their motion. V5 is a different module within visual cortex, and it deals with motion. So damage here leads to motion blindness, but all other aspects of the visual scene are processed accurately. If you look at the figure displayed here, you can see the labels of V1, primary visual cortex, and onwards. And I use the word onwards as you might be able to tell from what I've said so far that once visual information arrives at V1, it then makes its way forward in the brain through the other occipital regions. And I'll go into some detail of these paths in the next slides.
Slide 8
Looking at the image on the slide, you can see two arrows moving forwards through the brain from the blue shaded area of early visual cortex, or V1. The top green arrow here is moving into parietal cortex, and the stream represented by this arrow is known as the dorsal stream. The bottom pink arrow is known as the ventral stream. The term dorsal refers to locations nearer the top of the brain, and the term ventral to locations nearer the bottom. The dorsal stream of visual processing is often called the where stream, and the ventral stream is often called the what stream, and I'll now describe why that is.

Slide 9
The ventral stream, as you can see in the figure, runs from V1 to the temporal cortex. The type of information processed in the ventral stream concerns what the visual elements are, starting with the construction of simple forms, then shapes, and finally whole objects. This includes some dedicated processing of category-specific information, for example, faces. Also, V4, which I mentioned earlier, is part of the ventral stream, dealing with colour recognition. The function of these areas in visual experience has been demonstrated very clearly in neuropsychological studies of patients suffering from visual agnosia. This is a disorder in which, following damage to parts of the ventral stream, patients become unable to recognise objects, or even, when the damage is more severe and affects areas nearer to primary visual cortex, simple shapes. These patients can see, and they know that something is being presented to them, and they can recognise simple features like lines or corners, yet they remain entirely unable to visually recognise what something is. The specificity of the problem to vision is made very clear if the patients are allowed to explore the item with their hands: they can often identify it immediately.
It's not that fundamental vision has been lost, and it's not that knowledge of what the objects are has been lost, as might occur in some types of dementia. It's the combination of basic visual features with their object-related characteristics that is impaired.

Slide 10
The dorsal visual stream, running from primary visual cortex to parietal cortex, is primarily concerned with spatial information. That is, where things are in the world around you, where they are relative to you, and where they are in relation to one another. It's also crucial for 3D vision, and V5, the motion area, is also part of this stream. Neuroimaging studies dissociating the two streams, for example, Haxby in 2009, have shown activations in the dorsal stream for location judgments and in the ventral stream for object recognition decisions. Our ability to spatially represent the outside world is linked with attention, which I'll be talking about in detail in the next topic. And the primary attention area in the brain is, indeed, parietal cortex, i.e. it's part of the dorsal visual pathway. The disorders that follow damage to the dorsal stream cause striking spatial representational deficits, primarily affecting the side of space opposite the brain injury, and I'll talk about this in the next section.

Module: Psychological Foundations of Mental Health
Week 2 Cognitive processes and representations
Topic 1 Perception - Part 2 of 2
Dr Charlotte Russell, Senior Lecturer, Department of Psychology, King's College London

Lecture transcript

Slide 2
So, constructing our visual world: we've seen that we don't have an exact copy of the visual world sent directly to our visual cortex. And we know that the input is processed serially, with basic dots being extracted first, followed by lines, then edges, and finally, objects.
We've also seen that colour information is only processed for items falling onto the fovea at the back of the eye, and that 3D information is only gained from the perifoveal part of the visual field. This means that, neurally, we are not processing the world around us in the rich and detailed way in which we believe we are experiencing it. So what is an important mechanism for enabling us to experience the world in the way we believe? Simply, the fact that we constantly make eye movements, known as saccades, means that we make successive fixations across the visual field. The images that we process are then integrated automatically and very cleverly by our brains. Have a look at the image on the slide here. It's a representation of what happens when we make a saccade. An eye movement causes a huge movement of everything in your visual field. And yet these movements are compensated for by dedicated mechanisms in parietal cortex, which results in the illusory perception of a stable and detailed visual world.

Slide 3
The phenomenon of change blindness, first shown by Rensink in the paper I've given you as a reference, shows very well that our illusion of seeing the world around us in rich detail is not true, and that we're actually very inaccurate at perceiving things that are right in front of us. So we don't process very well, as I've already talked about, items that we're not directly fixating, even though our impression is that we survey a scene as if perceiving the whole thing. So this phenomenon, which is also related to the next topic of attention, shows you, as you look for the change in the image, that it's rather hard to find. So can you see what's changing? Hopefully, it will take you a long time to detect these large changes. And it's showing you that the moment-to-moment representation of the visual field that we possess is not very detailed. Otherwise, it wouldn't be an effortful search to look for this change.
What Rensink did was interleave the original scene and the changed scene with a blank grey, which causes a flicker. And this flicker means that there's motion across the whole image as it's flashing up. And this masks the transient motion that's associated with the change directly.

Transcripts by 3Playmedia. © King's College London 2018

When you've spotted the item that's changing, it becomes trivially easy to see it. In fact, it appears as if it's flashing on and off. And this is because our attention has now been captured by the change, which I'll talk about more in the next topic.

Slide 5
To further convince you of the power of our minds to create the illusion of a rich and continuous visual scene, let's just consider, for a moment, the image on the slide. What I want you to focus on is the area labelled "blind spot." This is where the optic nerve fibres take the visual input from the retina to the brain. We have no rods or cones here at all, and therefore no visual input is processed here. So we're blind for whatever part of the visual field corresponds to the blind spot during that particular fixation. However, we've got no perception of constantly missing a small part of the visual field. This is filled in, in an illusory way, by the brain, which gives us the mental representation of a continuous perception across the whole field.

Slide 6
If you're interested in finding your own blind spot, have a look at the blind spot exercise following this lecture. The neural processes that enable us to fill in our blind spot are linked to the perception of some visual illusions. We'll have a look now at some striking illusions. And experiencing them is an excellent way to gain some insight into these constructive and interpretive aspects of how we create a mental representation of the world around us. First, at the start of this topic, I outlined why perception is an effortful and intricate process.
And this included the fact that the input from the world contains insufficient information for complete perception. Without there being some interpretation, based on the rules of perception understood by the visual system, we wouldn't be able to perceive properly. Look at the image here of the elephant. Do you see a whole elephant? I'm sure that you do, and I'm sure that you don't perceive it as being half an elephant with the back section missing. But of course, there's no visual information there that's completing the back of the elephant. What your visual system is doing is being adept at computing certain expectations that we have. One of these: if there's an occluding item, it's likely to be masking part of the image. Look at this cube here. Do you perceive the cube? I hope you do, but there are no lines that make up this image. It doesn't exist. It's completed by your brain in response to the arrangement of the white sections on the circles. You automatically compute that these white sections must be occluded by something, and so fill in the cube. And in the final example of there being insufficient information, look at these shapes. Do they make any sense at all? Without changing anything about them, but simply adding in occluding elements, the brain works out how to make sense of the image, and we can see Bs covered by inkblots. All of these are examples of how the visual system can adapt to insufficient information in our visual input. But it compensates so effectively that it can overcompensate, which results in us seeing illusory images, like the 3D cube, which aren't actually there.

Slide 8
Look at this image here. What did you see? The chances are that you would say something like, it was a seaside view, or an image of a pier. And you'll have the impression of having seen most of what was there and have gained the gist of the scene. But if I asked you what the colour of the chairs was, or how many people were there, you'd be very unlikely to know.
This is simply to give you an insight into the limits of how little we can process at any one time. This is in vision, but it's equally true in the other senses too. The visual input contains overwhelming information, and so we're flooded with sensory stimulation, and we need a mechanism to select the parts of this input that are important to us. And this selection mechanism is attention, which I'll talk about in the next topic.

Slide 9
In addition to being, at times, insufficient and overwhelming, the visual input is often ambiguous, and so interpretation by our visual system is needed in order for us to understand what's in front of us. Have a look at these two famous examples of ambiguous stimuli. You can see here two possible percepts for each one: a rabbit or a duck on the left, and two faces or a vase on the right. Notice that only one possible interpretation is seen consciously at one time, and you might be able to switch between the two percepts. The way in which our brain computes ambiguous stimuli means that you can't simultaneously perceive both possibilities. And this is a very useful feature of our system. We need our visual system to give us a clear interpretation of what we're seeing, and not the two possible choices at the same time.

Slide 10
Finally, as the input is often ambiguous, our visual system automatically uses context to interpret the visual scene. Fascinatingly, this context is used even when we don't want it to be, or when it's not useful. If you look at the two figures in the tunnel, these are exactly the same size. But even after you're told that, we still perceive one as being larger, because we're still using the context, and we can't take the figures out of that context. If you look at the flower-shaped dot arrangements, the orange circles in the middle of both of these are exactly the same size.
But we automatically use the context of the surrounding dots and compare the central circles to the elements that surround them. And this means that the one on the left looks much smaller, and knowing that they're the same size doesn't actually change that.

Module: Psychological Foundations of Mental Health
Week 2 Cognitive processes and representations
Topic 2 Attention - Part 1 of 2
Dr Charlotte Russell, Senior Lecturer, Department of Psychology, King's College London

Lecture transcript

Slide 3
Remember, attention is the mechanism that we use to select, for further in-depth neural processing, the items that are of most interest to us. This selection could be from the sensory input, as I've already discussed, and it takes place across all types of sensory information, for example auditory and somatosensory, as well as visual. However, if you look here at the quotation from William James, you've heard about him in the introductory topics last week: his principles of psychology relied on introspection rather than experimentation. But nevertheless, he characterised the properties of attention extremely well. Have a brief read of the quotation here. The mention of trains of thought is very interesting and insightful. Attention does not only select external sensory items of interest to receive further processing, but also internal thoughts and memories.

Slide 4
Next week, you'll learn much more about pathological alterations in attentional selection during some mental health conditions. But let's think about them for a moment, as they're clinically important, and they'll also enable you to gain insight into how attention shapes conscious processing. When you look at the image on the slide, what do you see? Do you see just lots of people smiling? Or do you notice quite quickly the unhappy-looking man in the top right?
There is evidence that we're all biased to pay attention to faces rather than neutral items, which makes sense for us as social beings. But there's also evidence that our attention is preferentially captured by emotional faces, both positively and negatively emotional, when they're competing with neutral faces. However, even taking into account this bias towards emotional faces in the general population, sufferers from some types of mental illness, maybe anxiety or depression, are perhaps more likely to draw their attention towards negative faces. This research on the allocation of attention to faces, and perhaps a pathological orientation towards negative faces in anxiety or depression, demonstrates an important feature of attention. Items that we select by attention, we're aware of. That is, we're conscious of them. And we are not aware or conscious of the items in our visual field that we've not selected by attention. By our being aware, that is conscious, of attended items, they are able to affect our behaviours, our decisions, and our emotions in a way in which unattended items cannot. If you attend more readily towards negative stimuli, your conscious environment is more negatively valent than that of those without those biases. And it's easy to see why this would have negative repercussions in daily life.

Slide 5
So we need attention to select items for us to process in greater detail. What's selected by attention, and how is this determined? Certain types of stimuli are likely to be preferentially selected by attention. In part, these are the things that you would think of as being attention-grabbing: loud things, bright things, things that appear suddenly. This automatic allocation of attention towards these types of sudden, or salient, onsets is a good mechanism to make us aware of potentially dangerous stimuli near us. Novelty, or in other words a change in the environment, is a crucial draw of attention.
Something changing is interesting to us. It means that there's new information, and therefore we want to process it in greater detail. A visual change causes a motion transient when it occurs, and this automatically reallocates attentional resources to the position of that change. You saw a direct example of this in the last topic, the change blindness example, which demonstrated what happens when your attention is not automatically grabbed by the motion transient of the changing item. That's because the motion transient was masked by the flicker across the entire image. You had to laboriously search across the whole scene for what was changing. However, when you detected the change, the impression was that it flashed on and off. The spotlight metaphor is an old one, and there are cases in which it probably doesn't describe all the features of attention. However, it's a useful and somewhat accurate way to grasp how the selective processes of attention operate. It can light up part of the sensory input. And, as a spotlight enables you to see something more clearly, it enables enhanced processing of the selected input.

Slide 6
There are two critical ways in which attention carries out this selection. Exogenous attention, or bottom-up attention, is the automatic allocation of your attention based on the properties of the stimuli themselves, for example, the naturally attention-grabbing types of stimuli we just talked about: being very loud, bright, or a sudden change in the environment. Endogenous attention, or top-down attention, is not automatic, but instead is the allocation of attention to items that you've chosen to pay attention to, i.e. they're relevant and interesting to you, but they're not necessarily particularly salient in the general environment. Exogenous and endogenous spatial cueing paradigms were developed many years ago by Posner, but they're now widely used across a huge number of permutations and in different clinical groups.
Slide 7
I'm now going to outline a couple of paradigms that are used widely to assess visual attention. These are, firstly, the cueing paradigms I just mentioned, developed in the '80s by Posner, and secondly, visual search tasks. OK, so first we'll try Posner's exogenous cueing paradigm. If you look at the cross in the centre of the screen, try to keep your eyes on this throughout the whole trial. You'll see a cue appear, and then a target stimulus, and I want you to decide as quickly as you can whether this target stimulus is a capital letter or a lowercase letter. So remember, it's very important not to move your eyes. This is about cueing your attention over, rather than your eye movements. So what you saw there was a valid trial. The cue that flashed up briefly moved your attention over to that side of space, and then the target appeared there. In this condition, over numerous trials, people find that their reaction times are much lower if their attention has been correctly cued to the position of the target. And their error rates are also much lower. So it enhances, or boosts, performance. Now let's look at a different condition in the same paradigm. OK, so there you would have noticed that the cue appeared on one side, but the target then went on to appear on the opposite side. This is called an invalid trial, and it means that your attention was cued to the wrong side. And then we see, over the course of many trials, that performance is worse. So reaction time is slower to the target in invalid trials, and errors are much more common.

Slide 8
So that was Posner's exogenous cueing paradigm. He also devised one for cueing attention endogenously. So have a go at the following trials in an endogenous cueing paradigm. What you saw there was a mixture of valid and invalid trials. Sometimes the arrow was pointing correctly to where the target appeared, and sometimes incorrectly.
And we find in experiments on endogenous attention that the valid trials enhance performance just as much as in an exogenous cueing paradigm, and invalid ones impede it. Experiments often vary the reliability of the cues in both of these paradigms. So if 80% of the cues that are presented are valid, and only 20% are invalid, it makes sense to use the cue. Whereas if only 40% of the cues are valid, and 60% invalid, in the endogenous paradigm you might be able to ignore them. However, in the exogenous paradigm, you're less likely to be able to suppress, or ignore, your attention being grabbed by the salient cue appearing on the side.

Slide 9
In the visual search paradigm, participants are asked to look for a particular target item as quickly and as accurately as they can. And the target is the one that is unique amongst all the other distractors. It could be a simple shape, or letter, or more complex stimuli like faces, which you'll hear about next week. The target is presented on screen among a varying number of distractors. And the number of distractors is called the set size. If you look at the examples on the slide here, the unique targets are very easy to spot. That is, they pop out from the distractors. This is because they differ in one fundamental dimension, in this case orientation or colour. The term for this is pop-out search, or feature search, as the targets and the distractors differ by only one critical feature, or parallel search. You don't need many cognitive resources to extract the target here. And you don't have to search through all the distractors, so it doesn't matter how many distractors are presented. This type of search used to be called preattentive, and it was said it required no attention at all to detect the target, and that it happened before there had been any attentional selection on the visual field.
However, although very few attentional resources are needed to detect these targets, there has probably been some basic extraction by attention.

Slide 10

On the slide now, you can see examples of a serial search, the opposite, if you like, of parallel search. Here, finding the unique target requires serially searching through all the items, as it’s not defined by a single unique feature, but by a combination of two features. This is sometimes called a conjunction search, as the target differs from the distractors by a conjunction of two or more features. For example, being blue and horizontal among an array of blue vertical and red horizontal bars.

If you compare the set sizes on this slide, you’ll find it much easier to detect the target among fewer distractors than when you have to search through more. And this type of search clearly requires a larger amount of attentional resources.

Look at the graph here, which shows the search slopes for parallel and serial search. You can see that reaction time does not go up much when more distractors are added in parallel search. This is a defining feature of this type of task: there’s no increased search time with more distractors, meaning that fewer attentional resources are required. Note that when you look at graphs of reaction time like this, we only ever compute the reaction time of correct trials.

Now look at the line for the serial search. You can see that as the set size-- that’s the number of distractors-- goes up, reaction time in this type of serial conjunction search rises too. And the steepness of this slope is a good indicator of how attentionally demanding a search task is. If the target is very similar to the distractors, or is otherwise hard for participants to find, then the search slope will be steeper, as reaction time rises quickly with the number of distracting items.
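The search-slope idea just described can be written out explicitly. This is my own illustrative sketch, not part of the lecture materials: the function name and the reaction-time values are hypothetical, but the calculation-- fitting a straight line to mean correct-trial reaction times across set sizes, so that the slope gives the extra search time per added distractor-- is the standard way such slopes are reported.

```python
def search_slope(set_sizes, mean_rts):
    """Least-squares slope of mean correct-trial reaction time (ms)
    against set size (number of items): extra ms of search per added item."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs (ms) at set sizes 4, 8 and 16 items:
parallel = search_slope([4, 8, 16], [450, 452, 455])  # near-flat: pop-out search
serial = search_slope([4, 8, 16], [480, 580, 780])    # steep: conjunction search
```

On these made-up numbers the parallel slope is well under 1 ms per item, while the serial slope is 25 ms per item, matching the lecture's point that a flat slope signals pop-out search and a steep slope signals an attentionally demanding serial search.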
Module: Psychological Foundations of Mental Health Week 2 Cognitive processes and representations Topic 2 Attention - Part 2 of 2 Dr Charlotte Russell Senior Lecturer, Department of Psychology, King’s College London

Lecture transcript

Slide 2

So what happens to items that we don’t pay attention to? As I’ve discussed, attention enables us to focus on items of interest. An effect of this is to filter out other elements which have not been selected and therefore might impede processing. So we’re not aware of items that have not been selected by attention; whatever we pay attention to, we’re aware of.

In this paper here, we provided a good demonstration of how attention enables us to filter out information even if it’s presented right in front of your eyes. In the task shown on the slide, red picture stimuli were presented with green letter strings superimposed over the top. These two stimuli together were presented in a rapid serial visual presentation paradigm. In some blocks, participants’ task was to detect immediate repetitions in the letter strings, and in other blocks the task was to detect immediate repetitions in the red picture stream. So you can see here, the dress is repeating, and in the green stream you can see the word clock repeating. Note that some of the letter strings were real words, but some were actually a random arrangement of five letters. While the participants performed this task they were undergoing functional MR imaging so that we could see what was happening in the brain during the different conditions.

Slide 3

On this slide here, you can see brain activity from the condition in which participants are paying attention to the green letter string. But what we’ve done is extract out activity related to real words in that stream compared to meaningless letter strings. And so you can see activity across the left hemisphere in the top panel because, obviously, this is more related to language.
Slide 4

The graphs presented on this slide relate directly to the brain activity you saw on the previous slide, but they show the blood flow-- the BOLD signal, the Blood Oxygen Level Dependent signal-- in the areas of interest shown on the previous slide. If you look at the left panel first, you see the BOLD signal for left frontal and temporal cortices in the attend-to-letter-strings condition. You can see a clear difference here in the signal comparing real words to random letter strings. The important thing here is to note that when we look at the BOLD changes between words and random letter strings in the attend-to-pictures condition, on the right panel, we see no differences between activity related to real words and activity related to meaningless letter strings.

So even though these words are still presented right at fixation, just as they are in the attend-to-letter-strings condition, when attention is not paid to them but instead to the pictures superimposed on top, they are entirely filtered out, and we get no meaningful word-related activity in the areas shown on the previous slide at all. Remember, this is despite those words being at fixation. If they’re not attended, we are not processing them in any meaningful way.

This study links well to a fascinating phenomenon known as Inattentional Blindness, which is the evidence that when we don’t pay attention, we can be effectively blind even to salient visual stimuli.

Slide 5

Have a look at these examples of Inattentional Blindness. The term Inattentional Blindness was first coined by Mack and Rock, and it’s a classic demonstration of the power of attention. If your focus of attention is manipulated, it doesn’t matter if your eyes are directly on the key event. You’ll still remain effectively blind to it.
It demonstrates the link between attention and awareness: to be able to see something, you need more than to look at it with your eyes-- you need to be paying attention to it too. Even if something salient happens, if attention is distracted, it effectively becomes invisible. Something incredibly obvious can happen, but if the attentional parts of your brain haven’t selected it, you simply miss it. And remember, your eyes could well have seen the events and will have landed on the relevant characteristics at some point. Studies that have looked at where people’s eyes are have revealed that they often fixate the relevant parts of the image. As I talked about previously, anything that’s fixated therefore arrives in V1. But still, if you’re not paying attention to it, you’re unaware of it.

Slide 6

In order to introduce the attention networks in the brain, I’ll outline an early and simple study using Positron Emission Tomography, PET, on sustained attention. Although it’s the selection aspects of attention that prompt the most research, there are aspects of attention that are, perhaps, more like how the man on the street might describe paying attention. These are usually called Sustained Attention, and that simply means paying attention to the same item for a sustained period of time.

The study by Pardo and colleagues is a simple, but important, early example of the neural effects of sustaining attention over time. They used PET to examine blood glucose use across the brain in relation to particular tasks. In PET, participants are injected with radioactively labelled glucose molecules. This label enables the scanner to detect where in the brain the tagged glucose is being used-- i.e. where energy is being required and, therefore, where the brain is most active in a particular task. Pardo used two types of task, a tactile one and a visual one, and these tasks were performed in separate blocks.
In the tactile task, over a sustained period, participants monitored the pauses in a constant light tapping of either their right or their left big toe. In the visual task, over a similar time period, they maintained attention to a small dot and monitored whether the brightness of this dot changed.

The results of these really simple trials showed that the parietal cortex was involved in all tasks, whether tactile or visual. And also, the involvement of the right and the left hemisphere was not equal. For the lateralised stimuli-- i.e. the toes-- when the left toe is monitored you get right parietal activity. But when the right toe is monitored you get a little left parietal activity, but also right parietal. When they looked at the visual task-- which, you remember, was presented in the centre of the screen and so not lateralised-- they got only right parietal activity. This tells us, clearly, that the right parietal cortex has a crucial role in sustaining attention across time. And, indeed, this is true of selective attention too.

Slide 7

Right parietal cortex is crucial for the allocation of attention. And this, as I mentioned in the previous topic, is part of the dorsal stream of visual processing. Have a look at the figure on the slides here. This is from Corbetta and Shulman and outlines the much larger network of attention based on analysis from many different studies, from different groups, and using different paradigms, but all studying visual attention. They make a very useful distinction here between areas that appear to be more associated with bottom-up, or exogenous, attention and top-down, or endogenous, attention. Looking at the figure here, the areas in orange indicate areas specifically involved in bottom-up, or exogenous, attention. They’re found in inferior parietal cortex and ventral frontal areas.
If you look in blue, these are the endogenous, or top-down, attention areas, in more superior parietal areas and the frontal eye fields. So the entire network encompasses both the dorsal and more ventral parietal regions, but also some areas in frontal cortex.

Slide 8

Critical pieces of information about the neural attention networks, and about attention itself, come from neuropsychological studies of patients with damage to right parietal regions. We call the syndrome that results from right parietal damage Visual Spatial Neglect. It’s called neglect as this describes how the patients behave: they neglect one whole side of the world and act as if it no longer exists. Now, it’s important to note that this is generally the left side of space in these patients. So suffering right parietal damage will leave you with left visual neglect-- neglect for the left side of space. But there is evidence that if people instead have a stroke on the left side of the brain, in the left hemisphere, neglect resolves very quickly, because these areas aren’t as crucial for attention.

So among these patients, males often fail to shave the left side of their faces and shave only the right side, and females apply makeup to only the right side of their face. Patients ignore people approaching from their left side. And they often eat food only from the right side of their plate and then say they’ve finished. If you ask if they’re still hungry, they’ll say yes. And if you turn the plate around, so what was on the left is now on the right, they’ll carry on eating.

So when you think about their behaviour, remember the strong link between attention and awareness. If you lose the ability to pay attention, you lose the ability to be aware of those things too. Think about Inattentional Blindness: we’re effectively blind to fixated items if we’ve not selected them by attention. These patients appear to be blind to items in the world around them as they can’t select them by attention anymore.
Patients with neglect, if they do notice anything is wrong-- but sometimes they don’t-- tell their family and doctors that they can’t see properly. They feel that something’s wrong with their vision. This may be how it is perceived, but it is not the case: there’s usually nothing wrong with their sight or their visual cortex; the problem is their attentional selection. And we know ourselves, from Inattentional Blindness and change blindness, how a lack of attention can render things invisible even to those of us who don’t have a parietal lesion.

So, as I mentioned briefly, remember the patients I’m talking about have damage to the right hemisphere, the right parietal lobe, and so are impaired on the left side of space. We call the side that’s impaired contralesional, opposite the side of the lesion, and the side that’s on the same side as their lesion, which is not impaired, ipsilesional.

Slide 9

OK. So how do we assess a patient with neglect? I just want to show you a couple of tasks that we do, often just at the bedside of the patient. First of all, the line bisection task. It’s very simple: patients are presented with horizontal lines and asked to bisect the line-- that is, mark the middle of the line. What they do is bisect the line much further to the right side. And this is because they don’t perceive the left of it, so this, to them, appears to be the middle of the line. Have a look at this video of a patient we saw just after he’d had a stroke that damaged right parietal cortex. You can see how much of the left side can be lost to some of these patients.

Slide 10

Cancellation tasks are also given to patients after their stroke in hospital. They’re asked to cancel out all of one type of stimuli. So here, patients are supposed to cancel out-- that’s draw a line through-- all the small stars rather than the large stars. The two that are crossed out in the centre were done by the experimenter.
And you can see, if you look at the far right side, the ipsilesional side, that the patient has crossed out only the stars far over towards that side. They’re just not moving across, or perceiving stars, any further to the left.

Slide 11

You can see a patient in the next video completing a cancellation task on a touch screen. What he’s doing is supposedly touching every one of the C’s in a field of C’s, O’s, and Q’s. And if you watch him, and watch it to the end, you’ll see that he finishes having completed only the right side of the screen.

Slide 12

Drawing tasks are also a very striking way to see the impairments that patients with neglect suffer from. If you look on the left you’ll see a copying condition. The patients have been given a clock, a house, and a flower to copy. And you’ll see in the patient’s version that the right side-- the ipsilesional side-- is completed very well, but the left side has not been completed at all. These patients say that they’ve finished the images, and they’re not aware of anything missing on the left-hand side.

In spontaneous drawing they’re asked to draw something from their mind. So here you can begin to see that this is not to do with what they’re looking at in front of them. This is drawing a face from memory, and drawing a clock from memory. And again, you see the same impairment: the right side of the image is much better and contains much more detail than the left side.

In that last panel you can see some paintings from the patient who was completing the line bisection earlier. When he was in rehabilitation he did lots of paintings. And although his neglect had got a lot better, still, in the picture of the flowers, there are lots of flowers on the right side of the vase and none at all on the left.
Slide 13

Linked to the failures in spontaneous drawing, a famous study by Bisiach and Luzzatti on what they called representational neglect showed that these patients also fail to attend to internal mental images. Think back now to my comments relating to William James’s description of attention also being directed to internal streams of thought. The patients, who were local to Milan, were asked to imagine standing at the north end of the square near the Duomo doors and then describe everything that they could see. They were accurate, in this condition, at describing the west side of the square but not the east, because the west side of the square, in this representation, was to their right, and the east to their left. However, they then asked the patients to mentally shift to the south side of the square. And they now described very accurately what was on the east side, because this was now represented on the right side of space for them. Mentally moving, so that different sides fell on the ipsilesional side, revealed that the patients had not forgotten what was in the square, but they were unable to be aware of it when those parts of the mental image fell into the impaired contralesional side of space.

Slide 14

OK. There is evidence that although we’re not aware of items that we don’t pay attention to, under some conditions we can implicitly process this information. And so researchers have been interested in whether neglect patients are also able to implicitly, or unconsciously, process items on the unattended side. In this famous study by Marshall and Halligan, they gave patients pictures of two houses, and in one of these houses there were images of flames coming out of the left windows. They checked and confirmed that the patients were unaware of the flames coming out of the left-sided windows.
However, when they were asked which of these two houses they would live in-- and they were asked to make a forced choice, because they couldn’t consciously see any difference between them-- they always chose the house without the flames coming out of it. So Marshall and Halligan suggested that there’s some residual, unconscious processing going on in these patients.

Slide 15

The implicit detection of threat-- that’s the fire in Marshall and Halligan’s paper-- links well with a growing body of data suggesting that when emotionally threatening stimuli are presented on the neglected side there is some residual processing. Here, in Patrik Vuilleumier and Sophie Schwartz’s study, patients were presented with pictures of spiders or flowers. These were designed to be very similar, apart from clearly being different types of stimuli. Pictures of the spiders and flowers were presented to the right side, the left side, or both visual fields. The patients that took part had a milder form of neglect called extinction. Patients with extinction can detect left-sided stimuli when they’re presented alone. But when stimuli are presented on the left and right sides simultaneously, they only detect the one presented on the right-hand side. So bilateral trials, when there’s a stimulus on the left and the right, are the most difficult for these patients.

In their study, when patients were presented with bilateral stimuli, spiders on the left side were detected much more frequently than flowers on the left side. This suggests that something about the emotional intensity, or the threat of the stimulus if you like, enabled preserved processing-- so they were actually consciously detected. The same authors have demonstrated similar processing on the left side for faces with emotional expressions.
Next week you’ll learn much more about preferential processing for emotional stimuli, but this time in the context of mental health rather than neurological patients.

Slide 16

In this topic, I’ve outlined to you the cognitive process of attention, and you’ve seen that its principal function is selection. This is often selection from the sensory input, but attention also selects from our mental images and trains of thought, bringing parts of these into conscious awareness. We also learned that attentional selection can be exogenous-- that’s from bottom-up stimulus properties-- or endogenous-- from our own control. And I outlined to you that attention is linked to awareness, such that we are aware of items selected by attention and unaware of those that we’ve not selected. Inattentional blindness is an impressive demonstration of this phenomenon, as is the neurological syndrome of neglect: these patients are no longer aware of the side of space that they can’t attend to. In the final section, I outlined that residual processing is possible in these patients and that this residual processing appears to be of emotional stimuli, often with a negative valence. This evidence links with research you will learn about next week.

Module: Psychological Foundations of Mental Health Week 2 Cognitive processes and representations Topic 3 Memory - Part 1 of 2 Dr Charlotte Russell Senior Lecturer, Department of Psychology, King’s College London

Lecture transcript

Slide 3

Imagine you’re involved in a conversation with a group of friends about a film you’ve watched. You have some strong opinions and points that you’d like to get across, and you’re just about to make one point which disagrees with someone, but another friend comes in with their own point, slightly changing the discussion.
Throughout this simple, everyday exchange, you must keep your points in mind, pay attention to the conversation, monitor what’s going on, and possibly modulate or add to the argument that you have in your own head. It appears straightforward, and we do it all the time. But you’re not only keeping the point you want to make in a simple, short-term memory store. This store is accessing your memories about the film from your long-term memory, using attention, and, crucially, manipulating and organising your thoughts online. This is a cognitively demanding, active process. And these features of our short-term memory store-- what we now call working memory-- were not always well accounted for by theories within early cognitive psychology.

Slide 4

Before I outline the early theory about short-term memory and why it was incorrect, let’s get an idea of the kind of time period working memory works within. If you remember from last week, you learned about Sternberg’s memory scanning paradigm. Digits were presented on screen, and then people were cued with a particular digit afterwards and asked whether it was present in the sequence. In this task, you maintain the digits in verbal working memory during the course of a trial. But another quick and very common way to measure this type of working memory is to assess someone’s digit span. This test is carried out in neuropsychology clinics the world over. Digit sequences of increasing length are presented, and people either repeat the sequence back or, more challengingly, repeat it back in reverse order, which is known as backwards digit span. When people get two sequences of the same length incorrect, the test is stopped, and their digit span is calculated as the previously presented length. So, for example, if they get five in a row correct and then make an error with both sequences of six, they have a digit span of five. Let’s try to do this now, and I’ll read out the following sequences.
Do it first forwards, i.e., repeating what I’ve said in the same order. And then rewind it, and do it backwards, so that if I say 3, 4, you would say 4, 3.

OK, let’s start. 2, 5. 7, 3. 3, 8, 5. 2, 1, 4. 5, 4, 7, 9. 1, 4, 2, 8. 9, 3, 2, 7, 5. 3, 8, 2, 6, 4. 6, 8, 2, 1, 5, 7. 4, 9, 3, 8, 2, 1. 9, 5, 3, 4, 2, 7, 1. 6, 2, 8, 1, 9, 3, 4. 8, 1, 7, 9, 3, 6, 2, 5. 2, 6, 1, 9, 4, 3, 8, 7.

I’ll stop there at a sequence of eight. And well done if you got to eight perfectly, particularly in the backwards condition. Now, there are very large individual differences in digit span ability, and doing it well depends on many things, such as how tired you are. Many people have associated the length of digit span with IQ. However, it’s unlikely that this is a very straightforward relationship, particularly as, as I’ve just said, it will vary according to many things, such as tiredness. Nevertheless, keeping in mind a large set of material and manipulating it effectively-- for example, producing it backwards-- is a very challenging and extremely useful cognitive skill.

Slide 5

Let’s have a look now at the concept of a short-term memory store that existed before working memory was proposed. This is known as the modal model of memory and was devised by Atkinson and Shiffrin. For a long time this was seen as the definitive description of how the roles of short-term memory and long-term memory were parcellated, and the figure shows you how these roles were described. Items from the different senses enter sensory-specific stores for a brief time before entering the short-term store, where these elements might be rehearsed and transferred into long-term memory; or we lose them, perhaps through failing to rehearse them enough, and they’re displaced. Once in the long-term store, these items are in a permanent memory that can be returned to the short-term store if needed for a current goal or task.
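As an aside, the digit-span scoring rule described under Slide 4 can be written out explicitly. This is my own illustrative sketch, not part of the lecture: the list-of-trials representation and the function name are hypothetical, but the stopping rule-- stop once both sequences of a given length are failed, and score the longest length recalled correctly-- follows the description above.

```python
def digit_span(responses):
    """responses: list of (sequence_length, correct) pairs in test order,
    with two trials per length. Returns the digit span."""
    span = 0
    errors_at = {}  # count of failed trials per sequence length
    for length, correct in responses:
        if correct:
            span = max(span, length)
        else:
            errors_at[length] = errors_at.get(length, 0) + 1
            if errors_at[length] == 2:  # both sequences of this length failed
                return span             # stop: span is the last length passed
    return span

# E.g. correct up to length 5, then both length-6 sequences wrong -> span of 5
trials = [(2, True), (2, True), (3, True), (3, True),
          (4, True), (4, True), (5, True), (5, True),
          (6, False), (6, False)]
```

Running `digit_span(trials)` on this hypothetical record gives 5, matching the worked example in the lecture: five in a row correct, then errors on both sequences of six.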
Importantly, in this model, the route to long-term memory is always through short-term memory. However, a simple study of just one neurological patient by Tim Shallice and Elizabeth Warrington showed that, as others had suspected too, the route to long-term memory was not necessarily through rehearsal in a short-term store, and therefore the relationship between these two types of memory was not as described in the modal model. The patient in this study showed a severe short-term memory impairment in a variety of verbal tasks, despite having no impairment at all in verbal long-term memory.

Shallice and Warrington argued two things: first, that short-term memory and long-term memory do not use the same neural structures as each other, as whichever structure was damaged in the patient did not impair his long-term memory; and secondly, that there can’t be a sequential route from short-term memory to long-term memory, as otherwise any impairment in short-term memory would prevent somebody possessing a normal long-term memory store.

Slide 6

It’s not only the sequential nature of the modal model that did not fit the data. There also appeared to be separate short-term stores for different types of information. Baddeley and Hitch in 1974 showed this very nicely. Participants had to keep in mind a string of digits of varying length, and while they did this, they were presented with a spatial reasoning task. For example, in this reasoning task, they might see the letters C, G, and then the statement, “C is before G; true or false,” which they’d have to answer. The figure shown on the slide here shows that, although the time to complete the reasoning task went up as the length of the digit string to be recalled rose, the number of errors made did not go up.
This means that keeping the digits in mind successfully-- these data are only from trials in which participants correctly recalled the digit string-- did not modulate reasoning accuracy. And so these two tasks, which both need some kind of short-term memory store-- one for a verbal string of digits and one for a reasoning rule-- presumably are not relying on the same resources. Otherwise, they wouldn’t both be completed successfully.

From this and related studies, Baddeley and Hitch developed the hugely influential model of working memory, and this has effectively replaced the idea of short-term memory.

Slide 7

Have a look on the slide at Baddeley and Hitch’s first model of working memory. Working memory does not have as a function a route into long-term memory in the way the modal model did. Rather, it is the mechanism by which we maintain online items from long-term memory that are relevant for our current task. This storage is not passive and enables manipulation of this material; for example, the backwards digit span. We don’t just repeat what’s been said in that task, but transform it into the opposite order. And the addition of a central executive in this model is what enables this online manipulation. But, related to the study I was just talking about on the previous slide, the other important addition is having two independent short-term stores: one for verbal information, the phonological store or loop, and one for visuo-spatial information, the visuo-spatial sketch pad. So in a dual-task study, participants can maintain the digits in the phonological store, while the visuo-spatial sketch pad can handle the relationship between the letters for the reasoning task. The central executive facilitates the process by organising the correct type of material into the correct store.
Some key assumptions of this model are, first, that if two tasks use the same parts of working memory, they cannot be carried out well at the same time; and secondly, that if the two tasks are using different parts, they should both be completed accurately.

Slide 8

Let’s start by looking at the phonological store. The Paulesu et al. study from 1993 used PET to assess where in the brain might be important for this aspect of working memory. Note that nowadays, using our phonological store is often called verbal working memory, or VWM. The authors here reasoned that English-speaking participants would use the phonological store for keeping in mind six English letters, but would use a different type of store for recalling Korean characters, as these are not verbally rehearsable if you can’t speak Korean. They isolated recall of the English letters, and not the Korean characters, to the two areas of the left hemisphere which you can see circled on the slide here. One, circled in green, is more frontal, and one, circled in blue, is more parietal.

Slide 9

They then wanted to find out if these two areas differed in their roles: memory storage, or rehearsal of the sounds of the letters. If you think of Baddeley and Hitch’s model, what they’re looking at here is the phonological store compared to the articulatory loop. So to the previous memory task-- and remember, that was holding in mind six letters-- they added a rhyming task. For example, participants were asked: does the letter that you’re currently holding in mind rhyme with B? This task assesses online rehearsal of the letter sounds rather than storage.

Slide 10

Here, you can see a graph showing the difference in the use of these two areas, the frontal area and the parietal area, in these two tasks. Memory refers to the first task that we talked about, which was holding in mind six letters.
Rhyming refers to the second task we talked about, where they were asked whether the letter they had in mind rhymes with B. And you can see here, there’s much more frontal activity when judging rhyming. However, in the parietal area, activity during the rhyming task does not even reach baseline level.

So it seems that the left parietal cortex might be more specific to a verbal working memory store, like the phonological store of Baddeley and Hitch’s model, while the left frontal region might represent a neural correlate of the articulatory loop in their model. Interestingly, this left frontal area overlaps somewhat with Broca’s area, which is a region we know to be involved in speech production.

Slide 11

Turning to the visuo-spatial sketch pad, this is used for creating a visual mental image of new items, or of items from long-term memory that you need online. To use your spatial working memory-- which is the name for the type of working memory you use in the visuo-spatial sketch pad-- just imagine your kitchen now and think of how you would go from your kettle to your fridge. Navigating around this type of visuo-spatial mental image is one key task of spatial working memory. For us to be able to do this, we must hold in mind accurate spatial relationships between the items.

The study presented here by Possl assessed the neural areas involved in holding this type of spatial information in mind; i.e., in spatial working memory. Participants performed a simple task in the functional MRI scanner. The task started with an initial fixation screen; then a target item appeared on the right or the left of a central fixation cross. This disappeared, and they were required to hold its position in mind over a delay period. In the delay period, while they looked at the fixation cross, black-and-white checkerboard patterns were presented bilaterally.
These checkerboards were used as they're known to stimulate V1 very effectively. After this period, the fixation cross was still present, and two probe items were presented. Participants' task was to respond whether the target they had seen before the delay period had been to the left of, to the right of, or in the same position as the probe items now presented. So in order to do this effectively, they had to maintain the position of the target item in their spatial working memory over the delay period.

Slide 12

What they examined in the results was visual cortical activity in V1 in response to the checkerboards during the delay period. And what they showed was that holding a location in spatial working memory during the delay period enhanced early visual activity for the same side of space as that of the target. So here, it's on the right of the brain for the left target that you saw on the slides. That means that keeping something in spatial working memory involves priming visual activity in the parts of the cortex that correspond to the item's location in the world. So there's some overlap here with the attention processes I talked about in the last topic, which also spatially prime responses. Think of the exogenous cueing paradigm, for example. And indeed, it appears likely that similar right-hemisphere parietal regions are required for spatial working memory, just as they are for spatial attention. In fact, the neglect patients we talked about in the previous topic have severely impaired spatial working memory as well, which suggests that, whereas left parietal regions are involved in verbal working memory, right-sided parietal regions are involved in spatial working memory.
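The delayed-response trial just described can be sketched in code. This is purely an illustrative outline of the trial structure, not the authors' actual experimental script; the event names and durations (in seconds) are assumptions for illustration only.

```python
# Illustrative sketch of one trial of the delayed-response spatial
# working memory task described above. Event names and durations are
# assumptions, not taken from the actual study.

def run_trial(target_side, probe_side):
    """Build the event sequence for one trial and return it together
    with the correct response ('same', 'left', or 'right')."""
    events = [
        ("fixation", 1.0),               # initial fixation screen
        ("target_" + target_side, 0.5),  # target left or right of fixation
        ("delay_checkerboards", 8.0),    # bilateral checkerboards during delay
        ("probes", 0.5),                 # two probe items appear
    ]
    # The participant judges the remembered target location, held in
    # spatial working memory over the delay, relative to the probes.
    if target_side == probe_side:
        correct = "same"
    else:
        correct = target_side
    return events, correct

events, answer = run_trial("left", "right")
```

The point of the sketch is simply that nothing on screen during the delay carries the answer: the target's location must be maintained internally, which is why delay-period V1 activity to the task-irrelevant checkerboards is so informative.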
Slide 13

In our video, you can see the patient from the attention topic completing the cancellation task again, but this time without being able to see where he's cancelled, or which Cs he's clicked on already. When he has to keep the locations in mind, which adds an additional spatial working memory element to the task, he's actually much worse than when he could see where he'd cancelled already.

Slide 14

On the slide here, I've outlined very briefly some of the principal functions of the central executive, as proposed by Baddeley and Hitch. Firstly, it decides what information should go into the stores and which store the information should go into, and it can inspect, transform, and manipulate all of this information from the different stores. But I'm going to leave working memory here for now, as what the central executive is and how it works are pivotal to the next topic this week-- that is, control processes or, in other words, executive functions. And we'll move on now to long-term memory.

Module: Psychological Foundations of Mental Health Week 2 Cognitive processes and representations Topic 3 Memory - Part 2 of 2 Dr Charlotte Russell Senior Lecturer, Department of Psychology, King's College London Lecture transcript

Slide 2

Turning to long-term memory, I'm going to start by fractionating this function into two quite different processes: one which forms the memories that we can consciously access and explain, known as explicit memory, and the other which we cannot describe or define, but which nevertheless consists of long-term memories, known as implicit memory. For example, we have long-term implicit memories for skills we've learned, like catching a ball. And priming is a good example of implicit memory.
In this phenomenon, exposure to some stimuli, for example a list of words, may alter participants' responses to later stimuli, without them explicitly recalling the previous words or knowing that those words are affecting their responses. Note that implicit memories are not accessed through any conscious recollection. Two other terms used for these two types of memory are "declarative memory" for explicit-- that means memories we can declare or say out loud-- and "non-declarative memory" for implicit, i.e., memories we cannot describe in words. In order to demonstrate this fractionation, I'm going to use evidence from patients with amnesia.

Slide 3

When the term "amnesia" is used, it refers to a specific problem in long-term memory, without concurrent decline of any other cognitive function. "Retrograde amnesia" is the term for loss of memory from before the event that caused the amnesia. And this is perhaps what people think of when they imagine amnesia: for example, someone in a film has a bump on their head, and they can't remember where they are, who the people are that they're with, or even who they themselves are. But actually, this is extremely rare, and the brain injuries that lead to amnesia do not usually lead to any severe loss of previously acquired memories from long-term storage in isolation. Much more common are anterograde impairments, which cause a loss of the ability to acquire any new memories. And this is very debilitating indeed. There's little evidence, remember, that focal retrograde amnesia-- that means pure retrograde amnesia without the learning deficits of anterograde impairments-- exists without there being a psychiatric origin. OK, so what do you think the film might be that memory researchers rated the best depiction of amnesia in the movies a few years ago?

Slide 4

So in Finding Nemo, Dory always says that she has a short-term memory problem.
But what she really demonstrates is a clear case of anterograde amnesia: she fails to learn new events or new characters.

Slide 5

The distinction between implicit and explicit long-term memory was clearly shown for the first time by Brenda Milner in her study of the most famous neuropsychological patient, HM. He suffered from severe anterograde amnesia after removal of parts of his medial temporal lobes, which you can see in the figure here, and of the hippocampal formations on both sides of his brain, so bilaterally. HM had had this operation as he suffered from profound and life-changing epilepsy; he was having many serious fits a day. Operations of this type are performed successfully to this day, but much less of the medial temporal lobes is removed, and memory impairments are vastly reduced, if they're present at all. After his operation, HM could still remember family, friends, and events from before the medial temporal lobes were removed. He suffered, in fact, only quite mild retrograde amnesia, with estimates suggesting an impairment for perhaps the two years before the operation. However, new people and new events that occurred after his operation were never remembered. Dr Brenda Milner, who worked with him for over 40 years, had to introduce herself on each visit.

Slide 6

Brenda Milner and her colleague Scoville carried out a detailed study of HM over many years. They showed clearly that his working memory was in the normal range, for both verbal working memory and spatial working memory. His digit span was six, which is well within the normal range. And his spatial working memory span, which was assessed with the Corsi blocks, was five, also in the normal range. Just briefly, in this task, if you look at the figure, the experimenter, rather than saying digits out loud, taps a sequence on the blocks.
The patient, on the opposite side of the blocks, can't see the numbers the experimenter is tapping; they just have to remember where the taps were. So HM had a normal digit span and a normal spatial working memory span. But when Scoville and Milner modified these tasks to introduce a long-term memory element, simply by trying to help him learn one more item at a time in the sequence of digits or of taps on the Corsi blocks, he failed to progress either verbally or spatially. In the digit span, he managed just one more-- normally, people trained this way can get up to around 15-- and in the Corsi block task, he never made it up to six at all.

Slide 7

Milner next tried a different type of task altogether. This was the mirror drawing task, and it revealed fascinating results. In this task, people must watch their hand in a mirror while they try to draw around a star shape. This is very difficult to begin with, as you see your hand moving in the opposite way to the movement you've planned. HM was presented with this task 10 times a day on three consecutive days. And each time Brenda Milner came in with the task, he had no recall of having done it before. But if you look at the graphs, you can see clearly that he shows very good improvement-- in fact, exactly at the level you would expect of a person with no brain injury. Certainly, he's learning the task very well, and he's perfect by day three. But remember, this was without him being able to consciously remember ever having completed the star drawing task. This was the first evidence that although he was unable to learn new explicit knowledge and form new memories, he was able to retain brand new procedural skills, but without any awareness of learning them at all.
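The dissociation in the mirror drawing results can be summed up in a minimal sketch: performance improves across attempts and days while explicit recall of the task stays at zero. The error counts below are hypothetical numbers chosen to mirror the shape of the graphs, not HM's actual data.

```python
# Minimal sketch of the mirror drawing dissociation: errors fall
# across days (implicit learning) while explicit recall of having
# done the task remains zero. Error counts are hypothetical.

def shows_implicit_learning(errors_day1, errors_day3, explicit_recall):
    """True when errors decline from the first attempt on day 1 to the
    last attempt on day 3, despite no explicit memory of the task."""
    improved = errors_day3[-1] < errors_day1[0]
    return improved and explicit_recall == 0

day1_errors = [30, 25, 21, 18, 16, 14, 12, 11, 10, 9]  # 10 attempts, day 1
day3_errors = [4, 3, 2, 2, 1, 1, 0, 0, 0, 0]           # 10 attempts, day 3
result = shows_implicit_learning(day1_errors, day3_errors, explicit_recall=0)
```

The key design point is that the two measures are independent: the error curve indexes procedural memory, while the recall question indexes explicit memory, and in HM only the first shows learning.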
Slide 8

Scoville and Milner wanted to test this type of learning in another way, to extend it beyond a motor learning task, or procedural memory, and see whether he would also show improvement in another domain. They chose a visual task, the fragmented pictures task shown here. You can see that the pictures in block one are extremely fragmented indeed, and impossible to recognise, but they become more and more detailed as you go through the blocks. Exposure to the more complete versions, if you can form some kind of memory of them, will then improve your performance when you next see the extremely fragmented ones in block one.

Slide 9

If we look now at HM's performance, you can see that in block one he fails to recognise many of the images, as indeed I'm sure you couldn't either. But he learns as he goes through the blocks. The crucial thing here is that one hour later, when he's shown the block one pictures again, he makes very few errors, showing that he's learned something from the previous four blocks. But again, he has no recollection of performing this task before. This performance is entirely within the normal range, so he's learning visual material too, albeit entirely implicitly. These data from HM provide excellent insight into the difference between explicit and implicit memory. Both are long-term memory processes, but one, explicit memory, is impaired in HM, while the other, implicit memory, is entirely unimpaired. So we learn that the parts of the medial temporal lobes that were removed in HM are not needed for implicit memory formation, but they're certainly necessary for explicit long-term memory. His lack of a dense retrograde impairment also tells us that those medial temporal lobe regions are unlikely to be the site of long-term storage of old memories.
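The logic of the fragmented pictures result is a "savings" measure: fewer errors on the most fragmented block an hour later indicates implicit perceptual learning. A small sketch, with entirely hypothetical error counts rather than HM's actual scores:

```python
# Sketch of the "savings" logic in the fragmented pictures task:
# a proportional reduction in block-one errors at re-test indicates
# implicit learning. Error counts are hypothetical.

def savings_score(errors_initial, errors_retest):
    """Proportional reduction in errors on re-test of block one."""
    return (errors_initial - errors_retest) / errors_initial

# e.g. many errors on first exposure to block one, very few an hour later
score = savings_score(errors_initial=30, errors_retest=5)
```

A score near 0 means no benefit from the earlier exposure; a score near 1, as in HM's case, means the earlier blocks were almost fully retained, even with no conscious recollection of the task.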
Slide 10 So we’ve seen that memory is fractionated into working memory and long-term memory, and then that long-term memory is fractionated itself into implicit and explicit memory processes. Now, we will stick with explicit memory and examine a further fractionation. And this is the distinction between episodic and semantic memory. There’s clear evidence for a difference in the way that we remember events that happened to us-- that’s episodic-- and facts about the world around us. That’s semantic. So episodic memory is our memory for our personal experiences or personally experienced events. Note that this doesn’t have to mean that they’re to do with emotional, autobiographical memories or particularly things that are very special to you, although obviously they can be, and there’s evidence that these might be consolidated more powerfully. But you can have an episodic memory just for when you think back to walking into your kitchen this morning or having dinner last night. By this, I mean the memory of the event of having dinner last night and not just a recall of the food you ate. And this brings me to the three Ws of episodic memory, what, where, and when. To have an event memory, all these three elements must be there. So if you really have an episodic memory of last night’s dinner, when you say what you ate and who you ate it with-- that’s what-- you also form a mental image with spatial information of what the scene around you looked like during dinner, which is the where. And then, you will correctly position this memory in time. And that’s when. And I don’t mean here the actual time you ate dinner at, for example eight o’clock, but the temporal order of the Transcripts by 3Playmedia Week 2 © King’s College London 2017 3. event within your life. So that’s that it was after lunch, that it’s last night’s dinner, which occurred yesterday, and that it was after dinner the night before that. So semantic memory is also part of explicit memory. 
But these are our memories of general knowledge: what we learned at school or university, what characteristics belong to different animals, which countries are in Europe. And very differently from episodic memory, they are context free. They're not stamped with that spatial and temporal information, the what, where, and when of episodic memory. You don't need to remember where you heard something first, or whether you learned the features of a rabbit before or after you learned the features of a mouse. They don't contain these three Ws. So there's evidence that episodic and semantic memory are functionally distinct, and also that the brain networks they require might be different too. For example, evidence does suggest that episodic memory is much more affected by medial temporal lobe and hippocampal damage than semantic memory.

Slide 11

Endel Tulving, who coined the terms "episodic" and "semantic memory," studied a patient known as KC. After sustaining damage to his medial temporal lobes bilaterally in a motorbike accident, KC had severe anterograde and retrograde amnesia, but it was confined to episodic memory. Note also that HM's deficits were thought to be much more episodic than semantic. Evidence that KC's deficit was specific to episodic memory includes the fact that he took a mechanics course after his injury and during it managed to learn new semantic terms, for example "spiral mandrel," but without explicitly recalling people from the course or any episodic events from the same period. Any memories he did seem to form of episodic-type events, like family weddings, were very factual in nature and didn't include a sense that he had personally experienced them at all.
In the top panel on the slide, you can see a coronal slice through KC's brain, with the arrows pointing to the medial temporal lobes and hippocampus, which you can see are extremely damaged, particularly if you compare them to the lower panel, where the arrows point to the same areas of a healthy brain.

Slide 12

However, some research has argued that the differences shown between episodic and semantic memory might simply be down to the way in which we learn episodic memories compared to semantic ones. An event in our life can be experienced only once; however, when we learn facts or read news stories, we often see the same material presented again and again, sometimes over a period of years. People thought that this less frequent exposure to episodic-type information might be a possible reason why it is more susceptible to loss through brain damage than semantic information. With this in mind, Vargha-Khadem and her colleagues sought to test people who had suffered hippocampal damage at a very young age. The thinking was that if these patients showed an episodic-semantic distinction, this would be unlikely to be due to the time over which something had been learned, but rather to something qualitatively different between episodic and semantic information. So the researchers were interested in whether these young people could learn semantic knowledge without these medial temporal lobe and hippocampal brain regions, and despite any episodic impairment. In the brain scans shown here, the left panel is a coronal cross-section of a healthy medial temporal lobe region, including the hippocampi. On the right are the same sections from one of the patients participating in this study. The arrows point out the areas that are particularly damaged; you can see they are much smaller and less dense.
Three patients took part in this study, all of whom had suffered hippocampal and medial temporal lobe damage at a very young age.

Slide 13

These are the results from some of the episodic tests that Vargha-Khadem and her colleagues performed on the three young patients. On the left are the results from word list recall, the black columns representing immediate recall and the lighter ones recall after just a short gap. You can see that the patients, Beth, John, and Kate, compared to a group of healthy controls matched for age and labelled NC in the figure, forget almost all of the words after a short period. On the right is the Rey complex figure task. In the first panel, participants have copied the figure, and in the second panel, you can see their versions in a surprise memory test for the figure after just 20 minutes. The example from the healthy controls shows that a reasonable amount has been retained. However, the versions from the young medial temporal lobe patients are striking: they have really remembered very little at all from the image. These formal tasks were supported by self-reports from the patients and interviews with their families. In daily life, they demonstrate dense problems in remembering events that have happened to them, people they meet, and so forth.

Slide 14

However, despite these episodic deficits, these children attended mainstream school, and they had IQs and knowledge within the normal range for their age group. Look here on the slide for some examples from the semantic testing that Vargha-Khadem carried out. For example, here, John is asked, what is a sanctuary? And he replies, safe haven, a place of safety everyone can go to. Or Kate is asked, why do some people prefer to borrow money from a bank rather than from a friend? And she answers, because they can pay back the money in their own time; a friend might pester them. So you can see there's really no semantic impairment in these children.
And the results from the patients show a clear episodic-semantic distinction: they were impaired in every episodic task, but within the normal range in semantic tasks.

Slide 15

Now that we've seen the difference between episodic and semantic memory, let's think a little about episodic memory itself. It's crucial to our sense of ourselves that we have a sense of re-experiencing the events that we recall. Indeed, the loss of episodic memory that is frequently among the first symptoms of dementia is exceptionally upsetting and unsettling for patients. The significance of our personal memories makes it all the more surprising to learn that remembering is very much a reconstructive process. Rather than being analogous to an accurate snapshot or a video of an event, our memories are biased by our expectations. They change over time or are susceptible to influence by false pieces o