Lesson 7: Memory Structure & Processes PDF

Document Details

Uploaded by PamperedStrait1339

Università di Padova

Tags: memory, working memory, cognitive psychology, human memory

Summary

This document discusses the structure and processes of memory, focusing on working memory and long-term memory. It explains encoding, storage, and retrieval, introduces the multi-store model of memory developed by Atkinson and Shiffrin, and covers the key components of working memory: the central executive and the visuospatial and phonological slave systems.

Full Transcript


Lesson 7: The structure of memory and working memory

WHAT IS MEMORY AND ITS PROCESSES

Memory stores all the information you have learnt in your lifetime: all the episodes, the meanings of all the words you know, the core notions about who you were, who you are now, and who you want to become in the future. This is what we call “long-term memory”. But memory also crucially serves your reasoning, problem solving and decision making by briefly holding different pieces of information while you are elaborating your thoughts. This is what we call “working memory” (once known as “short-term memory”, a term that is sometimes still used today). Three fundamental and frequently cited processes of memory are: encoding, the initial attending to stimuli / acquiring of information, which requires some attention; storage, maintaining and consolidating the information over time; and retrieval, accessing the information at a later time, when needed. Memory failure may occur because of a failure in any of these three processes: because we failed to dedicate sufficient attention to the original stimulus (encoding), because the trace was not sufficiently consolidated afterwards (storage), or because we are unable to access the piece of information when we need it, even though it is somewhere in our brain (retrieval). In fact, these three processes better describe the functioning of long-term memory. As we will see later in this unit, working memory is a much more limited-capacity store, in which information is simultaneously and directly present to our awareness.

ONE MEMORY OR MANY “MEMORIES”

The distinction between temporary and permanent memory systems became famous with the “cognitive revolution” of the 1950s, and it is still greatly relevant today for its connection with practically… any cognitive process.
By likening the human mind to a computer, we might say that short-term memory (later, working memory) is our “RAM”, which briefly stores a limited amount of information while we use it, while long-term memory is more like our “hard disk”, which stores a very large amount of information permanently, but with slower access. A similar distinction, however, had already been made by William James in the 19th century, when he spoke of “primary” vs “secondary” memory. In fact, the distinction between short- and long-term memory does not fully cover all the important articulations of memory. Temporary systems were divided into a “sensory register” and a “short-term memory” by the modal model (see next slide). Working memory was then divided into different storage systems (verbal vs visuospatial) plus a central executive directing attention. Long-term memory is the most articulated sub-system of memory: it includes implicit and explicit memory, each of which includes further articulations (presented in the next unit), partly based on different “subjective” qualities (e.g., the memory of specific episodes vs general knowledge about facts), but also based on neuropsychological evidence.

A MULTI-STORE MODEL

An early and highly influential model of memory was proposed by Atkinson and Shiffrin (1968) and is called the “modal model”. It describes memory as a three-stage process, in which memory traces successively enter three storage systems that eventually encode information permanently. Although it has been criticized in some respects, the model has enormously stimulated subsequent memory research. The figure below summarizes the model. Please study it and ask if anything is unclear!

TEMPORARY SYSTEMS

Two of the three memory stores in the modal model, namely the sensory register (or “sensory memory”) and short-term memory [STM], are temporary systems.
Their duration is so short that you would probably not even have called them “memory”: in common language, “memory” is generally associated only with what psychologists call long-term memory [LTM]. However, all these systems “retain” information, regardless of for how long. Sensory registers are very volatile memory buffers strictly associated with the specific modalities of the peripheral sensory systems (visual, auditory, etc.). These buffers serve to retain the sensory information long enough for the brain to initially process it, but if a piece of information does not explicitly receive attention, it is quickly lost (in a matter of just a few tenths of a second for the visual modality, for example). Information that receives attention is passed on to the short-term memory system, where it stays for a few more seconds, or longer if continuously rehearsed, and it may eventually be stored permanently in long-term memory. In fact, the sensory register is of relatively limited importance for higher cognition, and researchers have focused much more on the short- and long-term memory systems. If you want to know more about the sensory register/memory, however, you can read this well-written online chapter. In the rest of this unit, I will focus on the construct of short-term/working memory. In the next unit, I will focus on long-term memory.

SHORT-TERM MEMORY

STM is the most limited memory system in terms of capacity. It is believed to retain an average of only 7 (±2) independent elements. In typical short-term memory tests, “elements” may be letters, single digits, or colored squares of a matrix. This means that, for example, most people can correctly remember a sequence of between 5 and 9 single digits just after hearing it, but they will fail with any longer one.
In fact, exceptional mnemonists may even quickly learn sequences of several tens of digits… but they have developed strategies to rapidly chunk the elements together and transfer them to LTM for later retrieval. They do not truly have super-human STM capacity! STM can be seen as a “spotlight”. Our sensory systems are constantly bombarded by huge flows of information, but only a very small subset of it receives our attention and is passed on to STM, where it serves higher cognition. In fact, information in STM does not come only from the sensory registers: it can also come from LTM. The information flow between STM and LTM is bidirectional. Whenever we “remember” something we had previously encoded, we are taking a trace back from LTM to STM. STM lasts only briefly if the information is not rehearsed, or in general if you do not keep your attention on it. For example, if you are told the names of a few people to remember, but you are simultaneously distracted by another task, you lose the information in about 10 seconds. Luckily, with a little effort and mental elaboration you can transfer the content of STM to LTM - which is what you do when you study!

WORKING MEMORY

Working memory (WM) is a conceptual evolution of short-term memory. An early but still influential model of WM was proposed by Baddeley and Hitch (1974). It is depicted in the figure. It encompasses two “slave systems”, the visuospatial sketchpad and the phonological loop, which are short-term memory stores responsible for retaining visuospatial and verbal/phonological information, respectively, plus a superordinate “central executive” system, which is responsible for directing attention, inhibiting sources of distraction, retrieving information from LTM, and carrying out other higher-order operations. WM is often considered a core component of higher intellectual functioning. For example, WM is one of the main indices in the famous Wechsler scales of intelligence.
A revision of the WM system in Baddeley (2000) added a third slave system called the “episodic buffer”, which allegedly retains multi-dimensional information, combining the stimuli from the other two slave systems, as well as information retrieved from LTM, to form complex mental representations. This new component has not been as influential as the original model.

THE SLAVE SYSTEMS OF WORKING MEMORY

The visuospatial sketchpad and the phonological loop are stores that retain bits of visuospatial and verbal information, respectively. They are sometimes called “passive” systems, or simply “short-term memory”, to indicate that their function is only that of maintaining the stimuli, without elaborating them (only the “central executive” component has the role of actively processing stimuli). Both systems are provided with a rehearsal process that allows you to keep the information available and prevent decay after a few seconds. Are these two systems truly distinct? Yes, there is evidence that they can work in parallel. For example, repeating “LA, LA, LA” aloud or sub-vocally strongly impairs your ability to simultaneously remember a sequence of letters or digits, because it saturates your phonological loop, but it does not impair your ability to remember the positions of squares on a matrix, because it does not interfere with your visuospatial sketchpad.

THE CENTRAL EXECUTIVE

The central executive is a cognitive device embedded in the WM system, although it does not retain any information itself. It is more like the “CPU” of a computer. It processes and elaborates the information that is currently retained in the subordinate “slave systems” of memory. It also triggers the retrieval of information from LTM when needed, directs attention to new stimuli in the environment that are relevant to a current goal while inhibiting sources of distraction, and manages complex tasks requiring multiple simultaneous processes.
A real-life example of the latter is understanding new articulated concepts while you study. It requires that you hold many pieces of information simultaneously active in your temporary memory while you try to connect them and grasp the overall meaning. Another example is solving complex arithmetical problems. Even if the data are written down on paper, so you think you do not have to memorize anything, your WM is in fact extremely active: you need to hold a complex mental representation of the problem, including a series of connections between the data, and you need to retain a series of partial steps towards the solution while you probe your LTM for previous examples of similar problems and simultaneously elaborate the data. Unsurprisingly, the central executive component has to do with memory, but it is also crucial for intelligence. In fact, some researchers prefer to relate it to attention rather than to memory.

MEASURING WORKING MEMORY CAPACITY

When you want to measure someone's WM performance, you generally refer to its capacity. Classical tests require memorizing increasingly long sequences of items, stopping when the participant can no longer remember them. This is called a “span procedure”. The question is precisely which component of WM you want to measure. Measuring the phonological loop is relatively easy. You could ask the participant to listen to and immediately repeat increasingly long sequences of digits. For example: “3, 6, 2, [now repeat!]”, then “7, 1, 3, 9, [now repeat!]”, and so on, as in the “Digit Span” subtest of the famous Wechsler scales of intelligence. Measuring the visuospatial sketchpad is also easy. You may present matrices with an increasingly large number of blackened squares (see previous slides), and immediately afterward ask the participant to tap the black squares on an empty matrix. Measuring the central executive is more complex.
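The simple span procedure can be sketched as a short simulation. This is only an illustrative sketch: the simulated "participant" and the 7-item cutoff are assumptions made for demonstration, echoing the classic 7 (±2) finding, and are not part of any standardized test.

```python
import random

def digit_span(recall, max_len=12, seed=0):
    """Simulate a span procedure: present increasingly long digit
    sequences and return the longest length recalled correctly.
    `recall` stands in for the participant: it takes the presented
    sequence and returns the sequence as remembered."""
    rng = random.Random(seed)
    span = 0
    for length in range(3, max_len + 1):
        sequence = [rng.randint(0, 9) for _ in range(length)]
        if recall(sequence) == sequence:
            span = length   # longest sequence repeated correctly so far
        else:
            break           # stop at the first failure
    return span

# A toy "participant" who can hold at most 7 items:
participant = lambda seq: seq if len(seq) <= 7 else seq[:7]
print(digit_span(participant))  # → 7
```

The same scaffold could be adapted to the visuospatial version by replacing digit sequences with matrix positions; only the stimuli change, not the logic of the procedure.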
In the previous two cases the task only required holding a series of elements (until the series became too long to remember). Measuring the central executive requires that one not only remember, but simultaneously elaborate something. The “Operation span task” is an example: there, a participant must not only hold a series of digits (or other elements), but also solve simple arithmetic operations after each new digit is presented.

Long-term memory | PERMANENT SYSTEMS

Long-term memory (LTM) permanently stores all the information we have acquired in our lifetime. It is virtually unlimited in terms of both capacity and duration. A limit, however, is represented by the “risk” of a memory trace becoming progressively inaccessible: for example, because you have not retrieved it for a long time, because it is no longer connected with anything that is relevant for you now, or because other, similar and perhaps more recent memory traces eventually overshadow it. “Long-term” may make you think of memory traces being retrieved after months or years. In fact, LTM also includes everything you remember from yesterday, or one hour ago, and even a couple of minutes ago (provided that in the last couple of minutes the trace ceased to be retained in your working memory [WM]). Like other constructs in psychology, LTM is multi-componential. The figure below shows a summary of its main articulations.

EXPLICIT VS IMPLICIT MEMORY

A first general distinction within LTM is between explicit and implicit memory (Squire, 1992). Explicit (or “declarative”) memory refers to any information that we retrieve from LTM in a primarily verbal and conscious way (e.g., what we ate yesterday for dinner, what river flows through Paris). Implicit (or “non-declarative”) memory refers to any LTM phenomenon that involves a non-verbal, and potentially (but not necessarily) even unconscious, retrieval process. Three frequently mentioned types of implicit memory are the following:

1. Procedural memory, which involves retrieving procedures so well acquired that they have become automatic and no longer require attentional effort (e.g., tying your shoes, or a pianist playing a well-known piece of music).

2. Priming, which involves the facilitated retrieval of an item after you have (perhaps even unconsciously!) encountered a related stimulus (e.g., you become faster and more likely to respond “pizza” to “p _ _ _ _” after seeing the picture of a tomato rather than a tennis ball).

3. Conditioning itself is largely a manifestation of implicit memory (see the lesson on Learning and conditioning).

A demonstration that explicit and implicit memory are distinct is offered by neuropsychology: patients with severe amnesia that largely destroys their explicit memory may still have largely intact implicit memory. In the rest of this unit, I will focus on explicit memory alone, which has been more extensively studied in its relations with the higher cognitive processes of humans.

EPISODIC VS SEMANTIC MEMORY

The most important distinction within explicit memory is between episodic and semantic memory (Tulving, 1972). Episodic memory includes all memory traces about specific events, bound to a specific space-time context, and generally enriched by specific sensory details. Retrieving a trace from episodic memory has been described as a subjective “travel back in time” with the mind, as you have the impression of re-living the original event in first person (Tulving, 2005), albeit in a somewhat faded version. Semantic memory includes all our knowledge about general facts, with its memory traces being disconnected from a subjective space-time context. Examples of episodic memory: remembering the celebration of our birthday 8 years ago, doing grocery shopping yesterday afternoon, recalling a brief conversation we had 10 minutes ago with a colleague.
Examples of semantic memory: we can tell who wrote “War and Peace”, where Baghdad is, and how many wheels a car has. It has been suggested that semantic memory accumulates from episodic memory once the specific contextual details fade and are no longer accessible (Baddeley, 1988). For example, I know that the star closest to our Solar System - other than the Sun - is called Proxima Centauri and is about 4 light years away from us. I probably learnt this in my adolescence, but I cannot remember the precise moment, nor how that information impressed me. So, the episodic memory has gone, but the semantic trace - the core information - is still with me.

SOURCE MONITORING

An important process associated with episodic memory is called source monitoring (Johnson et al., 1993). It indicates the ability to remember the source from which a piece of information came. “Did I watch this film at the movie theater or on Netflix?”, “Did I read this fact in a paper in Science, or did I hear it from my friend Johnny Hoaxer?”, “Did I actually notice that the assailant was a foreigner, or was it the policeman who misled me into thinking so?” These are a few questions that involve source monitoring. Granted, remembering where you saw a film is pretty irrelevant, but a source monitoring error in the other two questions may have increasingly serious consequences. Unfortunately, source monitoring errors are frequent. They reveal that a memory trace is about to be “transferred” from episodic to semantic memory. In fact, there is no physical transfer whatsoever, but out of the different pieces of information, only the “what” (that is, the content) remains, because it is more durable, while the “where”, “when”, “with whom”, and “who told what” (that is, the details representing the context) are fading. A special type of source monitoring is reality monitoring, the ability to distinguish internally generated information from information presented in the external world.
For example, distinguishing between what you imagined/inferred and what you really saw. Source and reality monitoring errors are involved in the generation of memory distortions and false memories.

EPISODIC FUTURE THINKING

“Episodic future thinking refers to the capacity to imagine or simulate experiences that might occur in one’s personal future” (Schacter et al., 2017). For example, when you imagine what you might do next summer, what you will say to your partner on the next date, or what path you will take tonight to reach a new place in an area you know little about, you are using episodic future thinking. Isn’t memory only about the past? In fact, being able to mentally simulate a future event heavily involves your memory. It requires that you retrieve and recombine already encoded pieces of information to create new hypothetical scenarios. Unsurprisingly, remembering the past and imagining the future activate similar brain areas, and the conditions that specifically impair episodic memory also specifically impair episodic future thinking (e.g., depression, amnesia, advanced aging). Episodic future thinking reveals an important feature of memory: that it is reconstructive, not reproductive; that it is malleable rather than fixed (more on this in the next unit). Despite its name, episodic future thinking does not involve episodic memory alone. Semantic memory is equally important: it provides the scaffolding, the general “schema” of the imagined event, which is then populated with more specific episodic and even perceptual details.

AUTOBIOGRAPHICAL MEMORY

“Autobiographical memory is a complex blend of memories of single, recurring, and extended events integrated into a coherent story of self that is created and evaluated through sociocultural practices” (Fivush & Graci, 2017).
Autobiographical memory is a subset of episodic and semantic memory that is relevant for the self; it is integrated into a life narrative, and it defines who you were, who you are, and who you might be in the future. Not all episodic memory is autobiographical. If you remember a specific event that occurred four years ago at school, it probably fits into a larger narrative of your life, so it is autobiographical; but if you just remember going jogging last Saturday, that is episodic but not autobiographical. Similarly, not all autobiographical memory is episodic. Relevant information about yourself, the story of your family, as well as general knowledge about your own values and personality are part of your autobiographical memory, but semantic in nature. Rather than a separate “store”, autobiographical memory is a way in which your memory is organized, and it has important functions. According to Bluck et al. (2005), its main functions are: 1. creating a sense of the identity and continuity of the self over time; 2. directing and inspiring your current behaviors and decisions; 3. socializing, by sharing (parts of) your life narrative with others.

PROSPECTIVE MEMORY

This is the second time we talk about the “future” in a lesson on memory! Prospective memory refers to remembering to perform a planned, intended action in the future. It is clearly “long-term”, because you might need to remember to perform an action tonight, or in a month, and it is “explicit”, because it is clearly verbal and conscious. I have not depicted it in the overview figure of memory systems, however, because it is not a separate “store” like episodic or semantic memory. Prospective memory tasks are either “event-based” or “time-based”. Event-based tasks require performing an action in response to an event (e.g., remembering to give Paul his much-needed badge the next time I see him). Time-based tasks require performing an action at a specific time (for example, taking a pill at 4 pm).
The latter are comparatively more difficult, because they require continuous monitoring of time and not just remembering how to “react” to an external event. Prospective memory tasks are pervasive in daily life, and their stakes can range from minor (I love making pizza, so I must remember to make the dough a few hours before dinner!) to major (taking life-saving drugs with strict discipline). Older adults may often have to carry out prospective memory tasks with regard to taking medicines. Luckily, older adults are not worse than younger adults (they may even be better!) at real, everyday prospective memory tasks (Ihle et al., 2012), a fact that has not yet been thoroughly clarified and may involve a combination of motivational and cognitive factors.

HOW TO STUDY BETTER

Having a good memory does not mean having everything in your LTM system. It is being able to access the information you need when you need it… for example, during an exam! So, a crucial point of memory is retrieval. And what can you do to improve later retrieval, for example when studying? The simplest, most spontaneous, but poorest strategy is rehearsal: you just read the same chapter again and again. Rehearsing the information in WM several times facilitates its transfer to LTM, but it is ineffective in aiding later retrieval. Elaborative rehearsal is better (Craik & Watkins, 1973): you rehearse but also elaborate the meaning and relate it to knowledge already in your memory. Linking new memory traces with well-established ones creates connections, which facilitates later retrieval. An encoding that focuses on meaning and semantic connections largely facilitates later retrieval compared to mere rehearsal, with equal time spent encoding.
Another important strategy is retrieval practice (Roediger & Butler, 2011), or the “testing effect”: the phenomenon by which making an effort to retrieve information (without having it in front of you) largely facilitates any subsequent retrieval. So, rather than studying the same material yet another time, you had better stop, close the book, and try to recall. This requires an attentional, and perhaps “motivational”, effort. It could unpleasantly reveal your unpreparedness. But it is arguably the most powerful of all strategies. Likely, this is because it creates additional retrieval routes that help you access the memory traces next time.

WHEN DO I STUDY BETTER

OK, I must elaborate the meaning in depth and exploit the “testing effect”! But when do I have to study? Well, whenever you prefer. But keep the spacing effect in mind! The spacing effect is the phenomenon by which spacing out your study sessions boosts your later retrieval (Carpenter et al., 2012). So, you had better divide your study into several short sessions instead of a few long ones, and leave large enough intervals in between. How large? It is not fully clear. It seems to depend on when you want to reap the advantage. Longer intervals seem to ensure good retrieval for longer periods. In any case, intervals longer than 3 weeks are probably more than enough, and even 1 week may already be optimal. In the first phase of your study, however, you might use shorter intervals (e.g., study the same topic every other day), only to space out more later, when the memory traces are already partially consolidated (see the review by Carpenter et al., 2012). Why does the spacing effect work? There are several explanations (Smolen et al., 2016). Of course, short rather than long study sessions help keep your attention high. But there is more. Dividing your study into several well-spaced sessions may repeatedly activate a retrieval process that helps you reap the advantages of retrieval practice.
A more complex hypothesis refers to context-dependent and state-dependent memory: the phenomena by which we retrieve better when we are in the same context and/or psychophysiological state as when we first encoded the trace. According to this hypothesis, dividing your study into several sessions may help you encode the same material in a variety of different contexts and mental states, thus facilitating retrieval in any possible condition.

Forgetting and misremembering | MEMORY ERRORS ARE “ADAPTIVE”

This unit presents two broad categories of memory errors: forgetting, that is, failing to access a memory trace when it is needed, and misremembering, that is, remembering something in a distorted way, or even remembering something that never was. As with other psychological phenomena, errors and biases in memory must not be interpreted as mere “failures”, but as byproducts of otherwise adaptive properties (Schacter, 2001). Of course, we do not want to forget a piece of relevant information that helps us pass the exam, but what would life be like if we did not forget anything? Cases of individuals with exceptional capacities of episodic memory retrieval, such as Jill Price (Price & Davis, 2008) or Shereshevsky (Luria, 1968), show the dark side of never forgetting. Shereshevsky reported being unceasingly overwhelmed by mental associations, images, and details produced by his memory. Jill Price talked about the tyranny of never being able to let go of unpleasant experiences. Sure, we do not want to remember things incorrectly. An eyewitness remembering details in a distorted way may have tragic consequences in a trial. However, memory distortions and even false memories are a byproduct of our memory being malleable.
As mentioned in a previous slide on episodic future thinking, our memory is reconstructive, and this allows us not to remember the past in a rigid and fixed way, but rather to recombine its elements to imagine hypothetical scenarios, such as future ones.

Forgetting and Amnesia / Eyewitness Testimony and Memory Biases

WHAT IS FORGETTING

As pointed out by McDermott and Roediger (2021) in another online chapter I invite you to read, it is important to distinguish between the availability and the accessibility of a memory trace. Availability indicates that a memory trace is stored somewhere in your LTM system. Accessibility indicates that you are able to retrieve that memory trace when you need it. We do not precisely know what and how much information is “available”, but we can see what is “accessible” at a given moment. Forgetting is not being able to access (i.e., retrieve) the information, not necessarily the information not being available in your LTM. Conceptually, we will never know for sure whether a memory trace has definitively disappeared from our LTM. It might be impossible to retrieve now and apparently lost, but perhaps with the appropriate cues, in ideal conditions, maybe reproducing the same context and mental state as at the moment of the original encoding, retrieval might still occur. The “tip of the tongue” (TOT) phenomenon clarifies the distinction between a piece of information being available vs accessible. In TOT, you know that you know something, that it is somewhere in your LTM, but you can access it only partially (e.g., you may remember only the first letter of the needed word, or whether it is short or long). Unlike in TOT, when you “forget” you might even be totally unaware that you still possess the information; the only thing we can actually know for sure is that you cannot access it now.

CAUSES OF FORGETTING

Dudukovic and Kuhl (2021) propose five major causes of forgetting: 1. Lack of encoding: it may not seem like true “forgetting”.
But often we cannot access information simply because we never really encoded it in the first place. Lack of attention is the culprit, as when we are told someone's name but lose it immediately afterward. 2. Decay: the brain, its synapses, and our memories change over time, so a trace may eventually disappear. Such disappearance is difficult to prove, but it may happen. (The simple passage of time per se, however, and even the lack of any occasional retrieval, do not really guarantee the decay of a memory trace.) 3. Lack of retrieval cues: this is frequent. We lack enough cues to initiate a retrieval process. For example, we cannot recall the name of the person we collaborated with on a project a month ago, but we can remember the name when we see a photograph of their face. So, the face is an additional “retrieval cue” that “unblocks” access to the associated name. 4. Interference: retrieving one memory trace blocks the retrieval of another. It is discussed in the next slide. 5. Deliberate attempts: we may try to systematically keep some unpleasant or embarrassing memories out of our mind. Eventually, this repeated inhibition may lead to nearly permanent inaccessibility, and perhaps to the systematic activation of alternative, diversionary thoughts.

INTERFERENCE

Interference is a major cause of forgetting. When several “similar” memory traces accumulate over time, retrieving a specific one may become impossible, because the others (or parts of them) come to mind instead, inhibiting access to the desired trace (Levy & Anderson, 2002). For example, you may still recall having dinner at a new restaurant in your city five years ago, even if that experience was nothing special, with no emotional involvement and nothing salient, and even if you have never retrieved it in all these years. You can still access it just because it is a unique trace, with no similar memories competing for access with it.
Conversely, you may be unable to remember the specific occurrence of having dinner with your parents only two weeks ago, just because it is a recurrent event, so several other traces, both more recent and older, tend to come to mind promptly, inhibiting retrieval of the target trace. Interference can be divided into proactive and retroactive: 1. Proactive interference occurs when an older trace inhibits the retrieval of a newer one. For example, your old password comes to mind instead of the new one and “blocks” it. 2. Retroactive interference occurs when a newer trace inhibits the retrieval of an older one. For example, the memory of your last birthday celebration comes to mind and inhibits retrieval of the trace of your birthday three years ago.

MEMORY IS RECONSTRUCTIVE, NOT PHOTOGRAPHIC

RECONSTRUCTIVE → Despite a subjective impression of “seeing again”, as in a videotape, when we think back to our past experiences, our memory is not photographic at all. Remembering is NOT like watching a videotape. It is more like putting the actors and scenography back on the stage (of our mind) and making them play again. Hopefully, the theatrical script has remained the same. But it may also have changed: for example, because in the meantime we have received new information about the original event, or we have made new inferences ourselves after thinking back. In a nutshell, this is how memory distortions and even false memories arise. More on this in the next few slides.

NOT PHOTOGRAPHIC → Think back to a common small-denomination banknote. In my Italian classes, I suggest thinking of a 10-euro bill. First, I ask students to consider how “vividly” they can imagine it. Generally, the subjective impression of vividness is high. Second, I ask students to draw or describe the details. Virtually all fail completely. Generally, the only correct and precise detail that is reported is the color. Color is what helps us use the banknote: it informs us about its denomination.
But all other details are virtually useless, so they are never truly encoded, and when we believe we can vividly imagine a banknote, it is only an illusion. Try the same with other familiar objects, such as the façade of your own home (while you are not looking at it) or a landmark in your town! HOW SEMANTIC SCHEMAS AFFECT OUR MEMORY Semantic memory widely affects our episodic memory. That is, our memory of a specific event is guided, and sometimes distorted, by our previous knowledge about that class of events: by what we know typically happens in that kind of event, by our expectations. Such previous knowledge is called a “schema”. There are countless examples of how schemas affect our memory of specific episodes. For example, Tuckey and Brewer (2003) showed participants a video of a bank robbery, and later “interrogated” them as eyewitnesses. Schema-consistent details (e.g., the robber holding a gun and explicitly demanding the money) were more likely to be remembered over time than schema-inconsistent ones (e.g., apologizing after taking the money). Worryingly, a similar effect concerns false memories as well! Bower et al. (1979) and Hannigan and Reinitz (2001) showed that participants are likely to falsely remember having seen parts of an event simply because those parts are consistent with the general schema of the event. For example, if you are shown a story about four people dining at a restaurant, you are highly likely to remember having seen the moment when they ordered food, even though that part of the story was never actually shown to you. Hannigan and Reinitz (2001) went on to show that you are especially likely to falsely remember having seen the cause of a specific episode when only its consequence was actually shown. 
HOW CULTURAL SCHEMAS AFFECT MEMORY Previous knowledge about events, as well as the appropriate “narrative forms” to be used when telling others about an event we have witnessed, are culturally determined. In his famous pioneering research, Bartlett (1932) suggested not only that memory is distorted by schemas, but also that these schemas are culturally determined. He had British participants read the transcription of a tale from Native American folklore, and then recall it in a series of subsequent sessions. Bartlett found that the story was progressively simplified, reshaped, and assimilated to resemble the characteristics, details, and narrative schemas familiar to the Western cultural background. In fact, the original tale would probably sound bizarre to any Western reader, with a lack of plausible logical steps and very unfamiliar content: a war among ghosts! Bartlett noted that not only the form but even the content was distorted by his readers, who in some cases ended up reporting the tale as a more "conventional" story of a battle among Native American tribes. As noted early on by Bartlett, and confirmed by decades of subsequent research, the transformations of the original memory become more and more evident with the passage of time, and especially with each subsequent repetition. Every time we recall something, we reshape the memory trace a bit! THE DEESE-ROEDIGER-MCDERMOTT PARADIGM The so-called DRM paradigm (an acronym of the names of the authors who originally developed it; Gallo, 2010)* is the most widely used technique to elicit “false memory” in laboratory settings. It consists of presenting a series of words, all related to each other and to an underlying common theme. In a subsequent memory test, participants are tested on a specific word, called the “critical lure”, which represents the underlying common theme of the list but was never actually presented. 
Research shows that the critical lure is recalled or recognized virtually as often as the presented words. For example, if you read “rest”, “tired”, “dream”, “snooze”, etc., you are highly likely to remember having also read “sleep”… even though the latter word was never presented. This suggests that, even when we receive the explicit instruction to focus on and remember specific items, we tend to grasp and be affected by the underlying meaning (which refers to our previous knowledge!). Focusing on the “meaning” generally helps us in real-life tasks… but sometimes it tricks us into false memories! MEMORY DISTORTIONS BY MISINFORMATION One of the most striking effects in memory distortion is how susceptible we are to incorporating external information, and even just subtle cues, into our memory traces. The seminal experiments in this field (e.g., Loftus & Palmer, 1974)* demonstrated that even the specific term used to ask a question affects both the immediate answer of an eyewitness and their memory in the long term. For example, after seeing footage of a car accident, participants were asked either “at what speed were the cars traveling... when they collided?” or “...when they smashed into each other?”. Not only were the estimates of speed higher, on average, in the latter case than in the former, but participants in the latter condition were also more likely to remember having seen broken glass in the video when re-interviewed after a week! There are countless other examples of how the way interrogators ask questions affects eyewitnesses’ memory for details. In general, the extraneous elements are introduced into the eyewitness’s memory trace as side details in questions that apparently concern other aspects. For example, asking “at what speed was the car traveling when it passed the yield sign?” enormously increases the risk of falsely remembering having seen a yield sign (when in fact a stop sign was shown). 
For a more in-depth presentation of these results, see this online chapter on eyewitness testimony. FALSE MEMORY BY MISINFORMATION In the 90s, research on the effects of misinformation on false memory expanded broadly. This was motivated by a widespread scientific controversy that emerged at the time, known as the “memory wars” (Loftus, 2018). At the time, several memory researchers began to suspect that many allegedly “repressed” traumatic childhood memories, which frequently re-emerged in adults after many decades of total forgetting, often under hypnosis and psychotherapy, were most probably false memories, induced by one’s own or others’ suggestions. To demonstrate that misinformation can lead to genuinely believed false memories about one’s remote past, even in perfectly healthy adults, a group of researchers led by Elizabeth Loftus set out to induce false memories experimentally. The most famous technique is known as the lost-in-the-mall paradigm: researchers try to mislead participants into falsely remembering that they once got lost in a mall as children. Participants are prone to believe this because the researchers work with the complicity of the participants’ parents. Eventually, about 30-35% of participants (all healthy young adults) come not only to believe but even to remember and report in detail the (false) event, which in fact they had only been pushed to imagine! Countless other experiments in recent years have shown how easy it is to induce this type of genuinely believed false memory using misinformation and the participants’ own imagination. Again, find more details in this online chapter on eyewitness testimony. (How reliable is your memory?) Lesson 8 LANGUAGE Language reflects the enormous complexity of human cognitive capacity. Most of our mental processes (perception, attention, memory, reasoning) are devoted to acquiring information from the external world and processing it. 
Language serves not only to share information with others, but also to support all thought processes, including the creation of concepts, reasoning, logical inference, and decision making. Unlike the simpler communication systems of other animals, human language allows us to create an unlimited variety of new meanings by combining its basic components. This is because human language is symbolic and generative: we use arbitrary elements that can be combined through the rules of grammar and syntax to generate a virtually infinite range of complex meanings. Our brain is born prewired to acquire and use language. All humans (except those with some serious disabilities) learn at least one language without being taught explicitly. There are no human societies without a language. If there is not already one, humans can invent it. An incredible example is the origin of Nicaraguan Sign Language. It was “invented” by deaf children in Nicaragua when they were brought together in schools for the first time. They began developing their own language using hands and gestures to communicate, even though they were not encouraged (or were even discouraged) to do so by adults (e.g., Senghas & Coppola, 2001). (Categories and Concepts) IS LANGUAGE POSSIBLE IN NON-HUMAN ANIMALS The quick answer to the title question is: most probably not. Of course, non-human animals communicate in several ways to convey information or regulate their social interactions. For example, they use scents, visual displays, vocalizations, and song. Think of the “waggle dance” that honeybees use to direct their fellow bees toward food sources, or the singing of many songbirds. Many mammals use specific sounds to communicate danger and facilitate social interactions. Some basic displays of emotion (e.g., teeth grinding, postures) can even be understood across different species. This is all about communication, however, not language. 
Language requires a symbolic system whose elements can be recombined using grammar and syntax to create an arbitrarily large variety of (new) meanings. Human brains are prewired to acquire and use language in this way, while the brains of other animals are not. Past attempts to teach primates to use non-spoken languages (e.g., American Sign Language [ASL]) showed that they can learn a variety of symbols to express several meanings, and they can combine them to some extent. However, they never came even close to the richness and complexity of human language. Kanzi, a bonobo considered a very proficient language learner, masters syntactic rules at a level equivalent to that of a 2-year-old human child, but not beyond. Also, unlike humans, he requires repeated exposures to learn any single new sign (while human children can learn words even after a single exposure). LANGUAGE BINDS THOUGHT AND EVEN PERCEPTION If language shapes our thoughts, it should determine their limits too. The “linguistic relativity” hypothesis argues that the structure of language constrains our higher cognition and worldview. Therefore, different cultures, by speaking different languages, should have different underlying understandings of reality. By using a specific set of morphemes, a language defines a specific set of concepts. A frequently cited example is the fact that English (like most European languages) indicates “snow” using a single word, whereas Inuit people possess several words to indicate different types of snow. Therefore, different languages impose different categories to simplify the (otherwise infinite) complexity of the world. Whether this really affects general cognition is still unclear, however. For example, the Dani people of New Guinea have only two words for color (roughly equivalent to dark vs bright), but they were still able to categorize a variety of colors using newly provided verbal categories when asked to do so (Rosch, 1973). Roberson et al. 
(2000), however, found that their recognition and perceptual judgements of colors systematically differed from those of speakers of languages with more color words, suggesting that their perception may indeed be affected by their linguistic categories. Although the perception of colors may seem of limited importance, think of the implications for more complex (and harder to study) concepts such as different types of emotions or social relationships. THE BASICS OF LANGUAGE Language is a multilayered subject of study. Here are its basic key components. 1. Phonemes: the smallest units of speech, i.e., perceptually distinct units of sound that allow you to distinguish one word from another; phonemes differ across languages (e.g., the sounds for "R" and "L" are mapped onto different phonemes in English, but not in Japanese, where they are practically perceived as the same sound); phonology is the branch of linguistics that studies phonemes; 2. Morphemes: the smallest units of language that carry meaning; they may coincide with words (e.g., “place”) or be parts of words that still carry meaning themselves (e.g., the prefix “re-” to imply repetition or the suffix “-ed” to indicate past tense); morphology is the branch of linguistics that studies morphemes and the rules that define the formation of words; 3. Syntax: the set of rules defining how words must be arranged in sentences and phrases. Grammar is the overall set of constraints of a language, encompassing phonological, morphological, and syntactic rules. Finally, important subfields of linguistics are semantics [the study of meanings at different levels of the language] and pragmatics [the study of the actual use of language in communication, including the role of contextual aspects and the nonliteral parts of communication]. STAGES AND ACQUISITION OF LANGUAGE 1. 
Newborns can still distinguish all phonemes (e.g., "R" and "L" are more easily distinguished by Japanese infants than by Japanese adults), but at about 1 year of age infants lose this ability due to repeated exposure to the phonetic categories (phonemes) of their surroundings; 2. At about 7 months, infants start babbling, a spontaneous vocalization that progressively resembles the sounds of their language and acquires a conversational tone (even though it is still meaningless); 3. At about 1 year, children produce their first words (thus showing that they possess morphemes), generally single nouns; 4. At about 2 years, children possess hundreds of words, and they present two important phenomena: overextension of concepts [i.e., using a specific word for a larger category, e.g., “dog” for any animal, due to not yet possessing more specific terms] and fast mapping [i.e., the acquisition of a new word even after just one exposure, which leads to a rapid “explosion” of the available vocabulary]. Utterances may still contain many phonetic errors (e.g., confusing /b/ and /d/). Also at about 2 years, children start acquiring syntax and produce their first utterances composed of two or more words. 5. At about 3 years, most children can conjugate verbs into their future and past tenses. 6. At about 5 years, most language acquisition is accomplished, although some pronunciation problems may persist, and the vocabulary is still relatively limited compared to that of an adult (i.e., a few thousand vs. hundreds of thousands of words). Note 1. Comprehension [receptive language] precedes production [expressive language] at all stages of language acquisition. Note 2. The ages indicated above are only indicative and may vary across individuals. Substantial delays in the acquisition of receptive and/or expressive language, however, are abnormal and considered to reflect neurodevelopmental disorders. 
If a child has a normal level of global intelligence but still shows a substantial delay in the acquisition of language, they are said to have a "developmental language disorder", a condition that concerns about 1 in 10-15 children, may persist later in life, and has high comorbidity with later conditions such as specific learning disorders (e.g., dyslexia). LANGUAGE IN THE BRAIN For more than a century, patients with accidental brain lesions provided the largest amount of evidence about how language is mapped in the brain. More recently, neuroimaging techniques (such as fMRI) have been introduced to examine the brain in vivo. In nearly all right-handed people and most (over two-thirds of) left-handed people, language is mostly controlled by the left hemisphere of the brain (Marzi, 2007). The temporal lobe is specifically important for receptive language (i.e., understanding language; note that it is close to the auditory areas), while the frontal lobe is more connected with expressive language (i.e., producing language; note that the frontal lobe plays a major role in higher cognitive functions and integrates information). Broca’s area and Wernicke’s area are especially famous because they have been linked to language since the 19th century. Broca and Wernicke independently reported patients who had selectively lost the ability to produce or to comprehend language, respectively, with the other ability being spared. After the patients’ deaths, their brains were examined, and lesions were found in the two corresponding areas. Categorisation and concepts | WHY DO WE CATEGORISE THINGS? Language is based on concepts, which designate sets of different things grouped together because they have something in common. Why do we categorize things? Well, there are many reasons! In fact, categorization is a core feature of intelligent behavior. Categorizing allows us to save cognitive resources by mentally simplifying the world around us. Two major advantages of doing so are: 1. 
Communication. There may be no two things exactly alike in the universe! Grouping things into categories based on similarities or common features allows us to communicate with others using language. For example, we can distinguish about 7 million different colors. What would happen if we did not categorize them into fewer nuances? It would be impossible to speak about colors! 2. Making the world predictable. Even if you have never seen a particular object before, its belonging to a known category allows you to predict some of its characteristics. For example, the strange fruits in the figure are apples. Now, what do you expect of them? HOW DO WE DO IT According to the “classical theory”, rooted in standard logic, categories should have well-defined boundaries and be associated with a series of attributes that are individually necessary and jointly sufficient for an item to belong to a category. For example, a bachelor must be 1] human, 2] male, 3] adult, 4] unmarried, but 5] not unmarried for religious or cultural reasons. Did I just say it all? Each of these attributes is individually necessary, and combined they are sufficient (i.e., together they qualify one as a "bachelor" with certainty). Similarly, a grandmother must be 1] female and 2] the parent of someone who is a parent. Furthermore, concepts should be organized hierarchically. For example, Animal > Fish > Salmon. Each sub-category adds new, more specific attributes to define the items in it, and it inherits all attributes from the higher-order categories. The theory sketched so far may be simple and elegant, but... unfortunately it cannot be applied to all concepts! Moreover, it does NOT reflect how the human mind actually works and categorizes things. Here are the main reasons why: 1. Some attributes are neither necessary nor sufficient, and yet they may be extremely relevant for a category (e.g., "flying" for birds [think of penguins]); 2. 
There are indeed categories and sub-categories, but one particular level is more important than the others; 3. Some items represent a category much “better” than others, and the actual boundaries between categories may be fuzzy. ARE ATTRIBUTES NECESSARY? A demonstration that our mental categories are not based on necessary and sufficient attributes is that it may be surprisingly difficult, and may even feel “unnatural”, to try to define the attributes of a category. For example, is having four legs an attribute of dogs? Yes. But as noted by Murphy (2021) in his chapter, the dog in the figure was born with only three legs, yet no one would doubt it is a dog! Another example: what are the attributes of a bird? Having two legs and reproducing by laying eggs? Perhaps yes, but these two do not reflect how we naturally think of birds. "Flying" is likely a much more typical feature of birds, even though it is neither necessary (e.g., penguins do not fly) nor sufficient (e.g., mosquitoes fly as well). Evidence that some attributes are more relevant than others in defining a category comes from response-time data. People are faster at responding “Yes” (even incorrectly!) to sentences like “Birds are animals that fly” than to sentences like “Birds are two-legged” (e.g., Conrad, 1972). Response times may reveal how the underlying, implicit processes of our mind truly work! BASIC LEVEL IN THE HIERARCHY OF CONCEPTS “Pet that poodle”, “CAN I PET THAT DAWGGGGGGG”, “pet that mammal”, or “pet that vertebrate” (and so on)? You probably just pet the dog! Even though categories are organized hierarchically, there is one single most important level, which defines how we spontaneously refer to an item. That level is called “basic”. The basic level is generally associated with one or more of the following characteristics: 1. quick response (you are faster to answer “yes” to “Is this a dog?” than to “Is this a mammal?”); 2. it is indicated by a relatively short word; 3. 
its labels are learnt earlier by children; 4. it may be associated with a specific motor response (e.g., petting a "dog" is a relatively specific gesture, while petting a "mammal" is not; putting on "shoes" involves a specific motor pattern, while putting on "garments" does not). BASIC LEVELS MAY VARY OVER TIME AND ACROSS CULTURES As reported by Murphy (2021), cultural aspects may be decisive in determining which level is the basic one. People in less industrialized societies are more likely than North Americans to have more specific “basic levels” for natural categories. For example, they may use labels like “elm, trout, finch” rather than just “tree, fish, bird” (Berlin, 1992; Murphy, 2021). This is due to a different familiarity with the specificities of the items within each category and sub-category, which leads to a different propensity to “see” the subtle differences rather than the similarities across items at each level. Therefore, it is likely that even North Americans, when they lived in a less industrialized society (i.e., a couple of centuries ago), had their “basic levels” for natural things at a more specific level. In general, the basic level may be determined by one’s expertise with the items of a given category. For example, what a blacksmith spontaneously calls a monkey wrench or a hacksaw is, for me, just a wrench and a saw, or maybe just tools (importantly, this happens even though I know the more specific names; I am still unlikely to use them to refer to the items, unless necessary). FUZZY CATEGORIES AND TYPICALITY The clearest piece of evidence that our mental categories are not so well-defined, nor based on clear-cut attributes, is that not all items are equally representative of their category. For example, robins, eagles and pigeons are probably much more typical of the “bird” category than ostriches, chickens and penguins. For some (but not all!) 
natural categories we may still possess enough scientific knowledge to decide with certainty how to classify an item. For example, penguins must be assigned to birds, while dolphins must NOT be assigned to fish, even though both possess “ambiguous” characteristics. For many other categories (more frequently, artificial ones), boundaries are totally fuzzy. For example, chairs and bookcases are certainly typical pieces of furniture, while rugs are less typical, and pictures or vases sit at an uncertain boundary of the “furniture” category. They may also belong to other categories, such as art objects and gardening objects, respectively. What makes an item “typical” of its category? That is unclear. It could be the frequency with which we encounter it. This may be true, but only in part. For example, I see both robins and chickens quite often, but the former is still a much more typical “bird” than the latter. Also, I see eagles quite rarely (and almost never in real life), yet eagles are undoubtedly very much “birds”! WHAT MAKES AN ITEM TYPICAL OR A PROTOTYPE As reported by Murphy (2021), the most convincing explanation of what makes an item “typical” was provided by Rosch and Mervis’s (1975) family resemblance theory. The theory states that “typicality” depends on an item: 1. possessing many features that are shared with most other items in the same category, and 2. not possessing many features that are more frequent in other categories. For example, “flying” is neither necessary nor sufficient for being a bird, but it is still crucial for defining typicality within the bird category, because 1] most birds fly, and 2] not many non-bird animals fly. On the contrary, "laying eggs" is necessary for being a bird, but it does not define typicality, because 1] yes, all birds lay eggs, 2] but a lot of other animals also lay eggs. 
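The two criteria above can be made concrete with a small numerical sketch. The feature sets and the scoring function below are illustrative toys of mine, not Rosch and Mervis's empirical materials or their actual measure:

```python
# Toy sketch of family-resemblance typicality (after Rosch & Mervis, 1975).
# The feature sets are hypothetical illustrations, not empirical norms.
BIRDS = {
    "robin":   {"flies", "sings", "small", "lays_eggs", "two_legs"},
    "eagle":   {"flies", "large", "lays_eggs", "two_legs"},
    "penguin": {"swims", "large", "lays_eggs", "two_legs"},
}
# Features that are also common outside the "bird" category (criterion 2).
SHARED_WITH_OTHER_CATEGORIES = {"lays_eggs", "swims"}

def typicality(item: str) -> int:
    """Count features shared with other category members (criterion 1),
    then penalize features that are frequent in other categories (criterion 2)."""
    features = BIRDS[item]
    within = sum(len(features & BIRDS[other]) for other in BIRDS if other != item)
    outside = len(features & SHARED_WITH_OTHER_CATEGORIES)
    return within - outside

scores = {name: typicality(name) for name in BIRDS}
print(scores)  # the penguin ends up the least typical of the three
```

Even with this crude scoring, the penguin (which swims instead of flying) scores lower than the robin or the eagle, mirroring the typicality gradients found empirically.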
The most typical item in a category, which ideally has all the features shared by most other category members and as few as possible of those that are frequent in other categories, is called the “prototype”. Similarity to the prototype reflects how typical each item is within its category. Murphy (2021) reports many ways in which “typicality” affects cognition, including: 1. Highly typical items are judged as members of a category more often and more consistently. This is especially evident in categories with unclear boundaries; 2. We are faster at categorizing typical than atypical items; 3. Typical members are learned before atypical ones. This can be observed, for example, in children; 4. Typical items make more effective examples when you must learn a new category. GROUNDED COGNITION AND CONCEPTS In fact, none of the standard theories of cognition can successfully explain all phenomena concerning mental concepts and categories. This may be partly because concepts are not fixed abstract representations within our minds. According to Barsalou’s theory of grounded cognition, concepts, like other cognitive processes, are not fixed or totally ingrained within our mind; they depend heavily on situations, actions, goals, desires, and any contingent aspect that may emerge as salient in a specific context. For example, imagine spending a winter in Norway. How typical would each item shown in the figure be of the “clothing” category? And how typical would they be if you were playing football in summer? Similarly, how typical would food or beverage items be depending on the season, or on your hunger, thirst, or just the social context (e.g., a party vs an elegant restaurant) you are in? Reasoning and decision making | BOUNDED RATIONALITY Reasoning and decision making are probably the highest expressions of the human mind and rationality. Inductive and deductive reasoning underlie scientific thinking. 
Decision making, when supported by higher cognitive processes and not by contingent appetites alone, may be viewed as the basis of free will. Importantly, however, psychological research has consistently shown that our rationality is “bounded”. Not only is our rationality limited because we do not (and cannot) possess all possible information, which is to some extent inevitable, but it is also inherently bounded by the limited capacity of our cognitive resources. We cannot remember or pay attention to everything. We cannot mentally handle an unlimited amount of information while reasoning. We may seek additional information to try to make a better choice, but this may cost time, money, and effort. Understanding human thought in light of these limitations is known as the “bounded rationality” framework (Simon, 1957). More recently, Tversky and Kahneman went on to show that not only does our limited cognitive capacity bind our thought processes, but we are also prone to a series of recurrent and predictable “biases” that systematically affect our reasoning and judgment, and that may be overcome only at the cost of substantial effort, expertise, and attention. DEDUCTIVE REASONING Deductive vs Inductive are the two most cited forms of reasoning. Deductive reasoning is often described as “top-down”, while inductive reasoning is “bottom-up”. While these sound like opposites, both are correct and appropriate. They are just two different ways of extracting information from what we know, with different goals. In both cases, however, we risk incurring fallacies! Deductive reasoning is “top-down” because it starts from some general principles, which are assumed to be true, to reach a specific conclusion through logical inference. A classic example is the syllogism, which has two premises and a conclusion. What makes a syllogism valid is the fact that IF the premises are true THEN the conclusion must certainly be true. What matters in deductive reasoning is its formal validity. 
A syllogism can be valid even if the specific content of its sentences is obviously false. Below are two examples of syllogisms. Both are formally valid, because their conclusions must be true IF the premises are true (even though in one case the major premise and the conclusion are false)! WHY IS DEDUCTIVE REASONING SO DIFFICULT Our human mind is not a logic machine. Of course, we are capable of carrying out valid deductive reasoning, and of using the rules of standard logic correctly, but this may require substantial expertise and attention. It is not the “natural way” in which we work, and we are always at risk of falling back on biases and heuristics (i.e., automatic judgements based on our previous knowledge). A demonstration is the “Wason selection task” (Wason, 1966) and its modified “realistic” version below. Carefully read the figure above. In the original version, about 90% of people correctly turn “A” (which is important)... but then incorrectly turn “4”. In fact, the latter is irrelevant, because even if a consonant appeared on its other side, this would NOT disconfirm the statement, yet it deceptively appears to be an important confirmation of the rule. The correct choice is to turn "7", because if a vowel appeared on its other side, the statement would effectively be disconfirmed. This shows how difficult it is to apply deductive reasoning to abstract material. Interestingly, most people perform correctly in the (formally identical!) realistic version on the right side of the figure. But beware! The specific "realistic" content helps you in this particular case... but it may mislead you in another context. INDUCTIVE REASONING Inductive reasoning makes “bottom-up” inferences. It starts from a series of specific observations to reach more general conclusions. 
A classic example: (Premise) “I have seen several swans so far, and all were white” --> (Conclusion[s]) “Most swans are white” / or even “All swans are white”. The latter, stronger conclusion (i.e., "all swans are white") is objectively false: the Australian black swan (Cygnus atratus) is black! In fact, even the former, milder conclusion (i.e., "most swans are white") could be false, because it is based only on my own empirical experience, which may not be representative of a universal reality. That is not a problem, however. In inductive reasoning, there is nothing wrong with a conclusion being false even if the premises are true. Inductive reasoning is less inferentially strong than deductive reasoning. It deals with uncertainty, aiming to reach conclusions that are most probably true if the premises are true. Inductive reasoning may appear to be a “weak” type of reasoning. It is not! Scientific thinking is largely based on induction. Moving from specific observations to a more general model (and then continually challenging it) is exactly what science does/should do. Inductive reasoning deals with uncertainty... but even uncertainty has its own rigorous rules. When you draw an inductive conclusion from empirical observations, you can remain uncertain, but you should make every possible effort to ensure that your observations were not biased, that you are not deliberately ignoring other sources of information, that you are not overstating or understating evidence, that your conclusions are consistent with statistical principles, and so on. BIASES IN PROBABILISTIC REASONING Biases in dealing with probability are a major threat to the validity of inductive reasoning in real life. As with deduction, this does not mean that people cannot conduct valid inductive inference, but only that doing so requires attention and expertise. There are countless examples of how “spontaneous” probability judgments are systematically biased. 
The "conjunction fallacy": people may fail to notice that the conjunction of two distinct conditions is never more likely than either condition separately. A renowned example is the "Linda problem". Participants are shown a description like: "Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations." Then, participants must separately provide probability estimates for a series of sentences, which include the following: "Linda is a bank teller" and "Linda is a bank teller and is active in the feminist movement". The latter statement is necessarily less likely than the former (because the conjunction of two events can never be more likely than either event taken alone). Nonetheless, most participants tend to assign a higher probability to the second statement. Tversky and Kahneman (1981) first demonstrated this phenomenon and attributed it to a "representativeness heuristic", that is, the fact that the second option sounds more "representative" of the description of Linda.

JUDGEMENT UNDER UNCERTAINTY → HEURISTICS AND BIASES

As said at the beginning of this unit, rationality is bounded not only because our knowledge and our cognitive processing capacity are inherently limited (Simon, 1957), but also because we are "spontaneously" prone to a series of systematic biases that affect our judgments and decisions. The title of the present slide is taken from the seminal work by psychologists Tversky and Kahneman (1974), which fundamentally challenged the assumption of human rationality prevailing in modern economic theory; Kahneman was later awarded the 2002 Nobel prize in Economics for this line of research (Tversky had died in 1996).

Why do biases exist, if they mislead us? According to Kahneman and Tversky, it is because they do NOT necessarily mislead us!
In fact, heuristics and biases represent an "adaptive toolbox" that may have been evolutionarily selected because, in most cases... they worked! Heuristics are "mental shortcuts" based on automatic, effortless cognitive processes that help us make judgements and decisions, or solve problems. They are not optimal, and they are not necessarily rational, but they are fast, and they often lead to approximately good solutions when we do not have the time or energy for a deeper, effortful examination of the situation. In the examples in the next few slides, we focus on the cases where biases and heuristics mislead us. But this is only because seeing where they fail (that is, where they obviously clash with rationality) is the best way to highlight their existence and how they work.

AVAILABILITY HEURISTIC

Several heuristics are involved in our reasoning, judgement and decision making. Here I present the availability heuristic as a major example. According to the availability heuristic, the ease and immediacy with which examples of something come to mind become a major cue for assessing the probability, frequency, or importance of that something.

For example, nowadays everybody knows that the airplane is a super-safe means of transportation. But decades ago, most people wrongly believed that it was less safe than a car or boat, due to the ease and impact with which dramatic (but anecdotal) cases of airplane crashes came to mind. In another example, Tversky and Kahneman (1973) found that most people wrongly believed there are many more English words starting with "K" than having "K" as their third letter. Why? Because it is easier to think of words starting with a given letter than of words having that letter in a non-initial position. Fox (2006) found that students who had to indicate 10 things that they did not appreciate in their course of study paradoxically judged the course better than students who had to indicate only 2 things.
You might expect that focusing on bad things would lead to a worse judgement. In fact, in this case the difficulty of thinking of so many examples of negative aspects (10 are a lot!) was, perhaps unconsciously, interpreted by students as a sign that such aspects were neither numerous nor particularly relevant.

Is availability a good heuristic? In many situations, yes! It can mislead us, but if we do not have enough time or energy to formulate a fully informed judgement, relying on the ease with which we can think of previous examples is generally a good approximation!

DECISION MAKING

A core assumption in modern economic theory was that humans involved in economic decisions are fundamentally "rational" decision makers. Of course, the bounded rationality framework pointed out that one cannot always make the best possible choice, because of the limitations in our knowledge and cognitive capacity. But even so, we would still be perfectly rational if we made the best possible choice given what we know and the capacity of our cognitive processing. Research by Kahneman and Tversky, however, once again showed that economic decision making is prone to systematic biases.

A clear example of a systematic bias in economic decisions is the "framing effect": the way in which a problem is "framed" affects the likelihood of the eventual choices. Why is this a problem? After all, according to modern economic theory, even with the same available knowledge and cognitive capacity, we need not all make the same decisions to be rational. Decisions may well depend on preferences and individual differences, and this is fine. You may prefer to gain less now rather than more later because of your current financial situation. You may prefer option A over option B because you are (or you are not) risk averse. You may prefer X over Y just because of your personal preferences. None of these are examples of irrationality.
BUT if you change your choice just because of the way in which the problem is posed (framing effect), then that is a systematic bias, and it is irrational!

EXAMPLES OF THE FRAMING EFFECT

Imagine you are a doctor (or a patient) who has to choose one of these options: McNeil et al. (1982) found that 86% of participants would hypothetically choose the surgical intervention, likely focusing on the better hope of survival in the long term (even though the prospects are clearly worse in the immediate and medium term). This might be perfectly rational: it is just a matter of personal choice. But what happens if you reframe the exact same problem with a focus on mortality instead of survival? Like this: If participants were perfectly rational, 86% should again choose the surgical intervention. Instead... the figure dropped to only 56% (McNeil et al., 1982). There is still a majority opting for better long-term survival, but a much smaller one than before.

Unit 9 → emotions

WHAT ARE EMOTIONS

An emotion is a multicomponent episode that prepares humans to react. Typically, an emotional episode is composed of six components, starting with a cognitive appraisal, that is, a person's assessment of the personal meaning of the circumstances he/she is facing. The appraisal triggers a series of responses that represent the other components of the episode: the subjective experience of the emotion, i.e., the affective state it produces; the thought and action tendencies, i.e., the urge to think and act in certain ways; a fourth component concerning internal bodily reactions, specifically the responses of the autonomic nervous system; a fifth component represented by facial expression, the specific facial configurations triggered by an emotion; and a last component that includes the responses to the emotion, or the way in which individuals cope with the emotion that the situation has elicited.
It is important to underline that no single one of these elements constitutes an emotion by itself; rather, it is the particular pattern of activation across these elements that generates a specific emotion.

(Functions of Emotions / Culture and Emotion / The Experience of Emotion)

THEIR FUNCTION

It is impossible to imagine a life without emotions: they are informative about who we are, about our social behavior and about our relationships with others. To understand fully the role of emotions it is important to understand their functions. The major functions of emotions are reported here, divided into three sections: the intrapersonal, the interpersonal, and the social and cultural functions of emotions.

INTRAPERSONAL FUNCTIONS OF EMOTIONS

Intrapersonal functions of emotions refer to the role emotions play within each of us individually. Specifically, emotions:

Help us Act Quickly with Minimal Conscious Awareness → Emotions are rapid information-processing systems that help us respond to different stimuli with minimal thinking. Emotions are adaptive in the sense that they aid our survival and allow us to act immediately without much deliberation. For instance, the emotion of disgust helps us act immediately by preventing us from ingesting an item, or by expelling it.

Emotions Influence Thoughts → Emotions are also connected with thoughts and memories. For instance, when we encode memories in our brain, they are colored by the emotions felt at the moment the encoded situation was experienced. Furthermore, emotions affect our thinking processes in different ways: it is more difficult to think critically and clearly when we feel intense emotions, whereas it is easier to think when we are not overwhelmed by them.
Emotions Motivate Future Behaviors → Emotions prepare us to behave in certain ways; indeed, when a particular emotion is triggered, it activates systems such as perception, attention, inference, learning, memory, motor behavior and behavioral decision making, and deactivates others in order to prevent an overload of the system, so that a coordinated response to environmental stimuli is possible. For instance, when we experience fear, our bodies temporarily shut down unneeded digestive processes, resulting in reduced salivation; blood flows disproportionately to the lower half of the body; the visual field expands; and air is breathed in, all preparing the body to flee.

INTERPERSONAL FUNCTIONS OF EMOTIONS

Interpersonal functions of emotions refer to the meaning of emotions for our relationships with others. Considering emotional expressions in particular, they:

Facilitate Specific Behaviors in Perceivers → Since emotional expressions are universal social signals, they contain information about the expressor's psychological state, intent, and subsequent behavior. That information affects what the perceiver is likely to do. For instance, fearful faces are more likely to produce approach-related behavior, whilst angry faces are more likely to produce avoidance-related behavior.

Signal the Nature of Interpersonal Relationships → As mentioned above, emotional expressions contain much information about expressors' states. Together with this information, emotional expressions also convey information about the nature of the relationship among interactants.

Provide Incentives for Desired Social Behavior → Facial expressions of emotion are important regulators of social interaction.

SOCIAL AND CULTURAL FUNCTIONS OF EMOTIONS

Cultures also inform us about what to do with our emotions - that is, how to manage or modify them - when we experience them.
One of the ways in which this is done is through cultural display rules: norms, learned early in life, that specify how our emotional expressions should be managed and modified according to social circumstances. By affecting how individuals express their emotions, culture also influences how people experience them. In this way, our culturally moderated emotions help us engage in socially appropriate behaviors, as defined by our cultures, and thus reduce social complexity and increase social order, avoiding social chaos.

The following video shows similarities and differences in emotion expression across countries: Are there universal expressions of emotion? - Sophie Zadeh

MOTIVATION

Motivations are closely linked to emotions, serving as the driving forces that initiate and guide our behavior. Some motivations are biological, like the need for food, water, and sex. However, there are numerous personal and social motivations influencing behavior, such as the desire for social approval and acceptance, the drive to achieve, and the inclination to take or avoid risks. We pursue our motivations because they are rewarding. According to operant learning theories, motivations prompt us to engage in certain behaviors because doing so makes us feel good.

In psychology, motivations are often discussed in terms of drives (internal states activated when the body's physiological balance is disrupted) and goals (desired outcomes we strive to achieve). Motivation can be seen as a series of behaviors aimed at reducing drives and reaching goals by comparing our current state with a desired end state. Like a thermostat regulating an air conditioner, the body seeks to maintain homeostasis, balancing goals, drives, and arousal. When a drive or goal is activated, such as hunger, the body's "thermostat" triggers behaviors to reduce the drive or achieve the goal (in this case, seeking food).
As the body progresses toward the desired state, the thermostat continually checks for balance. Once the need or goal is met, the behaviors cease, but the body's thermostat remains vigilant for future needs. Beyond basic motivations like hunger, personal and social motivations can also be understood in terms of drives or goals. For example, if we skip a day of studying for an exam, we might work harder the next day to reach our goal. When dieting, we might binge if the scale shows we have met our previous goals. When we are lonely, the drive to socialize increases, prompting us to seek company. Often, our emotions and motivations operate subconsciously, guiding our behavior without our awareness.

theories of emotions | THEORIES OF EMOTIONS

As in every other science, a theory of emotions should meet some specific requirements in order to be accepted by the scientific community. First, a theory of emotion should answer some specific questions, such as:

- What is an emotion?
- What are the functions of emotions?
- How many emotions exist?
- How universal are emotions?

Second, a theory should stimulate scientific research; for instance, James's theory of emotions stimulated a lot of research on the role of the autonomic nervous system in humans and animals. Third, a theory should allow us to think of new things that should be observed and studied. Finally, a theory should be able to show connections between different disciplines and fields of study.

Different theories of emotions have been developed, and many of them fall into two macro-areas: Dimensional Theories of Emotions and Categorical Theories of Emotions.

(Dr. Ekman in New Guinea)

DIMENSIONAL THEORIES

Some authors, however, argued that the categorical descriptions of emotions left out important information, and therefore proposed new models to describe the structure of emotions.
In this alternative approach to modeling the structure of emotion, the variability that seems to be captured by the discrete emotions is held to be reducible to a smaller number of underlying dimensions. Usually these dimensional models endorse a two-dimensional map of emotional experience, where the two dimensions correspond, with a certain degree of agreement, to the degree to which a specific state is experienced as "Pleasant" or "Unpleasant", and to the degree to which it is experienced as "Activated" or "Deactivated"; however, the dimensions of which an emotional experience is composed can vary in number and type across theories.

CATEGORICAL THEORIES

Categorical theories of emotions assume that there are some basic emotions, each corresponding to specific eliciting events. These basic emotions are identifiable at all stages of phylogenesis and are highly relevant to the struggle for individual survival. The idea of basic emotions has been developed in two different ways in the literature. One is that a small group of basic emotions constitute the fundamental elements of emotional life because, when combined together, they produce other - more complex - emotional states. On the other side are theorists who assume that there is a small number of basic emotions with a biological basis, which are therefore encoded in the genes. An example of the different theories that provide a description of basic emotions is provided below:

AFFECTIVE NEUROSCIENCE

Affective neuroscience explores how the brain generates emotional responses. Emotions are psychological phenomena involving bodily changes (such as facial expressions), alterations in autonomic nervous system activity, subjective feelings, and motivations to act in particular ways. This field seeks to understand how brain structures and chemicals give rise to emotions, one of the most intriguing aspects of the mind.
Affective neuroscience relies on objective, observable measures that provide credible evidence to both scientists and laypeople about the importance of emotions, paving the way for biologically based treatments for affective disorders like depression.

The human brain and its emotional responses are complex and adaptable. In contrast, nonhuman animals have simpler nervous systems and more basic emotional responses. Techniques such as electrode implantation, lesioning, and hormone administration are more feasible in animals than in humans. Human neuroscience primarily uses noninvasive methods such as electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), and studies individuals with brain lesions caused by accidents or diseases. Consequently, animal research offers useful models for understanding human affective processes. Affective circuits in other species, especially social mammals like rats, dogs, and monkeys, operate similarly to human affective networks, though animals' brains are less complex.

In humans, emotions and their associated neural systems exhibit additional layers of complexity and flexibility. Unlike animals, humans experience a wide range of nuanced and sometimes conflicting emotions and respond to them in complex ways, influenced by conscious goals, values, and other cognitions in addition to emotional responses. However, this module emphasizes the similarities between organisms rather than the differences, often using the term "organism" to refer to any individual experiencing an emotion or showing particular neural activations, whether it be a rat, a monkey, or a human.

Across species, emotional responses are organized around survival and reproductive needs. Emotions affect perception, cognition, and behavior to help organisms survive and thrive. Networks of brain structures respond to different needs, with some overlap between emotions.
Specific emotions are not confined to a single brain structure; rather, they involve networks of activation, with multiple brain areas engaged during any emotional process. In fact, nearly the entire brain is involved in emotional reactions. Brain circuits located deep within the brain, below the cerebral cortex, primarily generate basic emotions. While past research focused on specific brain structures, future studies may uncover additional brain areas crucial to these processes.

AMYGDALA

The amygdala plays a key role within emotion circuits, and it is known to register emotional reactions. Initially, it was thought that the amygdala received all its inputs from the cortex and, hence, that those inputs always involved conscious appraisal. More recent studies with rats, however, have revealed connections between sensory channels and the amygdala that do not pass through the cortex, suggesting that the amygdala may be the biological basis of unconscious appraisals. The amygdala is capable of responding to an alarming situation before the cortex does, which suggests that sometimes we can experience an emotion before we know why. Less amygdala activation during emotional processing has been found in criminals with antisocial personality disorder than in other criminals or in non-criminal individuals.

FACIAL EXPRESSION AND EMOTION

The facial movements that sometimes accompany an emotion serve to communicate the sender's emotion. Since the publication of Charles Darwin's 1872 classic, The Expression of the Emotions in Man and Animals, psychologists have regarded the communication of emotion as one of its important functions. Recognizing facial emotional expressions therefore has a critical function in social interaction.

Communication of emotion through facial expressions → Certain facial expressions seem to have a universal meaning, regardless of the culture in which an individual is raised.
The universal expression of anger, for example, involves a flushed face, brows lowered and drawn together, flared nostrils, a clenched jaw, and bared teeth. When people from five countries (the United States, Brazil, Chile, Argentina, and Japan) viewed photographs showing facial expressions typical of happiness, anger, sadness, disgust, fear, and surprise, they had little difficulty identifying the emotion that each expression conveyed. Even members of remote groups that had had virtually no contact with Western cultures (the Fore and Dani peoples in New Guinea) were able to identify the emotions represented by the facial expressions of people from Western cultures. Even though facial musculature varies from person to person, the muscles needed to produce these universally recognized expressions appear to be basic and constant across people, suggesting that the human face has evolved to transmit emotion signals and the human brain has evolved to decode them. The intensity with which certain emotions are expressed, however, is culturally modulated. For example, the expression of disgust or rejection is based on the organism's attempt to rid itself of something unpleasant.

Effects of emotions on experimental settings | EMOTIONS AND ATTENTION

Emotional stimuli can be used not only to modify an individual's emotional state but also to capture attention. As mentioned in Lesson 4 on Attention, our attention can be voluntarily and/or automatically directed to a specific cue. Salient cues such as a flash or a loud sound automatically attract attention. Emotional stimuli do the same: if two stimuli, one neutral and one emotional, are presented simultaneously, our attention is automatically directed to the emotional one.
Past studies have reported detrimental effects of both task-irrelevant unpleasant and task-irrelevant pleasant stimuli on performance in perceptual and attentional tasks, indicating that both pleasant and unpleasant distractors capture attention and divert processing away from the main task, leading to an impairment of behavioural performance. Models of emotion-attention interactions indicate that stimulus intensity (high vs. low arousal) is the most relevant dimension. However, when both pleasant and unpleasant items are presented, valence and arousal jointly modulate attention and modify participants' performance.

To test the influence of both intensity and valence on participants' performance, a common task is the letter search task, in which participants are instructed to respond as fast as possible when they detect the letters "X" and "N". Participants respond by pressing button 1 for target letter X and button 2 for target letter N on a number pad. Two task-irrelevant images are also presented on the computer screen, in different combinations: unpleasant-pleasant, neutral-pleasant, neutral-unpleasant, unpleasant-unpleasant and pleasant-pleasant. Results showed that the interference effect of a compound pleasant-plus-unpleasant stimulus was greater than that of a neutral-emotional (pleasant or unpleasant) compound. These results suggest that the influence of joint pleasant and unpleasant task-irrelevant stimuli during perception is mainly determined by the intensity of the stimuli, independently of their valence.

EMOTIONS AND MEMORY

The emotions we experience during an event may affect how we remember that specific situation. In 1890 the famous psychologist William James stated that "an impression may be so exciting emotionally as to leave a scar upon cerebral tissue". Since James, researchers have investigated how emotions can modify our ability to encode and remember emotional events. A classical example of memories enhanced by emotions is flashbulb memories.
These are extremely vivid and long-lasting memories that individuals retain for emotionally arousing, shocking public events, such as 9/11 or the Bataclan attack in Paris. These memories feel vivid and can last for years, although the details are often inaccurate and prone to distortion.

Over the years researchers have developed several paradigms to study the association between memory and emotions. For example, participants can be exposed to pictures of different valences, i.e., pleasant (e.g., scenes of erotica, families, animals), neutral (e.g., neutral faces, scenes of household objects), and unpleasant (spiders, snakes, scared faces). After the encoding, participants perform an immediate test, in which they are exposed to half of the pictures presented at encoding (called targets) together with new pictures of similar valence (called foils). The task of the participant is to decide whether each picture is old (i.e., seen during the encoding) or new (i.e., not seen during the encoding). Then, after a long delay (minutes to hours, depending on the aim of the study), participants perform a delayed test similar to the immediate one, but with the remaining pictures from the encoding phase and other new pictures of similar valence. A typical result is that unpleasant pictures are remembered better than pleasant and neutral ones, whereas the results for pleasant pictures are less clear. A possible explanation for the memory-enhancement effect of unpleasant events is the higher arousal that these pictures elicit. As postulated by the memory-modulation hypothesis, emotionally arousing experiences strongly activate the amygdala, which promotes both better encoding and long-term consolidation of this information.

PART 2

Another interesting way to study the relationship between memory and emotions is the so-called trauma film paradigm. In this paradigm participants usually watch short (8-12-min) films depicting traumatic events (e.g.
scenes of car accidents) or, as a control condition, neutral events (e.g., driving on a highway). After the film exposure, participants are usually asked to fill in an intrusion diary. In the diary, they have to record any intrusive memories of scenes from the film that occur over the next few days. Afterward, they perform a recall test, which may consist of recalling details of the films. The trauma films induce a greater level of spontaneous intrusive recollections over the days, and the frequency of the intrusions is associated with greater memory retention.

(Effects of Emotion. Emotion Review, 1(2), 99-113. + trauma film paradigm)

EMOTIONS AND SUBJECTIVE TIME

As we have previously discussed, emotional stimuli can modulate our subjective perception of time. It has been demonstrated that negative emotional stimuli are subjectively perceived to last longer than neutral stimuli. For example, if we present an image of surgery for 400 ms, we perceive it to last longer than an image of a flower also presented for 400 ms. The effect of emotional stimuli on time processing is evident in children as well as in adults, but to a different degree. In a study testing 8- and 9-year-old children and young adults (university students), it was shown that both groups overestimated the duration of angry facial stimuli, but only children overestimated time when sad facial stimuli were used. A similar study investigated the effect of angry, happy and sad emotional stimuli on the temporal judgments of young and older adults. The study confirmed the "positivity effect" of ageing postulated by socio-emotional selectivity theory: older adults overestimated temporal intervals when these were marked by happy facial stimuli.
THE ROLE OF EMOTIONS AT SCHOOL

Educational settings are infused with intense emotional experiences that can affect students' and teachers' performance and learning. Given the complexity of emotions in educational environments, the German psychologist Reinhard Pekrun proposed that outcome-related emotions at school should be observed from two points of view: retrospective and prospective. In triggering an emotion, therefore, both the expectancies and the values attached to the object in focus come into play, and they can combine in multiple ways, as shown in the table below.

Appraisal-Based Framework of Emotional Responses: The Role of Value, Control, and Object Focus in Emotion Generation

PART 2

For instance, according to this theory, if the focus is on success and there is high expectancy due to high control over the situation, anticipatory joy can be predicted; on the other hand, if the focus is on failure, the same level of expectancy predicts anticipatory relief. In the case of low expectancies for both success and failure, we can predict hopelessness. Finally, if a partial lack of control produces moderate expectancies, we can predict hope if the focus is on success, and anxiety if the focus is on failure.

The onset of failure or success triggers outcome emotions. In this case, when control is irrelevant, joy is the result of success and sadness the result of failure. But control-related emotions are also fundamental in Pekrun's model; therefore, he also included pride as a result of success and shame as a result of failure when these are caused by oneself. If the focus of control is, instead, another person, gratitude is the emotion experienced in case of success and anger in case of failure. The novelty in this model is expressed by the fact that Pekrun
