Evolution, Thought and Cognition – Chapter 9




9 Evolution, Thought and Cognition

Key Concepts: computational theory of mind; substrate neutrality; levels of explanation; episodic and semantic memory; cognitive economy; typicality effect; indicative and deontic reasoning; the gambler’s fallacy; the hot hand fallacy; foraging theory; marginal value theorem

The ability to respond to and act upon the environment was a big step in evolution, starting simply with primitive responses to stimuli, but ultimately requiring sophisticated mechanisms of perception, monitoring and decision-making. In the twentieth century, a new form of psychology was developed – cognitive psychology – which described these control processes in terms of their underlying computations. Traditional theories of cognition have tended to emphasise proximate causes, explaining behaviour in terms of the cognitive processes that underlie it rather than in terms of ultimate causes. Evolutionary approaches to cognition attempt to explain behaviour at the ultimate level, in terms of behaviours that might have been ancestrally adaptive. In doing this they seek the adaptive significance of certain behaviours and ask what specific problems the cognitive system was designed to solve. Cognition is therefore seen as adaptive, and apparently maladaptive behaviour is either the result of differences between the current world and the environment in which the mind/brain evolved, or of necessary trade-offs in the evolution of mind. In this chapter, we discuss the nature of cognitive theorising, which focuses on explaining behaviour as a result of mental computation. We then investigate the impact of evolutionary thinking on theories of cognition in the important areas of vision, memory, reasoning and decision-making.

What Are Minds, What Are Brains and What Are They For?

Psychologists, particularly cognitive psychologists, are familiar with discussing minds, whereas neuroscientists are familiar with discussing brains. What is the difference and, if there is a difference, does it matter? To many, particularly dualists such as the philosopher Descartes, there was a difference and it certainly mattered. In fact, to Descartes it was all about matter and the lack of it. In his view brains were material objects, meaning that they were made of the same kind of stuff as the rest of the body and indeed the physical world. Brains doubtless did a lot of important work – such as moving the body and sensing the world – but what they didn’t do, in his view, was think. Thinking was done by a different kind of thing completely, a thing called the mind, and unlike the brain it was not a material object; it was not made of matter but of some mysterious ‘substance’ that Descartes referred to as res cogitans: Latin for ‘thinking stuff’. (The name he used for common or garden matter, including the brain, was res extensa, Latin for ‘extended stuff’.)

Nowadays most psychologists and philosophers have rejected the view that there are two fundamentally different kinds of substance involved in cognition (known as the dualist position because there are two distinct kinds of stuff) and accept that mind and brain are fundamentally the same kind of thing. This view suggests that everything is ultimately the result of matter and material processes and is therefore known as materialism. It is the materialist view that is espoused in this chapter. We can express the relationship between mind and brain in different ways.
Steven Pinker refers to the mind as ‘the information processing activity of the brain’; in other words, the mind is just a particular way of describing the brain’s activity.

But why do we have brains, and what are they for? When psychology students are asked this question they usually struggle for an answer. This is surprising, as many of them have been studying psychology for two years. It is a bit like physiology students having an intimate understanding of the heart: its tendons, muscles, neural pathways, valves, blood vessels and distinctive sound, without ever being told – or bothering to ask – what its function is. When we learn that the heart is a pump, it brings order to what might otherwise be a random collection of different types of meat. So what is the brain for? One answer is that it is an organ of decision-making (Gintis, 2007). Brains evolved in order to answer the question ‘what shall I do next?’, and in order to answer this question the brain needs information about what is going on in the external and internal world (‘is there anyone around?’, ‘am I hungry?’) and it needs to be able to process this information in order to arrive at a conclusion. Once we start thinking in this way, we can see that many aspects of cognition that are often treated separately – vision, memory, reasoning – all combine to enable better decisions.

Cognition and the Evolution of Thought

The power and complexity of the human mind is, well, mind blowing. It enables us to navigate three-dimensional space with an ease that would embarrass any robot (if they were capable of emotion), entertain thoughts about things that we have never before experienced, and share these thoughts with others by packaging them up using the medium of language. The benefits of having a powerful mind are obvious: culture, language, creativity and complex problem solving. But there are also less obvious costs. First, a powerful mind is the result of a powerful brain, and human brains are metabolically costly, consuming 20 per cent of the body’s energy while accounting for only 3 per cent of its mass. Second, it increases the weight of the head, making death from a broken neck or other ‘whiplash’ injuries much more likely than in other primates. Third, as a result of trying to combat the problems of a weighty head, the human skull is more delicate than it could be, making death from a fractured skull more likely. Fourth, the larger head required to accommodate the brain means that birth is difficult for both mother and baby – an alarming number of infants and mothers die from birth complications, particularly in societies without advanced medical care. This problem is compounded by the fact that the human pelvis is narrower than in our primate relatives due to the biomechanical requirements of bipedalism (see Chapter 4).

That the brain developed despite these costs is clear evidence that the brain didn’t get bigger by accident: there must have been advantages that outweighed these costs. In fact, there is some research which suggests that our brains have been shrinking for the past 10,000 years or so (Liu et al., 2014). This may be because nowadays we have less need to be individually smart, relying increasingly on the products of culture and teamwork (see Chapters 6 and 14). Whatever the reason, if this research turns out to be correct it shows that evolution gives us the size of brain we need and no bigger.
DeVore and Tooby (1987) suggest that human brains evolved to fill what they call the ‘cognitive niche’. A ‘niche’ is the specific ecosystem that an organism exploits. For example, a Demodex mite is a microscopic arthropod that lives for all of its life on our faces, where it feeds off dead skin, mates and then dies (we’ll spare you a photograph). Your face is its niche. The cognitive niche refers to humans being able to use their intelligence and other cognitive abilities such as vision, memory, language and reasoning to exploit their particular ecosystem. Examples would be collaborative hunting, the creation of traps, cooking, medicine and a multiplicity of other neat ideas, some cultural, some innate, some a bit of both.

The cognitive perspective, therefore, is that the brain engages in computation, and this computation is what we call the mind. Many people are uneasy with the notion that the brain is a kind of computer, and many academics have explicitly rejected it (see Searle, 1980). Often the objections focus on the observation that computers need to be told what to do (by a program) whereas the mind learns for itself, or that computers work by slavishly applying algorithms to a problem whereas humans work by ‘intuition’. This is presumably because they have in mind the type of computer they use at work or at home. But hard disks and microprocessors are just one way of building a computer; another way is to do what nature does and build a brain. To make this clearer it might help to know that the word computer was originally used to describe people whose job it was to crunch numbers, sometimes with the help of machines such as abacuses or other calculating devices. As technology improved, the word was used to describe the machines themselves rather than their operators. (Quite a few words that used to describe jobs are now more commonly used to refer to devices. Some younger people look askance when one of the authors of this book tells them his father was a printer.)

Figure 9.1 Ada Lovelace, mathematician and writer of the first computer program.

Computers do not need to be made of silicon. The first proper computer was, in fact, designed in 1837 by an Englishman called Charles Babbage. He never actually finished the machine, largely because he ran out of money, but detailed plans exist. It would have been five tonnes in weight, the size of a large room and made largely of steel. This was truly a steampunk construction: data input and output was achieved using cards with holes punched in them, information was processed by metal drums driven by cogs and the whole thing was to have been powered by steam. Partnering Babbage in this ambitious enterprise was Ada Lovelace, a talented mathematician and the daughter of the poet Lord Byron (see Figure 9.1), who is credited with writing the first ever computer program in 1843.

Although it was mechanical rather than electronic, Babbage’s machine was nonetheless as much of a computer as a modern-day PC or Mac. This is because it engages in computation, a process defined by a set of mathematical principles devised by, among others, the British mathematician Alan Turing; so long as something engages in computation, it is a computer, irrespective of what it is made from or how it is constructed (in his 1976 book Computer Power and Human Reason, computer scientist Joseph Weizenbaum showed how to build a computer out of toilet roll and some pebbles).
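The point that computation is independent of its physical medium can be made concrete with a small sketch. Below is a minimal, purely illustrative Python example of our own (not Weizenbaum's actual construction): the same computation (adding two numbers) is carried out once with the machine's built-in arithmetic and once by shuffling 'pebbles' (tokens in a list). The substrate differs; the computation, defined by its input and output behaviour, is the same.

# Illustrative sketch: the same computation on two different 'substrates'.
# Assumption: this is our own toy example, not taken from the textbook.

def add_with_arithmetic(a: int, b: int) -> int:
    """Addition using the hardware's built-in integer arithmetic."""
    return a + b

def add_with_pebbles(a: int, b: int) -> int:
    """Addition by manipulating physical-style tokens ('pebbles').

    Each number is represented as a pile of pebbles; adding is just
    pushing the two piles together and counting the result.
    """
    pile_a = ["pebble"] * a
    pile_b = ["pebble"] * b
    combined = pile_a + pile_b      # push the piles together
    return len(combined)            # count the pebbles

if __name__ == "__main__":
    for x, y in [(2, 3), (7, 5)]:
        assert add_with_arithmetic(x, y) == add_with_pebbles(x, y)
    print("Same computation, different substrate.")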
So, when we describe the brain as a computer, we are referring to the abstract process of computation, just as when we describe the heart as a pump we are referring to its abstract property of moving fluid from one place to another by a particular sequence of actions. To argue that the brain is not a computer because it doesn’t resemble current computers is like arguing that the heart cannot be a pump because it doesn’t resemble the thing you use to inflate your bicycle tyres. (There is a debate about what computation means, and whether the brain engages in it; see Fodor, 2000, and the reply by Pinker, 2005.)

Figure 9.2 Statue of Alan Turing in Manchester, where he lived, worked and died. It is believed that he took his life by poisoning himself with an apple laced with cyanide, although others believe that his death was accidental, as the apple was never tested. The scarf was placed there by the townsfolk of Manchester in order to ‘keep him warm’.

So, computation can be performed on a variety of media, whether they be silicon, wheels and pegs, toilet rolls or, indeed, brains. This property is sometimes known as substrate neutrality because the computation doesn’t care about (is neutral towards) the actual medium (or substrate) that produces it.

What Does Evolutionary Theory Bring to the Study of Cognition?

Hopefully by now you will be familiar with the evolutionary idea of ultimate questions, which focus on the evolutionary pressures that shaped our psychology: in short, asking what a behaviour is for. This is what evolution brings to the study of cognition. As we shall see, our visual systems, memories and reasoning abilities can appear to be very poorly designed in certain circumstances: we see things that aren’t there, forget our partner’s birthday and frequently jump to the wrong conclusion. But evolutionists argue that these systems only seem to be poorly designed because we don’t properly understand what they were designed to do. Any complete understanding of human psychology needs to begin with the ultimate explanation. And this is what we do for each of the topics that follow: vision, memory, logical reasoning and statistical reasoning.

Box 9.1 The Problem of Free Will

The cognitive approach to psychology frequently leaves some people feeling uncomfortable. If the mind is nothing more than a biological computing machine, where does that leave our free will? Are all our choices predetermined? The idea that free will is contrasted with determinism is a common one, and it is important to point out the problem with this form of thinking. A deterministic system is one in which (1) if you understand completely the starting state of the system and (2) you understand completely the rules by which the system operates, then it is possible to predict with 100 per cent accuracy the state of that system at any point in the future. Assuming that the universe were a deterministic system, then if we knew the state of every particle in the universe at some point in time (say a few billion years ago) and if our physics were perfect, then by applying the laws of physics we would be able to predict the formation of planet earth, the origins of life, the evolution of human beings and even the fact that you are reading this book at this precise moment.
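To make the definition above concrete, here is a minimal, illustrative Python sketch (our own toy example, not from the chapter). In a deterministic system, complete knowledge of the starting state plus the update rule allows perfect prediction of any future state; add genuine randomness and that guarantee disappears.

import random

# Toy example of a deterministic system: a fixed update rule applied repeatedly.
# Knowing the starting state and the rule lets us predict any future state exactly.

def deterministic_step(state: int) -> int:
    """The 'laws of physics' of our toy universe: a fixed update rule."""
    return (3 * state + 7) % 1000

def predict(start: int, steps: int) -> int:
    """Predict the state after a given number of steps."""
    state = start
    for _ in range(steps):
        state = deterministic_step(state)
    return state

def indeterministic_step(state: int) -> int:
    """Same rule, but with a genuinely random 'coin flip' added at each step."""
    return (deterministic_step(state) + random.choice([0, 1])) % 1000

if __name__ == "__main__":
    # Two runs of the deterministic system from the same start always agree.
    assert predict(42, 10_000) == predict(42, 10_000)

    # Two runs of the indeterministic system usually diverge: prediction fails,
    # but the outcome is random rather than freely chosen.
    s1 = s2 = 42
    for _ in range(10_000):
        s1, s2 = indeterministic_step(s1), indeterministic_step(s2)
    print("deterministic prediction:", predict(42, 10_000))
    print("indeterministic runs:", s1, s2)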
It follows, therefore, that if the universe is deterministic and our brains are part of that universe, made up of complex aggregations of atoms and molecules, then our thoughts are as determined, and therefore as predictable, as everything else. This usually makes people feel uncomfortable, and many seek solace by assuming that the universe is not deterministic. After all, they argue, quantum physics suggests that there is genuine uncertainty: physics does not operate like a clockwork mechanism. In an indeterministic universe there would be genuine unpredictability. For example, particles subjected to identical forces might move off in one direction or another, with no possible way of telling which in advance. It would be as if someone flipped a coin or threw dice in order to determine what happened next. This would certainly lead to the universe becoming unpredictable – exactly how unpredictable depending on how much randomness there is – but does indeterminism rescue our concept of free will? Some argue that it does. But consider what indeterminism means for freedom. It would be as if a coin were being tossed in the brain. Given a set of circumstances with two courses of action X and Y, in some cases the coin would come down heads and we would choose X, and in other cases it would come down tails and we would choose Y. But randomness does not seem like free will either: free choices are motivated, not the result of a random coin flip. So neither determinism nor indeterminism rescues our concept of free will.

Is free will therefore an illusion? On the face of it, some research seems to suggest that it might be. One famous study was conducted by Libet (1985). Libet connected his participants to an EEG machine that measured brain activity and asked them to consciously decide to move a finger whenever they felt like it. During this part of the study participants were staring at a fast-moving clock so they could register exactly when they had the subjective experience of deciding to move their finger. When their hand moved it broke a beam of light, so the actual timing of the hand movement could be accurately registered as well. Things became interesting when the results of the EEG were studied. For all participants, a swell of brain activity reliably preceded the conscious decision to move the finger by 350–400 milliseconds. This suggests that some set of unconscious processes occurs before the conscious decision to move the finger.

In another study, by Brasil-Neto et al. (1992), participants were asked to choose to move either their left or right index finger in response to a click. During this, transcranial magnetic stimulation (TMS) was applied to the motor cortex in either the left or right hemisphere. The effect of TMS here was to stimulate the motor cortex to initiate action in either the left or right finger (recall that due to the wiring of the brain, the left motor cortex controls the right-hand side of the body and vice versa). Even though participants were not in control of their actions (TMS was), they stated verbally that it was they who decided which finger they moved.

So is free will an illusion? Not necessarily, but it does mean that we should probably revise the way we think about free will and consciousness. Free will and consciousness are not entities that exist outside the material world; they are part (perhaps a special part) of the computational machinery of mind which, in turn, is produced by the brain.
Although it might seem strange that conscious will might not be in full control of the minutiae of behaviour, this might derive from a misunderstanding of what consciousness is for. One way of viewing consciousness is as the manager of a company who has to control every aspect of the business. But it is also possible that our conscious selves only become involved in decisions that cannot be dealt with automatically. In the Libet study, our conscious selves might simply send an instruction to lower-level processes which states ‘at some point move the finger and let me know when this has been done’, leading to a lag between the decision and the awareness of it. Likewise, in the TMS study a similar instruction (‘move one of the fingers when you hear a click and let me know when this is done’) results in the reply that a finger has been moved as requested, even though this finger was moved – unbeknownst to consciousness – by magnetic stimulation. Like many managers, your conscious self knows only what it has been told happened, not what actually happened (Wegner, 2003).

Vision

The branch of the cognitive sciences to which evolutionary thinking has been applied with the least controversy is visual perception. The vision scientist David Marr (see Box 9.2) revolutionised the field when he asked what the visual system is designed to do (see the above discussion on what the brain is for). The answer might seem obvious: to enable us to see the world as it is, but this doesn’t really help, since there are many instances where we clearly do not see the world as it is.

Figure 9.3 Some visual illusions. In the one on the left, you see a triangle where there is no triangle, and in the one on the right, are the tabletops the same size and shape or different?

Figure 9.3, for instance, shows two visual illusions. In the first, all that is there are three circles with wedges cut out, yet we can’t help seeing a triangle; it is as if the space in the middle is somehow denser than the surrounding space. In the second, the parallelograms that make up the tables are the same size and shape even though the one on the left looks longer and slimmer than the one on the right.

Box 9.2 David Marr and Levels of Explanation

David Marr (1982) approached the problem of vision rather differently from other neuroscientists. Rather than studying vision by recording the impulses of single neurones (usually in non-humans such as frogs and cats) and then inferring how the impulses related to what was being seen, he proposed that scientists should begin by asking what the visual system was designed to do in the first place and then work downwards to more specific levels (a short illustrative sketch follows the list below):

1. The level of computational theory. What is the thing for? What is its function? For example, a computer program might be designed to add up a list of numbers.
2. The level of representation and algorithm. How is the above achieved at the abstract computational level? For example, what steps does the computer program go through to add up the numbers?
3. The level of hardware implementation. How is the computation described in step 2 implemented on the actual physical substrate of the machine? Is the computation done by neurones, microprocessors or mechanical wheels and punch cards? How does this substrate achieve the computation?
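As a purely illustrative aside (our own toy example, using Marr's suggestion of a program that adds up a list of numbers): the computational level says what must be computed, while the algorithmic level says how. Two different algorithms can satisfy the same computational theory, and either could in turn be implemented on many different physical substrates.

from typing import List

# Level 1 (computational theory): the function to be computed is
# 'given a list of numbers, return their sum'.

# Level 2 (representation and algorithm): two different algorithms
# that satisfy the same computational theory.

def sum_running_total(numbers: List[float]) -> float:
    """Algorithm A: keep a running total, adding one number at a time."""
    total = 0.0
    for n in numbers:
        total += n
    return total

def sum_divide_and_conquer(numbers: List[float]) -> float:
    """Algorithm B: split the list in half, sum each half, combine."""
    if not numbers:
        return 0.0
    if len(numbers) == 1:
        return numbers[0]
    mid = len(numbers) // 2
    return sum_divide_and_conquer(numbers[:mid]) + sum_divide_and_conquer(numbers[mid:])

# Level 3 (hardware implementation): whatever physically runs this code --
# a laptop's microprocessor, Babbage's cogs, or (in Marr's analogy) neurones.

if __name__ == "__main__":
    data = [3.0, 1.5, 2.5, 7.0]
    assert sum_running_total(data) == sum_divide_and_conquer(data) == 14.0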
The first level, of computational theory, is very similar to what evolutionists call the ultimate level of explanation: what a particular behaviour or capacity is for; what its ultimate function is in terms of survival and reproduction. Once this has been worked out, neuroscientists can then explore ways that algorithms might produce this behaviour (second level). Finally, neuroscientists can explore neuronal activity to try to determine how neurones might implement the algorithms.

Given the difficulties of understanding the human mind in terms of the behaviour of neurones, most cognitive theories of mind have, understandably, tended to focus on the second level of representation and algorithm. One of Marr’s great contributions was to propose that if we are to properly understand some aspect of human thought and behaviour, we need to have some idea as to the function or purpose of this behaviour. Throughout this book we have been suggesting that one of the benefits of adopting an evolutionary approach to psychology is that it focuses on the function of behaviour. Thus, evolutionary psychology can provide us with our computational theory (to use Marr’s term).

In other words, the area of the retina that is covered by the two tabletops is the same, but our brain somehow contrives to make them look different. If our visual system were truly designed to represent the way that the world is, then we should see three Pac-Man characters having a chat in a triangular configuration and the tabletops would look the same shape, albeit with one rotated through 90 degrees. Further illustration of this can be seen in Figure 9.4a (from Adelson, 1993). Look at the squares A and B. The square marked A looks grey and the square marked B looks like a white square in shadow, a kind of grey but not as dark as A. You guessed it: they are exactly the same colour. Don’t believe it? Well, Figure 9.4b shows the proof of this. The two vertical lines are, as you can see, the same colour as both squares; they blend in perfectly when they touch the squares.

It is easy to look at these illusions and conclude that the visual system is poorly designed. We see things that aren’t there and fail to see things that are, like a fire alarm which goes off when there is no fire and fails to go off when there is. According to Marr (1982, 473), however, this conclusion is based on the assumption that the visual system is designed to provide a faithful representation of what is ‘out there’ in the world. Instead he proposes that vision ‘is a process that produces from images of the external world a description that is useful to the viewer and not cluttered with irrelevant information’.

Figure 9.4 Shadow illusion by Edward Adelson. (a) Believe it or not, the two squares, A and B, are exactly the same shade and colour, even though A looks much darker. (b) Proof that the two squares are the same. Each square blends perfectly with the vertical bars, which are the same colour throughout their length. If you still don’t believe it (and why should you?), try covering up sections of the picture with paper.

One of the most informative parts of a visual scene is the edges of the various objects that we see, because they define the objects’ boundaries. When a child sketches a face, they will usually just draw the most important edges: head, eyes and mouth, as this is enough to capture the essence of a face.
Adding extra information such as a nose, ears, shading or colour might make for a prettier picture, but it wouldn’t make it easier to identify as a face. Detecting edges is important for our visual system (and, incidentally, for computer vision). The simplest interpretation of the left-hand picture in Figure 9.3 is a triangle placed on top of three circles, so our visual system ‘fills in’ the central edges of the triangle. The tables in Figure 9.3 look different because the visual system is compensating for the foreshortening that occurs when objects are viewed at an oblique angle. If the two objects were real tables (rather than simply drawings), then because of the viewing angle the left-hand table really would be longer than the one on the right is wide. The visual system is giving us information about the shape of the object that we are viewing by compensating for this foreshortening effect.

Figure 9.4 can be explained by the fact that it makes evolutionary sense for objects to look the same under a variety of lighting conditions (how much light there is and what colour it is). During the day, natural sunlight changes in brightness and colour, from a dim pinky colour just after dawn, through an almost blueish hue at midday and back to a lower-intensity redness at dusk. This means that the quality of the light reflected off any object and hitting your retina varies as the day progresses, so the colours of objects should change dramatically. (You have probably noticed that in photographs objects can look different to how they appeared to your eyes at the time.) It is evolutionarily important that objects look the same to us under different lighting conditions so that we can successfully track and identify them. Happily, the visual system has a special trick: it takes account of the light coming off the object while simultaneously measuring the light coming off other objects around it. If the ambient light has a lot of red in it, then the visual system will dial down the red to compensate, meaning that the perceived colour change is a lot less than in actuality. The net result is that objects look approximately the same under a wide variety of lighting conditions. This is known as perceptual constancy. In Figure 9.4, there is a repeating checkerboard pattern which leads our visual system to assume that square B is light grey and square A dark grey; there is also a cue that square B is in the shadow of the cylinder, which further reinforces the assumption that B is light grey but in shadow. So, the visual system measures the ambient light coming from the photo and represents B as a light-coloured square in a bit of shadow rather than a dark-coloured square, and so B appears lighter than A, even though it isn’t really.

Box 9.3 #TheDress

Oddly, this process of colour adjustment became a worldwide phenomenon in 2015 when Cecilia Bleasdale went shopping for a dress to wear for her daughter’s wedding. She took a photograph of a potential candidate and sent it to her daughter for approval. The photo caused some consternation as no one could agree what colour the dress was: some thought it was white and gold, others blue and black. The photo was duly posted onto Facebook to solicit the opinion of family and friends. Then it went viral. At one point #TheDress was receiving 11,000 tweets per minute.
Celebrities offered their expert opinions: Taylor Swift and Justin Bieber saw blue and black, Kim Kardashian and Katy Perry white and gold; perhaps unsurprisingly, Lady Gaga thought it periwinkle and sand. It is still not known exactly why people see the dress so differently, but it seems to be down to individual differences in the colour adjustment process described above. The photo is hardly studio quality and was taken under strip lighting, which gives it a strong yellow cast. This washes out the blue and makes the black appear gold. For some reason, it seems that some people ‘tune out’ the yellow more than others, who are therefore more likely to see the dress in its true colours (Hardiman-McCartney, 2015). Figure 9.5 shows how differently the dress appears under different lighting conditions. In each image only the skin tone of the model has been changed. The dress is the same colour.

Figure 9.5 #TheDress. The dress is identical in both cases; only the skin tone of the model and the colour of the background have been altered.

Speaking of colour, it is worth emphasising that colour isn’t really a thing in the world. When we say that we see green we are not literally picking up green light, because there is no such thing (Hardin, 1988). What we are detecting is light of a wavelength between approximately 520 and 560 nanometres. The green is created by our visual system. We know this because not all animals have colour vision and, of those that do, many have reduced colour vision relative to us. We (and our close primate relatives) have what is known as trichromatic colour vision, meaning that we have three types of colour receptor in the retina which provide the palette from which we create all the colours we can experience. These colour detectors are known as cones due to their broadly cone-like shape, and the three types are often referred to by their ‘colour pigments’: red, green and blue. Of course, all this really describes is the wavelength to which each is most sensitive, so calling them red, green or blue pigments is something of a simplification.

Most other mammals, on the other hand, are dichromatic, with just two types of sensor. In case this is taken as further evidence of how ‘highly evolved we are’, it is worth pointing out that evolutionarily much more ancient animals such as fish and lizards also have trichromatic and some even tetrachromatic vision, with four types of sensor. Colour vision did not begin with us; nor did it begin with our more recent ancestors. What seems to have happened is that around 250 million years ago there was something of an evolutionary battle between two lizard-like lineages: the therapsids and the diapsids. The former were the ancestors of us and all other mammals; the latter were the ancestors of the dinosaurs. As history records, it did not go well for the therapsids, and it was the descendants of the diapsids, the dinosaurs, who eventually took over the world. The therapsids’ descendants – our ancestors – were reduced (in every sense of the word) to living off the scraps that fell from the dinosaurs’ table: tiny, often inhabiting burrows and coming out only at night when it was safe. Cones are great for seeing colour but not at all great for seeing in the dark. For this, another type of sensor is better: the so-called rod, known – you guessed it – for its approximately rod-like shape. So our ancestors traded colour vision for night vision to better exploit their particular niche (Jacobs, 1993).
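The idea that colour is constructed from the relative responses of three receptor types, rather than picked up directly from the light, can be sketched in a few lines of code. The sensitivity curves below are invented Gaussian approximations for illustration only (real cone spectra are broader and asymmetric); the point is simply that a single wavelength is encoded as a triplet of responses, and it is that triplet, not the light itself, that the visual system turns into 'green'.

import math

# Illustrative only: peak sensitivities roughly in the ballpark of human cones,
# with made-up Gaussian response curves (real cone spectra differ).
CONE_PEAKS_NM = {"long (red)": 560.0, "medium (green)": 530.0, "short (blue)": 420.0}
CURVE_WIDTH_NM = 50.0

def cone_responses(wavelength_nm: float) -> dict:
    """Return the (toy) response of each cone type to a single wavelength."""
    return {
        cone: math.exp(-((wavelength_nm - peak) ** 2) / (2 * CURVE_WIDTH_NM ** 2))
        for cone, peak in CONE_PEAKS_NM.items()
    }

if __name__ == "__main__":
    # Light at ~540 nm excites the medium cones most strongly; the resulting
    # pattern across the three cone types is what we experience as 'green'.
    for cone, response in cone_responses(540.0).items():
        print(f"{cone:>16}: {response:.2f}")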
For their part, it is possible that dinosaurs retained trichromatic vision, because their descendants, the birds, have this feature. Once the dinosaurs were gone, mammals began to exploit niches previously unavailable to them, including a diurnal (daylight-dwelling) lifestyle. Around 10–20 million years ago, our ancestors evolved (it might make more sense to say re-evolved) trichromatic vision. One possible reason for the return of colour perception is that it helped our frugivorous (‘fruit-eating’) ancestors to more easily spot ripe fruit against the green background of leaves (Gerl and Morris, 2008). Some things cannot be seen in the part of the spectrum available to us, but other animals can detect them. Bees, for example, can see into the ultraviolet spectrum, which helps them to more easily spot the veins on flowers that guide them towards the nectar. Once again, vision is not for representing what is actually ‘out there’ but what is useful – for the birds and the bees and for us: a form of colour coding in its most literal sense.

What Is the Function of Memory?

Who has not, at some time, wished that they could travel back in time to a pivotal moment and change history? Killing Hitler before he attained power, perhaps? Or saving a hot date that went badly wrong? TV, film and books are filled with many such what-if scenarios. From Groundhog Day to About Time, from Harry Potter and the Prisoner of Azkaban to innumerable episodes of Doctor Who, we see the hero going back in time to save the world, the universe, or, in the case of Groundhog Day, a hot date that went badly wrong. Of course, we cannot literally travel back and change history, but we can travel back in our personal histories and use our memories of the past to change the future. As the philosopher George Santayana (1905) wrote, ‘those who cannot remember the past are condemned to repeat it’. And this, according to some evolutionists and psychologists, is what memory is for.

Memory in a Single-Celled Organism

You might think memory is something only higher animals are equipped with, but in 2016 a team led by Audrey Dussutour (Boisseau, Vogel and Dussutour, 2016) at Toulouse University showed that a large single-celled organism called a slime mould was able to remember, in a manner of speaking, past events. Slime moulds are interesting creatures: they are not moulds at all but amoeba-like organisms with no neurones (see Box 9.4), large enough to fill up a medium-sized dish, but capable of expanding and contracting into all manner of shapes – rather like the slime you may have made as a child.

Dussutour used petri dishes divided into two halves, with a narrow constriction between the two halves that enabled movement from one half to the other, like a kind of bridge. There were two treatment conditions. In the first, a slime mould was placed in one half of the dish and food in the other. In the second, everything was the same except that the ‘bridge’ was coated in quinine: a bitter substance which slime moulds find disagreeable. The organisms moved across the untreated bridge freely from the beginning but tended to avoid going over the treated one until they gradually learned that the bridge was safe and then started to cross it.

Box 9.4 Slime Mould Cognition

Slime moulds are not animals, plants, fungi or even bacteria. Instead they belong to their own separate kingdom called Protista, which also includes amoebas. Like amoebas they are single-celled but are much, much larger.
The largest individual example, of a species called Brefeldia maxima, was found on a tree stump in North Wales and weighed around 20 kg. Perhaps unsurprisingly, given its size, Brefeldia lives a sedentary life, rather like the fungus that it is not. Others, however, are livelier and occupy themselves foraging among leaf litter. If food becomes scarce, however, slime moulds have a trick up their sleeve: individual slime moulds can stream together to form a large single multicellular organism called a ‘slug’ (which is what it resembles). This slug climbs out of the leaf litter to a higher spot where it turns into a mushroom-like object which releases spores to populate the world with more slime moulds, and the process continues.

Figure 9.6 Slime mould Physarum polycephalum on a tree branch. You can see that it is sending out tendrils in search of food.

The species studied by Dussutour is called Physarum polycephalum. As can be seen from the figure, Physarum sends out tendrils to seek food, and researchers Nakagaki, Yamada and Tóth (2000) used this ability to show that a slime mould can solve a maze problem. The researchers used a square dish with walls forming the maze’s corridors, with food placed at key points in the maze. When Physarum was placed in the maze, it slowly streamed its tendrils down the corridors hunting for the food. If a tendril reached a dead end it was retracted back into the main body. Once the food was located, Physarum moved the majority of its body to these places in order to eat, with only thin strands connecting it together. No one knows how Physarum does this, but it is clear that intelligent behaviour can exist without brains and even without neurones: another example of the substrate neutrality of cognition.

Most of us will be familiar with the slime moulds’ plight. Like slime moulds, we find bitter tastes aversive and tend to avoid them (many bitter substances are toxic); this is why our first ever glass of beer or wine is often less than pleasant. With a little persistence, however – and peer pressure is a wonderful motivator – our brains soon realise that the drink is safe, and we drink it without problem and may even start to enjoy it. This process is technically called habituation and in humans it usually lasts a lifetime. Slime moulds are different. If the bitterness was removed from the bridge for two days and then reinstated, the slime moulds avoided it again. Slime moulds, it seems, are not only the simplest organisms that remember; they are also the simplest that forget.

Hopefully, you can immediately see the survival advantage of this simple example of memory. Things that are initially aversive are generally avoided. But on repeated encounters our memories tell us that in the past all was OK, so all will probably be OK in the future. This is how we acquire a taste for alcoholic beverages, as discussed above. However, if we drink too much alcohol in those early times and become sick, then we can immediately develop a long-term aversion to that flavour, to the extent that even thinking about it can raise a shudder (Seligman and Hager, 1972). We use the past to predict future safety and future danger. Some of you may be complaining that this isn’t really memory, not in the way that the word is typically used anyway. We tend to think of memories as richer entities, such as a childhood holiday, but as we shall see – and as you might already know – there are different kinds of memory.
The one discussed above is probably the simplest kind of memory, but no less important for all that.

Predicting the Future with Words and Concepts

Psychologists are apt to distinguish between episodic and semantic memories (Tulving, 1972). Episodic memories are quite often rich, vivid and full of sensations: the aforementioned childhood holiday or, less appealingly, a trip to the dentist. They do not need to be vivid. Sometimes they can be vague, but what makes them episodic (the word is derived from ‘episode’) is that they are made of undistilled experience. Semantic memories, on the other hand, are distilled, with most of the richness taken away. The example always given is the knowledge that Paris is the capital of France. At some point that was part of a real episode in your life: the teacher telling you in school, a TV programme you watched, or something your parents said. But for most of us, time has eroded the episodic details of the experience of learning what the capital of France was called, and all that is left in memory is the basic fact. (It is sometimes said that episodic memories involve remembering, whereas semantic memories involve knowing; Mickley and Kensinger, 2008.)

Later we will discuss why episodic memories exist from an evolutionary point of view, but for now we will focus on the semantic memory system. Semantic memory does rather more than store information about European capital cities; it stores the meanings of all of the words and concepts that we know. Semantic memory is designed to store information and, crucially, retrieve it quickly when it is needed. In order for it to do this, memory needs to be organised in a particular way, with specially designed retrieval mechanisms.

As an – admittedly old-tech – analogy, consider another information storage and retrieval system: the library. Many libraries contain archives consisting of large repositories of books, journals and other materials kept away from public view. If you wish to obtain something from the archive, you need to speak to a librarian who will fetch you the relevant item from the store. An important criterion in the design of an archive is that librarians are able to retrieve relevant information quickly (you have many customers and you don’t want to keep them waiting by spending a long time looking for each item). One way of increasing efficiency is to have the items that you think will be more likely to be requested in the future within easy reach and use the less accessible places for infrequently borrowed items. How can you predict which items will be more likely to be requested in the future? One way is to use past information about borrowings. Research on libraries shows that books that were popular in the past are likely to be popular in the future; therefore one way of increasing future efficiency is to place high-frequency items in easily accessible locations, and low-frequency items in the more difficult locations.

A more up-to-date analogy is found on YouTube. You watch a video or two on a topic different to the kind of thing you usually watch and suddenly you find related videos being recommended to you in the sidebar. This is the result of an algorithm that is monitoring your past viewing behaviour and, based on data from millions of other users, tries to predict other videos you might be interested in. Although memories are vastly different from libraries (and YouTube), it seems that similar principles hold.
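The librarian's strategy of keeping items that have been requested often, and requested recently, closest to hand can be expressed as a small ranking rule. The sketch below is our own illustrative example (not Anderson's actual model, which is described next): each item's past accesses are logged, and retrieval order is determined by a simple score combining frequency and recency.

import time
from collections import defaultdict

class TinyArchive:
    """Toy store that ranks items by how often and how recently they were used."""

    def __init__(self):
        self.access_times = defaultdict(list)  # item -> list of access timestamps

    def access(self, item: str) -> None:
        """Record that an item was requested (a 'borrowing')."""
        self.access_times[item].append(time.time())

    def need_score(self, item: str, now: float) -> float:
        """Higher when the item was used often and used recently."""
        times = self.access_times[item]
        frequency = len(times)
        recency = now - max(times) if times else float("inf")
        return frequency / (1.0 + recency)

    def shelf_order(self) -> list:
        """Items sorted so the most likely to be needed next are easiest to reach."""
        now = time.time()
        return sorted(self.access_times, key=lambda i: self.need_score(i, now), reverse=True)

if __name__ == "__main__":
    archive = TinyArchive()
    for item in ["dog", "dog", "dog", "cat", "aardvark"]:
        archive.access(item)
    print(archive.shelf_order())  # frequently requested items come first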
The psychologist and cognitive scientist John Anderson of Carnegie Mellon University proposes that human memory is optimally adapted to the structure of the information retrieval environment. For instance, we know that the speed at which an item such as a word is retrieved is predicted by the frequency with which the word was encountered in the past, high-frequency words being retrieved faster than low-frequency words. Anderson (Anderson and Milson, 1989) argues that this is evidence of adaptive design. The mind is simply doing what an intelligent librarian would do: predicting which items will be useful in future by using information relating to their usefulness in the past, and organising the system so that useful items can be accessed more rapidly.

Anderson and Milson provide a similar account of priming. If a person encounters a particular word (e.g. ‘dog’) then he or she will be able to access that same word much faster the next time they encounter it. This is called priming (more technically, ‘repetition priming’) because the presentation of the word ‘primes’ the system for future encounters. Perhaps more interestingly, not only is the word itself primed, but words that are semantically related are also primed. For example, after we experience the word ‘dog’, the words ‘cat’, ‘bark’ and ‘leash’ will be retrieved more quickly than if we had encountered the word ‘table’. Again, this is interpreted as memory trying to predict what words might come up in the near future based on previous co-occurrences of the concepts that underlie these words. A similar concept is used in predictive messaging systems on mobile phones and other devices. As you start to enter a word, the system searches for words that typically follow it and provides recommendations, the idea being to speed up the process of typing, with the additional benefit of making you look stupid for ‘your’ poor choice of words.

Box 9.5 Evolutionary Cognitive Neuroscience

As we have seen, cognitive psychology is concerned with mental processes including, for example, memory, thought, perception and decision-making. In the early years of its development, while cognitive psychologists imputed internal processes, they knew very little about the neurological foundations of such processes. Due, however, to technological advancements during the 1970s and 1980s, the new field of cognitive neuroscience began to develop. Here, by making use of electrophysiology (such as electroencephalography) and neuroimaging techniques (such as fMRI and PET), cognitive neuroscientists began to investigate both the structural and the functional neural bases of mental processing. Much progress has been made in the field through studies either of unimpaired participants conducting very specific tasks involving, say, working memory, or of single case studies of people who have had well-designated brain injury (or removal of tissue through surgery). Examples of the latter include ‘HM’ (Henry Molaison), who, following removal of much of the hippocampus, was no longer able to form long-term memories, and ‘SM’ (an unidentified woman), who, following damage to her amygdala, was no longer able to experience fear (Feinstein et al., 2011; see Chapter 11).

In 2007 a new field of cognitive neuroscience – evolutionary cognitive neuroscience (ECN) – began to emerge (Krill, Platek and Shackelford, 2007).
According to its adherents, the addition of an evolutionary perspective is now allowing investigators to develop a metatheoretical framework for the development of cognitive neuroscience (Keenan et al., 2007; Platek, Keenan and Shackelford, 2007; Saad and Greengross, 2014). Why might this be the case? Because evolutionary psychologists consider that we all share evolved psychological mechanisms, then, arguably, such mechanisms must have common neurological substrates. Such psychological mechanisms are believed to have arisen to solve recurrent ancestral challenges, and many evolutionists consider that they operate largely in a domain-specific manner (see Chapter 5). Although some psychological mechanisms no doubt operate in domain-general ways (such as working memory), there is experimental evidence that many are domain-specific. An example of this is the research by O’Doherty et al. (2003), who examined the neural correlates of facial attraction. They found increased activation of the orbitofrontal cortex (see Chapter 11) when participants viewed attractive faces, suggesting that looking at attractive faces is both inherently rewarding and has a defined neural substrate.

Also related to the notion that many psychological mechanisms are domain-specific is a series of experiments from the lab of Cosmides and Tooby on social interaction and exchange. In their work on the concept of a domain-specific cheater-detection module (see later), they have demonstrated that impairment in the ability to detect cheats occurs following specific brain injury, while performance on other forms of problem solving remains largely intact (Stone et al., 2002). Their studies suggest that, following damage to specific parts of the limbic system (see Chapter 11), the ability to detect cheats becomes impaired.

What is interesting about these findings is that they make use of developments in cognitive neuroscience to test directly the notion of evolved psychological mechanisms. Such findings help us to understand how the mind/brain evolved along the lines that it has. In the words of Krill et al. (2007, 239), ‘[w]ithout evolutionary meta-theoretical guidance, cognitive neuroscience will fail to describe with anything but superficial accuracy the human (and animal) mind’. This means that, while cognitive neuroscientists are shedding new light on important proximate ‘how’ questions, without recourse to an evolutionary metatheory they will miss out on ultimate ‘why’ questions.

Stanley Klein and colleagues (Klein et al., 2002) extended Anderson’s work on memory, arguing that memory evolved to support the decision-making process. Throughout our lives humans make many millions of decisions, and most, if not all, of these decisions will require more information than is available in the environment; this additional information is provided by information stored in memory. Of course, there are many different types of decisions – including habitat selection, mate choice and predator avoidance – each with its own particular solutions and constraints. Some decisions, for example, need to be made rapidly, whereas others can be made at greater leisure but demand greater accuracy. For this reason, Klein et al. propose that there will be separate memory systems depending on the nature of the decision being made. We have discussed the semantic memory system; now we turn to look at episodic memories.
Predicting the Future with Episodic Memories

So far, the discussion has been about semantic memory (memory for words and concepts); what is the function of episodic memory? Psychologist Stanley Klein (Klein et al., 2002) suggests that our episodic memories, by their nature, not only contain a great deal of detail but can contain detail from different sensory modalities. Much will be visual, but they’ll also likely contain olfactory, auditory and haptic (touch-based) memories too, often bundled together in the same episode. A childhood memory of a beach holiday may contain the sun shining down, the smell of the sea, the call of seabirds and the feel of sand between your toes. Unlike our semantic memories, our episodic memories are also heavily attached to emotions. We feel the swell of pride when we relive our successes, that sinking feeling of shame when we relive our failures and the ‘world swallow me up’ feeling when we recall the date that went so badly. Episodic memories, although slower to process than semantic memories (we often have to mentally ‘scan’ them to extract information), contain an unalloyed detail that is lost from semantic memories. This detail can be essential in certain decision-making contexts, and the emotional triggers that they pull can lead to a powerful urge to repeat, or avoid, certain activities, as we shall see next.

THE PERSISTENCE OF MEMORY

In his book The Seven Sins of Memory (2001), psychologist Daniel Schacter proposes that people who have developed post-traumatic stress disorder (PTSD) will often relive the traumatic experience and avoid situations that remind them of the event. Similar to the flavour aversions described above (for which episodic memory is not necessary), Schacter suggests
that such a mechanism aids survival by encouraging us to stay away from situations that proved dangerous in the past. Flashbulb memories are similar: these are memories of a specific event, usually of great personal or public significance, of which some people seem to have unusually detailed memories, such as whom they were with, what they were wearing or specifically what was said on the news or by people around them (Brown and Kulik, 1977). Examples include the assassination of President Kennedy in 1963, the death of Princess Diana in 1997, the 9/11 terrorist attacks in 2001 and the Paris terrorist attacks of 2015 (we will not discuss the result of the Brexit referendum and the election of the forty-fifth US president, Donald Trump, in 2016 and 2017, respectively, for reasons of personal sensitivity). Although there is some debate about the accuracy of such memories, especially when they are recounted years after the event (see e.g. Talarico and Rubin, 2007), there is a good argument for why we should form detailed memories when emotions run high. Events of great import can have significant fitness consequences, and it is probably a good thing if we use past events of this nature to help us to avoid or embrace situations which diminish or enhance fitness. And as with our early experiences with alcohol, unique events are usually more significant than those that are commonplace. And it is the latter events that we are often most likely to forget, because, despite how it might seem, forgetting is really an important part of successful remembering.

Witness the amazing memory feats of Solomon Shereshevsky, studied in the early part of the twentieth century by the neurologist Alexander Luria (1968). Shereshevsky – who was identified in Luria’s book only as S. – had an incredible memory. For instance, he could faultlessly recall a list of 70 words or a matrix of 50 numbers in any order, after the briefest of exposures. Perhaps more strikingly, Shereshevsky could remember such words or numbers many years later, again in any order and without error. Such a memory might seem like a blessing, but it often caused Shereshevsky severe problems. Sometimes he would find it difficult to control his memories and they would burst unbidden into his consciousness at the slightest provocation. He became so frustrated with the persistence of his memories that he would write words down onto pieces of paper and burn them, in a vain attempt to remove them from his mind.

Forgetting is therefore an important part of memory and can be seen as a process in which useless information is deleted (or at least archived), similar to the way one may clear unwanted files from a computer’s hard disk (Schacter, 2001). Unfortunately, this clearing-up process can lead to items that we would like to keep being thrown out inadvertently. This is particularly the case with memories that are seldom accessed, their lack of use signalling that they are unimportant, as we discussed when describing Anderson and Milson’s work on library borrowing. In this regard our memory system can be as frustrating as a spouse who throws out a much-loved item of clothing because ‘you never wore it’. Being able to forget (or what Schacter calls ‘transience’) should not be confused with what he calls ‘absent-mindedness’, where ‘memories’ are not stored in the first place due to a lack of attention.

The Relationship between Episodic Memories and the Future

The notion that episodic memory is intimately related to the future began to emerge in the 1980s when a team led by Tulving (Tulving et al., 1988) studied the case of a patient known as KC, who had suffered brain injury as the result of a motorbike accident. KC had severe retrograde amnesia, and therefore had no personal memories of his experiences prior to the accident, and also anterograde amnesia, meaning any new memories lasted no longer than a few hours. These injuries obviously had a profound effect on KC’s everyday life, but they also affected his sense of the future. When asked what he would like to do tomorrow he just looked bewildered, as if the question made no sense to him. His semantic memory was preserved, so he could list things that people like him might do tomorrow; it was just that he had no sense of a personal future. The patient known as HM (see Box 9.5), when asked what he was going to do tomorrow, responded ‘whatever is beneficial’. This shouldn’t be too surprising, on reflection. With usually fairly minor and often predictable differences, our immediate futures are often similar to our immediate pasts. We get up, go to work, have lunch, come home, go shopping, watch TV and go to bed. When one has lost one’s past there can be no expectation of what the future might hold.

For people with unimpaired memories, recent fMRI research (Schacter et al., 2015) has revealed that the neural structures underlying remembering past events are almost identical to those used when imagining possible futures. It seems that episodic memory is intimately involved in future planning and in envisioning possible outcomes as a result of our actions and those of others.
This is of clear survival value because, by doing this, as Karl Popper (1972) pointed out, it ‘lets our hypotheses die in our stead’.

Memory and Categorisation

A lot of our thinking involves the use of mental categories. A simple sentence such as ‘the cat sat on the mat’ contains at least three: ‘cat’, ‘sat’ and ‘mat’. ‘Cat’ is a category because there can be many different kinds of cats, and use of the word ‘cat’ doesn’t specify the animal’s size, gender or colour; we are left to fill those in ourselves. Even if we were to say ‘black female cat’, that is again just another category, albeit a more specific one. On the other hand, we could tie the identity of the cat down to a particular unique individual, say Grumpy Cat (Grumpy Cat, if you’re not aware, being an internet sensation who sadly passed away in 2019), who is not a category – he is technically called an instance. Mental categories are important in thought because they enable you to think about the essential properties of a situation without concerning yourself about what kind of cat it is, precisely how and where it is sitting or what the mat looks like. Unencumbered by weighty detail, thought is very much speeded up (Pinker, 1997). In this way the mind embodies what is called cognitive economy.

Categories also help in communication. If someone told you that they owned a puli, what would you think? Unless you were familiar with the word, you would have no idea what it was. If you found out that a puli is a breed of dog, you would immediately know all sorts of things about it: that it eats meat, enjoys walks, wags its tail, barks, has hair, a backbone and so on. This process also works in reverse. If you were to see a puli without being told what it was, you would probably be able to infer that it was a kind of dog, even though it doesn’t look like many of the dogs that we might be familiar with. This is because we tend to group objects together that have a family resemblance. Sometimes this process of categorising on the basis of family resemblance goes awry, and we place things together into categories which we later find out should not be placed together: such as when we used to think of whales as a kind of fish, rather than a mammal; mushrooms as a kind of plant, when they are in fact more closely related to us; or slime moulds as a kind of mould. Despite these blunders, much of the time our psychological categorisation system works just fine in the everyday world and helps us, again, to make rapid decisions on how to act. For example, unless you are a professional mycologist (fungi expert), most of your interactions with mushrooms are probably of a culinary nature. Thinking of a mushroom as a kind of plant will give you a better clue as to how to cook it than thinking of it as a kind of animal, to which it is more closely related biologically.

The psychologist Eleanor Rosch, who began this research enterprise (Rosch, 1973), also noticed that categories exhibit what she called the prototype or typicality effect. If you are asked to think of a bird you will probably think of a small flying animal that has a beak, feathers and lives in trees, such as a robin or sparrow (at least you will if you live in North America or Europe). You will be unlikely to think of a large, flightless bird that lives on the ground, such as an ostrich. This is beneficial: the typicality effect exists because our cognitive system adjusts the focus of a category to the kind of thing that is most frequently encountered.
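A simple way to see how a typicality effect could arise is to represent category members as feature vectors and treat the prototype as their average; typicality is then just similarity to that average. The sketch below is our own illustrative toy (the features and numbers are invented, and real models of categorisation are far richer), but it shows why a robin scores as a 'better' bird than an ostrich.

# Toy prototype model: each exemplar is a vector of (made-up) features:
# [can fly, small-bodied, lives in trees, sings].
EXEMPLARS = {
    "robin":   [1.0, 1.0, 1.0, 1.0],
    "sparrow": [1.0, 1.0, 1.0, 1.0],
    "pigeon":  [1.0, 0.5, 0.5, 0.0],
    "ostrich": [0.0, 0.0, 0.0, 0.0],
}

def prototype(exemplars: dict) -> list:
    """The prototype is the average of the exemplars' feature vectors."""
    vectors = list(exemplars.values())
    return [sum(values) / len(vectors) for values in zip(*vectors)]

def typicality(features: list, proto: list) -> float:
    """Similarity to the prototype (1 = identical, 0 = maximally different)."""
    distance = sum(abs(f - p) for f, p in zip(features, proto)) / len(proto)
    return 1.0 - distance

if __name__ == "__main__":
    proto = prototype(EXEMPLARS)
    for name, features in EXEMPLARS.items():
        print(f"{name:>8}: typicality = {typicality(features, proto):.2f}")
    # Robins and sparrows come out as highly typical birds; the ostrich does not.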
CREATING CATEGORIES FROM EXPERIENCE

Creating categories occurs by a process of abstraction in which we remove extraneous detail and get to the essence of the entity or event in question. As we have seen, we can think about a cat or a dog without having to worry about aspects such as its colour, size or other extraneous factors. One of the other downsides of Shereshevsky’s amazing memory was that he found it difficult to form abstractions; he even struggled to recognise people’s faces. Now it might seem odd to think of your mental representation of someone’s face as an abstraction, but it is, albeit at quite a low level. If you think about it, people’s faces change: they age, they change their hairstyles, wear different make-up or grow or remove facial hair. Less dramatically (and similar to our discussion of colour constancy earlier), faces change under different conditions of luminance and we often see them at different viewing angles. So in order to recognise a familiar face we need some sort of abstract representation of the face. Shereshevsky struggled to do this (or rather his memory systems did); as he put it, faces are ‘so changeable’ (Luria, 1968). Temple Grandin, professor of animal science at Colorado State University and a person who has autism, also struggles to form abstractions. She has said that she finds grammatical conjugations such as ‘to be’ completely meaningless, and like Shereshevsky finds jokes, metaphors and irony – all of which require abstraction – completely incomprehensible (Grandin, 2006).

The ‘Adaptive Memory’ Approach

The adaptationist accounts of memory discussed above focus on the processes of storage, retrieval and abstraction; their only evolutionary criteria are to design a system that (1) enables the future to be predicted from the past and (2) supports rapid and (usually) reliable decision-making based upon limited information. They make few appeals to domain-specific (see Chapter 5 and Box 9.6) adaptations that might have evolved in the Environment of Evolutionary Adaptedness (EEA; see Chapter 1). There is an approach to the study of memory that does just this and has become known as the ‘adaptive memory’ approach (Nairne et al., 2007). (Note that we use quotes in the title of this section to indicate that this is the specific name of this research programme; obviously all of the research we have discussed before sees memory as adaptive.) Researchers using this approach claim that memory has evolved not only to be an efficient mechanism for supporting decisions but also to be more sensitive to certain kinds of content than others, particularly things that would have been important to the lives of our ancestors. They propose an additional variable – beyond those that cognitive psychologists have previously shown make concepts memorable, such as concreteness, imageability and frequency – that they term s-value (for survival value; see Nairne and Pandeirada, 2008). Concepts that are high in s-value are those related to survival, reproduction, navigation, sex, social exchange and kinship. In a series of experiments Nairne and colleagues showed that memory can be enhanced in situations that trigger perceptions of s-value.
Perhaps the most striking result is from a related study by Weinstein, Bugg and Roediger (2008, Experiment 2), who presented participants with the following scenario:

In this task we would like you to imagine that you are stranded in the grasslands of a foreign land, without any basic survival materials. Over the next few months, you’ll need to find steady supplies of food and water and protect yourself from predators.

They then presented participants with a list of 12 randomly selected words such as ‘priest’, ‘slipper’, ‘tomb’ and ‘macaroni’, which participants were asked to rate for how relevant they would be to achieving the above task. Following a delay, participants were then unexpectedly asked to recall the words. The results demonstrated that participants recalled significantly more words in the above condition than when the word ‘grasslands’ was replaced by ‘city’ and the word ‘predators’ by ‘attackers’. This result is interpreted as the product of evolved mechanisms for dealing with life in the grasslands (including predator avoidance), which increase the memorability of any items presented together with this scenario, even when some of those items (such as slipper and macaroni) are irrelevant to it. The results are even more striking when you consider that the participants were from St Louis and London and presumably have much more knowledge about dealing with cities than with the savannah.

The Limits of Conscious Awareness

Freud was right. Consciousness is just the tip of the iceberg; we can be aware of only a tiny fraction of what we know at any one time. How big a fraction? In 1956 the psychologist George A. Miller published a paper estimating that we can hold seven plus or minus two items in consciousness simultaneously. That is pretty small, but not as small as more recent estimates, which put it at around four items (Cowan, 2001). The question is, how is it possible to do any serious thinking in a space big enough to squeeze in only seven (or four) items? It’s like trying to do mathematical calculations on a whiteboard the size of a postage stamp. Making matters worse, our conscious workspace is extremely transient, and its contents quickly vanish if they are not continually refreshed (or ‘rehearsed’, as psychologists like to say). This is one reason why, when somebody gives you a phone number that you wish to remember and you have no way of writing it down, you have to rehearse it continually, often out loud, over and over again. One moment’s loss of concentration, one minor shift of attention from this process, and it’s lost forever. Thinking big thoughts under these conditions should not only be difficult, it should be impossible. Happily, in the same paper, Miller also proposed the solution. He called it ‘chunking’. So far, we have been discussing our conscious workspace (some call it short-term or working memory; Atkinson and Shiffrin, 1968; Baddeley and Hitch, 1974) as containing a limited number of items; Miller suggested that it is better to think of the limitation as being on chunks of information. What is a chunk? Imagine someone read out a list of letters and you had to repeat them back immediately (this is how the capacity of the conscious workspace was originally measured): how many letters could you repeat back? In Hayes’ original experiment (Hayes, 1952) the answer was around seven (as Miller noted). But imagine that the letters were as follows:

PHDCIAPDFUSBFBIBBCTBC

That’s 21 items (letters) and therefore should be impossible to repeat back. But in fact most people would find this rather easy (it becomes more obvious if you read it out loud). These 21 items consist of just seven chunks of information:

PHD CIA PDF USB FBI BBC TBC
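To make the counting explicit, here is a minimal sketch (in Python; the acronym list and the greedy matching rule are our own illustrative assumptions, not anything proposed by Miller or Hayes) showing how the same 21 letters collapse into seven chunks once familiar acronyms are available to the reader.

# Minimal sketch of chunking: 21 letters become 7 chunks once the reader
# can match substrings against familiar acronyms. The acronym set and the
# greedy left-to-right matching rule are illustrative assumptions only.

FAMILIAR_ACRONYMS = {"PHD", "CIA", "PDF", "USB", "FBI", "BBC", "TBC"}

def chunk(letters, known):
    """Greedily group a letter string into known acronyms,
    falling back to single letters when nothing matches."""
    chunks, i = [], 0
    while i < len(letters):
        for size in (3, 2, 1):                 # try the longest match first
            candidate = letters[i:i + size]
            if candidate in known or size == 1:
                chunks.append(candidate)
                i += size
                break
    return chunks

letters = "PHDCIAPDFUSBFBIBBCTBC"
print(len(chunk(letters, set())))              # 21 chunks: one per letter
print(chunk(letters, FAMILIAR_ACRONYMS))       # 7 chunks: ['PHD', 'CIA', ...]

The letters are unchanged; what shrinks is the number of units the conscious workspace has to hold.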
Of course, this only works if you are familiar with what a USB stick is or know what TBC means. You can go further by connecting these seven chunks together into a meaningful phrase, such as:

The PHD candidate was investigated by the CIA because a suspicious PDF was found on his USB stick. He was arrested by the FBI and reported on the BBC. To Be Continued …

This would be just one chunk, even though it consists of 32 words (and 134 letters!). This is one of the ways that mnemonists, such as Shereshevsky, achieve their incredible memory. It is not just about remembering words and letters (or even numbers). Chunking vastly extends the effective capacity of the conscious workspace: low-level thoughts are combined together into higher-level chunks, these chunks are combined into still higher-level chunks and so on. Physiology students first learn about individual organs: heart, lungs, liver and so on. As they become more knowledgeable, they chunk these organs into systems – circulatory system, immune system, endocrine system – which enables them to think more clearly about the way that organs operate together. As they become expert, they chunk systems into supersystems, which involve interactions between systems, such as the way the endocrine and nervous systems interact. Until finally they have a mental representation of the whole organism. This process of chunking is what makes experts experts, allowing them to think very complex thoughts effortlessly (Ericsson and Smith, 1991). But it is essential for all of us. By packaging many small ideas into increasingly fewer bigger ideas, we can almost literally think big in a small space.

Is Memory Adaptive? Did Memory Evolve?

The above discussion suggests that memory evolved to predict the future (and deal with the present) by storing past experiences so that effective decisions can be made. Experiences can either be kept in their raw form (episodic memories) or abstracted into semantic memories, which enable rapid processing of information about events that have usually been encountered more than once. The more recent ‘adaptive memory’ perspective provides compelling evidence for domain-specific memory processes relevant to ancestral humans. These are early days for this approach, but the effect has been replicated a number of times by different researchers and could potentially revolutionise the way that cognitive psychology studies memory, which has traditionally focused on content-neutral processes such as imageability, depth of processing and familiarity.

Conditional and Logical Reasoning

Conditional reasoning refers to problems that use an IF/THEN format. These are used in social interaction in the making of social contracts or promises. Suppose someone were to say to you ‘if you give me some money, then I will get you a concert ticket’; you give them the money but receive no ticket. Clearly the person has broken their promise to you and has contravened the conditional rule. But conditionals are not only used in the making of promises, they are also used in specifying causal relations among events.
A rule such as IF you drink beer THEN you get a headache is an example of one such causal rule. Suppose you don’t drink beer but still get a headache: has the rule been broken or not? Logically speaking the answer is no, because the rule doesn’t exclude the possibility that you might get a headache by other means (such as reading books on logic, for example). However, if you find someone who has drunk beer and does not suffer from a headache, then the rule has been falsified: you have found an instance which breaks the rule. Suppose that you drink beer and get a headache: does that prove the rule? No, according to the philosopher Karl Popper (1959), because truly to test a hypothesis you need to try to falsify it, as above; you can never prove a scientific theory to be true, only show it to be false. If you put forward the hypothesis all swans are white, then no matter how many confirmatory instances you find you never prove that it is true, because the next swan you find might be black (as indeed some swans are) and the rule will be broken. This is one of the reasons psychology students are discouraged from using the word ‘prove’ when discussing the results of psychology experiments. As Popper once said, there are no true theories, only those that have not yet been shown to be false.

Wason’s Selection Task

In 1966 the psychologist Peter Wason presented participants with a task to see if they reasoned in accordance with the laws of logic. The task used rather abstract conditionals; participants were presented with the conditional rule if a card has a vowel on one side, then it has an even number on the other. They were presented with four cards (see Figure 9.7) and asked which of the cards they would need to turn over to check that the rule was not being broken. Table 9.1 summarises typical results of this experiment (data from Johnson-Laird and Wason, 1970). The task can be interpreted as a problem of logical implication, p implies q (written as p → q), which can also be read as IF p is the case THEN q is also the case. If our goal is to find out whether the rule is being obeyed, then the only cards that bear upon this are the E card and the 3 card. To see why, it is worth spending a few moments stepping through the problem. Turning over the E card (p) is necessary because if you find that the number on the other side is not even (not-q) the rule is falsified. Turning over the K card (not-p) is pointless; it does not matter what is on the other side, because the rule says nothing about consonants. The 4 card (q) might be of initial interest; after all, if you find that there is a vowel on the other side it would add support to the rule. However, according to Popper it should be avoided because it can never falsify the rule, only confirm it. Finally, the 3 card (not-q) should be turned, because there is a chance that there might be a vowel on the other side, in which case the rule is falsified. As we can see from Table 9.1, only 4 per cent made the correct choice, with the majority seeking confirmation rather than refutation by turning over the E and 4 cards (p and q).

Table 9.1 Percentage of choices in the abstract version of the Wason selection task

Cards chosen:               E & 3          E & 4       E only      E, 4 & 3
Expressed logically:        p and not-q    p and q     p only      p, q and not-q
% choosing this response:   4              46          33          7

Figure 9.7 Stimuli used in Wason’s selection task: four cards showing E, K, 4 and 3.
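The verbal argument above can also be checked exhaustively. The short sketch below (in Python; the card contents and helper names are our own illustrative assumptions, not part of Wason’s materials) enumerates everything that could be hidden behind each visible face and asks whether any of it could break the rule if a card has a vowel on one side then it has an even number on the other. Only the E and 3 cards can ever reveal a violation.

# Which cards in the Wason task could ever reveal a violation of the rule
# "if a card has a vowel on one side, then it has an even number on the other"?
# Card contents and helper functions are illustrative assumptions.

VOWELS = set("AEIOU")

def is_vowel(face):
    return isinstance(face, str) and face.upper() in VOWELS

def is_even_number(face):
    return isinstance(face, int) and face % 2 == 0

def violates(letter_side, number_side):
    """The rule p -> q is broken only when p holds and q does not."""
    return is_vowel(letter_side) and not is_even_number(number_side)

# Each visible face, paired with everything that could be on its hidden side.
cards = {
    "E": [2, 3],        # hidden side is some number
    "K": [2, 3],
    4:   ["E", "K"],    # hidden side is some letter
    3:   ["E", "K"],
}

for visible, possible_hidden in cards.items():
    can_falsify = any(
        violates(visible, hidden) if isinstance(visible, str) else violates(hidden, visible)
        for hidden in possible_hidden
    )
    print(f"Turn over {visible!r}? {'yes' if can_falsify else 'no'}")
# Output: E -> yes, K -> no, 4 -> no, 3 -> yes (i.e. the p and not-q cards)

As in the verbal analysis, the 4 card can only ever confirm the rule, never falsify it, which is why Popper’s logic says it can safely be left unturned.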
Griggs and Cox (1982) used a task very similar to the one above but obtained rather different results. They used a rule that was familiar to the participants in their study, namely the laws about the minimum legal age at which people are allowed to consume alcohol. The rule was If a person is drinking alcohol, then they must be over 19 years of age. Participants were presented with four cards as above, showing beer (p), coke (not-p), 16 years of age (not-q) and 22 years of age (q), and were asked to imagine that they were police officers checking for under-age drinkers. When presented with this version of the task most participants correctly chose the p and not-q cards.

Participants’ success at this task is not simply attributable to it being less abstract than Wason’s version. Manktelow and Evans (1979) presented participants with the statement Every time I eat haddock then I drink gin and then gave them the cards haddock, cod, gin and whisky, and found no improvement over the standard task that used letters and numbers. Even the under-age drinking example only works when presented with a suitable context. Pollard and Evans (1987) found that if the police cover story was omitted, performance decreased towards that of the abstract task. So why do people sometimes reason logically and at other times not? Many explanations have been provided (see e.g. Evans and Over, 1996) but here we focus on the one of most relevance to evolutionary psychology.

Domain-Specific Darwinian Algorithms

Cosmides (1989) suggests that one of the main differences between the under-age drinking task and the other tasks mentioned above is that the under-age drinking task makes sense to people because it appeals to their knowledge of, and concern to catch, freeriders. Freeriders (sometimes free loaders) are people who take from others without giving anything back. As well as being annoying, they pose a significant problem for the evolution of cooperation. In computer simulations (see Chapter 8) freeriders, as a result of their selfish and exploitative behaviour, tend to leave behind more offspring than cooperators. Because children tend to resemble their parents, the number of freeriders will increase to the detriment of cooperators. Of course, cooperators do exist – in fact, cooperation is the default human strategy (see Chapter 6) – so in order for cooperators to have evolved they must also have evolved cognitive machinery specifically designed to detect cheats. This is what Cosmides believes is being triggered in the under-age drinking problem. Cosmides suggests that people perform well on the under-age drinking task because it triggers mental circuitry (she prefers the term mental modules; see Box 9.6 and Chapter 5) responsible for detecting people who renege on social contracts. Other tasks, such as the abstract version using even and odd numbers, do not trigger this circuitry and give rise to a different pattern of results (see also Gigerenzer and Hug, 1992). There is some evidence that this is not simply a foible of Western culture: Cosmides and Tooby (1992) have replicated the results of this study with a foraging people from Ecuador.

The abstract versus concrete distinction is therefore a red herring: as we saw above, people can be equally confused by concrete versions of the problem. The important difference is that they are different kinds of problem. The original Wason task is, in logical terms, known as an indicative problem; it relates to facts about the world such as cause and effect.
The under-age drinking problem, on the other hand, is known as a deontic problem, which relates to a sense of duty or doing the right thing. If, as discussed in Chapter 6, morality evolved in order to enable cooperation, then the under-age drinking problem triggers our moral sense (detecting cheats) whereas the original Wason task does not. Why should it? Facts about the world have nothing to do with morality.

Manktelow and Over (1990) present evidence of facilitation in the selection task in cases where no cheating is involved, such as ‘If you clean up spilled blood then you must wear rubber gloves’. People solve this unsavoury example, concerned with the avoidance of contamination, about as well as they solve the under-age drinking example. Cosmides and Tooby (1992) acknowledge that the mind may well contain a large number of innate, domain-specific mental modules for detecting contamination as well as cheating. In fact, in recent years a substantial amount of research has focused on what is called the behavioural immune system (Schaller and Park, 2011). The physical immune system fights pathogens once they have passed through the body’s envelope, whereas the behavioural immune system works to stop pathogens getting anywhere near us. One rich source of pathogens is the various excretions and secretions that ooze or are ejected from other people, such as urine, vomit, faeces, sweat, saliva, mucus and blood. In order to avoid contamination from these effluvia we evolved a sense of disgust, so that we avoid them almost literally like the plague.

Box 9.6 What Is the Domain of a Module?

One of the key claims of the synthesis of evolutionary psychology and modularity is that modules can respond to things for which they were not originally adapted. A bizarre example of this is that Niko Tinbergen reported that sticklebacks he kept in his living room responded aggressively to a red post-office van that visited his house and was clearly visible to the sticklebacks through the window. This represents a misfiring of a mental ‘module’ that sticklebacks use as part of their mating behaviour (male sticklebacks have a red colouration and males usually respond aggressively to other males). Sperber (1994) deals with this issue by proposing a distinction between what he calls the actual and proper domains of a mental module. The actual domain of a module is anything that satisfies its entry requirement; the proper domain, on the other hand, is the stimulus (or stimuli) which, by virtue of triggering the module, gives it adaptive value. So for the stickleback example, the proper domain of the releasing behaviour is the red colouration of male sticklebacks, leading to competitive behaviour, whereas the actual domain consists of lots of things, including red post-office vans. Many of the stimuli in the actual domain will have no fitness consequences (such as post-office vans). Some, however, may have negative fitness consequences. For example, some orchids mimic female bees, leading male bees to copulate with them, hence wasting valuable sperm (and time) and putting themselves at risk of predation. The orchids do this in order to use the bees to fertilise the flower by transmitting pollen.
Table 9.2 Summary of results from abstract, cheat detection and altruist detection tasks

Task                   Correct choice    % choosing correct answer
Abstract (a)           p & not-q         4
Cheat detection        p & not-q         74
Altruist detection     not-p & q         28–40

Note: The data for the altruist detection task vary depending on whether the word used is altruist (28%) or selfless (40%).
(a) See Table 9.1.
Source: Cosmides and Tooby (1992).

Interestingly, this sense of disgust is so powerful that we don’t even enjoy our own effluvia once it has left our body. As Rozin and Fallon (1987) wrote:

We have confirmed this in a questionnaire in which we asked subjects to rate their liking for a bowl of their favorite soup and for the same bowl of soup after they had spit into it. There was a drop in rating for 49 of 50 subjects. (p. 26)

Some research really doesn’t need to be done in order to know the results.

Criticisms of the ‘Cheater-Detector’ Approach

Many psychologists disagree with Cosmides and Tooby’s account of these results (see e.g. Oaksford and Chater, 1994). A further criticism of cheater-detection theory (and of evolutionary psychology more generally) by David Buller (2005b) takes the hypothesis to task for – he claims – ignoring the difference between indicative and deontic tasks (but see Cosmides et al., 2005). His alternative explanation, based on a domain-general mental logic, can explain only the differences between deontic and indicative tasks, not differences within deontic tasks. To understand why there might be differences within deontic tasks one must understand why we have such rules in the first place. Deontic rules, recall, relate to obligations, societal rules or morals. These rules are presumably there to maintain stability within a community; for example, the maxim do not take more than your fair share (with punishments meted out for those who violate the rule) is there to maintain harmony. If resources are limited and everyone is to get a share of them, then it makes sense to place limits on how much each person can reasonably take. If the rule did not exist, then some people would take more than they were due and the resource would soon be depleted, leaving others without. If violations went unpunished (a toothless law) then even fair-minded people would probably start to take more than they deserved (‘if they do it, then why shouldn’t I?’), with a similar depletion of the resource. The almost inevitable depletion of a shared resource because people take more than they should is called ‘the tragedy of the commons’.

According to Cosmides et al. (2010), violations of such deontic rules regulating social exchange should only be detected when (1) a cheater has benefited from cheating, (2) the person did it intentionally rather than accidentally, and (3) cheating by violating the rule is possible. If all of these conditions are met, the cheater-detection algorithm will fire and people will solve the deontic rule correctly; if any one of these criteria is not met, then the cheater-detection system will not fire and people will show reasoning errors such as those for the original Wason selection task. In a series of experiments Cosmides et al. (2010) showed this to be the case. In one example (Experiment 4) participants were told that they were given the task of checking for cheating in allocating children to one of two schools.
The rule was: If a student is to be assigned to Grover High School, then that student must live in Grover City. Participants were given additional information that the people allocating children to schools were volunteers, and that there was concern that some of these volunteers might have vested interests; for example, they might be the parents of children going to one of the schools. In some cases, participants were told that people in Grover paid higher taxes to support their school, which was thus better than the rival school, Hanover High (benefit to cheating); in others, that the schools were the same (no benefit). In some conditions they were told that the students were identified by name (possibility of cheating), in others that they were identified by an anonymity code (no possibility of cheating). Finally, intention was manipulated: some participants were told that volunteers had been overheard planning to break the rule, whereas other participants were merely told to check for mistakes. Cosmides et al. found that people reasoned correctly more frequently in conditions where there was motivation, benefit and the possibility of cheating than in conditions where these were absent.

Summary of Logical Reasoning

The literature on logical reasoning is large and complex, and many attempts have been made to understand why people reason logically in some situations but not others. The cheater-detector theory described above is one of the most successful, not only because – unlike its competitors – it explains the differences between indicative and deontic tasks (Cosmides, 1989) and the differences between different kinds of deontic task (Cosmides et al., 2010), but because it gives a good evolutionary rationale for why these differences exist: they engage mental processes designed for detecting freeriders. It should be made clear that a cheater detector is not just a convenient fiction, a just-so story constructed post hoc to explain a pattern of results – as we pointed out earlier, a way of detecting freeriders was an evolutionary necessity in order for cooperation to evolve.

Statistical Reasoning

Cognitive psychology can sometimes seem to be too frequently discussing things that we are bad at. Visual illusions reveal flaws in our perceptual processes, forgetting suggests a malfunctioning memory and problems of logic are solved illogically. So, to continue this ongoing theme, we next consider research showing just how bad we are at statistics. Here is a problem: if you were to toss a coin ten times and each time it came down heads, what are the chances that it will come down heads if you toss it an eleventh time? Hopefully you are sophisticated enough to realise that since the chances of getting heads or tails are 50-50, and the next toss is independent of (i.e. unaffected by) the previous tosses, the chance of it being heads is 50 per cent (or, as statisticians would say, 0.5, as they prefer to deal with proportions – i.e. fractions of 1 – rather than percentages). What we have just described is often known as the gambler’s fallacy (Tversky and Kahneman, 1971): the belief that a sequence of similar events (a run of heads, for example) increases the likelihood that something different will happen (a tail). So a gambler who has had a run of losses on the slot machines will continue to play because the losing streak surely must be followed by a win.
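The independence claim is easy to check by simulation. The sketch below (in Python; the number of simulated sequences and the random seed are arbitrary choices of ours, purely for illustration) generates random sequences of eleven tosses, keeps only the rare ones that open with ten heads, and looks at the eleventh toss: it comes up heads about half the time, exactly as if the streak had never happened.

import random

# After a run of ten heads, is the eleventh toss any less likely to be a head?
# The number of simulated sequences and the seed are arbitrary illustrative choices.
random.seed(1)

runs_of_ten_heads = 0
heads_on_eleventh = 0

for _ in range(1_000_000):
    tosses = [random.random() < 0.5 for _ in range(11)]   # True = heads
    if all(tosses[:10]):                                   # a streak of ten heads
        runs_of_ten_heads += 1
        heads_on_eleventh += tosses[10]

print(runs_of_ten_heads)                       # roughly 1_000_000 / 2**10, i.e. about 980
print(heads_on_eleventh / runs_of_ten_heads)   # close to 0.5, not close to 0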
There is also an opposite fallacy – although, as we shall see, the two are in fact closely related – called the hot hand fallacy. ‘Hot hand’ is a term with its origins in basketball, and it describes the belief that when a player scores they are more likely to score again in the near future. Such a situation is often described as a player having ‘a streak’, ‘a run of form’ or being ‘on fire’. Research, however, has shown that no such phenomenon exists: when the distribution of successes is examined over many instances, they really are random (Gilovich, Vallone and Tversky, 1985). Of course, in both cases bookmakers exploit these intuitions and make a lot of money off the back of them. So what might explain why we make such egregious errors? One answer might be provided by a research area known as foraging theory (see Box 9.7).

Foraging Theory and Statistical Reasoning

First it is important to emphasise that coin flips, spins of a roulette wheel and successful shots in basketball are, as we have already mentioned, independent of each other, and in this regard they are quite different from many other phenomena of evolutionary importance. Take foraging, for instance. For most of our evolutionary past our ancestors were obligate foragers. (‘Obligate’ just means that they had no choice in the matter; our ancestors couldn’t just order a pizza instead.) According to the psychologist Andreas Wilke, our foraging past can explain both the hot hand fallacy and the gambler’s fallacy; he suggests that rather than being fallacies they reveal an evolutionarily significant sensitivity to the distribution of things in the world (Wilke and Barrett, 2009; Wilke, 2020). Anyone who has ever foraged for anything – picking blackberries, for example – knows that blackberries are not uniformly distributed throughout the world; they occur in clumps, namely blackberry bushes. Food is not the only thing our ancestors cared about that comes in clumps or patches. The sun rises in the morning at a predictable time, stays in the sky all day and sets in the evening; periods of rain are followed by periods of dryness; even people tend to congregate together into villages, hunting parties or foraging groups. Finding one berry predicts another, just as one drop of rain predicts the next. This is the hot hand intuition: the environment genuinely is made up of streaks (or patches, as we have been calling them). But all things must pass; the rain will stop, and the berries will be exhausted. This is the gambler’s intuition. So the argument put forward by Wilke and others is that these are only fallacies to the extent that they are applied to independent and random sequences of events such as roulette wheels and dice throws; but, as Steven Pinker points out (Pinker, 1997), these are sophisticated pieces of technology that have been designed to behave in precisely this way. The rest of the world is not a casino.

Box 9.7 Foraging Theory and the Marginal Value Theorem

Foraging theory (sometimes known as optimal foraging theory; see Stephens and Krebs, 1986) is an approach to the study of animal behaviour developed by behavioural ecologists. The question they ask is quite simple: given that animals (including humans) need to satisfy many needs (for example, feeding, mating, rearing offspring) and have only a limited amount of time in which to do them, how do they manage their time in a sensible way? As the name implies, foraging theory has been most widely applied to food-foraging behaviour, where it asks how animals manage to allocate their time exploiting richer, rather
than poorer food patches (‘patch’ is the word that foraging theory uses for the ‘clumps’ of food in the environment, such as a blackberry bush), taking into account costs such as the energy expended while foraging. Foraging theory has demonstrated that animals are very efficient at finding and exploiting high-quality food patches, as might be expected given the importance of energy and nutrition to an animal’s survival. It has also shown that animals are flexible foragers, able to modulate their behaviour based on internal factors such as need and external factors such as risk of predation. For instance, under normal circumstances animals might avoid food patches where there is a high risk of predation. This is one reason why squirrels avoid venturing out into areas where there is no cover, such as open fields. However, when needs are high, such as when the animal is extremely hungry, they may risk foraging in open spaces. Foraging theory shows that the animals are in fact making some quite complex calculations, assessing the risk of predation and weighing it against the risk of starvation. The risks, it seems, are always calculated.

Another impressive feature of animal cognition is the ability to decide how long to spend in a patch. To return to our blackberry example, anyone who has picked blackberries will know that to begin with you usually get a lot of nice juicy berries with ease, but as time goes on – as you deplete the patch – pickings start to get slimmer as the remaining berries become increasingly inaccessible. So the question that occurs to us and to our animal cousins is, as the Clash put it, ‘should I stay or should I go?’ The answer, as it often is, is: it depends. If it looks like there are other berry bushes nearby, then it makes sense to leave early: why waste time reaching through thorns to get poorer berries from a rapidly depleting patch when you could be plucking them at a faster rate from another bush? On the other hand, if the next bush is some distance away and it would take a significant amount of time and energy to get there, then it makes sense to stay put. Better to spend more time collecting inaccessible berries than to waste time travelling.

All foraging animals understand this, and it is captured in Charnov’s marginal value theorem (named after its creator, Eric Charnov). The theorem is very mathematical, but we can depict the maths graphically in the illustration below. Time goes from left to right and the vertical axis represents the amount of resource accumulated, such as calories consumed or berries picked. The red curve (called a ‘gain curve’) shows that over time the resource depletes: the longer you stay, the less you get per unit time. When should you leave the patch? It depends on how long it will take to get to the next patch, which is represented by the travel time on the left-hand side of the graph. The green dotted line shows a short travel time to the next patch, the blue hashed line a longer travel time. To determine how long to stay in the patch you look at the point at which each of these lines touches the gain curve at a tangent. You can see that the longer the travel time to the next patch (blue line), the longer you should remain exploiting the patch. Incidentally, the theorem also predicts that the further away from the supermarket you live, the bigger the shop you will tend to do. The marginal value theorem has been applied to behaviours as diverse as apple picking in humans, insect hunting in great tits, root growth in plants (soil nutrients are patchy too) and how long dung flies spend having sex.
[Figure, Box 9.7: the marginal value theorem. A gain curve plots cumulative resource gain against time in the patch; tangent lines drawn from a short and a longer travel time mark the optimal patch-leaving times.]
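For readers who want the maths behind the tangent construction, the theorem can be stated compactly. In the sketch below (the notation g(t) for the gain curve and T for the travel time is ours, chosen purely for illustration), the forager maximises its long-run rate of gain, and the maximum falls exactly where a line drawn from the start of the travel period touches the gain curve at a tangent, as in the figure.

% A sketch of Charnov's marginal value theorem; the notation is illustrative.
% g(t): cumulative gain after t time units in the patch (decelerating, so g''(t) < 0)
% T:    travel time to the next patch
% R(t): long-run rate of gain if the forager always leaves a patch after time t
\[
  R(t) = \frac{g(t)}{T + t}
\]
% Setting dR/dt = 0 gives the optimal residence time t*:
\[
  g'(t^{*}) = \frac{g(t^{*})}{T + t^{*}}
\]
% In words: leave the patch when the instantaneous rate of return, g'(t), has
% fallen to the average rate achievable across the environment as a whole.
% Because the right-hand side shrinks as T grows, a longer travel time implies
% a later leaving time t*, which is the pattern shown in the figure.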
