Biopsychology of Emotion
Summary
This document covers the biopsychology of emotion, stress, and health, including historical introductions and the role of the autonomic nervous system. It discusses early research, Darwin’s theory, the James-Lange and Cannon-Bard theories, and Klüver-Bucy Syndrome. Focus is also given to specific patterns of autonomic nervous system activity in relation to specific emotions.
Full Transcript
This chapter is about the biopsychology of emotion, stress, and health. It begins with a historical introduction to the biopsychology of emotion and then focuses in the next two modules on the dark end of the emotional spectrum: fear. Biopsychological research on emotions has concentrated on fear not because biopsychologists are a scary bunch, but because fear has three important qualities: It is the easiest emotion to infer from behavior in various species; it plays an important adaptive function in motivating the avoidance of threatening situations; and chronic fear is one common source of stress. In the final two modules of the chapter, you will learn how some brain structures have been implicated in human emotion, and how stress increases susceptibility to illness.

Biopsychology of Emotion: Introduction

To introduce the biopsychology of emotion, this module reviews several classic early discoveries and then discusses the role of the autonomic nervous system in emotional experience and the facial expression of emotion.

Early Landmarks in the Biopsychological Investigation of Emotion

LO 1.1 Summarize the major events in the history of research on the biopsychology of emotion.

This section describes, in chronological sequence, six early landmarks in the biopsychological investigation of emotion. It begins with the 1848 case of Phineas Gage. In 1994, Damasio and her colleagues brought the power of computerized reconstruction to bear on Gage's classic case. They began by taking an x-ray of the skull and measuring it precisely, paying particular attention to the position of the entry and exit holes. From these measurements, they reconstructed the accident and determined the likely region of Gage's brain damage (see Figure 1.1). It was apparent that the damage to Gage's brain affected both medial prefrontal lobes, which we now know are involved in planning, decision making, and emotion (see Jin & Maren, 2015; Lee & Seo, 2016; Simon, Wood, & Moghaddam, 2015).

DARWIN'S THEORY OF THE EVOLUTION OF EMOTION. The first major event in the study of the biopsychology of emotion was the publication in 1872 of Darwin's book The Expression of the Emotions in Man and Animals. In it, Darwin argued that particular emotional responses, such as human facial expressions, tend to accompany the same emotional states in all members of a species. Darwin believed that expressions of emotion, like other behaviors, are products of evolution; he therefore tried to understand them by comparing them in different species (see Brecht & Freiwald, 2012). From such interspecies comparisons, Darwin developed a theory of the evolution of emotional expression that was composed of three main ideas:

1. Expressions of emotion evolve from behaviors that indicate what an animal is likely to do next.
2. If the signals provided by such behaviors benefit the animal that displays them, they will evolve in ways that enhance their communicative function, and their original function may be lost.
3. Opposite messages are often signaled by opposite movements and postures, an idea called the principle of antithesis.

Consider how Darwin's theory accounts for the evolution of threat displays. Originally, facing one's enemies, rising up, and exposing one's weapons were the components of the early stages of combat. But once enemies began to recognize these behaviors as signals of impending aggression, a survival advantage accrued to attackers that could communicate their aggression most effectively and intimidate their victims without actually fighting.
As a result, elaborate threat displays evolved, and actual combat declined.

To be most effective, signals of aggression and submission must be clearly distinguishable; thus, they tended to evolve in opposite directions. For example, gulls signal aggression by pointing their beaks at one another and submission by pointing their beaks away from one another; primates signal aggression by staring and submission by averting their gaze. Figure 1.2 reproduces the woodcuts Darwin used in his 1872 book to illustrate this principle of antithesis in dogs.

JAMES-LANGE AND CANNON-BARD THEORIES. The first physiological theory of emotion was proposed independently by James and Lange in 1884. According to the James-Lange theory, emotion-inducing sensory stimuli are received and interpreted by the cortex, which triggers changes in the visceral organs via the autonomic nervous system and in the skeletal muscles via the somatic nervous system. Then, the autonomic and somatic responses trigger the experience of emotion in the brain. In effect, what the James-Lange theory did was to reverse the usual commonsense way of thinking about the causal relation between the experience of emotion and its expression (see Figure 1.3). James and Lange argued that the autonomic activity and behavior that are triggered by the emotional event (e.g., rapid heartbeat and running away) produce the feeling of emotion, not vice versa (see Figure 1.3).

Around 1915, Cannon proposed an alternative to the James-Lange theory of emotion, and it was subsequently extended and promoted by Bard. According to the Cannon-Bard theory, emotional stimuli have two independent excitatory effects: They excite both the feeling of emotion in the brain and the expression of emotion in the autonomic and somatic nervous systems. That is, the Cannon-Bard theory, in contrast to the James-Lange theory, views emotional experience and emotional expression as parallel processes that have no direct causal relation.

The James-Lange and Cannon-Bard theories make different predictions about the role of feedback from autonomic and somatic nervous system activity in emotional experience. According to the James-Lange theory, emotional experience depends entirely on feedback from autonomic and somatic nervous system activity; according to the Cannon-Bard theory, emotional experience is totally independent of such feedback. Both extreme positions have proved to be incorrect. On the one hand, it seems that the autonomic and somatic feedback is not necessary for the experience of emotion: Human patients whose autonomic and somatic feedback has been largely eliminated by a broken neck are capable of a full range of emotional experiences, though there does seem to be some dampening of fear and anger (see Pistoia et al., 2015). On the other hand, there have been numerous reports—some of which you will soon encounter—that autonomic and somatic responses to emotional stimuli can influence emotional experience.

Failure to find unqualified support for either the James-Lange or the Cannon-Bard theory led to the modern biopsychological view. According to this view, each of the three principal factors in an emotional response—the perception of the emotion-inducing stimulus, the autonomic and somatic responses to the stimulus, and the experience of the emotion—can influence the other two (e.g., Scherer & Moors, 2019; see Figure 1.3).
SHAM RAGE. In the late 1920s, Bard (1929) discovered that decorticate cats—cats whose cortex has been removed—respond aggressively to the slightest provocation: After a light touch, they arch their backs, erect their hair, hiss, and expose their teeth. The aggressive responses of decorticate animals are abnormal in two respects: They are inappropriately severe, and they are not directed at particular targets. Bard referred to the exaggerated, poorly directed aggressive responses of decorticate animals as sham rage. Sham rage can be elicited in cats whose cerebral hemispheres have been removed down to, but not including, the hypothalamus; but it cannot be elicited if the hypothalamus is also removed. On the basis of this observation, Bard concluded that the hypothalamus is critical for the expression of aggressive responses and that the function of the cortex is to inhibit and direct these responses.

LIMBIC SYSTEM AND EMOTION. In 1937, Papez (pronounced "Payps") proposed that emotional expression is controlled by several interconnected nuclei and tracts that ring the thalamus. Figure 1.4 illustrates some of the key structures in this circuit: the amygdala, mammillary body, hippocampus, fornix, cingulate cortex, septum, olfactory bulb, and hypothalamus. Papez proposed that emotional states are expressed through the action of the other structures of the circuit on the hypothalamus and that they are experienced through their action on the cortex. Papez's theory of emotion was revised and expanded by Paul MacLean in 1952 and became the influential limbic system theory of emotion. Indeed, many of the structures in Papez's circuit are part of what is now known as the limbic system.

KLÜVER-BUCY SYNDROME. In 1939, Klüver and Bucy observed a striking syndrome (pattern of behavior) in monkeys whose anterior temporal lobes had been removed. This syndrome, which is commonly referred to as the Klüver-Bucy syndrome, includes the following behaviors: the consumption of almost anything that is edible, increased sexual activity often directed at inappropriate objects, a tendency to repeatedly investigate familiar objects, a tendency to investigate objects with the mouth, and a lack of fear. Monkeys that could not be handled before surgery were transformed by bilateral anterior temporal lobectomy into tame subjects that showed no fear whatsoever—even in response to snakes, which terrify normal monkeys. In primates, most of the symptoms of the Klüver-Bucy syndrome have been attributed to damage to the amygdala (see LeDoux, Michel, & Lau, 2020; Schröder, Moser, & Huggenberger, 2020), a structure that has played a major role in research on emotion, as you will learn later in this chapter. The Klüver-Bucy syndrome has been observed in several species. Following is a description of the syndrome in a human patient with a brain infection.
A Human Case of Klüver-Bucy Syndrome

At first he was listless, but eventually he became very placid with flat affect. He reacted little to people or to other aspects of his environment. He spent much time staring at the television, even when it was not turned on. On occasion he would become extremely silly, smiling inappropriately and mimicking the actions of others, and once he began copying the movements of another person, he would persist for extended periods of time. In addition, he tended to engage in oral exploration, sucking, licking, or chewing all small objects that he could reach.

Emotions and the Autonomic Nervous System

LO 1.2 Summarize the research on the relationship between the autonomic nervous system and emotions.

Research on the role of the autonomic nervous system (ANS) in emotion has focused on two issues: the degree to which specific patterns of ANS activity are associated with specific emotions and the effectiveness of ANS measures in polygraphy (lie detection).

EMOTIONAL SPECIFICITY OF THE AUTONOMIC NERVOUS SYSTEM. The James-Lange and Cannon-Bard theories differ in their views of the emotional specificity of the autonomic nervous system. The James-Lange theory says that different emotional stimuli induce different patterns of ANS activity and that these different patterns produce different emotional experiences. In contrast, the Cannon-Bard theory claims that all emotional stimuli produce the same general pattern of sympathetic activation, which prepares the organism for action (i.e., increased heart rate, increased blood pressure, pupil dilation, increased flow of blood to the muscles, increased respiration, and increased release of epinephrine and norepinephrine from the adrenal medulla).

The experimental evidence suggests that the specificity of ANS reactions lies somewhere between the extremes of total specificity and total generality (see Kreibig, 2010; Quigley & Barrett, 2014). On one hand, ample evidence indicates that not all emotions are associated with the same pattern of ANS activity; on the other, there is no evidence that each emotion is characterized by a distinct pattern of ANS activity (see Siegel et al., 2018).

The six early landmarks in the study of brain mechanisms of emotion just reviewed are listed in Table 1.1.

POLYGRAPHY. Polygraphy (more commonly known as the "lie detector test") is a method of interrogation that employs ANS indexes of emotion to infer the truthfulness of a person's responses. Polygraph tests administered by skilled examiners can be useful additions to normal interrogation procedures, but they are far from infallible.

The main problem in evaluating the effectiveness of polygraphy is that it is rarely possible in real-life situations to know for certain whether a suspect is guilty or innocent. Consequently, many studies of polygraphy have employed the mock-crime procedure: Volunteers participate in a mock crime and are then subjected to a polygraph test by an examiner who is unaware of their "guilt" or "innocence." The usual interrogation method is the control-question technique, in which the physiological response to the target question (e.g., "Did you steal that purse?") is compared with the physiological responses to control questions whose answers are known (e.g., "Have you ever been in jail before?"). The assumption is that lying will be associated with greater sympathetic activation.
A review of the use of the control-question technique in real-life crime settings led to an estimated success rate of about 55 percent—just slightly better than chance (i.e., 50 percent) (see Iacono & Ben-Shakhar, 2019).

Despite being commonly referred to as lie detection, polygraphy detects ANS activity, not lies. Consequently, it is less likely to successfully identify lies in real life than in experiments. In real-life situations, questions such as "Did you steal that purse?" are likely to elicit an emotional reaction from all suspects, regardless of their guilt or innocence, making it difficult to detect deception (see Ambach & Gamer, 2018).

The guilty-knowledge technique, also known as the concealed information test, circumvents this problem. In order to use this technique, the polygrapher must have a piece of information concerning the crime that would be known only to the guilty person. Rather than attempting to catch the suspect in a lie, the polygrapher simply assesses the suspect's reaction to a list of actual and contrived details of the crime. Innocent suspects, because they have no knowledge of the crime, react to all such details in the same way; the guilty react differentially (see Ambach & Gamer, 2018).

In the classic study of the guilty-knowledge technique (Lykken, 1959), volunteers waited until the occupant of an office went to the washroom. Then, they entered her office, stole her purse from her desk, removed the money, and left the purse in a locker. The critical part of the interrogation went something like this: "Where do you think we found the purse? In the washroom? ... In a locker? ... Hanging on a coat rack?" Even though electrodermal activity was the only measure of ANS activity used in this study, 88 percent of the mock criminals were correctly identified; more importantly, none of the innocent control volunteers was judged guilty (see Ben-Shakhar, 2012; Ambach & Gamer, 2018).

Emotions and Facial Expression

LO 1.3 Describe some research on the facial expression of emotions.

Ekman and his colleagues have been preeminent in the study of facial expression (see Ekman, 2016). They began in the 1960s by analyzing hundreds of films and photographs of people experiencing various real emotions. From these, they compiled an atlas of the facial expressions that are normally associated with different emotions (Ekman & Friesen, 1975). For example, to produce the facial expression for surprise, models were instructed to pull their brows upward so as to wrinkle their forehead, to open their eyes wide so as to reveal white above the iris, to slacken the muscles around their mouth, and to drop their jaw. Try it.

UNIVERSALITY OF FACIAL EXPRESSION. Several early studies found that people of different cultures make similar facial expressions in similar situations and that they can correctly identify the emotional significance of facial expressions displayed by people from cultures other than their own.
The most convincing of these studies was a study of the members of an isolated New Guinea tribe who had had little or no contact with the outside world (see Ekman & Friesen, 1971).

PRIMARY FACIAL EXPRESSIONS. Ekman and Friesen concluded that the facial expressions of the following six emotions are primary: surprise, anger, sadness, disgust, fear, and happiness. They further concluded that all other facial expressions of genuine emotion are composed of mixtures of these six primaries. Figure 1.5 illustrates these six primary facial expressions.

FACIAL FEEDBACK HYPOTHESIS. Is there any truth to the old idea that putting on a happy face can make you feel better? Research suggests that there is. The hypothesis that our facial expressions influence our emotional experience is called the facial feedback hypothesis. In a test of the facial feedback hypothesis, Rutledge and Hupka (1985) instructed volunteers to assume one of two patterns of facial contractions while they viewed a series of slides; the patterns corresponded to happy or angry faces, although the volunteers were unaware of that. They reported that the slides made them feel more happy and less angry when they were making happy faces and less happy and more angry when they were making angry faces (see Figure 1.6). A recent meta-analysis of the facial feedback hypothesis confirmed the reliability of these and similar findings; however, the effects were smaller than originally believed (see Coles, Larsen, & Lench, 2019).

Check It Out: Experiencing Facial Feedback

Why don't you try the facial feedback hypothesis? Pull your eyebrows down and together; raise your upper eyelids and tighten your lower eyelids, and narrow your lips and press them together. Now, hold this expression for a few seconds. If it makes you feel slightly angry and uncomfortable, you have just experienced the effect of facial feedback.

VOLUNTARY CONTROL OF FACIAL EXPRESSION. Because we can exert voluntary control over our facial muscles, it is possible to inhibit true facial expressions and to substitute false ones. There are many reasons for choosing to put on a false facial expression. Some of them are positive (e.g., putting on a false smile to reassure a worried friend), and some are negative (e.g., putting on a false smile to disguise a lie). In either case, it is difficult to fool an expert. There are two ways of distinguishing true expressions from false ones (Ekman, 1985). First, microexpressions (brief facial expressions) of the real emotion often break through the false one (see Wang et al., 2015). Such microexpressions last only about 0.05 second, but with practice they can be detected without the aid of slow-motion photography. Second, there are often subtle differences between genuine facial expressions and false ones that can be detected by skilled observers.

The most widely studied difference between a genuine and a false facial expression was first described by the French anatomist Duchenne in 1862. Duchenne said that the smile of enjoyment could be distinguished from deliberately produced smiles by consideration of the two facial muscles that are contracted during genuine smiles: orbicularis oculi, which encircles the eye and pulls the skin from the cheeks and forehead toward the eyeball, and zygomaticus major, which pulls the lip corners up (see Figure 1.7). According to Duchenne, the zygomaticus major can be controlled voluntarily, whereas the orbicularis oculi is normally contracted only by genuine pleasure. Thus, inertia of the orbicularis oculi in smiling unmasks a false friend—a fact you would do well to remember. Ekman named the genuine smile the Duchenne smile.

FACIAL EXPRESSIONS: CURRENT PERSPECTIVES. Ekman's work on facial expressions began before video recording became commonplace.
Now, video recordings provide almost unlimited access to natural facial expressions made in response to real-life situations. This technology has contributed to four important qualifications to Ekman's original theory. First, it is now clear that Ekman's six primary facial expressions of emotion rarely occur in pure form—they are ideals with many subtle variations. Second, the existence of other primary emotions has been recognized (see Whalen et al., 2013). Third, body cues, not just facial expressions, are known to play a major role in expressions of emotion (see Sznycer, 2019). For example, pride is expressed through a small smile, with the head tilted back slightly and the hands on the hips, raised above the head, or clenched in fists with the arms crossed on the chest—see Figure 1.8 (see Witkower & Tracy, 2019). Fourth, there is evidence that Ekman's six primary facial expressions may not be as universal as originally believed. For example, there seem to be distinct differences, in terms of both the expression and recognition of facial expressions, between Western Caucasian and East Asian individuals (see Calvo & Nummenmaa, 2015; Jack et al., 2012; Wood et al., 2016). Moreover, recent studies of isolated tribes by Crivelli et al. (2016, 2017) indicate that facial expressions of emotion are not as universal as once thought.

Fear, Defense, and Aggression

Most biopsychological research on emotion has focused on fear and defensive behaviors. Fear is the emotional reaction to threat; it is the motivating force for defensive behaviors. Defensive behaviors are behaviors whose primary function is to protect the organism from threat or harm. In contrast, aggressive behaviors are behaviors whose primary function is to threaten or harm.

Although one purpose of this module is to discuss fear, defense, and aggression, it has another important purpose: to explain a common problem faced by biopsychologists and the way in which those who conduct research in this particular area have managed to circumvent it. Barrett (2006) pointed out that progress in the study of the neural basis of emotion has been limited because neuroscientists have often been guided by unsubstantiated cultural assumptions about emotion: Because we have words such as fear, happiness, and anger in our language, scientists have often assumed that these emotions exist as entities in the brain, and they have searched for them—often with little success. The following lines of research on fear, defense, and aggression illustrate how biopsychologists can overcome the problem of vague, subjective, everyday concepts by basing their search for neural mechanisms on thorough descriptions of relevant behaviors.

Types of Aggressive and Defensive Behaviors

LO 1.4 Describe the work that led to the distinction between aggressive and defensive behaviors in mammals.

Considerable progress in the understanding of aggressive and defensive behaviors has come from the research of Blanchard and Blanchard (see Blanchard, Summers, & Blanchard, 2013; Koolhaas et al., 2013) on the colony-intruder model of aggression and defense in rats. Blanchard and Blanchard have derived rich descriptions of rat intraspecific aggressive and defensive behaviors by studying the interactions between the alpha male—the dominant male—of an established mixed-sex colony and a small male intruder: Upon encountering the intruder, the alpha male typically chases it away, repeatedly biting its back during the pursuit.
The intruder eventually stops running and turns to face the alpha male. The intruder then rears up on its hind legs, still facing its attacker and using its forelimbs to ward off the attack. In response, the alpha male changes to a lateral orientation, with the side of its body perpendicular to the front of the defending intruder. Then, the alpha moves sideways toward the intruder, crowding and trying to push it off balance. If the defending intruder stands firm against this "lateral attack," the alpha often reacts by making a quick lunge around the defender's body in an attempt to bite its back. In response to such attacks, the defender pivots on its hind feet, in the same direction as the attacker is moving, continuing its frontal orientation to the attacker in an attempt to prevent the back bite.

Another excellent illustration of how careful observation of behavior has led to improved understanding of aggressive and defensive behaviors is provided by Pellis and colleagues' (1988) study of cats. They began by videotaping interactions between cats and mice. They found that different cats reacted to mice in different ways: Some were efficient mouse killers, some reacted defensively, and some seemed to play with the mice. Careful analysis of the "play" sequences led to two important conclusions. The first conclusion was that, in contrast to the common belief, cats do not play with their prey; the cats that appeared to be playing with the mice were simply vacillating between attack and defense. The second conclusion was that one can best understand each cat's interactions with mice by locating the interactions on a linear scale, with total aggressiveness at one end, total defensiveness at the other, and various proportions of the two in between.

Pellis and colleagues tested their conclusions by reducing the defensiveness of the cats with an antianxiety drug. As predicted, the drug moved each cat along the scale toward more efficient killing. Cats that avoided mice before the injection "played with" them after the injection, those that "played with" them before the injection killed them after the injection, and those that killed them before the injection killed them more quickly after the injection.

Based on the numerous detailed descriptions of aggressive and defensive behaviors provided by the Blanchards, Pellis and colleagues, and other biopsychologists who have followed their example, most researchers now distinguish among different categories of such behaviors. These categories of aggressive and defensive behaviors are based on three criteria: (1) their topography (form), (2) the situations that elicit them, and (3) their apparent function. Several of these categories for rats are described in Table 1.2 (see Blanchard et al., 2011; Jager et al., 2017; Kim & Jung, 2018).

The analysis of aggressive and defensive behaviors has led to the development of the target-site concept—the idea that the aggressive and defensive behaviors of an animal are often designed to attack specific sites on the body of another animal while protecting specific sites on its own. For example, the behavior of a socially aggressive rat (e.g., lateral attack) appears to be designed to deliver bites to the defending rat's back and to protect its own face, the likely target of a defensive attack. Conversely, most of the maneuvers of the defending rat (e.g., boxing and pivoting) appear to be designed to protect the target site on its back.
The discovery that aggressive and defensive behaviors occur in a variety of stereotypical species-common forms was the necessary first step in the identification of their neural bases. Because the different categories of aggressive and defensive behaviors are mediated by different neural circuits, little progress was made in identifying these circuits before the categories were first delineated. For example, the lateral septum was once believed to inhibit all aggression, because lateral septal lesions rendered laboratory rats notoriously difficult to handle—the behavior of the lesioned rats was commonly referred to as septal aggression or septal rage. However, we now know that lateral septal lesions do not increase aggression: Rats with lateral septal lesions do not initiate more attacks, but they are hyperdefensive when threatened.

Aggression and Testosterone

LO 1.5 Describe the relation between testosterone levels and aggression in males.

The fact that social aggression in many species occurs more commonly among males than among females is usually explained with reference to the organizational and activational effects of testosterone. The brief period of testosterone release that occurs around birth in genetic males is thought to organize their nervous systems along masculine lines and hence to create the potential for male patterns of social aggression to be activated by the high testosterone levels that are present after puberty. These organizational and activational effects have been demonstrated in some mammalian species. For example, neonatal castration of male mice eliminates the ability of testosterone injections to induce social aggression in adulthood, and adult castration eliminates social aggression in male mice that do not receive testosterone replacement injections. Unfortunately, research on testosterone and aggression in other species has not been so straightforward (see Carré & Olmstead, 2015).

The extensive comparative research literature on testosterone and aggression has been reviewed several times (Demas et al., 2005; Munley, Rendon, & Demas, 2018; Soma, 2006). Here are the major conclusions:

Testosterone increases social aggression in the males of many species, and aggression is largely abolished by castration in these same species (see Hashikawa et al., 2018). In some species, castration has no effect on social aggression; in still others, castration reduces social aggression during the breeding season but not at other times.

The relation between aggression and testosterone levels is difficult to interpret because engaging in aggressive activity can itself increase testosterone levels—for example, just playing with a gun increased the testosterone levels of male college students (Klinesmith, Kasser, & McAndrew, 2006).

The blood level of testosterone, which is the only measure used in many studies, is not the best measure. What matters more are the testosterone levels in the relevant areas of the brain. Although studies focusing on brain levels of testosterone are rare, it has been shown that testosterone can be synthesized in particular brain sites and not in others.

It is unlikely that humans are an exception to the usual involvement of testosterone in mammalian social aggression. However, the evidence is far from clear.
In human males, aggressive behavior does not increase at puberty as testosterone levels in the blood increase; aggressive behavior is not eliminated by castration; and it is not increased by testosterone injections that elevate blood levels of testosterone. A few studies have found that violent male criminals (see Fragkaki, Cima, & Granic, 2018) and aggressive male and female athletes tend to have higher testosterone levels than normal (see Batrinos, 2012; Denson et al., 2018); however, this correlation may indicate that aggressive behaviors increase testosterone, rather than vice versa.

The lack of strong evidence of the involvement of testosterone in human aggression could mean that hormonal and neural regulation of aggression in humans differs from that in many other mammalian species. Or, it could mean that the research on human aggression and testosterone is flawed. For example, human studies are typically based on blood levels of testosterone (often inferred from saliva levels because collecting saliva is safer and easier than collecting blood) rather than on brain levels. However, the blood levels of a hormone aren't necessarily indicative of how much hormone is reaching the brain. Also, the researchers who study human aggression have often failed to appreciate the difference between social aggression, which is related to testosterone in many species, and defensive attack, which is not (see Montoya et al., 2012; Sobolewski, Brown, & Mitani, 2013). Most seemingly aggressive outbursts in humans are overreactions to real or perceived threat, and thus they are more appropriately viewed as defensive attack, not social aggression.

Neural Mechanisms of Fear Conditioning

Much of what we know about the neural mechanisms of fear has come from the study of fear conditioning. Fear conditioning is the establishment of fear in response to a previously neutral stimulus (the conditional stimulus) by presenting it, usually several times, before the delivery of an aversive stimulus (the unconditional stimulus).

In a standard fear conditioning experiment, the subject, often a rat, hears a tone (conditional stimulus) and then receives a mild electric shock to its feet (unconditional stimulus). After several pairings of the tone and the shock, the rat responds to the tone with a variety of defensive behaviors (e.g., freezing and increased susceptibility to startle) and sympathetic nervous system responses (e.g., increased heart rate and blood pressure). LeDoux and his colleagues have mapped the neural mechanism that mediates this form of auditory fear conditioning (see Kim & Jung, 2018; LeDoux, 2014).

Amygdala and Fear Conditioning

LO 1.6 Describe the role of the amygdala in fear conditioning.

LeDoux and his colleagues began their search for the neural mechanisms of auditory fear conditioning (fear conditioning that uses a sound as a conditional stimulus) by making lesions in the auditory pathways of rats. They found that bilateral lesions to the medial geniculate nucleus (the auditory relay nucleus of the thalamus) blocked fear conditioning to a tone, but bilateral lesions to the auditory cortex did not. This indicated that for auditory fear conditioning to occur, it is necessary for signals elicited by the tone to reach the medial geniculate nucleus but not the auditory cortex. It also indicated that a pathway from the medial geniculate nucleus to a structure other than the auditory cortex plays a key role in fear conditioning.
This pathway proved to be the pathway from the medial geniculate nucleus to the amygdala. Lesions of the amygdala, like lesions of the medial geniculate nucleus, blocked auditory fear conditioning. The amygdala receives input from all sensory systems, and it is believed to be the structure in which the emotional significance of sensory signals is learned and retained.

Several pathways carry signals from the amygdala to brain stem structures that control the various emotional responses (see Dampney, 2015). For example, a pathway to the periaqueductal gray of the midbrain elicits appropriate defensive responses (see Kim et al., 2013), whereas another pathway to the lateral hypothalamus elicits appropriate sympathetic responses.

The fact that auditory cortex lesions do not disrupt fear conditioning to simple tones does not mean that the auditory cortex is not involved in auditory fear conditioning. There are two pathways from the medial geniculate nucleus to the amygdala: the direct one, which you have already learned about, and an indirect one that projects via the auditory cortex. Both routes are capable of mediating fear conditioning to simple sounds; if only one is destroyed, conditioning progresses normally. However, only the cortical route is capable of mediating fear conditioning to complex sounds (see Chang & Grace, 2015).

Figure 1.9 illustrates the circuit of the brain that is thought to mediate the effects of fear conditioning to an auditory conditional stimulus (see Calhoun & Tye, 2015; Herry & Johansen, 2014). The sound signal from an auditory conditional stimulus travels from the medial geniculate nucleus of the thalamus to reach the amygdala directly, or indirectly via the auditory cortex. The amygdala assesses the emotional significance of the sound on the basis of previous encounters with it, and then the amygdala activates the appropriate response circuits—for example, behavioral circuits in the periaqueductal gray and sympathetic circuits in the hypothalamus.

Contextual Fear Conditioning and the Hippocampus

LO 1.7 Describe the role of the hippocampus in contextual fear conditioning.

Environments, or contexts, in which fear-inducing stimuli are encountered can come to elicit fear. For example, if you repeatedly encountered a bear on a particular trail in the forest, the trail itself would begin to elicit fear. This process is called contextual fear conditioning.

Contextual fear conditioning has been produced in the laboratory in two ways. First, it has been produced by the conventional fear conditioning procedure, which we just discussed. For example, if a rat repeatedly receives an electric shock following a conditional stimulus, such as a tone, the rat will become fearful of the conditional context (the test chamber) as well as the tone. Second, contextual fear conditioning has been produced by delivering aversive stimuli in a particular context in the absence of any other conditional stimulus. For example, if a rat receives shocks in a distinctive test chamber, the rat will become fearful of that chamber.

In view of the fact that the hippocampus plays a role in memory for spatial location, it is reasonable to expect that it would be involved in contextual fear conditioning. This seems to be the case (see Chaaya, Battle, & Johnson, 2018; Maren, Phan, & Liberzon, 2013).
Bilateral hippocampal lesions block the subsequent development of a fear response to the context without blocking the development of a fear response to the explicit conditional stimulus (e.g., a tone; see Moscarello & Maren, 2018).

The preceding discussion has probably left you with the impression that the amygdala is a single brain structure; it isn't. It is actually a cluster of many nuclei, often referred to as the amygdala complex. The amygdala is composed of a dozen or so major nuclei, which are themselves divided into subnuclei. Each of these subnuclei is structurally distinct, has different connections, and is thus likely to have different functions (see Duvarci & Pare, 2014; Janak & Tye, 2015).

The study of fear conditioning provides a compelling demonstration of the inadvisability of assuming that the amygdala is a single structure. Evidence has been accumulating that the lateral nucleus of the amygdala—not the entire amygdala—is critically involved in the acquisition, storage, and expression of conditioned fear (see Duvarci & Pare, 2014; Janak & Tye, 2015; Tovote, Fadok, & Lüthi, 2015). Both the prefrontal cortex and the hippocampus project to the lateral nucleus of the amygdala: The prefrontal cortex is thought to act on the lateral nucleus of the amygdala to suppress conditioned fear (see Gilmartin, Balderston, & Helmstetter, 2014), and the hippocampus is thought to interact with that part of the amygdala to mediate learning about the context of fear-related events. The amygdala is thought to control defensive behavior via outputs from the central nucleus of the amygdala (see Janak & Tye, 2015; Kim & Jung, 2018; Pellman & Kim, 2016; Ressler & Maren, 2019).

Brain Mechanisms of Human Emotion

This module deals with the brain mechanisms of human emotion. We still do not know how the human brain controls the experience or expression of emotion, or how the brain interprets emotion in others, but progress has been made. Each of the following sections illustrates an area of progress.

Cognitive Neuroscience of Emotion

LO 1.9 Describe the current status of cognitive neuroscience research on emotion.

Cognitive neuroscience is currently the dominant approach being used to study the brain mechanisms of human emotion. There have been many functional brain-imaging studies of people experiencing or imagining emotions or watching others experiencing them. These studies have established three points that have advanced our understanding of the brain mechanisms of emotion in fundamental ways (see Neumann et al., 2014; Wood et al., 2016):

1. Brain activity associated with each human emotion is diffuse—there is not a center for each emotion (see Feinstein, 2013). Think "mosaic," not "center," for locations of brain mechanisms of emotion.
2. There is virtually always activity in motor and sensory cortices when a person experiences an emotion.
3. Similar patterns of brain activity tend to be recorded when a person experiences an emotion, imagines that emotion, or sees somebody else experience that emotion (see Figure 1.10).

These three fundamental findings are influencing how researchers are thinking about the neural mechanisms of emotion. For example, the activity observed in sensory and motor cortex during the experience of human emotions is now believed to be an important part of the mechanism by which the emotions are experienced.
The re-experiencing of related patterns of motor, autonomic, and sensory neural activity during emotional experiences is generally referred to as the embodiment of emotions (see Wang et al., 2016).

Amygdala and Human Emotion

LO 1.10 Describe the role of the amygdala in human emotion.

You have already learned that the amygdalae play an important role in fear conditioning in rats. Numerous functional brain-imaging studies have suggested that the function of the human amygdalae is more general. Although the human amygdalae appear to respond most robustly to fear, they also respond to other emotions (see Hsu et al., 2015; Koelsch & Skouras, 2014; Patin & Pause, 2015). Indeed, the amygdalae appear to play a role in the performance of any task with an emotional component, whether positive or negative (see Fastenrath et al., 2014; Stillman, Van Bavel, & Cunningham, 2015). This has led to the view that the amygdalae play a role in evaluating the emotional significance of situations.

Although the results of brain-imaging studies suggest that the amygdalae play a general role in emotions, the study of some patients with amygdalar damage suggests a specific role in fear. The following case illustrates this point. The case of S.P. is similar to reported cases of Urbach-Wiethe disease (see Meletti et al., 2014). Urbach-Wiethe disease is a genetic disorder that often results in calcification (hardening due to the deposition of calcium) of the amygdala and surrounding anterior medial temporal lobe structures in both hemispheres. One Urbach-Wiethe patient with bilateral amygdalar damage was found to have lost the ability to recognize facial expressions of fear (see Adolphs, 2006). Indeed, she could not describe fear-inducing situations or produce fearful expressions, although she had no difficulty on tests involving other emotions.

Medial Prefrontal Lobes and Human Emotion

LO 1.11 Describe the role of the medial prefrontal lobes in human emotion.

Emotion and cognition are often studied independently, but it is now believed that they are better studied as different components of the same system (see Barrett & Satpute, 2013). The medial portions of the prefrontal lobes (including the medial portions of the orbitofrontal cortex and anterior cingulate cortex) are the sites of emotion–cognition interaction that have received the most attention (e.g., Etkin, Büchel, & Gross, 2015; Hiser & Koenigs, 2017; Kragel et al., 2018). Functional brain-imaging studies have found evidence of activity in the medial prefrontal lobes when emotional reactions are being cognitively suppressed or re-evaluated (see Okon-Singer et al., 2015).

Many studies of medial prefrontal lobe activity employ suppression paradigms or reappraisal paradigms. In studies that use suppression paradigms, participants are directed to inhibit their emotional reactions to unpleasant films or pictures; in studies that use reappraisal paradigms, participants are instructed to reinterpret a picture to change their emotional reaction to it. The medial prefrontal lobes are active when both of these paradigms are used, and they seem to exert their cognitive control of emotion by interacting with the amygdala (see Whalen et al., 2013). Many theories of the specific functions of the medial prefrontal lobes have been proposed.
The medial prefrontal lobes have been hypothesized to monitor the difference between outcome and expectancy (see Diekhof et al., 2012), to encode stimulus value over time (Tsetsos et al., 2014), to predict the likelihood of error (see Hoffmann & Beste, 2015), to mediate the conscious awareness of emotional stimuli (see Mitchell & Greening, 2011), and to mediate social decision making (see Lee & Seo, 2016; Phelps, Lempert, & Sokol-Hessner, 2014). Which hypothesis is correct? Perhaps all are; the medial prefrontal cortex is large and complex, and it likely performs many functions.

This point was made by the study of Kawasaki and colleagues (2005). Kawasaki and colleagues used microelectrodes to record from 267 neurons in the anterior cingulate cortices (part of the medial prefrontal cortex) of four patients prior to surgery. They assessed the activity of the neurons when the patients viewed photographs with emotional content. Of these 267 neurons, 56 responded most strongly and consistently to negative emotional content. This confirms previous research linking the medial prefrontal lobes with negative emotional reactions, but it also shows that not all neurons in the area perform the same function—neurons directly involved in emotional processing appear to be sparse and widely distributed in the human medial prefrontal lobes.

Lateralization of Emotion

LO 1.12 Describe the research on the lateralization of emotion.

There is evidence suggesting that emotional functions are lateralized; that is, the left and right cerebral hemispheres are specialized to perform different emotional functions. This evidence has led to several theories of the cerebral lateralization of emotion; the following are the two most prominent (see Gainotti, 2019):

1. The right hemisphere model of the cerebral lateralization of emotion holds that the right hemisphere is specialized for all aspects of emotional processing: perception, expression, and experience of emotion.
2. The valence model proposes that the right hemisphere is specialized for processing negative emotion and the left hemisphere is specialized for processing positive emotion.

Which of the two theories does the evidence support? Most studies of the cerebral lateralization of emotion have employed functional brain-imaging methods, and the results have been complex and variable. Wager and colleagues (2003) performed a meta-analysis of the data from 65 such studies. The main conclusion of Wager and colleagues was that the current theories of lateralization of emotion are too general from a neuroanatomical perspective. Overall comparisons between left and right hemispheres revealed no interhemispheric differences in either the amount of emotional processing or the valence of the emotions being processed. However, when the comparisons were conducted on a structure-by-structure basis, they revealed substantial evidence of lateralization of emotional processing. Some kinds of emotional processing were lateralized to the left hemisphere in certain structures and to the right in others. Functional brain-imaging studies of emotion have commonly observed lateralization in the amygdalae—more activity is often observed in the left amygdala. Clearly, neither the right hemisphere model nor the valence model of the lateralization of emotion is supported by the evidence. The models are too general.

Another approach to studying the lateralization of emotions is based on observing the asymmetry of facial expressions.
In most people, each facial expression begins on the left side of the face and, when fully expressed, is more pronounced there—which implies right hemisphere dominance for facial expressions (see Figure 1.11). Remarkably, the same asymmetry of facial expressions has been documented in monkeys (see Lindell, 2013).

Neural Mechanisms of Human Emotion: Current Perspectives

LO 1.13 Describe the current perspective on the neural mechanisms of human emotion that has emerged from brain-imaging studies.

Although there is a general consensus that the amygdalae and medial prefrontal cortex play major roles in the perception and experience of human emotion, the results of brain-imaging studies have put this consensus into perspective (see Pessoa, 2018; Todd et al., 2020). Here are four important points:

1. Emotional situations produce widespread increases in cerebral activity, not just in the amygdalae and prefrontal cortex.
2. All brain areas activated by emotional stimuli are also activated during other psychological processes.
3. No brain structure has been invariably linked to a particular emotion.
4. The same emotional stimuli often activate different areas in different people.

Stress and Health

When the body is exposed to harm or threat, the result is a cluster of physiological changes generally referred to as the stress response—or just stress. All stressors (experiences that induce the stress response) produce the same core pattern of physiological changes, whether psychological (e.g., dismay at the loss of one's job) or physical (e.g., long-term exposure to cold). However, it is chronic psychological stress that has been most frequently implicated in ill health, and it is this type of stress that is the focus of this module.

The Stress Response

LO 1.14 Describe the components of the stress response.

Hans Selye (pronounced "SELL-yay") first described the stress response in the 1950s, and he emphasized its dual nature. In the short term, it produces adaptive changes that help the animal respond to the stressor (e.g., mobilization of energy resources); in the long term, however, it produces changes that are maladaptive (e.g., enlarged adrenal glands). Selye attributed the stress response to the activation of the anterior pituitary adrenal cortex system. He concluded that stressors acting on neural circuits stimulate the release of adrenocorticotropic hormone (ACTH) from the anterior pituitary, that ACTH in turn triggers the release of glucocorticoids from the adrenal cortex, and that the glucocorticoids produce many of the components of the stress response (see Russell & Lightman, 2019; Shirazi et al., 2015; Spiga et al., 2014). The level of circulating glucocorticoids is the most commonly employed physiological measure of stress.

Selye largely ignored the contributions of the sympathetic nervous system to the stress response. However, stressors activate the sympathetic nervous system, thereby increasing the amounts of epinephrine and norepinephrine released from the adrenal medulla. Most modern theories of stress acknowledge the roles of both the anterior pituitary adrenal cortex system and the sympathetic nervous system adrenal medulla system (see Carter & Goldstein, 2015). Figure 1.12 illustrates the two-system view.

The major feature of Selye's landmark theory is its assertion that both physical and psychological stressors induce the same general stress response. This assertion has proven to be partly correct.
There is good evidence that all kinds of common psychological stressors—such as losing a job, taking a final exam, or ending a relationship—act like physical stressors. However, Selye's contention that there is only one stress response has proven to be a simplification. Stress responses are complex and varied, with the exact response depending on the stressor, its timing, the nature of the stressed person, and how the stressed person reacts to the stressor (see Hostinar, Sullivan, & Gunnar, 2014; Oken, Chamine, & Wakeland, 2015). For example, in a study of women awaiting surgery for possible breast cancer, the levels of stress were lower in those who had convinced themselves that they could not possibly have cancer, that their prayers were certain to be answered, or that it was counterproductive to worry (see Katz et al., 1970).

In the 1990s, there was an important advance in the understanding of the stress response (see Grippo & Scotti, 2013). It was discovered that stressors produce physiological reactions that participate in the body's inflammatory responses. Most notably, it was found that stressors produce an increase in blood levels of cytokines, a group of peptide hormones that are released by many cells and participate in a variety of physiological and immunological responses, causing inflammation and fever (see Padro & Sanders, 2014).

Animal Models of Stress

LO 1.15 Describe research on animal models of stress, including that on subordination stress.

Most of the early research on stress was conducted with nonhumans, and even today most lines of stress research begin with controlled experiments involving nonhumans before moving to correlational studies of humans. Early stress research on nonhumans tended to involve extreme forms of stress, such as repeated exposure to electric shock or long periods of physical restraint. There are two problems with this kind of research. First is the problem of ethics. Any research that involves creating stressful situations is going to be controversial, but many of the early stress studies were "over the top" and would not be permitted today in many countries. The second problem is that studies that use extreme, unnatural forms of stress are often of questionable scientific value. Responses to extreme stress tend to mask normal variations in the stress response, and it is difficult to relate the results of such studies to common human stressors.

Better animal models of stress involve the study of social threat from conspecifics (members of the same species). Virtually all mammals—particularly males—experience threats from conspecifics at certain points in their lives. When conspecific threat becomes an enduring feature of daily life, the result is subordination stress (e.g., Rodriguez-Arias et al., 2016). Subordination stress is most readily studied in social species that form dominance hierarchies (pecking orders). What do you think happens to subordinate male rodents who are continually attacked by more dominant males? They are more likely to attack juveniles, and they have smaller testes, shorter life spans, lower blood levels of testosterone, and higher blood levels of glucocorticoids (see Barik et al., 2013). If it has not already occurred to you, the chronic social threat that induces subordination stress in the members of many species is termed bullying in our own.
Psychosomatic Disorders: The Case of Gastric Ulcers

LO 1.16 Describe how our view of psychosomatic disorders has been refined by the results of research on gastric ulcers.

Interest in the pathological effects of stress has increased as researchers have identified more and more psychosomatic disorders (medical disorders in which psychological factors play a causal role). So many adverse effects of stress on health (e.g., in heart disease, asthma, and skin disorders) have been documented that it is now more reasonable to think of most, if not all, medical disorders as psychosomatic.

Gastric ulcers were one of the first medical disorders to be classified as psychosomatic. Gastric ulcers are painful lesions to the lining of the stomach and duodenum, which in extreme cases can be life threatening. About 500,000 new cases are reported each year in the United States.

The view of gastric ulcers as the prototypical psychosomatic disorder changed with the discovery that they seemed to be caused by bacteria. It was claimed that the bacterium Helicobacter pylori (i.e., H. pylori) is responsible for all cases of gastric ulcers except those caused by nonsteroidal anti-inflammatory agents such as aspirin. This seemed to rule out stress as a causal factor, but a consideration of the evidence suggests otherwise.

There is no denying that H. pylori damages the stomach wall or that antibiotic treatment of gastric ulcers helps many sufferers. The facts do, however, suggest that H. pylori infection alone is insufficient to produce the disorder in most people. Although most patients with gastric ulcers display signs of H. pylori infection, so too do many healthy individuals (see Maixner et al., 2016; Testerman & Morris, 2014). Also, antibiotics improve the condition of many patients with gastric ulcers, but so do psychological treatments—and they do it without reducing signs of H. pylori infection. Apparently, another factor increases the susceptibility of the stomach wall to damage from H. pylori, and this factor appears to be stress. Gastric ulcers occur more commonly in people living in stressful situations, and stressors can produce gastric ulcers in laboratory animals.

Psychoneuroimmunology: Stress, the Immune System, and the Brain

LO 1.17 Define psychoneuroimmunology, and describe the four components that make up our bodies' defenses against foreign pathogens.

A major change in the study of psychosomatic disorders came in the 1970s with the discovery that stress can increase susceptibility to infectious diseases. Up to that point, infectious diseases had been regarded as "strictly physical." The discovery that stress can increase susceptibility to infection led to the emergence of a new field of research in the early 1980s: psychoneuroimmunology—the study of interactions among psychological factors, the nervous system, and the immune system. Psychoneuroimmunological research is the focus of this section. Let's begin with an introduction to the immune system.

Microorganisms of every description revel in the warm, damp, nutritive climate of your body. However, the body has four lines of defense to keep it from being overwhelmed. First is what has been termed the behavioral immune system: Humans are motivated to avoid contact with individuals who are displaying symptoms of illness (see Murray & Schaller, 2016), and their bodies are primed to respond more aggressively to infection when they perceive signs of infection in others (see Schaller et al., 2010).
Second are a variety of surface barriers that keep the body from being overwhelmed. The major surface barrier is skin, but there are other mechanisms that protect against invasion through bodily openings (e.g., the respiratory tract, eyes, and gastrointestinal tract). These mechanisms include coughing, sneezing, tears, mucus, and numerous chemical barriers.

If microorganisms do manage to breach the surface barriers and enter the body, they are met by two additional lines of defense: the innate immune system and the adaptive immune system. Together, these two lines of defense constitute the immune system (see Kipnis, 2018; Pringle, 2013).

INNATE IMMUNE SYSTEM. The innate immune system is the first component of the immune system to react. It reacts quickly and generally near points of entry of pathogens (disease-causing agents) into the body. It is triggered when receptors called toll-like receptors (so named because they are similar to Toll, a receptor previously discovered in fruit flies) bind to molecules on the surface of the pathogens or when injured cells send out alarm signals (see De Nardo, 2015). The reaction of the innate immune system includes a complex, but general, array of chemical and cellular reactions—general in the sense that the reactions to all pathogens are the same.

One of the first reactions of the innate immune system to the invasion of pathogens is inflammation (swelling). Inflammation is triggered by the release of chemicals from damaged cells. Particularly influential are the cytokines, which attract leukocytes (white blood cells) and other phagocytes (cells that engulf and destroy pathogens) into the infected area. Microglia are phagocytes that are specific to the central nervous system (see Aguzzi, Barres, & Bennett, 2013; Su et al., 2016). Cytokines also promote healing of the damaged tissue once the pathogens are destroyed (see Kyritsis et al., 2012; Werneburg et al., 2017).

Phagocytosis (destruction of pathogens by phagocytes) is thought to be one of the first immune reactions to have evolved. Phagocytes have been identified in all vertebrates and invertebrates that have been examined. A phagocyte is shown attacking bacteria in Figure 1.13.

ADAPTIVE IMMUNE SYSTEM. The adaptive immune system differs from the innate immune system in the following four respects:

It evolved more recently, first appearing in early vertebrates.
It is slower; its immune reaction to pathogens takes longer to be fully manifested.
It is specific, in the sense that it reacts against specific antigens.
It has a memory; once it has reacted against a particular pathogen, it reacts more effectively against that same pathogen in the future.

The main cells of the adaptive immune system are specialized leukocytes called lymphocytes. Lymphocytes are produced in bone marrow and the thymus gland and are stored in the lymphatic system until they are activated. There are two major classes of lymphocytes: T cells and B cells (see Plesnila, 2016). Cell-mediated immunity is directed by T cells (T lymphocytes); antibody-mediated immunity is directed by B cells (B lymphocytes).

The cell-mediated immune reaction begins when a phagocyte ingests a foreign microorganism. The phagocyte then displays the microorganism's antigens (molecules, usually proteins, that can trigger an immune response) on the surface of its cell membrane, and this display attracts T cells.
Each T cell has two kinds of receptors on its surface: one for molecules that are normally found on the surface of phagocytes and other body cells, and one for a specific foreign antigen. There are millions of different receptors for foreign antigens on T cells, but there is only one kind on each T cell, and there are only a few T cells with each kind of receptor. Once a T cell with a receptor for the foreign antigen binds to the surface of an infected macrophage, a series of reactions is initiated. Among these reactions is the multiplication of the bound T cell, creating more T cells with the specific receptor necessary to destroy all invaders that contain the target antigens and all body cells that have been infected by the invaders.

The antibody-mediated immune reaction begins when a B cell binds to a foreign antigen for which it contains an appropriate receptor. This causes the B cell to multiply and to synthesize a lethal form of its receptor molecules. These lethal receptor molecules, called antibodies, are released into the extracellular fluid, where they bind to the foreign antigens and destroy or deactivate the microorganisms that possess them. Memory B cells for the specific antigen are also produced during this process; these cells have a long life and accelerate antibody-mediated immunity if there is a subsequent infection by the same microorganism.

The memory of the adaptive immune system is the mechanism that gives vaccinations their prophylactic (preventive) effect—vaccination involves administering a weakened form of a virus so that if the virus later invades, the adaptive immune system is prepared to act against it. For example, smallpox has been largely eradicated by programs of vaccination with its largely benign relative, the cowpox virus. The process of creating immunity through vaccination is termed immunization.

Until recently, most immunological research focused on the adaptive immune system; however, the discovery of the role of cytokines in the innate immune system stimulated interest in that system.

WHAT EFFECT DOES STRESS HAVE ON IMMUNE FUNCTION: DISRUPTIVE OR BENEFICIAL? It is widely believed that the main effect of stress on immune function is disruptive. We are sure you have heard this from family members, friends, and even physicians. But is this true?

One of the logical problems with the view that stress always disrupts immune function is that it is inconsistent with the principles of evolution. Virtually every individual organism encounters many stressors during the course of its life, and it is difficult to see how a maladaptive response to stress, such as a disruption of immune function, could have evolved—or could have survived if it had been created by a genetic accident or as a spandrel (a nonadaptive byproduct of an adaptive evolutionary change).

Two events have helped clarify the relation between stress and immune function. The first was the meta-analysis of Segerstrom and Miller (2004), which reviewed about 300 previous studies of stress and immune function. Segerstrom and Miller found that the effects of stress on immune function depended on the kind of stress. They found that acute (brief) stressors (i.e., those lasting less than 100 minutes, such as public speaking, an athletic competition, or a musical performance) actually led to improvements in immune function.
Not surprisingly, the improvements in immune function following acute stress occurred mainly in the innate immune system, whose components can be marshaled quickly. In contrast, chronic (long-lasting) stressors, such as caring for an ill relative or experiencing a period of unemployment, adversely affected the adaptive immune system. Stress that disrupts health or other aspects of functioning is called distress, and stress that improves health or other aspects of functioning is called eustress.

The second event that helped clarify the relation between stress and immune function was the discovery of the bidirectional role played by the cytokines of the innate immune system. Short-term cytokine-induced inflammatory responses help the body combat infection, whereas long-term cytokine release is associated with a variety of adverse health consequences (see Dhabhar, 2014). This finding provided an explanation of the pattern of results discovered by Segerstrom and Miller's meta-analysis.

HOW DOES STRESS INFLUENCE IMMUNE FUNCTION? The mechanisms by which stress influences immune function have been difficult to specify because there are so many possibilities. Stress produces widespread changes in the body through its effects on the anterior-pituitary adrenal-cortex system and the sympathetic-nervous-system adrenal-medulla system, and there are innumerable mechanisms by which those systems can influence immune function. For example, lymphocytes—both T cells and B cells—have receptors for glucocorticoids, epinephrine, and norepinephrine. In addition, many of the neuropeptides that are released by neurons are also released by cells of the immune system. Conversely, cytokines, originally thought to be produced only by cells of the immune system, have been found to be produced by cells of the nervous system (see Jin & Yamashita, 2016).

It is important to appreciate that there are also behavioral routes by which stress can affect immune function. For example, people under severe stress often change their diet, exercise, sleep, and drug use, any of which could influence immune function. Also, the behavior of a stressed or ill person can produce stress and illness in others. For example, Wolf and colleagues (2007) found that stress in mothers aggravates asthmatic symptoms in their children; conversely, asthma in the children increases measures of stress in their mothers.

DOES STRESS AFFECT SUSCEPTIBILITY TO INFECTIOUS DISEASE? You have just learned that stress influences immune function. Most people assume that this means that stress increases susceptibility to infectious diseases. But it doesn't mean this at all, and it is important that you understand why.

Journal Prompt 1.4
Before reading further, try jotting down some reasons why it would be a mistake to think that stress increases susceptibility to infectious diseases.

There are at least three reasons why stress-produced decreases in immune function may not be reflected in an increased susceptibility to infectious disease:

The immune system seems to have many redundant components; thus, disruption of one of them may have little or no effect on vulnerability to infection.
Stress-produced changes in immune function may be too short-lived to have substantial effects on the probability of infection.
Declines in some aspects of immune function may induce compensatory increases in others.

It has been difficult to prove that stress causes increases in susceptibility to infectious diseases in humans.
One reason for this difficulty is that only correlational studies are possible. Numerous studies have reported positive correlations between stress and ill health in humans; for example, students in one study reported more respiratory infections during final exams (Glaser et al., 1987). However, interpretation of such correlations is never straightforward: People may report more illness during times of stress because they expect to be more ill, because their experience of illness during times of stress is more unpleasant, or because the stress changed their behavior in ways that increased their susceptibility to infection.

Despite the difficulties of proving a direct causal link between stress and susceptibility to infectious disease in humans, the evidence for such a link is strong. Three basic types of evidence, when considered together, are persuasive:

Correlational studies in humans—as you have just learned—have found correlations between stress levels and numerous measures of health.
Controlled experiments conducted with laboratory animals show that stress can increase susceptibility to infectious disease in these species.
A few partially controlled studies of humans have added greatly to the weight of evidence.

One of the first partially controlled studies demonstrating stress-induced increases in the susceptibility of humans to infectious disease was conducted by Cohen and colleagues (1991). Using questionnaires, they assessed psychological stress levels in 394 healthy participants. Each participant was then randomly assigned to receive nasal drops that contained either a respiratory virus or saline alone, and all of the participants were quarantined until the end of the study. A higher proportion of the participants who scored high on the stress scales developed colds.
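To make the logic of such a partially controlled study concrete, here is a minimal Python sketch that compares cold rates in high- versus low-stress, virus-exposed groups. The counts are invented for illustration (they are not Cohen and colleagues' data), and the two-proportion z-test is just one reasonable way to quantify the difference.

```python
import math

def two_proportion_z_test(infected_a, n_a, infected_b, n_b):
    """Two-sided two-proportion z-test for a difference in infection rates."""
    p_a, p_b = infected_a / n_a, infected_b / n_b
    p_pool = (infected_a + infected_b) / (n_a + n_b)                 # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))      # SE of the difference
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # normal approximation
    return p_a, p_b, z, p_value

# Hypothetical virus-exposed participants, split by questionnaire stress score.
high_stress_colds, high_stress_n = 38, 80   # invented counts
low_stress_colds, low_stress_n = 21, 80     # invented counts

p_high, p_low, z, p = two_proportion_z_test(
    high_stress_colds, high_stress_n, low_stress_colds, low_stress_n
)
print(f"cold rate: high stress {p_high:.2f}, low stress {p_low:.2f}")
print(f"relative risk = {p_high / p_low:.2f}, z = {z:.2f}, p = {p:.3f}")
```

This sketch captures only the core group comparison, not the full analysis reported in the published study.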
Early Experience of Stress

LO 1.18 Describe the effects of early exposure to severe stress.

Early exposure to severe stress can have a variety of adverse effects on subsequent development. Children subjected to maltreatment or other forms of severe stress display a variety of brain and endocrine system abnormalities (see Klengel & Binder, 2015). For example, early exposure to stress often increases the intensity of subsequent stress responses (e.g., increases the release of glucocorticoids in response to stressors).

It is important to understand that the developmental period during which early stress can adversely affect neural and endocrine development begins before birth. Many experiments have demonstrated the adverse effects of prenatal stress in laboratory animals: Pregnant females have been exposed to stressors, and the adverse effects of that exposure on their offspring have subsequently been documented (e.g., Sowa et al., 2015).

One particularly interesting line of research on the role of early experience in the development of the stress response began with the observation that handling of rat pups by researchers for a few minutes per day during the first few weeks of the rats' lives has a variety of salutary (health-promoting) effects (see Raineki, Lucion, & Weinberg, 2014). The majority of these effects seemed to result from a decrease in the magnitude of the handled pups' responses to stressful events. As adults, rats that had been handled as pups displayed smaller increases in circulating glucocorticoids in response to stressors (see Francis & Meaney, 1999). It seemed remarkable that a few hours of handling early in life could have such a significant and lasting effect. However, evidence supports an alternative interpretation. Liu and colleagues (1997) found that handled rat pups are groomed (licked) more by their mothers, and they hypothesized that the salutary effects of the early handling resulted from the extra grooming, rather than from the handling itself. They confirmed this hypothesis by showing that unhandled rat pups that received a lot of grooming from their mothers developed the same profile of reduced glucocorticoid release that was observed in handled pups (see Champagne et al., 2008).

Early separation of rat pups from their mothers seems to have effects opposite to those that result from high levels of early grooming (see Zhang et al., 2013). For example, rats that are separated from their mothers in infancy display elevated behavioral and hormonal responses to stress as adults.

Stress and the Hippocampus

LO 1.19 Describe the effects of stress on the hippocampus.

Exposure to stress affects the structure and function of the brain in a variety of ways (see Lupien et al., 2018; McEwen, Gray, & Nasca, 2015; Sandi & Haller, 2015). However, the hippocampus appears to be particularly susceptible to stress-induced effects (see Kim, Pellman, & Kim, 2015; McEwen, Nasca, & Gray, 2016). The reason for this susceptibility may be the particularly dense population of glucocorticoid receptors in the hippocampus.

Stress has been shown to reduce dendritic branching in the hippocampus, to reduce adult neurogenesis in the hippocampus (see Egeland, Zunszain, & Pariante, 2015), to modify the structure of some hippocampal synapses, and to disrupt the performance of hippocampus-dependent tasks (see Kim, Pellman, & Kim, 2015). These effects of stress on the hippocampus appear to be mediated by elevated glucocorticoid levels: They can be induced by corticosterone (a major glucocorticoid) and can be blocked by adrenalectomy (surgical removal of the adrenal glands)—see de Quervain, Schwabe, & Roozendaal (2017) and Shirazi et al. (2015).

CONCLUSION. In this chapter, you have learned that the amygdala plays a role in emotion. The chapter ends with a troubling case that reinforces this point. Fortunately, not everybody reacts in the same way to amygdalar damage.

CHAPTER 2

This chapter is about the biopsychology of psychiatric disorders (disorders of psychological function sufficiently severe to require treatment). One of the main difficulties in studying or treating psychiatric disorders is that they are difficult to diagnose. The psychiatrist or clinical psychologist must first decide whether a patient's psychological function is pathological or merely an extreme of normal human variation: For example, does a patient with a poor memory suffer from a pathological condition, or is he merely a healthy person with a poor memory? If a patient is judged to be suffering from a psychiatric disorder, then the particular disorder must be diagnosed. Because we cannot yet identify the specific brain pathology associated with various disorders, their diagnosis usually rests entirely on the patient's symptom profile. Currently, diagnosis is guided by the DSM-5 (the current edition of the Diagnostic and Statistical Manual of the American Psychiatric Association). There are two main difficulties in diagnosing particular psychiatric disorders: (1) patients suffering from the same disorder often display different symptoms, and (2) patients suffering from different disorders often display many of the same symptoms.
Consequently, experts often disagree on the diagnosis of particular cases, and the guidelines provided by the DSM change with each new edition (see Blashfield et al., 2014). One purpose of this chapter is to help you understand why it is important to periodically revise the diagnostic criteria for psychiatric disorders.

This chapter begins with discussions of five sorts of psychiatric disorders: schizophrenia, depressive disorders, bipolar disorder, anxiety disorders, and Tourette's disorder. It ends with a description of how new psychotherapeutic drugs are developed and tested.

Schizophrenia: The Case of Lena

Lena's mother was hospitalized with schizophrenia when Lena was 2. As a child, Lena displayed periods of hyperactivity; as an adolescent, she was viewed as odd. She enjoyed her classes and got good grades, but she had few friends. Shortly after their marriage, Lena's husband noticed that Lena was becoming more withdrawn. She would sit for hours barely moving a muscle, often having lengthy discussions with nonexistent people.

One day, Lena's husband found her sitting on the floor in an odd posture, staring into space. She was totally unresponsive. When he tried to move her, Lena displayed waxy flexibility—that is, she reacted like a mannequin, not resisting movement and holding her new position until she was moved again. She was diagnosed with schizophrenia with catatonia (schizophrenia characterized by long periods of immobility and waxy flexibility). In the hospital, Lena displayed a speech pattern exhibited by some individuals with schizophrenia: echolalia (vocalized repetition of some or all of what has just been heard).

Doctor: How are you feeling today?
Lena: I am feeling today, feeling the feelings today.
Doctor: Are you still hearing the voices?
Lena: Am I still hearing the voices, voices?

What Is Schizophrenia?

LO 2.1 Describe the positive and negative symptoms of schizophrenia, and provide specific examples of each.

Schizophrenia means "the splitting of psychic functions." The term was coined in the early years of the 20th century to describe what was assumed at the time to be the primary symptom of the disorder: the breakdown of integration among emotion, thought, and action.

Schizophrenia is considered to be a severe psychiatric disorder. It affects about 1 percent of individuals of all races and cultural groups, typically beginning in adolescence or early adulthood (see Sikela & Quick, 2018). Schizophrenia occurs in many forms, but the case of Lena introduces you to some of its common features (Meyer & Salmon, 1988).

The major difficulty in studying and treating schizophrenia is accurately defining it (see Bhati, 2013). Its symptoms are complex and diverse; they overlap greatly with those of other psychiatric disorders and frequently change during the progression of the disorder. Also, various neurological conditions (e.g., complex seizures) have symptoms that might suggest a diagnosis of schizophrenia. Because the current definition of schizophrenia overlaps with that of several other disorders, the DSM-5 prefers the label schizophrenia spectrum disorders to refer to schizophrenia and related disorders (see Bhati, 2013). The following are some symptoms of schizophrenia, although none of them appears in all cases.
In an effort to categorize cases of schizophrenia so that they can be studied and treated more effectively, it is common practice to consider positive symptoms (symptoms that seem to represent an excess of typical function) separately from negative symptoms (symptoms that seem to represent a reduction or loss of typical function)—see Smigielski et al. (2020).

Examples of positive symptoms include the following:

Delusions. Delusions of being controlled (e.g., "Martians are making me steal"), delusions of persecution (e.g., "My mother is poisoning me"), or delusions of grandeur (e.g., "Steph Curry admires my jump shot").
Hallucinations. Imaginary voices making critical comments or telling patients what to do.
Inappropriate affect. Reacting with an inappropriate emotional response to positive or negative events.
Disorganized speech or thought. Illogical thinking, peculiar associations among ideas, belief in supernatural forces.
Odd behavior. Talking in rhymes, difficulty performing everyday tasks.

Examples of negative symptoms include the following:

Affective flattening. Diminished emotional expression.
Avolition. Reduction or absence of motivation.
Catatonia. Remaining motionless, often in awkward positions, for long periods.

The frequent recurrence of any two of these symptoms for 1 month is currently sufficient for the diagnosis of schizophrenia—provided that one of the symptoms is delusions, hallucinations, or disorganized speech.
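The rule just stated is easy to express as a decision procedure. The Python sketch below encodes only the simplified criterion given in this module (two or more of the listed symptoms recurring for a month, at least one of them being delusions, hallucinations, or disorganized speech); it is not an implementation of the complete DSM-5 criteria, and the symptom labels are ad hoc strings chosen for illustration.

```python
# Simplified sketch of the diagnostic rule stated above; not the full DSM-5 criteria.
CORE_SYMPTOMS = {"delusions", "hallucinations", "disorganized speech"}
OTHER_SYMPTOMS = {"inappropriate affect", "odd behavior",
                  "affective flattening", "avolition", "catatonia"}

def meets_simplified_criterion(symptoms_present_one_month):
    """Return True if two or more recognized symptoms have recurred for a month
    and at least one of them is a core symptom (delusions, hallucinations,
    or disorganized speech)."""
    symptoms = set(symptoms_present_one_month) & (CORE_SYMPTOMS | OTHER_SYMPTOMS)
    return len(symptoms) >= 2 and bool(symptoms & CORE_SYMPTOMS)

# Hallucinations plus affective flattening satisfy the simplified rule;
# avolition plus catatonia do not, because neither is a core symptom.
print(meets_simplified_criterion({"hallucinations", "affective flattening"}))  # True
print(meets_simplified_criterion({"avolition", "catatonia"}))                  # False
```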
Discovery of the First Antipsychotic Drugs

LO 2.2 Describe the discovery of the first two widely prescribed antipsychotic drugs.

The first major breakthrough in the study of the biochemistry of schizophrenia was the accidental discovery in the early 1950s of the first antipsychotic drug (a drug that is meant to treat certain symptoms of schizophrenia and bipolar disorder): chlorpromazine. Chlorpromazine was developed by a French drug company as an antihistamine. Then, in 1950, a French surgeon noticed that chlorpromazine given prior to surgery to counteract swelling had a calming effect on some of his patients, and he suggested that it might have a calming effect on difficult-to-handle patients with psychosis (a loss of touch with reality). His suggestion triggered research that led to the discovery that chlorpromazine alleviates the symptoms of schizophrenia: Agitated patients with schizophrenia were calmed by chlorpromazine, and emotionally blunted patients with schizophrenia were activated by it. Don't get the idea that chlorpromazine cures schizophrenia. It doesn't. But it often reduces the severity of symptoms enough to allow institutionalized patients to be discharged.

Shortly after the antipsychotic action of chlorpromazine was first documented, an American psychiatrist became interested in reports that the snakeroot plant had long been used in India for the treatment of mental illness. He gave reserpine—the active ingredient of the snakeroot plant—to his patients with schizophrenia and confirmed its antipsychotic action. Reserpine is no longer used in the treatment of schizophrenia because it produces a dangerous decline in blood pressure at the doses needed for successful treatment.

Although the chemical structures of chlorpromazine and reserpine are dissimilar, their antipsychotic effects are similar in two major respects. First, the antipsychotic effect of both drugs is manifested only after a patient has been medicated for 2 or 3 weeks. Second, the onset of this antipsychotic effect is usually associated with motor effects similar to the symptoms of Parkinson's disease (e.g., muscular rigidity, a general decrease in voluntary movement). These similarities suggested to researchers that chlorpromazine and reserpine were acting through the same mechanism—one that was related to Parkinson's disease.

The Dopamine Theory of Schizophrenia

LO 2.3 Describe the evolution of the dopamine theory of schizophrenia.

The next major breakthrough in the study of schizophrenia came from research on Parkinson's disease. In 1960, it was reported that the striatums (caudates plus putamens; see Figure 3.28) of persons with Parkinson's disease had been depleted of dopamine (see Goetz, 2011). This finding suggested that a disruption of dopaminergic transmission might produce both Parkinson's disease and the antipsychotic effects of chlorpromazine and reserpine. Thus was born the dopamine theory of schizophrenia—the theory that schizophrenia is caused by too much dopamine and, conversely, that antipsychotic drugs exert their effects by decreasing dopamine levels (see McCutcheon, Abi-Dargham, & Howes, 2019).

Lending instant support to the dopamine theory of schizophrenia were two already well-established facts. First, the antipsychotic drug reserpine was known to deplete the brain of dopamine and other monoamines by breaking down the synaptic vesicles in which these neurotransmitters are stored. Second, drugs such as amphetamine and cocaine, which can trigger episodes that resemble schizophrenia in healthy users, were known to increase the extracellular levels of dopamine and other monoamines in the brain.

An important step in the evolution of the dopamine theory of schizophrenia came in 1963, when Carlsson and Lindqvist assessed the effects of chlorpromazine on extracellular levels of dopamine and its metabolites (substances that are created by the breakdown of another substance in cells). Although they expected to find that chlorpromazine, like reserpine, depletes the brain of dopamine, they didn't. The extracellular levels of dopamine were unchanged by chlorpromazine, and the extracellular levels of its metabolites were increased. The researchers concluded that both chlorpromazine and reserpine antagonize transmission at dopamine synapses but that they do it in different ways: reserpine by depleting the brain of dopamine and chlorpromazine by binding to dopamine receptors. Carlsson and Lindqvist argued that chlorpromazine is a receptor blocker at dopamine synapses—that is, it binds to dopamine receptors without activating them and, in so doing, keeps dopamine from activating them (see Figure 2.1). We now know that many psychoactive drugs are receptor blockers, but chlorpromazine was the first to be identified as such.

Figure 2.1 Chlorpromazine is a receptor blocker at dopamine synapses. Chlorpromazine was the first receptor blocker to be identified, and its discovery changed psychopharmacology. (1) Chlorpromazine binds to postsynaptic dopamine receptors; it does not activate them, and it blocks the ability of dopamine to activate them. (2) The blockage of dopamine receptors by chlorpromazine sends a feedback signal to the presynaptic neuron, which increases the release of dopamine. (3) The feedback signal increases the release of dopamine, which is broken down in the synapse, resulting in elevated levels of dopamine metabolites.
Carlsson and Lindqvist further postulated that the lack of activity at postsynaptic dopamine receptors sent a feedback signal to the presynaptic cells that increased their release of dopamine, which was broken down in the synapses. This explained why dopaminergic activity was reduced while extracellular levels of dopamine stayed about the same and extracellular levels of its metabolites were increased. Carlsson and Lindqvist's findings led to an important revision of the dopamine theory of schizophrenia: Rather than high dopamine levels, the main factor in schizophrenia was presumed to be high levels of activity at dopamine receptors.

In the mid-1970s, Snyder and his colleagues (see Creese, Burt, & Snyder, 1976; Madras, 2013) assessed the degree to which the various antipsychotic drugs that had been developed by that time bind to dopamine receptors. First, they added radioactively labeled dopamine to samples of dopamine-receptor-rich neural membrane obtained from calf striatums. Then, they rinsed away the unbound dopamine molecules from the samples and measured the amount of radioactivity left in them to obtain a measure of the number of dopamine receptors. Next, in other samples, they measured each drug's ability to block the binding of radioactive dopamine to the sample; the assumption was that the drugs with a high affinity for dopamine receptors would leave fewer sites available for the dopamine. In general, they found that chlorpromazine and the other effective antipsychotic drugs had a high affinity for dopamine receptors, whereas ineffective antipsychotic drugs had a low affinity. There were, however, several major exceptions, including haloperidol. Although haloperidol was one of the most potent antipsychotic drugs of its day, it had a relatively low affinity for dopamine receptors.

A solution to the haloperidol puzzle came with the discovery that dopamine binds to more than one dopamine receptor subtype—five have been identified (see Beaulieu, Espinoza, & Gainetdinov, 2015). It turns out that chlorpromazine and other antipsychotic drugs in the same chemical class (the phenothiazines) all bind effectively to both D1 and D2 receptors, whereas haloperidol and the other antipsychotic drugs in its chemical class (the butyrophenones) all bind effectively to D2 receptors but not to D1 receptors. This discovery of the selective binding of butyrophenones to D2 receptors led to an important revision in the dopamine theory of schizophrenia. It suggested that schizophrenia is caused by hyperactivity specifically at D2 receptors, rather than at dopamine receptors in general. Snyder and his colleagues (see Madras, 2013; Snyder, 1978) subsequently confirmed that the degree to which typical antipsychotics (the first generation of antipsychotic drugs) bind to D2 receptors is highly correlated with their effectiveness in suppressing the symptoms of schizophrenia (see Figure 2.2). For example, the butyrophenone spiroperidol had the greatest affinity for D2 receptors and the most potent antipsychotic effect.

Figure 2.2 Potency of D2 binding of antipsychotic drugs (data points include haloperidol, spiroperidol, and chlorpromazine). Based on Snyder, S. H. (1978). Neuroleptic drugs and neurotransmitter receptors. Journal of Clinical and Experimental Psychiatry, 133, 21–31.
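The relationship shown in Figure 2.2 is, at bottom, a correlation between two drug-level measurements. The short Python sketch below shows one way such a relationship can be quantified with a rank (Spearman) correlation; the binding and clinical-potency values are invented for illustration and are not the data reported by Snyder and colleagues.

```python
# Hypothetical (invented) data: larger numbers mean stronger D2 binding
# and greater clinical potency (e.g., potency ~ 1 / effective daily dose).
d2_binding = {"spiroperidol": 9.5, "haloperidol": 8.7, "chlorpromazine": 6.1, "weak_drug": 2.0}
clinical_potency = {"spiroperidol": 9.0, "haloperidol": 8.2, "chlorpromazine": 5.5, "weak_drug": 1.5}

def ranks(values):
    """Rank values from smallest to largest (1 = smallest); assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mean_x, mean_y = sum(rx) / n, sum(ry) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(rx, ry))
    var_x = sum((a - mean_x) ** 2 for a in rx)
    var_y = sum((b - mean_y) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

drugs = list(d2_binding)
rho = spearman([d2_binding[d] for d in drugs], [clinical_potency[d] for d in drugs])
print(f"Spearman correlation between D2 binding and clinical potency: {rho:.2f}")
```

Because these invented values are perfectly monotone, the sketch prints a correlation of 1.00; real affinity and potency measurements are noisier, so an actual correlation would be high but not perfect.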
Although the evidence implicating D2 receptors in schizophrenia is strong, it has become apparent that the D2 version of the dopamine theory of schizophrenia could not explain two general findings:

Although typical antipsychotics block activity at D2 receptors within hours, their therapeutic effects are usually not apparent for several weeks.
Almost all antipsychotics are effective only in the treatment of schizophrenia's positive symptoms, not its negative symptoms (see Aleman et al., 2018; Osoegawa et al., 2018).

Appreciation of these limitations has led to the current version of the dopamine theory. This version holds that excessive activity at D2 receptors is one factor in the disorder but that there are many other factors as well (see Poels et al., 2014; Sibley & Shi, 2018).

Those of you who read about the mesocorticolimbic dopamine pathway and the nigrostriatal dopamine pathway might be wondering which of those two pathways is affected in schizophrenia. The dopamine theory of schizophrenia proposes that schizophrenia is caused by excessive activity in the mesocorticolimbic pathway.

Schizophrenia: Beyond the Dopamine Theory

LO 2.4 Describe two current lines of research on schizophrenia.

Although the dopamine theory of schizophrenia is still influential, current lines of research into atypical antipsychotics and psychedelic drug effects are leading to interesting new perspectives. These two areas of research are described in the following two subsections.

ATYPICAL ANTIPSYCHOTICS. Currently, atypical antipsychotics (also known as second-generation antipsychotics) are often the drugs of choice for the treatment of schizophrenia. Atypical antipsychotics are drugs that are effective against schizophrenia yet do not bind strongly to D2 receptors. For example, clozapine, the first atypical antipsychotic to be approved for clinical use, has an affinity for D1 receptors, D4 receptors, and several serotonin and histamine receptors, but only a slight affinity for D2 receptors (see Humbert-Claude et al., 2012). Aripiprazole, risperidone, and quetiapine are but a few of the other commonly prescribed atypical antipsychotics.

RENEWED INTEREST IN HALLUCINOGENIC DRUGS. The study of psychedelic drugs (drugs whose primary action is to alter perception, emotion, and cognition) began in 1943 with the discovery of lysergic acid diethylamide (LSD) (see Garcia-Romeu & Richards, 2018; Nichols, 2016). In addition to the classical hallucinogens (such as LSD, psilocybin, and mescaline), psychedelic drugs include a variety of other drugs such as the dissociative hallucinogens (e.g., ketamine and phencyclidine). Researchers have pursued two lines of research on psychedelics. One line focused on those psychedelic drugs that produce effects similar to the symptoms of psychiatric disorders (e.g., illusions, hallucinations, paranoia, panic) and used the drugs to model the disorders. The other line focused on the feelings of boundlessness, unity, and bliss reported by some users and attempted to use psychedelics in the treatment of psychiatric disorders. Unfortunately, these promising lines of research ground to a halt in the 1970s when many governments, troubled by the association of LSD and related drugs with various societal subcultures, made it extremely difficult for researchers to study their effects, particularly in humans (see Belouin & Henningfield, 2018; Rucker, Iliff, & Nutt, 2018; but see Oram, 2016).
In the 1990s, there was a gradual renewal of interest in using psychedelic drugs to study the mechanisms of schizophrenia and other psychiatric disorders (see Belouin & Henningfield, 2018; Rucker, Iliff, & Nutt, 2018). This renewal was stimulated by the development of techniques for imaging the effects of drugs in the human brain and by an increased understanding of the mechanisms of psychedelic drug action (e.g., Tylš, Páleníček, & Horáček, 2014). This research led to three important conclusions:

The psychedelic effects of classical hallucinogens, such as LSD, mimic the positive symptoms of schizophrenia (e.g., hallucinations and disorganized thought) by acting as agonists of the serotonin type-2a receptor.
Antagonists of the serotonin type-2a receptor are effective antipsychotics (e.g., the atypical antipsychotic risperidone) (see Girgis et al., 2018).
Dissociative hallucinogens (e.g., ketamine) mimic the negative symptoms of schizophrenia by acting as antagonists of glutamate receptors (see Laruelle, 2014).

Genetic and Epigenetic Mechanisms of Schizophrenia

LO 2.5 Explain what is currently known about the genetics and epigenetics of schizophrenia.

It is clear that schizophrenia involves multiple genetic and epigenetic mechanisms. Many genes have been linked to the disorder (see Flint & Munafò, 2014; Reardon, 2014; Ripke et al., 2014), but no single gene seems capable of causing schizophrenia by itself, although certain genes have been more strongly implicated than others (see Dhindsa & Goldstein, 2016; Sikela & Quick, 2018; Sekar et al., 2016).

The study of schizophrenia-related genes and their expression is still in its early stages, but it has already pointed to several physiological changes that could play important roles in the development of the disorder (see Kotlar et al., 2015). For example, the expression of schizophrenia-related genes is associated with multiple aspects of brain development (see Birnbaum & Weinberger, 2017; Jaffe et al., 2018; Smigielski et al., 2020), myelination (see Voineskos et al., 2012), transmission at glutamatergic and GABAergic synapses (see Sacchetti et al., 2013), and changes in dopaminergic neuron physiology (see Dong et al., 2018; but see Gürel et al., 2020); and some genes that increase a person's susceptibility to schizophrenia have also been linked to other psychiatric and neurological disorders (see Rizzardi et al., 2019).

A variety of early experiential factors have been implicated in the development of schizophrenia—for example, birth complications, maternal stress, prenatal infections, socioeconomic factors, urban birth or residing in an urban setting, and childhood adversity (see Owen, Sawa, & Mortensen, 2016). Such early experiences are thought to alter the typical course of neurodevelopment, leading to schizophrenia in individuals who have a genetic susceptibility (see Negrón-Oyarzo et al., 2016; Owen, Sawa, & Mortensen, 2016), presumably through epigenetic mechanisms—see Birnbaum & Weinberger (2017), Hannon et al. (2016), Jaffe et al. (2016), and Sharp & Akbarian (2016). Supporting this neurodevelopmental theory of schizophrenia are (1) the fact that schizophrenia and autism spectrum disorders share many of the same causal factors (e.g., genetic risk factors, environmental triggers)—see Millan et al. (2016)—and (2) the study of two 20th-century famines: the Nazi-induced Dutch famine of 1944–1945 and the Chinese famine of 1959–1961.
Fetuses whose pregnant mothers suffered in those famines were more likely to develop schizophrenia as adults (see Li et al., 2015; Schmitt et al., 2014).

Recent research has identified many epigenetic mechanisms that contribute to the emergence and persistence of schizophrenia (see Rizzardi et al., 2019). For example, DNA methylation and histone modifications have both been implicated in the expression of genes for synapse-specific proteins in prefrontal cortex neurons (see Caldeira, Peça, & Carvalho, 2019; Rizzardi et al., 2019). This and other research on the role of epigenetic mechanisms has given researchers better insight into how genes interact with the environment to produce schizophrenia (see Breen et al., 2019; Gandal et al., 2018; Girdhar et al., 2018). The role of transgenerational epigenetic mechanisms in psychiatric disorders like schizophrenia is also of great interest to researchers (see Yeshurun & Hannan, 2019).

Neural Bases of Schizophrenia

LO 2.6 Describe the various brain changes associated with schizophrenia.

There is a long history of research on the neural bases of schizophrenia. Many studies have assessed brain development in patients with, or at risk for, schizophrenia. Four important findings have emerged from various meta-analyses of those studies (see Fusar-Poli et al., 2011; Steen et al., 2006; Vita et al., 2006):

Individuals who have not been diagnosed with schizophrenia but are at risk for the disorder (e.g., because they have close relatives with schizophrenia) display volume reductions in some parts of the brain—for example, in the hippocampus (see Haukvik et al., 2018).
Extensive brain changes already exist when patients first seek medical treatment and receive their first brain scans.
Subsequent brain scans reveal that the brain changes continue to develop after the initial diagnosis.
Alterations to different areas of the brain develop at different rates (see Gogtay & Thompson, 2010).

One exciting line of research on the neural bases of schizophrenia has challenged the idea that the mesocorticolimbic dopamine pathway is involved in schizophrenia; if you remember, this was a tenet of the classic dopamine theory of schizophrenia. This line of research has shown that the neuropathological changes occur in the nigrostriatal dopamine pathway rather than in the mesocorticolimbic pathway (see McCutcheon, Abi-Dargham, & Howes, 2019).

Recent research on the neural bases of schizophrenia has used modern functional brain-imaging techniques to study functional connectivity in the brains of individuals with schizophrenia. Research on functional connectivity in schizophrenia has been of two sorts. The first is the study of functional connectivity during hallucinations in individuals with schizophrenia. This line of research has shown that when participants with schizophrenia are hallucinating, there is a change in the pattern of functional connectivity as compared to when they are not hallucinating (see Weber et al., 2020). The second line of research is examining whether patterns of intrinsic functional connectivity might be used to predict treatment response to antipsychotic medications (see Chan et al., 2019).

CONCLUSION. Although there has been significant progress in our understanding of the mechanisms and treatment of schizophrenia, a careful reading of the research results sugges