Units 9-12 Psych PDF
Summary
This document discusses social psychology, focusing on how situational factors affect our behavior. It explores concepts like mimicry, social norms, and roles. The document analyzes classic studies, highlighting the impact of social environments on individuals, and how perceived social norms can impact actions like alcohol use in a social setting.
13.1 We tend to feel that we are in charge of our own behaviour-that we are free to determine what we do and what we choose not to do, and that we act for good reasons, not just to go along with the crowd. Social psychology challenges these ideas with strong evidence that much of our behaviour depends more on where we are than on who we are. Kurt Lewin, one of the first social psychologists, conceptualized psychology with a simple equation, B = f(P, E), symbolizing that Behaviour is a function of the Person and the Environment (1936). This insight challenged the Freudian theories of the early 20th century, which focused solely on the person and their unconscious drives. Lewin's formula also challenged behaviourism, which focused solely on the environment. Lewin, therefore, did what is so often the most reasonable thing to do: take all things in moderation. In doing so, he emphasized the role of the individual in choosing what situations to enter, how to interpret a situation, and, ultimately, how to respond. The decades of research in social psychology that have flowed from this insight have pieced together a deep understanding of the situational forces and individual characteristics that determine human behaviour.

Situational Influence on Behaviour: Mimicry, Norms, and Roles

To begin our study of social psychology, we must first acknowledge that humans are fundamentally social creatures; perhaps the biggest part of the "E" is the social environment. Even the biggest introverts among us are remarkably sensitive to what is socially acceptable or unacceptable. This sensitivity is not limited to the difference between right and wrong-it can be something as simple as how to walk down the street. Is it appropriate to make eye contact with strangers and give them a warm greeting? In some situations, it would be incredibly rude not to. In others, it could lead to an awkward interaction or even get you beaten up.
Of course, we don't always get it right, and some people are better than others at making these social judgments, but the vast majority of people in most situations somehow seem to know how to behave. In this section, we will find out why.

Synchrony and Mimicry

Exactly how do social influences become incorporated into our thoughts and behaviour? Neuroscientists and psychologists have observed that humans often become synchronized, in a sense. Synchrony occurs when two individuals engage in social interactions, and their speech, language, and even physiological activity become more alike (Gordon et al., 2019). Similarly, humans often engage in mimicry, taking on the behaviours, emotional displays, and facial expressions of others. Perhaps you have caught yourself inadvertently copying another's behaviour, but most of the time it is a completely unconscious activity. You tend to laugh and smile when others are laughing and smiling. More generally, you display the same emotional expressions on your face as those you see on the faces around you, and then pick up their moods as well. And if someone else is whispering, you will likely whisper, even if it is to ask, "Why are we whispering?" The examples are almost endless; practically every moment of social interaction between people involves mimicry. This kind of subtly attuned mimicry is highly functional (Lakin et al., 2003; Tschacher et al., 2014), serving as a "social glue" and helping to coordinate behaviours in social settings. Mimicry helps people feel reassured and validated by each other, sending the unconsciously processed message that you are somewhat like them and, more so, that you are paying attention to them in that moment. However, it's a different story if you try to intentionally mimic people's behaviour in order to manipulate them.
Consciously trying to "steer" this process could lead you into trouble, just as focusing too much on a well-practised movement can cause you to mess it up. Indeed, if someone notices that a person is mimicking them, they like that person less as a result (Maddux et al., 2008). So, if you are using this power for your own sneaky purposes, at the very least, be subtle about it!

Norms and Roles

Social norms are the (usually unwritten) guidelines for how to behave in social contexts. Some of the more readily observable norms are those associated with age, gender, and socioeconomic class, and you can see them influence everything from our manners (e.g., you probably make different jokes when out with your friends than when you meet your significant other's parents for the first time) to the clothes we wear (e.g., you shouldn't show up to the typical funeral wearing cargo shorts and a sleeveless T-shirt). Norms are mostly implicit and emerge naturally in social interactions, although there are plenty of examples to the contrary. When you were a child, adults most likely told you specifically how you were expected to behave in different situations. As an employee, your supervisor may have provided you with verbal instructions or a policy manual about what is expected for manners, dress, and so on. Despite these examples, we adapt to new norms all the time without even realizing it. In fact, people often fail to realize this and instead believe that their behaviour is freely chosen (Nolan et al., 2008). Our tendency for mimicry helps us figure out normative behaviour, but what motivates us to go along with norms? One very important motivator is social approval. Individuals who don't appear "normal" (meaning some aspect of their behaviour challenges the norm) are often subject to all kinds of unpleasantness, ranging from insults to legal trouble.
Ostracism, being ignored or excluded from social contact, is another powerful form of social pressure (Hartgerink et al., 2015; Pfundmair & Wetherell, 2018). Imagine this scenario: You arrive in the psychology department to participate in a study. In the waiting area, another student spots a ball in a basket of toys, picks it up, and playfully tosses it to you. A third student in the room holds up his hands, so you toss him the ball. This isn't a riveting game, but it does pass the time. But what happens if the other two students, for no apparent reason and without any provocation, begin to toss the ball only back and forth with each other? This is an experimental method to produce ostracism, developed over a 20-year period by Kip Williams and his graduate students (Williams & Nida, 2011). Although being left out of the game may sound trivial, the effects are anything but. The most noticeable observations across dozens of studies include anger and sadness; these effects have held up across many variations in the ball-toss procedure. Other typical responses include temporarily lowered self-esteem, self-confidence, and even a reduced sense of a meaningful existence. With all of these negative effects, you can see how ostracism could encourage someone to go along with the norms. In fact, ostracism can lead to hyper-normative behaviour. For example, individuals who experience a high need to belong-a type of personality trait-have a strong response to ostracism. In one study on group moralization, for example, high need-to-belong participants responded to ostracism by (1) increasing how much they identified with their ingroup's beliefs (such as a political group, church, social organization, etc.) and (2) increasing how morally important those beliefs are (Pfundmair & Wetherell, 2018).
At its worst, ostracism can produce aggression in laboratory studies (Poon & Wong, 2019), and this has led researchers to note that, at the time of their writing, 13 of the 15 most recent school shooting perpetrators had experienced significant ostracism (Williams & Nida, 2011). The #Psych feature offers further insight into the effects of ostracism. While norms are general rules that apply to members of a group, social roles are guidelines that apply to specific positions within the group. Because roles are so specific, we often have labels for them, such as professor, student, coach, parent, and even prison guard. This latter role happens to be one of the most famous in psychology. The Stanford Prison Experiment of the early 1970s has become a memorable and controversial narrative of how quickly people might adapt to assigned roles-it has even been made into a feature film by that name (directed by Alvarez, 2015) and inspired one other. What makes this study so memorable? In 1971, researchers at Stanford University recruited a group of young men and randomly assigned them to play the part of prisoner or guard in a makeshift jail in the basement of the psychology building. The lead investigator, Philip Zimbardo, coached the guards on how to play the role, even relying on consultation from a former prisoner on how best to mimic the actual prison guard behaviours he had experienced while incarcerated. Unsurprisingly, some guards became quite hostile and abusive, and in response many of the "prisoners" became helpless and submissive (Haney et al., 1973). The study was terminated before the end of the planned two-week period. The reason offered by Zimbardo at the time was that the situation had gotten out of hand-the role-playing exercise became its own reality, and the prisoners were starting to show extreme duress. (However, this account has been disputed multiple times, including by one of the prisoners in a 2004 interview; Toppo, 2018.)
Whether you accept Zimbardo's worst-case-scenario explanation or the more moderate one offered by some participants, the point is still very important: People placed into situations change their behaviour. Sometimes the change is intentional, but often it is not a conscious act. Sometimes the change is minor, but the situation can also demand enormous changes. Classic studies have always been an essential part of learning about social psychology-they are often fascinating and illustrate concepts very well. It is also important, however, to separate an interesting narrative from scientific reality. Recall the concept of demand characteristics covered in Module 2.1: Providing the guards with specific instructions on how to play their role weakens Zimbardo's conclusion that the power of these randomly assigned roles turned otherwise good people into cruel guards and desperate victims (see Banuazizi & Movahedi, 1975). Interestingly, in the early 2000s a similar study, the British Prison Study, controlled for demand characteristics and found that the guards were actually very reluctant to engage in abusive behaviour, while the prisoners eventually coalesced, agreeing on strategies for dealing with being locked up together and thereby improving their well-being over the course of the study (Haslam & Reicher, 2012). Thus, the power of the situation can bring people together to play the role of "survivor." Prison studies aside, the importance of norms can be illustrated in many other ways. For example, alcohol abuse among university students is an increasing problem in Canada and the United States, and psychologists have found that perceived norms are likely a factor. Students tend to overestimate rates of drinking on campus, and those who perceive the drinking norm to be high are much more likely to be binge drinkers and heavy drinkers themselves (Foster et al., 2015; Guo et al., 2020).
We should keep in mind that this is correlational research, so we cannot establish whether the norms lead to more drinking or vice versa, but research suggests that interventions aimed at correcting misperceptions of the norm can lead to decreased alcohol abuse (LaBrie et al., 2013; Ridout & Campbell, 2014). We should also keep in mind that alcohol use is just one example of the power of norms. In fact, our perceptions of what is normal are likely to influence everything we do. And as you'll see in the next section on group dynamics, perceptions about what is normal can be formed and exert influence on people almost instantly.

Group Dynamics

Mimicry, roles, and social norms highlight the fact that much of our lives are spent in groups, whether it's hanging out with friends, collaborating on school projects, or navigating a crowded sidewalk. A key question in social psychology is whether the subtleties of mimicry and norms lead us to behave differently in groups than we would alone, and how the behaviour of individuals may differ from the behaviour of a group.

Social Loafing and Social Facilitation

Let's start with a question about your own experiences in groups: How do you feel about group assignments? Do you like them because they're an opportunity to get to know people, or maybe because your previous experiences show that groups can accomplish something more impressive together than you could alone? Or do you hate group projects because other people waste so much time, don't have very good ideas, or slack off so that their work doesn't meet your standards and you end up having to do everything yourself? Research at various types of higher education settings finds that students' opinions are divided, but many of those feelings are quite strong (e.g., Chang & Brickman, 2018; Gottschall & Garcia-Bayonas, 2008). Regardless of your feelings, you are almost certainly going to be working in groups in the future.
Whether it's your job, family and community groups, or the group project your professor assigns to your class, it's pretty tough to avoid working with other people. Often one of the main purposes of a group is to work on more complex and sophisticated projects than an individual could by working alone. But does this really happen? Do groups produce better work, making the most out of individuals' ideas and encouraging their best efforts? Or do they produce poorer outcomes, limiting people's creativity and enabling them to slack off? Oddly enough, the answer to both questions is "yes, sometimes." Groups sometimes produce poorer outcomes due to social loafing, which occurs when an individual puts less effort into working on a task with others. There are various phrases for describing this: coasting, slacking, free-riding. Social loafing can occur in all sorts of tasks, including physical activities (e.g., swimming, rope-pulling), cognitive activities (e.g., problem-solving, perceptual tests), and creativity (e.g., song writing), and across all types of groups, regardless of age, gender, or nationality (Karau & Williams, 2001; Latané et al., 2006). One reason why people loaf is because they think others in the group are also not doing their best, setting up an apparent social norm that "people in this group don't work very hard." There are two likely outcomes of social loafing. Either the group performs quite poorly (i.e., crashes and burns), or a small number of people end up saving the group by doing everything themselves. Given the importance and inevitability of group work, it is important to understand what factors encourage loafing, so we can avoid them (Hall & Buzwell, 2012).

Low efficacy beliefs. This occurs if tasks are too difficult or complex, so people don't know where to start. Structure tasks so people know exactly what to do, provide clear deadlines, and give people feedback so they know how well they are doing and how they can improve.
Believing that an individual's contributions are not important to the group. This occurs if people can't see how their own input matters to the group. Overcome this by helping people understand how group members rely on and affect each other, and by assigning tasks to people that they feel are significant or have had some say in choosing (if possible).

Not caring about the group's outcome. This occurs when a person is not personally identified with the group, perhaps feeling socially rejected from the group or perceiving the group as unsuccessful or unimportant. Overcome this by making the group's goals and values clear and explicit, and by encouraging friendships to form and group activities to be fun and socially rewarding.

Feeling like others are not trying very hard. As discussed earlier, people loaf if they feel others are loafing (Karau & Williams, 2001). Overcome this by providing feedback about the progress of group members on their individual tasks. Strong groups often have regular meetings where people's progress is discussed and, ideally, celebrated!

In contrast to social loafing, social facilitation occurs when one's performance is affected by the presence of others (Belletier et al., 2019). For example, in perhaps the first social psychology experiment ever published, Norman Triplett (1898) found that cyclists ride faster when racing against each other than when trying to beat the clock. Many other researchers have found similar effects, even in animals. For example, ants are able to dig more when other ants are working alongside them (Chen, 1937), and even cockroaches run down a runway more quickly when other cockroaches are around (Zajonc et al., 1969). The presence of others does not always improve performance, however. We're all familiar with the athlete who "choked" at the big moment. The presence of others is likely to interfere with our performance when our skills are poor or the task is difficult.
Even the cockroaches mentioned earlier did more poorly when other cockroaches watched them try to navigate a more complex maze (Zajonc et al., 1969). There are several mechanisms that may explain the social facilitation effect (Uziel, 2007; Belletier et al., 2019). One of the most important is that the presence of others is (emotionally) arousing, and arousal tends to strengthen our dominant responses. Similarly, the presence of others occupies our attention, reducing our ability to consciously direct our behaviour. For both reasons, when the task is simple (e.g., running in a straight line), our dominant responses are the right ones. But when the task is very complex (e.g., juggling three axes for the first time), we need to be able to pay more attention and control our responses more carefully, and then arousal decreases performance. Based on these tendencies, it is probably not surprising that, for true masters of a skill, audiences and competitors generally enhance performance, but novices tend to perform best in practice sessions when nobody is watching (Bell & Yee, 1989; MacCracken & Stadulis, 1985).

Conformity

At the most basic level, conformity can be found in mimicry, and it can be a very useful skill at times. Imagine travelling to a country where you do not know the language, and no one is around to help you translate. Could you get by? If you want to ride public transportation, just watch what the other passengers are doing. Do they buy tickets before boarding, or do they pay a driver once they board? Following another's lead is often the best way to go, even if you are just walking into a new restaurant in your neighbourhood. The study of mimicry focuses on how we are influenced by a single individual, but being part of a group can affect our behaviours as well. Conformity refers to a change in behaviour to fit in with a group, whether it is intentional or not.
In the 1950s Solomon Asch developed a very creative way to study conformity in the lab and conducted a series of studies that are nearly as famous as the Stanford Prison Experiment. In this method, a research participant would join a group of subjects in a room and complete a series of very simple and obvious perceptual judgments-judgments that anyone should get right (see Figure 13.1). However, the other "participants" in the room were actually research confederates, meaning that Asch had placed them there with instructions to give the wrong answer at specific times. Despite the simplicity of the task, the participants would often conform to the rest of the group and give an incorrect answer (Asch, 1951, 1955, 1956). Why we sometimes conform so readily is an important psychological question. There are two pretty clear reasons, and they lead to different types of conformity. First, normative influence is the result of social pressure to adopt a group's perspective in order to be accepted, rather than rejected, by the group. This is sometimes referred to as public compliance because the individual modifies what they say or do without internalizing their conformity-it is a public rather than private type of conformity (Cialdini & Goldstein, 2004). This generally means that the person sacrifices a little honesty about their own beliefs in order to avoid criticism or rejection from the group. Second, informational influence occurs when people feel the group is giving them useful information. This can be referred to as private acceptance, when people actually change their internalized beliefs and opinions as well as their public behaviour. In this situation, the conforming individual is likely to see other group members as being better informed, having more skill, or perhaps better taste; thus, they are a good source of information.
The following experiment demonstrates that the two types of conformity may work together: A group of young heterosexual men participated in a study of facial attractiveness by rating photographs of females on a scale from 1 to 10. They then received randomly assigned feedback indicating that the average rating for that same face was higher, lower, or the same as the rating they gave. In subsequent trials, many of the participants changed their ratings to conform to the perceived group norm. Perhaps they wanted to make sure they were doing a good job and using the same standards as others (normative influence). However, they changed their behaviour even though their responses were not being observed. It would appear that this represents private acceptance-perhaps perceptions of attractiveness really are influenced by what others think (Huang et al., 2014). Both types of influence seemed to be occurring in Asch's studies as well. For example, some of the conforming participants said afterwards that they thought they had misunderstood something, or that there was some sort of "trick" the others picked up on that they didn't, because surely the others couldn't all be wrong if they were all saying the same thing. Other people reported that they didn't want to stand out or make a scene by being the disagreeable person, so they just went along with the group. In everyday contexts, both types of influence are often at work, making us easily swayed by other people. We will be especially vulnerable to social influence when we are uncertain about the situation, although as Asch showed us, social influence is powerful enough to make us doubt ourselves even when the situation is pretty clear and unambiguous. Many factors work together to determine, in a given situation, the strength of social influence pressures and whether or not a person ends up conforming (see Table 13.1).
Groupthink

Despite the old proverb, two heads are not always better than one, and six can be downright harmful. Probably the best example of this is the phenomenon of groupthink, a decision-making problem in which group members avoid arguments and strive for agreement. At first, this might sound like a good thing. Conflicts can be unpleasant for some people, and they can certainly get in the way of group decision making. But groupthink does not always promote good decision making. When group members are more concerned with avoiding disagreements than with generating ideas, three main problems occur. First, group members may minimize or ignore potential problems and risks in the ideas they are considering. The inability to critically question or disagree with ideas means that people will emphasize potential rewards and successes and overlook potentially disastrous things that might go wrong. Second, groups will likely settle too quickly on ideas, because social pressures will make people uncomfortable with prolonging a decision-making process. Instead, they will simply agree with one of the existing ideas. As a result, many potential ideas are never brought to the table for consideration. Third, groups often become overconfident and therefore less likely to carefully examine the consequences of their decisions, leading them to be less likely to learn from their mistakes (Ahlfinger & Esser, 2001; Janis, 1972). All things considered, groupthink seems like a pretty bad outcome! Historians have implicated groupthink in some truly terrible decisions. There was the 1986 decision to launch the space shuttle Challenger despite safety concerns raised by engineers (the shuttle broke apart 73 seconds into its flight, killing seven astronauts). A more recent example comes from the decisions made by the Bush administration in the United States and the Blair administration in the United Kingdom to start a preemptive war in Iraq.
Their justification was that Iraqi leader Saddam Hussein was manufacturing weapons of mass destruction (WMDs), and therefore he needed to be stopped before he could launch them. However, both administrations were widely criticized for seeking and accepting supporting information while ignoring or downplaying conflicting information. Because of the power of groupthink, the leaders became more and more confident in their use of faulty evidence. Now, nearly 20 years later, no WMDs have ever been found, thousands of military personnel and over 100,000 civilians have died, and the region remains in turmoil. Some groups are more susceptible to groupthink than others, and psychologists have turned to laboratory research to find out when and why. Their work revealed that when groupthink occurs, there is often a strong or "directive" leader-specifically, an individual who suppresses dissenters and encourages the group to consider fewer alternative ideas (Ahlfinger & Esser, 2001). Also, groups in which members are more similar to each other, especially in shared sociopolitical perspectives, are more likely to fall into groupthink (e.g., Schulz-Hardt et al., 2000).

To Act or Not to Act: Obedience, the Bystander Effect, and Altruism

So far in this module, we have seen that situational factors can have a great impact on behaviour. In some cases, these effects happen completely without our awareness; that can certainly be true for mimicry, adopting roles and norms, and participating in groupthink. There are also many cases in which people do make conscious decisions-particularly with conformity and social loafing. In this section, we turn to situations in which people have to make a decision to act or not to act.

Obedience to Authority

If there was a pivotal world event that stimulated research on obedience, it would have to be the number of military personnel in World War II who committed atrocities.
The fact that so many average German citizens actively participated in the rounding up, incarceration, torture, and murder of millions of people must raise the question "Were they already evil people? Or were most of them just normal people following the instructions of their leaders?" Most of us believe that we would never do such things, no matter how powerful the situation. If we were asked to harm somebody against their will, and we found it immoral, we would say no. Right? The Milgram obedience experiments (1963, 1974) have thoroughly shaken our confidence in that belief. In his now-famous studies, Stanley Milgram showed the world just how powerful authority could be, and how easily otherwise good, normal people could be made to act inhumanely. Consider what happened in Milgram's study: Participants-all of them were men-were told the study was about the effects of punishment on memory. They, and the other supposed participant (who was actually a confederate), a friendly middle-aged man, drew slips of paper in order to determine who would be the "teacher" and who would be the "learner." The draw was secretly rigged so that the participants were always the teacher. The teacher's job was to read a series of word pairs to the learner, and then to test him on his memory of the word pairs. The learner was in a separate room hooked up to an electric shock machine. Each time the learner got an answer wrong, the teacher had to administer a shock by flipping a switch on a panel in front of him, and increasing the voltage after each wrong answer. The switches went up by 15 volts until reaching a maximum of 450 volts, which was labelled "xxx." This process was watched by "the experimenter," a man wearing a lab coat. As the experiment progressed, the learner started to make sounds of discomfort in the other room, grunting audibly as he was shocked. By 150 volts he was protesting loudly and saying that he no longer wanted to continue in the study. 
If the subjects continued reading the word pairs and increasing the shock level, the learner got to the point of screaming in pain, demanding and pleading, over and over again, to be let out, insisting that he couldn't take it anymore and that his heart condition was acting up. And then, at 330 volts, the learner fell silent and gave no further responses. At this point, subjects were informed by the experimenter that a non-response was to be considered "wrong," and the punishing shock was to be administered. If, at any point, subjects expressed concern for the learner, or said that they didn't want to continue, the experimenter simply said a few stock responses, such as "Please continue" or "The experiment requires that you continue." Now let's step back for a moment and put the situation in perspective. As part of a psychology experiment, people were asked to shock a person in another room and ignore this person as he expressed increasing discomfort, screaming repeatedly, begging and pleading to be let out of the experiment, angrily refusing to continue, indicating that he might be having a heart attack, and eventually falling completely silent. There was no compelling reason for people to continue, except that a man in a lab coat was telling them to do so. What would you do? If you are like most people, you probably feel that you would refuse to continue whenever the "learner" said that he didn't want to continue (which happened quite early, at 150 volts). In fact, a group of psychiatrists at Yale University were asked to predict ahead of time how many people would obey all the way to the end of the experiment, and they estimated it would be about 1 in 1000, the base rate of sadistic or psychopathic individuals in the population (Milgram, 1974).
But overall, Milgram's results were pretty grim: Most subjects continued (approximately 65% in most versions of the study), despite the protestations of the learner, simply because an "authority figure" told them to. It's important to point out that subjects were not sadists, gleefully shocking their partners. Many were deeply distressed themselves, telling the experimenter they didn't want to continue, arguing with him, and so on. As Milgram wrote: Subjects were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh. These were characteristic rather than exceptional responses.... At one point he (one of the participants) pushed his fist into his forehead and muttered, "Oh God, let's stop it." And yet he continued to respond to every word of the experimenter, and obeyed to the end.... (1963, pp. 371-378) Why would people put themselves and another person through such agony just for an experiment? Interestingly, Milgram ran other variants of this experiment, trying to see what might change obedience rates. Milgram tried to reduce the pressure from authority in several ways, such as having the experimenter give orders from another room, by phone. He also tried to increase the salience of the learner's distress, such as by having subjects and learners in the same room, and even by requiring subjects to physically press the learner's hand onto a shock plate. Although the rates of obedience were somewhat lower in these experiments, they remained higher than anyone expected (often around 30%). Reducing the appearance of authority and increasing the visibility of the learner's suffering clearly reduced obedience, but did not eliminate it. There were two especially interesting and powerful variations. One experiment looked at whether it is easier for a group to resist the experimenter, pitting the power of the group against the power of authority. In this experiment, there were three teachers making decisions collectively.
Two of the teachers were confederates, pretending to be real subjects; the other teacher was the actual subject. When the two confederate teachers made the decision to not continue with the experiment, 90% of subjects also refused. (We would note that it seems surprising that only 90% of them refused, leaving a full 10% of people still obeying the experimenter to the bitter end. Still, 10% obedience is a far cry from the 65% of the original study.) This particular variation is important because it illustrates again the power of dissent. As in the Asch study, if even a couple of people are courageous enough to fight for what is "right," they make it much easier for others to do the same. Milgram himself believed that these studies provided insight into the horrors of the Holocaust, particularly how so many millions of people could be "evil" enough to willingly participate in the Nazi death machine or to stand passively by while such a brutal genocide took place. The Bystander Effect: Situational Influences on Helping Behaviour As is often the case-as it certainly was for Milgram-a single, shocking, real-world event led to a flurry of psychological research on a social topic. In this case, the topic is why bystanders may or may not help someone in need. The event was reported on the front pages with sensationalized reporting: In the middle of a cold night in 1964, a young woman, Kitty Genovese, was sexually assaulted and stabbed to death outside an apartment building in New York City. The papers said 38 neighbours heard her screams, but did nothing for over 30 minutes. When the police were finally called, they were too late to save Kitty's life. Naturally, people were shocked and outraged that so many could have allowed a young woman to be assaulted without doing anything to help her. How is it possible that not one person intervened? Have we become so selfish and disconnected from each other that we don't get involved even when someone's life is on the line? 
Before continuing, we should mention that several decades later, the sensationalism in the reporting became more apparent. Far fewer than 38 people actually understood what they were hearing. After all, when you live in a highly populated urban area, it is not uncommon to hear noises, including shouting, in the middle of the night. If you called the police every time someone shouted, they might soon stop taking your calls. So, some of the apparent apathy could have been due to confusion and uncertainty, rather than a lack of caring (Manning et al., 2007). Nevertheless, the event launched an important line of research that found similar effects in many situations, as you can read in Working the Scientific Literacy Model. The Bystander Effect We have seen how powerful norms can be in shaping behaviour as well as how difficult it can be if you don't follow along. This is definitely true for the reciprocity norm, which is basically the social-psychological way of saying "We should all look out for each other." This is the norm that leads us to help others while understanding that others will help us. If you drop your keys on the sidewalk, more often than not, someone will call out to you and pick them up for you. It's a very powerful norm and we respond to it every day. So why does it break down sometimes, as it seems to have done in the Kitty Genovese case? What do we know about the bystander effect? Although the Kitty Genovese story was exaggerated in the news, it is an example of individuals failing to help someone in need. This phenomenon is now known as the bystander effect (also known as bystander apathy): the observation that an individual is less likely to help when they perceive that others are not helping. Sadly, there are many more stories that tell of similar events. Although they usually unfold in the same way, modern technology adds a new dimension. 
In 2018, for example, Alexandra Levine wrote in the New York Times that she had called for emergency help the prior week when she came across a woman in a subway station who had fallen down the stairs toward the platform. The station was far from empty, yet everyone else was hurrying past her, some even stopping to capture an image of the scene on their mobile phones (Levine, 2018). How can science study the bystander effect? Within months of the Kitty Genovese tragedy, researchers began developing methods of re-creating bystander effects in the laboratory. For example, in one of the first studies, an individual volunteer was ushered into a small room with an intercom under the premise that they would be conversing with other participants (in reality, research confederates) waiting in similar rooms down the hall. As the conversation began, one confederate reported being prone to seizures and subsequently asked for help as a seizure apparently began. The researchers could then observe whether the participant helped and, if so, how long they waited before acting. Remember, the researchers were interested in how the presence of others might affect a bystander's reaction. Therefore, each time a new participant arrived for the study, the researchers manipulated the number of confederates so that the participant would be talking to one, two, or three confederates. It turns out that the more confederates there were, the longer it took the true participant to react to the calls for help (Latané & Darley, 1968). Why does the presence of others reduce the tendency to help? If you think of bystander apathy as a type of conformity, you might be able to anticipate the hypotheses they proposed and then confirmed with their experiments. First, there are normative influences. When one person sees another in need of help, they may ask themselves, "What happens if I try to intervene and wind up embarrassing myself?" Second, there are informational influences. 
The bystander is likely to wonder, "What if the others know something I don't? Am I blowing this out of proportion?" (Karakashian et al., 2006; Prentice & Miller, 1993). In addition to explanations based on conformity, Latané and Darley also observed diffusion of responsibility, the reduced personal responsibility that a person feels when more people are present in a situation (Figure 13.2). In other words, if everybody thinks someone else will take on the responsibility of helping, nobody will do anything. These are natural questions and assumptions people make. Fortunately, they do not always result in the bystander effect. This is especially true for bystanders with specific training, such as CPR (Huston et al., 1981), or those with a social connection to the person in need (Levine & Crowther, 2008). Can we critically evaluate this evidence? Research makes it clear that bystander effects can happen. But do they happen regularly? One study asked this question by examining closed-circuit security recordings of 219 street disputes in Lancaster (U.K.), Amsterdam, and Cape Town. In the vast majority of cases, people intervened, even when in groups (Philpot et al., 2020). Therefore, we must approach the topic critically. The shocking nature of bystander effects can easily lead to exaggeration and overly emotional thinking. But you should be able to recall some counterexamples: Can you think of situations in which crowds have rushed to help? A quick search on the internet will turn up more stories of compassion and altruism than of the bystander effect. Therefore, researchers should also study why people do help, even when it puts them at risk. Finally, the bystander effect can be explained by principles of conformity, but what about personal safety? If someone is being physically attacked or is in some other dangerous situation, should we expect others to put themselves at risk? Why is this relevant? 
To put this topic into context, let us return to the situation where someone is at risk of being assaulted. This is all too common on university campuses, where as many as one in five women will report being the victim of a sexual assault. A substantial number of these crimes begin at parties and clubs where the victims and perpetrators are presumably surrounded by peers. What if those peers intervened when they noticed one of their friends was behaving aggressively with a woman? Or took a friend home when her judgment had been affected by alcohol so much that she was unaware of a threat? In fact, there are a number of programs designed to reduce the incidence of these types of assaults by educating students and encouraging them to get involved. For example, over 25 years, one U.S. university sponsored seminars to teach students how to spot risks and respond effectively, and to foster a climate where intervening is expected. During this period, surveys showed that unwanted sexual experiences were cut by over 50% (University of New Hampshire, 2012). When People Decide to Act If this module is starting to bring you down, don't worry. The world is filled with human beings who have acted-sometimes incredibly bravely-to help others who have been hurt or threatened. To counteract the unpleasant outcomes of the Milgram studies, consider altruism-helping others in need without receiving or expecting reward for doing so. From an individualistic perspective, altruism can be a bad deal, especially when putting yourself at risk for a complete stranger. However, as you can see in Figure 13.3, people are capable of incredibly heroic acts. The capacity for empathy-understanding what another's situation feels like and what its implications might be-is a prerequisite for helping others. The more empathy an individual reports on personality scales, the more likely the person is to help. This is true even if helping requires very little effort (Davis & Knowles, 1999). 
At the individual level, the willingness to help depends on the situation. After all, some situations seem more urgent than others. Willingness to help can also depend on the individual. Some individuals regularly feel more empathy than others. Also, individuals who feel they have a strong, secure bond with family and friends seem to be more likely to help others regardless of their group membership (Mikulincer & Shaver, 2005; Stürmer et al., 2005). 13.2 The field of social-cognitive psychology is a fusion of social psychology's emphasis on social situations and cognitive psychology's emphasis on cognitions (perceptions, thoughts, and beliefs). Social-cognitive researchers study the cognitions that people have about social situations, and how situations influence cognitive processes. It is an exciting area to study because it deals directly with the everyday social experiences we encounter in our lives. One of the central ideas in this field is that there are two major types of processes in our consciousness: explicit processes and implicit processes. Explicit processes, which correspond roughly to "conscious" thought, are deliberative, effortful, relatively slow, and generally under our intentional control. This explicit level of consciousness is our subjective inner awareness, our "mind" as we know it. Implicit processes comprise our "unconscious" thought; they are intuitive, automatic, effortless, very fast, and operate largely outside of our intentional control. The implicit level of consciousness is the larger set of patterns that govern how our mind generally functions-all the "lower-level" processes that comprise the vast bulk of what our brains actually do (Chaiken & Trope, 1999; Kahneman, 2003; Todorov et al., 2005). These two sets of processes work together to regulate our bodies, continually update our perceptions, infuse emotional evaluations and layers of personal meaning into our experiences, and affect how we think, make decisions, and self-reflect. 
But not only do these two sets of processes carry out their independent functions, they also can influence each other. For example, explicit processes influence implicit processes when our beliefs (e.g., my friend Bob is a kind person!) influence how we process information (e.g., how much attention we pay to Bob's positive and negative behaviours). On the other hand, implicit processes can influence explicit processes, as when our automatic tendency to categorize a person into a stereotyped group influences the judgments we make about that person. Explicit and implicit processes are intertwined, each influencing the other as we navigate the social world. In social-cognitive psychology, models of behaviour that account for both implicit and explicit processes are called dual-process models (Chaiken & Trope, 1999). One of the major contributions that this understanding has given us is how our conscious acts are conditioned or influenced by a huge amount of unconscious processing. For example, when a person makes a specific choice to do something, that decision occurs after a whole slew of processes have already occurred-the person paying attention in the first place (choosing some parts of reality to focus on and ignoring many others), interpreting information into an overall understanding, evaluating different pieces of information, and forming judgments and beliefs. So, who really made this decision then? And how can you say that it was a conscious act, if the vast bulk of the processing was actually unconscious? The critical insight is that because implicit processes happen so quickly and subtly, our presumably conscious and intentional acts are constantly being influenced and guided by our implicit processes, and we are not generally aware of this at all. Consider the sports commentators calling the World Cup match between Senegal and Poland. When the commentators saw a Black player, a "Black male stereotype" may have been implicitly activated. 
This implicit stereotype then would have guided their explicit evaluation of the player's actions, even while remaining unconscious. It would have continued to influence their interpretation even after the game was completed. That is the double-edged sword of implicit processes: they help us process information efficiently, but they do so by creating biases. And when these biases lead to bad judgments or decisions, it is very difficult to recognize or fix them, because we are not consciously aware of these implicit processes at work. Person Perception The effects of implicit processes are dramatically illustrated by research on person perception, the processes by which individuals categorize and form judgments about other people (Kenny, 2004). Person perception begins the instant we encounter another person, guided by our past experiences with people and the interpersonal knowledge we have absorbed from our culture. When we make a first impression of someone, we rely heavily on implicit processes, using whatever schemas we may have available. Schemas are organized clusters of knowledge, beliefs, and expectations about individuals and groups that influence our attention and perceptual processes in many ways (see Module 7-3). For example, a person's visible characteristics (e.g., gender, race, age, style of dress) all activate schemas, and these schemas can bring certain traits to mind automatically. Thin Slices of Behaviour One amazing aspect of these implicit processes is that they are practically instantaneous, and in some cases, they can be quite accurate. For example, within the first minute of seeing your professor at the front of the room, you have already evaluated them and made some basic judgments. 
If you were to fill out your course evaluations after, say, one minute of the first class (which would seem highly unfair!), your ratings would likely be very similar to your course evaluations after an entire semester's worth of exposure to that person (Ambady & Rosenthal, 1993; Tom et al., 2010). What happens in these situations is that we make very rapid, implicit judgments based on thin slices of behaviour, very small samples of a person's behaviour. In even a few seconds, our implicit processes, guiding our perceptions holistically and using well-practised heuristics, are able to perceive very small cues and subtle patterns. This gives us instantaneous, intuitive accuracy, at least in part. Surprisingly, many of our social judgments are made in this way-instantaneously, based on very little information-whether we are judging people based on tiny snippets of conversations we happen to overhear (Holleran et al., 2009; Mehl et al., 2006) or catching a mere glimpse of their face (e.g., we judge trustworthiness, competence, likability, and aggressiveness after seeing a photograph for less than one second; Willis & Todorov, 2006). Research by Nicholas Rule from the University of Toronto has shown that we can tell surprising things about people given incredibly little information. For example, people can guess a male's sexual orientation at rates greater than chance after viewing his photograph for a mere 1/20th of a second (Rule & Ambady, 2008), and Americans can accurately guess whether other people tend to vote Republican or Democrat merely by looking at a photograph of their face (Rule & Ambady, 2010). Republicans are viewed as having more powerful faces, but Democrats are seen as warmer. Thin-slice research demonstrates just how quickly impressions are formed, and how surprisingly accurate they often can be. Of course, they are not perfectly accurate, and therein lies the problem. 
Self-Fulfilling Prophecies and Other Consequences of First Impressions First impressions have a big impact on many of our social behaviours. Even very simple cues, such as facial appearance, guide a wide range of behaviours, from how a jury treats a defendant to how people vote. For example, one study asked participants to act as jurors and evaluate evidence against a defendant. If shown a photograph of a defendant who simply "looked more trustworthy," participants were less likely to reach a guilty verdict (Porter et al., 2010). In another study, the outcome of U.S. elections of congressional candidates could be predicted 70% of the time simply using participants' judgments of how competent the candidates appeared in photographs (Todorov et al., 2005). The fact that our implicit judgments can influence our perceptions and behaviours has countless implications for our social lives, particularly in terms of self-fulfilling prophecies, which occur when a first impression (or an expectation) affects one's behaviour, and then that affects other people's behaviour, leading one to "confirm" the initial impression or expectation. For example, if you expect someone you meet to be warm and friendly, you will probably be more at ease with them and will treat them in a warm and friendly manner yourself. This friendly behaviour will make them comfortable and will lead them to behave in a warm and friendly way in return, leaving you with the conclusion that they are-surprise!-warm and friendly. You can easily imagine the opposite process, if your initial expectation is that the person will be cold and unfriendly. Despite their power, we are not destined to live out self-fulfilling prophecies. Evidence shows that, although first impressions are powerful, they can be modified over time. 
This is particularly true when encountering highly diagnostic information (information that says a great deal about one's personality) or information that clearly explains away an earlier misinterpretation of one's actions (Ferguson et al., 2019). For example, seeing someone carrying a heavy bag is not very diagnostic-almost everyone does it sometimes. However, seeing someone going out of their way to help a stranger with a heavy bag is diagnostic-that signals a helpful, caring person. If you had a negative first impression of that individual, seeing the helpful side might be enough to change your opinion. The Self in the Social World How do we decide what information to use when we're trying to understand other people or form impressions of them? What schemas do we activate to guide our judgments? As discussed above, we may use subtle cues in people's faces or non-verbal behaviours, but what else guides our judgments? Certainly, if the person falls into a group about which there are specific stereotypes, such as categories based on race, class, and gender, then these stereotypes often are automatically activated and can colour our judgments through implicit processes. But one additional schema that is highly accessible, contains a vast amount of information, and is therefore often used in guiding our social judgments is-ourselves! Much of the time, we look out at the social world through the lens of our own self-concepts. This has two very important consequences. The first is that we tend to think that the way we are is the way people should be, and therefore, people who are substantially different from us have something wrong with them. The second is that we have a strong tendency to split the world into Us and Them, and we are motivated to see Us more positively than how we see Them. Understanding these dynamics gets right to the heart of why there is so much intergroup hostility in the world. 
It also reveals a tragic irony, which is that in the quest to feel good about ourselves and be happy, we sow the seeds that will grow into distrust, prejudice, and discrimination, thereby causing much suffering and unhappiness. Let's examine these arguments carefully, because they have major implications for understanding why the world is the way it is. Projecting the Self onto Others: False Consensus and Naïve Realism One way in which our self-concept affects our social perceptions is that we tend to project our self-concepts onto the social world. This means that the qualities we see in ourselves and the attitudes and opinions that we hold, we tend to assume are similar for society at large. If we are sports fans, we assume that sports are generally important for other people as well. Even qualities we have that we know are not popular are still projected onto society. So, for example, if we are believers in Scientology, we will tend to assume that a larger proportion of the population believes in Scientology than is likely the case, and we will assume there are more Scientology believers out there than a non-believer would assume. This tendency to project the self-concept onto the social world is known as the false consensus effect (Marks & Miller, 1987). It's important to understand that this is a pretty sensible way to be, much of the time. After all, if we have to make guesses about people, why not base these guesses on ourselves? We also generally assume that our perceptions of reality are accurate, that we see things the way they are; this is called naïve realism (Ross & Ward, 1996). And it makes sense that we would make this assumption. After all, who wants to assume that they are walking around deluded and wrong all the time? Imagine being beset by doubts constantly, your life uncertain and stressful because you are never able to trust your own judgments. 
So instead, we operate under a basic framework of "I make sense," and then, by extension, "the people that I agree with, who are kind of like me, also make sense." And then, of course, by one more extension, "the people whom I disagree with are deluded, wrong, and quite fundamentally different from me." You can see the problem here. At the personal level, we just want to feel good about ourselves and function effectively in the world. But at the group level, we create intergroup biases and an Us vs. Them way of thinking. Self-Serving Biases and Attributions This tendency toward naïve realism reflects a larger, more general need to want to feel positively about ourselves, to have a positive sense of self-evaluation or self-esteem (Allport, 1955; Maslow, 1968; Sedikides & Strube, 1995). Undergraduate students clearly enjoy boosts to their self-esteem, reporting that they would prefer receiving such a boost even over eating a favourite food, getting paid, having sex, or seeing a best friend (Bushman et al., 2011). We strive to maintain our positive self-feelings through a host of self-serving biases, which are biased ways of processing self-relevant information to enhance our positive self-evaluation (Miller & Ross, 1975). For example, we tend to take credit for our successes but blame our failures on other people, circumstances, or bad luck. Interestingly, for many of the qualities and skills that are important to us, we assume that we are "better" than average. This rather appropriately named better-than-average effect has been shown in many domains. In one study of almost one million American students, a whopping 85% viewed themselves as "above average" in their ability to get along with other people, and a full 25% believed they were in the top 1% of this ability (Alicke & Govorun, 2005). If only the laws of math would allow this to be true... These same self-serving processes also influence the way we explain or interpret people's behaviour. 
Much in the same way that first impressions are formed implicitly (which we discussed earlier), our explanations for behaviours tend to start out as automatic and seemingly intuitive. Imagine that you're driving down the highway and all of a sudden some other driver swerves in front of you, honking. You slam on the brakes and turn the wheel sharply, narrowly avoiding a collision. Quick-what is the first thing that comes to mind about the other driver? Probably, your first thought is not the kindest or gentlest. You assume the other driver is an aggressive jerk or maybe a bad driver. You yell, "You idiot!" and shake your fist. This type of explanation is called an internal attribution (also known as a dispositional attribution), whereby an observer explains the behaviour of an actor in terms of some innate quality of that person (see Figure 13.4). In other words, you (the observer) explain the actor's behaviour (the driver who cut in front of you) as an internal part of who the driver is as a human being (being an aggressive jerk, bad driver, or all-around "idiot"). But of course, there may be other reasons for the driver's behaviour. Perhaps the driver is swerving out of the way of a piece of debris on the road, or just blew a tire, or just received a phone call that their partner is in the hospital and is distracted, or just didn't look in their blind spot that one crucial moment before swerving in front of you. These are external attributions (also known as situational attributions), whereby the observer explains the actor's behaviour as the result of the situation (Heider, 1958). Generally, these external attributions are not what first come to mind. Rather, we come to them after thinking about it for a bit (an explicit process), and realizing that maybe there were other factors causing the person's behaviour that we didn't initially consider. 
This tendency to over-emphasize internal (dispositional) attributions and under-emphasize external (situational) factors when explaining other people's behaviour is known as the fundamental attribution error (FAE) (Ross, 1977). On the other hand, when we explain our own behaviours, we tend to emphasize whichever kind of explanation paints us in the best light. For our negative behaviours, the mistakes we make and embarrassing things we do, our attributions are much more generous. We emphasize the situational factors that cause us to do undesirable things (e.g., we had a headache, we were under a lot of stress, a family member was sick). This bias may seem a little selfish, but there is reason to believe it contributes to well-being. For example, people with severe forms of depression and anxiety appear much less susceptible to the self-serving bias-by perhaps as much as 50% (Mezulis et al., 2004). Thus, the self-serving bias might actually reduce our chances for psychological distress. However, it also might sometimes prevent us from taking responsibility for negative behaviours. One rather ironic wrinkle in the story of the FAE is that it doesn't seem to be quite as "fundamental" as was originally thought. Research on cross-cultural differences has shown that people make the FAE the most in predominantly individualistic cultures such as Canada or the United States, and the least in more collectivistic cultures such as China or Japan (Shimizu et al., 2017). This different approach to explaining others' behaviour can be seen in how people interpret social events such as news stories. For example, after reading about recent mass murderers in the newspaper, subjects from China are more likely to emphasize situational explanations for the murders (such as recent stressful events in the person's life), whereas North American subjects are much more likely to emphasize dispositional explanations (such as the murderer being an evil person; Morris & Peng, 1994). 
This greater emphasis on situational factors in collectivistic societies reflects stronger values toward maintaining harmony in interpersonal relationships and fulfilling a person's social roles in the larger community, values that lead people to be more aware of situational information (Choi et al., 1999; Nisbett, 2003). Ingroups and Outgroups Although this desire to feel good about ourselves seems functional and healthy, it often has negative side effects. As we discussed earlier, our self-serving processes also reinforce a tendency to be biased against others. We are motivated to be biased against others because one of the key ways we maintain positive feelings about ourselves is through our identification with larger social groups (Fein & Spencer, 1997), and we can therefore make ourselves feel good by feeling positively toward these groups. In turn, one way to feel positively about our own group is to focus on how much better we are than other groups we compare ourselves to. Groups we feel positively toward and identify with are our ingroups, including our family, home team, and coworkers. In contrast, outgroups are those "other" groups that we don't identify with. In fact, we actively dis-identify with outgroups. This is where our self-serving biases can be so destructive. As positive biases toward the self get extended to include one's ingroups, people become motivated to see their ingroups as superior to their outgroups-engaging in ingroup bias and, potentially, outgroup derogation. All in the service of maintaining our self-esteem, we carve the world into categories of Us and Them and then we automatically show a preference for Us. A set of studies that began in the 1970s added a crucial insight to the discussion of how we process information about groups. In real-world social interactions between people, there is already a lot of relevant group information available simply based on the physical characteristics of the individuals. 
Rather than creating groups based on established characteristics such as ethnicity or gender, researchers using the minimal group paradigm divided participants into new groups based on essentially meaningless criteria. In different studies, people were divided into groups based on whether they preferred one painting over another (Tajfel, 1970; Tajfel et al., 1971), or whether they flipped heads or tails on a coin toss (Locksley et al., 1980). These newly formed groups had no history, no actual affiliation with each other, and no future together after the experiment was over. Amazingly, even these completely meaningless ways of forming groups are enough to drive prejudice and discrimination. For example, if people are asked to distribute money between the two groups, they consistently give more to their new ingroup members. These results suggest that the process of categorizing the world into Us and Them is a fundamental and practically unavoidable part of how we process the social world. It also has some sobering implications. If the people in the group who flipped heads in a coin toss prefer their fellow Heads over those nasty Tails, even though they have no history of animosity, no competition over resources, or any other grounds whatsoever on which to base their preferences, imagine how much more powerful people's biases will be when faced with real-world distinctions and long histories of conflict and violence. Appreciating the deeply biasing influences of making ingroup-outgroup distinctions in the first place adds an important layer to our understanding of these larger conflicts. Social psychology tends to focus on processes that have caused social problems. Because of this, the field is sometimes accused of making people look silly, arrogant, or mean, but that is not the intent. So, in closing this section, we want to make sure you remember that all of these processes serve important functions for us. 
Without the false consensus effect and our tendency to project our self-concept onto others, we would be in a great deal of uncertainty about what other people are like. It would be like living on a planet of mysterious and unpredictable aliens. Without naïve realism, we would be plagued by doubts and would constantly second-guess our perceptions of the world. Without a positive sense of self-evaluation, it would be easy to feel useless, helpless, and generally miserable. Without the ability to attach ourselves to desired ingroups and distance ourselves from undesired outgroups, it would be hard to feel a sense of belonging (Cacioppo et al., 2003; Myers & Diener, 1995). From that perspective, social psychology should be seen as a tool for helping us appreciate and respect each other a little bit more.

Stereotypes, Prejudice, and Discrimination

Obviously, the roots of prejudice are planted very deeply in our psyches, stemming ultimately from our deep-rooted attachment to our own selves and our automatic social categorization tendencies. Thus, while at the explicit level we may strive to be egalitarian and not discriminate based on dimensions such as race, class, and gender, our normally functioning implicit processes continually split the world into Us and Them. In fact, using ERP technology to measure brain activation, research has shown that the perceptual system starts to react differently to people based on race and gender within a mere 200 milliseconds (Ito & Urland, 2003). When we try to change these implicit tendencies, we are battling our vast and speedy implicit system with our slower explicit system. Much of the time, our explicit, consciously controlled self is going to lose, and we will fall prey to our implicit biases. These implicit biases lay the foundation for stereotyping, prejudice, and intergroup discrimination.
From a social-cognitive perspective, a stereotype is a cognitive structure, a set of beliefs about the characteristics that are held by members of a specific social group; these beliefs function as schemas, serving to guide how we process information about our social world. Based on stereotypic beliefs, prejudice is an affective, emotionally laden response to members of outgroups, including holding negative attitudes and making critical judgments of other groups. Stereotyping and prejudice lead to discrimination, behaviour that disfavours or disadvantages members of a certain social group. Taken together, stereotyping, prejudice, and discrimination underlie many of the destructive "isms" in society: racism, sexism, and classism, among others. One of the central goals of social-cognitive psychology has been to understand how these processes work.

Prejudice in a Politically Correct World

Recent decades have seen incredible changes in acceptance of and sensitivity toward social diversity and equality. Along with this acceptance, a large part of the population gladly accepts changes in the words we use to describe each other. These people view language as a way of being respectful. However, others sometimes disparagingly call it "political correctness" (or being "PC"). This label suggests that the battles for equality are basically over and everyone has equal opportunities and freedoms. If that is the case, then asking for sensitivity and respect comes across as demanding special attention or "playing the race card." The truth is quite different. Outgroup stereotypes and prejudices are by no means a thing of the past, and neither are the discriminatory practices that go along with them. In the United States, despite the victories of the civil rights movement in shifting the racial attitudes of the general North American population, there is still prejudice toward non-White cultural groups.
For example, it still seems as though members of these groups experience the legal system differently from others. In Canada, Indigenous Peoples (about 5% of the national population) made up over 25% of the intakes in provincial and federal prisons (Statistics Canada, 2018a). A review of the Toronto Police Department found that Black men were 20 times more likely to be arrested than their White counterparts, and Black people were 20 times more likely to be killed by police action than White people (Ontario Human Rights Commission, 2018). This is not just a Canadian phenomenon. In the United States, Black people make up about 40% of the incarcerated population but only 13% of the US population (Sakala, 2014) and are more than twice as likely to die from police action than their White counterparts (Edwards et al., 2019). Records of police encounters over the past 30 years confirm what many minority groups have long claimed-that the police use more aggressive techniques on minority suspects than White suspects (Smith, 2004; Weitzer & Tuch, 2004), making them three to five times more likely to die in police confrontations than White suspects (Centers for Disease Control, 2019b). This prejudice has seeped into the basic social-psychological functioning of many people. For example, even though the general public denounces prejudice and discrimination and holds values of universal equality, studies of implicit processes tell a different story. When people (generally, White people) first are exposed to Black faces, this automatically influences a variety of physiological responses, including the activation of facial muscles, cardiovascular responses, and brain activity related to fear and negative emotions (Cunningham et al., 2004; Eberhardt, 2005). In fact, measures of brain activity reveal the battle between implicit and explicit processes. 
Over very short amounts of time, exposure to White or Black faces activates implicit processes such as those described above, indicating a racially biased pattern of processing. However, over longer periods of time, such as 30 seconds, brain activity shifts, showing heightened activity in the prefrontal cortex. This area relates to explicit processes-the control of emotions and abstract thinking-consistent with a neurological effort to bring values into our mind in order to control emotional reactions. This teaches us a powerful lesson: Even if people abhor prejudice at the explicit level of their awareness, they may implicitly hold negative stereotypes and experience prejudiced emotional reactions. Clearly, there can be important discrepancies between stereotyping, prejudice, and discrimination at the explicit and implicit levels. This has created huge challenges for researchers attempting to study these processes, because of course simply asking subjects how they feel is only going to reveal their explicit processes, which rarely include overt racism and sexism. This has led to the invention of measurement techniques to try to reveal implicit processes. You can read more about what we are learning about explicit and implicit biases in the next three features: Living through the Pandemic, Working the Scientific Literacy Model, and Psych@.

Working the Scientific Literacy Model
Explicit vs. Implicit Measures of Prejudice

If a great deal of modern prejudice has "gone underground" in the sense that people hide it and give politically correct responses at the explicit level, how can researchers accurately measure prejudice in today's society?

What do we know about measuring prejudice?

Psychologists have developed clever ways of measuring the forms of stereotyping and prejudice that are kept silent, either intentionally or because individuals are unaware of their own prejudices (Greenwald & Banaji, 1995; Nosek, 2007).
In order to do so, researchers needed to come up with measurement devices that would reveal people's implicit processes. This is no easy challenge, because implicit processes can operate so quickly (in less than a second) and so subtly that we are typically not consciously aware of them.

How can science study implicit prejudice?

A major research breakthrough occurred in the 1990s with the invention of the Implicit Associations Test (IAT; Greenwald et al., 1998). The IAT measures how fast people can respond to images or words flashed on a computer screen. To complete the test, a person uses two fingers and two computer buttons, and responds to stimuli as directed (see Figure 13.5). In the first block of trials, subjects are supposed to press one button if they see a White face or a positive word (such as peace), and a different button if they see a Black face or a negative word (such as pain). Thus, in this round, the buttons pair the stimuli in stereotype-consistent ways. With these particular pairings, it takes people around 800 milliseconds (four-fifths of a second) to press the correct button. The second block of trials rearranges the associations. This time subjects press one button if they see a White face or a negative word, and a different button if they see a Black face or a positive word. Thus, in this round, the buttons pair the stimuli in stereotype-inconsistent ways. In this situation, people take an average of 1015 milliseconds to press the correct button, more than one-fifth of a second longer than in round 1. (To control for any possible effects of going first vs. going second, the order in which a person goes through these tasks is usually counterbalanced across subjects, which means some participants go in the order presented here, and others in the reverse order.) Why does it take longer to respond when there is a Black/positive button than when there is a Black/negative button?
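Mechanically, the measure the IAT yields is simply the gap between a person's average response latencies in the two blocks. The sketch below illustrates that arithmetic with invented data chosen to match the averages quoted above; the actual IAT scoring procedure uses many more trials per block and standardizes the difference by the variability of each person's latencies:

```python
# Toy illustration of the IAT's reaction-time comparison. The latencies are
# invented to match the chapter's quoted averages (about 800 ms vs. 1015 ms);
# real IAT scoring uses many trials and a standardized "D-score".

def mean(values):
    return sum(values) / len(values)

# Hypothetical latencies (milliseconds) for one participant.
consistent_block = [790, 805, 812, 798, 795]        # White/positive + Black/negative pairing
inconsistent_block = [1020, 1005, 1030, 998, 1022]  # White/negative + Black/positive pairing

iat_effect = mean(inconsistent_block) - mean(consistent_block)
print(f"{iat_effect:.0f} ms slower on stereotype-inconsistent pairings")  # 215 ms
```

A larger gap is read as a stronger implicit association between the outgroup and negative stimuli.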
The researchers reasoned that racial schemas associate more negativity with Black people than with White people. Because schemas guide information processing, they facilitate the processing of information that is schema-consistent. Thus, it is easier for a person to make snap judgments when a single button covers both Black and negative stimuli. But schema-inconsistent information is more difficult to process. Thus, having two different buttons for Black and for negative means that a person has to override their automatic, implicit association between Black and negative in order to choose the correct response. The size of the reaction time discrepancy between these two rounds is believed to be a direct measure of the strength of people's implicitly held negative beliefs or stereotypic associations with Black people. The IAT was a major breakthrough, suddenly allowing us to directly measure a person's implicit biases. Researchers quickly started to develop ways of measuring all sorts of implicit things-implicit attitudes, self-esteem, feelings of connection to nature, and prejudice toward many groups.

Can we critically evaluate this evidence?

Although the data gathered with this instrument show reliable results, some psychologists have questioned the test's validity: Is the IAT really a measure of prejudice? Or is it possible that the IAT is merely measuring the extent to which people have been exposed to negative stereotypes, but have not necessarily developed prejudices? After all, simply knowing about a stereotype does not mean an individual believes it, uses it to judge people, or engages in discriminatory behaviour. Studies by Elizabeth Phelps and her colleagues (2000) suggest that the IAT reflects a person's emotional reactions to outgroup members. In her studies, White participants were shown pictures of Black and White faces while having their brains scanned using fMRI.
The amount of activity detected in the amygdala (a brain area related to fear responses) when looking at Black faces was positively correlated with participants' IAT measures of implicit prejudice (see Figure 13.6). This suggests that the IAT is measuring something real enough to be reflected in neural activity in areas related to fear and emotional processing.

Why is this relevant?

The development of the IAT has fostered a great deal of research and has been applied to at least a dozen forms of stereotyping, including stereotypes of social classes (Rudman et al., 2002), sexual orientation (Banse et al., 2001), and even fraternity and sorority members (Wells & Corts, 2008). The results of all these tests illustrate that implicit prejudice seems to be more prevalent than people are willing to express in explicit tests (Nosek et al., 2002). The IAT is also being applied to clinical settings. For example, one research group developed an IAT that measures attitudes about alcohol use. This instrument can successfully predict how much alcohol someone is likely to consume, even when explicit measures fail to do so (Ostafin et al., 2008). To the extent that this methodology is valid, it is extremely valuable, giving us a window into people's private minds.

Psych@ The Law Enforcement Academy

Imagine that instead of linking positive or negative terms with Black faces in the Implicit Associations Test (discussed in Working the Scientific Literacy Model) you were asked to make a snap decision whether or not to shoot a potential criminal. A number of researchers have used video-game-like tasks to put participants in these situations. In these video simulations, a figure will suddenly appear, either holding a weapon or a non-weapon (e.g., a wallet or a cell phone). It turns out that when making these split-second decisions, people are a little bit slower to decide whether or not to shoot a Black man holding a non-weapon, and they make the wrong decision more often.
When a Black man is holding a gun, however, they make the "shoot" decision more quickly than if the gun is held by a White man (Correll et al., 2007; Correll et al., 2006). The logic is similar to the IAT just discussed. Because Black and "gun" are stereotypically consistent with each other, people have an easier time processing these stimuli together than when Black and "wallet" are paired with each other.

Hostile vs. Benevolent Stereotypes

So far in our discussion of stereotyping, you may have noticed the examples are usually based on negative characteristics (e.g., individuals from minority groups are more likely to commit crimes). However, it is certainly not the case that all stereotypes sound negative. Masculine stereotypes include qualities such as determination and toughness. Soccer players can be powerful and athletic. Those can be admirable qualities, right? What might be counterintuitive to many people is that even the positive aspects of a stereotype carry a kind of hidden danger, leading people to believe it is okay to emphasize the positive aspects of a stereotype in a "benevolent" or well-intentioned way. This has been examined a great deal with regard to sexism. Researchers have distinguished between hostile sexism, or stereotypes that have explicitly negative views of one or both sexes, and benevolent sexism, which includes views of one or both sexes that sound positive (Glick & Fiske, 1996, 2001). For example, consider the dated saying that women are "the fairer sex." A person using this phrase may mean it as a compliment, implying that women are virtuous, nurturing, and empathetic. However, even stereotypes that a person may defend as being "well-intentioned" can place restrictions on an individual's behaviour. If we consider women to be "virtuous," they may be held to different sexual standards than men and, as a result, may be judged more harshly when they violate those standards.
Similarly, considering women to be nurturing and empathetic reinforces the notion that women are the primary hubs of family life, and therefore less inclined toward career advancement in our competitive world. Even when women go toe-to-toe with men in the workplace, they may be hindered in careers that call for assertive or aggressive behaviours (such as being successful in the business world) because the "fairer sex" stereotype is pervasive in the organization (Glick & Fiske, 1996, 2001). Finally, you do not have to be a member of the stereotyped group to be harmed by benevolent stereotypes. If women are the nurturers, men who are seen as kind and nurturing will be seen as less masculine. Thus, even seemingly positive stereotypes can result in negative, unforeseen consequences. The same is true for other forms of stereotypes as well; if you think about all the types of people in the world, you can probably find examples of hostile and benevolent racism or any other stereotypes.

Improving Intergroup Relations

We are left with an immense practical challenge: How can we overcome the implicit processes we have examined in this module and work toward eliminating harmful stereotypes, prejudices, and discrimination from our society? Unfortunately, there are no easy answers. But there are some promising possibilities. Kerry Kawakami at York University has spent more than a decade researching how to overcome implicit stereotyping and prejudice. Research in her lab has shown that people's implicit networks can be "reprogrammed" through practice. For example, we know that many people automatically make dispositional attributions for others' negative behaviours (the guy driving the pickup is an idiot!), and this is especially true for other groups (like all teenage boys!). But people can be trained to resist this implicit bias and instead make situational attributions (maybe the kid swerved to avoid hitting something in the road).
This helps to prevent people from thinking of others in stereotypic ways (Stewart et al., 2010). In another study, Kawakami and her colleagues used a computer task to teach people to make different associations with a stereotyped group. Subjects were presented with photographs of Black and White people, coupled with either stereotypic or non-stereotypic traits, and were instructed to respond "NO" to stereotypic pairings and "YES" to non-stereotypic pairings. After extensive training involving many such trials, subjects no longer activated negative racial stereotypes, even at the implicit level (Kawakami et al., 2000). This suggests that, over time, it may be possible for people to unlearn the stereotypes that history has provided us with. However, there is a huge gap between the kind of intensive training that Kawakami's participants experienced in the lab and the real-world experience of individuals who are bombarded with both stereotypic and non-stereotypic messages on a daily basis. Nevertheless, these results suggest that it is at least possible for people to "reprogram" themselves. One of the most well-supported ideas in all of social psychology is the contact hypothesis, which predicts that social contact between members of different groups is extremely important to overcoming prejudice (Allport, 1954; Pettigrew & Tropp, 2006), especially if that contact occurs in settings in which the groups have equal status and power and, ideally, in which group members are cooperating on tasks or pursuing common goals (Sherif, 1961). 
Negative stereotypes and the attendant prejudices thrive under conditions of ignorance, whereas allowing people to get to know members of outgroups, to work together to pursue common goals, to come to appreciate their membership in common groups or as part of the same ingroup (e.g., we're both Blue Jays fans, Canadians, or members of the human species; Gaertner & Dovidio, 2000), and to develop friendships with members of outgroups (Pettigrew, 1997, 1998) are all different ways in which contact helps to overcome prejudice. In fact, contact between members of different groups helps to combat not only their own prejudices, but those of their friends as well. Simply knowing that someone is friends with an outgroup member serves to decrease the prejudice of that person's friends (Wright et al., 1997). Coming to see our fellow human beings as all part of the same human family is an opportunity that recent advances in technology (the internet, space exploration), economics (globalization), and, ironically, global problems (climate change, nuclear proliferation) have made available to all of us. This global perspective shift may, we hope, help us to overcome our age-old group prejudices. Astronauts who travel into space and look back on this one little planet that we inhabit often report that the experience profoundly affects them:

"The first day or so we all pointed to our countries. The third or fourth day we were pointing to our continents. By the fifth day, we were aware of only one Earth."

13.3

According to the American Psychological Association's official task force on climate change, "Addressing climate change is arguably one of the most pressing tasks facing this planet and its inhabitants" (American Psychological Association, 2010, p. 6). Climate change, like many other social and political issues, is fundamentally a problem of psychology. After all, it is human decision making and behaviour that lead to its effects.
This module examines how psychologists study changes in attitudes and behaviours-not just in regard to climate change but in all kinds of beliefs and behaviours.

Changing People's Behaviour

There are four common approaches to encouraging positive behaviour change and reducing negative behaviours:

- Technological. Making desired behaviours easier to accomplish and undesired behaviours more difficult
- Legal. Creating policies and laws to encourage or reward positive behaviours while discouraging or punishing negative behaviours
- Economic. Providing financial incentives and penalties, generally through taxes and pricing
- Social. Using information and communication to raise awareness, educate, and illustrate positive and negative outcomes of relevant behaviours

Although each of these approaches obviously can have an impact on public behaviour, they can almost always be combined in a campaign for behaviour change. To return to the example of climate change, there are clear examples of how organizations and governments have encouraged ecologically responsible behaviour:

- Technological. Creating cleaner fuels, improved solar and wind power, and more energy-efficient appliances that allow consumers to choose greener options
- Legal. Regulating the amount of waste corporations can produce and how toxic chemicals must be handled
- Economic. Providing incentives through tax breaks for pro-environmental action, and fines for violating legal regulations
- Social. Running advertising campaigns that spread factual information, while social media campaigns help establish social norms

As you can see, none of these approaches rules out the possibility of other approaches being implemented. In fact, they are often stronger together. Learning how to communicate effectively in order to influence attitudes and behaviour has been a major focus of psychology for most of its history, and we have learned a great deal about how to do so.
Persuasion: Changing Attitudes through Communication

Social psychologists have discovered many important principles underlying effective communication and have shaped these into tools for influencing all sorts of behaviours, from advertising meant to get us to buy more stuff to pro-social causes such as getting us to donate blood. The gist of these principles is explained by the elaboration likelihood model (ELM), a dual-process model of persuasion that predicts whether factual information or other types of information will be most influential. You should recall from Module 13.2 that dual-process models distinguish between implicit processes (automatic, nonconscious thoughts) and more explicit, deliberate thinking. According to the ELM, which process a person uses is the product of two factors: motivation and time. When audiences have interest in the topic, they are more motivated to think rationally about it. When audiences have time to make a decision, they will also be more rational. However, if they lack either of these-motivation or time-then they almost certainly will react more intuitively. Knowing this, psychologists can appeal to people through two general routes: the central route and the peripheral route (Cacioppo et al., 1986). The central route to persuasion is all about substance; it focuses on facts, logic, and the content of a message in order to persuade. If the message is sufficiently compelling, people will be convinced, internalizing the message as something they believe in (see Figure 13.7). As a result, attitude or belief change that occurs through the central route tends to be strong and long-lasting. However, as you may have surmised, this is a rational process that requires the audience to have both motivation and time. Much of the time, though, people are not going to pay sufficient attention to the content of a message.
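The ELM's two-factor logic described above can be summarized as a toy decision rule. The function and its boolean inputs are illustrative simplifications (the published model treats motivation and opportunity as continua, not on/off switches):

```python
# Toy summary of the ELM's route selection: central processing requires
# both motivation and time; lacking either, people default to the
# peripheral route. Booleans are a deliberate simplification.

def persuasion_route(motivated: bool, has_time: bool) -> str:
    """Predict which route to persuasion an audience member will use."""
    if motivated and has_time:
        return "central"      # elaborate on facts, logic, and message content
    return "peripheral"       # fall back on surface cues (source attractiveness,
                              # sheer number of arguments, etc.)

print(persuasion_route(motivated=True, has_time=True))   # central
print(persuasion_route(motivated=True, has_time=False))  # peripheral
```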
In this case, you would be wise to take the peripheral route to persuasion, which focuses on features of the issue or presentation that are not factual. Seemingly irrelevant factors such as the attractiveness of the person delivering the information, or the number of arguments made (regardless of the quality of those arguments), can create a quick, positive impression. Lacking the time and/or motivation to think about it, that first impression can be largely-even solely-responsible for persuading an individual. In so many cases, the central route is preferable because it is factual. You would hope your physician prescribes a medication for you based on data from randomized, controlled medical experiments, not the fact that the pharmaceutical salesperson for one brand was really attractive. It is also often the case that peripheral tools can be quite dangerous; they can make even very weak arguments persuasive because few people are thinking critically about them.

Using the Central Route Effectively

In order to use the central route effectively, you need to be confident that you have the facts on your side and that they are presented at a level the audience understands. Also, recall that, according to the ELM, people are only likely to take advantage of the central route if they are motivated and have the time and opportunity to think. With these factors in mind, this section of the module will examine some key strategies for maximizing the central route.

Make It Personal

Making a message self-relevant is crucially important to motivating people. Consider one striking study from the early 1980s (Gregory et al., 1982), a time when cable television (CATV) was still making its way into the North American viewing market. Researchers compared two very similar persuasive appeals, which were presented to two samples of homeowners to try to convince them to subscribe to CATV.
The first was an information-only condition in which homeowners heard this appeal:

"CATV will provide a broader entertainment and information service to its subscribers. Used properly, a person can plan in advance to enjoy events offered. Instead of spending money on the babysitter and gas, and putting up with the hassles of going out, more time can be spent at home with family, alone, or with friends."

The second group was an imagination condition in which homeowners received this appeal:

"Take a moment and imagine how CATV will provide you with a broader entertainment and information service. When you use it properly, you will be able to plan in advance which of the events offered you wish to enjoy. Take a moment and think of how, instead of spending money on the babysitter and gas, and then having to put up with the hassles of going out, you will be able to spend your time at home, with your family, alone, or with your friends."

As you can see, the two appeals are almost identical, providing the exact same arguments; from the perspective of the central route, they should have exactly the same impact. However, that's not what happened: Only 19.5% of the people who received the information-only appeal signed up for CATV, whereas a whopping 47% subscribed when they were simply told to imagine themselves in the scenario! Imagine the profit difference between selling your product to one in five versus one in two people. This is the power of making things personal. This power has been explained by construal-level theory (Trope & Liberman, 2010), which describes how information affects us differently depending on our psychological distance from the information. Information that is specific, personal, and described in terms of concrete details feels more personal, or closer to us, whereas information that is more general, impersonal, and described in more abstract terms feels less personal, or more distant.
Importantly, psychological distance depends not only on geography (people or places that are farther away are less personal), but also on temporal factors (distant future or past times feel less personal), social factors (people or groups that are further removed from your identity are less personal), how abstract the information is (abstractions are less personal than things that are specific), and even the level of certainty a person feels about an outcome (outcomes that are less certain are less personal). Communicators should be able to make their messages feel more personally relevant to the audience by working with these factors, bringing the message close to home in time and space, showing how it affects the audience themselves or their social groups, and making consequences or outcomes as certain as possible (see Working the Scientific Literacy Model). Think about some of the issues we have addressed at various points in the text. For example, many people fear the measles vaccine despite the mounds of evidence from research and decades of clinical data that speak to the contrary. It is likely that construal-level effects are at work. For a parent, nothing is psychologically closer than their own children-and that closeness opens up the opportunity to be swayed by non-factual influences such as fear of autism and the need to feel control over a situation.

Working the Scientific Literacy Model
The Identifiable Victim Effect

We are often most motivated by issues when they have a clear message and strong images. The issue of school safety in the United States is influenced by stories about school shootings. Following the news coverage, people called for action, either for gun control laws or for tightening security. The crises facing refugees fleeing war-torn areas regularly gain international attention. These are upsetting social issues filled with real human tragedy. But some causes seem to reach far fewer people.
What do we know about communicating about tragedy and danger?

Communicators need to understand how persuasion works in order to get people to respond to tragedies and threats. The ELM tells us that we have to understand the audience's motivation to think about the issue and their opportunities to do so. Although we may not always be able to control these, it is certainly possible to address motivation by decreasing psychological distance. In terms of climate change, communicators can talk about the effects scientists observe now, but these are often in other parts of the world; they talk about what will happen if behaviour doesn't change, but that is often in the distant future. Therefore, communications have often felt "distant" to many people, particularly those who do not rely on a dependable climate in their day-to-day lives. In fact, the term climate change itself implies something global and abstract. When people do think of specific others who may suffer due to climate change, they tend to think of others in the distant future or in distant parts of the world (Leiserowitz et al., 2010; Lorenzoni & Pidgeon, 2006).

How can science explain the identifiable victim effect?

Many experiments have shown that information about tragedies and threats has much more impact if it focuses on specific details and concrete events than if it relies upon more abstract, statistical information. For example, the identifiable victim effect describes how people are more powerfully moved to action by the story of a single suffering person than by information about a whole group of people. In one study (Small et al., 2007), researchers gave participants a chance to donate up to $5 of their earnings from participating in the study to an organization, Save the Children, based on information provided in one of three conditions.
In the identifiable victim condition, participants read about Rokia, a 7-year-old girl from Mali, who was desperately poor and facing severe hunger and possibly starvation. In the statistical victims condition, participants read about food shortages and rainfall deficits affecting more than 20 million people in four countries in Africa. In the third, combined condition, both types of information were provided; participants read about Rokia and then were also given statistical information about mass suffering in African countries. The identifiable victim effect was clearly demonstrated. People who read about Rokia gave significantly more ($2.38) than people who read general statistical information ($1.14). Clearly, Rokia tugs on the heartstrings more than abstract numbers do (see Figure 13.8). It is worth pointing out how illogical this is, strictly speaking; if we were rational processors of information, we would respond more strongly to statistics, which are essentially many, many Rokia-like stories combined with each other, than to one single story of Rokia, which is, after all, just an anecdote. One interesting twist in this study was that participants who were given information about Rokia combined with the statistics donated only $1.43, which was statistically no different than what participants gave after being presented with the statistics alone, and was certainly much less than participants gave after only hearing Rokia's story. This study suggests that trying to simultaneously appeal to the head and the heart might not always work! This has enormous implications for anyone who wants to communicate