Summary

This document summarizes Chapters 6 and 7 of Psych 201, Fall 2021, focusing on learning (types of learning, classical and operant conditioning, observational learning, behaviorism, and the roles of different brain structures in learning) and on memory (encoding, storage, and retrieval).

Full Transcript

Rudy Zalzal – Psych 201, Fall 2021
Chapter 6: Learning

• Learning:
o A systematic, relatively permanent change in behavior that occurs through experience.
o After we have learned to do something, we don't really forget it.
o Examples: navigating an area, writing down words, the alphabet, numbers, driving…
• Brain and learning:
o Subcortical structures: control/modulate learning.
o Frontal lobes: learning, movement, reasoning, planning, problem solving, personality, and risk-taking.
o Basal ganglia: control of voluntary motor movements, conditioning, procedural learning, habitual behavior, eye movements, cognition, and emotion.
o Hypothalamus: regulates hunger, body temperature, food and water intake, and emotional behavior, and helps maintain the sleep/wake cycle. It produces and secretes hormones and controls the autonomic portion of the nervous system (e.g., breathing, heart rate, digestion, fight-or-flight, rest-and-digest). No role in learning.
o Amygdala: responsible for feelings of fear, anxiety, and other negative emotions typically associated with the fight-or-flight response. The amygdalae are part of the limbic system, the seat of emotional processing in the brain, which also includes the hippocampus and other structures. The amygdala is involved in learning the locations of food, water, and predators.
o Hippocampus: belongs to the limbic system and plays important roles in the consolidation of information from short-term memory to long-term memory, in conditioning, and in the spatial memory that enables navigation.
• Behaviorism:
o A theory of learning that focuses solely on observable behaviors, discounting the importance of mental activity such as thinking, wishing, and hoping.
o Psychologists who examine learning from a behavioral perspective define learning as relatively stable, observable changes in behavior.
o Behaviorism maintains that the principles of learning are the same whether we are talking about humans or nonhuman animals.
o Behaviorists are concerned with associative learning and not observational learning, because the latter requires mental processing.
• With regard to associative learning, humans and animals are the same.
• With regard to observational learning, a human baby will imitate an adult's way of doing things; a monkey will perform its own way.
• Types of Learning:
1. Associative learning (behaviorists favor this type):
a. Associative learning occurs when an organism makes a connection, or an association, between two events.
b. Conditioning is the process of learning associations.
c. It is divided into classical and operant conditioning:
i. In classical conditioning, organisms learn the association between two stimuli. As a result of this association, organisms learn to anticipate events. For example, lightning is associated with thunder and regularly precedes it; thus, when we see lightning, we anticipate that we will hear thunder soon afterward.
Another example: Early one morning, Bob is in the shower. While he showers, his wife enters the bathroom and flushes the toilet. Scalding hot water suddenly bursts down on Bob, causing him to yell in pain. The next day, Bob is back for his morning shower, and once again his wife enters the bathroom and flushes the toilet. Panicked by the sound of the toilet flushing, Bob yelps in fear and jumps out of the shower stream.
Bob's panic at the sound of the toilet flushing illustrates the learning process of classical conditioning, in which a neutral stimulus (the sound of a toilet flushing) becomes associated with an innately meaningful stimulus (the pain of scalding hot water) and acquires the capacity to elicit a similar response (panic).
ii. In operant conditioning, organisms learn the association between a behavior and a consequence, such as a reward. As a result of this association, organisms learn to increase behaviors that are followed by rewards and to decrease behaviors that are followed by punishment. For example, children are likely to repeat their good manners if their parents reward them with candy after they have shown good manners. Also, if children's bad manners provoke scolding words and harsh glances from parents, the children are less likely to repeat the bad manners.
Classical conditioning example: a child associates a doctor's office (stimulus 1) with getting a painful injection (stimulus 2). Operant conditioning example: performing well in a swimming competition (behavior) becomes associated with getting awards (consequences).
2. Observational learning:
a. Occurs when a person observes and imitates another's behavior.
b. Unlike associative learning, it relies on mental processes: the learner has to pay attention, remember, and reproduce what the model did.
c. Very prominent in children and in how infants acquire skills.
• Pavlov's Studies (on classical conditioning):
o Neutral aspects of the environment can attain the capacity to evoke responses through pairing with other stimuli, and bodily processes can be influenced by environmental cues.
o Pavlov discovered classical conditioning by accident when he noticed that his dogs would salivate to stimuli other than the food. Not only the meat caused salivation, but also the sight of a food dish, the sight of the individual who brought the food in, or the sound of the door closing when the food arrived. This is because the dog associated these sights and sounds with the meat.
o Why? Pavlov noticed that dogs had both learned and unlearned components of behavior.

Unconditioned Stimulus (US):
• The unlearned part of classical conditioning
• Based on the fact that some stimuli automatically produce responses apart from any prior learning
• Inborn (innate) reflexes
Unconditioned Response (UR):
• Caused by the US

US → UR examples: food → salivation; spoiled food → nausea; cold temperature → shivering; throat congestion → coughing; light → pupil constriction; pain → withdrawal.

In Pavlov's experiment, the meat was the US, and drooling in response to the food was the UR. In the case of Bob and the flushing toilet, the hot water was the US, and Bob's panic was the UR.

Conditioned Stimulus (CS):
• A previously neutral stimulus that eventually elicits a conditioned response after being paired with the unconditioned stimulus.
Conditioned Response (CR):
• A learned response to the conditioned stimulus that occurs after CS–US pairing.
• Conditioned responses are quite similar to unconditioned responses, but typically they are not as strong.
⇒ The CS comes before the US!!

In studying a dog's response to various stimuli associated with meat powder, Pavlov rang a bell before giving meat powder to the dog. Until then, ringing the bell did not have an effect on the dog, except perhaps to wake the dog from a nap. The bell was a neutral stimulus, meaning that in the dog's world, this stimulus did not have any signal value at all. Prior to being paired with the meat powder, the bell was meaningless.
However, the dog began to associate the sound of the bell with the food and salivated when it heard the bell. The bell had become a conditioned (learned) stimulus (CS), and salivation was now a conditioned response (CR). In the case of Bob's interrupted shower, the sound of the toilet flushing was the CS, and panicking was the CR, after the scalding water (US) and the flushing sound (CS) had been paired.
• Acquisition:
o The first part of classical conditioning.
o The initial learning of the connection between the US and the CS when these two stimuli are paired (as with a bell and food). During acquisition, the CS is repeatedly presented followed by the US. Eventually, the CS will produce a response.
o A type of learning that occurs without awareness or effort, based on the presentation of two stimuli together. For this pairing to work, however, two important factors must be present: contiguity and contingency.
a. Contiguity: the CS and US are presented very close together in time. In Pavlov's work, if the bell had rung 20 minutes before the presentation of the food, the dog probably would not have associated the bell with the food, because the bell would not have served as a timely signal that food was coming.
b. Contingency: the CS must be a reliable indicator that the US will take place. If a bell is rung prior to the delivery of food every time, it is a reliable indicator. However, if the bell is rung randomly, or with a lot of time passing before the US takes place, it does not function as a reliable indicator.
• Generalization:
o The tendency of a new stimulus that is similar to the original conditioned stimulus to elicit a response that is similar to the conditioned response, even if it has not been paired with the US itself.
o Pavlov found that the dog salivated in response not only to the tone of the bell but also to other sounds, such as a whistle. These sounds had not been paired with the unconditioned stimulus of the food. Pavlov discovered that the more similar the noise was to the original sound of the bell, the stronger the dog's salivary flow. Stimulus generalization is not always beneficial; it is also important to discriminate among stimuli.
• Discrimination:
o The process of learning to respond to certain stimuli and not others.
o To produce discrimination, Pavlov gave food to the dog only after ringing the bell and not after any other sounds. In this way, the dog learned to distinguish between the bell and other sounds.
• Extinction:
o After conditioning the dog to salivate at the sound of a bell, Pavlov rang the bell repeatedly in a single session and did not give the dog any food. Eventually the dog stopped salivating. This result is extinction, which in classical conditioning is the weakening of the conditioned response when the unconditioned stimulus is absent. Without continued association with the unconditioned stimulus (US), the conditioned stimulus (CS) loses its power to produce the conditioned response (CR).
o Extinction is not always the end of a conditioned response. The day after Pavlov extinguished the conditioned salivation to the sound of the bell, he took the dog to the laboratory and rang the bell but still did not give the dog any meat powder. The dog salivated, indicating that an extinguished response can spontaneously recur.
• Spontaneous recovery:
o The process in classical conditioning by which a conditioned response can recur after a time delay, without further conditioning.
o Spontaneous recovery can occur several times, but as long as the conditioned stimulus is presented alone (that is, without the unconditioned stimulus), spontaneous recovery becomes weaker and eventually ceases.
• Renewal: the recovery of the conditioned response when the organism is placed in a novel context. Example: when a person leaves a drug treatment facility to return to his or her previous living situation.
• Summarizing the concepts using the Little Albert experiment:
o We learn many of our fears through classical conditioning. We might develop fear of the dentist because of a painful experience, fear of driving after having been in a car crash, and fear of dogs after having been bitten by one.
o In the beginning, Albert had no response to the rat. The rat, then, was a neutral stimulus—the (to-be) conditioned stimulus (CS).
o The rat was then paired with a loud noise. The loud noise would startle Albert and make him cry: loud noises upset babies. That makes the loud noise the unconditioned stimulus (US), because it evokes a response naturally, without the need for learning. Albert's reaction to the loud noise is the unconditioned response (UR). Again, being upset by loud noises is something that babies just do.
o The white rat (CS) and the loud noise (US) were paired together in time, and each time the loud noise would upset little Albert (UR). This pairing is the process of acquisition.
o Then the rat (the CS) was presented to Albert without the loud noise (the US), and Albert became alarmed and afraid even without the noise. Poor Albert's enduring fear of the rat is the conditioned response (CR).
• Breaking Habits:
o Counterconditioning is a procedure in which a therapist tries to break the connection between the CS and the CR. Therapists have used counterconditioning to break apart the association between certain stimuli and positive feelings.
o Aversive conditioning is the process of pairing an unpleasant stimulus with a CS to try to break the connection between the CS and the CR → using a nauseating agent when drinking alcohol to try to induce an unpleasant response to alcohol. To reduce drinking, for example, every time a person drinks an alcoholic beverage, he or she also consumes a mixture that induces nausea. In classical conditioning terminology, the alcoholic beverage is the conditioned stimulus, and the nausea-inducing agent is the unconditioned stimulus. Through repeated pairing of alcohol with the nausea-inducing agent, alcohol becomes the conditioned stimulus that elicits nausea, the conditioned response. As a consequence, alcohol is no longer associated with something pleasant but rather with something highly unpleasant.
• Classical conditioning can help explain the placebo effect: in this case, the pill or syringe serves as the CS, and the actual drug is the US. After the experience of pain relief following the consumption of a drug, for instance, the pill or syringe might lead to a CR of reduced pain even in the absence of an actual painkiller.
• A number of studies reveal that classical conditioning can produce immunosuppression, a decrease in the production of antibodies, which can lower a person's ability to fight disease. A recent study showed that it is possible to use classical conditioning to reduce the immune response in individuals who have received a kidney transplant. In the study, a particular taste was paired with immunosuppressive drugs in 30 transplant patients. Results showed that the taste alone did eventually lead to a lowered immune response.
This work suggests that classical conditioning might be leveraged to improve the lives of individuals who receive transplants, both by boosting the effectiveness of medications and by reducing dosages.
• Taste aversion:
o A special kind of classical conditioning involving the learned association between a particular taste and nausea.
o Taste aversion is special because it typically requires only one pairing of a neutral stimulus (a taste) with the unconditioned response of nausea to seal that connection, often for a very long time. This is adaptive.
o It is notable, though, that taste aversion can occur even if the "taste" had nothing to do with getting sick; the real cause of the sickness might be a completely separate event.
o In taste aversion, the taste or flavor is the CS; the agent that made the person sick (it could be a roller-coaster ride or salmonella, for example) is the US; nausea or vomiting is the UR; and the taste aversion is the CR. It is a CR because it is an association—the taste itself might not have been the actual cause of the nausea.
o Taste aversion learning is particularly important in the context of the traditional treatment of some cancers. Chemotherapy can produce nausea in patients, with the result that individuals sometimes develop strong aversions to foods they ingest prior to treatment. Consequently, they may experience a general tendency to be turned off by food, a situation that can lead to nutritional deficits.
o Early studies demonstrated that giving children a "scapegoat" conditioned stimulus prior to chemotherapy helps contain the taste aversion to only one specific type of food or flavor. For example, children might be given a particular flavor of Life Savers® candy before receiving treatment. For these children, the nausea becomes more strongly associated with the Life Savers flavor than with the foods they need to eat for good nutrition. These results show discrimination in classical conditioning—the kids developed aversions only to the specific scapegoat flavor.
• Classical conditioning is used in advertising: TV advertisers cunningly apply classical conditioning principles to consumers by showing ads that pair something positive—such as a beautiful woman (the US) producing pleasant feelings (the UR)—with a product (the CS), in hopes that you, the viewer, will experience those positive feelings toward the product (the CR).
• Classical conditioning can explain drug habituation: habituation refers to decreased responsiveness to a stimulus after repeated presentations. A mind-altering drug is an unconditioned stimulus: it naturally produces a response in the person's body. This unconditioned stimulus is often paired systematically with a previously neutral stimulus (CS). For instance, the physical appearance of the drug in a pill or syringe, and the room where the person takes the drug, are conditioned stimuli that are paired with the unconditioned stimulus of the drug. These repeated pairings should produce a conditioned response, and they do—but it is different from those we have considered so far. The conditioned response to a drug can be the body's way of preparing for the effects of the drug. In this case, the body braces itself with a CR that is the opposite of the UR. For instance, if the drug (the US) leads to an increase in heart rate (the UR), the CR might be a drop in heart rate.
The CS serves as a warning that the drug is coming, and the conditioned response in this case is the body's compensation for the drug's effects. In this situation the conditioned response works to decrease the effects of the US, making the drug experience less intense. Some drug users try to prevent habituation by varying the physical location where they take the drug. As a result of conditioning, the drug user will need to take more of the drug to get the same effect as before the conditioning. Moreover, if the user takes the drug without the usual conditioned stimulus or stimuli, overdosing is more likely.
Classical conditioning explains how neutral stimuli become associated with unlearned, involuntary responses. Classical conditioning is not as effective, however, in explaining voluntary behaviors. Voluntary actions, such as a student studying hard for a test, a gambler playing slot machines in Las Vegas, or a service dog fetching his owner's cell phone on command, are clearly not the product of associating a CS and a US. Rather, they must be explained by a different kind of associative learning: operant conditioning.
• Operant Conditioning:
o Also called instrumental conditioning.
o An operant behavior occurs spontaneously.
o The consequences that follow such spontaneous behaviors determine whether the behavior will be repeated.
o For example, you spontaneously decide to take a different route while driving to campus one day. You are more likely to repeat that route on another day if you have a pleasant experience, such as arriving at school faster, than if you have a lousy experience, such as getting stuck in traffic.
o Reward should be contingent on behavior.
• Thorndike's approach to operant conditioning: the Law of Effect
o Thorndike established the power of consequences in determining voluntary behavior.
o The law of effect (Thorndike's law) states that behaviors followed by pleasant outcomes are strengthened and behaviors followed by unpleasant outcomes are weakened.
o The experiment: Thorndike put a hungry cat inside a box and placed a piece of fish outside. To escape from the box and obtain the food, the cat had to learn to open the latch inside the box. At first the cat made a number of ineffective responses: it clawed or bit at the bars and thrust its paw through the openings. Eventually the cat accidentally stepped on the lever that released the door bolt. When the cat was returned to the box, it went through the same random activity until it stepped on the lever once more. On subsequent trials, the cat made fewer and fewer random movements until finally it immediately stepped on the lever to open the door.
• Skinner's approach to operant conditioning:
o The neurotransmitter involved in reinforcement and operant learning is dopamine.
a. Shaping: a method of teaching/training that involves rewarding every successive step toward a goal. Example: when a rat is first placed in a Skinner box, it rarely presses the bar. Thus, the experimenter may start off by giving the rat a food pellet if it is in the same half of the cage as the bar. Then the experimenter might reward the rat's behavior only when it is within 2 inches of the bar, then only when it touches the bar, and finally only when it presses the bar.
b. Reinforcement: the process by which a stimulus or event (a reinforcer) following a particular behavior increases the probability that the behavior will happen again.
These desirable (or rewarding) consequences of a behavior fall into two types, called positive reinforcement and negative reinforcement. Both types of consequences are experienced as pleasant, and both increase the frequency of a behavior.
i. Positive reinforcement increases the frequency of a behavior by adding something pleasant → you study more because you get better grades.
ii. Negative reinforcement increases the frequency of a behavior because it removes something unpleasant → you take more aspirin (behavior) because it reduces your migraine (consequence), or you clean your room (behavior) because it stops your mother from complaining (consequence). More examples: smoking (since it alleviates cravings), giving employees a day off if they work well…
• Avoidance learning: occurs when the organism learns that by making a particular response, a negative stimulus can be avoided. For instance, a student who receives one bad grade might thereafter always study hard to avoid the negative outcome of bad grades in the future. Even when the bad grade is no longer present, the behavior pattern sticks. Avoidance learning is very powerful in the sense that the behavior is maintained even in the absence of any aversive stimulus. For example, animals that have been trained to avoid a negative stimulus, such as an electrical shock, by jumping into a safe area may thereafter gravitate toward the safe area even when the shock is no longer presented.
• Learned helplessness: occurs when an organism learns from experience that it has no control over a negative outcome. It leads to a deficit in avoidance learning—the organism fails to avoid negative outcomes even when it actually could.
• Types of reinforcers (reinforcers are consequences that make a behavior more likely to happen):
a. Primary reinforcers are innately satisfying:
o Sex
o Food
o Water
b. Secondary reinforcers are learned or conditioned reinforcers, acquired through experience. While we might think of the examples below as innately pleasant, they are not; we learn that they are good. A child does not associate sweets with anything until the stimulus (sweets) is paired with a reward or praise—that is where the learning happens. So, to contribute to learning, a secondary reinforcer must be associated with a primary reinforcer, resulting in a conditioned response after enough pairings.
o Getting an A
o Pleasing a certain person
o Getting your paycheck
o Candy
o Money
• Generalization, discrimination, and extinction are also important in operant conditioning:
o Generalization: in operant conditioning, generalization means performing a reinforced behavior in a different situation. For example, if pigeons were reinforced for pecking at a disk of a particular color, presenting the pigeons with disks of varying colors tests generalization: the pigeons were most likely to peck at disks closest in color to the original. When a student who gets excellent grades in a calculus class by studying the course material every night starts to study psychology and history every night as well, generalization is at work.
o Discrimination: responding appropriately to specific stimuli that signal whether a behavior will or will not be reinforced—like pulling out your ID when you see that there is a student discount at a restaurant. Without the sign, showing your ID might get you only a puzzled look, not cheap food.
o Selective disobedience: in addition to obeying commands from her human partner, a service dog must at times override such commands if the context provides cues that obedience is not the appropriate response. So, if a guide dog is standing at the street corner with her visually impaired owner and the person commands her to move forward, the dog might refuse if she sees the "Don't Walk" sign flashing. Stimuli in the environment serve as cues, informing the organism whether a particular reinforcement contingency is in effect.
o Extinction: in operant conditioning, extinction occurs when a behavior is no longer reinforced and decreases in frequency. If, for example, a soda machine that you frequently use starts "eating" your coins without dispensing soda, you quickly stop inserting more coins.
• Reinforcement schedules:
a. Continuous reinforcement: a behavior is reinforced every time it occurs. With continuous reinforcement, organisms learn rapidly; however, when reinforcement stops, extinction takes place quickly.
b. Partial (intermittent) reinforcement: a reinforcer follows a behavior only a portion of the time. Partial reinforcement characterizes most life experiences. For instance, a golfer does not win every tournament she enters, nor does a chess whiz win every match she plays.
Within partial reinforcement are schedules of reinforcement: specific patterns that determine when a behavior will be reinforced.
o Ratio schedules: pertain to the number of times a behavior must occur for reinforcement to take place.
- Fixed ratio schedules: a behavior is reinforced after it occurs a specific number of times → get three As and you get to play for one hour, or work around the house for five days and you get $15.
- Variable ratio schedules: a behavior is reinforced, but the individual does not know how many repetitions are required. Examples: casinos, gambling, slot machines. These are the most resistant to extinction because reinforcement is unpredictable yet still depends on the number of behaviors performed. For example, a slot machine might pay off on average every 20th time, but the gambler does not know when this payoff will come. The slot machine might pay off twice in a row and then not again until 58 more coins have been inserted.
o Interval schedules: pertain to the amount of time that must pass before a behavior is rewarded, determined by the time elapsed since the last behavior was rewarded.
- Fixed interval schedules: after a specific amount of time has passed, the behavior is reinforced. As the time of reinforcement approaches, the rate of behavior usually increases. Example: when the time of an exam approaches, we tend to study more. Behavior is characterized by cramming and procrastination.
- Variable interval schedules: the behavior is reinforced, but only after a variable (random) amount of time has passed. Examples: drop quizzes, fishing, the next text message from your crush. Behavior is slow and consistent, which is why we study continuously in case of drop quizzes.
Graph of the four schedules: the variable-ratio schedule produces the highest response rate. On the fixed-ratio schedule, notice the drop-off in responding after each reinforcement; on the variable-ratio schedule, note the high, steady rate of responding. On the fixed-interval schedule, notice the immediate drop-off in responding after reinforcement and the increase in responding just before reinforcement (resulting in a scalloped curve); on the variable-interval schedule, note the slow, steady rate of responding.
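The schedule definitions above can be made concrete with a small simulation. The Python sketch below is illustrative only—it is not from the lecture or the textbook, and the parameters (FR-5, VR-5, 10-second intervals) are made-up values—showing how each of the four partial schedules decides which responses earn a reinforcer when an organism responds once per second.

import random

# Toy sketch: an organism responds once per "second"; each rule decides whether
# the current response earns a reinforcer under one of the four partial schedules.

def fixed_ratio(n=5):
    # Reinforce every n-th response since the last reinforcer.
    return lambda t, last_reward, since_reward: since_reward == n

def variable_ratio(avg=5):
    # Unpredictable: each response pays off with probability 1/avg (about one per avg responses).
    return lambda t, last_reward, since_reward: random.random() < 1 / avg

def fixed_interval(interval=10):
    # The first response made after `interval` seconds since the last reinforcer pays off.
    return lambda t, last_reward, since_reward: t - last_reward >= interval

def variable_interval(lo=5, hi=15):
    # Like fixed interval, but the required wait varies at random between lo and hi seconds.
    state = {"wait": random.uniform(lo, hi)}
    def rule(t, last_reward, since_reward):
        if t - last_reward >= state["wait"]:
            state["wait"] = random.uniform(lo, hi)  # draw a new unpredictable wait
            return True
        return False
    return rule

def simulate(rule, seconds=60):
    # Count reinforcers delivered over `seconds` responses (one response per second).
    reinforcers, last_reward, since_reward = 0, 0, 0
    for t in range(1, seconds + 1):
        since_reward += 1
        if rule(t, last_reward, since_reward):
            reinforcers += 1
            last_reward, since_reward = t, 0
    return reinforcers

if __name__ == "__main__":
    schedules = {"FR-5": fixed_ratio(), "VR-5": variable_ratio(),
                 "FI-10s": fixed_interval(), "VI-5-15s": variable_interval()}
    for name, rule in schedules.items():
        print(f"{name}: {simulate(rule)} reinforcers for 60 responses")

Running it makes the distinction visible: ratio schedules tie reinforcement to how many responses the organism makes, interval schedules tie it to how much time has passed since the last reinforcer, and the "variable" versions make the payoff unpredictable.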
• While reinforcers increase behaviors, punishment decreases them. Punishments are consequences that make a behavior less likely to occur.
o Positive punishment → a behavior decreases when a stimulus is introduced: beating, scolding, discouragement, pepper, etc.
o Negative punishment → a behavior decreases when a stimulus is removed: time-out, grounding, or stopping the habit of keeping your room a mess so you don't lose your phone again.
• The process of learning is more efficient when the reinforcer is presented with less delay after the behavior. This is the case for both classical and operant conditioning. While it is virtually impossible for animals to understand delayed reinforcement, humans can. We can, for instance, study hard knowing that a test is a few weeks away (since there is the reward of the grade we will earn). Sometimes important life decisions involve whether to seek and enjoy a small, immediate reinforcer or to wait for a delayed but more highly valued reinforcer.
• The same applies to punishment: immediate punishment is more effective than delayed punishment.
• How does receiving immediate small reinforcement versus delayed strong punishment affect human behavior? One reason obesity is such a major health problem is that eating is a behavior with immediate positive consequences—food tastes great and quickly provides a pleasurable, satisfied feeling. Although the potential delayed consequences of overeating are negative (obesity and other possible health risks), the immediate consequences are difficult to override. When the delayed consequences of behavior are punishing and the immediate consequences are reinforcing, the immediate consequences usually win, even when the immediate consequences are minor reinforcers and the delayed consequences are major punishers. Smoking and drinking follow a similar pattern.
• Other behaviors involve immediate small punishments and delayed strong reinforcements. Example: it may take a long time to become a good enough golfer or dancer to enjoy these activities, but persevering through the rough patches just might be worth it. Learning new skills often involves minor punishing consequences, such as initially looking and feeling stupid, not knowing what to do, and having to put up with sarcastic comments from others.
• Applied behavior analysis (behavior modification):
o Uses the principles of operant conditioning to change human behavior.
o If we can figure out what rewards and punishers are controlling a person's behavior, we can change them—and eventually change the behavior itself.
o Can be employed in the workplace and the classroom.
o Examples: a manager who rewards staff members with a casual-dress day or a half day off if they meet a particular work goal; a teacher who notices that a troublesome student seems to enjoy the attention he receives—even when that attention is scolding—and therefore ignores the student (an example of negative punishment).
• Observational Learning:
o Concerns voluntary behavior.
o Also called imitation or modeling.
o Not based on trial and error.
o Takes less time than operant conditioning.
o May explain why we adopt conventional habits such as waving our hands, saying hello, or eating a certain food in a specific manner.
o Involves four main processes (formulated by Bandura, 1961):
1. Attention: to reproduce a model's actions, the individual must attend to what the model is doing or saying.
Some models have characteristics that help hold attention, such as being outgoing, warm, and powerful; the learner pays more attention to such a model than to a weak, cold, and boring one.
2. Retention: memorizing or remembering what the model did. You must encode the information and keep it in memory so that you can retrieve it.
3. Motor reproduction: imitating what the model has done. Someone might pay attention and retain the information but still not be able to reproduce the model's actions; for example, a 13-year-old cannot dunk a basketball even if he or she pays attention and retains the information.
4. Reinforcement: the question is whether the model's behavior is followed by a consequence.
a. Vicarious reinforcement: seeing the model get rewarded, which makes the observer more likely to try out the model's behavior.
b. Vicarious punishment: seeing the model get punished, which makes the observer less likely to try out the model's behavior.
Scenario with all four processes: if you are learning to ski, you need to attend to the instructor's words and demonstrations. You need to remember what the instructor did and said about how to avoid disasters. You also need the motor abilities to reproduce what the instructor has shown you. Praise from the instructor after you have completed a few moves on the slopes should improve your motivation to continue skiing.
• Cognitive factors in learning (E. C. Tolman):
o Emphasized the purposiveness of behavior—the idea that much of behavior is goal-directed.
o People do not act only on the basis of reward or reinforcement, but also because of goals they have set.
o In studying the purposiveness of behavior, Tolman went beyond the stimuli and responses of Pavlov and Skinner to focus on cognitive mechanisms.
o Tolman said that when classical conditioning and operant conditioning occur, the organism acquires certain expectations. In classical conditioning, the young boy fears the rabbit because he expects it will hurt him. In operant conditioning, a woman works hard all week because she expects a paycheck.
o Expectancies influence a variety of human experiences. We set the goals we do because we believe that we can reach them.
o Expectancies also play a role in the placebo effect. Many painkillers have been shown to be more effective in reducing pain if patients can see the intravenous injection sites. If patients can observe that they are getting a drug, they can harness their own expectations for pain reduction.
o Tolman emphasized that the conditioned stimulus is a signal, or an expectation, that an unconditioned stimulus will follow. Conditioning is governed by the subject's history and the informational value of the stimuli encountered, rather than by the contiguity of the CS and US.
o Example (blocking): a rat was conditioned by repeatedly pairing a tone (CS) and a shock (US) until the tone alone produced fear (CR). Then the experimenter continued to pair the tone with the shock, but turned on a light (a second CS) each time the tone sounded. Even though the light (CS) and the shock (US) were repeatedly paired, the rat showed no conditioning to the light (the light by itself produced no CR). Conditioning to the light was blocked, almost as if the rat had not paid attention. The rat apparently used the tone as a signal to predict that a shock would be coming; information about the light's pairing with the shock was redundant with the information already learned about the tone's pairing with the shock. The rat already possessed a good signal for the shock; the additional CS was not useful.
a. Latent learning (implicit learning): learning that is stored cognitively in memory but not yet expressed behaviorally. Example: walking around a new setting to get "the lay of the land." The first time you visited your college campus, you may have wandered about without a specific destination in mind; exploring the environment made you better prepared when the time came to find that 8 A.M. class.
b. Insight learning (Köhler's experiments with chimpanzees):
o Solving problems does not always involve trial and error or simple connections between stimuli and responses. Rather, when the ape realizes that its customary actions are not going to help it get the bananas—which are either very high up or out of reach outside the cage—it often sits for a period of time and appears to ponder how to solve the problem. Then it quickly rises, as if it has had a sudden flash of insight into the problem's solution, piles the boxes on top of one another, and gets the fruit.
o Insight learning requires that we think "outside the box," setting aside previous expectations and assumptions. One way to enhance insight learning and creativity in human beings is through multicultural experiences.
o Frontal lobe: the seat of higher intelligence and the part of the brain responsible for much of our higher mental processing, including insight and complex thinking.
o Gray matter connects different parts of the brain, which allows for unique combinations of known facts and ideas.
⇒ Both biological and cultural factors contribute to learning.
• Biological factors:
a. Instinctive drift: the tendency of animals to return to their instinctive (natural) behavior, which interferes with learning.
b. Preparedness: a species' biological predisposition to learn in certain ways but not others. Monkeys cannot learn to speak like us, and mammals exhibit preparedness in associating fear with snakes (mammals' fear of snakes has to do with the amygdala's role in emotions).
• Cultural influences: culture influences how much we use certain learning processes and what we learn. We cannot learn about something we do not experience.
• Psychological constraints: some people believe that humans have particular learning styles that make it easier for them to learn in some ways but not others. For example, you may have heard that someone can be a visual learner (learns by seeing), an aural learner (learns by listening), or a kinesthetic learner (learns through hands-on experience). Although these labels may be popular, there is no evidence that teaching people in a way that matches their learning style leads to better learning. However, our beliefs about learning can affect whether we learn.
- Mindset: the way our beliefs about ability dictate what goals we set for ourselves, what we think we can learn, and ultimately what we do learn. Individuals have one of two mindsets: a fixed mindset, in which they believe that their qualities are carved in stone and cannot change, or a growth mindset, in which they believe their qualities can change and improve through effort.
• Intelligence and thinking skills are not fixed but can change.

Health and Wellness section:
• Stress: the organism's response to a threat in the environment.
• Stress is reduced by:
o Predictability. For example, when a warning is given before a shock, you experience less stress after the trauma.
o Following a schedule. For example, when you receive a gift on your birthday or a holiday, the experience feels good.
However, if someone surprises you with a present out of the blue, you might feel some stress as you wonder, "What is this person up to?"
o Taking control of circumstances.
o Having an outlet to reduce frustration.
o Seeing improvements (even in negative circumstances). Imagine that you have two rats, both of which are receiving mild electrical shocks. One of them, Jerry, receives 50 shocks every hour, and the other, Chuck-E, receives 10 shocks every hour. The next day both rats are switched to 25 shocks every hour. Which one is more stressed out at the end of the second day? Even though Jerry has experienced more shocks overall, Chuck-E is more likely to show the wear and tear of stress. In Jerry's world, even with 25 shocks an hour, things are getting better.

Chapter 7: Memory

• Memory:
o The retention of information or experience over time.
o Memory occurs through three important processes (mnemonic: Edward Sips Rauch):
1. Encoding: taking in information.
2. Storage: storing it, or representing it in some manner in a "mental storehouse."
3. Retrieval: recalling it for a later purpose.

1. Encoding memory (left frontal lobe):
The first step in memory; the process by which information gets into memory storage. Some information gets into memory virtually automatically, whereas encoding other information takes effort. Four factors affect effortful encoding:
A. Attention:
• Selective attention: focusing on a specific aspect of experience while ignoring others. Attention is selective because the brain's resources are limited—it cannot attend to everything → if you are in a restaurant with lots of people talking, you selectively attend to what the person opposite you is saying rather than to everyone else.
• Sustained attention (vigilance): while selective attention is the ability to focus on one thing when a variety of experiences are presented, sustained attention is the ability to maintain attention to a selected stimulus for a prolonged period of time → paying close attention to your notes while studying for an exam.
• Divided attention: concentrating on more than one thing at the same time → chatting on WhatsApp while reading a Psych chapter. Divided attention and multitasking are a very poor way to encode memory.
B. Levels of processing (Craik and Lockhart):
Whether we engage with information superficially (shallow processing) or really get into it (deep processing). It is a continuum of memory processing from shallow to intermediate to deep, with deeper processing producing better memory.
⇒ The deeper the level of processing, the better the memory will be.
• Laptops and tablets can interfere with learning because they distract attention even when used only for notes, and pen and paper outperform laptops for students' memory of material. This is because most of us can type far faster than we can write by hand. When we take notes on a keyboard, we prevent active, deep encoding; writing by hand forces us to think about what we are writing down—interacting actively with the material so that we jot down the most important points.
C. Elaboration: the formation of a number of different connections between a stimulus, at any given level of memory encoding, and what we already know. When we elaborate on a topic during encoding, we are laying a pathway to help us retrieve the information. The more paths we create, the more likely it is that we will remember the information.
Elaboration is linked with higher neural activity in the left prefrontal cortex (responsible for language, speech, reasoning, and logic).
If you were studying the tensile strength of metals and you had previous knowledge of suspension bridges, you would elaborate by associating tensile strength and the materials used with the cables of a suspension bridge.
⇒ The more elaborate the processing, the better the memory will be.
⇒ Understanding new material leads to a higher frequency of recall. Self-reference leads to an even higher frequency of recall than merely understanding the meaning of the material.
• Self-reference: relating material to your own experience is another highly effective way to elaborate deeply on information and understand the material, by drawing mental links between aspects of your own life and the new information.
D. Imagery (Paivio):
• Images are a very good way of remembering things. Associating images with things that need to be remembered can help a person encode a memory better and faster.
• Memory can be stored either as a verbal code (words or labels) or as an image code.
• The dual-code hypothesis holds that the image code produces better memory than the verbal code, because pictures—at least those that can be named—are stored as both image codes and verbal codes. Thus, when we use imagery to remember, we have two potential avenues by which we can retrieve the information: in picture form or in word form.

2. Memory Storage:
• Storage is how information is retained over time and represented in memory.
• The Atkinson–Shiffrin theory states that memory storage involves three distinct systems. Sensory input goes into sensory memory (held for a few seconds at most). Through the process of attention, information moves into short-term memory, where it remains for only up to 30 seconds unless it is rehearsed. When information goes into long-term memory storage, it can be retrieved over a lifetime.
a. Sensory memory: very rich (no limited capacity) but holds information from the world in its original sensory form for only an instant, not much longer than the brief time it is exposed to the visual, auditory, and other senses, unless we use certain techniques to transfer it to short- and long-term memory. Think about the sights and sounds you encounter as you walk to class on a typical morning. Literally thousands of stimuli come into your field of vision and hearing—cracks in the sidewalk, chirping birds, the blue sky, faces and voices of hundreds of people. You do not process all of these stimuli, but you do process a number of them. Generally, you process more stimuli at the sensory level than you consciously notice. Sensory memory retains this information from your senses (including a large portion of what you think you ignore), but only for a few moments.
• Echoic memory is sensory memory for hearing; it is retained for up to several seconds.
• Iconic memory is sensory memory for vision; it is retained for only about ¼ of a second (Sperling's flashing-letter study). Example: residual iconic memory is what makes a moving point of light appear to be a line.
b. Short-term memory: a limited-capacity memory system in which information is usually retained for only about 30 seconds unless we use strategies to retain it longer. Compared with sensory memory, short-term memory is limited in capacity, but it can store information for a longer time. Much information goes no further than the sensory memory stage, retained for only a brief instant.
However, some information, especially that to which we pay attention, proceeds into short-term memory.
• Memory span refers to the number of items/digits an individual can remember after a single presentation. Usually the limit is in the range of 7 ± 2 items. If you think of important numbers in your life (such as phone numbers, student ID numbers, and your Social Security number), you will probably find that they fit into the 7 ± 2 range. If you rely on short-term memory to retain longer lists, you will probably make errors. (The 7 ± 2 rule was developed by George Miller.)
• Some ways to improve short-term memory:
o Chunking: grouping or "packing" information that exceeds the 7 ± 2 memory span into higher-order units that can be remembered as single units within that range. Example: consider this list: hot, city, book, forget, tomorrow, and smile. Hold these words in memory for a moment; then write them down. If you recalled the words, you succeeded in holding 30 letters, grouped into six chunks, in memory (6 falls within [5, 9]).
o Rehearsal: repeating the information over and over in your head to keep it in memory. One reason rehearsal does not work well for retaining information over the long term is that it does not give meaning to the information; it is just mechanical repetition. Over the long term, we remember information best when we add meaning to it, demonstrating the importance of deep, elaborate processing.
⇒ Attention + chunking + rehearsal → short-term memory.
⇒ Deep processing + elaboration → long-term memory.
• Working memory: a combination of components, including short-term memory and attention, that allows individuals to hold information temporarily as they perform cognitive tasks; a kind of mental workbench on which the brain manipulates and assembles information to guide understanding, decision making, and problem solving.
o Working memory is not the same as short-term memory: short-term memory is a passive storehouse that holds information until it moves to long-term memory, while working memory is an active memory system. It is separable from short-term memory, and it has a limited capacity.
o In working memory, if the chunks are relatively complex, most young adults can remember only 4 ± 1, that is, 3 to 5 chunks.
o A three-part model of working memory has been proposed: the phonological loop, the visuo-spatial sketchpad, and the central executive. You can think of them as two assistants or workers (the phonological loop and the visuo-spatial sketchpad) who work for the same boss (the central executive).
- The phonological loop is specialized to briefly store speech-based information (limited capacity).
- The visuo-spatial sketchpad stores visual and spatial information, including visual imagery (limited capacity).
- The central executive integrates information not only from the phonological loop and the visuo-spatial sketchpad but also from long-term memory. It monitors which information deserves our attention and which we should ignore. If working memory is like the files you have open on your computer, the central executive is you.
c. Long-term memory: memory that stores information for a very long time.
⇒ If, say, all of the information on the hard drive of your computer is like long-term memory, then working memory is comparable to what you actually have open and active at any given moment.
Long-term memory divisions:
a. Explicit (declarative) memory: memory that involves the conscious recollection of information → facts, events, and ideas that can be verbally communicated. It includes much of the information we learn in coursework.
o According to Bahrick, how long explicit content stays with us depends on how early we began learning it and how well we did. Example—learning Spanish: if you learned it at a younger age and got higher grades in the college course, you will maintain the knowledge for a longer time than if you had learned Spanish later and gotten a C. In the first few years after college, individuals showed a steep decline in memory for vocabulary learned in Spanish classes. However, there was little drop-off in memory for Spanish vocabulary from 3 years after taking Spanish classes to 50 years after taking them. Even 50 years after taking Spanish classes, individuals still remembered almost 50 percent of the vocabulary.
o Permastore content: Bahrick's term for content that is retained for a very long time (almost forever) and does not need to be rehearsed.
Explicit memory has two subgroups:
o Episodic memory is the retention of information about the where, when, and what of life's happenings—basically, how we remember life's episodes. Episodic memory is autobiographical. For example, episodic memory includes the details of where you were when your younger brother or sister was born, what happened on your first date, and what you ate for breakfast this morning.
o Semantic memory is a type of explicit memory pertaining to a person's knowledge about the world. It includes your areas of expertise, general knowledge of the sort you are learning in school, and everyday knowledge about the meanings of words, famous individuals, important places, and common things. For example, semantic memory is involved in a person's knowledge of chess, of geometry, and of who Melania Trump, LeBron James, and Lady Gaga are.
• Memory tends to be a mix of semantic and episodic memory. Many cases of explicit, or declarative, memory are neither purely episodic nor purely semantic but fall in between. Consider your memory for what you studied last night. You probably added knowledge to your semantic memory—that was, after all, the reason you were studying. You also probably remember where you were studying, as well as roughly when you started and when you stopped.
b. Implicit (non-declarative) memory:
• Unconsciously remembering skills and sensory perceptions rather than consciously remembering facts.
• Three subtypes of implicit memory processes:
o Procedural memory: memory for skills such as playing tennis, painting, boxing, and swimming. For example, once you have learned to drive a car, you remember how to go about it; you do not have to consciously remember how to drive.
o Classical conditioning: a form of learning that involves the automatic learning of associations between stimuli, so that one comes to evoke the same response as the other.
o Priming: a process that involves the activation of previous memories in order to learn better and faster. Example: in a common demonstration of priming, individuals study a list of words (such as hope, walk, and cake). Then they are given a standard recognition task to assess explicit memory: they must select all of the words that appeared in the list—for example, "Did you see the word hope? Did you see the word form?" Then participants perform a stem-completion task, which assesses implicit memory.
In this task, they view a list of incomplete words (for example, ho__, wa__, ca__), called word stems, and must fill in the blanks with whatever word comes to mind. The results show that individuals more often fill in the blanks with the previously studied words than would be expected if they were filling in the blanks randomly. For example, they are more likely to complete the stem ho__ with hope than with hole. This result occurs even when individuals do not recognize the words on the earlier recognition task. Because priming takes place even when explicit memory for previous information is not required, it is assumed to be an unconscious process.
Another example: researchers asked students to perform a word-search puzzle. Embedded in the puzzle were either neutral words (shampoo, robin) or achievement-related words (compete, win, achieve). Participants who were exposed to the achievement-related words did better on a later puzzle task, finding an average of 26 words in other puzzles, whereas those with the neutral primes found only 21.5. The researchers concluded that these implicit cues to achievement led participants to work harder. Priming a term or concept makes it more available in memory.
• The organization of memory: the organization of long-term memory (which is virtually unlimited) is explained by two theories:
o Schema: a preexisting mental framework that helps us organize and interpret information → even though you have never gone to a particular restaurant, you know how it works because you have had experience in other restaurants.
o Long-term memory is not very accurate, so we cannot always retrieve what we want, or the entirety of what we want. Thus, we use schemas to reconstruct the rest of the memory.
o A script is a kind of schema that is related to events and is involved in the recognition of faces and people. It is useful in letting us know what is happening around us → the teacher puts a paper on your desk, and your script tells you that you have a drop quiz.
o Connectionist networks, or parallel distributed processing (PDP):
- Tries to explain the organization of memory in terms of neural connections and electrical signaling among various neurons in the brain. Memories are organized sets of neurons that are routinely activated together because of the synapses that interconnect the various locations of neural activity (called nodes).
- Just as perception involves specific neurons responding to bits and pieces of experience, single neurons may fire in response to faces, eyes, or hair color. Yet in order for you to recognize your Uncle Albert, the individual neurons that provide information about hair color, size, and other characteristics must act together.
• Where memories are stored:
o Memories are not stored in a specific part of the brain; rather, the entire brain functions together in memory storage.
o Long-term potentiation: neurons that fire together develop a connection. The connection between them—and thus the memory—is strengthened.
o We automatically organize our memories.
o Explicit memory is stored in the hippocampus, the amygdala, and the temporal lobes of the cerebral cortex. In many aspects of explicit memory, information is transmitted from the hippocampus to the frontal lobes, which are involved in both retrospective memory (remembering things from the past) and prospective memory (remembering things that you need to do in the future).
The left frontal lobe is especially active when we encode new information into memory; the right frontal lobe is more active when we subsequently retrieve it.
o Implicit memory is stored in the hippocampus, the temporal lobes (especially for priming), and the cerebellum (given its role in coordination and balance, it is not surprising that the cerebellum is active in the implicit memory required to perform skills).

3. Memory retrieval (right frontal lobe):
o Retrieval is, simply, bringing memory out of "storage."
o The serial position effect: the tendency to recall items at the beginning and at the end of a list more readily than those in the middle.
o The primacy effect: remembering information at the beginning of a list better. The first few items in the list are easily remembered because they are rehearsed more, or because they receive more elaborate processing than words later in the list. Working memory is relatively empty when they enter, so there is little competition for rehearsal time. Moreover, because these items get more rehearsal, they stay in working memory longer and are more likely to be encoded into long-term memory.
o The recency effect: recalling information toward the end of a list better. First, when the last few items on a list are recalled, they might still be in working memory. Second, even if these items are not in working memory, the fact that they were just encountered makes them easier to recall.
o Retrieval cues and retrieval tasks may help with retrieval:
§ Cues: stimuli that help us remember or trigger memories. For instance, the smell of apple pie baking may remind you of a family dinner. If effective cues for what you are trying to remember are not available, you can create them. For example, write down the names of as many of your classmates as you can remember; when you run out of names, think about the activities you were involved in during those school years.
§ Tasks: your success in retrieving information also depends on the retrieval task you set for yourself. For instance, if you are simply trying to decide whether something seems familiar, retrieval is often a snap. Let's say you see a short, dark-haired woman walking toward you on the street. You quickly decide that she is someone who shops at the same market as you do. However, remembering her name or a precise detail, such as when you met her, can be harder. Another factor in retrieval is the demands of the task:
- Recall involves retrieving previously learned information. Example: essay writing. Recall tests have poor retrieval cues because, unlike multiple-choice questions, an essay topic does not provide any stimulus to remind you of information you already have.
- Recognition involves identifying previously learned information. Example: multiple-choice (MCQ) exams.
⇒ Recalling a face is more difficult than recognizing a face from a drawing, because recalling is "from scratch."
o The encoding specificity principle states that information present at the time of encoding or learning tends to be effective as a retrieval cue. For instance, you know your instructors when they are in the classroom setting—you see them there all the time. If, however, you run into one of them in an unexpected setting and in more casual attire, such as at the gym in workout clothes, the person's name might escape you. Your memory might fail because the cues you encoded are not available for use.
o This can create problems when the contexts in which information is encoded and retrieved are different.
Being in the same context as when the information was encoded may aid recollection—this is referred to as context-dependent memory: improved recall of specific episodes or information when contextual cues relating to the environment are the same during encoding and retrieval.
o False memories occur when we remember something that is not true. These memories are due to the failure of being able
