General Psychology 1st Semester 1st Year PDF

Summary

This document provides an overview of general psychology, focusing on topics like the nature of psychology, its scientific beginnings, various schools of thought (structuralism, functionalism, behaviorism, Gestalt psychology, psychoanalysis), and the nature-nurture debate.

Full Transcript


GENERAL PSYCHOLOGY FIRST UNIT - week 1 NATURE OF PSYCHOLOGY A primary aim of psychology is to predict behaviour by understanding its underlying causes. This task is challenging due to individual differences, as people vary and respond differently in various situations. Individual differences refer to the variations among people on physical or psychological dimensions. The origins of psychological reasoning date back to Ancient Greece, where philosophers like Socrates, Plato, and Aristotle explored the essence of consciousness, rationality, and human thought. The term "psychology" is derived from the Greek words "psyche" meaning life, and "logos" meaning explanation. The roots of psychology trace back to these ancient Greek philosophers, who asked fundamental questions about mental life: What is consciousness? Are people inherently rational or irrational? Is there truly free choice? These questions remain as relevant today as they were millennia ago, addressing the nature of the mind and mental processes, which are central to the cognitive perspective in psychology. Additionally, other psychological inquiries focus on the nature of the body and human behaviour, with a similarly long history. Hippocrates, often regarded as the "father of medicine", lived around the same time as Socrates and was deeply interested in physiology, the study of the functions of living organisms and their parts. He made significant observations about how the brain controls various body organs, laying the groundwork for what would become the biological perspective in psychology. BEGINNING OF SCIENTIFIC PSYCHOLOGY Although philosophers and scholars maintained an interest in the workings of the mind and body over the centuries, scientific psychology is generally considered to have begun in the late nineteenth century. This milestone was marked by Wilhelm Wundt establishing the first psychological laboratory at the University of Leipzig in Germany in 1879. Wundt's lab was founded on the belief that the mind and behaviour, like planets, chemicals, or human organs, could be scientifically analysed. His research primarily focused on the senses, particularly vision, but he and his colleagues also studied attention, emotion, and memory. Wundt utilised introspection to study mental processes, which involved observing and recording one's own perceptions, thoughts, and feelings. Examples of introspection included reports on the perceived weight of an object or the brightness of a flash of light. Although the introspective method was derived from philosophy, Wundt introduced a new aspect by combining self-observation with experiments. His experiments systematically varied the physical dimensions of stimuli, such as intensity, and used introspection to determine how these changes affected participants' conscious experiences. However, relying on introspection, especially for rapid mental events, proved problematic. Even with extensive training, different individuals produced varying introspective reports on simple sensory experiences, leading to inconclusive results. Consequently, introspection is not a central method in the current cognitive perspective. Additionally, reactions to the limitations of introspection contributed to the development of other modern psychological perspectives. STRUCTURALISM AND FUNCTIONALISM During the nineteenth century, significant advancements in chemistry and physics were achieved by breaking down complex compounds (molecules) into their basic elements (atoms). 
Inspired by these successes, psychologists sought to identify the mental elements that combined to form more complex experiences. Analogous to how chemists analysed water into hydrogen and oxygen, psychologists aimed to deconstruct the perception of lemonade into sensations like sweet, bitter, and cold. E. B. Titchener, a psychologist at Cornell University trained by Wundt, was a leading advocate of this approach in the United States and introduced the term structuralism, which refers to the analysis of mental structures. However, some psychologists opposed the purely analytical focus of structuralism. William James, a prominent psychologist at Harvard University, argued that understanding the fluid and personal nature of consciousness was more important than analysing its individual elements. His approach, known as functionalism, emphasised studying how the mind functions to enable an organism to adapt to and operate within its environment. The interest in adaptation among nineteenth-century psychologists was influenced by Charles Darwin's theory of evolution. Some suggested that consciousness evolved because it served a purpose in guiding an individual's activities. Functionalists believed that to understand how an organism adapts to its environment, psychologists must observe actual behaviour. Despite their differences, both structuralists and functionalists considered psychology to be the science of conscious experience. BEHAVIORISM Structuralism and functionalism were pivotal in the early development of twentieth-century psychology, each providing a systematic approach and serving as competing schools of thought. However, by 1920, these approaches were being overshadowed by three emerging schools: behaviourism, Gestalt psychology, and psychoanalysis. Among these, behaviourism had the most significant impact on scientific psychology in North America. Founded by John B. Watson, behaviourism rejected the focus on conscious experience in psychology. Watson conducted studies on the behaviour of animals and infants without making assertions about consciousness, suggesting that animal and child psychology could be independent sciences and could also serve as models for adult psychology. Watson argued that for psychology to be considered a science, its data must be observable and open to public inspection, unlike the private nature of consciousness. Since behaviour is public, science should focus on public facts. This perspective resonated with psychologists who were becoming discontented with introspection, leading to the rapid adoption of behaviourism. Many younger psychologists in the United States identified as behaviourists. Although Ivan Pavlov's research on conditioned responses was influential, it was Watson who popularised behaviourism. Behaviourists, including Watson, contended that most behaviour results from conditioning and that the environment shapes behaviour by reinforcing specific habits. For instance, giving children cookies to stop them from whining reinforces the habit of whining. The conditioned response was seen as the basic unit of behaviour from which more complex behaviours could be formed. Complex behaviour patterns resulting from training or education were viewed as interconnected conditioned responses. Behaviourists typically explained psychological phenomena in terms of stimuli and responses, leading to the term stimulus–response (S–R) psychology. 
While S-R psychology itself is not a theory or perspective, it provides a framework for discussing psychological information and is still occasionally used in modern psychology. GESTALT PSYCHOLOGY Around 1912, while behaviourism was gaining traction in the United States, Gestalt psychology was emerging in Germany. The term "Gestalt," meaning "form" or "configuration" in German, encapsulates the approach of Max Wertheimer and his colleagues Kurt Koffka and Wolfgang Köhler, who later emigrated to the United States. Gestalt psychologists focused primarily on perception, positing that perceptual experiences depend on the patterns formed by stimuli and the organisation of experience. They believed that what we see is influenced by the background against which an object appears and other aspects of the overall pattern of stimulation. They famously stated that "the whole is different from the sum of its parts", emphasising that the whole relies on the relationships among the parts. For example, when looking at a specific figure, we perceive it as a single large triangle rather than three smaller angles. Their key interests included the perception of motion, size judgement, and how colours appear under different lighting conditions. These perceptual insights led to various interpretations of learning, memory, and problem-solving, which contributed to the foundation of modern cognitive psychology. Gestalt psychology also influenced the founders of modern social psychology, such as Kurt Lewin, Solomon Asch, and Fritz Heider, who expanded Gestalt principles to understand interpersonal phenomena. For instance, Asch extended the Gestalt idea that people perceive wholes rather than isolated parts from object perception to person perception. Additionally, they viewed the process of imposing meaning and structure on incoming stimuli as automatic and outside conscious awareness, a perspective that continues to shape contemporary research on social cognition. PSYCHOANALYSIS Psychoanalysis, developed by Sigmund Freud around the turn of the twentieth century, is both a theory of personality and a method of psychotherapy. Central to Freud's theory is the concept of the unconscious—comprising thoughts, attitudes, impulses, wishes, motivations, and emotions that we are unaware of. Freud believed that unacceptable childhood wishes, often punished or forbidden, are repressed into the unconscious and continue to influence our thoughts, feelings, and actions. These unconscious thoughts manifest in dreams, slips of the tongue, and physical mannerisms. During therapy, Freud employed the method of free association, where patients were encouraged to say whatever came to mind, to bring unconscious wishes into awareness. Dream analysis also served this purpose. In classical Freudian theory, the motivations behind unconscious wishes were often related to sex or aggression. Due to this, Freud's theory was not widely accepted initially. Although contemporary psychologists do not fully accept Freud's theory, they generally agree that people's ideas, goals, and motives can sometimes operate outside of conscious awareness. By comparing human minds to an iceberg, Freud described three core elements of his theory. As shown in Figure 2, the small part of the iceberg that is above the surface of the water represents the conscious, the current awareness. 
The section of the iceberg that lies under the surface but not very deep represents the preconscious, all the information that we are not currently aware of but that can be easily recalled. The much bigger section of the iceberg that lies deep in the water represents the unconscious, a store of impulses, wishes and inaccessible memories that nevertheless influence our behaviour. THE NATURE-NURTURE DEBATE One of the earliest and still ongoing debates in human psychology is the nature–nurture debate, which questions whether human capabilities are innate or acquired through experience. The nature perspective posits that humans are born with an inherent store of knowledge and understanding of reality. Early philosophers believed this knowledge could be accessed through reasoning and introspection. In the seventeenth century, Descartes supported this view, arguing that certain ideas (such as God, the self, geometric axioms, perfection, and infinity) are innate. Descartes also conceived of the body as a machine that could be studied like other machines, laying the groundwork for modern information-processing perspectives on the mind. Conversely, the nurture perspective asserts that knowledge is gained through experiences and interactions with the world. While some early Greek philosophers shared this view, it is most strongly associated with the seventeenth-century English philosopher John Locke. Locke described the human mind at birth as a tabula rasa, a blank slate on which experience inscribes knowledge and understanding as the individual matures. This view led to associationist psychology, which denied the existence of inborn ideas or capabilities, arguing instead that the mind is filled with ideas that enter through the senses and become associated through principles such as similarity and contrast. Current research on memory and learning is connected to early association theory. The classic nature–nurture debate has become more nuanced in recent decades. Although some psychologists still argue that human thought and behaviour result primarily from biology or experience, most adopt a more integrated approach. They recognize that biological processes (such as heredity or brain functions) influence thoughts, feelings, and behaviour, but also acknowledge the significant impact of experience. Therefore, the current question is not whether nature or nurture shapes human psychology, but rather how they interact to do so. unit 2 CONTEMPORARY PSYCHOLOGY What defines a psychological perspective? Essentially, it represents an approach - a particular lens through which topics within psychology are examined. Any aspect of psychology can be viewed from various perspectives, much like how any human action can be analysed from different angles. For instance, if you punch someone after being insulted, a biological perspective would focus on the brain areas involved, nerve firings, and muscle movements. In contrast, a behavioural perspective would describe the punch as a response learned through past reinforcement to the insult stimulus, without delving into internal physiological processes. A cognitive perspective would analyse the mental processes behind your action, considering your goals and plans—such as defending your honour. From a psychoanalytic perspective, the punch might be seen as an expression of an unconscious aggressive drive. Lastly, a subjectivist perspective would interpret your action as a reaction to perceiving the insult as a personal affront. 
These five perspectives - biological, behavioural, cognitive, psychoanalytic, and subjectivist - are the primary approaches in contemporary psychology. Although each offers a distinct viewpoint, they are not mutually exclusive; rather, they illuminate different facets of complex psychological phenomena. Many psychological topics benefit from an eclectic approach that integrates insights from multiple perspectives to provide a comprehensive understanding. Throughout this Week, these perspectives are explored in detail, each contributing unique insights into the diverse workings of the human mind and behaviour. BIOLOGICAL PERSPECTIVE The human brain houses more than 10 billion nerve cells and an incredibly vast network of connections between them, making it perhaps the most intricate structure known in the universe. Essentially, all psychological phenomena can be linked to the brain and nervous system's activity. The biological approach in studying humans and other species aims to connect observable behaviours with the electrical and chemical processes occurring internally. Research from this perspective seeks to define the neurobiological mechanisms underlying behaviours and mental processes. For example, the biological approach to understanding depression investigates how abnormal changes in neurotransmitter levels—chemicals facilitating communication between nerve cells in the brain—might contribute to the disorder. To illustrate this perspective, consider the study of face recognition in patients with brain injuries, revealing specialised brain regions dedicated to this function. Typically, face recognition centres are predominantly located in the right hemisphere of the human brain, which, along with the left hemisphere, shows significant specialisation. In most right-handed individuals, the left hemisphere specialises in language comprehension, while the right hemisphere excels in spatial interpretation. Moreover, the biological perspective enhances the study of memory by emphasising specific brain structures like the hippocampus, crucial for memory consolidation. For instance, childhood amnesia, the inability to recall early memories, may stem from an immature hippocampus, which typically matures fully about a year or two after birth. This perspective underscores how understanding brain structures and their functions provides insights into complex psychological phenomena, illustrating the intricate relationship between neural processes and cognitive functions. BEHAVIOURAL PERSPECTIVE The behavioural perspective centres on observable stimuli and responses, viewing behaviour primarily as a product of conditioning and reinforcement. For instance, in analysing your social interactions from a behavioural standpoint, the focus would be on the people you interact with (social stimuli), the responses you elicit (rewarding, punishing, or neutral), the responses you receive in return (rewarding, punishing, or neutral), and how these exchanges maintain or disrupt the interaction dynamics. Using our previous examples can further illustrate this approach. In the context of obesity, some individuals may overeat (a specific response) only in specific situations (such as while watching television), and part of weight-control programs involves learning to avoid these triggering stimuli. 
Concerning aggression, children are more likely to exhibit aggressive behaviours like hitting another child when such behaviours are rewarded (the other child withdraws) rather than when they are punished (the other child retaliates). Historically, the strict behavioural approach disregarded individuals' mental processes entirely, and even contemporary behaviourists typically refrain from speculating about the mental processes that occur between stimulus and response. Nonetheless, psychologists who are not strict behaviourists often document individuals' verbal self-reports about their conscious experiences and infer mental activity from these subjective data. While few psychologists today strictly identify as behaviourists, many modern developments in psychology have evolved from the foundational work laid by earlier behaviourists. THE COGNITIVE PERSPECTIVE The contemporary cognitive perspective represents a return to psychology's cognitive origins and a response to behaviourism's limitations in addressing complex human activities such as reasoning, planning, decision-making, and communication. Similar to its nineteenth-century predecessor, the modern cognitive approach focuses on mental processes like perception, memory, reasoning, decision-making, and problem-solving. However, unlike the earlier version, contemporary cognitive psychology does not rely on introspection. Instead, it asserts two key principles: 1. that studying mental processes is essential for a comprehensive understanding of human behaviour; 2. that these processes can be objectively studied by observing specific behaviours and interpreting them in terms of underlying mental activities. Cognitive psychologists often use the analogy of a computer to explain how incoming information is processed: selected, compared, integrated with existing memories, transformed, and reorganised. Consider childhood amnesia, discussed earlier in this Unit. The phenomenon suggests that our inability to recall events from our early years may stem from significant developmental changes in how we organise and store memories. These changes may be particularly pronounced around age three, a time when language skills undergo substantial development, providing a new framework for organising our experiences into coherent memories. NEUROCONSTRUCTIVISM The neuroconstructivist approach offers a dynamic perspective on development, emphasising the interplay between genetic activity and environmental signals. Rather than following a rigid, predetermined path, this approach posits that development is an ongoing, adaptive process. Gene activity is seen as highly responsive to both internal and external cues, creating a complex feedback loop where the environment shapes genetic expression and, in turn, genetic expression influences how an individual interacts with their environment. This perspective diverges from traditional views that regard genes as the primary drivers of development, suggesting instead that the brain is a highly plastic organ, continually moulded by experience. Central to neuroconstructivism is the idea of progressive specialisation. As individuals grow, their neural structures become increasingly specialised and fine-tuned in response to environmental demands. This process is not linear but involves constant adjustments and reorganisations. Early experiences, therefore, play a crucial role in shaping neural architecture, with critical periods marking times when the brain is particularly sensitive to specific types of input. 
During these periods, the environment can have profound impacts, fostering the development of certain skills and abilities while potentially limiting others. The neuroconstructivist approach also highlights the significance of embodied cognition—the notion that cognitive processes are deeply rooted in the body's interactions with the world. Sensorimotor experiences are integral to cognitive development, with physical actions and perceptions influencing neural development. For example, the development of spatial reasoning is closely linked to a child's ability to move and interact with their surroundings. Thus, cognitive development cannot be fully understood without considering the embodied nature of human experience. Moreover, neuroconstructivism recognizes the role of social context in development. Social interactions provide rich, varied stimuli that drive cognitive and neural development. Language acquisition, for instance, is heavily influenced by social communication, where children learn through listening, imitation, and feedback from others. The social environment, therefore, acts as a critical source of external signals that shape neural pathways and cognitive skills. In essence, the neuroconstructivist approach presents a holistic view of development, where the brain's growth is an emergent property arising from the continuous interaction between genes and the environment. This perspective underscores the importance of considering the dynamic, reciprocal nature of development, moving away from reductionist views that isolate genetic and environmental influences. By appreciating the complex, adaptive nature of development, neuroconstructivism provides a comprehensive framework for understanding how individuals develop unique cognitive and neural profiles in response to their specific experiences and environments. unit 3 WHY PSYCHOLOGISTS RELY ON EMPIRICAL METHODS All scientists, whether they are physicists, chemists, biologists, sociologists, or psychologists, use empirical methods to study the topics that interest them. Empirical methods include the processes of collecting and organising data and drawing conclusions about those data. The empirical methods used by scientists have developed over many years and provide a basis for collecting, analysing, and interpreting data within a common framework in which information can be shared. We can label the scientific method as the set of assumptions, rules, and procedures that scientists use to conduct empirical research. THE CHALLENGES OF STUDYING PSYCHOLOGY A major goal of psychology is to predict behaviour by understanding its causes. Making predictions is difficult in part because people vary and respond differently in different situations. Individual differences are the variations among people on physical or psychological dimensions. For instance, although many people experience at least some symptoms of depression at some times in their lives, the experience varies dramatically among people. Some people experience major negative events, such as severe physical injuries or the loss of significant others, without experiencing much depression, whereas other people experience severe depression for no apparent reason. Other important individual differences include differences in extraversion, intelligence, self-esteem, anxiety, aggression, and conformity. Because of the many individual differences that influence behaviour, we cannot always predict who will become aggressive or who will perform best in graduate school or on the job. 
The predictions made by psychologists (and most other scientists) are only probabilistic. We can say, for instance, that people who score higher on an intelligence test will, on average, do better than people who score lower on the same test, but we cannot make very accurate predictions about exactly how any one person will perform. Another reason that it is difficult to predict behaviour is that almost all behaviour is multiply determined, or produced by many factors. And these factors occur at different levels of explanation. We have seen, for instance, that depression is caused by lower-level genetic factors, by medium-level personal factors, and by higher level social and cultural factors. You should always be skeptical about people who attempt to explain important human behaviors, such as violence, child abuse, poverty, anxiety, or depression, in terms of a single cause. Furthermore, these multiple causes are not independent of one another; they are associated such that when one cause is present other causes tend to be present as well. This overlap makes it difficult to pinpoint which cause or causes are operating. For instance, some people may be depressed because of biological imbalances in neurotransmitters in their brain. The resulting depression may lead them to act more negatively toward other people around them, which then leads those other people to respond more negatively to them, which then increases their depression. As a result, the biological determinants of depression become intertwined with the social responses of other people, making it difficult to disentangle the effects of each cause. Another difficulty in studying psychology is that much human behaviour is caused by factors that are outside our conscious awareness, making it impossible for us, as individuals, to really understand them. DESCRIBING BEHAVIOUR Simply describing the behaviour of humans and other animals helps psychologists understand the motivations behind it. Such descriptions also serve as behavioural benchmarks that help psychologists gauge what is considered normal and abnormal. Psychology researchers use a range of research methods to help describe behaviour including naturalistic observation, case studies, correlational studies, surveys, and self-report inventories. 1. Naturalistic observation: Naturalistic observation is a research method that involves observing subjects in their natural environment. This approach is often used by psychologists and other social scientists. It is a form of qualitative research, which focuses on collecting, evaluating, and describing non-numerical data. It can be useful if conducting lab research would be unrealistic, cost-prohibitive, or would unduly affect the subject's behaviour. The goal of naturalistic observation is to observe behaviour as it occurs in a natural setting without interference or attempts to manipulate variables. ○ Pros: An advantage of naturalistic observation is that it allows the investigators to directly observe the subject in a natural setting. The method gives scientists a first-hand look at social behaviour and can help them notice things that they might never have encountered in a lab setting. ○ Cons: lack of validity, lack of controls. 2. Case studies: A case study is a detailed study of a specific subject, such as a person, group, place, event, organisation, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research. 3. 
Correlational studies: A correlational study is a type of research design that looks at the relationships between two or more variables. Correlational studies are non-experimental, which means that the experimenter does not manipulate or control any of the variables. 4. Surveys and self-report inventories: A self-report inventory is a type of psychological test often used to assess attitudes, characteristics, and other personality traits. This type of test is often presented in a paper-and-pencil format or may even be administered on a computer. A typical self-report inventory presents a number of questions or statements that may or may not describe certain qualities or characteristics of the test subject. This type of survey can be used to look at your current behaviours, past behaviours, and possible behaviours in hypothetical situations. There are many different self-report inventories. The following are just a few well-known examples. DIFFERENT BRANCHES Psychology is such a broad field that conveying its depth and breadth can be difficult. As a result, a number of unique and distinctive branches of psychology have emerged, each one dealing with specific psychological areas within the study of the mind, brain, and behaviour. Understanding what these subtopics represent can help you decide where your interests may lie. Psychology can be roughly divided into two major areas: 1. Research, which seeks to increase our knowledge base; 2. Practice, through which our knowledge is applied to solving problems in the real world. Clinical Psychology → It is the branch of psychology concerned with the assessment and treatment of mental illness, abnormal behaviour, and psychiatric disorders. Clinicians often work in private practices, but many also work in community centres or at universities and colleges. You can even find clinical psychology professionals in hospital settings and mental health clinics. In these organisations, they often work as part of a collaborative team that may include physicians, psychiatrists, and other mental health professionals. Counseling Psychology → It is one of the largest areas of psychology. It is centred on treating clients in mental distress who may be experiencing a wide variety of psychological symptoms. The Society of Counseling Psychology explains that professionals working in this type of psychology can improve their clients' interpersonal functioning throughout life. They do this by improving the client's social and emotional health, as well as addressing concerns about health, work, family, marriage, and more. Health Psychology → Also sometimes called medical psychology or behavioural medicine, it focuses on how biology, psychology, behaviour, and social factors influence health and illness. This area of psychology involves the promotion of health across a wide variety of domains, as well as the prevention and treatment of disease and illness. Health psychologists often deal with health-related issues such as weight management, smoking cessation, stress management, and nutrition. They might also research how people cope with illnesses, helping patients learn more effective coping strategies. Some professionals in this branch of psychology assist with the design of disease prevention and public awareness programs, while others work within the government to improve health care policies. Industrial-Organisational Psychology → It applies psychological principles to workplace issues. 
This psychological area, often referred to as I/O psychology, seeks to improve productivity and efficiency in the workplace while maximising the well-being of employees. It includes areas such as human factors. Human factors psychology focuses on human error, product design, ergonomics, human capability, and human-computer interaction. Its goal is to improve how people interact with products and machines. This might involve helping to design products intended to minimise injury or creating workplaces that promote greater accuracy and safety. Research in I/O psychology is known as applied research because it seeks to solve real-world problems. These types of psychologists study topics such as worker attitudes, employee behaviours, organisational processes, and leadership. Personality Psychology → It is the branch of psychology that focuses on the study of thought patterns, feelings, and behaviours that make each individual unique. Classic theories of personality include Freud's psychoanalytic theory of personality and Erikson's theory of psychosocial development. Personality psychologists might look at how different factors (such as genetics, parenting, and social experiences) influence personality development and change. They may also be involved in the creation or administration of personality tests. Behavioural Psychology → Also known as behaviourism, it is a theory of learning based on the idea that all behaviours are acquired through conditioning. Behavioural strategies such as classical conditioning and operant conditioning are often utilised to teach or modify behaviours. While this type of psychology dominated the field during the first part of the twentieth century, it became less prominent during the 1950s. However, behavioural techniques remain a mainstay in therapy, education, and many other areas. Biopsychology → It is a psychological area focused on how the brain, neurons, and nervous system influence thoughts, feelings, and behaviours. This field draws upon many different disciplines, including basic psychology, cognitive psychology, experimental psychology, biology, physiology, and neuroscience. People who work in this type of psychology often study how brain injuries and brain diseases impact human behaviour. Biopsychology is also sometimes referred to as physiological psychology, behavioural neuroscience, or psychobiology. Cognitive Psychology → It is a psychological area that focuses on internal mental states. This area has continued to grow since it emerged in the 1960s and is centred on the science of how people think, learn, and remember. Professionals who work in this type of psychology typically study cognitive functions such as perception, motivation, emotion, language, learning, memory, attention, decision-making, and problem-solving. Cognitive psychologists often use an information-processing model to describe how the mind works, suggesting that the brain stores and processes information much like a computer. Comparative psychology → It is the branch of psychology concerned with the study of animal behavior. This is important because the study of how animals behave can lead to a deeper and broader understanding of human psychology. This psychology subtype has its roots in the work of researchers such as Charles Darwin and George Romanes and has grown into a highly multidisciplinary subject. In addition to psychologists, contributors to this field include biologists, anthropologists, ecologists, geneticists, and several others. 
Cross-Cultural Psychology → It is a branch of psychology that looks at how cultural factors influence human behavior. This may involve looking at differences between collective and individualist cultures, for instance. Cross-cultural psychologists might also look at how cultures vary in terms of emotion, personality, or child development. The International Association of Cross-Cultural Psychology (IACCP) was established in 1972. This type of psychology has continued to grow and develop since that time, with increasing numbers of psychologists investigating how behavior differs among cultures throughout the world. Experimental Psychology → It is the psychological area that utilizes scientific methods to research the brain and behavior. Many of these techniques are also used in other psychology areas to study everything from childhood development to social issues. This type of psychology is often viewed as a distinct subfield, but experimental techniques and methods are used extensively throughout every branch. Experimental psychologists work in a wide variety of settings, including colleges, universities, research centers, government, and private businesses. They utilize the scientific method to study a range of human behaviors and psychological phenomena. Forensic Psychology → It deals with issues related to psychology and the law. Those who work in this branch apply psychological principles to legal issues. This may involve studying criminal behavior and treatment or working directly in the court system. Forensic psychologists perform a wide variety of duties, including providing testimony in court cases, assessing children in suspected child abuse cases, preparing children to give testimony, and evaluating the mental competence of criminal suspects. In many cases, people working in forensic psychology aren't necessarily "forensic psychologists." These individuals might be clinical psychologists, school psychologists, neurologists, or counselors who lend their psychological expertise to provide testimony, analysis, or recommendations in legal or criminal cases. Developmental Psychology → It focuses on how people change and grow throughout life. This area of psychology seeks to understand and explain how and why people change. Developmental psychologists study physical growth, intellectual development, emotional changes, social growth, and perceptual changes that occur throughout the lifespan. Some professionals may specialize in infant, child, adolescent, or geriatric development, while others might primarily study the effects of developmental delays. This psychology branch covers a huge range of topics, ranging from prenatal development to Alzheimer's disease. Educational Psychology → It is the branch of psychology concerned with schools, teaching psychology, educational issues, and student concerns. Educational psychologists often study how students learn. They may also work directly with students, parents, teachers, and administrators to improve student outcomes. Professionals in this type of psychology sometimes study how different variables influence individual students. They may also study learning disabilities, giftedness, and the instructional process. School Psychology → It is a type of psychology that involves working in schools to help kids deal with academic, emotional, and social issues. School psychologists also collaborate with teachers, students, and parents to help create a healthy learning environment. 
Most school psychologists work in elementary and secondary schools, but others can be found in private clinics, hospitals, state agencies, and universities. Some go into private practice and serve as consultants—especially those with a doctoral degree in school psychology. Social Psychology → It seeks to understand and explain social behavior. It looks at diverse topics including group behavior, social interactions and perceptions, leadership, nonverbal communication, and social influences on decision-making. Social influences on behavior are a major interest in social psychology, but these types of psychologists are also focused on how people perceive and interact with others. This branch of psychology also includes topics such as conformity, aggression, and prejudice. Sports Psychology → It is the study of how psychology influences sports, athletic performance, exercise, and physical activity. Individuals may work with a sports psychologist to improve their focus, develop mental toughness, increase motivation, or reduce sports-related anxiety. Some sports psychologists work with professional athletes such as pro sports players and top Olympians. Others utilize exercise and sports to enhance the health and well-being of non-athletes throughout their lifespan. week 2 PSYCHOLOGICAL KNOWLEDGE IS BASED ON SCIENTIFIC EVIDENCE Psychology aims to describe behavior, understand the causes of it, and develop hypotheses and models that allow us to predict it - not to predict it with absolute certainty (it is impossible), but with reasonable precision, or at least… above chance! That's why statistics matter. Philosophy, religion, politics, economics, and other fields of human thought also try to explain human behavior. Unlike philosophy or religion (but ideally like economics), however, psychology should strictly base its conclusions on evidence collected using the scientific method. Therefore, our ideas and theories must be corroborated by empirical data. But there is more, and this is crucial: we must also accept that our ideas and theories might, at any moment, be challenged and replaced by others that fit the data better. Unlike other social sciences such as economics, psychology has its own methods and areas of investigation. For example, most contemporary psychology describes human behavior in terms of the underlying cognitive processes. It has not always been so. Most psychological research is quantitative, meaning that it translates psychological phenomena into numerical quantities, and it reaches conclusions based on statistical analysis. Qualitative research also exists. It focuses on in-depth investigations of individuals, thematic analysis, and other non-numerical approaches. However, in this course we will focus only on quantitative psychology, as it has indisputably provided the vast majority of the current relevant findings. (For this Week, you are strongly encouraged to study these further materials: Research Designs, by Christie Napa Scollon, Singapore Management University, and Psychology as a Science) NOT EVERYTHING CAN BE INVESTIGATED As argued in this open online chapter, not all issues concerning human behavior are within the domain of scientific inquiry. Only statements that can be objectively and empirically determined as either true or false are. More specifically, according to the famous philosopher of science, Karl Popper, the most important feature of a scientific theory is its being "falsifiable", that is, demonstrably false if it is false. 
Therefore, it must make "risky" predictions that are specific enough to be possibly disproved via empirical investigation. Several statements potentially relevant for one's life cannot be investigated scientifically. These include political decisions, values (e.g., "all human beings must have the same rights", "citizenship should be given to anyone who was born in the country"), but also existential and philosophical convictions that ultimately cannot be proved false (e.g., "divine providence guides our lives", "the pursuit of happiness is the goal of human life"). Here's why philosophy, religion, politics, and the rest may still be relevant for many! Despite some statements being outside the domain of scientific inquiry, psychological science may aid decisions. For example, if politicians pass one law rather than another based on considerations such as "humans follow the rules only for fear of punishment" or "true mental health issues are negligible in the population", these are considerations that can be examined empirically. In fact, psychology might have produced fewer important results than physics or biology (so far), but psychological reasoning is frequently involved in relevant decisions in society. So, it is important that such decisions are scientifically informed. WHAT IS A GOOD RESEARCH QUESTION? Scientific investigation starts with a research question. The latter may arise just out of curiosity, but in practice it will likely reflect a hypothesis derived from a coherent theoretical framework which has already been proved adequate to explain a variety of phenomena. A good research question in psychology should be precise enough to receive a specific quantitative answer. For instance, how much does vocabulary increase in children each year during school age, on average? How much does this treatment reduce anxiety symptoms, on average? A good research question should also be broad enough to be relevant to many (e.g., I must generalize the results to all children, or at least to all children within a cultural context... not just to my own daughter or to a specific group of children that I just investigated). Finally, research questions in psychology are generally probabilistic: I have repeatedly specified "on average" in the examples above, because I know that there are virtually no phenomena in psychology that are identical in everyone and in every condition. So, we generally provide quantitative answers about average phenomena (e.g., the average efficacy of a treatment) - but describing the variability of phenomena across individuals is also an important area of empirical investigation. Research questions are probabilistic also in another sense: our results are always uncertain to some degree. They emerge from statistical analysis on subsets of the human population. We may always find out that our previous results were valid only under certain conditions, that the phenomenon has changed over time, or even that the original observations were a freak occurrence that just fails to replicate in a new study. THEORIES AND HYPOTHESES As said in the previous slide, any single result may be uncertain. Nevertheless, our confidence is given by the fact that many results converge to form a coherent picture, which can be summarized by a larger theoretical framework. Within a theoretical framework, specific theories can be formulated. As said in the previous slide, research questions generally stem from theories, not just from curiosity. 
A theory is a set of coherent and integrated principles that explain a variety of related phenomena. Although circumscribed to a class of phenomena, a theory is still too general to be entirely tested with a single experiment. Therefore, specific hypotheses must be deduced from the theory and translated into specific, quantitative research questions. EXAMPLE: The cognitive framework (as a theoretical framework) states that our behavior is largely caused by our cognitions (i.e., thoughts, knowledge, interpretations of stimuli). Within it, the cognitive theory of emotions proposed by Schachter and Singer states that emotions start with an automatic physiological arousal in response to a stimulus. We then cognitively interpret this arousal based on the situation, label it as an emotion, and regulate our behavior accordingly. A specific hypothesis is that if a person feels physiologically aroused for a reason that is unknown to her (e.g., she has secretly received a shot of an adrenaline precursor) and the surrounding context gives her a cue that she might have a reason to be angry (e.g., she sees another person complaining for a reason that she might possibly share), she will feel angry and behave accordingly. And this was the hypothesis that was successfully tested by Schachter and Singer! DIFFERENT RESEARCH METHODS In fact, most but not all research questions are hypothesis-driven. If there is a well-defined hypothesis, then the ideal research method is experimental. An experiment can clearly and almost definitively confirm or disconfirm a hypothesis (within the limits of the statistics, and the validity of the experiment itself). For practical or ethical reasons, however, an experiment may be impossible to conduct, so a researcher may opt for a correlational method. The latter provides less compelling evidence, but this might be inevitable. This will be explained in more detail in the next two units. Research may also be exploratory if there are no well-defined hypotheses. For example, we may want to see whether a behavior correlates with any of a large set of personality traits and facets, without having a strong theory that helps us make a very specific prediction. Exploratory research can be informative, but it may require larger samples to provide adequate statistical evidence, and it may leave us with more uncertainty about possible confounding variables (i.e., without a theory, we are left very unsure about the real nature of the observed correlations). Finally, there are descriptive research methods, including single case studies (which should not be assumed to provide generalizable evidence, however), surveys (to get a picture of a situation, for example opinion polls), and naturalistic observations (when the researchers just observe the phenomenon as it naturally occurs, without interfering at all with it). WHAT IS EXPERIMENTAL RESEARCH? With experimental research we investigate "cause - effect" links between phenomena. To do so, we directly manipulate the (presumed) cause and observe the consequences. It is a major approach within psychological research. As we will see in the next unit, its main limitation is that it cannot always be used, generally because of ethical or practical reasons. Experimental research has its own lexicon, and it implies several crucial considerations, which I will present in this unit. EXAMPLE: Let's use a "toy example" for the sake of simplicity. 
Imagine that you want to see whether consuming sugar increases hyperactivity levels and the speed of cognitive processing in children. This may be your hypothesis because you have previously noticed that children at parties eat a lot of sugar and then appear to be psycho-physiologically aroused. So you have started from anecdotal evidence, but of course... This is not enough, as it leaves several alternatives open: children may become aroused just because they are in a group, or at a party, or because of the general caloric intake (rather than sugar specifically), or they may not be aroused at all—your observation may have misled you! There is nothing wrong with anecdotal evidence, if it is only the basis for a more serious investigation! So, what do you do to investigate your hypothesis further? You administer sugar to children and see what happens! In this unit, I will present experimental research referring to this example. Go see the online chapter by Scollon (2021) on the NOBA project for alternative examples if you want additional insight! CLARITY OF THE RESEARCH QUESTION In the "sugar example" (previous page), what is the research question, precisely? Is it to see whether sugar causes hyperactivity? In simple terms, yes. However, psychological science should make its predictions more precise than this. Do we really expect that every single child will certainly increase their hyperactivity after consuming any amount of sugar? Clearly NOT! In psychology any effect is probabilistic. There are too many contingent factors and individual differences for any psychological effect to be deterministic. So, what is the research question? Any of the following are reasonable: To see whether consuming a large enough amount of sugar increases the probability of children displaying hyperactivity and enhanced cognitive processing speed, compared to other children in the same context who have NOT consumed sugar / compared to themselves in a similar context when they have NOT consumed sugar; Or perhaps even better: to see whether consuming a large enough amount of sugar increases the average level of hyperactivity and [etcetera]. INDEPENDENT AND DEPENDENT VARIABLES The core of experimental research in psychology is the definition of the independent and the dependent variable. These two labels may sound tricky, but you must remember them! The independent variable is the one directly manipulated by the experimenter, while the "dependent variable" is the one that is observed to see if any relevant change has occurred. To help remember, note that the dependent variable is so called because it presumably "depends on" the researcher's manipulation of the independent variable. Note that there might be several independent and dependent variables in the same experiment. Complex designs with multiple manipulated factors have many independent variables (which may interact with each other!), and multivariate designs have many dependent variables. With reference to the "sugar example" (previous pages), the independent variable is the amount of sugar intake. It is manipulated by the experimenters because they are the ones who administer sugar to the children, in a totally controlled way. In any experimental setting, this independent variable would probably be dichotomized for simplicity to become a "group" factor (Sugar VS No sugar administered). The dependent variables would be at least two: 1. psychophysiological arousal (or hyperactivity level); 2. speed of cognitive processing. 
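To make these roles concrete, here is a minimal sketch in Python (not from the course materials; all numbers and column names are invented purely for illustration) of what a data table for the hypothetical sugar experiment could look like, with the dichotomized group factor as the independent variable and the two dependent variables as separate columns:

```python
# A minimal sketch with invented numbers: one dichotomized independent
# variable ("group") and two dependent variables for the "sugar example".
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)
n_per_group = 30

data = pd.DataFrame({
    # Independent variable: manipulated by the experimenter (sugar vs no sugar)
    "group": ["sugar"] * n_per_group + ["no_sugar"] * n_per_group,
    # Dependent variable 1: hyperactivity, e.g. an observer rating from 0 to 10
    "hyperactivity": np.concatenate([
        rng.normal(6.0, 1.5, n_per_group),   # hypothetical sugar-group scores
        rng.normal(5.5, 1.5, n_per_group),   # hypothetical control-group scores
    ]),
    # Dependent variable 2: processing speed, e.g. items completed per minute
    "processing_speed": np.concatenate([
        rng.normal(21.0, 4.0, n_per_group),
        rng.normal(20.0, 4.0, n_per_group),
    ]),
})

# Descriptive statistics for each level of the independent variable
print(data.groupby("group")[["hyperactivity", "processing_speed"]].mean())
```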
They are "dependent" because they are supposed to "depend on" the sugar intake. OPERATIONALISING THE VARIABLES Most experimental research in psychology is quantitative. Therefore, an important issue is: how do we measure variables? For the sugar intake (either as a continuous amount of mass or a dichotomous sugar/no-sugar condition), measurement is very easy. But measuring psychological constructs is harder. They cannot be directly observed in the way physical quantities are observed. Translating psychological constructs (e.g., happiness, intelligence, attention, speed of processing, introversion) into measurable variables is called operationalization. It may involve cognitive tests, questionnaires, implicit measures, or anything that allows us to "translate" the unobserved (also known as "latent") construct into a measurable amount, a number. (Of course, something gets lost in the process... but this is necessary). What about the dependent variables in the "sugar example"? Well, we could still try to measure hyperactivity levels directly by tracking the amount of movement (e.g., via actigraphy), or indirectly using adequate observer rating scales filled in by independent observers. For speed of cognitive processing, there are several validated tasks. They generally require children to perform a series of simple cognitive operations as fast as possible (a few examples can be found among the subtests of the famous Wechsler scales of intelligence; see Week 11). SAMPLING AND GENERALISABILITY Sometimes even single cases can be informative in Psychology. Generally, however, you want to reach conclusions that you can confidently generalize to an entire reference population, for example to all human children, to all older adults in Japan, to all people with a particular clinical condition, and so on. Unfortunately, even if the reference population is limited (e.g., all children with dyslexia), it is practically impossible to investigate ALL existing cases! Thus, research relies on samples that are (hopefully) representative of a larger population. Ideally, participants should be selected randomly from the entire target population. This is often nearly impossible, however. In practice, researchers often rely on "convenience samples", that is, samples that are easy for them to recruit. For example, undergraduate students in their own course. Sure, this is not optimal, as there is no guarantee that the sample is truly representative of the whole population... but that's it! Representativeness could be improved by ensuring that the sample reflects the characteristics of the population (e.g., including participants from different geographical areas, different age groups, different education levels; "cluster" and "stratified" sampling ensure random sampling based on these characteristics, for example, but there is no room to go into detail here). If the sample can be assumed to be adequately representative, then the experimental results can be generalized to the population. THE IMPORTANCE OF CONTROL CONDITIONS In the "sugar experiment" (as in nearly any experiment) you need a control condition. Without it, you could even observe a huge increase in hyperactivity and cognitive speed after administering sugar, but this may be merely the effect of children becoming familiar with the situation, or with you, or becoming bored, or anything else. The simplest way to go is having two groups: one that is administered sugar, and one that is not. 
But wait… The "Sugar group" may become more aroused than the other just because they consume food (without sugar having any special role), or even just because you are paying more attention to them! In medical trials, this is solved with the "Placebo condition". With a placebo, participants are not aware of being in the control condition, and they receive a "fake" treatment that looks just like the real one, but without the active ingredient. In Psychology, the "placebo" can be called the "active control" condition. For example, children in the above example may receive a sugar-free sweetener. Or patients may carry out psychological activities that resemble the treatment but are not intended to have a real therapeutic value (for ethical reasons, these patients must be offered the real treatment afterward!). Albeit suboptimal, "passive control" conditions are also used for convenience (e.g., "waiting lists" for treatments), but as said above this may pose some interpretive issues we must be aware of. GROUP ASSIGNMENT A key feature of experimental research is that the independent variable is randomized across participants. Generally, this means that participants must be randomly assigned to groups. You are conducting the "sugar experiment" and now you have recruited your representative sample. How do you choose which children you administer sugar to and which you do not? The worst mistake you could make would be letting children choose! If you do so, you can no longer establish whether the cause of behavioral variation is sugar per se, rather than anything related to the children's (or their parents') choice. For example, hyperactivity could be associated with a preference for stronger sensations and tasty food, or with more (or less) proactive behaviors, or motivation. These would be called confounding factors. The independent variable is under your exclusive control, and you must use that control! Flip a coin, roll a die, generate random numbers, use even and odd numbers in the order of recruitment, but never let any external confounding factor mess with the group assignment, or your research can no longer be called "experimental"! BETWEEN- VS WITHIN-PARTICIPANT DESIGNS Dividing participants into two (or more) distinct groups is a classical solution. However, you could even assign the same participants both to the experimental (e.g., sugar) and to the control (e.g., no-sugar) condition. Of course, at different moments, for example on two different days. To rule out any confounding factor, you should counterbalance: you administer sugar to half of the participants on the first day and no-sugar on the second day, while you do the opposite to the other half of the participants. If you still prefer to have a sugar VS a no-sugar group, you are running a between-participant design. That is, the independent variable varies between different participants (as each is either in one condition or in the other). If you administer both the experimental and the control conditions to all participants, you are running a within-participant design. What is best? If you do not have any strong reason to avoid it, a within-participant design is preferable. Why? Because it enhances statistical power! You need fewer participants in a within- than in a between-participant design. Not only do you halve the number of participants needed… but you also get more robust results. 
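As an illustration only (not part of the course materials; the participant identifiers are hypothetical), random group assignment for a between-participant design and a counterbalanced order of conditions for a within-participant design could be generated like this:

```python
# A minimal sketch, assuming hypothetical participant IDs: random assignment
# for a between-participant design, and counterbalancing for a within design.
import random

random.seed(42)  # fixed seed only to make the example reproducible
participants = [f"child_{i:02d}" for i in range(1, 21)]

# Between-participant design: shuffle, then split into two equal groups,
# so nothing about the children themselves determines who gets sugar.
shuffled = participants[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
groups = {p: "sugar" for p in shuffled[:half]}
groups.update({p: "no_sugar" for p in shuffled[half:]})

# Within-participant design: every child gets both conditions on two days,
# with the order counterbalanced (half sugar-first, half no-sugar-first).
random.shuffle(shuffled)
order = {p: ("sugar", "no_sugar") for p in shuffled[:half]}
order.update({p: ("no_sugar", "sugar") for p in shuffled[half:]})

print(groups["child_01"])  # group in the between-participant version
print(order["child_01"])   # (day 1, day 2) order in the within-participant version
```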
BETWEEN- VS WITHIN-PARTICIPANT DESIGNS
Dividing participants into two (or more) distinct groups is a classical solution. However, you could even assign the same participants both to the experimental (e.g., sugar) and to the control (e.g., no-sugar) condition, of course at different moments, for example on two different days. To rule out any confounding factor, you should counterbalance: you administer sugar to half of the participants on the first day and no sugar on the second day, while you do the opposite with the other half of the participants. If you still prefer to have a sugar vs a no-sugar group, you are running a between-participant design. That is, the independent variable varies between different participants (as each is either in one condition or in the other). If you administer both the experimental and the control conditions to all participants, you are running a within-participant design. What is best? If you do not have any strong reason to avoid it, a within-participant design is preferable. Why? Because it enhances statistical power! You need fewer participants in a within- than in a between-participant design. Not only do you halve the number of participants needed… but you also get more robust results. This is because, in a within-participant design, you know exactly the individual “baseline” of everyone, so everyone can be compared against themselves, which helps reveal even subtle differences.

STATISTICAL ANALYSIS
Even if you leave the independent variable untouched, there will be some variability in the dependent variable. There is always variability due to individual differences, contingent factors, even just measurement error! The issue is whether the variability you observe is more likely due to your manipulation of the independent variable or to mere chance. Settling the above issue is arguably the most famous goal of statistical analysis in Psychology, although not the only one. For example, estimating the magnitude of the effects and their variability across individuals is equally important. In the “sugar experiment”, you may observe that the hyperactivity level is slightly higher in the sugar than in the no-sugar condition. But is it enough? Statistical analysis helps you establish whether your result can be reliably attributed to your manipulation, or whether it is most probably accidental. A famous index is the “p-value”: it tells you the probability of obtaining an effect of (at least) the magnitude you observed in your sample if the “null hypothesis” were true (i.e., if the effect you hypothesized did not exist at all in the population). A conventional cut-off is a probability (p-value) of less than 5% (i.e., p < .05), which is generally interpreted as sufficient to “reject the null hypothesis”. A parallel and more direct approach is estimating the effects with their uncertainty (see Figure 2), with the goal of minimizing the “confidence intervals” (i.e., bounds of uncertainty).

POWER ANALYSIS
Statistical analysis should begin before carrying out the experiment! Researchers in psychology are becoming increasingly aware of the importance of “power/design analysis”, which means examining a priori whether you are putting yourself in a condition to reliably address your research question, before starting any data collection. EXAMPLE - You suspect that girls are slightly (but only slightly!) better than boys, on average, in verbal abilities (which is true, indeed). Do you think that you could reliably test this hypothesis by comparing only 3 girls vs 3 boys on verbal abilities? Certainly not! But beware... your statistical analysis may occasionally suggest that girls are better than boys on verbal abilities even with only 3 vs 3 cases! This would be accidental, and it should not mislead you into believing that the real effect is huge (just because you observed it with such a small sample), because it is obviously just a "false positive result" that future researchers will fail to replicate. With only 3 vs 3 you could barely test very large effects such as boys being taller than girls (in fact, not even that!). For the example above, you will probably need something like 100 girls vs 100 boys! (The precise number is determined via statistical calculation, but intuitively: the smaller the plausible effect, the larger the sample you will need.) Replicability and power analysis have become increasingly important topics in the scientific debate over the last 15 years. Power analysis generally focuses on how many participants you will need to reliably detect an effect that might be small, medium, or large (again: the smaller the effect, the more participants you will need). However, other parameters matter too, for example the reliability of your measures and the number of repeated measurements per participant in within-participant designs.
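To make both ideas concrete, here is a minimal sketch with invented data, assuming the scipy and statsmodels libraries are available; the hyperactivity scores and the effect size used for the power calculation are purely illustrative.

```python
# Minimal sketch with invented data: a p-value for the "sugar experiment"
# and an a priori power analysis for the girls-vs-boys example.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)

# Hypothetical hyperactivity scores for two groups of 30 children each.
sugar = rng.normal(loc=52, scale=10, size=30)     # sugar group
control = rng.normal(loc=50, scale=10, size=30)   # no-sugar (control) group

t, p = stats.ttest_ind(sugar, control)
print(f"t = {t:.2f}, p = {p:.3f}")   # p < .05 would conventionally "reject the null hypothesis"

# A priori power analysis: participants per group needed to detect a smallish
# standardized effect (here d = 0.4, an illustrative value) with 80% power at alpha = .05.
n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05, power=0.80)
print(f"Participants needed per group: {np.ceil(n_per_group):.0f}")
```

With these illustrative settings the power calculation returns roughly 100 participants per group, in line with the intuition above; assuming a different plausible effect size would change that number.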
lesson 3

QUESTIONS THAT WE CANNOT INVESTIGATE EXPERIMENTALLY
Administering sugar (or not) to children might be under our control. To address many other very important research questions, however, we must deal with variables that we cannot control. For example: Is damage to the left temporal lobe of the brain followed by an acquired language deficit? Is intelligence associated with depression? Is childhood abuse associated with more psychopathology later in life? Is darker hair associated with more conscientiousness? (yes, absurd, but still a question). Do optimists achieve more than pessimists in life? Some of these variables simply cannot be manipulated, or it would be unethical to do so. So, do we give up experimental research? In brief: yes, we do. But we do not give up science! The alternatives are correlational and quasi-experimental research. In these alternatives, we do not directly manipulate the independent variable. We simply observe variations as they naturally occur. The consequence is that we can no longer establish strong cause-effect links, but only co-variations. Apart from that, several aspects shown in the previous unit (operationalization of the variables, sampling and generalizability, use of control conditions, statistical analyses, power analysis) apply to non-experimental research as well, virtually in the same way.

QUASI-EXPERIMENTAL AND CORRELATIONAL RESEARCH
Quasi-experimental research mimics experimental research but without random assignment of participants to groups. For example, we may want to see whether people with a specific clinical condition react differently from others to emotionally charged stimuli. Correlational research just examines associations among variables. For example, it examines whether test anxiety is negatively associated with academic performance (perhaps controlling for other relevant “covariates” such as general intelligence or socio-economic status). Quasi-experimental and correlational research are ultimately alike, because they do not allow you to establish cause-effect links with absolute certainty. For example, you may see that people with a clinical condition react differently from others to emotional stimuli, but you do not know precisely why: it may be due to genetic traits associated with the condition, or to the peculiar experiences that those people have gone through in their lives because of the condition itself, or to something else. Any conclusion about causality is based on assumptions and deductions, and it is ultimately tentative. WARNING: In my experience, students tend to believe that non-experimental research is less rigorous or even “unscientific” compared to experimental research. It is not. Quasi-experimental and correlational studies are scientific. They can and must be as rigorous as experimental studies. They just have specific limitations that you must be aware of when interpreting the results, specifically in the determination of causality.

CORRELATION COEFFICIENT
Correlational studies might employ complex analytical strategies such as multiple linear regressions, generalized and mixed-effects linear models, and other sophisticated techniques of statistical modelling. Correlation, however, is generally the basis.
Correlation coefficients quantify the strength of the covariation between two variables on a standardized scale ranging between -1 and +1. Negative correlations indicate that higher values in one variable tend to be associated with lower values in the other (for example, depression and social interactions). Positive correlations indicate that higher values in one variable tend to be associated with higher values in the other (for example, age and height of children). Finally, correlations around zero indicate that the values of one variable provide no information about the values of the other (for example, reading speed and introversion). In fact, correlations (either positive or negative) are generally modest in Psychology, but this largely depends on the variables involved. For example, correlations involving personality traits rarely exceed ± 0.20 in the overall population, while correlations among pairs of variables regarding intelligence or cognitive performance may easily reach ± 0.50.

THE CORRELATION COEFFICIENT
Here, again, we have a bit of statistics. I have found that asking the following three questions (in order) may help students understand and speculate about plausible correlation coefficients between variables:
1. Is it plausible to hypothesize any association between the two variables, or are they likely to be totally unrelated? In the latter case, the correlation is zero.
2. If there is an association, is it positive or negative (see the previous slide)?
3. How strong is it? That is, how frequent are the “exceptions” to the association? If they are frequent (generally the case in Psychology), the correlation is weak, thus different from zero but not far from it. If exceptions are very rare, the correlation is very strong, thus closer to -1 or +1.

CORRELATION IS NOT CAUSATION
Non-experimental research cannot establish cause-effect links empirically, but only through deduction and theory. This is very important, because you may often see freak correlations (or even plausible ones) and be tempted to conclude that “X causes Y”, while there is no causality whatsoever. Funny (but real) examples are the number of diagnoses of autism strongly correlating with the sales of organic food, the sales of ice cream positively correlating with murders in New York, or the poverty of a country correlating with average penis size. X correlating with Y might just be accidental, but often it is “real”. The fact that there is a real covariation between two variables, however, does not tell you what causes what. There are several different possibilities: X causes Y; the other way around, Y causes X; there is a third unobserved variable Z that causes both X and Y; X causes Z, which in turn causes Y (so there is an “indirect effect” of X on Y); or the correlation observed in the sample is a freak observation and it does not represent what happens in the population (so the result is simply not replicable). In all probability, autism correlating with organic food sales is simply due to repeated observations over time: year after year, autism diagnoses have increased (due to the increased awareness of the problem) and organic food sales have increased too (people increasingly prefer to eat food considered healthy). So, this is the case in which we have a third variable Z (time) that causes both X (number of diagnoses being made) and Y (sales of organic food).
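The "third variable" scenario can be demonstrated with a short simulation; the sketch below uses invented data and assumes numpy and scipy are available. X never influences Y directly, yet the two correlate because both depend on Z.

```python
# Minimal sketch with simulated data: computing Pearson's r, and showing how a
# third variable Z can produce a correlation between X and Y without any direct link.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500

z = rng.normal(size=n)                       # the hidden third variable (e.g., "time")
x = 0.8 * z + rng.normal(scale=0.6, size=n)  # X depends only on Z, plus noise
y = 0.8 * z + rng.normal(scale=0.6, size=n)  # Y also depends only on Z, plus noise

r_xy, p_xy = stats.pearsonr(x, y)
print(f"r(X, Y) = {r_xy:.2f}")   # clearly positive, even though X never influences Y

# "Partialling out" Z (correlating the residuals of X and Y after removing Z)
# makes the spurious association largely disappear.
x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
r_partial, _ = stats.pearsonr(x_resid, y_resid)
print(f"r(X, Y | Z) = {r_partial:.2f}")   # close to zero
```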
ICE CREAM AND MURDERS
This is real. Over long enough periods of time, researchers have observed that ice-cream sales positively correlated with murders in New York. Does consuming ice cream increase your tendency to murder others? Or does violence stimulate ice-cream consumption? Neither. The third, hidden variable is the weather, as it varies across the year. As explained by the NYT, murders increase in the summer months, and in general when the weather is hotter, as people hit the streets and there are more social interactions. Hotter weather, at the same time, also drives more ice-cream sales. The figure shows the plausible underlying “model” that explains the true pattern of causal relationships leading ice-cream sales to correlate with the rate of murders. The goal of a researcher is to try to define plausible and increasingly exhaustive models. But beware! The causality of any relationship must always be assumed, and it will never be definitively demonstrated without experimental research.

SLEEP AND PERFORMANCE AT EXAMS
Now another example. I insist on these examples because thinking about the reasons behind possible associations between variables is of utmost importance in research, as well as for anyone who wants to study a scientific discipline. It may help you design true experimental research to finally establish causal links. Or it may help you improve your future non-experimental research by making you consider new possible factors that may intervene and clarify the relationship of interest. Or it may just help you understand the nature of phenomena. Ultimately, this is the core of scientific thinking. Now, for practice, let’s say that researchers found that the number of hours of sleep of students and their grades at exams correlate positively, albeit not strongly (let’s say only r = +0.25). Let’s also say that it is not accidental, because it was found in a very large sample (N > 1,000). How could you explain it? Here I suggest a few plausible (and not necessarily mutually exclusive) explanations, but you can try to think of other alternative hypotheses: better sleep improves your memory and concentration, which in turn improve your performance; students who face easy exams are more relaxed, so they sleep better (they do not have to study so hard), and they get better grades, but just because their exams are easy; or anxious students tend to sleep badly, and their anxiety also tends to impair their performance at exams, so anxiety could be a third hidden variable affecting both sleep and grades simultaneously.

week 3

SENSATIONS VS PERCEPTION
The ability to detect and interpret events happening around us enables us to respond appropriately to stimuli (Gibson & Pick, 2000). In this Week, we will explore sensation - the awareness resulting from the stimulation of a sense organ - and perception - the organisation and interpretation of these sensations. At the psychological level, sensations are basic, raw experiences associated with stimuli, while perception involves integrating and meaningfully interpreting these sensory experiences.
Biologically, sensory processes involve the sense organs and their neural pathways, which are responsible for the initial acquisition of stimulus information. Perceptual processes involve higher cortical levels, which are more related to assigning meaning. Sensation and perception work together seamlessly to allow us to experience the world through our eyes, ears, nose, tongue, and skin. They also help us combine new information from the environment with our existing knowledge to make judgments and choose appropriate behaviors. The study of sensation and perception is crucial for our daily lives because the insights generated by psychologists are widely applied to benefit many people. Psychologists collaborate closely with mechanical and electrical engineers, defense and military experts, as well as clinical, health, and sports psychologists to apply this knowledge in their practices. (Sensation and Perception, by Adam John Privitera - Chemeketa Community College; We Experience Our World through Sensation)

BOTTOM-UP AND TOP-DOWN PROCESSING
Bottom-up processing is also known as data-driven processing because perception begins with the stimulus itself. Processing is carried out in one direction from the retina to the visual cortex, with each successive stage in the visual pathway carrying out ever more complex analysis of the input. Top-down processing refers to the use of contextual information in pattern recognition. For example, understanding difficult handwriting is easier when reading complete sentences than when reading single, isolated words. This is because the meaning of the surrounding words provides a context to aid understanding. To delve deeper into this topic, you are strongly encouraged to watch the following YouTube video: Top Down Processing vs Bottom Up Processing (Examples!)

THRESHOLD SENSITIVITY
If we consider visual perception and light detection, it is intuitive that the more intense a stimulus is, the more strongly it will affect the relevant sense organ: a high-amplitude light will affect the visual system more than a dimmer light; a high-volume sound will affect the auditory system more than a soft sound, and so on. Sensory psychologists have long sought to quantify the relation between physical stimulus intensity and the resulting sensation magnitude.
Absolute thresholds: detecting minimum intensities
A basic way of assessing the sensitivity of a sensory modality is to determine the absolute threshold: the minimum magnitude of a stimulus that can be reliably discriminated from no stimulus at all (for example, the weakest light that can be reliably discriminated from darkness). One of the most striking aspects of our sensory modalities is that they are extremely sensitive to the presence of, or a change in, an object or event. By definition, stimuli that are below one’s absolute threshold will be detected less than 50 percent of the time.
Difference thresholds: detecting changes in intensity
Measuring an absolute threshold entails determining by how much stimulus intensity must be raised from zero in order to be distinguishable from zero. More generally, we can ask: by how much must stimulus intensity be raised from some arbitrary level (called a standard) in order for the new, higher level to be distinguishable from the base level? This is a measurement of change detection. Consider Figure 2, which represents a change-detection study. The standard stimulus is presented first and it is always the same.
The comparison stimulus comes after the standard, and participants are instructed to judge whether its colour is darker or lighter. In some cases the difference is evident; in other cases it is more subtle. What is being measured is the difference threshold, or just noticeable difference (JND): the minimum difference in stimulus magnitude necessary to tell two stimuli apart. Performance is measured in terms of accuracy and reaction times: when the stimuli are similar, accuracy will be lower and reaction times will be higher.

WEBER-FECHNER LAW
Research on thresholds and discrimination was first conducted about 150 years ago by two German scientists: the physiologist Ernst Heinrich Weber and the physicist Gustav Fechner. Their key discovery was that the larger the value of the standard stimulus, the less sensitive the sensory system becomes to changes in intensity. Specifically, they found that the increase in intensity needed for a change to be noticed is proportional to the intensity of the standard stimulus. For example, if a room has 25 lit candles and you can just detect the addition of two more candles (an 8% increase), then in a room with 100 candles it would require an additional 8 candles (also 8%) for the change to be noticed. This proportional relationship is known as the Weber-Fechner law, with the constant of proportionality (8% in this case) called the Weber fraction. A classical method to study light intensity is the bisection task, in which participants are first exposed to two standards (learning phase); then new stimuli are presented and participants are asked to judge whether each new stimulus is more similar to one or the other of the standards previously learned (testing phase). Consider the following figure. The two standards are the lighter circle on the left (standard light) and the darker circle on the right (standard dark). In the testing phase, participants are exposed to new circles (one at a time) between the two standards and instructed to judge whether the one presented is more similar to the standard light or to the standard dark. Figure 3: We have plotted the percentage of times in which each comparison stimulus was judged to be ‘brighter’ than the standard. To determine the JND, two points are estimated, one at 75% and the other at 25% on the ‘percent brighter’ axis. Psychologists have agreed that half of this distance in stimulus intensity units will be considered the just noticeable difference. If an individual’s sensitivity to change is high, meaning that he or she can notice tiny differences between stimuli, the estimated value of the JND will be small. On the other hand, if sensitivity is not as high, the estimated JNDs will be larger. The Weber-Fechner law posits that the larger the value of the standard stimulus, the less sensitive the sensory system is to changes in intensity. Under a wide range of circumstances, the relation is even more precise: the intensity by which the standard must be increased for the change to be noticed is proportional to the intensity of the standard.
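A small worked example of this proportional rule, using the candle numbers from the text (a Weber fraction of 8%); the function name below is made up for illustration.

```python
# A small worked sketch of the proportional rule described above,
# using the candle numbers from the text (Weber fraction k = 0.08).
def just_noticeable_increase(standard_intensity, weber_fraction=0.08):
    """Return the increase needed for a change to be just noticeable (delta_I = k * I)."""
    return weber_fraction * standard_intensity

for candles in (25, 100, 400):
    print(f"{candles} candles -> about {just_noticeable_increase(candles):.0f} more candles needed")
# 25 -> 2, 100 -> 8, 400 -> 32: the absolute change grows, but the proportion stays constant.
```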
WAVES AND WAVELENGTHS
Visual and auditory stimuli both manifest as waves. Despite their different compositions, waveforms share characteristics crucial to our visual and auditory perceptions. In this section, we will explore the physical properties of these waves and the perceptual experiences they produce.
Amplitude and wavelength
Two physical characteristics of a wave are amplitude and wavelength (Figure 4). The amplitude is the height of a wave measured from its highest point (peak or crest) to its lowest point (trough). Wavelength is the distance between successive peaks of a wave. Wavelength is inversely related to the frequency of a waveform, which is the number of waves passing a given point within a specific time period, typically expressed in hertz (Hz), or cycles per second. Longer wavelengths correspond to lower frequencies, while shorter wavelengths correspond to higher frequencies.
Light waves
The visible spectrum is the part of the larger electromagnetic spectrum that we can see. As shown in Figure 5, the electromagnetic spectrum includes all types of electromagnetic radiation present in our environment, such as gamma rays, x-rays, ultraviolet light, visible light, infrared light, microwaves, and radio waves. In humans, the visible spectrum corresponds to wavelengths ranging from 380 to 740 nm, a very short distance, since a nanometer (nm) is one billionth of a meter. Other species can detect different parts of the electromagnetic spectrum. For example, honeybees can see ultraviolet light, and some snakes can detect infrared radiation in addition to traditional visual light cues. In humans, light wavelength is linked to the perception of color. Within the visible spectrum, longer wavelengths are perceived as red, intermediate wavelengths as green, and shorter wavelengths as blue and violet. A helpful mnemonic to remember this is ROYGBIV: red, orange, yellow, green, blue, indigo, violet. The amplitude of light waves relates to our perception of brightness or intensity, with larger amplitudes appearing brighter.
Sound waves
Like light waves, the physical properties of sound waves are linked to different aspects of our sound perception. The frequency of a sound wave determines the pitch we perceive. High-frequency sound waves are perceived as high-pitched sounds, while low-frequency sound waves are perceived as low-pitched sounds. Humans can hear sound frequencies ranging from 20 to 20,000 Hz, with the greatest sensitivity to frequencies in the middle of this range. Similar to the visible spectrum, other species have different audible ranges. For example, chickens can hear sounds between 125 and 2,000 Hz, mice between 1,000 and 91,000 Hz, and beluga whales between 1,000 and 123,000 Hz. Our pet dogs and cats have audible ranges of about 70-45,000 Hz and 45-64,000 Hz, respectively. The loudness of a sound is closely related to the amplitude of the sound wave, with higher amplitudes resulting in louder sounds. Loudness is measured in decibels (dB), a logarithmic unit of sound intensity. A typical conversation is around 60 dB, while a rock concert can reach 120 dB. Sounds at the lower end of our hearing range include a whisper 5 feet away or rustling leaves. Tolerable sounds include those from a window air conditioner, normal conversation, heavy traffic, or a vacuum cleaner. However, sounds between 80 dB and 130 dB, such as those from a food processor, power lawn mower, heavy truck (25 feet away), subway train (20 feet away), live rock music, and a jackhammer, can potentially cause hearing damage. The pain threshold is around 130 dB, equivalent to a jet plane taking off or a revolver firing at close range.
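What "logarithmic" means in practice can be seen with the standard acoustics formula L = 10 * log10(I / I0), which is not given in the text but follows from the definition of the decibel; the sketch below uses the conversation and rock-concert values mentioned above.

```python
# Decibels are logarithmic: L = 10 * log10(I / I0), where I0 is the reference
# intensity at the threshold of hearing (standard acoustics, not from the course text).
import math

def intensity_ratio_from_db(level_db):
    """How many times more intense than the reference a sound at `level_db` is."""
    return 10 ** (level_db / 10)

conversation = 60   # dB, typical conversation (value from the text)
rock_concert = 120  # dB, rock concert (value from the text)

# A 60 dB difference corresponds to a million-fold difference in physical intensity.
print(intensity_ratio_from_db(rock_concert) / intensity_ratio_from_db(conversation))  # 1e6
```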
SIGNAL DETECTION THEORY
At first glance, it may seem that a sensory system's job is straightforward: if something important, like a malignant tumor in a lung, is present, it should register its presence through the sensory information provided so the observer can take appropriate action, such as considering potential treatments. In reality, the task is much more complex. Any communications engineer will tell you that information consists of both signal and noise. The term 'noise' in common language often refers to unwanted auditory input (e.g., "There’s a lot of unpleasant noise coming from that party across the street!"). However, in scientific terms, 'signal' refers to the important, relevant part of the information, while 'noise' refers to the unimportant, irrelevant part. Noise is an inherent part of any kind of information. In any sensory modality, the challenge is to distinguish the desired signal from the noise that can obscure or disguise it. This is where signal detection theory (SDT) comes into play. SDT provides a framework for understanding how decisions are made under conditions of uncertainty. It posits that the ability to discern between signal and noise is influenced by both the sensitivity of the sensory system and the decision criterion of the observer. Sensitivity refers to how well the sensory system can detect the presence of a signal amidst noise. This is often measured by the hit rate (correctly identifying the presence of the signal) and the false alarm rate (incorrectly identifying noise as the signal). A highly sensitive system has a high hit rate and a low false alarm rate. The decision criterion is the threshold at which the observer decides whether a signal is present. This criterion can be adjusted based on the consequences of different types of errors (misses versus false alarms). For example, in medical diagnoses, a lower threshold may be set to reduce the chance of missing a malignant tumor, even if it increases the likelihood of false alarms. Signal detection theory thus highlights the dynamic interplay between the sensory system's sensitivity and the observer's decision-making process, illustrating the complexity behind what might initially seem like a simple task of detecting important information.

METHODS
In psychophysics, experiments seek to determine whether the subject can detect a stimulus, identify it, differentiate between it and another stimulus, or describe the magnitude or nature of this difference.

CLASSICAL METHODS
The Method of Constant Stimuli
The method of constant stimuli involves presenting the observer with a set of stimuli with varying intensities in a random order. Each stimulus is presented multiple times, and the observer reports whether they detect it or not.
Process: A range of stimuli with different intensities is selected, including some that are definitely detectable, some that are definitely not detectable, and some that are close to the threshold of detection. These stimuli are presented to the observer in a random sequence to prevent the observer from anticipating the intensity of the next stimulus. The observer indicates whether or not they detected each stimulus.
Advantages: Provides accurate and reliable threshold estimates. Minimizes biases since stimuli are presented randomly.
Disadvantages: Time-consuming and requires many trials. Can be tedious for the observer.
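A minimal simulation of the method of constant stimuli may help; the observer model, intensities, and number of repetitions below are invented, and the threshold is read off as the intensity detected on about 50% of trials, consistent with the definition given earlier.

```python
# Minimal simulation (invented observer) of the method of constant stimuli:
# several fixed intensities, presented many times in random order, and the
# absolute threshold estimated as the intensity detected on about 50% of trials.
import random

random.seed(3)

intensities = [0.2, 0.4, 0.6, 0.8, 1.0]   # from clearly undetectable to clearly detectable
n_repeats = 40

def observer_detects(intensity, true_threshold=0.55, noise_sd=0.15):
    # Hypothetical observer: detection occurs when intensity exceeds a noisy criterion.
    return intensity > random.gauss(true_threshold, noise_sd)

trials = intensities * n_repeats
random.shuffle(trials)                     # random order prevents anticipation

detections = {i: 0 for i in intensities}
for intensity in trials:
    if observer_detects(intensity):
        detections[intensity] += 1

for intensity in intensities:
    proportion = detections[intensity] / n_repeats
    print(f"intensity {intensity:.1f}: detected on {proportion:.0%} of trials")
# The estimated absolute threshold lies where the proportion crosses 50%
# (with these invented settings, somewhere between 0.4 and 0.6).
```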
Method of Limits
The method of limits involves gradually increasing or decreasing the intensity of a stimulus until the observer reports a change in their perception, such as the point at which they can just detect or no longer detect the stimulus.
Ascending and Descending Series: Stimuli are presented in either ascending order (from undetectable to detectable) or descending order (from detectable to undetectable). In the ascending series, the intensity is increased step by step until the observer detects the stimulus. In the descending series, the intensity is decreased step by step until the observer no longer detects the stimulus. Multiple series are conducted, alternating between ascending and descending, to average out the results and determine the threshold.
Advantages: Efficient and requires fewer trials than the method of constant stimuli. Provides a quick estimate of the threshold.
Disadvantages: Can be influenced by the observer’s anticipation or adaptation to the stimulus. The threshold may be biased due to the order of presentation.
Method of Adjustment
The method of adjustment allows the observer to control the intensity of the stimulus themselves and adjust it until it reaches the threshold level where they can just detect it or match it to a reference stimulus.
Process: The observer is given control over the stimulus intensity and adjusts it until they can just detect it or match it to a standard stimulus. The adjustment is repeated multiple times to obtain an average threshold. The average intensity at which the observer can just detect the stimulus, or the point at which it matches the reference stimulus, is taken as the threshold.
Advantages: Fast and intuitive for the observer. Can be more engaging and less tedious compared to other methods.
Disadvantages: Less precise and reliable than the method of constant stimuli or the method of limits. Subject to observer biases, such as the tendency to overshoot or undershoot the threshold.
Each of these methods has its own strengths and weaknesses, and the choice of method depends on the specific requirements of the study, such as the desired balance between accuracy and efficiency.

ADAPTIVE METHODS
Staircase Procedures: Up-and-Down Designs
Staircase procedures are adaptive methods used to estimate sensory thresholds efficiently. The intensity of the stimulus is adjusted based on the observer's responses, typically in a stepwise manner.
Process: Start with a stimulus intensity that is either detectable or undetectable by the observer. If the observer detects the stimulus, the intensity is decreased in the next trial. If the observer does not detect the stimulus, the intensity is increased. This process continues, and the points where the observer's response changes (from detection to non-detection or vice versa) are called reversals. The staircase typically converges around the threshold level. The average of the reversal points is taken as the threshold estimate.
Advantages: Efficient in converging to the threshold level. Requires fewer trials compared to fixed-step methods.
Disadvantages: Can be affected by the observer's adaptation or fatigue. The choice of step size can influence the precision of the threshold estimate.
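The logic of a simple 1-up/1-down staircase can be sketched in a few lines of Python; the simulated observer, step size, and number of reversals below are invented for illustration.

```python
# Minimal simulation (invented observer and step size) of a simple 1-up/1-down staircase.
import random

random.seed(11)

def observer_detects(intensity, true_threshold=0.5, noise_sd=0.05):
    # Hypothetical observer: detection when intensity exceeds a noisy internal criterion.
    return intensity > random.gauss(true_threshold, noise_sd)

def run_staircase(start=1.0, step=0.05, n_reversals=8):
    intensity, last_response, reversals = start, None, []
    while len(reversals) < n_reversals:
        detected = observer_detects(intensity)
        if last_response is not None and detected != last_response:
            reversals.append(intensity)          # response changed: record a reversal
        last_response = detected
        # 1-up/1-down rule: step down after a detection, step up after a miss.
        intensity = max(intensity - step, 0.0) if detected else intensity + step
    return sum(reversals) / len(reversals)       # average of reversal points

print(f"Estimated threshold: {run_staircase():.3f}")  # converges near the true value of 0.5
```

A 1-up/1-down rule of this kind converges around the intensity detected on about half of the trials; more elaborate rules (for example, requiring several consecutive detections before stepping down) target other points of the psychometric function.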
Bayesian and Maximum-Likelihood Procedures
These procedures are sophisticated adaptive methods that use statistical models to estimate the threshold. They adjust the stimulus intensity based on the probability of responses given the previous data.
Process: Start with an initial guess of the threshold and a statistical model (prior distribution) describing the expected distribution of thresholds. After each trial, update the model based on the observer's response. Bayesian methods use Bayes' theorem to update the probability distribution of the threshold, while maximum-likelihood methods adjust the stimulus to maximize the likelihood of the observed responses. The procedure continues until a convergence criterion is met, such as a predefined number of trials or a specific confidence level in the threshold estimate. The final threshold estimate is derived from the updated model (e.g., the mean or mode of the posterior distribution).
Advantages: Highly efficient and can provide precise threshold estimates with fewer trials. Can incorporate prior knowledge about the threshold distribution.
Disadvantages: Computationally more complex and requires specialized software. The accuracy of the estimate depends on the appropriateness of the chosen model and prior distribution.
Magnitude Estimation
Magnitude estimation involves having observers assign numerical values to the perceived intensity of stimuli, allowing for the measurement of perceived intensity on a ratio scale.
Process: Present a standard stimulus with an assigned arbitrary numerical value. Present various test stimuli, and the observer assigns numerical values to these stimuli relative to the standard stimulus. The assigned values reflect the perceived intensity ratio compared to the standard stimulus. For example, if a stimulus is perceived to be twice as intense as the standard, the observer might assign it a value twice that of the standard. The collected numerical values are analyzed to understand the relationship between stimulus intensity and perceived intensity.
Advantages: Allows for direct measurement of perceived intensity on a ratio scale. Can provide detailed information about the perception of different stimulus intensities.
Disadvantages: Subject to individual differences in how numerical values are assigned. Can be influenced by the observer's subjective interpretation of the task.

VISION
Light is a form of electromagnetic energy that represents the physical stimulus for vision. Electromagnetic energy is best conceptualized as traveling in waves, with wavelengths (the distance from one crest of a wave to the next) varying tremendously from the shortest cosmic rays (4 trillionths of a centimeter) to the longest radio waves (several kilometers). Our eyes are sensitive to only a tiny portion of this continuum: wavelengths of approximately 400 to 700 nanometers, where a nanometer is a billionth of a meter. (For this Week, you are highly invited to study these further materials: Vision, by Simona Buetti and Alejandro Lleras - University of Illinois at Urbana-Champaign; Seeing)

VISUAL SYSTEM
The human visual system consists of the eyes, several parts of the brain, and the pathways connecting them. Figure 1 illustrates the eye and the first stage of vision. The eye contains two systems: one for forming the image and the other for transducing the image into electrical impulses. Before learning about the eye and its components, it is important to remember that the right half of the visual world is initially processed by the left side of the brain, and vice-versa. The retina is a thin layer of tissue at the back of the eyeball. The image-forming system itself consists of the cornea, the pupil, and the lens.
The retina also contains a network of other neurons, along with support cells and blood vessels. When we want to see the details of an object, we routinely move our eyes so that the object is projected onto a small region at the centre of the retina called the fovea. The reason we do this has to do with the distribution of receptors across the retina. In the fovea, the receptors are plentiful and closely packed; outside the fovea, on the periphery of the retina, there are fewer receptors. The high-density fovea is therefore the highest-resolution region of the retina, the part that is best at seeing details. The cornea is the transparent front surface of the eye: light enters here, and rays are bent inward by it to begin the formation of the image. The lens completes the process of focusing the light on the retina. To focus on objects at different distances, the lens changes shape. It becomes more spherical for near objects and flatter for far ones. The pupil, the third component of the image-forming system, is a circular opening between the cornea and the lens whose diameter varies in response to the level of light present. It is largest in dim light and smallest in bright light, thereby helping to ensure that enough light passes through the lens to maintain image quality at different light levels. All of these components focus the image on the retina. There the transduction system takes over. This system begins with various types of neural receptors which are spread over the retina, somewhat analogously to the way in which photodetectors are spread over the imaging surface of a digital camera. There are two types of receptor cells, rods and cones, so called because of their distinctive shapes. The two kinds of receptors are specialized for different purposes. Rods are specialized for seeing at night; they operate at low intensities and lead to low-resolution, colorless sensations. Cones are specialized for seeing during the day; they respond to high intensities and result in high-resolution sensations that include color. (The eye and its components)
Please remember: Vision begins to take place at the retina, where light energy is transduced into neural energy. There are two types of photoreceptor cells for vision, the rods and the cones. Rods are photosensitive cells of the retina that are most active in low levels of illumination and do not respond differentially to various wavelengths of light. Cones are photosensitive cells of the retina that operate best at high levels of illumination and that are responsible for color vision. The fovea is a small area of the retina where there are few layers of cells between the entering light and the cone cells that pack the area. There are no rods in the fovea, only cones. Visual acuity is best at the fovea. The blind spot is where the nerve impulses from the rods and cones leave the eye.

MYOPIA AND HYPEROPIA
In some eyes, the lens does not become flat enough to bring far objects into focus, although it focuses well on near objects; people with eyes of this type are said to be myopic (nearsighted). In other eyes, the lens does not become spherical enough to focus on near objects, although it focuses well on far objects; people with eyes of this type are said to be hyperopic (farsighted). As otherwise normal people get older (into their 40s), the lens loses much of its ability to change shape or focus at all. Such optical defects can, of course, generally be corrected with eyeglasses or contact lenses.
How does the receptor transduce the light into electrical impulses? The rods and cones contain chemicals, called photopigments, that absorb light. Absorption of light by the photopigments starts a process that eventuates in a neural impulse. Once this transduction step is completed, the electrical impulses must make their way to the brain via connecting neurons. The responses of the rods and cones are first transmitted to bipolar cells and from there to other neurons called ganglion cells. The long axons of the ganglion cells extend out of the eye to form the optic nerve to the brain. At the place where the optic nerve leaves the eye, there are no receptors; we are therefore blind to a stimulus in this region. This is the blind spot. (How to "see" your blind spot.)

SEEING PATTERNS
Visual acuity refers to the eye’s ability to resolve details. There are several ways of measuring visual acuity, but the most common measure is the familiar eye chart found in optometrists’ offices. This chart was devised by Herman Snellen in 1862. Snellen acuity is measured relative to a viewer who does not need to wear glasses. Thus, an acuity of 20/20 indicates that the viewer is able to see at a distance of 20 feet what a viewer with normal vision can see at that distance.
