Neuroscience and Cognitive Science
IU International University of Applied Sciences
Summary
This document introduces the topics of neuroscience and cognitive science, including study goals and an overview of the human brain and its functions. It details concepts such as neural networks, brain anatomy, and cognitive processes.
UNIT 3 NEUROSCIENCE AND COGNITIVE SCIENCE

STUDY GOALS

On completion of this unit, you will have learned …
– how neuroscience describes the anatomical and physiological composition of the brain.
– how cognitive science unites different scientific disciplines in the search for models of cognitive processes.
– some of the most salient relations and connections between neuroscience, cognitive science, and artificial intelligence, together with their implications for human and machine intelligence.

3. NEUROSCIENCE AND COGNITIVE SCIENCE

Introduction

The goal of artificial intelligence (AI) can be defined as the mechanical reproduction of intelligent behavior. This evidently poses an enormous scientific and engineering challenge. It should, therefore, come as no surprise that throughout the history of artificial intelligence, researchers and engineers have sought inspiration from the study of natural systems that exhibit the traits and characteristics that artificial intelligence tries to emulate. The only known working examples of such systems can be found in human and animal brains and their associated cognitive functions and abilities. Therefore, the goal of this unit is to familiarize you with the basic tenets of neuroscience and cognitive science, which deal with the study of human and animal nervous systems and the broader scientific endeavor aiming to model and understand cognitive functions, respectively.

3.1 Neuroscience and the Human Brain

As a scientific discipline, neuroscience tries to identify the relevant anatomical structures that form nervous systems and their functional roles. As such, it belongs to the broader field of biology, combining anatomy, physiology, cytology, and the chemical and developmental sciences.

Nervous system: The sum total of all cells in the body concerned with the forwarding and processing of sensory and control signals is referred to as the nervous system.

In the following section, the focus is primarily on the human brain, as based on our current knowledge it constitutes the most complex and most capable brain specimen.

Brain Anatomy and Physiology

Anatomically, the brain is a lump of soft tissue that weighs between 1.2 and 1.4 kg in adults, with considerable variation between individuals. Nevertheless, there is no evidence to suggest that brain size is connected with mental capacity. The outer layer of the brain is a highly wrinkled structure with a large surface area that fits closely within the available cranial volume. On a coarse scale, its constituent parts are the cerebrum, the cerebellum, and the brain stem. The latter functions as an interface or relay station between the brain and the spinal cord, which in turn branches out into the peripheral nervous system. It is the locus of control for basic perseverance and maintenance functions, such as heart-rate regulation, breathing, regulation of body temperature, and the wake-sleep cycle. The cerebellum is a structure adjacent to the brain stem underneath the cerebrum. Its main function is motor control, such as steering movement, upholding balance, and maintaining body posture. The largest fraction of the brain is constituted by the cerebrum. All higher functions, such as the interpretation of sensory input, emotions and reasoning, and speech and language understanding, reside here.

Viewed from above, the cerebrum is split into two halves called hemispheres. An anatomical structure called the corpus callosum forms the connection between these halves, enabling the exchange of signals and communication between the two hemispheres of the brain. The right hemisphere controls the left side of the body, and the left hemisphere controls the right side of the body. Generally, the brain halves are highly symmetrical with respect to their function.
In contrast, one commonly finds broad claims of functional specialization, also called lateralization, of high-level cognitive functions in popular psychology. The notion that logical and analytical thinking resides in the left hemisphere, whereas creativity is situated in the right hemisphere, might serve as an often-encountered example. Such claims are inaccurate and misleading since most reliable evidence for actual lateralization pertains to more low-level perceptual functions. One notable example of a hemisphere asymmetry is given by Broca's and Wernicke's areas, which play an important role in language processing. These brain regions are usually found in the hemisphere opposite to the dominant hand, that is, the brain hemisphere that controls the dominant side of the body.

Lateralization: This refers to functional differentiation across the central body plane, that is, left-right differentiation.

Apart from the hemisphere structure, the brain can be compartmentalized into four main lobes, as indicated in the figure below. This division is based anatomically on the most visibly distinct fissures of the brain surface. Although the vast majority of observable brain functions are based on the complex interaction of many of the brain's constituent parts, one can justifiably attribute a certain functional specialization to these lobes.

Figure 2: The Frontal, Parietal, Occipital, and Temporal Lobes of the Brain
Source: Created on behalf of IU (2019), based on the Mayo Foundation for Medical Education and Research, all rights reserved. Used with permission, 2019.

The frontal lobe is responsible for many cognitive abilities that are commonly referred to as higher mental faculties, such as judgement, planning, problem-solving, intelligence, and self-awareness. It is also involved in complex motor control tasks and speech. The parietal lobe, by contrast, is mostly concerned with the interpretation of sensory input.
As such, it constructs our spatial-visual perception and interprets visual, auditory, and touch signals. The temporal lobe is involved in the understanding of language, the formation of memory, and sequencing and organization. It is also involved in complex vision tasks, such as the recognition of objects and faces. The role of the occipital lobe lies in performing the early stages of visual signal processing and interpretation.

On a cellular level, the human brain is, on average, composed of about 86 billion (0.86 · 10^11) connected nerve cells called neurons, which are responsible for information processing, and an approximately tenfold higher number of glia cells, which are responsible for the protection, nourishment, and structural support of neurons.

Figure 3: A Neuron
Source: Created on behalf of IU (2019), based on Baillot, 2018.

Above is a representation of a human nerve cell or neuron. The cell body is called the soma. Signals from other neurons reach the soma via branched structures called dendrites. The soma then processes the incoming information and produces a corresponding output that is sent down the axon. The length of the axon can be 10 to 1,000 times the diameter of the soma. At its end, it branches off into axon terminals that constitute the points of contact for dendrites of neurons further down the signal flow. The difference between somas and axons manifests itself in the distinction, visible to the naked eye, between the gray and white matter seen in brain cross-sections. Gray matter is formed by the cell bodies, whereas white matter is comprised of axons.

Brain Function Summary

The human brain functions as the regulator of all our bodily functions 24 hours a day, 7 days a week. On a basic level, an interconnected network of neurons controls the functioning of our body's routine needs, such as breathing, blood pressure, and mobility. Communication between the brain and the body takes place along the spinal column.
The brain is responsible for the processing of sensory input in the form of the following modalities:

1. Vision (sight)
2. Audition (hearing)
3. Gustation (taste)
4. Olfaction (smell)
5. Tactition (touch)
6. Thermoception (temperature)
7. Nociception (pain)
8. Equilibrioception (balance)
9. Proprioception (body awareness)

Even more importantly for the subject of this course, the brain is responsible for motivation, that is, the promotion of behaviors that are considered beneficial for the organism, including attention, learning, memory, planning, problem-solving, understanding language, and the ability to form complex ideas. It is important to remember that the brain perceives inputs, processes these inputs based on what it already knows, and then initiates some sort of action. For example, most human brains will recognize the effect of touching a hot plate and react accordingly, at least most of the time.

Figure 4: The Perception of Inputs and the Cognition Process
Source: Created on behalf of IU (2019).

3.2 Cognitive Science

Whereas neuroscience focuses on the study of the anatomy and physiology of nervous systems, cognitive science, by contrast, takes a wider view and examines cognition and cognitive processes in their own right, abstracting from biological actualities to elucidate corresponding functional relationships. Evolutionary and developmental aspects are also addressed.

Cognition: This is the mental process of gaining knowledge and understanding as a result of thinking, experience, and the senses.

The typical cognitive processes studied in cognitive science are as follows:

– behavior
– intelligence
– language
– memory
– perception
– emotion
– reasoning
– learning

Approaches, History, and Methods

Fitting its aspiration to be an encompassing study of the mind, the defining characteristic of cognitive science, as a field of scientific endeavor, is its interdisciplinary approach.
It draws upon knowledge from a diverse set of disciplines:

– philosophy
– psychology
– neuroscience
– linguistics
– anthropology
– artificial intelligence

From the earliest times, humans have been compelled to think about the origins and workings of the mind. Thus, like artificial intelligence, the intellectual history of cognitive science can be traced back to the dawn of philosophy in antiquity. However, current approaches and methods used in cognitive science derive from twentieth-century developments. Driving forces at the time were George Miller's studies on mental representations and the limitations of short-term memory, Noam Chomsky's work on formal grammars and his scathing critique of the psychological paradigm of radical behaviorism, and early efforts in artificial intelligence. However, it was not until 1975 that the term "cognitive science" was coined, and a common understanding of this discipline emerged across the various scientific fields.

The scientific sub-disciplines of cognitive science are as diverse as its methodological approaches. Empirical data in the study of cognitive processes is commonly derived from typical experimental methods used in the various disciplines that concern themselves with the study of the mind. The three main approaches outlined below underlie the majority of empirical findings.

1. Brain imaging: This is a tool commonly used in medicine and neuroscience that enables the tracing of neural activity while the brain is performing complex mental tasks.
2. Behavioral experiments: Frequently used in psychology, behavioral experiments allow us to draw conclusions about the processing of stimuli.
3. Simulation via computational modeling: This technique allows us to verify theoretical ideas about the functional processes involved in mental activities by comparing simulated outcomes with real-world behavioral data.
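To make the third approach concrete, the sketch below fits a simple computational model of forgetting (an exponential decay of recall probability, a classic textbook choice) to behavioral data. Note that the delays, recall fractions, and the single model parameter s are invented for illustration only; a real study would use measured data and a more careful fitting procedure.

```python
import math

def recall_probability(t_hours, s):
    """Exponential forgetting model: predicted recall after t hours,
    with a memory-strength parameter s (larger s = slower forgetting)."""
    return math.exp(-t_hours / s)

# Hypothetical behavioral data: fraction of items recalled after delays.
delays = [1, 9, 24, 48]              # hours (invented values)
observed = [0.60, 0.35, 0.25, 0.20]  # recall fractions (invented values)

def sum_squared_error(s):
    """How far the simulated outcomes deviate from the observed data."""
    return sum((recall_probability(t, s) - o) ** 2
               for t, o in zip(delays, observed))

# Crude grid search over the single free parameter of the model.
best_s = min((s / 10 for s in range(1, 500)), key=sum_squared_error)
print(f"best-fitting s ~= {best_s:.1f} hours, "
      f"SSE = {sum_squared_error(best_s):.4f}")
```

A systematic mismatch between the best-fitting simulation and the data (here, the model decays to zero while the observed recall levels off) is exactly the kind of evidence that prompts researchers to revise their theoretical ideas about the underlying mental process.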
Key Concepts, Influences, and Critique

The representational theory of mind is the prevalent paradigm uniting the majority of work in cognitive science. According to this framework, cognition is achieved by employing computational procedures on mental constructs that can be likened to data structures in computer science. These mental objects or data structures could represent concrete objects in the sense of physical entities or abstractions that pertain to the mental domain alone, such as images, concepts, logical propositions, or analogies. The computational procedures are correspondingly variegated and include deduction, search and matching, and the like.

Cognitive science as a field of research also includes contributions from numerous other specialized disciplines. Through its systemic approach, it also influences the thinking in many associated subject areas. It has thus made relevant contributions to behavioral economics (a newer branch of economics that studies how people actually behave in economic decision-making instead of postulating perfectly rational actors) and the study of cognitive biases and the judgement of risk. However, some of its most noteworthy contributions relate to linguistics, the philosophy of language, and an understanding of the functional roles and interplay of brain structures.

Cognitive bias: This refers to partiality in valuation or judgement that stands in the way of an objective consideration of a given situation. It denotes an often systematic, that is, repeatedly occurring, deviation from rationality.

Despite its marked successes, conventional approaches to cognitive science have also been subject to critique. For example, cognitive science has only recently considered the role of emotion in human thinking and the problem of consciousness. Additionally, as a result of its focus on the individual mind, it has tended to neglect important aspects of cognition, such as its social dimension and issues to do with embodiment and the impact of the physical environment.

3.3 The Relationship Between Neuroscience, Cognitive Science, and Artificial Intelligence

The preceding sections gave an overview of neuroscience and cognitive science. When the subject of this course, artificial intelligence, is considered in relation to these fields, exciting connections begin to emerge.

Biological Neural Networks and the Mind

While the question of whether the brain is the locus of cognition, or just an organ to cool the blood, was debated in Greek antiquity, today we know that the former hypothesis is correct. Not only do we have a wealth of documented cases where specific brain lesions due to accident or illness lead to specific functional impairments, we also know that marked changes in what we would colloquially term the character of a person can result from measurable neurological damage. Thus, without resorting entirely, or at least partly, to metaphysical explanations of the origins of the mind, we now have to accept that the brain is the physical base of mental states. This does not mean that we can readily explain every aspect of the mind or cognition in terms of underlying neurological processes. If this were the case, the broad scope of cognitive science, as outlined above, would be superfluous.

Paraphrasing the work of Siegel (2012), the human mind can be described as a human faculty, an emerging and self-organizing relational process embodied in the human persona. It is also a facility regulating the flow of energy and information, which is complex, open, non-linear, and takes place simultaneously inside and outside the body.
To clarify this definition, the following key terms are defined:

Faculty
"Faculty" refers to all the mental and physical abilities a person is endowed with, which can exhibit considerable variation between individuals.

Self-organizing
"Self-organizing" refers to a process of spontaneous ordering arising from local interactions. For example, clouds in the sky are considered to be self-organizing, as they move with the wind from warm to cold air, store and release moisture, and remain at a particular altitude for a certain period of time.

Emerging
"Emerging" means "to give rise to". To stay with our previous example, that which gives rise to the formation of clouds in the sky is often a heat source on the ground, such as a plowed field's dark soil absorbing heat from the sun. The resulting column of warm air rising towards the sky forms a cloud.

Relational processes
"Relational processes" signify that there is a significant relationship between the human persona and outside objects and processes, not least in the form of other minds.

A sample of specific human faculties represented by the human mind is given below:

– Conscience: This is the human faculty that judges the difference between right and wrong based on an individual's value system.
– Self-awareness: This is the conscious awareness of being and introspection.
– Judgement: This is the ability to consider evidence and other sources of knowledge in order to make decisions.
– Language: This is the ability to use languages to express ideas.
– Imagination: This is the ability to see possibilities beyond what is immediately being perceived.
– Memory: This is the ability to recall coded and stored information in the brain.
– Thinking: This is the faculty to search for possible reasons or causes.
Neuroscience, Cognitive Science, and Artificial Neural Networks

The relationship between our brain and the many manifestations of mental activities still requires further research, which is also true of the relationship between neural processes and their representation in the form of computational models.

Since the beginning of the information technology and computer era, researchers have been fascinated with the prospect of reproducing mental faculties in computational machinery. This process has always been a two-way exchange. On the one hand, computer scientists, and in particular artificial intelligence researchers, have looked into philosophical, psychological, and neurological models of cognitive capabilities as inspiration for their endeavors. On the other hand, researchers in cognitive processes have built and employed computational models to gain insight into otherwise hard-to-test notions about the functioning of the mind or the neural circuitry found in organisms.

One of the most prominent outflows of such research activities is the computational model of neural activity pioneered by Warren McCulloch and Walter Pitts in the 1940s, variants of which are still being used in connectionist machine learning models today. According to such models, a neuron's function can be characterized in the following way: the cell receives input in the form of electrochemical signals from other neurons that are located upstream in the information processing flow. Each incoming connection carries a weight that reflects how strongly the two cells are coupled; the more often two nerve cells are activated together, the greater the upregulation of the connection between them. The neuron takes the sum of all its inputs weighted in this manner and, if the total excitation exceeds some predefined threshold, sends an impulse along the axon, its outgoing connection. This working mechanism is depicted in the figure below based on the following steps:

1. The input is received via input connections that model connection strength via weight parameters w_n.
2. The weighted inputs are summated.
3. The resulting sum S is then subjected to the activation function f(S).
4. Finally, the activation function value is distributed to output connections.

Figure 5: Schematic Depiction of an Artificial Neuron
Source: Created on behalf of IU (2019), based on Knowino, 2010.

Information processing and the learning of input-output associations is, in a limited way, already possible with a single computational unit working according to the schema given above. Nevertheless, the analogy to biological neural systems is generally carried one step further by building networked structures that can be organized in a layered scheme.

Figure 6: Schematic Diagram of a 1-Layer Neural Network
Source: Created on behalf of IU (2019), based on Peixeiro, 2019.

Concerning the flow of information processing through the network, two approaches can be found in the literature: the feed-forward and recurrent approaches.

The feed-forward approach
In this type of network, processing, and thus the flow of information, only proceeds in one direction, from upstream to downstream. Every node in the network receives inputs, does the processing based on its associated weights and transfer function, and passes the signal on to the connected neurons in the next layer without looping. One typically distinguishes three types of layers: (1) the input layer, (2) one or more hidden layers, and (3) the output layer that encodes the response of the network.

The recurrent approach
In recurrent networks, the flow of information follows a directed graph where the succession of nodes along the graph encodes the temporal succession of processing steps performed by the network.
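The four-step weighted-sum-and-activation scheme described above can be sketched in a few lines of Python. The logistic activation function and the concrete weight values below are illustrative choices, not part of the original model; the small two-layer, feed-forward wiring at the end is likewise only a toy example.

```python
import math

def artificial_neuron(inputs, weights, bias=0.0):
    """One computational unit: weight the inputs (step 1), sum them
    (step 2), and apply the activation function f(S) (steps 3 and 4)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # logistic activation, one common choice

# A tiny feed-forward pass: two inputs -> two hidden units -> one output.
x = [0.5, -1.0]
hidden = [artificial_neuron(x, [0.8, 0.2]),
          artificial_neuron(x, [-0.4, 0.9])]
output = artificial_neuron(hidden, [1.0, -1.0])
print(output)  # a value strictly between 0 and 1
```

Replacing the hard threshold of the original McCulloch-Pitts unit with a smooth activation function, as done here, is what allows the weights of such networks to be adjusted by gradient-based learning procedures.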
This temporal aspect and the resulting dynamic behavior of the network make this network class particularly well suited to applications that have a time component, such as the processing of time series, speech, or handwriting recognition. Such networks also commonly contain memory units that can store information about previous states of the network or its constituent parts.

Clearly, the aforementioned approaches to the creation of learning systems have been inspired by theories of neural information processing. However, this relationship is often over-emphasized in popular sources, such as magazine and news articles, and at times even sensationalized beyond what could be considered factually warranted. It is prudent to keep in mind that these artificial neural networks implement highly simplified models of neural activity that abstract away many of the complexities of biological neural activity. Thankfully, deep learning, the dominant paradigm of neurally inspired machine learning models, emphasizes in its name a concrete property of its network models, i.e., depth of layering, over vague allusions to the functioning of biological neural networks.

The Human Brain, Its Artificial Representations, and Computer Hardware

Even without referring to computational schemes that explicitly draw inspiration from the neural architecture of our brains, one often finds comparisons between the complexity of current computing machinery and the brain in the popular science discourse. To this end, the complex mobile chip designs of today have transistor counts on the order of 10^10. Thus, the number of transistors in a modern central processing unit (CPU) already approaches the number of neurons in the human brain.
Since a transistor is the most primitive switching unit imaginable and the representation of the function of a single neuron requires a sizeable number of transistors, a more interesting comparison lies in the juxtaposition of the number of units in the largest artificial neural network models existing today with their biological counterparts. To this end, the largest current artificial nets have unit counts of between 10^6 and 10^7; however, since the 1980s this number has been doubling roughly every 2.4 years. For reference, the number of neurons in humans is about 10^11, in bees around 10^6, and in frogs 10^8.

It has to be noted, however, that reducing general human intelligence to one number and comparing it to a representative number for machine intelligence is not very meaningful (see, for example, Russell and Norvig, 2022). Nor does comparing the neuron count of the human brain to transistor numbers in a CPU, or to unit counts in network models, lead to any profound conclusions about the state of artificial intelligence. It is no more than an interesting metric.

Human and Machine Intelligence

If we look at the history and current state of artificial intelligence, most research and development continues to focus on building systems that try to solve specific tasks, such as playing certain strategy games, identifying objects in images or videos, controlling particular types of robots to achieve a certain goal, or translating written text. Nevertheless, since the beginning of artificial intelligence research, there has been a strong current of thought directed towards the construction of a system that matches or even exceeds human mental capacity in all its diversity. When comparing problem-solving with existing artificial intelligence models and the capabilities of the human mind, many striking differences are easily discernible. In the following, we look at three manifest disparities.
Learning efficiency
While there are current task-specific artificial intelligence models that clearly exceed human ability in a particular application domain, they typically achieve this superior performance by processing vastly more training data than humans are ever able to use. As an example, consider DeepMind's AlphaZero. Various versions of this artificial intelligence system have learned to play the games Go, chess, and shogi at a superhuman level simply by being given the rules of the game and having extensive opportunities for self-play. While learning, these systems have played millions of games against themselves in order to reach their final playing capacity, far more than even the most elite human players manage to play in their entire career. Put another way, the human brain seems to achieve almost as high a performance using much less data.

Generalization and transfer
While the general architecture of the artificial intelligence system mentioned in the previous example was the same regardless of whether the game under consideration was Go, chess, or shogi, in each instance the system was only able to play the particular game it had been trained on. Yet there are numerous examples of human players achieving expert-level proficiency in more than one of these games, often reporting interesting influences and inspiration in their strategic thinking in one game as compared to others.

Imagination
Continuing with this example, master-level human players have noted that artificial intelligence occasionally comes up with moves that are purposeful, yet entirely novel and deeply surprising to human experts. While this could be considered a certain type of creativity, it is still the case that, by and large, imaginativeness has not up until now been a strong point of artificial intelligence systems.
Unsurprisingly, these and many other deficiencies of artificial intelligence systems with respect to the full spectrum of human capabilities have prompted researchers to attempt to close the gaps. Some noteworthy attempts are summarized below.

Transfer learning
The core idea of transfer learning is to take an existing model trained on a particular task and, with a small amount of further training, apply it to a different yet related task. This technique is common in deep learning-based methods for object recognition in images or videos. In this domain, a system that has been trained to detect object "A" is repurposed to detect object "B". This approach works because of the particular way in which objects are represented in such deep network models. The network constructs hierarchical representations of image properties in which early layers detect very general properties, like edges or corners, that are relevant for the recognition of many different object classes.

Meta learning
Meta learning takes one step back from the learning of concrete tasks and is concerned with the problem of learning to learn. To this end, it tries to abstract from individual learning scenarios to find successful common strategies and approaches.

Generative adversarial networks (GAN)
This approach, developed by Ian Goodfellow at the University of Montreal (Goodfellow et al., 2014), constitutes another path by which artificial intelligence is approaching human imagination and creativity. A GAN is composed of two deep, multilayered neural networks that work in opposition to each other, hence the term adversarial. One of these networks tries to generate data from a certain category. Consequently, it is called the generator network. The other network is presented with generated, artificial data as well as real-world data from the same category. The task of the second network is to decide which data is real and which has been generated.
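The structure of this two-network setup can be sketched as follows. This is a deliberately degenerate illustration: the "networks" are single affine and logistic maps on one-dimensional toy data, all parameter values are placeholders, and the actual alternating optimization loop is omitted.

```python
import math
import random

def generator(z, theta):
    """Maps random noise z to a synthetic sample; here just an affine map
    standing in for a deep generator network."""
    a, b = theta
    return a * z + b

def discriminator(x, phi):
    """Outputs the estimated probability that sample x is real; here a
    logistic score standing in for a deep discriminator network."""
    w, c = phi
    return 1.0 / (1.0 + math.exp(-(w * x + c)))

random.seed(0)
theta = (0.5, 1.0)   # generator parameters (placeholder values)
phi = (1.0, -1.5)    # discriminator parameters (placeholder values)

real = [random.gauss(2.0, 0.5) for _ in range(4)]            # "real" data
fake = [generator(random.gauss(0.0, 1.0), theta) for _ in range(4)]

# The discriminator would be trained to push these scores towards 1 ...
print([round(discriminator(x, phi), 2) for x in real])
# ... and these towards 0, while the generator is trained to raise them.
print([round(discriminator(x, phi), 2) for x in fake])
```

In a full implementation, both parameter sets would be updated in alternation by gradient steps on opposing objectives, which is the adversarial dynamic the main text describes.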
Thus, the latter network is referred to as a discriminator network. Both networks are then optimized simultaneously. The generator has to create ever more lifelike instances of synthetic data in order to keep up with the improving ability of the discriminator to discern instances of real and generated data. The following image shows a schematic representation of a GAN, using image processing as an example.

Figure 7: Generative Adversarial Network (GAN)
Source: Created on behalf of IU (2019), based on Ahn, 2017.

Super Intelligence

Building on the notion of artificial intelligence as a human equivalent is the idea of super intelligence: the belief in the possibility of an artificially created intelligence that could exceed the capabilities of the human mind. Strikingly, adherents of this idea commonly think that such a level of intelligence would be brought about not by human scientists or engineers, but by intelligent machines themselves. In this line of thinking, a machine that has reached a versatile and open-ended level of intelligence equivalent to the human level could use its capacities to acquire vast amounts of existing knowledge (because it has no memory capacity or retrieval limitations) and then use that knowledge together with its problem-solving capacity to improve itself. The resulting next-generation artificial intelligence would then, in turn, use its superior resources and capacities for self-improvement to create a subsequent version of ever-improved intelligence. Thus, a runaway evolution of increasingly intelligent machines could take place, quickly surpassing anything humanly imaginable. The point at which this exponential growth in machine intelligence kicks off is often referred to as the technological singularity.

The two most influential thinkers behind the creation and subsequent popularization of the idea of a technological singularity, as described above, are Vernor Vinge and Ray Kurzweil.
In the early 1990s, Vinge (1993) predicted that greater-than-human intelligence would be achieved within the following thirty years by either technological or biological means or a combination thereof. He also believed that a technological singularity is a process that gets triggered at a point in time when artificial intelligence systems become sufficiently developed. Subsequently, continuous improvements occur, which feed into themselves and thus keep accelerating.

Ray Kurzweil is another renowned proponent of the concept of singularity. Early in his career, he gained recognition as an inventor and futurist by contributing to the fields of scanning and speech recognition, to name just a few. In his 2005 book, "The Singularity Is Near", he discussed the concept of singularity intensively, and thereby influenced the scientific community worldwide, as well as inspiring many books and films about the future of AI. Kurzweil predicted that advances in intelligence would be non-biological and based on an artificially created substrate rather than neurons, with the potential to become "a trillion times more powerful" (2005, p. 25) than any intelligence that existed at the time. The successor to this famous book, "The Singularity Is Nearer", was published in 2024.

However, there are other lines of argumentation that call into question whether such a development is likely or even possible (see, for example, Walsh, 2017). A summary of some common counter-arguments to the concept of a technological singularity is listed below.

– An explosion of artificial intelligence cannot happen until machine intelligence surpasses the human variety in all domains. This is not yet the case, and we do not know whether this will be the case in the near future.
Playing chess or the game of Go is not sufficient evidence for the development of an Artificial General Intelligence (AGI) that is at least at the same level as human intelligence. However, the recent advances of the language model GPT-3, which is able to produce human-like text in many cases, have fueled this discussion.

– Arguably, the generality of human intelligence is contingent on many mental factors, such as emotion, motivation, the feeling of autonomy and agency, and even, to some extent, our biases and seeming cognitive shortcomings. The logic of the artificial intelligence explosion seems to assume that a machine can achieve or mimic those while retaining control over its more mechanical and computer-like aspects, such as virtually unbounded memory and computational speed.

– Looking at the history of achievements in the field of artificial intelligence, practical successes, while steady, have been rather modest. It is true that new discoveries in artificial intelligence are being made and have been demonstrated to be scientifically and economically successful. However, every new discovery brings increased complexity, so the next discovery is likely to be more difficult. Walsh calls this the "diminishing returns" argument (2017, p. 61).

– Similarly, while large returns from artificial intelligence may be observable in day-to-day applications, trends may not continue forever, given that no trend ever does. Eventually, exponential performance curves turn into s-curves, which taper off at the top.

– The argument in support of the theory of an artificial intelligence explosion focuses exclusively on the individual mind. However, much of what makes up the strength of the human mind is its social dimension. The times when the most gifted individuals could absorb the total amount of available knowledge are long gone. Our lives are shaped by the collective intelligence of our society and its division of labor.
Even on their own, the brightest individuals could not establish the quality of life and the freedom to pursue the quest for all of human knowledge, at least in our developed societies.

THINKING EXERCISE

Consider what a hypothetical society with advanced artificial intelligence might look like thirty or more years from now.

– In what field will artificial intelligence play the greatest role? What effect will artificial intelligence have on this field and on society more generally?
– Do you think it is possible that rogue governments, corporations, or other criminal elements could hijack an advanced artificial intelligence system and exploit it for their own ends?
– Do you believe, as Kurzweil does, that the contents of a human brain, working in analog, could in the future be downloaded onto a digital storage device and preserved?

SUMMARY

In this unit, we focused on scientific disciplines closely related to, and uniquely informing, research in artificial intelligence.

Focusing on neuroscience, this unit provided some basic anatomical facts about cells in the nervous system and the brain. The coarse-scale structure of the brain (the division into the main lobes) was also described, and the physiological relevance of the main constituent parts outlined.

Expanding upon neuroscience, cognitive science was introduced to give a broader point of view and a more general understanding of cognitive processes and phenomena, utilizing contributions from diverse academic fields, including philosophy, psychology, linguistics, and anthropology. The unit also explored the multitude of interrelations between these fields of study and their connection to artificial intelligence.