
T. Y. B. SC. IT SEMESTER - V (CBCS)
ARTIFICIAL INTELLIGENCE
SUBJECT CODE : 53704
© UNIVERSITY OF MUMBAI

Prof. Suhas Pednekar, Vice Chancellor, University of Mumbai, Mumbai.
Prof. Ravindra D. Kulkarni, Pro Vice-Chancellor, University of Mumbai.
Prof. Prakash Mahanwar, Director, IDOL, University of Mumbai.

Programme Co-ordinator: Prof. Mandar Bhanushe, Head, Faculty of Science & Technology, IDOL, University of Mumbai - 400 098.
Course Co-ordinator: Gouri S. Sawant, Assistant Professor, B.Sc. IT, IDOL, University of Mumbai - 400 098.
Editor: Dr. Rajeshri Shinkar, SIES (Nerul) College of Arts, Science & Commerce.
Course Writers:
Mr. Rupesh Ganesh Bhoir, Infosys Ltd.
Dr. Juita Tushar Raut, Assistant Professor, Sonopant Dandekar College, Palghar (W), Pin Code 401 404.
Ms. Vijaya Yogesh Rane, Assistant Professor, AVM's Karmaveer Bhaurao Patil Degree College, Thane.
Ms. Anam Ansari, Assistant Professor, B.N.N. College, Bhiwandi.
Ms. Jayalalita B Iyer, Lecturer, N.E.S. Ratnam College of Arts, Science and Commerce.

May 2022, Print I
Published by: Director, Institute of Distance and Open Learning, University of Mumbai, Vidyanagari, Mumbai - 400 098.
DTP composed and printed by: Mumbai University Press, Vidyanagari, Santacruz (E), Mumbai - 400 098.

CONTENT
UNIT I: 1. Introduction; 2. Intelligent Agents
UNIT II: 3. Solving Problems by Searching; 4. Beyond Classical Search
UNIT III: 5. Adversarial Search; 6. Logical Agent
UNIT IV: 7. First-Order Logic; 8. Inference in First-Order Logic
UNIT V: 9. Planning; 10. Knowledge Representation

*****

Syllabus
B. Sc. (Information Technology) Semester – V
Course Name: Artificial Intelligence
Course Code: 53704 (Elective I)
Periods per week (1 period is 50 minutes): 5
Credits: 2
Evaluation System: Theory Examination - 2½ hours, 75 marks; Internal - 25 marks.

Unit I (12 lectures)
Introduction: What is Artificial Intelligence? Foundations of AI, history, the state of the art of AI today.
Intelligent Agents: agents and environment, good behavior, nature of environment, the structure of agents.

Unit II (12 lectures)
Solving Problems by Searching: problem-solving agents, example problems, searching for solutions, uninformed search, informed search strategies, heuristic functions.
Beyond Classical Search: local search algorithms, searching with non-deterministic actions, searching with partial observations, online search agents and unknown environments.

Unit III (12 lectures)
Adversarial Search: games, optimal decisions in games, alpha-beta pruning, stochastic games, partially observable games, state-of-the-art game programs.
Logical Agents: knowledge-based agents, the Wumpus world, logic, propositional logic, propositional theorem proving, effective propositional model checking, agents based on propositional logic.

Unit IV (12 lectures)
First-Order Logic: syntax and semantics, using first-order logic, knowledge engineering in first-order logic.
Inference in First-Order Logic: propositional vs. first-order inference, unification and lifting, forward and backward chaining, resolution.
Unit V (12 lectures)
Planning: definition of classical planning, algorithms for planning as state-space search, planning graphs, other classical planning approaches, analysis of planning approaches, time, schedules and resources, hierarchical planning, planning and acting in nondeterministic domains, multiagent planning.
Knowledge Representation: categories and objects, events, mental events and objects, reasoning systems for categories, reasoning with default information, Internet shopping world.

Books and References:
1. Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig, Pearson, 3rd Edition, 2015.
2. A First Course in Artificial Intelligence, Deepak Khemani, TMH, First Edition, 2017.
3. Artificial Intelligence: A Rational Approach, Rahul Deva, Shroff Publishers, 1st Edition, 2018.
4. Artificial Intelligence, Elaine Rich, Kevin Knight and Shivashankar Nair, TMH, 3rd Edition, 2009.
5. Artificial Intelligence & Soft Computing for Beginners, Anandita Das Bhattacharjee, SPD, 1st Edition, 2013.

*****

UNIT I

1 INTRODUCTION

Unit Structure
1.1 Objectives
1.2 Introduction
1.2.1 Introduction to AI
1.2.2 Objectives of AI
1.3 What is Artificial Intelligence?
1.3.1 What is AI
1.3.2 Definitions of AI
1.4 Foundations of AI
1.4.1 Introduction
1.4.2 Turing Test
1.4.3 Weak AI versus Strong AI
1.4.4 Philosophy
1.5 History
1.6 The State of the Art: AI Today
1.6.1 Current AI Innovations
1.6.2 Practical Applications of AI
1.7 Summary
1.8 Unit End Questions
1.9 References

1.1 OBJECTIVES
After going through this unit, you will be able to:
 Define Artificial Intelligence
 Understand the foundations of AI
 Explain how AI evolved
 Explain the history of AI
 Describe today's AI, and where and what research and work is going on in the AI field

1.2 INTRODUCTION
1.2.1 Introduction to AI:
Artificial Intelligence aims to make computers intelligent so that they can act intelligently, as humans do. It is the study of making computers do things which, at the moment, people are better at. AI has made great strides in building intelligent systems and in understanding them. Another reason to understand AI is that these systems are interesting and useful in their own right. Many tools and techniques are used to construct AI systems. AI draws on a huge variety of fields like computer science, mathematics, logical reasoning, linguistics, neuroscience and psychology to perform specific tasks. AI can be used in many areas like playing chess, proving mathematical theorems, writing poetry, diagnosing diseases, creating expert systems, speech recognition, etc.

1.2.2 Objectives of AI:
The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. AI strives to build intelligent entities as well as understand them. The study of intelligence is also one of the oldest disciplines: for over 2000 years, philosophers have tried to understand how seeing, learning, remembering, and reasoning could, or should, be done. AI currently encompasses a huge variety of subfields, from general-purpose areas such as perception and logical reasoning, to specific tasks such as playing chess, proving mathematical theorems, writing poetry, and diagnosing diseases. Often, scientists in other fields move gradually into artificial intelligence, where they find the tools and vocabulary to systematize and automate the intellectual tasks on which they have been working all their lives.
Similarly, workers in AI can choose to apply their methods to any area of human intellectual endeavor. In this sense, it is truly a universal field.

1.3 WHAT IS ARTIFICIAL INTELLIGENCE?
1.3.1 What Is Artificial Intelligence?:
Artificial Intelligence is the branch of computer science concerned with making computers behave like humans. Major AI textbooks define artificial intelligence as "the study and design of intelligent agents," where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines, especially intelligent computer programs."

1.3.2 Definitions of AI:
Artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs". Artificial Intelligence is making a computer, a computer-controlled robot, or a piece of software think intelligently, in the way intelligent humans think. AI is accomplished by studying how the human brain thinks and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis for developing intelligent software and systems.

Some Definitions of AI:
There are various definitions of AI. They are basically grouped into four categories:
i) Systems that think like humans
ii) Systems that think rationally
iii) Systems that act like humans
iv) Systems that act rationally

Category 1) Systems that think like humans: An electronic machine or computer thinks like a human being.
"The exciting new effort to make computers think … machines with minds, in the full literal sense" (Haugeland, 1985)
"The automation of activities that we associate with human thinking, activities such as decision making, problem solving, learning…" (Bellman, 1978)

Category 2) Systems that think rationally: An electronic machine or computer thinks rationally. If a system thinks the right thing, the system is rational.
"The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985)
"The study of the computations that make it possible to perceive, reason, and act" (Winston, 1992)

Category 3) Systems that act like humans: An electronic machine or computer acts like a human being.
"The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990)
"The study of how to make computers do things at which, at the moment, people are better" (Rich and Knight, 1991)

Category 4) Systems that act rationally: An electronic machine or computer acts rationally.
"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990)
"The branch of computer science that is concerned with the automation of intelligent behavior" (Luger and Stubblefield, 1993)

1.4 FOUNDATIONS OF AI
1.4.1 Introduction:
AI is a field which has inherited many ideas, viewpoints, and techniques from other disciplines such as mathematics, formal theories of logic, probability, decision making, and computation. From over 2000 years of tradition in philosophy, theories of reasoning and learning have emerged, along with the viewpoint that the mind is constituted by the operation of a physical system.
From over 400 years of mathematics, we have formal theories of logic, probability, decision making, and computation. From psychology, we have the tools with which to investigate the human mind, and a scientific language within which to express the resulting theories. From linguistics, we have theories of the structure and meaning of language. Finally, from computer science, we have the tools with which to make AI a reality.

1.4.2 Turing Test:
Acting humanly: the Turing Test approach.
 The test was proposed by Alan Turing in 1950.
 The computer is asked questions by a human interrogator. The computer passes the test if the interrogator, after posing some written questions, cannot tell whether the written responses come from a person or not.
To pass the test, the computer needs to possess the following capabilities:
 Natural language processing, to enable it to communicate successfully in English.
 Knowledge representation, to store what it knows or hears.
 Automated reasoning, to use the stored information to answer questions and to draw new conclusions.
 Machine learning, to adapt to new circumstances and to detect and extrapolate patterns.
To pass the complete Turing Test, the computer will also need:
 Computer vision, to perceive objects, and
 Robotics, to manipulate objects and move about.
Problem: The Turing test is not reproducible, constructive, or amenable to mathematical analysis.

1.4.3 Weak AI versus Strong AI:
The definition of AI runs along two dimensions, human vs. ideal and thought vs. action. But there are other dimensions that are worth considering. One dimension is whether we are interested in theoretical results or in practical applications. Another is whether we intend our intelligent computers to be conscious or not. The claim that machines can be conscious is called the strong AI claim; the weak AI position makes no such claim.

Artificial intelligence is …
a. "a collection of algorithms that are computationally tractable, adequate approximations of intractably specified problems" (Partridge, 1991)
b. "the enterprise of constructing a physical symbol system that can reliably pass the Turing Test" (Ginsberg, 1993)
c. "the field of computer science that studies how machines can be made to act intelligently" (Jackson, 1986)
d. "a field of study that encompasses computational techniques for performing tasks that apparently require intelligence when performed by humans" (Tanimoto, 1990)
e. "a very general investigation of the nature of intelligence and the principles and mechanisms required for understanding or replicating it" (Sharples et al., 1989)
f. "the getting of computers to do things that seem to be intelligent" (Rowe, 1988)

1.4.4 Philosophy (428 B.C.–Present):
Philosophers made AI conceivable by considering the ideas that the mind is in some ways like a machine that operates on knowledge encoded in some internal language, and that thought can be used to choose what actions to take. Philosophy contributes knowledge and understanding of intelligence, ethics, logic, methods of reasoning, the mind as a physical system, the foundations of learning, language and rationality.

Mathematics:
Philosophers staked out most of the important ideas of AI, but the leap to a formal science required a level of mathematical formalization in three main areas: logic, computation, and probability.

Algorithm and Probability:
With logic, a connection was made between probabilistic reasoning and action.
Decision Theory:
Decision theory, pioneered by John von Neumann and Oskar Morgenstern (1944), combines probability theory with utility theory to give the first general theory that can distinguish good actions from bad ones. Decision theory is the mathematical successor to utilitarianism, and provides the theoretical basis for many agent designs.

Psychology (1879–Present):
Psychology contributes the idea that humans and animals can be considered information-processing machines. It is concerned with adaptation, the phenomena of perception and motor control, and experimental techniques (psychophysics, etc.).

Computer Engineering (1940–Present):
For artificial intelligence to succeed, we need two things: intelligence and an artifact. The computer has been the artifact of choice, being the best artifact to exhibit intelligence. AI also owes a debt to the software side of computer science, which has supplied the operating systems, programming languages, and tools needed to write modern programs. Computer engineers provided the artifacts that make AI applications possible.

During World War II, several innovations were made in this field. In 1940, Alan Turing's team developed Heath Robinson to decode German messages. Then, in 1943, Colossus was built from vacuum tubes to decode more complex codes. In 1941, the first operational programmable computer, the Z-3, was invented. In the US, the ABC, the first electronic computer, was assembled between 1940 and 1942. The Mark I, II, and III computers were developed at Harvard. ENIAC, the first general-purpose, electronic, digital computer, was developed; computing artillery firing tables was one of its applications. In 1952, the IBM 701 was built. This was the first computer to yield a profit for its manufacturer. IBM went on to become one of the world's largest corporations, and sales of computers have grown ever since. Computer engineering has been remarkably successful, regularly doubling performance every two years.

Linguistics (1957–Present):
Linguists showed that language use fits into the information-processing model. Modern linguistics and AI were "born" at about the same time, and grew up together, intersecting in a hybrid field called computational linguistics or natural language processing. In 1957, B. F. Skinner published Verbal Behavior, presenting the behaviorist approach to language learning. But curiously, a review of the book became as well known as the book itself, and served to almost kill off interest in behaviorism. The author of the review was Noam Chomsky, who had just published a book on his own theory, Syntactic Structures. Chomsky showed how the behaviorist theory did not address the notion of creativity in language: it did not explain how a child could understand and make up sentences that he or she had never heard before. Chomsky's theory, based on syntactic models going back to the Indian linguist Panini (c. 350 B.C.), could explain this, and unlike previous theories, it was formal enough that it could in principle be programmed.
McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him bring together U.S. researchers interested in automata theory, neural nets, and the study of intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956. Perhaps the longest-lasting thing to come out of the workshop was an agreement to adopt McCarthy's new name for the field: artificial intelligence.

Early Enthusiasm, Great Expectations (1952–1969):
The early years of AI were full of successes, in a limited way. The General Problem Solver (GPS) was a computer program created in 1957 by Herbert Simon and Allen Newell to build a universal problem-solver machine. The order in which the program considered subgoals and possible actions was similar to that in which humans approached the same problems. Thus, GPS was probably the first program to embody the "thinking humanly" approach. At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs. Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which was able to prove theorems that many students of mathematics would find quite tricky. Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). In 1963, McCarthy started the AI lab at Stanford. Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ tests.

A Dose of Reality (1966–1973):
From the beginning, AI researchers were making predictions of their coming successes. The following statement by Herbert Simon in 1957 is often quoted: "It is not my aim to surprise or shock you, but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied."

Knowledge-Based Systems: The Key to Power? (1969–1979):
Dendral was an influential pioneer project in artificial intelligence of the 1960s, and the computer software expert system that it produced. Its primary aim was to help organic chemists identify unknown organic molecules by analyzing their mass spectra and using knowledge of chemistry.

AI Becomes an Industry (1980–Present):
In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build intelligent computers running Prolog. Overall, the AI industry boomed from a few million dollars in 1980 to billions of dollars in 1988.

The Return of Neural Networks (1986–Present):
Although computer science had neglected the field of neural networks after Minsky and Papert's Perceptrons book, work had continued in other fields, particularly physics. Large collections of simple neurons could be understood in much the same way as large collections of atoms in solids. Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net models of memory.

Recent Events:
In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the area of speech recognition. Speech technology and the related field of handwritten character recognition are already making the transition to widespread industrial and consumer applications. The Bayesian network formalism was invented to allow efficient representation of, and rigorous reasoning with, uncertain knowledge.
1.6 THE STATE OF THE ART: AI TODAY
1.6.1 Current AI Innovations:
Chatbots, smart cars, IoT devices, healthcare, banking, and logistics all use artificial intelligence to provide a superior experience. One AI that is quickly finding its way into most consumers' homes is the voice assistant, such as Apple's Siri, Amazon's Alexa, Google's Assistant, and Microsoft's Cortana. Some of these innovations are listed below.

Area - Application / Example / Company Name
Autonomous cars (driverless cars) - Tesla Model S, Pony.ai, Waymo, Apple, Kia-Hyundai, Ford, Audi, Huawei
Speech recognition - Apple's Siri and Amazon's Alexa
Chatbots - Swelly, eBay, Lyft, Yes sire, 1-800-Flowers
Web search engines - ai.google
Translators - SYSTRAN
Natural language processing - IBM Watson API
Medical diagnosis, imaging - Artificial intelligence assistance in "keeping well"
Pattern detection - RankBrain by Google

1.6.2 Practical Applications of AI:
Autonomous Planning and Scheduling: A hundred million miles from Earth, NASA's Remote Agent program became the first on-board autonomous planning program to control the scheduling of operations for a spacecraft (Jonsson et al., 2000). Remote Agent generated plans from high-level goals specified from the ground, and it monitored the operation of the spacecraft as the plans were executed, detecting, diagnosing, and recovering from problems as they occurred.

Game Playing: IBM's Deep Blue became the first computer program to defeat the world champion in a chess match when it bested Garry Kasparov by a score of 3.5 to 2.5 in an exhibition match (Goodman and Keene, 1997).

Autonomous Control: The ALVINN computer vision system was trained to steer a car to keep it following a lane. It was placed in CMU's NAVLAB computer-controlled minivan and used to navigate across the United States; for 2850 miles it was in control of steering the vehicle 98% of the time.

Diagnosis: Medical diagnosis programs based on probabilistic analysis have been able to perform at the level of an expert physician in several areas of medicine.

Logistics Planning: During the Persian Gulf crisis of 1991, U.S. forces deployed a Dynamic Analysis and Replanning Tool, DART (Cross and Walker, 1994), to do automated logistics planning and scheduling for transportation. This involved up to 50,000 vehicles, cargo, and people at a time, and had to account for starting points, destinations, routes, and conflict resolution among all parameters. The AI planning techniques allowed a plan to be generated in hours that would have taken weeks with older methods. The Defense Advanced Research Projects Agency (DARPA) stated that this single application more than paid back DARPA's 30-year investment in AI.

Robotics: Many surgeons now use robot assistants in microsurgery. HipNav (DiGioia et al., 1996) is a system that uses computer vision techniques to create a three-dimensional model of a patient's internal anatomy and then uses robotic control to guide the insertion of a hip replacement prosthesis.

Language Understanding and Problem Solving: PROVERB (Littman et al., 1999) is a computer program that solves crossword puzzles better than most humans, using constraints on possible word fillers, a large database of past puzzles, and a variety of information sources including dictionaries and online databases such as a list of movies and the actors that appear in them.
1.7 SUMMARY
Artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs". Turing test: to pass the complete Turing test, the computer will need computer vision to perceive objects, and robotics to manipulate objects and move about. Artificial intelligence aims to make computers intelligent so that they can act intelligently, as humans do. The categorical definitions of AI are: systems that think like humans, systems that think rationally, systems that act like humans, and systems that act rationally. The foundations of AI draw on philosophy, mathematics, algorithms and probability, psychology, computer engineering, linguistics, etc. Work on machine intelligence started early, but the period usually treated as the history of AI runs from the gestation of artificial intelligence (1943–1955) through the return of neural networks (1986–present) and recent events. Current AI innovations include autonomous (driverless) cars, speech recognition, chatbots, web search engines, translators, natural language processing, medical diagnosis and imaging, and pattern detection. Practical applications of AI include autonomous planning and scheduling, game playing, autonomous control, diagnosis, logistics planning, robotics, and language understanding and problem solving.

1.8 UNIT END QUESTIONS
1.1 There are well-known classes of problems that are intractably difficult for computers, and other classes that are provably undecidable by any computer. Does this mean that AI is impossible?
1.2 Suppose we extend Evans's ANALOGY program so that it can score 200 on a standard IQ test. Would we then have a program more intelligent than a human? Explain.
1.3 Examine the AI literature to discover whether or not the following tasks can currently be solved by computers:
a. Playing a decent game of table tennis (ping-pong).
b. Driving in the center of Cairo.
c. Playing a decent game of bridge at a competitive level.
d. Discovering and proving new mathematical theorems.
e. Writing an intentionally funny story.
f. Giving competent legal advice in a specialized area of law.
g. Translating spoken English into spoken Swedish in real time.
1.4 Find an article written by a lay person in a reputable newspaper or magazine claiming the achievement of some intelligent capacity by a machine, where the claim is either wildly exaggerated or false.
1.5 Fact, fiction, and forecast:
a. Find a claim in print by a reputable philosopher or scientist to the effect that a certain capacity will never be exhibited by computers, where that capacity has now been exhibited.
b. Find a claim by a reputable computer scientist to the effect that a certain capacity would be exhibited by a date that has since passed, without the appearance of that capacity.
c. Compare the accuracy of these predictions to predictions in other fields such as biomedicine, fusion power, nanotechnology, transportation, or home electronics.
1.6 Some authors have claimed that perception and motor skills are the most important part of intelligence, and that "higher-level" capacities are necessarily parasitic, simple add-ons to these underlying facilities. Certainly, most of evolution and a large part of the brain have been devoted to perception and motor skills, whereas AI has found tasks such as game playing and logical inference to be easier, in many ways, than perceiving and acting in the real world.
Do you think that AI's traditional focus on higher-level cognitive abilities is misplaced?
1.7 "Surely computers cannot be intelligent—they can only do what their programmers tell them." Is the latter statement true, and does it imply the former?
1.8 "Surely animals cannot be intelligent—they can only do what their genes tell them." Is the latter statement true, and does it imply the former?

1.9 REFERENCES
Artificial Intelligence: A Modern Approach, 4th US ed., by Stuart Russell and Peter Norvig.
Deepak Khemani, "A First Course in Artificial Intelligence", McGraw Hill edition, 2013.
Patrick Henry Winston, "Artificial Intelligence", Addison-Wesley, Third Edition.

*****

2 INTELLIGENT AGENTS

Unit Structure
2.1 Objectives
2.2 Introduction
2.3 Agents and Environment
2.4 Good Behavior
2.4.1 Rational Agent
2.4.2 Mapping from Percept Sequences to Actions
2.4.3 Performance Measure
2.4.4 PEAS
2.5 Nature of Environment
2.6 The Structure of Agents
2.6.1 Structure
2.6.2 Softbots
2.6.3 PAGE
2.6.4 Types of Agent
2.7 Summary
2.8 Unit End Questions
2.9 References

2.1 OBJECTIVES
After going through this unit, you will be able to:
 Define an AI agent
 Understand the agent and its environment
 Understand the rationality of an agent and performance measures
 Identify PEAS and PAGE
 Understand the nature of the environment
 Explain the structure of agents

2.2 INTRODUCTION
An agent is anything that can be viewed as capturing its environment through cameras, sensors or some other input devices and acting upon that environment through effectors. A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for effectors. A robotic agent substitutes cameras and infrared range finders for the sensors and various motors for the effectors. A software agent has encoded bit strings as its percepts and actions. The focus is on designing agents that do a good job of acting on their environment: what we mean by a good job, rationality and performance measures, then different designs for successful agents, which things should be known to the agent, and several kinds of environments.

2.3 AGENT AND ENVIRONMENT
Definition: An agent is anything that perceives its environment through sensors and acts upon that environment through effectors. An agent is split into an architecture and an agent program. An intelligent agent is an agent capable of making decisions about how it acts based on experience. Sensors are used to receive percept signals; percept signals can be anything like audio, video, images, etc. Effectors are used to take action; we can also call effectors actuators. The following are the sensors and actuators for a robotic agent:
Sensors: cameras, infrared range finders, scanners, etc.
Actuators: various motors, screen, printing devices.
The environment is the surrounding of the robotic agent. It is the area in which the agent does its work and performs actions. For a vacuum-cleaner robotic agent, a room is the environment. There are various types of environment: fully observable or partially observable, static or dynamic, accessible or inaccessible, known or unknown, single-agent or multi-agent, etc. An agent may perceive the signals of other robotic agents' actions, temperature, humidity, pressure, air type, atmospheric conditions, climate, other surrounding conditions and many more.
In addition to this, in the taxi-driving example: road conditions, traffic conditions, weather conditions, rain or smog, wind, driving rules, etc.

2.4 GOOD BEHAVIOR
Here, the behavior of the AI agent is checked. For this, rationality and performance measures are considered.

2.4.1 Rational Agent:
A rational agent is one that does the "right" things and acts rationally so as to achieve the best outcome even when there is uncertainty in knowledge. A rational agent can be defined as an agent who makes use of its percept sequence, experience and knowledge to maximize its performance measure for every possible action. Performance measures can be anything like accuracy, speed, safety, total work done with respect to time, efficiency, saving money and time, following legal rules, etc.

What is rational at any given time depends on four things:
1) The performance measure, which defines the degree of success. Performance is the measure of successful completion of any task.
2) The complete perceptual history, i.e., the percept sequence. This percept sequence is the sequence of signals or inputs that the agent has perceived so far.
3) What the agent knows about the environment, especially about the environment's dimensions, structure, basic laws, task, user, other agents, etc.
4) The actions that the agent can perform. This reflects the capability of the agent.

Ideal rational agent:
For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

2.4.2 The Ideal Mapping from Percept Sequences to Actions:
For each percept signal there is an associated action of the agent. This information can be described in tabular format. Let's consider a vacuum-cleaner robotic agent that cleans a room. For simplicity, two tiles are considered. The vacuum-cleaner agent is present on Tile A and observes the environment, i.e., both Tile A and Tile B. Both tiles may have some dirt, and the agent has to clean the dirt by sucking it up. The percept sequence and its associated action can be written down as a table.
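A minimal sketch of such a mapping for the two-tile world is shown below as a Python dictionary. The (location, status) percept format, the tile names A and B, and the action names are illustrative assumptions, not a reproduction of the original table.

```python
# Percept-to-action mapping for a two-tile vacuum world (illustrative sketch).
# A percept is a (location, status) pair; the action is looked up directly.
PERCEPT_TO_ACTION = {
    ("A", "Dirty"): "Suck",   # dirt under the agent: clean it
    ("A", "Clean"): "Right",  # Tile A clean: move to Tile B
    ("B", "Dirty"): "Suck",
    ("B", "Clean"): "Left",   # Tile B clean: move back to Tile A
}

def table_driven_agent(percept):
    """Return the action associated with the given percept."""
    return PERCEPT_TO_ACTION[percept]

print(table_driven_agent(("A", "Dirty")))  # -> Suck
```

Such a table grows with the length of the percept sequence, which is why the agent programs described later compute their actions instead of looking them all up.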
Autonomous Agent:
An agent is autonomous to the extent that its action choices depend on its own experience, rather than on knowledge of the environment that has been built in by the designer. Various autonomous agents are in the development stage: the Waymo driverless car, Amazon's Alexa, etc.

2.4.3 How and When to Evaluate the Agent's Success:
Performance measure: The performance measure is one of the major criteria for measuring the success of an agent's performance. As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
Example: The performance measure of a vacuum-cleaner robotic agent can depend upon various factors like its dirt-cleaning ability, the time taken to clean that dirt, the consumption of electricity, etc.

2.4.4 PEAS Properties of an Agent:
What is PEAS? PEAS: Performance, Environment, Actuators, Sensors.
Performance measures - used to evaluate how well an agent solves the task at hand.
Environment - surroundings beyond the control of the agent.
Actuators - determine the actions the agent can perform.
Sensors - provide information about the current state of the environment.

Examples:
1) Automated car-driving agent / driverless car:
Performance measures: safety, optimum speed, comfortable journey, maximize profits.
Environment: roads, traffic conditions, clients.
Actuators: steering wheel, accelerator, gear, brake, light signal, horn.
Sensors: cameras, sonar system, speedometer, GPS, engine sensors, etc.
2) Medical diagnosis system:
Performance measures: healthy patient (sterilized instruments), minimized costs.
Environment: patients, doctors, hospital environment.
Actuators: camera, scanner (to scan a patient's report), screen, printer.
Sensors: body scanner, organ scanners, e.g., MRI, CT scan, etc.
3) Soccer-playing robot:
Performance measures: number of goals, speed, legal game.
Environment: team players, opponent team players, playing ground, goal net.
Actuators: joint angles, motors.
Sensors: camera, proximity sensors, infrared sensors.

2.5 NATURE OF ENVIRONMENT OR PROPERTIES OF ENVIRONMENT
1) Accessible vs. inaccessible: If an agent can obtain complete and accurate information about the state of the environment, then such an environment is called an accessible environment; otherwise it is called inaccessible.
Example: Accessible environment - an empty closed room whose state can be defined by its temperature, air pressure and humidity. Inaccessible environment - information about an event occurring in the atmosphere of the earth.
2) Deterministic vs. non-deterministic: If an agent's current state and selected action can completely determine the next state of the environment, then such an environment is called a deterministic environment. In an accessible, deterministic environment, an agent does not need to worry about uncertainty. If the environment is inaccessible, however, then it may appear to be non-deterministic. This is particularly true if the environment is complex, making it hard to keep track of all the inaccessible aspects. Thus, it is often better to think of an environment as deterministic or non-deterministic from the point of view of the agent.
3) Episodic vs. non-episodic: In an episodic environment, the agent's experience is divided into "episodes." Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes. Episodic environments are much simpler because the agent does not need to think ahead. In a non-episodic environment, however, an agent requires memory of past actions to determine the next best action.
4) Static vs. dynamic: If the environment can change while an agent is deciding on an action, then we say the environment is dynamic for that agent; otherwise it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time. If the environment does not change with the passage of time but the agent's performance score does, then we say the environment is semi-dynamic. Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a static environment.
5) Discrete vs. continuous: If there are a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete. Chess is discrete: there are a fixed number of possible moves on each turn. Taxi driving is continuous: the speed and location of the taxi and the other vehicles sweep through a range of continuous values.

2.6 STRUCTURE OF INTELLIGENT AGENT
2.6.1 Structure:
The structure of the agent describes how the agent works. The job of AI is to design the agent program. The agent program is a function that implements the agent mapping from percepts to actions. An architecture, or hardware, is used to run this agent program. The relationship among agents, architectures, and programs can be summed up as follows:
agent = architecture + program

2.6.2 Software Agents:
A software agent is also referred to as a softbot. It lives in artificial environments where computers and networks provide the infrastructure. It may be very complex, with strong requirements on the agent, e.g., a softbot designed to fly a flight simulator, or a softbot designed to scan online news sources and show the interesting items to its customers. On the World Wide Web there may be real-time constraints, and natural and artificial environments may be merged: user interaction together with sensors and actuators in the real world (camera, temperature, arms, wheels, etc.).

2.6.3 PAGE:
PAGE is used for a high-level characterization of agents.
P - Percepts: information acquired through the agent's sensory system.
A - Actions: operations performed by the agent on the environment through its actuators.
G - Goals: desired outcome of the task, with a measurable performance.
E - Environment: surroundings beyond the control of the agent.

Following are some examples of agents and their PAGE descriptions:
1. Medical diagnosis system - Percepts: symptoms, findings, patient's answers. Actions: questions, tests, treatments. Goals: healthy patient, minimize costs. Environment: patient, hospital.
2. VacBot (vacuum-cleaner bot) - Percepts: tile properties like clean/dirty, empty/occupied, movement and orientation. Actions: pick up dirt, move. Goals: desired outcome of the task with a measurable performance. Environment: surroundings beyond the control of the agent.
3. StudentBot - Percepts: images (text, pictures, instructor, classmates), sound (language). Actions: comments, questions, gestures, note-taking. Goals: mastery of the material; performance measure: grade. Environment: classroom.
4. Satellite image analysis system - Percepts: pixels of varying intensity and color. Actions: print a categorization of the scene. Goals: correct categorization. Environment: images from an orbiting satellite.
5. Part-picking robot - Percepts: pixels of varying intensity. Actions: pick up parts and sort them into bins. Goals: place parts in correct bins. Environment: conveyor belt with parts.
6. Refinery controller - Percepts: temperature and pressure readings. Actions: open and close valves; adjust temperature. Goals: maximize purity, yield, safety. Environment: refinery.
7. Interactive English tutor - Percepts: typed words. Actions: print exercises, suggestions, corrections. Goals: maximize student's score on a test. Environment: set of students.

2.6.4 Different Types of Agent Program:
Every agent program has some skeleton. A real agent program implements the mapping from percepts to actions. There are four types of agent program, as follows:
1) Simple reflex agents
2) Agents that keep track of the world
3) Goal-based agents
4) Utility-based agents

1) Type 1: Simple Reflex Agent:
Reflex agents respond immediately to percepts. They choose actions based only on the current percept.
They are rational only if a correct decision can be made on the basis of the current percept alone, and their environment must be completely observable. In the simple reflex agent, the correct decision can be made on the basis of the current percept.

Let's consider a driverless car, Car A. If car B is running in front of car A, and car B suddenly applies its brakes so that its brake lights come on, then the driverless-car agent should notice that and initiate braking. This can be specified as a condition-action rule, represented in the form "If condition Then Action":
if car-in-front-is-braking then initiate-braking

Schematic Diagram: Simple Reflex Agent

Following is the agent program, basically a function, for a simple reflex agent. It works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.

function Simple_Reflex_Agent(percept) returns action
    static: rules, a set of condition-action rules
    state ← Interpret_Input(percept)
    rule ← Rule_Match(state, rules)
    action ← Rule_Action[rule]
    return action

This agent program calls the Interpret_Input() function, which generates an abstracted description of the current state from the percept input, and the Rule_Match() function, which returns the first rule in the set of rules that matches the given state description. Such agents can be implemented very efficiently, but their range of applicability is very narrow.
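As a concrete illustration, here is a minimal runnable Python rendering of the pseudocode above, for the braking example. The rule set, the percept format and the helper names are assumptions made for this sketch, not part of the original program.

```python
# Minimal simple reflex agent (sketch), mirroring the pseudocode above.
RULES = [
    # (condition on the state, action)
    (lambda state: state["car_in_front_is_braking"], "initiate-braking"),
    (lambda state: True, "keep-driving"),  # default rule
]

def interpret_input(percept):
    """Abstract the raw percept into a state description."""
    # Assume the percept is a camera flag: brake lights on or off.
    return {"car_in_front_is_braking": percept.get("brake_lights_on", False)}

def simple_reflex_agent(percept):
    state = interpret_input(percept)
    for condition, action in RULES:          # Rule_Match
        if condition(state):
            return action                    # Rule_Action
    return None

print(simple_reflex_agent({"brake_lights_on": True}))   # -> initiate-braking
print(simple_reflex_agent({"brake_lights_on": False}))  # -> keep-driving
```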
2) Type 2: Agents that Keep Track of the World:
This is a reflex agent with internal state. Internal state is a representation of unobserved aspects of the current state, depending on percept history. Consider the example of a driverless car. If the car in front is a recent model, with a centrally mounted brake light, then it is possible to tell that it is braking from a single image. Unfortunately, older models have different configurations of tail lights, brake lights, and turn-signal lights, and it is not always possible to tell whether the car is braking. Thus, even for the simple braking rule, our driver will have to maintain some sort of internal state in order to choose an action. Here, the internal state is not too extensive: it just needs the previous frame from the camera, to detect when two red lights at the edge of the vehicle go on or off simultaneously.

Consider another case in car driving: from time to time, the driver looks in the rear-view mirror to check on the locations of nearby vehicles. When the driver is not looking in the mirror, the vehicles in the next lane are invisible (i.e., the states in which they are present and absent are indistinguishable); but in order to decide on a lane-change maneuver, the driver needs to know whether or not they are there. This happens because the sensors do not provide access to the complete state of the world. In such cases, the agent may need to maintain some internal state information in order to distinguish between world states that generate the same perceptual input but nonetheless are significantly different. Here, "significantly different" means that different actions are appropriate in the two states.

As time passes, the internal state information has to be updated. For this, two kinds of knowledge have to be encoded in the agent program. First, we need some information about how the world evolves independently of the agent; for example, that an overtaking car generally will be closer behind than it was a moment ago. Second, we need some information about how the agent's own actions affect the world; for example, that when the agent changes lanes to the right, there is a gap (at least temporarily) in the lane it was in before. The following figure shows how the current percept is combined with the old internal state to generate the updated description of the current state.

Diagram: Schematic Diagram for an Agent that Keeps Track of the World / Reflex Agent with Internal State

Agent Program:
The interesting part is the function Update-State, which is responsible for creating the new internal state description. As well as interpreting the new percept in the light of existing knowledge about the state, it uses information about how the world evolves to keep track of the unseen parts of the world, and it must also know what the agent's actions do to the state of the world. Following is the program for a reflex agent with internal state, i.e., an agent that keeps track of the world. It works by finding a rule whose condition matches the current situation (as defined by the percept and the stored internal state) and then doing the action associated with that rule.

function Reflex-Agent-With-State(percept) returns action
    static: state, a description of the current world state
            rules, a set of condition-action rules
    state ← Update-State(state, percept)
    rule ← Rule-Match(state, rules)
    action ← Rule-Action[rule]
    state ← Update-State(state, action)
    return action
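A minimal runnable Python rendering of this pseudocode is sketched below, again for the brake-light example. The state representation (the previous camera frame) and the update logic are illustrative assumptions.

```python
# Reflex agent with internal state (sketch). It keeps the previous camera
# frame so it can detect brake lights switching on between two frames.
class ReflexAgentWithState:
    def __init__(self):
        self.prev_lights_on = False  # internal state: last observed frame

    def update_state(self, percept):
        lights_on = percept.get("brake_lights_on", False)
        # Braking is inferred from the lights coming on since the last frame.
        braking = lights_on and not self.prev_lights_on
        self.prev_lights_on = lights_on
        return {"car_in_front_is_braking": braking}

    def act(self, percept):
        state = self.update_state(percept)
        if state["car_in_front_is_braking"]:   # Rule-Match
            return "initiate-braking"          # Rule-Action
        return "keep-driving"

agent = ReflexAgentWithState()
print(agent.act({"brake_lights_on": False}))  # -> keep-driving
print(agent.act({"brake_lights_on": True}))   # -> initiate-braking
```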
Goals just provide a crude distinction Intelligent Agents between "happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of different world states (or sequences of states) according to exactly how happy they would make the agent if they could be achieved. Because "happy" does not sound very scientific, the customary terminology is to say that if one world state is preferred to another, then it has higher utility for the agent. Utility is therefore a function that maps a state onto a real number, which describes the associated degree of happiness. A complete specification of the utility function allows rational decisions in two kinds of cases where goals have trouble. First, when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate trade-off. Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed up against the importance of the goals Utility function is used to map a state to a measure of utility of that state. We can define a measure for determining how advantageous a particular state is for an agent. To obtain this measure a utility function can be used. The term utility is used to depict how ―HAPPY‖ the agent is to find out a generalization. Examples: 1) Google Maps – A route which requires Least possible time. 1) Automatic Car: Should reach to the target location within least possible time safely and without consuming much fuel. Diagram: Schematic Diagram for Utility Based Agent 2.7 SUMMARY An agent is anything that perceives its environment through sensors and acting upon that environment through effectors. A rational agent is the one that does ―right‖ things and acts rationally so as to achieve the best outcome even when there is uncertainty in knowledge. 26 Artificial Intelligence PEAS- Performance, Environment, Actuators, Sensors determines the behavior of an agent. There are various nature of environment or properties of environment, they are as Accessible vs. inaccessible, Deterministic vs. non deterministic, Episodic vs. non episodic, Static vs. dynamic, Discrete vs. continuous, etc. Structure of Intelligent Agent: agent = architecture + program. PAGE is used for high-level characterization of agents. Different types of Agent program are as Simple reflex agents, Agents that keep track of the world, Goal-based agents, Utility-based agents. 2.8 UNIT END QUESTIONS 1. Describe is an agent, rational Agent, autonomous agent. 2. Which are the four things that is considered to check the agent is rational at any given time? 3. What is the good behavior of agent. How performance measure is the key factor for 4. Which are the four things that is considered to check the rationality of agent at any given time. 5. How and When to evaluate the agent’s success? 6. Describe Performance measure is one of the major criteria for measuring success of an agent’s performance. 7. What is PEAS Properties of Agent? Explain with example the PEAS for any four agents. 8. Explain Nature of Environment or Properties of Environment 9. What is softbot? 10. What is PAGE? Describe with any five examples. 11. Explain the structure of Agent. 12. Which are the types of agent? Explain all the types with schematic diagram. 13. Write the Agent program for Simplex Reflex Agent and Reflex Agent with Internal State. 
2.7 SUMMARY
An agent is anything that perceives its environment through sensors and acts upon that environment through effectors. A rational agent is one that does the "right" things and acts rationally so as to achieve the best outcome even when there is uncertainty in knowledge. PEAS (Performance, Environment, Actuators, Sensors) determines the behavior of an agent. There are various properties of environments: accessible vs. inaccessible, deterministic vs. non-deterministic, episodic vs. non-episodic, static vs. dynamic, discrete vs. continuous, etc. Structure of an intelligent agent: agent = architecture + program. PAGE is used for a high-level characterization of agents. The different types of agent program are simple reflex agents, agents that keep track of the world, goal-based agents, and utility-based agents.

2.8 UNIT END QUESTIONS
1. Describe what an agent, a rational agent and an autonomous agent are.
2. Which four things are considered to check whether an agent is rational at any given time?
3. What is good behavior of an agent? How is the performance measure a key factor for it?
4. Which four things are considered to check the rationality of an agent at any given time?
5. How and when should an agent's success be evaluated?
6. Describe how the performance measure is one of the major criteria for measuring the success of an agent's performance.
7. What are the PEAS properties of an agent? Explain the PEAS for any four agents with examples.
8. Explain the nature of the environment, or the properties of environments.
9. What is a softbot?
10. What is PAGE? Describe it with any five examples.
11. Explain the structure of an agent.
12. Which are the types of agents? Explain all the types with schematic diagrams.
13. Write the agent program for the simple reflex agent and the reflex agent with internal state.

2.9 REFERENCES
Artificial Intelligence: A Modern Approach, 4th US ed., by Stuart Russell and Peter Norvig.
Deepak Khemani, "A First Course in Artificial Intelligence", McGraw Hill edition, 2013.
Patrick Henry Winston, "Artificial Intelligence", Addison-Wesley, Third Edition.

*****

UNIT II

3 SOLVING PROBLEMS BY SEARCHING

Unit Structure
3.1 Objectives
3.2 Introduction
3.3 Problem Solving Agents
3.4 Example Problems
3.5 Searching for Solutions
3.6 Uninformed Search
3.7 Informed Search Strategies
3.8 Heuristic Functions
3.9 Summary
3.10 Exercises
3.11 Bibliography

3.1 OBJECTIVES
After this chapter, you should be able to understand the following concepts:
1. Understand how to formulate a problem description.
2. Understand and solve some problems of Artificial Intelligence.
3. Know about uninformed search and the related algorithms.
4. Understand informed search and the related algorithms.
5. Know about heuristic functions and strategies.

3.2 INTRODUCTION
A problem-solving agent first formulates a goal and a problem to solve. Then the agent calls a search procedure to solve it. It then uses the solution to guide its actions: it will do whatever the solution recommends as the next thing to do, and then remove that step from the sequence. Once the solution has been executed, the agent will formulate a new goal. Searching techniques, both uninformed and informed (heuristic), are used in many areas like theorem proving, game playing, expert systems, natural language processing, etc. In a search technique, we first select one option and set the other options aside; if this option is our final goal, we stop, else we continue selecting, testing and expanding until either a solution is found or there are no more states to be expanded.

3.3 PROBLEM SOLVING AGENTS
Formulating the problem domain is a very important task. Problems in Artificial Intelligence are always dealt with through what is commonly known as state. For solving any type of task (problem) in the real world, one needs a formal description of the problem. Each action changes the state, and the goal is to find the sequence of actions and states that leads from the initial (start) state to a final goal state. Goals help organize behavior by limiting the objectives that the agent is trying to achieve and hence the actions it needs to consider. Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. Problem formulation is the process of deciding what actions and states to consider, given a goal. The agent will not know which of its possible actions is best, because it does not yet know enough about the state that results from taking each action, if the agent has no additional information, i.e., if the environment is unknown. An agent with several immediate options of unknown value can decide what to do by first examining future actions that eventually lead to states of known value. The agent's task is to find out how to act, now and in the future, so that it reaches a goal state. Before it can do this, it needs to decide what sorts of actions and states it should consider.

3.3.1 Well-Defined Problems:
A problem can be defined formally by four components:
1) The initial state that the agent starts in. For example, consider an agent program, Indian Traveller, developed for travelling from Hyderabad to Mumbai through different states. The initial state for this agent can be described as In(Hyderabad).
2) A description of the possible actions available to the agent. Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions is applicable in s. For example, from the state In(Mumbai), the applicable actions are {Go(Solapur), Go(Osmanabad), Go(Vijayawada)}.
3) The goal test, which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. For example, in the Indian Traveller problem the goal is to reach Mumbai, i.e., the singleton set {In(Mumbai)}. Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states. For example, in chess, the goal is to reach a state called "checkmate," where the opponent's king is under attack and can't escape.
4) A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For the agent trying to get to Mumbai, time is of the essence, so the cost of a path might be its length in kilometres. The step cost of taking action a in state x to reach state y is denoted by c(x, a, y).
A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions.

Fig. 3.2.1 Indian traveller map
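These four components translate directly into code. The following Python sketch frames a tiny version of the Indian Traveller problem; the road map and the distances in kilometres are invented for illustration and are not taken from the map in the figure.

```python
# A problem as four components: initial state, actions, goal test, path cost.
# The road map and step costs (km) below are invented for illustration.
ROAD_MAP = {
    "Hyderabad": {"Solapur": 305, "Vijayawada": 275},
    "Solapur": {"Mumbai": 400, "Hyderabad": 305},
    "Vijayawada": {"Hyderabad": 275},
}

INITIAL_STATE = "Hyderabad"

def actions(state):
    """ACTIONS(s): the Go(city) moves applicable in state s."""
    return list(ROAD_MAP.get(state, {}))

def goal_test(state):
    return state == "Mumbai"

def step_cost(x, a, y):
    """c(x, a, y): cost of taking action a in city x to reach city y."""
    return ROAD_MAP[x][y]

print(actions("Hyderabad"))                  # -> ['Solapur', 'Vijayawada']
print(step_cost("Solapur", "Go", "Mumbai"))  # -> 400
```

A search algorithm then needs nothing more than these four functions to explore the state space.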
2) A description of the possible actions available to the agent: Given a particular state s, ACTIONS(s) returns the set of actions that can be executed in s. We say that each of these actions is applicable in s. For example, from state In (Mumbai), the applicable actions are {Go (Solapur), Go (Osmanabad), Go (Vijayawada)} 3) The goal test: Which determines whether a given state is a goal state. Sometimes there is an explicit set of possible goal states, and the test simply checks whether the given state is one of them. For example, In Indian traveller problem the goal is to reach Mumbai i.e it is a singleton set. {In (Mumbai)} 30 Artificial Intelligence Sometimes the goal is specified by an abstract property rather than an explicitly enumerated set of states. For example, in chess, the goal is to reach a state called ―checkmate,‖ where the opponent‘s king is under attack and can‘t escape. 4) A path cost function that assigns a numeric cost to each path. The problem-solving agent chooses a cost function that reflects its own performance measure. For the agent trying to get to Mumbai, time is of the essence, so the cost of a path might be its length in kilometres. The step cost of taking action a in state x to reach state y is denoted by c (x, a, y). A solution to a problem is an action sequence that leads from the initial state to a goal state. Solution quality is measured by the path cost function, and an optimal solution has the lowest path cost among all solutions. Fig. 3.2.1 Indian traveller map 3.4 EXAMPLES PROBLEMS The problem-solving approach has been applied to a vast array of task environments. We list some known problems like Toy problem and real- world problem. A toy problem is intended to illustrate or exercise various problem-solving methods. This problem gives exact description so it is suitable for researchers to compare the performance of algorithms. A real- world problem is one whose solution people actually care about. 3.4.1 Toy Problems: The first example we examine is the vacuum world. This can be formulated as problem as follows: 31 1) States: Solving Problems by Searching The state is determined by both the agent location and the dirt locations. The agent is in one of two locations, each of which might or might not contain dirt. 2) Initial state: Our vacuum can be in any state of the 8 states shown in the figure 3.2. 3) Actions: Each state has just three actions: Left, Right, and Suck 4) Transition Model: The actions have their expected effects, except that moving Left in the leftmost square, moving Right in the rightmost square, and Sucking in a clean square have no effect. 5) Goal test: No dirt at all locations. 6) Path cost: Each cost step is 1. Fig. 3.2 The state space for the vacuum world. Links denote actions: L = Left, R= Right, S= Suck The second example we examine is 8 puzzle problem. The 8-puzzle consists of eight numbered, movable tiles set in a 3x3 frame. Each tile has a number on it. A tile that is adjacent to the blank space can slide into the space. A game consists of a starting position and a specified goal position. The goal is to transform the starting position into the goal position by sliding the tiles around. 32 Artificial Intelligence This can be formulated as problem as follows: 1) States: Location of eight tiles and blank. 2) Initial state: Any state can be designed as the initial state. 3) Actions: Move blank left, right, up or down. 4) Goal test: This checks whether the states match the goal configuration. 5) Path cost: Each step cost is 1. 
The third example is the 8-queens problem. The goal of the 8-queens problem is to place eight queens on a chessboard such that no queen attacks any other. (A queen attacks any piece in the same row, column, or diagonal.)

This can be formulated as a problem as follows:
1) States: Any arrangement of 0 to 8 queens on the board is a state.
2) Initial state: No queens on the board.
3) Actions: Add a queen to any empty square.
4) Transition model: Returns the board with a queen added to the specified square.
5) Goal test: 8 queens are on the board, none attacked.

Fig. 3.4 Solution to the 8-queens problem

The fourth example is the Water Jug problem. You are given two jugs, a 4-gallon one and a 3-gallon one, a pump with unlimited water which you can use to fill the jugs, and the ground on which water may be poured. Neither jug has any measuring markings on it. How can you get exactly 2 gallons of water into the 4-gallon jug?

State representation and initial state: we represent a state of the problem as a tuple (x, y), where x is the amount of water in the 4-gallon jug and y is the amount in the 3-gallon jug. Note that 0 ≤ x ≤ 4 and 0 ≤ y ≤ 3. Our initial state is (0, 0). The goal state is (2, n), where n may be any amount.

Rules:
1. Fill the 4-gallon jug (if x < 4): (x, y) → (4, y)
2. Fill the 3-gallon jug (if y < 3): (x, y) → (x, 3)
3. Pour some water out of the 4-gallon jug (if x > 0): (x, y) → (x - d, y)
4. Pour some water out of the 3-gallon jug (if y > 0): (x, y) → (x, y - d)
5. Empty the 4-gallon jug on the ground (if x > 0): (x, y) → (0, y)
6. Empty the 3-gallon jug on the ground (if y > 0): (x, y) → (x, 0)
7. Pour water from the 3-gallon jug into the 4-gallon jug until the 4-gallon jug is full (if x + y ≥ 4): (x, y) → (4, y - (4 - x))
8. Pour water from the 4-gallon jug into the 3-gallon jug until the 3-gallon jug is full (if x + y ≥ 3): (x, y) → (x - (3 - y), 3)
9. Pour all the water from the 3-gallon jug into the 4-gallon jug (if x + y ≤ 4): (x, y) → (x + y, 0)
10. Pour all the water from the 4-gallon jug into the 3-gallon jug (if x + y ≤ 3): (x, y) → (0, x + y)
11. Pour the 2 gallons from the 3-gallon jug into the 4-gallon jug: (0, 2) → (2, 0)

3.7 INFORMED SEARCH STRATEGIES

A* search expands the node with the minimal evaluation f(n) = g(n) + h(n), where g(n) is the cost to reach the node and h(n) is the estimated cost from the node to the goal. As an example, consider a search from start state S to goal state G; the set of candidate paths, each with its f-value, after each iteration is:

Iteration 1: {(S → A, 4), (S → G, 10)}
Iteration 2: {(S → A → C, 4), (S → A → B, 7), (S → G, 10)}
Iteration 3: {(S → A → C → G, 6), (S → A → C → D, 11), (S → A → B, 7), (S → G, 10)}
Iteration 4 gives the final result: S → A → C → G, the optimal path, with cost 6.

Complete: The A* algorithm is complete as long as:
• the branching factor is finite, and
• the cost of every action is fixed.

Time complexity: The time complexity of A* depends upon the heuristic function; in the worst case it is O(b^d), where b is the branching factor and d is the depth of the solution.

Space complexity: The space complexity of A* search is O(b^d).

Optimal: A* search is optimal.

3.8 HEURISTIC FUNCTIONS

A heuristic function estimates the cost of an optimal path between a pair of states in a single-agent path-finding problem. Heuristics help in solving problems, even though there is no guarantee that a heuristic will never lead in the wrong direction. A good heuristic function is judged by its efficiency: the more information about the problem it uses, the better it guides the search, but also the more processing time it requires. Some toy problems, such as the 8-puzzle, 8-queens, and tic-tac-toe, can be solved more efficiently with the help of a heuristic function.

Consider the following 8-puzzle problem, where we have a start state and a goal state. Our task is to slide the tiles of the current/start state and place them in the order followed in the goal state. A move slides a tile left, right, up, or down.

Start state:    Goal state:
1 2 3           1 2 3
8 _ 6           8 _ 4
7 5 4           7 6 5

Fig. 3.13 8-puzzle solution
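Two standard candidate heuristics for the 8-puzzle are the number of misplaced tiles and the total Manhattan (city-block) distance of the tiles from their goal squares (exercise 7 below asks for such a function). A sketch, reusing the tuple encoding from the earlier 8-puzzle example and the start and goal states shown above:

# Two standard 8-puzzle heuristics evaluated on the states of Fig. 3.13;
# states are tuples in reading order, with 0 standing for the blank.

START = (1, 2, 3,
         8, 0, 6,
         7, 5, 4)
GOAL  = (1, 2, 3,
         8, 0, 4,
         7, 6, 5)

def h_misplaced(state, goal=GOAL):
    """Number of tiles out of place (the blank is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != g and s != 0)

def h_manhattan(state, goal=GOAL):
    """Sum of horizontal plus vertical distances of each tile from home."""
    total = 0
    for tile in range(1, 9):
        i, j = state.index(tile), goal.index(tile)
        total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
    return total

print(h_misplaced(START))    # 3: tiles 4, 5 and 6 are out of place
print(h_manhattan(START))    # 4

Both functions are admissible: they never overestimate the true number of moves, which is the condition A* needs for optimality.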
3.9 SUMMARY

To solve an Artificial Intelligence problem, we need to follow certain steps. First, define the problem precisely: specify the problem space, the operators for moving within the space, and the starting and goal states. Second, analyse the problem. Finally, select one or more techniques for representing knowledge and for problem solving, and apply them to the problem.

A problem consists of five parts: the initial state, a set of actions, a transition model describing the results of those actions, a goal test function, and a path cost function. The environment of the problem is represented by a state space. A path through the state space from the initial state to a goal state is a solution.

Uninformed search methods have access only to the problem definition. The basic algorithms are as follows:

• Breadth-first search expands the shallowest nodes first; it is complete, optimal for unit step costs, but has exponential space complexity.
• Uniform-cost search expands the node with lowest path cost, g(n), and is optimal for general step costs.
• Depth-first search expands the deepest unexpanded node first. It is neither complete nor optimal, but has linear space complexity. Depth-limited search adds a depth bound.
• Iterative deepening search calls depth-first search with increasing depth limits until a goal is found. It is complete, optimal for unit step costs, has time complexity comparable to breadth-first search, and has linear space complexity.
• Bidirectional search can enormously reduce time complexity, but it is not always applicable and may require too much space.

Informed search methods may have access to a heuristic function h(n) that estimates the cost of a solution from n.

• The generic best-first search algorithm selects a node for expansion according to an evaluation function.
• Greedy best-first search expands nodes with minimal h(n). It is not optimal but is often efficient.
• A* search expands nodes with minimal f(n) = g(n) + h(n). A* is complete and optimal, provided that h(n) is admissible (for TREE-SEARCH) or consistent (for GRAPH-SEARCH).

3.10 EXERCISES

1. Compare uninformed search with informed search.
2. What are the problems with hill climbing, and how can they be solved?
3. State the best-first search algorithm.
4. Explain the advantages and disadvantages of DFS and BFS.
5. You have two jugs, measuring 8 gallons and 5 gallons, and a water faucet. You can fill the jugs up or empty them out, from one to another or onto the ground. You need to measure out exactly three gallons.
6. Three towers labelled A, B and C are given, in which tower A has a finite number of disks (n) of decreasing size. The task of the game is to move the disks from tower A to tower C using tower B as an auxiliary. The rules of the game are as follows:
   1. Only one disk can be moved among the towers at any given time.
   2. Only the "top" disk can be removed.
   3. No large disk can sit on a smaller disk.
   Solve the problem and describe the following terms: a) State b) Initial state c) Goal state d) Operators e) Solution
7. Write down the heuristic function for the 8-puzzle problem and solve the problem.
8. What is the use of an online search agent in an unknown environment?
9. Define the following terms in reference to hill climbing: a) Local maximum b) Plateau c) Ridge
10. Using a suitable example, illustrate the steps of A* search. Why is A* search better than best-first search?
11. Describe the A* search algorithm. Prove that A* is complete and optimal.
3.11 BIBLIOGRAPHY

• Deva Rahul, Artificial Intelligence: A Rational Approach, Shroff, 2014.
• Khemani Deepak, A First Course in Artificial Intelligence, McGraw Hill, 2013.
• Chopra Rajiv, Artificial Intelligence: A Practical Approach, S. Chand, 2012.
• Russell Stuart and Peter Norvig, Artificial Intelligence: A Modern Approach, Pearson, 2010.
• Rich Elaine, et al., Artificial Intelligence, Tata McGraw Hill, 2009.

*****

4
BEYOND CLASSICAL SEARCH

Unit Structure
4.1 Objectives
4.2 Introduction
4.3 Local Search Algorithms
4.4 Searching with Non-Deterministic Actions
4.5 Searching with Partial Observations
4.6 Online Search Agents and Unknown Environments
4.7 Summary
4.8 Unit End Questions
4.9 Bibliography

4.1 OBJECTIVES

After this chapter, you should be able to:
• Understand local search algorithms and optimization problems.
• Understand searching techniques for non-deterministic actions.
• Become familiar with searching under partial observations.
• Understand online search agents and unknown environments.

4.2 INTRODUCTION

This chapter covers algorithms that perform purely local search in the state space, evaluating and modifying one or more current states rather than systematically exploring paths from an initial state. These algorithms are suitable for problems in which all that matters is the solution state, not the path cost to reach it. The family of local search algorithms includes methods inspired by statistical physics (simulated annealing) and evolutionary biology (genetic algorithms).

The search algorithms we have seen so far concentrate on the path through which the goal is reached. When a problem does not demand the path to the solution and expects only the final configuration of the solution, we have a different type of problem to solve. A local search algorithm operates on a single current state instead of multiple paths and generally moves only to neighbours of that state.

4.3 LOCAL SEARCH ALGORITHMS

The systematic search algorithms of the previous chapter keep one or more paths in memory and record which alternatives have been explored at each point along the path; when a goal is found, the path to that goal also constitutes a solution to the problem. In many problems, however, the path to the goal is irrelevant. Local search algorithms operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. They use very little memory, usually a constant amount, and they can often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable.
They are also useful for solving pure optimization problems, in which the aim is to find the best state according to an objective function.

Hill climbing is a technique for solving certain optimization problems. In this technique, we start with a sub-optimal solution and improve the solution repeatedly until some condition is maximized. Starting with a sub-optimal solution is compared to starting to walk up a hill, and finally maximizing the condition is compared to reaching the top of the hill.

Algorithm:
1) Evaluate the initial state. If it is the goal state, then return it and quit.
2) Loop until a solution is found or there are no new operators left.
3) Select and apply a new operator.
4) Evaluate the new state:
   • If it is the goal state, then quit.
   • If it is better than the current state, then make it the new current state.
   • If it is not better than the current state, then go to step 2.

4.3.1 Hill Climbing Search:

Hill climbing search is so called because of the way nodes are selected for expansion: at each point on the search path, the successor node that appears to lead most quickly to the top of the hill is selected for exploration. The search terminates when it reaches a "peak" where no neighbour has a higher value. The algorithm does not maintain a search tree, so the data structure for the current node need only record the state and the value of the objective function. Hill climbing is sometimes called greedy local search because it grabs a good neighbour state without thinking ahead about where to go next. Unfortunately, hill climbing suffers from the following problems, illustrated in Fig. 4.1.

Fig. 4.1 Hill climbing

• Local maxima: A local maximum is a state which is better than each of its neighbour states, but lower than some other state elsewhere in the landscape. Local maxima are also called foot-hills; see Fig. 4.2.

Fig. 4.2 Local maximum or foot-hill

Solution to this problem: Backtracking can be a solution to the local maximum problem. Keep a list of promising paths, so that the algorithm can backtrack in the search space and explore other paths as well.

• Plateau: A plateau is a flat area of the search space in which a whole set of neighbouring states (nodes) have the same heuristic value. A plateau is shown in Fig. 4.3.

Fig. 4.3 Plateau (flat maximum)

Solution to this problem: Take bigger or smaller steps while searching. Randomly selecting a state far away from the current state makes it possible for the algorithm to reach a non-plateau region; alternatively, apply small steps several times in the same direction.

• Ridge: A ridge is an area which is higher than the surrounding states but cannot be reached in a single move; it is an area of the search space which is higher than the surrounding areas and that itself has a slope. A ridge is shown in Fig. 4.4.

Fig. 4.4 Ridge

Solution to this problem: Trying different paths at the same time is a solution. With the use of bidirectional search, or by moving in different directions, we can mitigate this problem.

Advantages of hill climbing: It can be used in continuous as well as discrete domains.

Disadvantages of hill climbing: It is not an efficient method, and it is not suitable for problems in which the value of the heuristic function drops off suddenly when the solution may be in sight. It is a local method: it looks at the immediate solution and decides on the next step to be taken, rather than exploring all consequences before making a move.
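The loop in the algorithm above can be sketched on a toy one-dimensional landscape; the landscape values and the neighbour function here are invented purely for illustration.

# Steepest-ascent hill climbing on a toy landscape. States are the
# indices 0..10 and LANDSCAPE[x] is the objective value of state x.

LANDSCAPE = [0, 2, 4, 3, 2, 5, 7, 9, 8, 6, 1]

def neighbors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n < len(LANDSCAPE)]

def value(x):
    return LANDSCAPE[x]

def hill_climbing(current):
    """Move to the best neighbour until no neighbour is higher (a peak)."""
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current           # a peak, possibly only a local maximum
        current = best               # only the state and its value are kept

print(hill_climbing(0))    # 2: the climb stalls on a foot-hill
print(hill_climbing(5))    # 7: here the peak is the global maximum

Started from state 0, the climb stalls on the foot-hill at state 2, which is exactly the local-maximum problem described above.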
4.3.2 Simulated Annealing:

A hill-climbing algorithm that never makes a move towards a lower value is guaranteed to be incomplete, because it can get stuck on a local maximum. If instead the algorithm performs a pure random walk, moving to a randomly chosen successor, it may be complete but is very inefficient. Simulated annealing is an algorithm which aims to yield both efficiency and completeness.

In metallurgy, annealing is the process of heating a metal or glass to a high temperature and then cooling it gradually, which allows the material to reach a low-energy crystalline state. The same idea is used in simulated annealing: the algorithm picks a random move instead of the best move. If the random move improves the state, it is accepted; otherwise, the move is accepted only with some probability less than 1, so the algorithm sometimes moves downhill and tries another path.

Algorithm:
1. Evaluate the initial state. If it is also a goal state, then return it and quit. Otherwise, continue with the initial state as the current state.
2. Initialize BEST-SO-FAR to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or until there are no new operators left to be applied in the current state.
   a) Select an operator that has not yet been applied to the current state and apply it to produce a new state.
   b) Evaluate the new state. Compute ΔE = (value of current state) − (value of new state).
      • If the new state is a goal state, then return it and quit.
      • If it is not a goal state but is better than the current state, then make it the current state, and set BEST-SO-FAR to this new state if it is the best seen so far.
      • If it is not better than the current state, then make it the current state with probability e^(−ΔE/T).
   c) Revise T as necessary according to the annealing schedule.
5. Return BEST-SO-FAR as the answer.
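A compact sketch of this idea, driven directly by the temperature schedule, is shown below on the same toy landscape as the hill-climbing example. The geometric cooling schedule (T multiplied by 0.95 each step) is an assumption; any schedule that lowers T gradually will do.

import math
import random

# Simulated annealing on the toy landscape used above. A worse move
# (delta_e > 0) is accepted with probability e^(-delta_e / T), which is
# what lets the search escape the foot-hill that traps hill climbing.

LANDSCAPE = [0, 2, 4, 3, 2, 5, 7, 9, 8, 6, 1]

def neighbors(x):
    return [n for n in (x - 1, x + 1) if 0 <= n < len(LANDSCAPE)]

def simulated_annealing(current, t=10.0, t_min=0.01, cooling=0.95):
    best_so_far = current
    while t > t_min:
        nxt = random.choice(neighbors(current))          # a random move
        delta_e = LANDSCAPE[current] - LANDSCAPE[nxt]    # ΔE as defined above
        if delta_e <= 0 or random.random() < math.exp(-delta_e / t):
            current = nxt
        if LANDSCAPE[current] > LANDSCAPE[best_so_far]:
            best_so_far = current
        t *= cooling                                     # annealing schedule
    return best_so_far

random.seed(0)
print(simulated_annealing(0))   # usually 7, the global maximum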
