AI: ARTIFICIAL INTELLIGENCE

Full Transcript

TYBSC-CS 2023-24, Prepared by Sia Purswani

SYLLABUS

UNIT I: What Is AI: Foundations, History and State of the Art of AI. Intelligent Agents: Agents and Environments, Nature of Environments, Structure of Agents. Problem Solving by Searching: Problem-Solving Agents, Example Problems, Searching for Solutions, Uninformed Search Strategies, Informed (Heuristic) Search Strategies, Heuristic Functions.

UNIT II: Learning from Examples: Forms of Learning, Supervised Learning, Learning Decision Trees, Evaluating and Choosing the Best Hypothesis, Theory of Learning, Regression and Classification with Linear Models, Artificial Neural Networks, Nonparametric Models, Support Vector Machines, Ensemble Learning, Practical Machine Learning.

UNIT III: Learning Probabilistic Models: Statistical Learning, Learning with Complete Data, Learning with Hidden Variables: The EM Algorithm. Reinforcement Learning: Passive Reinforcement Learning, Active Reinforcement Learning, Generalization in Reinforcement Learning, Policy Search, Applications of Reinforcement Learning.

Developers use artificial intelligence to perform tasks more efficiently than is possible manually, to connect with customers, to identify patterns, and to solve problems.

HISTORY OF ARTIFICIAL INTELLIGENCE

Artificial Intelligence is neither a new term nor a new technology for researchers; it is much older than you might imagine. There are even myths of mechanical men in ancient Greek and Egyptian mythology. The following milestones trace the journey of AI from its origins to the present day.

The birth of Artificial Intelligence (1952-1956)
○ Year 1955: Allen Newell and Herbert A. Simon created the "first artificial intelligence program", named the "Logic Theorist". This program proved 38 of the first 52 theorems of Principia Mathematica and found new, more elegant proofs for some of them.
○ Year 1956: The term "Artificial Intelligence" was first adopted by the American computer scientist John McCarthy at the Dartmouth Conference, where AI was coined as an academic field for the first time. Around that time, high-level programming languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI was very high.
○ Year 1980: AI came back with "expert systems": programs that emulate the decision-making ability of a human expert.
○ Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.
○ Year 2012: Google launched the Android feature "Google Now", which was able to provide information to the user as a prediction.
○ Today AI has developed to a remarkable level. Deep learning, big data, and data science are booming, and companies such as Google, Facebook, IBM, and Amazon are working with AI and creating remarkable products. The future of Artificial Intelligence is inspiring.

STATE OF THE ART

Technology, becoming more advanced every day, has given researchers new tools or, in some cases, upgraded existing ones. These tools have made possible important breakthroughs in many fields. Note that the AI landscape is rapidly evolving, and there may have been further progress since these notes were prepared. Some key areas that represent the state of the art include:

1. Speech Recognition
2. Game Playing
3. Spam Fighting: Each day, learning algorithms classify over a billion messages as spam, saving recipients the time it would take to delete unwanted messages.
4. Logistics Planning: AI planning techniques have generated in hours plans that would have taken weeks with older methods.
5. Robotics
6. Machine Translation: Computer programs automatically translate text from one language into another.

AGENT

❖ An agent is anything that can be viewed as perceiving its environment through its sensors and acting upon that environment through actuators.
❖ A human agent has eyes, ears, and other organs for sensors, and hands, legs, and so on for actuators.
❖ A robotic agent might have cameras and infrared sensors, and various motors for actuators.
❖ A software agent receives keystrokes, file contents, and network packets as sensory input, and acts on the environment by displaying output on the screen, writing files, and sending network packets.

PERCEPT

An agent's percept sequence is the complete history of everything the agent has ever perceived. In general, an agent's choice of action at any instant can depend on the entire percept sequence observed to date, but not on anything it hasn't perceived.

AGENT TERMINOLOGY

1. Performance Measure of Agent − The criteria that determine how successful an agent is.
2. Behavior of Agent − The action that the agent performs after any given sequence of percepts.
3. Percept − The agent's perceptual input at a given instant.
4. Percept Sequence − The history of everything the agent has perceived to date.
5. Agent Function − A map from the percept sequence to an action. Internally, the agent function for an artificial agent is implemented by an agent program.

VACUUM CLEANER PROBLEM IN ARTIFICIAL INTELLIGENCE

The vacuum cleaner problem is a well-known search problem for an agent in Artificial Intelligence. In this problem, the vacuum cleaner is our agent.
It is a goal-based agent, and the goal of this agent, the vacuum cleaner, is to clean the whole area. In the classical vacuum cleaner problem we have two rooms and one vacuum cleaner. There is dirt in both rooms, and it is to be cleaned. The vacuum cleaner is present in one of the rooms. We have to reach a state in which both rooms are clean and dust free. There are eight possible states in the vacuum cleaner problem.

Here, states 1 and 2 are the initial states and states 7 and 8 are the final (goal) states. This means that initially both rooms are full of dirt and the vacuum cleaner may reside in either room; to reach a goal state, both rooms must be clean, with the vacuum cleaner again in either of the two rooms. In general a vacuum cleaner could perform the functions move left, move right, move forward, move backward, and suck dust; but as there are only two rooms in our problem, the vacuum cleaner needs only these functions: move left, move right, and suck.

CONCEPT OF RATIONALITY

A rational agent is one that does the right thing: every entry in the table for the agent function is filled out correctly. Rationality depends on four things:
1. The performance measure that defines the criterion of success
2. The agent's prior knowledge of the environment
3. The actions that the agent can perform
4. The agent's percept sequence to date

NATURE OF ENVIRONMENTS

We need to describe the PEAS for the "bidding on an item at an auction" activity. PEAS stands for Performance measure, Environment, Actuators, and Sensors. We shall see what these terms mean individually.

Performance measure: the parameters used to measure the performance of the agent, i.e., how well the agent is carrying out a particular assigned task.
Environment: the task environment of the agent. The agent interacts with its environment: it takes perceptual input from the environment and acts on the environment using actuators.
Actuators: the means of performing calculated actions on the environment. For a human agent, hands and legs are actuators.
Sensors: the means of taking input from the environment. For a human agent, ears, eyes, and nose are sensors.

PEAS DESCRIPTION OF THE TASK ENVIRONMENT FOR AN AUTOMATED TAXI

Agent Type: Taxi driver
Performance Measure: Safe, fast, legal, comfortable trip; maximize profits
Environment: Roads, other traffic, pedestrians, customers
Actuators: Steering, accelerator, brake, signal, horn, display
Sensors: Cameras, sonar, GPS, accelerometer, engine sensors, keyboard

PROPERTIES OF TASK ENVIRONMENTS

1. Fully Observable vs. Partially Observable
If an agent's sensors give it access to the complete state of the environment at each point in time, the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment may be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.

2. Single Agent vs. Multiagent
The distinction between single-agent and multiagent environments may seem simple enough. For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment, whereas an agent playing chess is in a two-agent environment. Chess is thus a competitive multiagent environment.

3. Deterministic vs. Stochastic
If the next state of the environment is completely determined by the current state and the action executed by the agent, we say that the environment is deterministic; otherwise it is stochastic. Most real situations are so complex that it is impossible to keep track of all the unobserved aspects; for practical purposes they must be treated as stochastic. Example: taxi driving is clearly stochastic, because one can never predict the behavior of traffic exactly.

4. Episodic vs. Sequential
In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action. The next episode does not depend on the actions taken in previous episodes. In a sequential environment, the current decision could affect all future decisions. Chess and taxi driving are sequential: in both cases, short-term actions can have long-term consequences.

5. Static vs. Dynamic
If the environment can change while the agent is deliberating, we say the environment is dynamic; otherwise it is static. Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action. Dynamic environments, on the other hand, are continuously asking the agent what it wants to do.

6. Known vs. Unknown
In a known environment, the outcomes of all actions are given. If the environment is unknown, the agent will have to learn how it works in order to make good decisions.

STRUCTURE OF AGENTS

The job of AI is to design an agent program that implements the agent function: the mapping from percepts to actions.

    agent = architecture + program

The program we choose has to be appropriate for the architecture. The architecture makes the percepts from the sensors available to the program, runs the program, and feeds the program's action choices to the actuators as they are generated.
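As a minimal sketch of the percept-to-action mapping, the hypothetical TableDrivenAgent below (illustrative names, not a library API) keeps the percept history and looks each sequence up in an explicit table; this also shows why the table approach quickly becomes infeasible, since the table must have one entry per possible percept sequence.

```python
# Sketch of an agent program: the agent function maps the percept sequence
# to an action; a table-driven program makes that mapping an explicit table.

class TableDrivenAgent:
    def __init__(self, table):
        self.table = table       # maps percept sequences (tuples) to actions
        self.percepts = []       # percept history observed so far

    def program(self, percept):
        """Append the new percept, then look up an action for the sequence."""
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

# Tiny two-room vacuum world table (only a few entries shown):
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Dirty"), ("A", "Clean")): "Right",
}
agent = TableDrivenAgent(table)
print(agent.program(("A", "Dirty")))   # Suck
print(agent.program(("A", "Clean")))   # Right
```

Even for this two-square world, a complete table would need an entry for every percept sequence of every length, which is why practical agent programs compute actions instead of storing them.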
There are many examples of agents in artificial intelligence:

Intelligent personal assistants: agents designed to help users with various tasks, such as scheduling appointments, sending messages, and setting reminders. Examples include Siri, Alexa, and Google Assistant.

Gaming agents: agents designed to play games, either against human opponents or against other agents. Examples include chess-playing agents.

Traffic management agents: agents designed to manage traffic flow in cities. They can monitor traffic patterns, adjust traffic lights, and reroute vehicles to minimize congestion. Examples include the systems used in smart cities around the world.

TYPES OF AGENTS

1. Simple Reflex Agents

The simplest kind of agent is the simple reflex agent. An agent that performs actions based only on the current input, ignoring all previous inputs, is called a simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history. For example, the vacuum agent is a simple reflex agent, because its decision is based only on the current location and on whether that location contains dirt. Simple reflex behaviors occur even in more complex environments. They have the admirable property of being simple, but they turn out to be of limited intelligence. A simple reflex agent works on condition-action rules, which map the current state to an action; a room-cleaner agent, for instance, acts only if there is dirt in the room.

Fig: Schematic diagram of a simple reflex agent

Problems with the simple reflex agent design approach:
○ They have very limited intelligence.
○ They have no knowledge of non-perceptual parts of the current state.
○ The rule tables are mostly too big to generate and to store.
○ They are not adaptive to changes in the environment.

2. Model-Based Reflex Agents

○ A partially observable environment cannot be handled well by a simple reflex agent, because such an agent does not keep track of the previous state.
○ So one more type of agent was created, known as the model-based reflex agent.
○ A model-based agent has two important factors:
  a. Model: knowledge about "how things happen in the world"; this is why it is called a model-based agent.
  b. Internal State: a representation of the current state based on the percept history.
○ These agents have the model, which is knowledge of the world, and they perform actions based on that model.

Fig: A model-based reflex agent

From the figure it can be seen that once the sensors take input from the environment, the agent checks the current state of the environment. After that it checks the previous state, which shows how the world is evolving and how the environment was affected by the action the agent took at an earlier stage. Once this is verified, an action is decided based on the condition-action rules. Knowledge about how the world changes is called a model of the world, and an agent that uses such a model while working is called a model-based agent.

3. Goal-Based Agents

○ Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
○ The agent needs to know its goal, which describes desirable situations.
○ Goal-based agents expand the capabilities of model-based agents by adding "goal" information.
○ They choose an action so that they can achieve the goal.
○ These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved.
○ Such consideration of different scenarios is called searching and planning, and it makes an agent proactive.
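To contrast with the model-based and goal-based designs just discussed, the condition-action rules of the simple reflex vacuum agent described earlier can be written directly as code. This is a minimal sketch: the percept format (location, status) and the action names follow the two-room vacuum world above.

```python
# Simple reflex vacuum agent: the chosen action depends only on the current
# percept (location, dirt status), never on the percept history.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":     # condition-action rule: dirt here -> suck
        return "Suck"
    elif location == "A":     # clean in room A -> move to the other room
        return "Right"
    else:                     # clean in room B -> move back
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note that the function keeps no state at all; that is exactly why it cannot cope well with a partially observable environment, as the model-based discussion above explains.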
Fig: A model-based, goal-based agent

4. Utility-Based Agents

○ These agents are similar to goal-based agents but add an extra component of utility measurement, which provides a measure of success in a given state.
○ A utility-based agent acts based not only on goals but also on the best way to achieve them.
○ Utility-based agents are useful when there are multiple possible alternatives and the agent has to choose the best action.
○ The utility function maps each state to a real number, indicating how efficiently each action achieves the goals.

Fig: A model-based, utility-based agent

5. Learning Agents

○ A learning agent in AI is an agent that can learn from its past experiences; that is, it has learning capabilities.
○ It starts acting with basic knowledge and is then able to act and adapt automatically through learning.
○ A learning agent has four main conceptual components:
  a. Learning element: responsible for making improvements by learning from the environment.
  b. Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
  c. Performance element: responsible for selecting external actions.
  d. Problem generator: responsible for suggesting actions that will lead to new and informative experiences.
○ Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.

Fig: A general learning agent

PROBLEM-SOLVING AGENTS

Reflex agents are known as the simplest agents because they directly map states to actions. Unfortunately, these agents fail to operate in environments where the mapping is too large to store and learn. Goal-based agents, on the other hand, consider future actions and their desired outcomes.
Here we discuss one type of goal-based agent known as a problem-solving agent, which uses an atomic representation, with no internal structure of states visible to the problem-solving algorithms.

A problem-solving agent performs precisely by defining problems and their possible solutions. According to psychology, "problem solving refers to a state where we wish to reach a definite goal from a present state or condition." According to computer science, problem solving is the part of artificial intelligence that encompasses a number of techniques, such as algorithms and heuristics, to solve a problem. A problem-solving agent is therefore a goal-driven agent that focuses on satisfying the goal.

STEPS PERFORMED BY A PROBLEM-SOLVING AGENT

Goal Formulation: the first and simplest step in problem solving. It organizes the steps/sequence required to formulate one goal out of multiple goals, as well as the actions needed to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure.

Problem Formulation: the most important step of problem solving, which decides what actions should be taken to achieve the formulated goal. Five components are involved in problem formulation:
1. Initial State: the starting state of the agent on the way to its goal.
2. Actions: a description of the possible actions available to the agent.
3. Transition Model: a description of what each action does.
4. Goal Test: determines whether a given state is a goal state.
5. Path Cost: assigns a numeric cost to each path. The problem-solving agent selects a cost function that reflects its performance measure. Remember, an optimal solution has the lowest path cost among all solutions.

EXAMPLE PROBLEMS

The problem-solving approach has been applied to a vast array of task environments.
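The five formulation components listed above can be sketched in code for the two-room vacuum world from earlier. This is an illustrative sketch, not a library API; a state is written as (location, dirt_in_A, dirt_in_B).

```python
# The five problem-formulation components for the two-room vacuum world.

INITIAL_STATE = ("A", True, True)        # 1. initial state: both rooms dirty
ACTIONS = ["Left", "Right", "Suck"]      # 2. actions available in every state

def transition(state, action):           # 3. transition model
    loc, dirt_a, dirt_b = state
    if action == "Left":
        return ("A", dirt_a, dirt_b)
    if action == "Right":
        return ("B", dirt_a, dirt_b)
    if action == "Suck":
        return (loc,
                False if loc == "A" else dirt_a,
                False if loc == "B" else dirt_b)
    return state

def goal_test(state):                    # 4. goal test: no dirt anywhere
    return not state[1] and not state[2]

def path_cost(path):                     # 5. path cost: one unit per step
    return len(path)

# A three-step solution from the initial state:
s = INITIAL_STATE
for a in ["Suck", "Right", "Suck"]:
    s = transition(s, a)
print(goal_test(s))   # True
```

The solution [Suck, Right, Suck] has path cost 3, and no cheaper action sequence reaches a goal state from this initial state.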
Basically, there are two types of problems:

Toy Problem: a concise and exact description of a problem, used by researchers to compare the performance of algorithms. A toy problem is intended to illustrate various problem-solving methods.

Real-World Problem: a problem based on the real world that requires a solution. Unlike a toy problem, it does not have a single agreed description, but we can give a general formulation of the problem.

8-PUZZLE PROBLEM

Here we have a 3x3 board with movable tiles numbered 1 to 8 and one blank space. A tile adjacent to the blank space can slide into that space. The objective is to reach the specified goal state shown in the figure; our task is to convert the current state into the goal state by sliding tiles into the blank space.

The problem formulation is as follows:
States: a state describes the location of each numbered tile and the blank tile.
Initial State: we can start from any state as the initial state.
Actions: the actions of the blank space are defined, i.e., left, right, up, or down.
Transition Model: returns the resulting state for a given state and action.
Goal Test: identifies whether we have reached the goal state.
Path Cost: the number of steps in the path, where the cost of each step is 1.

Note: The 8-puzzle problem is a type of sliding-block problem, which is used for testing new search algorithms in artificial intelligence.

8-QUEENS PROBLEM

The aim of this problem is to place eight queens on a chessboard so that no queen attacks another. A queen attacks another queen if they share the same row, column, or diagonal. From the figure, we can understand the problem as well as a correct solution.
Notice in the figure that each queen is placed on the chessboard so that no other queen lies in the same row, column, or diagonal.

The following steps are involved in this formulation:
States: any arrangement of 0 to 8 queens on the chessboard.
Initial State: an empty chessboard.
Actions: add a queen to any empty square.
Transition Model: returns the chessboard with the queen added to a square.
Goal Test: checks whether 8 queens are placed on the chessboard with no queen attacked.
Path Cost: there is no need for a path cost, because only final states count.

In this formulation, there are approximately 1.8 x 10^14 possible sequences to investigate.

REAL-WORLD PROBLEMS

Traveling Salesperson Problem (TSP): a touring problem in which the salesperson must visit each city exactly once. The objective is to find the shortest tour.

VLSI Layout Problem: millions of components and connections must be positioned on a chip so as to minimize area, circuit delays, and stray capacitances, and to maximize manufacturing yield. The layout problem splits into two parts:
Cell layout: the primitive components of the circuit are grouped into cells, each performing a specific function. Each cell has a fixed shape and size, and the task is to place the cells on the chip without overlap.
Channel routing: finds a specific route for each wire through the gaps between the cells.

SEARCHING FOR SOLUTIONS

A solution is an action sequence, so search algorithms work by considering various possible action sequences. The possible action sequences starting at the initial state form a search tree with the initial state at the root; the branches are actions and the nodes correspond to states in the state space of the problem.
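The goal test from the 8-queens formulation above can be sketched as a short function. The encoding is an assumption for illustration: a candidate is a list `cols` where `cols[r]` is the column of the queen in row r, so placing one queen per row already rules out row attacks.

```python
# Goal test for 8-queens: no two queens share a column or a diagonal.
# (One queen per row by construction, so rows need no check.)

def no_attacks(cols):
    n = len(cols)
    for r1 in range(n):
        for r2 in range(r1 + 1, n):
            if cols[r1] == cols[r2]:                 # same column
                return False
            if abs(cols[r1] - cols[r2]) == r2 - r1:  # same diagonal
                return False
    return True

print(no_attacks([0, 4, 7, 5, 2, 6, 1, 3]))  # True: a known valid solution
print(no_attacks([0, 1, 2, 3, 4, 5, 6, 7]))  # False: all on one diagonal
```

Two queens lie on a common diagonal exactly when the difference of their columns equals the difference of their rows, which is what the second condition checks.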
UNINFORMED SEARCH STRATEGIES

Uninformed search is a class of general-purpose search algorithms that operate in a brute-force way. Uninformed search algorithms have no information about the state or search space other than how to traverse the tree, so this family is also called blind search. The main types of uninformed search algorithms are:
1. Breadth-First Search
2. Depth-First Search
3. Uniform-Cost Search
4. Depth-Limited Search

BREADTH-FIRST SEARCH

BFS is a simple strategy in which the root node is expanded first, then all the successors of the root node, then their successors, and so on. All nodes at a given depth in the search tree are expanded before any nodes at the next level. BFS is an instance of the general graph-search algorithm in which the shallowest unexpanded node is chosen for expansion. This is achieved very simply by using a FIFO queue for the frontier, so new nodes go to the back of the queue.

Starting from the root, all the nodes at a particular level are visited first, and then the nodes of the next level are traversed, until all nodes have been visited. To do this, a queue is used: the unvisited neighbours of the current node are pushed into the queue and marked visited, and the current node is popped from the queue.

Step 1: Initially the queue and visited array are empty.
Step 2: Push node 0 into the queue and mark it visited.
Step 3: Remove node 0 from the front of the queue, visit its unvisited neighbours, and push them into the queue.
Step 4: Remove node 1 from the front of the queue, visit its unvisited neighbours, and push them into the queue.
Step 5: Remove node 2 from the front of the queue, visit its unvisited neighbours, and push them into the queue.
Step 6: Remove node 3 from the front of the queue, visit its unvisited neighbours, and push them into the queue. As every neighbour of node 3 has been visited, move on to the node now at the front of the queue.
Step 7: Remove node 4 from the front of the queue, visit its unvisited neighbours, and push them into the queue. As every neighbour of node 4 has been visited, move on to the node now at the front of the queue.

DEPTH-LIMITED SEARCH

A depth-limited search algorithm is similar to depth-first search, but with a predetermined depth limit. Depth-limited search overcomes the drawback of infinite paths in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no successors. Depth-limited search can terminate with two kinds of failure:
○ Standard failure value: indicates that the problem has no solution.
○ Cutoff failure value: indicates that there is no solution within the given depth limit.
Advantages: depth-limited search is memory efficient.
Disadvantages:
○ Depth-limited search suffers from incompleteness.
○ It may not be optimal if the problem has more than one solution.

UNIFORM-COST SEARCH

Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node with the lowest cumulative cost. Uniform-cost search expands nodes according to their path cost from the root node. It can be used on any graph or tree where an optimal-cost solution is required. Uniform-cost search is implemented with a priority queue, which gives maximum priority to the lowest cumulative cost.
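The priority-queue idea can be sketched as follows. The weighted graph is an illustrative example of my own choosing; the algorithm itself is the uniform-cost search described above, popping the node with the lowest cumulative cost first.

```python
# Uniform-cost search: a priority queue ordered by cumulative path cost g(n).
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]      # (cumulative cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # lowest-cost node first
        if node == goal:
            return path, cost
        if node in explored:
            continue
        explored.add(node)
        for succ, step_cost in graph[node]:
            if succ not in explored:
                heapq.heappush(frontier,
                               (cost + step_cost, succ, path + [succ]))
    return None, float("inf")

# Illustrative weighted graph: the cheapest S-to-C route is S -> A -> B -> C.
graph = {
    "S": [("A", 1), ("B", 5)],
    "A": [("B", 2), ("C", 4)],
    "B": [("C", 1)],
    "C": [],
}
path, cost = uniform_cost_search(graph, "S", "C")
print(path, cost)   # ['S', 'A', 'B', 'C'] 4
```

Note that the direct-looking route S, A, C costs 5, yet the search correctly returns the cost-4 route through B, because nodes are expanded strictly in order of cumulative cost.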
Uniform-cost search is equivalent to BFS when the path cost of every edge is the same.
Advantages: uniform-cost search is optimal, because at every step the path with the least cost is chosen.
Disadvantages: it does not care about the number of steps involved in the search, only about path cost, so the algorithm may get stuck in an infinite loop.

INFORMED (HEURISTIC) SEARCH STRATEGIES

A heuristic is a technique used to solve a problem faster than classical methods, or to find an approximate solution when classical methods cannot. Heuristics are problem-solving techniques that yield practical, quick solutions; they are strategies derived from past experience with similar problems. Heuristics use practical methods and shortcuts to produce solutions that may or may not be optimal but are sufficient within a limited timeframe. They are used when a short-term solution is required: when facing complex situations with limited resources and time, heuristics support quick decisions through shortcuts and approximate calculations. Most heuristic methods involve mental shortcuts based on past experience.

A* SEARCH ALGORITHM

A* search is the most widely known form of best-first search. It uses the heuristic function h(n) and the cost g(n) of reaching node n from the start state. It finds the shortest path through the search space using the heuristic function, expands fewer nodes of the search tree, and gives optimal results faster.

Algorithm of A* search:
Step 1: Place the starting node in the OPEN list.
Step 2: Check whether the OPEN list is empty. If it is, return failure and stop.
Step 3: Select the node from the OPEN list with the smallest value of the evaluation function (g + h). If node n is the goal node, return success and stop.
Step 4: Otherwise, expand node n, generate all of its successors, and put n into the CLOSED list. For each successor n', check whether n' is already in the OPEN or CLOSED list. If not, compute the evaluation function for n' and place it into the OPEN list.
Step 5: Else, if node n' is already in OPEN or CLOSED, attach it to the back pointer that reflects the lowest g(n') value.
Step 6: Return to Step 2.

In the A* search algorithm we use the search heuristic as well as the cost to reach the node, so we can combine both costs as follows; this sum is called the fitness number:

    f(n) = g(n) + h(n)

Example: traverse a given graph using the A* algorithm. The heuristic value of each state is given in a table, so we calculate f(n) for each state using the formula f(n) = g(n) + h(n), where g(n) is the cost of reaching that node from the start state. Here we use the OPEN and CLOSED lists.

HEURISTIC FUNCTION

A heuristic is a function used in informed search to find the most promising path. It takes the current state of the agent as input and produces an estimate of how close the agent is to the goal. The heuristic method might not always give the best solution, but it is guaranteed to find a good solution in reasonable time. A heuristic function h(n) estimates how close a state is to the goal, i.e., the cost of an optimal path from that state to a goal state. The value of the heuristic function is always non-negative.
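The A* procedure above can be sketched with a priority queue ordered by f(n) = g(n) + h(n). The graph and heuristic table below are illustrative assumptions (the figure from the original slides is not reproduced here); the heuristic values are chosen to be consistent, so each node needs to be expanded at most once.

```python
# A* search: expand the node with the smallest f(n) = g(n) + h(n).
import heapq

def a_star(graph, h, start, goal):
    open_list = [(h[start], 0, start, [start])]  # (f, g, node, path)
    closed = set()
    while open_list:
        f, g, node, path = heapq.heappop(open_list)  # smallest f first
        if node == goal:
            return path, g
        if node in closed:
            continue
        closed.add(node)
        for succ, cost in graph[node]:
            if succ not in closed:
                g2 = g + cost
                heapq.heappush(open_list,
                               (g2 + h[succ], g2, succ, path + [succ]))
    return None, float("inf")

# Illustrative graph and heuristic estimates of the remaining cost to G:
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 12)],
    "B": [("G", 5)],
    "G": [],
}
h = {"S": 7, "A": 6, "B": 4, "G": 0}
path, cost = a_star(graph, h, "S", "G")
print(path, cost)   # ['S', 'A', 'B', 'G'] 8
```

With h(n) = 0 for every node, the same code behaves like uniform-cost search; the heuristic simply biases the expansion order toward nodes that look closer to the goal.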
