AI in 6 Hours PDF
Summary
This document provides an overview of artificial intelligence (AI), covering topics such as problem-solving methods, knowledge representation, and applications. It describes different types of AI, their capabilities, and how they are used to address various tasks.
Full Transcript
Artificial Intelligence
Chapter-1 (INTRODUCTION), Chapter-2 (PROBLEM SOLVING METHODS), Chapter-3 (KNOWLEDGE REPRESENTATION), Chapter-4 (SOFTWARE AGENTS), Chapter-5 (APPLICATIONS)

Video chapters
Chapter-1 (INTRODUCTION): Introduction - Definition - Future of Artificial Intelligence - Characteristics of Intelligent Agents - Typical Intelligent Agents - Problem Solving Approach to Typical AI Problems.
Chapter-2 (PROBLEM SOLVING METHODS): Problem Solving Methods - Search Strategies - Uninformed - Informed - Heuristics - Local Search Algorithms and Optimization Problems - Searching with Partial Observations - Constraint Satisfaction Problems - Constraint Propagation - Backtracking Search - Game Playing - Optimal Decisions in Games - Alpha-Beta Pruning - Stochastic Games.
Chapter-3 (KNOWLEDGE REPRESENTATION): First Order Predicate Logic - Prolog Programming - Unification - Forward Chaining - Backward Chaining - Resolution - Knowledge Representation - Ontological Engineering - Categories and Objects - Events - Mental Events and Mental Objects - Reasoning Systems for Categories - Reasoning with Default Information.
Chapter-4 (SOFTWARE AGENTS): Architecture for Intelligent Agents - Agent Communication - Negotiation and Bargaining - Argumentation among Agents - Trust and Reputation in Multi-agent Systems.
Chapter-5 (APPLICATIONS): AI Applications - Language Models - Information Retrieval - Information Extraction - Natural Language Processing - Machine Translation - Speech Recognition - Robot - Hardware - Perception - Planning - Moving.

History of AI
During the Second World War, British computer scientist Alan Turing worked to crack the 'Enigma' code, which German forces used to send messages securely. Turing and his team created the Bombe machine to decipher Enigma's messages. The Enigma and Bombe machines laid the foundations for machine learning.

First work in AI
The first work now generally recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They drew on three sources:
- Knowledge of the basic physiology and function of neurons in the brain.
- A formal analysis of propositional logic.
- Turing's theory of computation.

Key findings: They proposed a model of artificial neurons in which each neuron is characterized as being either "on" or "off". They showed that any computable function could be computed by some network of connected neurons, and they suggested that suitably defined networks could also learn.

Acting humanly: The Turing Test approach
The Turing Test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. A computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written responses come from a person or from a computer.

Marvin Minsky and Dean Edmonds of Harvard built the first neural network computer, the SNARC, in 1950.
The Birth of AI
The term AI was coined in 1956 by computer scientist John McCarthy at the Dartmouth Conference, which he organized with three other scientists. He is often known as the "Father of AI". In 1958 McCarthy developed LISP. Many discoveries followed in the field of AI between 1952 and 1969.

AI Winter (1974-1980)
In the 1970s AI was subjected to criticism and financial setbacks. Expectations of AI were quite high due to heavy optimism, but researchers failed to deliver the promised results.

Boom in AI after 1980 (1980-87)
In the 1980s, "expert systems" were adopted by many corporations. An expert system is a program that answers questions or solves problems in a particular domain using logical rules derived from the knowledge of experts.

Later milestones:
- 1997: IBM's Deep Blue beats the world chess champion, Garry Kasparov, a significant milestone for AI in games.
- 2005: Stanley, the self-driving car developed by Stanford University, wins the DARPA Grand Challenge.
- 2011: IBM's Watson wins Jeopardy! against former winners, showing the capabilities of AI in natural language processing.
- 2015: A breakthrough in deep learning occurs when a neural network wins the ImageNet competition, significantly improving the state of image recognition.
- 2016: Google's AlphaGo defeats world champion Go player Lee Sedol, a significant milestone due to the complexity of the game.
- 2018: OpenAI's GPT showcases a leap in natural language processing capabilities.

Intelligence and AI
Intelligence in general is the "ability to acquire and apply knowledge and skills". We are known as Homo sapiens ("homo" - man, "sapiens" - wise) because our intelligence is so important to us. For thousands of years we have tried to understand how our brain functions. The field of AI goes beyond this: it not only attempts to understand intelligence but also tries to build intelligent entities.

What is AI
There is no universally accepted definition; AI in general refers to the simulation of human intelligence processes by machines, especially computer systems.
- Sub-fields: AI includes various sub-fields such as machine learning, where machines learn from experience; natural language processing, which concerns interaction with human language; and computer vision, where AI interprets and understands visual data.
- Problem-solving: AI is used to solve complex problems and perform tasks that would usually require human intelligence, anything from recognizing speech or images to making decisions or translating languages.
- Training: AI systems are trained using large amounts of data. For instance, an AI used for recognizing speech would be trained on a large database of spoken language.
- Machine Learning: One of the most common types of AI in use today is based on machine learning, where AI gets smarter the more data it is given.
- Neural Networks: AI systems often use neural networks, which are designed to mimic the human brain's ability to recognize patterns and make decisions.
Benefits: AI has many potential benefits, such as improving efficiency and productivity, making faster and more informed decisions, and performing tasks that are dangerous for humans. The future potential of AI is vast, with ongoing research in fields like self-driving cars, voice assistants, predictive healthcare, and many more.

Artificial Intelligence can be classified into four categories based on capabilities and functionalities:
- Reactive Machines: The most basic type of AI. They react to specific situations or inputs but have no memory of past experience to influence their decisions. Example: IBM's Deep Blue, the chess-playing AI.
- Limited Memory: These AI systems can use past experiences or historical data to make current decisions. They have a temporary or limited memory to store past information and predictions. Example: self-driving cars.
- Theory of Mind: A more advanced type of AI that researchers are still working to develop. These systems would understand emotions, people, and beliefs, and would be able to interact socially. Example: more advanced personal assistants that can understand human emotions and react accordingly.
- Self-aware AI: The most advanced form of AI, still theoretical and not yet realized. These systems would have their own consciousness, self-awareness, and emotions, and would be smarter and more capable than the human mind.

AGENTS AND ENVIRONMENTS
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators. A human agent has eyes, ears, and other organs as sensors, and hands, legs, a vocal tract, and so on as actuators. A robotic agent might have cameras and infrared range finders as sensors and various motors as actuators.

- Percept: What an AI agent (like a robot or a computer program) senses or perceives at a specific moment.
- Percept Sequence: The complete history of what the agent has sensed or perceived since it started operating. An agent's choice of action at any moment can depend on the entire percept sequence observed so far, but not on anything it has not perceived.
- Agent Function: A rule, or set of rules, that maps any given percept sequence to an action.
- Agent Program: The practical, real-world implementation of the agent function.

Concept of Rationality in AI
- Rational Agent: An AI entity that always chooses the best action.
- Action Selection: Based on its percepts, a rational agent selects an action that is expected to maximize its performance measure, using its percept history and built-in knowledge.
- Performance Measure: Not a fixed measure for all tasks and agents; it varies with the specific task at hand.
- Designing Performance Measures: Ideally, these are designed according to the outcomes one wants in the environment, not according to preconceived ideas about how the agent should behave.

Understanding Task Environments and Rational Agents
Task Environment: A task environment is the setting or context in which an AI agent operates.
This could be a physical environment, like a warehouse where a robot sorts boxes, or a virtual environment, like a chess game where an AI plays against a human opponent. Each task environment presents a unique set of challenges or problems that the agent must navigate. For example, the task environment of a chess game includes the rules of the game, the current state of the board, and the goal of checkmating the opponent's king; the "problem" is determining the best move to make at each step to win the game. A rational agent is then the "solution" to this problem: an AI system that can analyze the state of the game, consider all possible moves, and choose the one that maximizes its chances of winning, that is, a system designed to make the best possible decisions given the circumstances of the task environment.

We summarize a task environment with a PEAS (Performance, Environment, Actuators, Sensors) description. PEAS for a taxi-driver agent:
- Performance measure: safe, fast, legal, comfortable trip; maximize profits.
- Environment: roads, other traffic, pedestrians, customers.
- Actuators: steering, accelerator, brake, signal, horn, display.
- Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard.
(A small code sketch of this PEAS description appears after the list of environment properties below.)

Properties of task environments
- Fully Observable vs. Partially Observable: Fully observable: every relevant aspect of the environment is visible to the agent. Partially observable: not all aspects are visible; the agent lacks complete information about the environment's state because of noisy or inaccurate sensors, or because parts of the state are simply missing from the sensor data. Unobservable: if the agent has no sensors at all, the environment is unobservable.
- Single Agent vs. Multi-Agent: Single agent: only one agent acts in the environment, as in solitaire. Multi-agent: multiple agents interact with the environment and potentially with each other, as in chess. Driving a taxi requires avoiding collisions, which improves the performance measure of all agents, so it is a partially cooperative multi-agent environment; it is also partially competitive because, for example, only one car can occupy a parking space.
- Deterministic vs. Stochastic: Deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent. Stochastic: the next state cannot be entirely predicted and may depend on probability, as in poker.
- Episodic vs. Sequential: Episodic: the agent's experience is divided into independent episodes, as in image-recognition tasks. Sequential: the current decision can influence all future decisions, as in chess.
- Static vs. Dynamic: Static: the environment does not change while the agent is deliberating, as in tic-tac-toe. Dynamic: the environment can change while the agent is deciding, as in stock trading.
- Discrete vs. Continuous: Discrete: the agent has a finite number of distinct actions, like moves in a chess game. Continuous: actions or states can vary continuously, like a self-driving car controlling its speed.
- Known vs. Unknown: Known: the agent knows the rules of the environment, as in checkers. Unknown: the agent does not fully understand the rules of the environment and must learn them, like navigating a new city.
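As promised above, a minimal sketch (not from the original slides) of how the taxi-driver PEAS description could be written down as a plain data structure; the class layout and field values are assumptions chosen to mirror the example:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS (Performance, Environment, Actuators, Sensors) description."""
    performance: list  # what the agent is scored on
    environment: list  # what the agent operates in
    actuators: list    # how the agent acts
    sensors: list      # how the agent perceives

# Hypothetical encoding of the taxi-driver example above.
taxi_driver = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "signal", "horn", "display"],
    sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
)
print(taxi_driver.performance)
```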
Four basic kinds of agent programs embody the principles underlying almost all intelligent systems:
- Simple reflex agents
- Model-based reflex agents
- Goal-based agents
- Utility-based agents

Simple reflex agents
- Basic Functioning: Simple reflex agents act on the current percept only, ignoring the percept history.
- Condition-Action Rule: These agents follow "if-then" rules, like "if someone is in front of the car, then initiate braking."
- Simplicity and Limitations: Such agents are simple to design and perform no complex computation, but their intelligence and memory are limited.
- Problems with Unobservability: Issues arise if all necessary information is not observable, for example distinguishing different configurations of car lights.

Model-based reflex agents
- Handling Partial Observability: Model-based agents are designed to work in partially observable environments.
- Internal State: These agents maintain an internal state that reflects unobserved aspects of the environment, based on the percept history.
- Updating Internal State: This requires knowledge of how the world evolves independently of the agent and of how the agent's actions affect the world.
- Model of the World: Agents use this knowledge, whether simple or complex, to form a model of how the world works, and then act accordingly.

Example: a self-driving car as a model-based agent:
- Internal Model: The car has a map of the roads, traffic rules, and typical traffic patterns.
- Sensors: It uses cameras and sensors to perceive traffic lights, road signs, other vehicles, pedestrians, and road conditions.
- Updating Knowledge: It updates its model in real time as it drives, learning current traffic conditions, construction work, or detours.
- Decision Making: If the car approaches a red light, it stops. When the light turns green and the path is clear, it proceeds.
- Learning Over Time: The car refines its internal model to improve predictions, like estimating the time it will take to reach destinations given current traffic.

Goal-based agents
- Goal Information: An extension of the model-based reflex agent; here the agent also has goal information that describes desirable situations, guiding its actions.
- Action Decision: These agents use their internal model and goal information to decide actions. For example, a taxi at a junction chooses a direction based on its destination.
- Action Selection: Goal-based agents may need to consider sequences of actions to achieve the goal. This can be straightforward or complex depending on the goal.
- Example: If it starts to rain, a goal-based agent can update its knowledge about how the brakes will operate and adjust its behavior accordingly. This is more flexible than a reflex agent, which requires an explicit rule for each new situation.

Example: Automated Vacuum Cleaner
- Goal: Clean the entire floor area.
- Perception: Detects its location and whether a spot is clean or dirty.
- Actions: Move (changes position to cover the floor) and Clean (activates the vacuum on dirty spots).
- Decision-making: Chooses actions based on its goal and what it perceives (e.g., cleans dirty spots, moves to a new area if the current one is clean).
- Goal Achievement: Stops operating once no dirty spots are detected, indicating the floor is clean.
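A minimal sketch (not from the slides) of an agent program for such a vacuum world, assuming the classic two-location layout with squares 'A' and 'B'. It is written as a simple reflex agent, acting on the current percept only; the goal-based version described above would additionally track which squares have already been cleaned:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for a two-location vacuum world.

    percept is a (location, status) pair, e.g. ('A', 'Dirty').
    The agent sucks if the current square is dirty, otherwise it
    moves to the other square. It keeps no internal state.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Example percept sequence and the actions chosen:
for p in [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty"), ("B", "Clean")]:
    print(p, "->", reflex_vacuum_agent(p))
```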
Utility-based agents
- Performance Measure: Beyond having a goal, utility-based agents have a performance measure that compares different world states according to how "desirable" they are.
- Utility: The term "utility" denotes how happy or satisfied an agent is with a state.
- Example: Many routes can get a taxi to its destination, but some are quicker, safer, or cheaper. Utility allows the agent to distinguish between these routes.

Example: Smart Thermostat
- Goal: Maintain a comfortable home temperature.
- Perception: Senses the current temperature and occupant preferences.
- Actions: Heat, cool, or maintain temperature.
- Utility Calculation: Balances occupant comfort with energy efficiency.
- Decision-making: Chooses the action that maximizes comfort while minimizing energy use.
- Utility Achievement: Adjusts temperature to keep utility high, considering both comfort and efficiency.

Comparison of the four agent types:
- Simple Reflex Agents: operate on the current percept only; react to stimuli with conditioned actions; very limited flexibility (cannot handle unseen scenarios); no learning from past actions. Example: a thermostat controlling a heating system.
- Model-Based Reflex Agents: use an internal model to keep track of the world state; update the internal state from the current percept and action history; more flexible, handling new situations to an extent; limited learning (can improve the model with new experiences). Example: a car with an anti-lock braking system.
- Goal-Based Agents: act to achieve defined goals; consider the future consequences of actions; flexible, adapting actions to achieve goals in changing environments; can learn better strategies to achieve goals. Example: a chess-playing AI considering future moves.
- Utility-Based Agents: act to maximize a utility function; evaluate success with that utility function; highly flexible, aiming for optimal performance; can adjust the utility function based on outcomes. Example: an investment AI maximizing portfolio return.

Key characteristics of a learning agent: adaptability, memory, feedback sensitivity, decision-making ability, exploration and exploitation, generalization, performance improvement over time, and problem-solving skills.

Chapter-2 (PROBLEM SOLVING METHODS): Problem Solving Methods - Search Strategies - Uninformed - Informed - Heuristics - Local Search Algorithms and Optimization Problems - Searching with Partial Observations - Constraint Satisfaction Problems - Constraint Propagation - Backtracking Search - Game Playing - Optimal Decisions in Games - Alpha-Beta Pruning - Stochastic Games.

Problem Spaces and State Space Representation
State space representation is a method used in artificial intelligence to formulate and solve problems. We use this representation for several reasons:
- Structure: It provides a structured way of defining all possible configurations that can occur while solving a problem.
- Search: It allows algorithms to search through possible states.
- Optimization and Decision Making: It helps in finding the most efficient path or sequence of actions to reach a desired goal state from an initial state.
- Clarity: It visually illustrates the complexity of a problem, making it easier to understand and explain.
- Scalability: It can be applied to simple as well as very complex problems, scaling from basic puzzles to real-world issues in robotics or planning.

Defining a problem as a state space search
A problem can be defined formally by five components: {Initial State, Actions(s), Result(s, a), Goal Test, Path Cost Function}. For a route-finding example:
- Initial State: The agent is at the starting point, for example, 'Home'.
- Actions(s): The possible actions; here, choosing a road to travel from one intersection to another.
- Result(s, a): For a chosen action a (the road taken), the result is a new state representing the agent's new location (another intersection).
- Goal Test: A function that determines whether the agent has reached the destination, 'Work'.
- Path Cost Function: A function that adds up the distance (or time) travelled from the initial state to the current state via the chosen paths. The objective is to minimize this cost.
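A minimal sketch of this five-component formulation in code; the tiny road map, location names, and class layout are assumptions invented for illustration:

```python
class RouteProblem:
    """Route-finding as {initial state, actions, result, goal test, path cost}."""

    def __init__(self):
        # Hypothetical road map: intersection -> {neighbour: road distance}
        self.roads = {
            "Home": {"A": 2, "B": 5},
            "A":    {"Work": 6},
            "B":    {"Work": 2},
        }
        self.initial_state = "Home"

    def actions(self, s):
        """Roads that can be taken from intersection s."""
        return list(self.roads.get(s, {}))

    def result(self, s, a):
        """Taking road a from s leads to the neighbouring intersection a."""
        return a

    def goal_test(self, s):
        return s == "Work"

    def path_cost(self, path):
        """Sum of road distances along a sequence of states."""
        return sum(self.roads[u][v] for u, v in zip(path, path[1:]))

p = RouteProblem()
print(p.actions("Home"), p.goal_test("Work"), p.path_cost(["Home", "B", "Work"]))
```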
Search Strategies
Search strategies are systematic methods for exploring a search space to find relevant information or solutions. Parameters for evaluating the performance of a search technique:
- Completeness: Does the technique guarantee finding a solution if one exists within the search space?
- Time and Space Complexity: How efficient is the technique in terms of the time and space required to find a solution?
- Optimality: Can the technique find the best (optimal) solution among all possible solutions?

Uninformed Search (Brute Force Search / Blind Search / Exhaustive Search)
Uninformed search strategies explore the search space without any problem-specific information or heuristics. We proceed systematically by exploring nodes in some predetermined order, or simply by selecting nodes at random.
- Advantage: Simplicity: uninformed search strategies are generally easy to implement and understand.
- Disadvantage: Inefficiency: without additional information, uninformed search may require extensive exploration, leading to inefficiency in time and space.
- Examples: Breadth-First Search, Depth-First Search, Uniform Cost Search.

Informed Search (Heuristic Search)
Informed search strategies use domain-specific information or heuristics to guide the search towards more promising paths. Here we have knowledge such as how far we are from the goal, the path cost, and how to reach the goal node. This knowledge helps the agent explore less of the search space and find the goal node more efficiently.
- Advantage: Efficiency: by leveraging domain knowledge, informed search can focus on more relevant areas, converging to a solution faster.
- Disadvantage: Heuristic Accuracy: effectiveness depends heavily on the quality and accuracy of the chosen heuristic function. An inaccurate or misleading heuristic can lead to suboptimal or incorrect solutions.
- Examples: Hill Climbing, Best-First Search, A* Algorithm.

Uninformed vs. informed search:
- Knowledge: Uninformed search proceeds without prior knowledge; informed search uses knowledge, such as heuristics, to guide the search.
- Time Efficiency: Uninformed search is generally more time-consuming; informed search finds solutions quicker by prioritizing certain paths.
- Complexity: Uninformed search has higher time and space complexity due to the lack of information; informed search has reduced complexity and is typically more efficient in both time and space thanks to informed decisions.
- Example Techniques: Uninformed: Depth-First Search (DFS), Breadth-First Search (BFS). Informed: A* Search, Heuristic Depth-First Search, Best-First Search.
- Solution Approach: Uninformed search explores paths blindly; informed search explores paths strategically towards the goal.

BFS
Breadth-First Search is an uninformed search strategy that, starting from the root, explores all neighboring nodes at the current depth level before moving to the next level. This is achieved very simply by using a FIFO queue: new nodes (which are always deeper than their parents) go to the back of the queue, and old nodes, which are shallower than the new nodes, get expanded first.

Algorithm:
1. Start with the initial state as the root node.
2. Enqueue the root node into a queue.
3. While the queue is not empty, do the following:
   1. Dequeue a node from the front of the queue.
   2. Expand the node and enqueue its unvisited neighboring nodes.
   3. Mark the dequeued node as visited.
4. Repeat step 3 until the queue is empty or a goal state is found.

Advantages:
- Guarantees finding the shortest path if one exists in an unweighted graph.
- Closer nodes are discovered before moving to farther ones.
Disadvantages:
- Inefficient in terms of time and space for large search spaces.
- Not suitable for scenarios where the cost or weight of edges is not uniform.

Properties:
- Completeness: BFS is complete if the search space is finite or if a solution exists within a finite depth.
- Optimality: BFS is optimal for finding the shortest path in an unweighted or uniformly weighted graph.
- Time Complexity: O(b^d), where b is the branching factor and d is the depth of the shallowest goal node.
- Space Complexity: O(b^d), as it stores all the nodes at each level of the search tree in memory.
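A minimal runnable sketch of BFS over an adjacency-list graph; the graph itself is an assumption for illustration:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: FIFO queue, shallowest nodes expanded first.

    graph maps each node to a list of neighbours; returns a start-to-goal
    path, or None if the goal is unreachable.
    """
    frontier = deque([[start]])          # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:       # enqueue unvisited neighbours
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

# Hypothetical unweighted graph for illustration.
g = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": ["f"], "e": ["f"]}
print(bfs(g, "a", "f"))  # -> ['a', 'b', 'd', 'f']
```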
Depth-First Search
Depth-First Search is an uninformed search strategy that explores as far as possible along each branch before backtracking. It traverses the depth of a search tree or graph before exploring the neighboring nodes.

Algorithm:
1. Start with the initial state as the root node.
2. Explore a node and recursively explore its unvisited neighboring nodes.
3. Backtrack if all neighboring nodes have been visited or there are no unvisited nodes left.
4. Repeat steps 2 and 3 until a goal state is found or all nodes have been visited.

Advantages:
- Memory Efficiency: DFS stores fewer nodes in memory than breadth-first search, as it explores deeper paths first.
Disadvantages:
- Completeness Not Guaranteed: DFS may get trapped in infinite loops or infinite branches without reaching the goal state if there are cycles in the search space.
- Non-Optimal Solution: DFS does not guarantee finding the optimal solution, as it may find a solution at a greater depth before finding a shorter path.

Properties:
- Completeness: DFS is not complete if the search space is infinite or contains cycles; if the depth is finite, it is complete.
- Optimality: DFS does not guarantee the optimal solution.
- Time Complexity: Varies with the structure of the search space; in the worst case O(b^m), where b is the branching factor and m is the maximum depth of the search tree.
- Space Complexity: O(bm), since it stores only the nodes along the current path (with their siblings) in memory.

Uniform Cost Search
Uniform Cost Search is an extension of Breadth-First Search that takes into account the cost of reaching each node, in order to find the lowest-cost path to the goal.
Example: Consider a weighted graph (shown as a figure in the original slides); based on cost, the nodes are expanded in the order a → b → d → c → f → e → g → h.

Algorithm:
1. Start with the initial state as the root node.
2. Maintain a priority queue (or another priority-based data structure) that orders nodes by their path cost.
3. Enqueue the root node with a path cost of zero.
4. While the priority queue is not empty, do the following:
   1. Dequeue the node with the lowest path cost.
   2. If the dequeued node is the goal state, terminate the search and return the solution.
   3. Otherwise, expand the node and enqueue its unvisited neighboring nodes with their updated path costs.
5. Repeat step 4 until the goal state is found or the priority queue is empty.

Advantages:
- Optimal Solution: Uniform Cost Search guarantees finding the lowest-cost path to the goal.
Disadvantages:
- Time and Space Complexity: Both can be high, especially in large search spaces or when many nodes have different path costs.

Properties:
- Completeness: Complete if the search space is finite and all step costs exceed some positive threshold.
- Optimality: Optimal, as it guarantees the lowest-cost path to the goal.
- Time Complexity: Depends on the number of nodes and the cost of the lowest-cost path; in the worst case exponential, O(b^d), where b is the branching factor and d is the depth of the shallowest goal node.
- Space Complexity: Also exponential in the worst case, O(b^d), as it may need to store all generated nodes in memory.
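A minimal sketch of Uniform Cost Search using a binary heap as the priority queue; the weighted graph is an assumption for illustration:

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform Cost Search: always expand the frontier node with the
    lowest path cost g(n). graph maps node -> {neighbour: step cost}.
    Returns (cost, path), or (None, None) if the goal is unreachable."""
    frontier = [(0, start, [start])]     # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, {}).items():
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None, None

# Hypothetical weighted graph for illustration.
g = {"a": {"b": 1, "c": 4}, "b": {"d": 2}, "c": {"d": 1}, "d": {}}
print(uniform_cost_search(g, "a", "d"))  # -> (3, ['a', 'b', 'd'])
```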
Bidirectional Search
Bidirectional Search runs two simultaneous searches, one forward from the initial state and one backward from the goal state, aiming to meet in the middle by finding a common node reached from both directions. The motivation is that b^(d/2) + b^(d/2) is much less than b^d. Bidirectional search is implemented by replacing the goal test with a check to see whether the frontiers of the two searches intersect; if they do, a solution has been found.

Advantages:
- Faster Exploration: Exploring the search space simultaneously from both ends can reduce the total search effort.
- Reduced Effective Branching: As the search progresses from both directions, the effective search depth is halved, leading to a more efficient search.
Disadvantages:
- Increased Memory Requirement: Visited nodes must be stored for both directions, increasing memory consumption compared to unidirectional search.
- Additional Overhead: Coordinating and synchronizing the two searches adds implementation complexity.

Properties:
- Completeness: Complete if both the forward and backward searches are complete in a finite search space.
- Optimality: Optimal if both the forward and backward searches are optimal.
- Time Complexity: Depends on the branching factor, the depth of the shallowest goal state, and the meeting point of the two searches; in the best case O(b^(d/2)), where b is the branching factor and d is the depth of the goal state.
- Space Complexity: Depends on the memory required to store visited nodes from both directions; in the best case O(b^(d/2)).

Comparison of Uninformed Searches (summary table in the original slides)

Problems with Uninformed Search
1. Blind Exploration: Uninformed strategies such as Breadth-First Search and Depth-First Search lack domain-specific knowledge or heuristics; they explore the search space with no information about the problem structure or the goal state.
2. Inefficiency in Complex Spaces: Uninformed strategies can be inefficient in large or complex search spaces, since they may extensively explore irrelevant or unpromising paths.

INFORMED (HEURISTIC) SEARCH STRATEGIES
Heuristic search strategies are smart search methods that use special knowledge or hints to find solutions more efficiently, much as we use intuition or common sense to solve a problem.

Best-First Search
Best-first search explores a graph by expanding the most promising node chosen according to a specified rule. It is called "best-first" because it greedily chooses the node that seems most promising according to the heuristic. Best-first search combines advantages of both BFS and DFS. The evaluation function is construed as a cost estimate, so the node with the lowest evaluation is expanded first:
h(n) = estimated cost of the cheapest path from the state at node n to a goal state.

Algorithm:
1. Define a priority queue of nodes, where each node's priority is its heuristic value h(n).
2. Start with the root node.
3. Dequeue the node with the highest priority (lowest h).
4. If the node contains the goal state, return it and exit; else, enqueue all the node's successors.
5. If the queue is empty, return failure; else, go to step 3.
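A minimal sketch of greedy best-first search; the toy graph and heuristic table are assumptions for illustration:

```python
import heapq

def best_first_search(graph, h, start, goal):
    """Greedy best-first search: expand the node with the smallest
    heuristic value h(n). graph maps node -> list of neighbours,
    h maps node -> estimated cost to the goal."""
    frontier = [(h[start], start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

# Hypothetical graph and heuristic values for illustration.
g = {"s": ["a", "b"], "a": ["goal"], "b": ["goal"]}
h = {"s": 3, "a": 1, "b": 2, "goal": 0}
print(best_first_search(g, h, "s", "goal"))  # -> ['s', 'a', 'goal']
```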
Advantages of Best-First Search
- It can find a solution without exploring much of the state space.
- It uses less memory than other informed search methods like A*, as it does not store all the generated nodes.
Disadvantages of Best-First Search
- It is not complete; in some cases it may get stuck in an infinite loop.
- It is not optimal; it does not guarantee that the shortest possible path will be found.
- It depends heavily on the accuracy of the heuristic.

Time and Space Complexity
The time and space complexity of best-first search are both O(b^d), where b is the branching factor (the number of successors per state) and d is the depth of the shallowest solution. These complexities can be very high in problems with many states and actions: in the worst case the algorithm must visit all nodes, and since each node is stored, the space required is also proportional to the number of nodes.

A* Search: Minimizing the Total Estimated Solution Cost
A* Search is an informed search algorithm that combines the advantages of Uniform Cost Search and Greedy Best-First Search. It evaluates a node by combining g(n), the cost of the path from the start node to n, and h(n), a heuristic estimate of the cost to get from n to the goal:
f(n) = g(n) + h(n)

Algorithm:
1. Start with the initial state as the root node.
2. Create an evaluation function f(n) = g(n) + h(n) that combines the path cost and a heuristic estimate.
3. Initialize an empty priority queue.
4. Enqueue the initial state, prioritized by the evaluation function.
5. While the priority queue is not empty, do the following:
   1. Dequeue the node with the highest priority (lowest f).
   2. If the dequeued node is the goal state, terminate the search and return the solution.
   3. Otherwise, expand the node and enqueue its unvisited neighboring nodes, prioritized by the evaluation function.
6. Repeat step 5 until a solution is found or the priority queue is empty.

Properties:
- Completeness: A* is complete if the search space is finite and the heuristic function is admissible (never overestimates the actual cost).
- Optimality: A* is optimal if the heuristic function is admissible and consistent (also known as monotonic).
- Time Complexity: Depends on the heuristic function, the branching factor, and the structure of the search space; exponential in the worst case.
- Space Complexity: Depends on the size of the priority queue and the number of nodes stored in memory; exponential in the worst case.
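A minimal sketch of A*; the weighted graph and (admissible) heuristic values are assumptions for illustration:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the lowest f(n) = g(n) + h(n).
    graph maps node -> {neighbour: step cost}; h maps node -> heuristic."""
    frontier = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, step in graph.get(node, {}).items():
            g2 = g + step
            if g2 < best_g.get(nbr, float("inf")):   # found a cheaper route
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, None

# Hypothetical weighted graph and admissible heuristic for illustration.
g = {"s": {"a": 1, "b": 4}, "a": {"b": 2, "goal": 5}, "b": {"goal": 1}}
h = {"s": 4, "a": 3, "b": 1, "goal": 0}
print(a_star(g, h, "s", "goal"))  # -> (4, ['s', 'a', 'b', 'goal'])
```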
Conditions for optimality: admissibility and consistency
The first condition we require for optimality is that h(n) be an admissible heuristic. An admissible heuristic is one that never overestimates the cost to reach the goal. Admissible heuristics are by nature optimistic, because they think the cost of solving the problem is less than it actually is. For example, straight-line distance is admissible because the shortest path between any two points is a straight line, so the straight line cannot be an overestimate.

A second, slightly stronger condition called consistency (or monotonicity) is required only for applications of A* to graph search. A heuristic h(n) is consistent if, for every node n and every successor n' of n generated by any action a, the estimated cost of reaching the goal from n is no greater than the step cost of getting to n' plus the estimated cost of reaching the goal from n':
h(n) ≤ c(n, a, n') + h(n')

Problem with the A* Algorithm
So far we have considered search strategies for OR graphs, through which we want to find a single path to a goal. An AND-OR graph (or tree) is useful for representing the solution of problems that can be solved by decomposing them into a set of smaller problems, all of which must then be solved. This decomposition, or reduction, generates arcs that we call AND arcs. One AND arc may point to any number of successor nodes, all of which must be solved for the arc to point to a solution.

AO* Search Algorithm
The pseudocode of AO* is as follows:
1. Initialize the graph with a single node (the start node).
2. While the solution graph contains non-terminal nodes (nodes that have successors not in the graph):
   1. Choose a non-terminal node for expansion based on a given strategy.
   2. Expand the node (add successors to the graph and update the costs of nodes).
3. The process continues until the start node is labeled as a terminal node.

Advantages of the AO* Algorithm
- It can efficiently solve problems with multiple paths due to its use of heuristics.
- It is optimal when the heuristic function is admissible (never overestimates the true cost).
Disadvantages of the AO* Algorithm
- It can consume a large amount of memory, similar to the A* algorithm.
- Its performance depends heavily on the accuracy of the heuristic function; if the heuristic is not well chosen, AO* can perform poorly.

Completeness and Optimality: AO* is complete, meaning it is guaranteed to find a solution if one exists. It is also optimal, meaning it will find the best solution, provided that the heuristic function is admissible (never overestimates the true cost) and consistent (satisfies the triangle inequality).
Time and Space Complexity: Highly dependent on the problem at hand, particularly the size of the state space, the number of goals, and the quality of the heuristic function. In the worst case the algorithm must visit all nodes, and since each node is stored, the space complexity is also proportional to the number of nodes.

Hill Climbing Algorithm
Hill climbing is an iterative algorithm that starts with an arbitrary solution to a problem, then attempts to find a better solution by making an incremental change. If the change produces a better solution, another incremental change is made to the new solution, and so on, until no further improvements can be found. Hill climbing is sometimes called greedy local search with no backtracking, because it grabs a good neighbor state without thinking ahead about where to go next.
Algorithm for Simple Hill Climbing:
- Start: Begin at a random point on the problem landscape.
- Evaluate: Assess the current position.
- Look Around: Check neighboring points and evaluate them.
- Can Improve: Determine whether any neighbor is better (has a higher value) than the current position.
- Move: If a better neighbor exists, move to that point.
- Repeat: Continue the process from the new point.
- No Improvement: If no better neighbors are found, you've hit a peak.
- End: The process stops when no further improvement can be made.

Problems with Hill Climbing
- Local maxima: A local maximum is a peak that is higher than each of its neighboring states but lower than the global maximum. Hill-climbing algorithms that reach the vicinity of a local maximum will be drawn upward toward the peak but will then be stuck with nowhere else to go.
- Ridges: Ridges result in a sequence of local maxima that is very difficult for greedy algorithms to navigate.
- Plateau: A plateau is a flat area of the state-space landscape. It can be a flat local maximum, from which no uphill exit exists, or a shoulder, from which progress is possible. A hill-climbing search can get lost on the plateau. In each case, the algorithm reaches a point at which no progress is being made.

Advantages of Hill Climbing:
- It is simple to understand and easy to implement.
- It requires less computational power than other search algorithms.
- If the heuristic is well chosen, it can find a solution in a reasonable time.
Disadvantages of Hill Climbing:
- It is not guaranteed to find the optimal solution.
- It is highly sensitive to the initial state and can get stuck in local optima.
- It does not maintain a search history, which can cause the algorithm to cycle or loop.
- It cannot deal effectively with flat regions of the search space (plateaus) or regions that form a ridge.

Variants:
- Random-Restart Hill Climbing: To avoid local optima, perform multiple runs of the algorithm from different random initial states. This increases the chance of finding a global optimum.
- Tabu Search: To prevent the algorithm from getting stuck in a loop or revisiting states, maintain a list of previously visited states and avoid them in future steps.
- Local Beam Search: To tackle plateaus and local optima, this variation keeps track of k states rather than just one. It begins with k randomly generated states. At each step, all successors of all k states are generated; if any one is a goal, it halts. Otherwise, it selects the best k successors from the complete list and repeats.
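A minimal sketch of simple hill climbing on a toy one-dimensional landscape; the objective function and neighbor rule are assumptions for illustration:

```python
import random

def hill_climbing(f, neighbors, start):
    """Simple hill climbing: repeatedly move to the best neighbor
    until no neighbor improves on the current state.
    f scores a state; neighbors(state) lists candidate states."""
    current = start
    while True:
        best = max(neighbors(current), key=f)
        if f(best) <= f(current):      # no better neighbor: a peak (possibly local)
            return current
        current = best

# Illustrative landscape: maximize f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(hill_climbing(f, step, start=random.randint(-10, 10)))  # -> 3
```

Random-restart hill climbing, described above, would simply call this in a loop with fresh random starts and keep the best result found.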
Means-Ends Analysis
- What it is: A strategic approach combining forward and backward reasoning to tackle complex and large problems in AI.
- Why use it: It focuses the problem-solving effort on significant differences between the current state and the goal, reducing unnecessary exploration.
- Process: Break down the problem: split a large problem into major components and address these first. Address sub-problems: once the major components are aligned towards the goal, solve any smaller issues that emerge. Integrate strategies: use a blend of forward search (from the starting point) and backward search (from the goal) to guide the problem-solving process effectively.

Worked example: we first evaluate the difference between the current state and the goal state; for each difference, we generate a new state by applying an operator:
- Applying the Delete operator: The first difference is that the goal state has no dot symbol, which is present in the initial state, so we first apply the Delete operator to remove the dot.
- Applying the Move operator: Applying the Delete operator produces a new state, which we again compare with the goal state. The remaining difference is that the triangle is outside the circle, so we apply the Move operator.
- Applying the Expand operator: A new state is generated in the third step, and we compare it with the goal state. One difference remains, the size of the triangle, so we apply the Expand operator, which finally generates the goal state.

Operator Subgoaling:
- What it is: A process within Means-Ends Analysis for handling situations where a direct action (operator) cannot be applied to the current state.
- Purpose: To establish the necessary preconditions for applying an operator by setting up sub-goals that modify the current state.

Means-Ends Analysis steps:
1. Identify Differences: Note the discrepancies between the current and goal states.
2. Set Sub-goals: Create intermediate goals aimed at reducing the identified differences.
3. Find Operators: Look for actions that can achieve these sub-goals.
4. Apply Operators: Execute the most promising actions. If constraints prevent direct action, set new sub-goals to remove these constraints.
5. Iterate: Repeat these steps until the goal is achieved or no further progress can be made.

Water Jug Problem
The Water Jug Problem is a classic example often used in computer science and artificial intelligence to demonstrate problem-solving methods such as Means-Ends Analysis. Problem statement: you have two jugs with different capacities (for example, one jug holds 3 liters and the other holds 5 liters) and a water source. The goal is to measure out a specific amount of water (say 4 liters). You may fill a jug, empty a jug, or transfer water between the jugs; neither jug has measurement markings.

Constraint Satisfaction Problems
CSPs are a key topic in computer science, especially in artificial intelligence.
- Definition: A CSP consists of a set of variables, a domain for each variable, and a set of constraints that specify allowable combinations of values.
- Variables: Elements that have to be assigned values from their domains. For example, in a scheduling problem, variables could be different time slots.
- Domains: The range of possible values each variable can take. For example, in a coloring problem, the domain might be a set of colors.
- Constraints: Rules that define which combinations of values are valid or invalid; they restrict the ways variables can be assigned values.
- Solutions: Assignments of values to all variables that violate no constraints. A problem can have zero, one, or multiple solutions.
- Solving Methods: Strategies for solving CSPs include backtracking, forward checking, and constraint propagation.
- Applications: CSPs are widely used in scheduling, planning, resource allocation, timetabling, and in games and puzzles like Sudoku.
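A minimal sketch of backtracking search on an assumed toy map-coloring CSP (variables are regions, domains are colors, and the constraint forbids adjacent regions from sharing a color); the map and names are invented for illustration:

```python
def backtracking_search(variables, domains, conflicts, assignment=None):
    """Plain backtracking for a CSP: assign variables one at a time,
    undoing any choice that violates a constraint.
    conflicts(var, value, assignment) -> True if value clashes."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                # every variable assigned: solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if not conflicts(var, value, assignment):
            assignment[var] = value
            result = backtracking_search(variables, domains, conflicts, assignment)
            if result is not None:
                return result
            del assignment[var]          # backtrack
    return None

# Hypothetical map: adjacent regions must receive different colors.
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
domains = {v: ["red", "green", "blue"] for v in neighbors}
clash = lambda var, val, asg: any(asg.get(n) == val for n in neighbors[var])
print(backtracking_search(list(neighbors), domains, clash))
```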
The 8-Puzzle Problem
The 8-puzzle problem is a classic search problem in AI and computer science. It involves a 3x3 grid with 8 numbered tiles and one empty space; the goal is to move the tiles around until they reach a specific goal state.
- Grid Structure: The puzzle consists of a 3x3 grid, making 9 spaces: 8 numbered tiles and one empty space (often represented by 0 or a blank).
- Initial State: The puzzle starts in a random configuration with the tiles out of order.
- Goal State: The objective is to arrange the tiles in numerical order.
- Moves: A tile may move into the empty space if it is adjacent to it (up, down, left, or right); diagonal moves are not allowed.
- Difficulty Levels: Not all initial states are solvable.
- Search Algorithms: To find a solution, various search algorithms can be used, such as Breadth-First Search (BFS), Depth-First Search (DFS), A* Search, and others. These algorithms explore possible moves from the initial state to reach the goal state.
- Heuristics: For a more efficient search, heuristics such as the Manhattan distance or the number of misplaced tiles can be used, especially in algorithms like A*, to estimate the cost to reach the goal from a given state.
- Applications: Solving the 8-puzzle helps in understanding important concepts in AI and algorithms, such as state spaces, search strategies, and the use of heuristics.

Exercise: Find the most cost-effective path from the initial state to the final state using the A* algorithm, with f(n) = g(n) + h(n), taking g(n) = depth of the node and h(n) = number of misplaced tiles.

The Minimax Algorithm
- Definition: Minimax is a specialized search algorithm used in game playing to determine the optimal sequence of moves for a player in a zero-sum game involving two players.
- Mechanism: It operates by recursively using backtracking to simulate all possible moves in the game, effectively searching through a game tree.
- Decision Making: The algorithm computes the minimax decision for the current state, using a depth-first exploration of the complete game tree.
- Players' Roles: In the game tree there are two types of nodes: MAX nodes, where the algorithm selects the move with the maximum value, and MIN nodes, where the algorithm selects the move with the minimum value.
- Games: It is typically applied to perfect-information games like chess, checkers, and tic-tac-toe.
- Initial Values: The MAX player starts from negative infinity (-∞) as its worst case, while MIN starts from positive infinity (+∞).

Properties:
- Completeness: The algorithm is complete and will definitely find a solution if one exists.
- Optimality: It is guaranteed to find the optimal strategy for both players.
- Time Complexity: O(b^m), where b is the branching factor (the average number of child nodes per node) and m is the maximum depth of the tree.
- Space Complexity: O(bm) for the usual depth-first exploration, since only the nodes along the current path need to be stored.
- Limitations: The algorithm can be slow for complex games with many possible moves (like chess). A game like chess can have around 35 possible moves at any point, leading to a vast number of possible game states to evaluate.
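A minimal sketch of minimax on an assumed two-ply game tree; the tree shape and leaf scores are invented for illustration:

```python
def minimax(state, maximizing, value, children):
    """Minimax over a game tree: MAX picks the child with the highest
    value, MIN the lowest. value(state) scores terminal states;
    children(state) lists successor states (empty when terminal)."""
    succ = children(state)
    if not succ:                       # terminal node: static evaluation
        return value(state)
    scores = [minimax(s, not maximizing, value, children) for s in succ]
    return max(scores) if maximizing else min(scores)

# Tiny assumed game tree: internal nodes map to children, leaves to scores.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
leaf = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
best = minimax("root", True, leaf.get, lambda s: tree.get(s, []))
print(best)  # MAX at root, MIN below: min(3,5)=3, min(2,9)=2 -> max is 3
```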
ALPHA-BETA PRUNING
Alpha-beta pruning is an optimization technique for the minimax algorithm. It significantly reduces the number of nodes that must be evaluated in the game tree without affecting the final result, by skipping the evaluation of branches that are evidently worse than previously examined moves.
- Alpha Value: The best value that the maximizer can currently guarantee at that level or above.
- Beta Value: The best value that the minimizer can currently guarantee at that level or above.
- Initial Values: Alpha starts at negative infinity (-∞), and beta starts at positive infinity (+∞).
- When to Prune: A branch is pruned when the minimizer's best option (beta) is no better than the maximizer's best option (alpha), since the maximizer will never allow the game to go down a path that could lead to a worse outcome than one it has already found.

Benefits:
- Efficiency: Alpha-beta pruning can greatly reduce the number of nodes explored in the game tree, leading to faster decision-making.
- Optimality: The final decision is the same as that of the full minimax algorithm, so optimality is not compromised.
- Widely Applicable: It can be applied to any minimax game tree, though the algorithm assumes perfect play on both sides.
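The same assumed toy tree as in the minimax sketch above, with pruning added; a hedged sketch rather than a full game engine. Note how the R2 leaf is never evaluated:

```python
import math

def alphabeta(state, maximizing, value, children, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning: same result as plain minimax,
    but branches that cannot affect the decision are skipped."""
    succ = children(state)
    if not succ:
        return value(state)
    if maximizing:
        best = -math.inf
        for s in succ:
            best = max(best, alphabeta(s, False, value, children, alpha, beta))
            alpha = max(alpha, best)
            if beta <= alpha:          # minimizer already has a better option
                break                  # prune the remaining siblings
        return best
    best = math.inf
    for s in succ:
        best = min(best, alphabeta(s, True, value, children, alpha, beta))
        beta = min(beta, best)
        if beta <= alpha:              # maximizer already has a better option
            break
    return best

# Same assumed tree and leaf scores as the minimax sketch.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
leaf = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
print(alphabeta("root", True, leaf.get, lambda s: tree.get(s, [])))  # -> 3
```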
Branch and Bound
- Objective: To find the optimal solution in problems like the traveling salesman, knapsack, and job scheduling.
- Mechanism: Searches a state-space tree of potential solutions by branching and bounding.
- Branching: Expands nodes, creating children that represent partial solutions.
- Bounding: Uses bounds to estimate the minimum or maximum value obtainable in regions of the space, and prunes a region if its bound is worse than the best solution found so far.
- Strategies: Can use best-first, breadth-first, or depth-first search to explore nodes.
- Pruning: Skips branches that cannot improve the best current solution.
- Efficiency: Reduces the search space, making some intractable problems solvable.
- Result: Finds the global optimum if the bounding function is accurate.

0/1 Knapsack using Branch and Bound:

Item (i)    : 1   2   3   4
Profit (P)  : 31  29  21  24
Weight (W)  : 6   8   5   3

Chapter-3 (KNOWLEDGE REPRESENTATION)

LOGICAL AGENTS
Intelligence comes from knowledge, and knowledge comes from the ability to reason.
- Knowledge-based agents: The intelligence of humans is achieved not by purely reflex mechanisms but by processes of reasoning that operate on internal representations of knowledge. In AI, this approach to intelligence is embodied in knowledge-based agents.
- Logical Agents: Agents that can form representations of a complex world, use a process of inference to derive new representations about the world, and use these new representations to deduce what to do.

KNOWLEDGE-BASED AGENTS
The central component of a knowledge-based agent is its knowledge base, or KB, which may initially contain some background knowledge. A knowledge base is a set of sentences. Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world. Sometimes we dignify a sentence with the name axiom. We add new sentences to the knowledge base and query what is known through inference (deriving new sentences from old ones).

A proposition is a declarative sentence (that is, a sentence that declares a fact) that is either true or false, but not both. Examples:
1. Delhi is the capital of the USA
2. How are you doing
3. 5