Foundations of Artificial Intelligence (SCSB1311)
This document covers the foundational concepts of artificial intelligence (AI). It discusses various types of AI, including reactive machines, limited memory, and utility-based agents, along with introductions to knowledge representation and reasoning. The document explores machine learning and its applications in areas such as language translation and fraud detection.
FOUNDATIONS OF ARTIFICIAL INTELLIGENCE (SCSB1311)
UNIT – I INTRODUCTION AND PROBLEM SOLVING

Introduction
Artificial intelligence (AI) is the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making decisions, and identifying patterns. AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP). Fig: 1

Intelligence
Intelligence is a property of mind that encompasses many related mental abilities, such as the capabilities to:
Reason
Plan
Solve problems
Think abstractly
Comprehend ideas and language
Learn
Fig: 2

At the simplest level, machine learning uses algorithms trained on data sets to create machine learning models that allow computer systems to perform tasks like making song recommendations, identifying the fastest way to travel to a destination, or translating text from one language to another. Some of the most common examples of AI in use today include:
ChatGPT: Uses large language models (LLMs) to generate text in response to questions or comments posed to it.
Google Translate: Uses deep learning algorithms to translate text from one language to another.
Netflix: Uses machine learning algorithms to create personalized recommendation engines for users based on their previous viewing history.
Tesla: Uses computer vision to power self-driving features on its cars.
Finance industry: Fraud detection is a notable use case for AI in the finance industry. AI's capability to analyze large amounts of data enables it to detect anomalies or patterns that signal fraudulent behavior.
Health care industry: AI-powered robotics could support surgeries close to highly delicate organs or tissue to mitigate blood loss or risk of infection.
The relation between knowledge and intelligence: Knowledge of the real world plays a vital role in intelligence, and the same holds for creating artificial intelligence. Knowledge plays an important role in demonstrating intelligent behavior in AI agents. An agent is only able to act accurately on some input when it has some knowledge or experience of that input. Fig: 3

Types of AI
Fig: 4 Type 1
Fig: 5 Type 2

Reactive Machines
Purely reactive machines are the most basic type of Artificial Intelligence. Such AI systems do not store memories or past experiences for future actions. These machines focus only on current scenarios and react to them with the best possible action. IBM's Deep Blue system is an example of a reactive machine, as is Google's AlphaGo.

Limited Memory
Limited memory machines can store past experiences or some data for a short period of time. These machines can use stored data for a limited time period only. Self-driving cars are one of the best examples of limited memory systems. These cars can store the recent speed of nearby cars, the distance of other cars, the speed limit, and other information needed to navigate the road.

Theory of Mind
Theory of mind AI should understand human emotions, people, and beliefs, and be able to interact socially like humans. Machines of this type have not yet been developed, but researchers are making many efforts and improvements toward developing them.

Self-Awareness
Self-aware AI is the future of Artificial Intelligence. These machines will be super intelligent and will have their own consciousness, sentiments, and self-awareness. They will be smarter than the human mind. Self-aware AI does not yet exist; it is a hypothetical concept.

Foundations of AI
Foundations of AI generally refers to the fundamental principles, theories, and concepts that underpin the field of Artificial Intelligence.
These include: Fig: 6
Machine Learning: Techniques and algorithms that allow computers to learn from data and make predictions or decisions.
Knowledge Representation and Reasoning: Methods to represent and manipulate knowledge to enable intelligent behavior.
Computer Vision: Algorithms and systems that enable computers to interpret visual information.
Natural Language Processing (NLP): Techniques for enabling computers to understand, interpret, and generate human language.
Robotics: Design and control of robots to perform tasks traditionally done by humans.
Ethics and AI: The study of ethical issues arising from the development and deployment of AI systems.
Philosophical Foundations: The exploration of questions such as consciousness, intelligence, and the nature of mind in relation to AI.
These foundations are crucial both for the development of AI systems and for understanding the societal implications of AI technologies.

The foundation of AI is based on:
Mathematics
Neuroscience
Control Theory
Linguistics

Foundations – Mathematics
More formal logical methods: Boolean logic and fuzzy logic. Uncertainty in modern AI applications is most commonly handled with probability theory, along with modal and temporal logics.

Foundations – Neuroscience
How does the brain work? Early studies (1824) relied on injured and abnormal people to understand what parts of the brain do. More recent studies use accurate sensors to correlate brain activity with human thought. By monitoring individual neurons, monkeys can now control a computer mouse using thought alone. Moore's law predicted that computers would have as many gates as humans have neurons by 2020. How close are we to having a mechanical brain?
Parallel computation, remapping, and interconnections remain open questions.

Foundations – Control Theory
Machines can modify their behavior in response to the environment (sense/action loop): the water-flow regulator, the steam engine governor, the thermostat. The theory of stable feedback systems (1894) aims to build systems that transition from an initial state to a goal state with minimum energy. In 1950, control theory could only describe linear systems, and AI largely arose as a response to this shortcoming. Fig: 7

Applications of AI Fig: 8

History of AI
1923: Karel Čapek's play "Rossum's Universal Robots" (R.U.R.) opens in London; first use of the word "robot" in English.
1943: Foundations for neural networks laid.
1945: Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
1950: Alan Turing introduced the Turing Test for the evaluation of intelligence and published Computing Machinery and Intelligence. Claude Shannon published a detailed analysis of chess playing as search.
1956: John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program at Carnegie Mellon University.
1958: John McCarthy invents the LISP programming language for AI.
1964: Danny Bobrow's dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.
1965: Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.
1969: Scientists at Stanford Research Institute developed Shakey, a robot equipped with locomotion, perception, and problem solving.
1973: The Assembly Robotics group at Edinburgh University built Freddy, the Famous Scottish Robot, capable of using vision to locate and assemble models.
1979: The first computer-controlled autonomous vehicle, the Stanford Cart, was built.
1985: Harold Cohen created and demonstrated the drawing program Aaron.
1997: The Deep Blue chess program beat the then world chess champion, Garry Kasparov.
2000: Interactive robot pets become commercially available.
MIT displays Kismet, a robot with a face that expresses emotions; the robot Nomad explores remote regions of Antarctica and locates meteorites.

Intelligent agent
In artificial intelligence, an agent is a computer program or system that is designed to perceive its environment, make decisions, and take actions to achieve a specific goal or set of goals. The agent operates autonomously, meaning it is not directly controlled by a human operator. Agents can be classified into different types based on their characteristics, such as whether they are reactive or proactive, whether their environment is fixed or dynamic, and whether they are single- or multi-agent systems. Reactive agents respond to immediate stimuli from their environment and take actions based on those stimuli. Proactive agents, on the other hand, take initiative and plan ahead to achieve their goals. The environment in which an agent operates can also be fixed or dynamic. Fixed environments have a static set of rules that do not change, while dynamic environments are constantly changing and require agents to adapt to new situations. Multi-agent systems involve multiple agents working together to achieve a common goal. These agents may have to coordinate their actions and communicate with each other to achieve their objectives. Agents are used in a variety of applications, including robotics, gaming, and intelligent systems. They can be implemented using different programming languages and techniques, including machine learning and natural language processing. Fig: 9

Intelligence is intangible. It is composed of:
Reasoning
Learning
Problem Solving
Perception
Linguistic Intelligence

Reasoning: The set of processes that enables us to provide a basis for judgement, making decisions, and prediction. There are broadly two types.
Learning: The activity of gaining knowledge or skill by studying, practising, being taught, or experiencing something.
Learning enhances awareness of the subjects of study. The ability to learn is possessed by humans, some animals, and AI-enabled systems. Learning is categorized as:
Auditory Learning: Learning by listening and hearing. For example, students listening to recorded audio lectures.
Episodic Learning: Learning by remembering sequences of events that one has witnessed or experienced. This is linear and orderly.
Motor Learning: Learning by precise movement of muscles. For example, picking up objects, writing, etc.
Observational Learning: Learning by watching and imitating others. For example, a child tries to learn by mimicking her parent.
Perceptual Learning: Learning to recognize stimuli that one has seen before. For example, identifying and classifying objects and situations.
Relational Learning: Learning to differentiate among various stimuli on the basis of relational properties rather than absolute properties. For example, adding a little less salt when cooking potatoes that came up salty last time, when cooked with, say, a tablespoon of salt.
Spatial Learning: Learning through visual stimuli such as images, colors, maps, etc. For example, a person can create a roadmap in the mind before actually following the road.
Stimulus-Response Learning: Learning to perform a particular behavior when a certain stimulus is present. For example, a dog raises its ears on hearing the doorbell.

Problem Solving: The process in which one perceives and tries to arrive at a desired solution from a present situation by taking some path that is blocked by known or unknown hurdles. Problem solving also includes decision making, which is the process of selecting the most suitable of the multiple alternatives available to reach the desired goal.

Perception: The process of acquiring, interpreting, selecting, and organizing sensory information. Perception presumes sensing.
In humans, perception is aided by sensory organs. In the domain of AI, the perception mechanism puts the data acquired by the sensors together in a meaningful manner.

Linguistic Intelligence: One's ability to use, comprehend, speak, and write verbal and written language. It is important in interpersonal communication.

An AI system is composed of an agent and its environment. The agents act in their environment, and the environment may contain other agents. An agent is anything that can be viewed as:
Perceiving its environment through sensors and
Acting upon that environment through actuators

Structure of an AI Agent Fig: 10
To understand the structure of intelligent agents, we should be familiar with architecture and agent programs. Architecture is the machinery that the agent executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC. An agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of all that an agent has perceived to date) to an action.

Agent = Architecture + Agent Program
Architecture = the machinery that an agent executes on.
Agent Program = an implementation of an agent function.

Uses of Agents
Robotics: Agents can be used to control robots and automate tasks in manufacturing, transportation, and other industries.
Smart homes and buildings: Agents can be used to control heating, lighting, and other systems in smart homes and buildings, optimizing energy use and improving comfort.
Transportation systems: Agents can be used to manage traffic flow, optimize routes for autonomous vehicles, and improve logistics and supply chain management.
Healthcare: Agents can be used to monitor patients, provide personalized treatment plans, and optimize healthcare resource allocation.
Finance: Agents can be used for automated trading, fraud detection, and risk management in the financial industry.
Games: Agents can be used to create intelligent opponents in games and simulations, providing a more challenging and realistic experience for players.
Natural language processing: Agents can be used for language translation, question answering, and chatbots that can communicate with users in natural language.
Cybersecurity: Agents can be used for intrusion detection, malware analysis, and network security.
Environmental monitoring: Agents can be used to monitor and manage natural resources, track climate change, and improve environmental sustainability.
Social media: Agents can be used to analyze social media data, identify trends and patterns, and provide personalized recommendations to users.

Types of agents
Agents can be grouped into the following classes based on their degree of perceived intelligence and capability:
Simple Reflex Agents
Model-Based Reflex Agents
Goal-Based Agents
Utility-Based Agents
Learning Agents
Multi-agent systems
Hierarchical agents
Fig: 11

Simple Reflex Agents Fig: 12
This is a simple type of agent which works on the basis of the current percept, not on the rest of the percept history. The agent function in this case is based on the condition-action rule, where a condition (the state) is mapped to an action such that the action is taken only when the condition is true. The agent function succeeds only if the environment is fully observable; if it is partially observable, the agent function can enter infinite loops that can be escaped only by randomizing its actions. Problems associated with this type include very limited intelligence, no knowledge of non-perceptual parts of the state, huge size for generation and storage, and inability to adapt to changes in the environment. Example: a thermostat in a heating system.
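The thermostat example above can be sketched as a minimal simple reflex agent: the current percept (a temperature reading) is mapped directly to an action via condition-action rules, with no percept history. The thresholds and action names are illustrative assumptions, not from the text.

```python
def thermostat_agent(percept):
    """Condition-action rules: percept is the current temperature in degrees C."""
    if percept < 18:       # condition: too cold  -> action: start heating
        return "heater_on"
    if percept > 24:       # condition: too warm  -> action: stop heating
        return "heater_off"
    return "do_nothing"    # comfortable range    -> no action needed

print(thermostat_agent(15))   # heater_on
```

Note that the function's output depends only on the argument passed in: the agent keeps no state between calls, which is exactly what makes it purely reactive.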
Model-Based Reflex Agents Fig: 13
A model-based agent utilizes the condition-action rule, where it works by finding a rule whose condition, based on the current situation, is satisfied. Unlike the first type, it can handle partially observable environments by tracking the situation, using a model of the world. It consists of two important factors: the model and the internal state. The model provides knowledge and understanding of how different things occur in the surroundings, so that the current situation can be studied and a condition can be created; actions are performed by the agent based on this model. The internal state uses the perceptual history to represent the current percept. The agent keeps track of this internal state, which is adjusted by each percept. The agent stores the current internal state inside itself to maintain a kind of structure that can describe the unseen world. The state of the agent can be updated by gaining information about how the world evolves and how the agent's actions affect the world. Example: a vacuum cleaner that uses sensors to detect dirt and obstacles and moves and cleans based on a model.

Learning Agents Fig: 14
A learning agent, as the name suggests, has the capability to learn from past experiences and takes actions or decisions based on its learning capabilities. Example: a spam filter that learns from user feedback. It gains basic knowledge from the past and uses that learning to act and adapt automatically. It comprises four conceptual components:
Learning element: Makes improvements by learning from the environment.
Critic: Provides feedback giving the performance measure of the agent with respect to a fixed performance standard.
Performance element: Selects the external action.
Problem generator: Suggests actions that lead to new and informative experiences.
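The model-based vacuum-cleaner example described above can be sketched for a toy two-cell world (cell names, percepts, and action names are illustrative assumptions, not from the text). The internal state (`self.model`) is updated by each percept, letting the agent act sensibly even though it only ever perceives its current cell.

```python
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: the agent's belief about each location.
        self.model = {"A": "unknown", "B": "unknown"}

    def act(self, percept):
        location, status = percept           # e.g. ("A", "dirty")
        self.model[location] = status        # update the internal state
        if status == "dirty":
            self.model[location] = "clean"   # model: sucking cleans the cell
            return "suck"
        if all(v == "clean" for v in self.model.values()):
            return "no_op"                   # model says the world is clean
        return "right" if location == "A" else "left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "dirty")))   # suck
print(agent.act(("A", "clean")))   # right
```

After the second call the model records cell A as clean but cell B as unknown, so the agent moves right to inspect the part of the world it has not yet perceived.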
Goal-Based Agents Fig: 15
This type takes decisions on the basis of its goal or desirable situations, so that it can choose actions that achieve the required goal. It is an improvement over the model-based agent, where information about the goal is also included. This is because it is not always sufficient to know just the current state; knowledge of the goal is a more beneficial approach. The aim is to reduce the distance between action and goal so that the best possible way can be chosen from multiple possibilities. Once the best way is found, the decision is represented explicitly, which makes the agent more flexible. It carries out consideration of different situations, called searching and planning, by considering long sequences of possible actions to confirm its ability to achieve the goal. This makes the agent proactive, and it can easily change its behavior if required. Example: a chess-playing AI whose goal is winning the game.

Utility-Based Agents Fig: 16
Utility-based agents have their end uses as their building blocks and are used when the best action and decision must be chosen from multiple alternatives. It is an improvement over the goal-based agent, as it involves not only the goal but also the way the goal can be achieved, so that the goal is reached in a quicker, safer, or cheaper way. The extra component of utility, a method to measure success at a particular state, is what makes the utility agent different. It takes the agent's happiness into account and gives an idea of how happy the agent is; hence, the action with maximum utility is chosen. This degree of happiness can be calculated by mapping a state onto a real number. Mapping a state onto a real number with the help of a utility function gives the efficiency of an action in achieving the goal.
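Mapping states onto real numbers and choosing the maximum-utility action can be sketched as follows. The states, actions, and weights below are illustrative assumptions, not from the text; the only point is that the agent ranks predicted outcomes numerically instead of merely checking whether a goal is met.

```python
def utility(state):
    """Map a state onto a real number; here, faster and cheaper is happier."""
    return -2.0 * state["time"] - 1.0 * state["cost"]

def choose_action(actions):
    """actions maps each action name to its predicted resulting state."""
    return max(actions, key=lambda a: utility(actions[a]))

routes = {
    "highway":    {"time": 1.0, "cost": 5.0},   # fast but expensive
    "back_roads": {"time": 2.0, "cost": 1.0},   # slower but cheap
}
print(choose_action(routes))   # back_roads (utility -5.0 beats -7.0)
```

Both routes reach the goal, so a goal-based agent could not distinguish them; the utility function is what lets the agent prefer one over the other.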
Example: a delivery drone that delivers packages to customers efficiently while optimizing factors like delivery time, energy consumption, and customer satisfaction.

Hierarchical Agents
A hierarchical agent is an advanced AI solution that helps businesses manage and optimize complex operations across various levels. It enables efficient allocation of tasks and responsibilities based on skill levels and proficiency. With hierarchical agents, businesses can monitor team performance, streamline communication, and boost productivity. Here are some use cases of hierarchical agents for business:
Sales and Marketing Optimization
Customer Service Enhancement
Supply Chain Management
Financial Decision-Making
Human Resources and Talent Management
Data Analytics and Insights
Process Automation
Risk Management
Product and Service Innovation

Future Trends in AI Agents Fig: 17
AI-Enabled Customer Experience (CX): The future of customer experience is heavily influenced by AI, with agents providing personalized recommendations and powering intelligent chatbots and virtual assistants. These advancements enhance customer satisfaction and loyalty through tailored interactions and responsive service.
Automation and Robotics: AI agents are transforming traditional processes across sectors, from manufacturing with industrial robots to autonomous vehicles. This trend increases efficiency, reduces human error, and paves the way for safer, more reliable operations.
Generative AI: Generative AI enables AI agents to create new content, including art, music, and written content, using models like GANs, RNNs, and CNNs. This trend has the potential to revolutionize fields like advertising, entertainment, and media.
AI-Assisted Decision-Making: AI agents will become integral in decision support systems across industries, analyzing complex datasets to identify trends and provide insights for more informed decision-making.
Ethical AI: Emphasis on ethical AI involves developing systems that are not only effective but also responsible and transparent.

Problem-solving agents
The problem-solving agent performs precisely by defining problems and their several solutions. In computer science, problem solving is a part of artificial intelligence that encompasses a number of techniques, such as algorithms and heuristics, to solve a problem. A problem-solving agent is therefore a goal-driven agent that focuses on satisfying the goal. To build a system to solve a particular problem, we need to do four things:
Define the problem precisely. This definition must include specification of the initial situations and also the final situations which constitute acceptable solutions to the problem.
Analyze the problem, i.e., identify the important features, which can have an immense impact on the appropriateness of various techniques for solving the problem.
Isolate and represent the knowledge needed to solve the problem.
Choose the best problem-solving technique and apply it to the particular problem.

Steps performed by a problem-solving agent Fig: 18
Goal Formulation: The first and simplest step in problem-solving. It organizes the steps/sequence required to formulate one goal out of multiple goals, as well as the actions to achieve that goal. Goal formulation is based on the current situation and the agent's performance measure.
Problem Formulation: The most important step of problem-solving, which decides what actions should be taken to achieve the formulated goal. Five components are involved in problem formulation:
Initial State: The starting state, the agent's first step towards its goal.
Actions: A description of the possible actions available to the agent.
Transition Model: Describes what each action does.
Goal Test: Determines whether a given state is a goal state.
Path Cost: Assigns a numeric cost to each path towards the goal.
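The five components of problem formulation can be sketched concretely for a toy two-cell vacuum world (an illustrative problem; the state encoding, action names, and unit costs are assumptions, not from the text). A state is a tuple (agent_location, dirt_in_A, dirt_in_B).

```python
INITIAL_STATE = ("A", True, True)          # 1. initial state

def actions(state):
    """2. Actions: the possible actions available to the agent."""
    return ["left", "right", "suck"]

def transition(state, action):
    """3. Transition model: describes what each action does to a state."""
    loc, dirt_a, dirt_b = state
    if action == "left":
        return ("A", dirt_a, dirt_b)
    if action == "right":
        return ("B", dirt_a, dirt_b)
    # "suck" removes the dirt at the current location.
    return (loc,
            False if loc == "A" else dirt_a,
            False if loc == "B" else dirt_b)

def goal_test(state):
    """4. Goal test: a state is a goal when no dirt remains anywhere."""
    return not state[1] and not state[2]

def path_cost(path):
    """5. Path cost: each action costs 1, so cost = number of actions."""
    return len(path)

s = transition(INITIAL_STATE, "suck")            # ("A", False, True)
s = transition(transition(s, "right"), "suck")   # ("B", False, False)
print(goal_test(s), path_cost(["suck", "right", "suck"]))   # True 3
```

The sequence suck, right, suck reaches a goal state from the initial state with path cost 3; a solution is optimal when no cheaper action sequence also passes the goal test.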
The problem-solving agent selects a cost function which reflects its performance measure. Remember, an optimal solution has the lowest path cost among all solutions.

The 8-puzzle is a 3 × 3 array containing eight square pieces, numbered 1 through 8, and one empty space. A piece can be moved horizontally or vertically into the empty space, in effect exchanging the positions of the piece and the empty space. There are four possible moves: UP (move the blank space up), DOWN, LEFT, and RIGHT. The aim of the game is to make a sequence of moves that will convert the board from the start state into the goal state. Fig: 19 This example can be solved by the operator sequence UP, RIGHT, UP, LEFT, DOWN.

Uninformed search strategies
Uninformed search is a class of general-purpose search algorithms which operate in a brute-force way. Uninformed search algorithms have no additional information about the state or search space other than how to traverse the tree, so they are also called blind search. The various types of uninformed search algorithms are:
Breadth-first search
Depth-first search
Depth-limited search
Iterative deepening depth-first search
Uniform cost search
Bidirectional search

Breadth-first search
Breadth-first search is the most common search strategy for traversing a tree or graph. This algorithm searches breadthwise in a tree or graph, so it is called breadth-first search. The BFS algorithm starts searching from the root node of the tree and expands all successor nodes at the current level before moving to nodes of the next level. The breadth-first search algorithm is an example of a general graph-search algorithm. Breadth-first search is implemented using a FIFO queue data structure. In the tree structure below, we show the traversal of the tree using the BFS algorithm from the root node S to the goal node K.
The BFS algorithm traverses in layers, so it will follow the path shown by the dotted arrow, and the traversed path will be: S ---> A ---> B ---> C ---> D ---> G ---> H ---> E ---> F ---> I ---> K Fig: 20
Advantages:
BFS will provide a solution if any solution exists.
If there is more than one solution for a given problem, then BFS will provide the minimal solution, i.e., the one requiring the least number of steps.
Disadvantages:
It requires a lot of memory, since each level of the tree must be saved in memory in order to expand the next level.
BFS needs a lot of time if the solution is far away from the root node.
Time Complexity: The time complexity of the BFS algorithm is given by the number of nodes traversed by BFS until the shallowest goal node, where d = depth of the shallowest solution and b = branching factor (number of successors at every state): T(b) = 1 + b + b^2 + ... + b^d = O(b^d)
Space Complexity: The space complexity of BFS is given by the memory size of the frontier, which is O(b^d).
Completeness: BFS is complete, which means that if the shallowest goal node is at some finite depth, then BFS will find a solution.
Optimality: BFS is optimal if the path cost is a non-decreasing function of the depth of the node.

Uniform cost search
Uniform-cost search is a searching algorithm used for traversing a weighted tree or graph. This algorithm comes into play when a different cost is available for each edge. The primary goal of uniform-cost search is to find a path to the goal node which has the lowest cumulative cost. Uniform-cost search expands nodes according to their path costs from the root node. It can be used to solve any graph/tree where an optimal-cost solution is required. A uniform-cost search algorithm is implemented with a priority queue, which gives maximum priority to the lowest cumulative cost. Uniform cost search is equivalent to the BFS algorithm if the path cost of all edges is the same. Fig: 21
Advantages:
Uniform cost search is optimal because at every state the path with the least cost is chosen.
Disadvantages:
It does not care about the number of steps involved in the search, only about the path cost, so the algorithm may get stuck in an infinite loop.
Completeness: Uniform-cost search is complete: if there is a solution, UCS will find it.
Time Complexity: Let C* be the cost of the optimal solution and ε the cost of each step towards the goal node. Then the number of steps is C*/ε + 1 (we add 1 because we start from state 0 and end at C*/ε). Hence, the worst-case time complexity of uniform-cost search is O(b^(1 + [C*/ε])).
Space Complexity: By the same logic, the worst-case space complexity of uniform-cost search is O(b^(1 + [C*/ε])).
Optimality: Uniform-cost search is always optimal, as it only selects a path with the lowest path cost.

Depth-first search
Depth-first search is a recursive algorithm for traversing a tree or graph data structure. It is called depth-first search because it starts from the root node and follows each path to its greatest depth node before moving to the next path. DFS uses a stack data structure for its implementation. The process of the DFS algorithm is similar to the BFS algorithm. Fig: 22
Advantages:
DFS requires very little memory, as it only needs to store a stack of the nodes on the path from the root node to the current node.
It takes less time to reach the goal node than the BFS algorithm (if it traverses the right path).
Disadvantages:
There is the possibility that many states keep re-occurring, and there is no guarantee of finding a solution.
The DFS algorithm searches deep down and sometimes may go into an infinite loop.
In the search tree below, we show the flow of depth-first search; it follows the order: root node ---> left node ---> right node. It starts searching from root node S and traverses A, then B, then D and E; after traversing E, it backtracks the tree, as E has no other successors and the goal node has not yet been found.
After backtracking, it will traverse node C and then G, where it will terminate, having found the goal node.
Completeness: The DFS algorithm is complete within a finite state space, as it will expand every node within a limited search tree.
Time Complexity: The time complexity of DFS is equivalent to the number of nodes traversed by the algorithm: T(b) = 1 + b + b^2 + ... + b^m = O(b^m), where m = maximum depth of any node; this can be much larger than d (the depth of the shallowest solution).
Space Complexity: The DFS algorithm needs to store only a single path from the root node, so the space complexity of DFS is equivalent to the size of the fringe set, which is O(b×m).
Optimality: The DFS algorithm is non-optimal, as it may take a large number of steps or incur a high cost to reach the goal node.

Depth-limited search
A depth-limited search algorithm is similar to depth-first search with a predetermined limit ℓ. Depth-limited search can solve the drawback of the infinite path in depth-first search. In this algorithm, a node at the depth limit is treated as if it has no further successor nodes. Fig: 23
Depth-limited search can terminate with two conditions of failure:
Standard failure value: Indicates that the problem does not have any solution.
Cutoff failure value: Indicates that there is no solution to the problem within the given depth limit.
Advantages: Depth-limited search is memory efficient.
Disadvantages:
Depth-limited search also has the disadvantage of incompleteness.
It may not be optimal if the problem has more than one solution.
Completeness: The DLS algorithm is complete if the solution is above the depth limit.
Time Complexity: The time complexity of the DLS algorithm is O(b^ℓ).
Space Complexity: The space complexity of the DLS algorithm is O(b×ℓ).
Optimality: Depth-limited search can be viewed as a special case of DFS, and it is also not optimal, even if ℓ > d.
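The uninformed strategies above differ mainly in the data structure that holds the frontier: BFS uses a FIFO queue, while uniform-cost search uses a priority queue keyed on cumulative path cost. A minimal sketch on a small weighted graph (the graph itself is an illustrative assumption, not the tree from the figures):

```python
import heapq
from collections import deque

GRAPH = {            # node -> list of (neighbour, edge cost)
    "S": [("A", 1), ("B", 4)],
    "A": [("C", 2)],
    "B": [("C", 1)],
    "C": [("G", 3)],
    "G": [],
}

def bfs(start, goal):
    """Breadth-first search: expand shallowest nodes first (FIFO queue)."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nbr, _ in GRAPH[path[-1]]:
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])

def ucs(start, goal):
    """Uniform-cost search: expand the cheapest frontier path first."""
    frontier = [(0, [start])]            # priority queue keyed on path cost
    best = {start: 0}
    while frontier:
        cost, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return cost, path
        for nbr, step in GRAPH[path[-1]]:
            new_cost = cost + step
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, path + [nbr]))

print(bfs("S", "G"))   # ['S', 'A', 'C', 'G']
print(ucs("S", "G"))   # (6, ['S', 'A', 'C', 'G'])
```

Swapping the deque's popleft for pop would turn bfs into a depth-first variant, which illustrates how closely these strategies are related: only the order in which the frontier is consumed changes.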
Bidirectional search
The bidirectional search algorithm runs two simultaneous searches, one from the initial state, called the forward search, and the other from the goal node, called the backward search, to find the goal node. Bidirectional search replaces one single search graph with two small subgraphs, in which one starts the search from the initial vertex and the other starts from the goal vertex. The search stops when the two graphs intersect each other. Fig: 24 Bidirectional search can use search techniques such as BFS, DFS, DLS, etc.
Advantages:
Bidirectional search is fast.
Bidirectional search requires less memory.
Disadvantages:
Implementation of the bidirectional search tree is difficult.
In bidirectional search, one should know the goal state in advance.
Completeness: Bidirectional search is complete if we use BFS in both searches.
Time Complexity: The time complexity of bidirectional search using BFS is O(b^(d/2)).
Space Complexity: The space complexity of bidirectional search is O(b^(d/2)).
Optimality: Bidirectional search is optimal.

Searching with partial information
If the environment is not fully observable or not deterministic, the following types of problems occur:
Sensorless problems: If the agent has no sensors, then it cannot know its current state, and hence would have to make many repeated action paths to ensure that the goal state is reached regardless of its initial state.
Contingency problems: These arise when the environment is partially observable or when actions are uncertain. After each action, the agent needs to verify what effects that action has caused. Rather than planning for every possible contingency after an action, it is usually better to start acting and see which contingencies do arise. This is called interleaving of search and execution. A problem is called adversarial if the uncertainty is caused by the actions of another agent.
Exploration problems: These can be considered an extreme case of contingency problems: when the states and actions of the environment are unknown, the agent must act to discover them.