AI Lecture 2 PDF

Document Details


Uploaded by FavoriteSugilite4851

Helwan University

Dr. Mohamed Awni

Tags

artificial intelligence, agent-based systems, search algorithms, AI lectures

Summary

This document provides an introduction to artificial intelligence, focusing on agent types, state spaces, and search problems. It explains the different types of agents and the factors that contribute to rationality, and works through examples of search problems and their results.

Full Transcript


Introduction to Artificial Intelligence
Lecture 2
Presented by Dr. Mohamed Awni

Outline
- Recap
- Agent types
- State spaces
- State space graphs and search trees
- Search problems

Recap Question
What's the difference between the agent function and the agent program?

Recap: Rationality
Rationality: do the action that causes the agent to be most successful.
Rationality depends on 4 things:
1. Performance measure of success. (Performance measure)
2. Agent's prior knowledge of the environment. (Environment)
3. Actions the agent can perform. (Actuators)
4. Agent's percept sequence to date. (Sensors)
Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

Recap: Environment types
- Partially observable environments: the agent does not have full information about the state, and thus must maintain an internal estimate of the state of the world.
- Fully observable environments: the agent has full information about the state.
- Stochastic environments: have uncertainty in the transition model.
- Deterministic environments: taking an action in a state has a single outcome that is guaranteed to happen.
- Multi-agent environments: the agent acts in the environment along with other agents. For this reason, the agent might need to randomize its actions in order to avoid being "predictable" by other agents.
- Static environments: the environment does not change as the agent acts on it.
- Dynamic environments: the environment changes as the agent interacts with it.

Agent types
- Reflex agents:
  - Simple reflex agent
  - Model-based reflex agent
- Planning-ahead agents:
  - Goal-based agent
  - Utility-based agent

Agent types
Reflex agent: doesn't think about the consequences of its actions, but rather selects an action based solely on the current state of the world.
Planning-ahead agents:
- Maintain a model of the world.
- Use this model to simulate performing various actions.
- The agent can determine the hypothesized consequences of the actions and select the best one.
- This is simulated "intelligence" in the sense that it's exactly what humans do when trying to determine the best possible move in any situation: thinking ahead.

Reflex Agents
Reflex agents:
- Choose an action based on the current percept (and maybe memory).
- May have memory or a model of the world's current state.
- Do not consider the future consequences of their actions.
- Consider how the world IS.
Can a reflex agent be rational?
[Video of Demo: Reflex Optimal]
[Video of Demo: Reflex Odd]

Simple Reflex Agent
- Selects actions using only the current percept.
- Does not consider the future consequences of its actions.
- Works on condition-action rules.
- Will only work correctly if the environment is fully observable.
- Example: a thermostat in a room. It senses the current temperature (percept) and triggers the heating or cooling system if the temperature is outside a predefined range (sketched in code below).

Model-based Reflex Agent
- Can handle partially observable environments using a model of the world: it maintains some internal state that keeps track of the part of the world it can't see now.
- Needs a model (encodes knowledge about how the world works).
- Example: an autonomous vacuum cleaner. It maintains an internal map of the room and uses sensors to detect obstacles and dirty areas. Based on this model, it decides where to move and when to clean.
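As a quick illustration of condition-action rules, here is a minimal Python sketch of the thermostat example from the Simple Reflex Agent slide. The temperature thresholds, action names, and function name are illustrative assumptions, not anything specified in the lecture.

```python
# Minimal sketch of a simple reflex agent: the thermostat example.
# Thresholds and action names are illustrative assumptions.

LOW, HIGH = 18.0, 24.0  # assumed comfort range, degrees Celsius

def thermostat_agent(percept: float) -> str:
    """Condition-action rules over the current percept only.

    The agent never consults past percepts or future consequences:
    the same temperature always triggers the same action.
    """
    if percept < LOW:
        return "heat"
    if percept > HIGH:
        return "cool"
    return "off"

if __name__ == "__main__":
    for temp in [15.0, 21.0, 27.0]:
        print(temp, "->", thermostat_agent(temp))
```

Note that nothing here looks ahead: making this agent rational depends entirely on the environment being fully observable through the single temperature percept.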
Planning Agents
Planning agents:
- Ask "what if".
- Decisions based on (hypothesized) consequences of actions.
- Must have a model of how the world evolves in response to actions.
- Must formulate a goal (test).
- Consider how the world WOULD BE.
- Optimal vs. complete planning.
- Planning vs. replanning.
[Video of Demo: Mastermind]

Goal-based Agent
- Goal information guides the agent's actions (looks to the future).
- Flexible: simply reprogram the agent by changing goals.
- Example: a delivery drone. Its goal is to deliver packages to specified locations. It assesses its current position, the destination, and any obstacles in between, then plans a route to reach the goal efficiently.
[Video of Demo: Replanning]

Utility-based Agent
- What if there are many paths to the goal?
- Utility measures which states are preferable to other states.
- Maps a state to a real number (utility or "happiness").
- Example: a self-driving car. It evaluates various actions (e.g., changing lanes, accelerating, braking) based on their expected utilities, such as reaching the destination quickly while minimizing fuel consumption and ensuring passenger safety.

What you should know
- What it means to be rational.
- Be able to do a PEAS description of a task environment.
- Be able to determine the properties of the environment.
- Know which agent program is appropriate for your task.

Search Problem
- In order to create a rational planning agent, we need a way to mathematically express the given environment in which the agent will exist.
- To do this, we must formally express a search problem.
- Given our agent's current state (its configuration within its environment), how can we arrive at a new state that satisfies its goals in the best possible way?

Search Problem
A search problem consists of the following elements:
- State space: the set of all possible states in your given world.
- Set of actions available in each state.
- Transition model: outputs the next state when a specific action is taken at the current state.
- Action cost: incurred when moving from one state to another after applying an action.
- Start state: the state in which the agent exists initially.
- Goal test: a function that takes a state as input and determines whether it is a goal state.

Search Problems
A search problem consists of:
- A state space.
- A successor function (with actions and costs, e.g. "N", 1.0 or "E", 1.0).
- A start state.
- A goal test.
A solution is a sequence of actions (a plan) which transforms the start state into a goal state.

What's in a State Space?
The world state includes every last detail of the environment; a search state keeps only the details needed for planning (abstraction).
- Problem: Pathing
  - States: (x, y) location
  - Actions: NSEW
  - Successor: update location only
  - Goal test: is (x, y) = END?
- Problem: Eat-All-Dots
  - States: {(x, y), dot booleans}
  - Actions: NSEW
  - Successor: update location and possibly a dot boolean
  - Goal test: dots all false

State Space Sizes?
- World state:
  - Agent positions: 120
  - Food count: 30
  - Ghost positions: 12
  - Agent facing: NSEW
- How many world states? 120 × 2^30 × 12^2 × 4 (see the Python check below)
- States for pathing? 120
- States for eat-all-dots? 120 × 2^30

Safe Passage
- Problem: eat all dots while keeping the ghosts perma-scared.
- What does the state space have to specify?
  (agent position, dot booleans, power pellet booleans, remaining scared time)
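As a sanity check of the State Space Sizes slide, the counts can be reproduced in a few lines of Python. All the numbers (120 positions, 30 dots, two ghosts with 12 positions each, 4 facings) come from the slide; the variable names are my own.

```python
# Reproducing the state-space size arithmetic from the slide.
positions = 120       # possible agent (x, y) locations
dots = 30             # each dot is eaten or not: 2**30 combinations
ghost_positions = 12  # per ghost; the slide's 12**2 implies two ghosts
facings = 4           # N, S, E, W

world_states = positions * 2**dots * ghost_positions**2 * facings
pathing_states = positions             # location is all that matters
eat_all_dots_states = positions * 2**dots

print(f"world states:        {world_states:,}")
print(f"pathing states:      {pathing_states:,}")
print(f"eat-all-dots states: {eat_all_dots_states:,}")
```

The gap between the full world-state count and the 120 pathing states is the point of the slide: abstraction keeps only what planning needs.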
State Space Graphs
- A state space graph is a mathematical representation of a search problem:
  - Nodes are (abstracted) world configurations.
  - Arcs represent successors (action results).
  - The goal test is a set of goal nodes (maybe only one).
- In a state space graph, each state occurs only once!
- We can rarely build this full graph in memory (it's too big), but it's a useful idea.
[Figure: tiny search graph for a tiny search problem, with nodes S, a, b, c, d, e, f, G, h, p, q, r]

Search Trees
[Figure: a search tree growing from the root ("this is now" / start) along edges "N", 1.0 and "E", 1.0 toward possible futures]
A search tree:
- A "what if" tree of plans and their outcomes.
- The start state is the root node.
- Children correspond to successors.
- Nodes show states, but correspond to PLANS that achieve those states.
- For most problems, we can never actually build the whole tree.

Example: Tree Search

State Space Graphs vs. Search Trees

Solving Search Problems
- A search problem is solved by first considering the start state.
- The state space is explored using the action, transition, and cost methods.
- Children of various states are computed iteratively until we arrive at a goal state, at which point we will have determined a path from the start state to the goal state (typically called a plan).
- The order in which states are considered is determined using a predetermined strategy.

Search Problem Formulation
A search problem has 5 components (see the Python sketch below):
1. A finite set of states S.
2. A non-empty set of initial states I ⊆ S.
3. A non-empty set of goal states G ⊆ S.
4. A successor function succ(s), which takes a state s as input and returns as output the set of states you can reach from state s in one step.
5. A cost function cost(s, s'), which returns the non-negative one-step cost of travelling from state s to s'. The cost function is only defined if s' is a successor state of s.

Example 1: Traveling in Romania
- State space: cities.
- Successor function: roads (go to an adjacent city with cost = distance).
- Start state: Arad.
- Goal test: is state == Bucharest?
- Solution?

Example 2: Oregon

Results of a Search Problem

General Tree Search
[Figure: tree search on the tiny search graph with nodes S, a, b, c, d, e, f, G, h, p, q, r]
- Important ideas:
  - Fringe (frontier)
  - Expansion
  - Exploration strategy
- Main question: which fringe nodes to explore?

Uninformed Search
When we have no knowledge of the location of goal states in our search tree, we are forced to select our tree-search strategy from one of the techniques that fall under the umbrella of uninformed search:
- Depth-first search
- Breadth-first search
- Uniform cost search
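To ground the five-component formulation above, here is a minimal Python sketch. The GridSearchProblem class and its 2x2 grid world are illustrative assumptions for this document, not part of the lecture.

```python
# Minimal sketch of the 5-component search problem formulation.
# The concrete grid-world instance is an illustrative assumption.

from typing import Iterable, List, Tuple

State = Tuple[int, int]  # a state is an (x, y) location

class GridSearchProblem:
    """States S, initial states I, goal states G, succ(s), cost(s, s')."""

    def __init__(self, width: int, height: int, start: State, goal: State):
        self.states = {(x, y) for x in range(width) for y in range(height)}  # S
        self.initial_states = {start}                                        # I ⊆ S
        self.goal_states = {goal}                                            # G ⊆ S

    def succ(self, s: State) -> List[State]:
        """All states reachable from s in one N/S/E/W step."""
        x, y = s
        candidates = [(x, y + 1), (x, y - 1), (x + 1, y), (x - 1, y)]
        return [c for c in candidates if c in self.states]

    def cost(self, s: State, s2: State) -> float:
        """Non-negative one-step cost; defined only for successors of s."""
        assert s2 in self.succ(s), "cost(s, s') requires s' to be a successor of s"
        return 1.0

    def is_goal(self, s: State) -> bool:
        return s in self.goal_states
```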
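Building on the GridSearchProblem sketch above, the fringe/expansion loop from the General Tree Search slide might look like the following. Choosing a FIFO fringe makes the exploration strategy breadth-first search, one of the uninformed strategies listed above; a stack would give depth-first, a priority queue on cost would give uniform cost search.

```python
# Generic tree search: the exploration strategy is simply the order in
# which fringe nodes are popped. A FIFO fringe gives breadth-first search.

from collections import deque

def tree_search(problem, start):
    """Return a plan (list of states from start to a goal), or None."""
    fringe = deque([[start]])        # fringe holds partial plans, not states
    while fringe:
        plan = fringe.popleft()      # exploration strategy: FIFO -> BFS
        state = plan[-1]
        if problem.is_goal(state):   # goal test when the node is popped
            return plan
        for nxt in problem.succ(state):   # expansion via the successor function
            fringe.append(plan + [nxt])
    return None

if __name__ == "__main__":
    # Uses the GridSearchProblem class from the previous sketch.
    problem = GridSearchProblem(width=2, height=2, start=(0, 0), goal=(1, 1))
    print(tree_search(problem, (0, 0)))  # e.g. [(0, 0), (0, 1), (1, 1)]
```

As on the slides, this builds a tree of plans rather than a graph of states, so states may be revisited along different plans; breadth-first ordering still finds a shortest plan here.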
References
- Agents in Artificial Intelligence – GeeksforGeeks
- CS 188 Fall 2022 | Introduction to Artificial Intelligence at UC Berkeley