Goal-Based Agents and State Space Search

Questions and Answers

What type of agent uses a mapping from states to actions?

Reflex agent

What is another name for goal-based agents?

Problem solving agents OR planning agents

Unlike reflex agents, what do planning agents consider when making decisions?

Hypothesized consequences of actions

What must a planning agent formulate in order to make decisions?

A goal

In state space search, how are problems typically solved?

By searching among alternative choices

Why might a human not search the entire state space when solving a problem?

Exhaustive search is impractical; only alternatives that experience has shown to be effective are explored

What are judgmental rules known as when they are used to limit the exploration of search space?

Heuristics

What does a heuristic strategy selectively explore?

The search space

Does a heuristic guarantee an optimal solution to a problem?

No

What kind of environments are considered when designing goal-based agents in the context of search problems?

Fully observable, deterministic, discrete, and known

What must an agent find to reach its goal?

Sequence of actions

What two factors define the 'performance measure' of an agent?

Reaching the goal and the cost of the path

What is the 'path cost' typically assumed to be in search problems?

A sum of nonnegative step costs

What is the name of the sequence of actions that gives the lowest path cost for reaching the goal?

Optimal solution

In the Romania example, name the starting and goal states.

Arad and Bucharest

In the Romania example, what constitutes the 'path cost'?

Sum of edge costs OR total distance traveled

What defines the state space of a problem?

The initial state, actions, and transition model

How can the state space be represented?

A directed graph

In graph theory, what is a 'graph' defined as?

A set of nodes and links that connect them

What is the primary task of a goal-based agent?

To identify the action or series of actions that lead to the goal

In the context of problem solving, what is 'goal formulation'?

Defining what the agent wants to achieve

What does the 'initial state' refer to in problem formulation?

The state in which the agent starts

What do 'states' refer to in the context of problem formulation?

All states reachable from the initial state by any sequence of actions OR State space

In problem formulation, what does Actions(s) return at a given state s?

The set of actions that can be executed in state s OR Action space

What does the 'transition model' describe?

What each action does OR Results(s, a)

What does the 'goal test' determine?

If a given state is a goal state

What is the purpose of the 'path cost' function?

Assigns a numeric cost to a path w.r.t. performance measure

In the 8-queens problem, what does 'states' refer to?

All arrangements of 0 to 8 queens on the board

In the 8-queens problem, what is the 'initial state'?

No queen on the board

What are the actions available in the 8-queens problem?

Add a queen to any empty square

What is the transition model in the 8-queens problem?

Updated board

When does a state pass the goal test in the 8-queens problem?

8 queens are on the board with none attacked

What are the available actions in the 8-puzzle problem?

Move Left, Right, Up or Down

What is the 'transition model' for the 8-puzzle problem?

Given a state and an action, returns the resulting state

In the 8-puzzle, when does a state meet the goal test?

The state matches the goal state

What is an example of a real-world problem, described in the provided text?

Route finding problem

What is the state space?

A physical configuration

What is the difference between state space and search space?

State space is a physical configuration, and search space is an abstract configuration represented by a search tree or graph of possible solutions

In the context of search trees, what does expand do?

A function that, given a node, creates all children nodes

What are the three regions that search space can be divided into, according to the provided text?

Explored, frontier, and unexplored.

Flashcards

Goal-based agents

Agents that work towards a goal.

Reflex Agent

Uses a mapping from states to actions. Chooses action based on current percept without considering future consequences.

Planning agent

Decisions based on (hypothesized) consequences of actions. Must have a model of how the world evolves in response to actions and must formulate a goal

State Space Search

Problems are solved by searching among alternative choices.

Heuristic Search

A strategy for selectively exploring the search space. It guides the search along lines that have a high probability of success and employs knowledge about the nature of a problem to find a solution.

Search Environments

We will consider the problem of designing goal-based agents in fully observable, deterministic, discrete, known environments.

Agent Action Sequence

The agent must find a sequence of actions that reaches the goal.

Performance measure

Defined by reaching the goal and how "expensive" the path to the goal is.

Optimal solution

Sequence of actions that gives the lowest path cost for reaching the goal.

Initial State

Starting point in the problem.

Actions

Possible actions available to the agent. At a state s, Actions(s) returns the set of actions that can be executed in state s. (Action space)

Transition model

Description of what each action does.

Goal test

Determines if a given state is a goal state

Path cost

A function that assigns a numeric cost to a path w.r.t. performance measure

State Space

The initial state, actions, and transition model define the state space of the problem; The set of all states reachable from initial state by any sequence of actions.

Graph

A set of nodes and links that connect them.

Initial state (problem formulation)

The state in which the agent starts

States (problem formulation)

The set of all states reachable from the initial state by any sequence of actions (the state space)

Actions (problem formulation)

Possible actions available to the agent. At a state s, Actions(s) returns the set of actions that can be executed in state s. (Action space)

Transition model (problem formulation)

A description of what each action does

Goal test (problem formulation)

Determines if a given state is a goal state

Path cost (problem formulation)

Function that assigns a numeric cost to a path w.r.t. performance measure

State space

A physical configuration

Search space

An abstract configuration represented by a search tree or graph of possible solutions

Search tree

Models the sequence of actions

Expand

A function that, given a node, creates all children nodes

Search Space Regions

The search space is divided into three regions: Explored, Frontier, Unexplored

Study Notes

Goal-Based Agents

  • Reflex agents use a mapping from states to actions
  • Goal-based agents consist of problem-solving agents or planning agents

Types of Agents

  • Reflex agents choose an action based on the current percept
  • Reflex agents don't consider the future consequences of actions
  • Planning agents make decisions based on hypothesized consequences of actions
  • Planning agents must have a model of how the world evolves in response to actions
  • Planning agents must formulate a goal
  • Problems are solved by searching among alternative choices
  • People contemplate various strategies to solve problems
  • Chess players consider a few alternative moves
  • Mathematicians choose from different strategies to find a proof for a theorem
  • Physicians evaluate several possible diagnoses

How Human Beings Think

  • Humans don't search the entire state space (exhaustive search)
  • Only alternatives that experience has shown to be effective are explored
  • Human problem-solving uses judgmental rules that limit the exploration of search space to portions that seem promising
  • Judgmental rules are known as heuristics
  • Heuristics are a strategy for selectively exploring the search space
  • Heuristics guide the search along lines that have a high probability of success
  • Heuristics employ knowledge about the nature of a problem to find a solution
  • Heuristics don't guarantee an optimal solution but come close most of the time
  • Humans use many heuristics in problem-solving
  • The search problems considered here assume goal-based agents in fully observable, deterministic, discrete, known environments
  • The agent must find a sequence of actions that reaches the goal
  • The performance measure includes reaching the goal
  • The measure also includes how "expensive" the path to the goal is
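
A minimal sketch of the heuristic idea above (the grid positions and Manhattan-distance estimate are illustrative assumptions, not part of the lesson): a heuristic is just a cheap scoring function used to decide which alternatives look most promising and should be explored first.

```python
def manhattan(state, goal):
    """Heuristic: estimated cost from `state` to `goal` on a grid.
    Cheap to compute and usually informative, but not guaranteed to be exact."""
    (x1, y1), (x2, y2) = state, goal
    return abs(x1 - x2) + abs(y1 - y2)

# Explore the most promising alternative first instead of searching exhaustively.
candidates = [(3, 4), (1, 1), (2, 5)]
candidates.sort(key=lambda s: manhattan(s, goal=(0, 0)))
print(candidates)  # [(1, 1), (3, 4), (2, 5)]
```

Ordering candidates by heuristic value is what makes the exploration selective: it usually finds a good solution quickly but gives up the guarantee of an optimal one.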

Search Problem Components

  • Search problem components consist of initial state, actions, transition model, goal state, solution path, and path cost
  • A transition model determines what state results from performing a given action in each state
  • The path cost is the sum of non-negative step costs
  • An optimal solution is the sequence of actions that gives the lowest path cost for reaching the goal
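
As a tiny worked sketch (the step costs below are made up), the path cost of a candidate action sequence is the sum of its nonnegative step costs, and the optimal solution is the candidate with the lowest total:

```python
# Hypothetical step costs for two candidate action sequences.
candidate_paths = {"plan_a": [4, 2, 5], "plan_b": [3, 4, 2]}

path_costs = {name: sum(steps) for name, steps in candidate_paths.items()}
optimal = min(path_costs, key=path_costs.get)
print(path_costs, "optimal:", optimal)  # {'plan_a': 11, 'plan_b': 9} optimal: plan_b
```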

Romania Example

  • An agent on vacation in Romania is currently in Arad and must catch a flight that leaves from Bucharest
  • Initial state will be in Arad
  • Actions consist of moving from one city to another
  • If you go from city A to city B, you end up in city B
  • The goal state is Bucharest
  • Path cost is the sum of edge costs indicating total distance traveled
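
A small sketch of this example, assuming road distances from the standard textbook map of Romania (they may differ from the lesson's figure): the road links form the transition structure, and the path cost of a route is the sum of its edge costs.

```python
# Partial road map: each entry lists (neighboring city, distance in km).
ROADS = {
    "Arad": [("Sibiu", 140), ("Timisoara", 118), ("Zerind", 75)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80), ("Arad", 140)],
    "Fagaras": [("Bucharest", 211), ("Sibiu", 99)],
}

def path_cost(route):
    """Sum of edge costs along a route, e.g. Arad -> Sibiu -> Fagaras -> Bucharest."""
    return sum(dict(ROADS[city])[nxt] for city, nxt in zip(route, route[1:]))

print(path_cost(["Arad", "Sibiu", "Fagaras", "Bucharest"]))  # 450
```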

State Space

  • Initial state, actions, and transition model define the state space of a problem
  • State space is the set of all states reachable from the initial state by any sequence of actions
  • State space can be represented as a directed graph where the nodes are states and links between nodes are actions
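
Because the state space is exactly the set of states reachable from the initial state, it can be enumerated from the initial state and the transition model. A minimal sketch with a hypothetical transition table, which is itself a directed graph (nodes are states, labeled links are actions):

```python
from collections import deque

# Hypothetical transition model: state -> {action: resulting state}.
TRANSITIONS = {
    "A": {"go_B": "B", "go_C": "C"},
    "B": {"go_D": "D"},
    "C": {"go_D": "D"},
    "D": {},
}

def reachable_states(initial):
    """All states reachable from `initial` by any sequence of actions."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for next_state in TRANSITIONS[state].values():
            if next_state not in seen:
                seen.add(next_state)
                queue.append(next_state)
    return seen

print(reachable_states("A"))  # {'A', 'B', 'C', 'D'}
```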

AI Problem Graph Theory

  • An AI problem can be represented as a state space graph
  • A graph is a set of nodes and links that connect them
  • Graph theory includes labeled graphs, directed graphs, paths, rooted graphs, trees, parents, children, siblings, ancestors, and descendants

Goal-Based Agents

  • Goal-based agents work towards a goal
  • Goal-based agents consider the impact of actions on future states
  • Agent’s job is to identify the action or series of actions that lead to the goal
  • Search helps formalize possible solutions

Examples

  • The 8-queens problem involves placing 8 queens on a chessboard
  • The board placement should ensure no queen attacks any other horizontally, vertically, or diagonally
  • Number of possible sequences to investigate equals 1.8 x 10^14
  • Defining the problem consists of goal formulation and problem formulation
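
The 1.8 x 10^14 figure can be checked directly, assuming it counts placement sequences in which each successive queen may go on any remaining square (64 choices for the first, 63 for the second, and so on):

```python
import math

# 64 * 63 * ... * 57: a choice of square for each of the 8 queens placed in turn.
sequences = math.prod(range(57, 65))
print(f"{sequences:,}  ~ {sequences:.2e}")  # 178,462,987,637,760  ~ 1.78e+14
```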

Problem Formulation

  • Initial state: the state in which the agent starts
  • States: all states reachable from the initial state by any sequence of actions i.e., the State space
  • Actions: possible actions available to the agent
  • For actions at a state "s", Actions(s) returns the set of actions that can be executed in state "s" (Action space)
  • Transition model: a description of what each action does Results(s, a)
  • Goal test: determines if a given state is a goal state
  • Path cost: function that assigns a numeric cost to a path w.r.t. performance measure
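
These components map naturally onto a small programming interface; the sketch below is our own Python rendering of the list above (the class name SearchProblem and the method names are assumptions, not from the lesson):

```python
class SearchProblem:
    """Problem formulation: initial state, actions, transition model,
    goal test, and path cost, as listed above."""

    def __init__(self, initial_state):
        self.initial_state = initial_state

    def actions(self, s):
        """Actions(s): the set of actions that can be executed in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """Transition model Results(s, a): the state produced by doing a in s."""
        raise NotImplementedError

    def goal_test(self, s):
        """True if s is a goal state."""
        raise NotImplementedError

    def step_cost(self, s, a, s_next):
        """Nonnegative cost of one step; a path cost is the sum of these."""
        return 1
```

The examples that follow (8-queens, 8-puzzle, route finding) each fill in these pieces.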

Chess Board Example

  • States: all arrangements of 0 to 8 queens on the board
  • Initial state: no queen on the board
  • Actions: add a queen to any empty square
  • Transition model: updated board
  • Goal test: 8 queens on the board with none attacked
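
A sketch of this incremental 8-queens formulation; representing a state as a tuple of (row, column) positions is our own choice, not something the lesson fixes:

```python
from itertools import product

N = 8
INITIAL = ()  # no queen on the board

def actions(state):
    """Add a queen to any empty square."""
    occupied = set(state)
    return [square for square in product(range(N), repeat=2) if square not in occupied]

def result(state, square):
    """Transition model: the updated board with one more queen."""
    return state + (square,)

def attacks(a, b):
    """True if two queens share a row, column, or diagonal."""
    (r1, c1), (r2, c2) = a, b
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def goal_test(state):
    """8 queens on the board with none attacking another."""
    return len(state) == N and not any(
        attacks(p, q) for i, p in enumerate(state) for q in state[i + 1:]
    )
```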

8-Puzzle Example

  • States: location of each of the 8 tiles in the 3x3 grid.
  • Initial state: any state
  • Actions: move left, right, up or down
  • Transition model: given a state and an action, returns the resulting state
  • Goal test: state matches the goal state
  • Path cost: total moves, each move costs 1
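
A sketch of the 8-puzzle formulation above, assuming a state is a 9-tuple read row by row with 0 standing for the blank, and interpreting Left/Right/Up/Down as moves of the blank (a common convention; the lesson does not spell this out):

```python
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)  # assumed goal layout, blank in the top-left corner
MOVES = {"Left": -1, "Right": 1, "Up": -3, "Down": 3}

def actions(state):
    """Moves of the blank that stay inside the 3x3 grid."""
    row, col = divmod(state.index(0), 3)
    legal = []
    if col > 0: legal.append("Left")
    if col < 2: legal.append("Right")
    if row > 0: legal.append("Up")
    if row < 2: legal.append("Down")
    return legal

def result(state, action):
    """Transition model: swap the blank with the neighboring tile."""
    i = state.index(0)
    j = i + MOVES[action]
    board = list(state)
    board[i], board[j] = board[j], board[i]
    return tuple(board)

def goal_test(state):
    return state == GOAL  # path cost: each move costs 1, so cost = number of moves
```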

Route Finding Example

  • States: In(city), e.g., In(Los Angeles), In(San Francisco), In(Denver), ...
  • Initial state: In(Boston)
  • Actions: Go(New York), etc.
  • Transition model:
    • Results(In(Boston), Go(New York)) = In(New York)
  • Goal test: In(Denver)
  • Path cost: path length in kilometers
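
A tiny sketch of this formulation, encoding the transition model Results(s, a) as a lookup table (the links included are hypothetical):

```python
# Results(In(city), Go(next)) = In(next), for a few hypothetical links.
RESULTS = {
    ("In(Boston)", "Go(New York)"): "In(New York)",
    ("In(New York)", "Go(Denver)"): "In(Denver)",
}

state = "In(Boston)"
for action in ["Go(New York)", "Go(Denver)"]:
    state = RESULTS[(state, action)]

print(state, state == "In(Denver)")  # In(Denver) True -- the goal test passes
```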

Real-world Examples

  • The route-finding problem for map search requires finding directions from one location to another using links, or transitions
  • Applications include tools for driving directions in websites and in-car systems

State Space vs. Search Space

  • State space: a physical configuration
  • Search space: a depiction using search trees or a graph of possible solutions
  • Search tree models the sequence of actions
    • Root: initial state
    • Branches: actions
    • Nodes: results from actions
      • A node has: parent, children, depth, path cost, and associated state in the state space
  • Expand: A function that given a node, creates all the children nodes
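
A sketch of a search-tree node and the expand function just described, assuming the SearchProblem interface sketched under Problem Formulation (the exact field names are our choice, mirroring the list above):

```python
class Node:
    """A search-tree node: parent, action, depth, path cost,
    and the associated state in the state space."""

    def __init__(self, state, parent=None, action=None, step_cost=0):
        self.state = state
        self.parent = parent
        self.action = action
        self.depth = 0 if parent is None else parent.depth + 1
        self.path_cost = step_cost if parent is None else parent.path_cost + step_cost

def expand(problem, node):
    """Given a node, create all of its children nodes."""
    children = []
    for action in problem.actions(node.state):
        next_state = problem.result(node.state, action)
        cost = problem.step_cost(node.state, action, next_state)
        children.append(Node(next_state, parent=node, action=action, step_cost=cost))
    return children
```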

Search Space Regions

  • Search space consists of the sections: explored, frontier, and unexplored
    • Explored: Closed List, Visited Set
    • Frontier: Open List, the Fringe
    • Unexplored
  • The essence of search is moving nodes from unexplored to the frontier to explored
  • The essence of search strategy is deciding the order of such moves
  • Adopted color coding includes orange nodes for explored, grey nodes for the frontier, white nodes for unexplored, and black nodes for failures
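
The essence of search described above fits in a short skeleton: nodes move from unexplored into the frontier (open list) and from the frontier into the explored set (closed list), and the search strategy is simply the rule for picking which frontier node to move next. The sketch below assumes the SearchProblem, Node, and expand helpers sketched earlier and uses a first-in-first-out frontier; other strategies only change how the next node is chosen.

```python
from collections import deque

def graph_search(problem):
    """Generic search skeleton: returns a goal node, or None if the frontier empties."""
    frontier = deque([Node(problem.initial_state)])  # open list / the fringe
    explored = set()                                 # closed list / visited set

    while frontier:
        node = frontier.popleft()  # strategy choice: FIFO here; others reorder this pick
        if problem.goal_test(node.state):
            return node
        explored.add(node.state)
        for child in expand(problem, node):
            # Move a state from unexplored into the frontier at most once.
            if child.state not in explored and all(child.state != n.state for n in frontier):
                frontier.append(child)
    return None
```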
