Document Details


Uploaded by SuperiorSard5855

Pázmány Péter Katolikus Egyetem Informatikai és Bionikai Kar

Kristóf Karacs

Tags

intelligent agents, artificial intelligence, computer science, agents

Summary

This document provides an overview of intelligent agents, their characteristics, and different types. It also covers rationality, the PEAS framework, agent complexity levels, and properties of environments.

Full Transcript


Intelligent agents
Artificial intelligence
Kristóf Karacs
PPKE-ITK

Recap
- What is intelligence?
- What can we use it for?
- How does it work? How can we create it?
- How can we control / repair / improve it?
- What are the consequences?
- Do we need to be afraid of it?
- What can we do?

Program
- Problem solving by search
- Adversarial search
- Logic and inference
- Search in logic representation, planning
- Inference in case of constraints
- Bayesian networks
- Fuzzy logic
- Machine learning

Outline
- Agents and environments
- Rationality
- PEAS: performance measure, environment, actuators, sensors
- Models of agents
- Aspects of environments

Intelligent agents
- An agent is anything that can be viewed as
  - perceiving its environment through sensors, and
  - acting upon the environment through actuators.

How do agents work?
(Diagram: the environment delivers percepts to the agent's sensors; the agent decides on actions and carries them out through its actuators, acting back on the environment.)

Types of agents
- Human
- Robot
- Software

Rational agent
- A rational agent is one that does the right thing
- Assessing the agent's performance
  - Performance measure: objectively tells how successful the agent is

Evaluation of rationality
- Goals and a performance measure
- Prior knowledge about the environment
- Abilities: possible primitive actions
- History
  - Percept sequence
  - Past experiences (data to learn from)

Internal Structure
- Agent = Architecture + Program: the architecture is the hardware and background
  software (SW); the program is the actual algorithm
- Knowledge of the environment
  - Source: given a priori, or learned from sensory input
  - May include: present / past states of the environment; influence of actions on the environment

PEAS grouping
- P: performance measure
- E: environment
- A: actuators
- S: sensors

Complexity levels
- Reflex agents
  - Lookup table: if-then rules
  - Problems: size, time, flexibility
- Model-based reflex agents
  - Internal state
- Goal-based agents
  - Search and planning
- Utility-based agents
  - Non-binary measure

Reflexes
- Action depends only on sensory input
- Background knowledge not used
- Humans: flinching, blinking
- Chess: openings, endings
  - Lookup table (not a good idea in general)
  - About 35^100 entries would be required for the entire game

Reflex Agents
(Diagram: sensors deliver percepts answering "What is the world like now?"; condition-action (if-then) rules answer "What action should I do now?"; actuators carry out the action in the environment.)

Model-based Reflex Agents
(Diagram: as above, but an internal state, together with knowledge of "How does the world evolve?" and "What do my actions do?", feeds the condition-action rules.)

Goal of an agent
- The environment in itself is often not enough to decide what to do
- A goal is described by some properties
- A goal-based agent
  - uses knowledge about a goal to guide its actions (search and planning)
  - compares the results of possible actions
- Principle: the action taken should modify the environment towards the goal

Goal-based Agents
(Diagram: the agent additionally predicts "What will the world be like if I do action A?" and compares predicted states against its goals, using search and planning.)

Utility Functions
- Knowledge of a goal may be difficult to pin down (e.g.
  checkmate in chess)
- The agent may have multiple, conflicting goals
- Comparing the utility of states
  - Utility functions measure the value of world states
  - Localized measures
- Choose the action which best improves utility (best-first search)

Utility-based Agents
(Diagram: on top of the goal-based architecture, a utility function answers "How happy will I be in that state?")
- Utility function: X (state space) → ℝ

Other aspects
- Hybrid agents
  - Hierarchical architecture
  - Trade-off between efficiency and flexibility
- Capability of learning
- Multi-agent systems
  - Competitive vs. cooperative relationships

Hierarchical control
- Delivery robot example, from top layer to bottom: follow the plan; go to the target while avoiding obstacles; steer, accelerate, brake, and detect obstacles and position (all layers interact with the environment)

Learning Agents
(Diagram: a critic compares percepts against a performance standard and gives feedback to the learning element, which changes the performance element using its knowledge and learning goals; a problem generator suggests exploratory actions for the actuators.)

Autonomy of Agents
- Autonomy = the extent to which the agent's behaviour is determined by its own experience
- Extreme cases
  - No autonomy: ignores input (the environment)
  - Complete autonomy: acts randomly / no program
- Ideal agent: some autonomy, gradually increasing over time
  - Example: a baby learning to crawl and navigate

Details of the Environment
- Properties of the world differ (real-world robot vs. software agent)
  - Fully observable vs. partially observable
  - Deterministic vs. stochastic
  - Episodic vs. sequential
  - Static vs. dynamic
  - Discrete vs. continuous
  - Single agent vs.
  multiple agents

Observability (sensing uncertainty)
- An environment is fully observable if the agent can access every piece of information in its environment that it takes into account when choosing an action
- It is partially observable if parts of the environment are not observable
- Unobservable information must be guessed → the agent needs a model
- Example: chess (fully) vs. poker (partially)

Determinism (effect uncertainty)
- An environment is deterministic if a change in the world state depends only on
  - the current state of the world
  - the agent's action
- Non-deterministic environments
  - have aspects beyond the control of the agent
  - a non-observable environment can seem to be non-deterministic
  - can be treated as stochastic or probabilistic
- Example: chess (deterministic) vs. poker (non-deterministic)

Episodicity
- An environment is episodic if the choice of the current action does not depend on previous actions
- In sequential environments
  - the agent has to plan ahead
  - the current choice affects future actions
- Example: mail sorting system (episodic) vs. poker, chess (sequential)

Time variance
- Static environments don't change over time
- In dynamic environments, changes have to be taken into account by either
  - sensing the change,
  - predicting the change, or
  - neglecting the change (in the short run)
- Example: poker, chess (static) vs. taxi driving (dynamic)

Continuity
- Refers to the type of sensor data and the choices of action
- Discrete: a distinct, clearly defined set
- Continuous: cannot be divided into distinct sections
- Example: chess (discrete) vs. taxi driving (continuous)

Number of agents
- Single agent: the environment is not changed by other actors
- Multi-agent: the agent is aware of other agents, who also modify the environment
  - Modelling question: multi-agent vs. stochastic single-agent
- Examples: solitaire (single) vs.
  poker (multi-agent)

Summary
- Agent: defined in connection with the environment; perceives and acts
- Rationality
- Basic modes: reflex, model-based, goal-based, and utility-based
  - Hierarchical control
- Learning
- Environments: observable? deterministic? static? episodic? continuous? multi-agent?
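The reflex agent described in the slides maps the current percept straight to an action through an if-then lookup table, ignoring history and background knowledge. A minimal sketch of that idea follows; the specific percepts and actions in the table are illustrative assumptions, not from the lecture.

```python
# Simple reflex agent: the action depends only on the current percept.
# The rule table is a stand-in for the slides' if-then lookup table;
# its entries (percept names, action names) are made up for illustration.

REFLEX_RULES = {
    "obstacle_ahead": "brake",
    "light_is_red": "stop",
    "light_is_green": "go",
}

def reflex_agent(percept):
    """Look up the current percept and return the matching action."""
    return REFLEX_RULES.get(percept, "do_nothing")
```

This also makes the slides' criticism concrete: the table's size grows with the number of distinguishable percepts, and any percept missing from the table falls through to a default.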
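The model-based reflex agent extends the reflex agent with an internal state so it can act under partial observability. The sketch below uses a deliberately tiny world model (a counter of consecutive "wall" percepts) as an assumed example; the slides do not prescribe a particular model.

```python
# Model-based reflex agent: an internal state summarizes percept history.
# The "model" here (counting consecutive wall sightings) is a toy
# assumption chosen only to show state being updated and consulted.

class ModelBasedAgent:
    def __init__(self):
        self.state = {"walls_seen": 0}  # internal model of the world

    def update_state(self, percept):
        # "How the world evolves" / "what my actions do", collapsed
        # into one trivial update rule for this sketch.
        if percept == "wall":
            self.state["walls_seen"] += 1
        else:
            self.state["walls_seen"] = 0

    def act(self, percept):
        self.update_state(percept)
        # The condition-action rule consults the internal state, not
        # just the raw percept: turn around after two walls in a row.
        if self.state["walls_seen"] >= 2:
            return "turn_around"
        return "forward"
```

Note that a pure reflex agent could not express this behaviour: the same percept ("wall") leads to different actions depending on the remembered state.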
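The utility-based agent slide defines a utility function from the state space X to ℝ and has the agent pick the action that best improves utility. A greedy one-step version of that choice can be sketched as below; the transition table and utility values are invented numbers for illustration.

```python
# Utility-based action selection: predict the successor state of each
# candidate action and pick the action whose successor has the highest
# utility. TRANSITIONS and UTILITY are assumed toy data, not from the
# lecture; a real agent would derive them from its world model.

TRANSITIONS = {("s0", "left"): "s1", ("s0", "right"): "s2"}
UTILITY = {"s1": 0.3, "s2": 0.9}  # utility function: state -> real number

def best_action(state, actions):
    """Greedy one-step lookahead: argmax over predicted-state utility."""
    return max(actions, key=lambda a: UTILITY[TRANSITIONS[(state, a)]])
```

The slides' best-first search generalizes this: instead of a single step, the agent expands the most promising states first, ordered by the same utility measure.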
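The PEAS grouping can be written down as a small record per agent. The sketch below fills it in for the delivery robot from the hierarchical-control slide; the concrete list entries are plausible assumptions, since the lecture does not spell out a full PEAS description for it.

```python
# PEAS description as a data structure. The delivery-robot values are
# an illustrative filling-in based on the hierarchical-control example
# in the slides (steer, accelerate, brake, detect obstacles/position);
# the exact lists are assumptions, not taken from the source.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance_measure: list
    environment: list
    actuators: list
    sensors: list

delivery_robot = PEAS(
    performance_measure=["packages delivered", "time taken"],
    environment=["corridors", "obstacles", "people"],
    actuators=["wheels (steer, accelerate, brake)"],
    sensors=["position sensor", "obstacle detector"],
)
```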
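The environment properties can likewise be tabulated for the lecture's own running examples. The classifications below follow the slides (chess: fully observable, deterministic; poker: partially observable, non-deterministic; both sequential, static, discrete, multi-agent); the dictionary encoding itself is just one possible layout.

```python
# Environment properties for the slides' running examples. The values
# restate the lecture's classifications; only the encoding is new.

ENVIRONMENTS = {
    "chess": {
        "observable": "fully", "deterministic": True,
        "episodic": False, "static": True,
        "discrete": True, "multi_agent": True,
    },
    "poker": {
        "observable": "partially", "deterministic": False,
        "episodic": False, "static": True,
        "discrete": True, "multi_agent": True,
    },
}

def needs_model(env_name):
    """Per the observability slide: unobservable information must be
    guessed, so a partially observable environment forces a model."""
    return ENVIRONMENTS[env_name]["observable"] == "partially"
```

This directly encodes the slide's conclusion: a chess agent can in principle act from the board alone, while a poker agent must maintain a model of the hidden cards.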
