Module 1 Topic 2: Introduction to Intelligent Agents

Technical University of Mombasa

Summary

This document provides an introduction to intelligent agents, exploring concepts such as agent performance, examples, agent faculties, and agent environments. It delves into various agent architectures and concludes with a brief summary.

Full Transcript


CCS 4302: ARTIFICIAL INTELLIGENCE

UNIT 2: INTRODUCTION TO INTELLIGENT AGENTS

CONTENTS
1.0 Introduction
2.0 Objectives
3.0 Main Content
3.1 Introduction to Agent
3.1.1 Agent Performance
3.1.2 Examples of Agents
3.1.3 Agent Faculties
3.1.4 Intelligent Agents
3.1.5 Rationality
3.1.6 Bounded Rationality
3.2 Agent Environment
3.2.1 Observability
3.2.2 Determinism
3.2.3 Episodicity
3.2.4 Dynamism
3.2.5 Continuity
3.2.6 Presence of Other Agents
3.3 Agent Architectures
3.3.1 Table-Based Agent
3.3.2 Percept-Based Agent or Reflex Agent
3.3.3 Subsumption Architecture
3.3.4 State-Based Reflex Agent
3.3.5 Goal-Based Agent
3.3.6 Utility-Based Agent
3.3.7 Learning Agent
4.0 Conclusion
5.0 Summary
6.0 Tutor-Marked Assignment
7.0 References/Further Reading

1.0 INTRODUCTION

This unit introduces you to intelligent agents (IAs): what they are, how they interact with their environment, and the architectures used to build them. An IA is an autonomous entity which observes and acts upon an environment, and which may use knowledge to achieve its goals. Agents may be very simple or very complex.

2.0 OBJECTIVES

At the end of this unit, you should be able to:
- explain what an agent is and how it interacts with the environment
- identify the percepts available to the agent, and the actions the agent can execute, given a problem situation
- describe the performance measure used to evaluate an agent
- list the main kinds of agents
- identify the characteristics of the environment.

3.0 MAIN CONTENT

3.1 Introduction to Agent

An agent perceives its environment through sensors. The complete set of inputs at a given time is called a percept. The current percept, or a sequence of percepts, can influence the actions of an agent. The agent can change the environment through actuators or effectors; an operation involving an effector is called an action, and actions can be grouped into action sequences. The agent can have goals which it tries to achieve. Thus, an agent can be looked upon as a system that implements a mapping from percept sequences to actions. A performance measure has to be used in order to evaluate an agent.
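The percept-to-action mapping just described can be sketched in code. This is a minimal illustration only; the thermostat example anticipates the reflex machine mentioned in the unit's conclusion, and the class and method names are assumptions, not part of the unit:

```python
# A minimal percept-to-action mapping: a thermostat-style reflex agent.
# Percepts are temperature readings (sensor input); actions are "heat"
# or "off" (sent to an effector). All names here are illustrative.

class ThermostatAgent:
    """Implements an agent function: percept -> action."""

    def __init__(self, target):
        self.target = target  # the agent's goal temperature

    def act(self, percept):
        # The whole agent function is this single percept-to-action rule.
        return "heat" if percept < self.target else "off"

agent = ThermostatAgent(target=20)
print(agent.act(15))  # heat
print(agent.act(22))  # off
```

Even this trivial agent fits the definition above: it senses (the temperature), acts (through the heater), and implements a fixed mapping from percepts to actions.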
An autonomous agent decides autonomously which action to take in the current situation to maximize progress towards its goals.

3.1.1 Agent Performance

An agent function implements a mapping from perception history to action. The behaviour and performance of intelligent agents have to be evaluated in terms of the agent function. The ideal mapping specifies which actions an agent ought to take at any point in time. The performance measure is a subjective measure used to characterize how successful an agent is. Success can be measured in various ways: in terms of the speed or efficiency of the agent, by the accuracy or quality of the solutions it achieves, or by power usage, money spent, and so on.

3.1.2 Examples of Agents

1. Humans can be looked upon as agents. They have eyes, ears, skin, taste buds, etc. for sensors, and hands, fingers, legs and mouth for effectors.
2. Robots are agents. Robots may have cameras, sonar, infrared sensors, bumpers, etc. for sensors, and grippers, wheels, lights, speakers, etc. for actuators. Some examples are Xavier from CMU (Figure 2: Xavier Robot (CMU)) and COG from MIT. There is also the AIBO entertainment robot from SONY (Figure 3: AIBO from SONY).
3. Software agents, or softbots, have some functions acting as sensors and some functions acting as actuators. Askjeeves.com is an example of a softbot.
4. Expert systems, such as a cardiologist expert system, are agents.
5. Autonomous spacecraft are agents.
6. Intelligent buildings are agents.

3.1.3 Agent Faculties

The fundamental faculties of intelligence are:
- acting
- sensing
- understanding, reasoning and learning
Blind action is not a characterization of intelligence. In order to act intelligently, one must sense, and understanding is essential to interpret the sensory percepts and decide on an action. Many robotic agents stress sensing and acting, and do not have understanding.
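The evaluation idea in 3.1.1, scoring an agent function with a performance measure over a percept sequence, can be sketched as follows. The vacuum-style agent and the scoring function are illustrative assumptions, not taken from the unit:

```python
# Evaluate an agent function by applying a performance measure to the
# actions it takes over a sequence of percepts (all names illustrative).

def run_episode(agent_fn, percepts, performance_measure):
    """Apply the agent function to each percept, then score the run."""
    actions = [agent_fn(p) for p in percepts]
    return performance_measure(percepts, actions)

def vacuum_agent(percept):
    # A trivial agent: clean when the current square is dirty, else move.
    return "suck" if percept == "dirty" else "move"

def cleaned_fraction(percepts, actions):
    # Performance measure: fraction of dirty squares that were cleaned.
    dirty = [i for i, p in enumerate(percepts) if p == "dirty"]
    cleaned = [i for i in dirty if actions[i] == "suck"]
    return len(cleaned) / len(dirty) if dirty else 1.0

score = run_episode(vacuum_agent, ["dirty", "clean", "dirty"], cleaned_fraction)
print(score)  # 1.0
```

The same harness could instead score the agent on speed, power usage or cost, simply by swapping in a different performance measure, which is exactly the point of 3.1.1.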
3.1.4 Intelligent Agents

An intelligent agent must sense, must act, and must be autonomous (to some extent). It must also be rational; AI is about building rational agents. An agent is something that perceives and acts, and a rational agent always does the right thing. Three questions arise:
1. What are the functionalities (goals)?
2. What are the components?
3. How do we build them?

3.1.5 Rationality

Perfect rationality assumes that the rational agent knows everything and will take the action that maximizes its utility. Human beings do not satisfy this definition of rationality. A rational action is the action that maximizes the expected value of the performance measure given the percept sequence to date. However, a rational agent is not omniscient: it does not know the actual outcome of its actions, and it may not know certain aspects of its environment. Rationality must therefore take into account the limitations of the agent. The agent has to select the best action to the best of its knowledge, depending on its percept sequence, its background knowledge and its feasible actions. An agent also has to deal with the expected outcome of actions whose effects are not deterministic.

3.1.6 Bounded Rationality

"Because of the limitations of the human mind, humans must use approximate methods to handle many tasks." (Herbert Simon, 1972)

Evolution did not give rise to optimal agents, but to agents which are, at best, locally optimal in some sense. In 1957, Simon proposed the notion of bounded rationality: the property of an agent that behaves in a manner that is as nearly optimal with respect to its goals as its resources allow. Under these premises, an intelligent agent is expected to act optimally to the best of its abilities and within its resource constraints.

3.2 Agent Environment

Environments in which agents operate can be defined in different ways.
It is helpful to view the following definitions as referring to the way the environment appears from the point of view of the agent itself.

3.2.1 Observability

In terms of observability, an environment can be characterized as fully observable or partially observable. In a fully observable environment, all of the environment relevant to the action being considered is observable, so the agent does not need to keep track of changes in the environment; a chess-playing system is an example of a system that operates in a fully observable environment. In a partially observable environment, the relevant features of the environment are only partially observable; a bridge-playing program is an example of a system operating in a partially observable environment.

3.2.2 Determinism

In a deterministic environment, the next state of the environment is completely determined by the current state and the agent's action (example: image analysis). If an element of interference or uncertainty occurs, the environment is stochastic (example: Ludo, because of the dice). Note that a deterministic yet partially observable environment will appear stochastic to the agent. If the environment state is wholly determined by the preceding state and the actions of multiple agents, the environment is said to be strategic (example: chess).

3.2.3 Episodicity

In an episodic environment, subsequent episodes do not depend on the actions that occurred in previous episodes. In a sequential environment, the agent engages in a series of connected episodes.

3.2.4 Dynamism

A static environment does not change from one state to the next while the agent is considering its course of action; the only changes to the environment are those caused by the agent itself. Because a static environment does not change while the agent is thinking, the passage of time as the agent deliberates is irrelevant, and the agent does not need to observe the world during deliberation.
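As a compact illustration, the environment dimensions defined in 3.2.1 to 3.2.3 can be recorded as simple flags for the unit's own examples. The EnvProfile structure is an assumption made here for clarity; the observability and determinism values follow the text above, and all three games are sequential:

```python
from dataclasses import dataclass

@dataclass
class EnvProfile:
    fully_observable: bool  # 3.2.1: can the agent see all relevant state?
    deterministic: bool     # 3.2.2: does (state, action) fix the next state?
    episodic: bool          # 3.2.3: are episodes independent of each other?

# Chess is strategic: the next state is fixed by both players' actions.
chess = EnvProfile(fully_observable=True, deterministic=True, episodic=False)
# Bridge hides the other players' hands.
bridge = EnvProfile(fully_observable=False, deterministic=False, episodic=False)
# Ludo is stochastic because of the dice.
ludo = EnvProfile(fully_observable=True, deterministic=False, episodic=False)

print(chess.fully_observable, bridge.fully_observable)  # True False
```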
A dynamic environment changes over time independently of the actions of the agent; if an agent does not respond in a timely manner, this counts as a choice to do nothing.

3.2.5 Continuity

If the number of distinct percepts and actions is limited, the environment is discrete; otherwise it is continuous.

3.2.6 Presence of Other Agents

An environment may be single-agent or multi-agent. If the environment contains other intelligent agents, the agent needs to be concerned with the strategic, game-theoretic aspects of the environment (whether the other agents are cooperative or competitive). Most engineering environments do not have multi-agent properties, whereas most social and economic systems get their complexity from the interactions of (more or less) rational agents.

3.3 Agent Architectures

3.3.1 Table-Based Agent

In a table-based agent, the action is looked up from a table based on information about the agent's percepts. A table is a simple way to specify a mapping from percepts to actions. The mapping is implicitly defined by a program, and may be implemented by a rule-based system, by a neural network or by a procedure. There are several disadvantages to a table-based system: the tables may become very large; learning a table may take a very long time, especially if the table is large; and such systems usually have little autonomy, as all actions are pre-determined.

3.3.2 Percept-Based Agent or Reflex Agent

In percept-based agents:
1. information comes from the sensors (percepts)
2. this changes the agent's current view of the state of the world
3. actions are triggered through the effectors
Such agents are called reactive agents or stimulus-response agents. Reactive agents have no notion of history: the current state is as the sensors see it right now, and the action is based on the current percepts only. Percept-based agents have the following characteristics:
- they are efficient
- they have no internal representation for reasoning or inference
- they do no strategic planning or learning
- they are not good for multiple, opposing goals

3.3.3 Subsumption Architecture

We now briefly describe the subsumption architecture (Rodney Brooks, 1986). This architecture is based on reactive systems. Brooks notes that in lower animals there is no deliberation: actions are based directly on sensory inputs. Yet even lower animals are capable of many complex tasks. His argument is to follow the evolutionary path and build simple agents for complex worlds.

The main features of Brooks' architecture are:
- there is no explicit knowledge representation
- behaviour is distributed, not centralized
- response to stimuli is reflexive
- the design is bottom-up, and complex behaviours are fashioned from the combination of simpler underlying ones
- individual agents are simple

The subsumption architecture is built in layers. There are different layers of behaviour, and the higher layers can override the lower layers. Each activity is modelled by a finite state machine. The architecture can be illustrated by Brooks' mobile robot example (Figure 4: Subsumption Architecture). The system is built in three layers:
1. Layer 0: obstacle avoidance
2. Layer 1: wander behaviour
3. Layer 2: exploration behaviour

Layer 0 (avoid obstacles) has the following capabilities:
- Sonar: generates a sonar scan
- Collide: sends a HALT message to forward
- Feel force: sends a signal to run-away and turn

Layer 1 (wander behaviour):
- generates a random heading
- Avoid reads the repulsive force, generates a new heading, and feeds it to turn and forward

Layer 2 (exploration behaviour):
- Whenlook notices idle time and looks for an interesting place
- Pathplan sends a new direction to Avoid
- Integrate monitors the path and sends it to Pathplan
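The layered control of 3.3.3 can be sketched as follows. This is a strong simplification of Brooks' design, which wires together networks of finite state machines rather than calling functions; the behaviour names follow the three-layer example above, and everything else is an assumption:

```python
# Simplified subsumption sketch: behaviours are arranged in layers, and a
# higher layer overrides (subsumes) the layers below it when it produces
# an action. Real subsumption composes finite state machines instead.

def avoid_obstacles(percept):                       # Layer 0
    return "turn_away" if percept.get("obstacle") else None

def wander(percept):                                # Layer 1
    return "random_heading" if percept.get("idle") else None

def explore(percept):                               # Layer 2
    return "head_to_landmark" if percept.get("interesting_place") else None

LAYERS = [explore, wander, avoid_obstacles]         # highest layer first

def subsumption_act(percept):
    for behaviour in LAYERS:
        action = behaviour(percept)
        if action is not None:                      # higher layer wins
            return action
    return "forward"                                # default behaviour

print(subsumption_act({"obstacle": True}))          # turn_away
print(subsumption_act({"idle": True}))              # random_heading
print(subsumption_act({}))                          # forward
```

Note how the sketch shares the features listed above: there is no explicit knowledge representation, behaviour is distributed across independent layers, and each response to a stimulus is reflexive.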
3.3.4 State-Based Agent or Model-Based Reflex Agent

State-based agents differ from percept-based agents in that they maintain some sort of state based on the percept sequence received so far. The state is updated regularly, based on what the agent senses and on the agent's own actions. Keeping track of the state requires that the agent has knowledge of how the world evolves and of how its actions affect the world. Thus a state-based agent works as follows:
1. information comes from the sensors (percepts)
2. based on this, the agent updates its current state of the world
3. based on the state of the world and its knowledge (memory), it triggers actions through the effectors

3.3.5 Goal-Based Agent

A goal-based agent has some goal which forms the basis of its actions. Such agents work as follows:
1. information comes from the sensors (percepts)
2. the agent updates its current state of the world
3. based on the state of the world, its knowledge (memory) and its goals/intentions, it chooses actions and carries them out through the effectors
Goal formulation based on the current situation is a way of solving many problems, and search is a universal problem-solving mechanism in AI. The sequence of steps required to solve a problem is not known a priori and must be determined by a systematic exploration of the alternatives.

3.3.6 Utility-Based Agent

Utility-based agents provide a more general agent framework. When the agent has multiple goals, this framework can accommodate different preferences for the different goals. Such systems are characterized by a utility function that maps a state, or a sequence of states, to a real-valued utility. The agent acts so as to maximize expected utility.

3.3.7 Learning Agent

Learning allows an agent to operate in initially unknown environments. The learning element modifies the performance element.
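Returning to the three-step loop of 3.3.4, a state-based agent can be sketched with a small vacuum example. The two-square world and all names are assumptions made for illustration:

```python
# A state-based (model-based) reflex agent: it remembers which squares it
# has visited, so its decision depends on internal state, not just on the
# current percept as in a pure reflex agent.

class StateBasedVacuum:
    def __init__(self):
        self.visited = set()            # internal state built from percepts

    def act(self, percept):
        location, dirty = percept
        self.visited.add(location)      # step 2: update state from percept
        if dirty:
            return "suck"
        if self.visited == {"A", "B"}:  # needs memory: the percept alone
            return "halt"               # cannot tell both squares were seen
        return "move"

agent = StateBasedVacuum()
print(agent.act(("A", True)))   # suck
print(agent.act(("A", False)))  # move
print(agent.act(("B", False)))  # halt
```

The "halt" decision is the part a percept-based agent cannot make: it requires remembering earlier percepts, which is exactly what the internal state provides.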
Learning is required for true autonomy.

4.0 CONCLUSION

In conclusion, an intelligent agent (IA) is an autonomous entity which observes and acts upon an environment. Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex: a reflex machine such as a thermostat is an intelligent agent, as is a human being, as is a community of human beings working together towards a goal.

5.0 SUMMARY

In this unit, you have learnt that:
- AI is a truly fascinating field that deals with exciting but hard problems. A goal of AI is to build intelligent agents that act so as to optimize performance.
- An agent perceives and acts in an environment; it has an architecture and is implemented by an agent program.
- An ideal agent always chooses the action which maximizes its expected performance, given its percept sequence so far.
- An autonomous agent uses its own experience rather than the built-in knowledge of the environment provided by the designer.
- An agent program maps from percepts to actions and updates its internal state.
- Reflex agents respond immediately to percepts; goal-based agents act in order to achieve their goal(s); utility-based agents maximize their own utility function.
- Representing knowledge is important for successful agent design.
- The most challenging environments are partially observable, stochastic, sequential, dynamic and continuous, and contain multiple intelligent agents.

6.0 TUTOR-MARKED ASSIGNMENT

1. Define an agent.
2. What is a rational agent?
3. What is bounded rationality?
4. What is an autonomous agent?
5. Describe the salient features of an agent.

7.0 REFERENCES/FURTHER READING

Bowling, M. & Veloso, M. (2002). "Multiagent Learning Using a Variable Learning Rate". Artificial Intelligence 136(2): 215-250.

Serenko, A. & Detlor, B. (2004). "Intelligent Agents as Innovations". AI and Society 18(4): 364-381. doi:10.1007/s00146-004-0310-5.
http://foba.lakeheadu.ca/serenko/papers/Serenko_Detlor_AI_and_Society.pdf

Serenko, A., Ruhi, U. & Cocosila, M. (2007). "Unplanned Effects of Intelligent Agents on Internet Use: A Social Informatics Approach". AI and Society 21(1-2): 141-166. doi:10.1007/s00146-006-0051-8.
http://foba.lakeheadu.ca/serenko/papers/AI_Society_Serenko_Social_Impacts_of_Agents.pdf
