Artificial Intelligence CSB2104 Lecture Notes PDF
Document Details
Badr University in Assiut
Abdel-Rahman Hedar
Summary
These lecture notes cover Artificial Intelligence, focusing on intelligent agents, environment types, and architecture. The slides detail core concepts, agent types, and agent interactions, with illustrations and examples.
Full Transcript
Artificial Intelligence CSB2104
Prof. Abdel-Rahman Hedar
Intelligent Agents (Chapter 2)

Contents
» Intelligent Agents (IA)
» Environment types
» IA Structure

Intelligent Agents (IA)

What is an (Intelligent) Agent?
» An over-used, over-loaded, and misused term.
» Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals.
» PAGE (Percepts, Actions, Goals, Environment).
» Task-specific & specialized: well-defined goals and environment.
» The notion of an agent is meant to be a tool for analyzing systems, not an absolute characterization that divides the world into agents and non-agents. Much like, e.g., object-oriented vs. imperative program design approaches.

Example: A Windshield Wiper Agent
How do we design an agent that can wipe the windshields when needed? What are its goals, percepts, sensors, effectors/actuators, actions, and environment?
» Goals: Keep the windshields clean and maintain good visibility
» Percepts: Raining, Dirty
» Sensors: Camera, moisture sensor
» Effectors: Wipers (left, right, back)
» Actions: Off, Slow, Medium, Fast
» Environment: US inner city, freeways, highways, weather, ...

Example: Autonomous Vehicles
Two example agents that share a freeway environment: a Collision Avoidance Agent (CAA) and a Lane Keeping Agent (LKA). What are their goals, percepts, sensors, effectors, actions, and environments?

Collision Avoidance Agent (CAA)
» Goals: Avoid running into obstacles
» Percepts: Obstacle distance, velocity, trajectory
» Sensors: Vision, proximity sensing
» Effectors: Steering wheel, accelerator, brakes, horn, headlights
» Actions: Steer, speed up, brake, blow horn, signal (headlights)
» Environment: Freeway

Lane Keeping Agent (LKA)
» Goals: Stay in current lane
» Percepts: Lane center, lane boundaries
» Sensors: Vision
» Effectors: Steering wheel, accelerator, brakes
» Actions: Steer, speed up, brake
» Environment: Freeway

Agent PEAS Description (Performance measure, Environment, Actuators, Sensors)

Conflict Resolution by Action Selection Agents
» Override: CAA overrides LKA.
» Arbitrate: if Obstacle is Close then CAA else LKA (an arbitration sketch appears after the Table-Driven Agent slide below).
» Compromise: Choose an action that satisfies both agents.
» Any combination of the above.
» Challenge: Doing the right thing.

Behavior and Performance of IAs
» Perception (sequence) to action mapping: the ideal mapping specifies which actions an agent ought to take at any point in time. Description: look-up table vs. closed form.
» Performance measure: a subjective measure to characterize how successful an agent is (e.g., speed, power usage, accuracy, money, etc.).
» (Degree of) autonomy: to what extent is the agent able to make decisions and take actions on its own?

The Right Thing = The Rational Action
» Rational = Best (yes, to the best of its knowledge)
» Rational = Optimal (yes, to the best of its abilities, including its constraints)
» Rational ≠ Omniscience
» Rational ≠ Clairvoyance
» Rational ≠ Success

How is an Agent different from other software?
» Agents are autonomous, that is, they act on behalf of the user.
» Agents contain some level of intelligence, from fixed rules to learning engines that allow them to adapt to changes in the environment.
» Agents don't only act reactively, but sometimes also proactively.
» Agents have social ability, that is, they communicate with the user, the system, and other agents as required.
» Agents may also cooperate with other agents to carry out more complex tasks than they themselves can handle.
» Agents may migrate from one system to another to access remote resources or even to meet other agents.

Environment Types

Environment Types Characteristics
» Deterministic vs. nondeterministic
» Episodic vs. non-episodic
» Static vs. dynamic
» Discrete vs. continuous
» Single vs. multi-agent

Environment Types
» Deterministic vs. nondeterministic: If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic.
» Static vs. dynamic: If the environment can change while an agent is deliberating, then we say the environment is dynamic for that agent; otherwise, it is static.
» Episodic vs. non-episodic: In an episodic task environment, the agent's experience is divided into atomic episodes. In each episode the agent receives a percept and then performs a single action.
» Discrete vs. continuous: The discrete/continuous distinction applies to the state of the environment, to the way time is handled, and to the percepts and actions of the agent.

Environments

IA Structure

Structure of Intelligent Agents
» Agent = architecture + program
» Agent program: the implementation of the agent's perception-action mapping function (a Python sketch of this skeleton appears after the Table-Driven Agent slide below):

    Skeleton-Agent(Percept) returns Action
        memory ← UpdateMemory(memory, Percept)
        Action ← ChooseBestAction(memory)
        memory ← UpdateMemory(memory, Action)
        return Action

» Architecture: a device that can execute the agent program (e.g., general-purpose computer, specialized device, etc.)

Using a look-up table
» Example: Collision avoidance. Sensors: 3 proximity sensors (Pl, Pm, Pr). Effectors: steering wheel, brakes.
» How to generate the table? How large is it? How to select an action?
» How to generate: for each percept p ∈ Pl × Pm × Pr, generate an appropriate action a ∈ S × B.
» How large: size of table = #possible percepts × #possible actions = |Pl| · |Pm| · |Pr| · |S| · |B|. E.g., with P = {close, medium, far}³ and A = {left, straight, right} × {on, off}, the size of the table = 27 × 3 × 2 = 162.
» How to select an action? Search (see the sketches below).

Table-Driven Agent
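The Skeleton-Agent pseudocode on the Structure of Intelligent Agents slide leaves the memory update and action-selection steps abstract. Below is a minimal Python sketch of that skeleton, assuming a simple history-based memory and a placeholder braking policy; the helper names (update_memory, choose_best_action, step) and the policy itself are illustrative assumptions, not part of the slides.

```python
# Minimal sketch of the Skeleton-Agent program: map a percept to an action
# via internal memory, updating the memory before and after acting.

class SkeletonAgent:
    def __init__(self):
        self.memory = {}  # internal state built up from percepts and actions

    def update_memory(self, event):
        # Placeholder: simply record the latest percept or action.
        self.memory.setdefault("history", []).append(event)
        return self.memory

    def choose_best_action(self):
        # Placeholder policy: brake if the most recent percept was "close".
        history = self.memory.get("history", [])
        return "brake" if history and history[-1] == "close" else "steer-straight"

    def step(self, percept):
        # memory <- UpdateMemory(memory, Percept)
        self.memory = self.update_memory(percept)
        # Action <- ChooseBestAction(memory)
        action = self.choose_best_action()
        # memory <- UpdateMemory(memory, Action)
        self.memory = self.update_memory(action)
        return action
```

Under this placeholder policy, SkeletonAgent().step("close") returns "brake".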
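The look-up-table slides size the collision-avoidance search space at 27 × 3 × 2 = 162 percept-action pairs and select an action by search. The sketch below builds a table that stores one chosen action for each of the 27 percepts; the rule used to fill it (brake when the middle sensor reads close or the agent is boxed in, otherwise steer away from the nearest obstacle) is an illustrative assumption rather than anything prescribed by the slides.

```python
from itertools import product

# Percept and action vocabularies from the look-up-table slides: three
# proximity sensors (Pl, Pm, Pr) and the steering/brake effectors.
DISTANCES = ("close", "medium", "far")
STEERING = ("left", "straight", "right")
BRAKE = ("on", "off")

def build_table():
    """Enumerate all 27 percepts and assign each an illustrative (steer, brake) action."""
    table = {}
    for left, middle, right in product(DISTANCES, repeat=3):
        if middle == "close" or (left == "close" and right == "close"):
            action = ("straight", "on")   # obstacle ahead or boxed in: brake
        elif left == "close":
            action = ("right", "off")     # steer away from an obstacle on the left
        elif right == "close":
            action = ("left", "off")      # steer away from an obstacle on the right
        else:
            action = ("straight", "off")  # nothing close: keep going
        table[(left, middle, right)] = action
    return table

LOOKUP = build_table()

def table_driven_agent(percept):
    """Action selection is a direct table lookup (the slides' 'search' step)."""
    return LOOKUP[percept]

# The table stores one action per percept (27 entries); the slide's
# 27 x 3 x 2 = 162 counts every percept-action pair in the full search space.
print(len(LOOKUP))                                   # 27
print(table_driven_agent(("far", "close", "far")))   # ('straight', 'on')
```

Action selection then reduces to a dictionary lookup, which is the search step in its simplest form.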
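Finally, the Conflict Resolution by Action Selection Agents slide earlier in the transcript gives the arbitrate rule "if Obstacle is Close then CAA else LKA". The sketch below illustrates only that rule; the two agent functions and the obstacle_distance percept field are hypothetical stand-ins.

```python
# Arbitration between the Collision Avoidance Agent (CAA) and the Lane
# Keeping Agent (LKA): if the obstacle is close, the CAA acts; otherwise
# the LKA keeps the lane. Both agent functions are illustrative stand-ins.

def caa_action(percept):
    return "brake" if percept["obstacle_distance"] == "close" else "steer"

def lka_action(percept):
    return "steer-toward-lane-center"

def arbitrate(percept):
    if percept["obstacle_distance"] == "close":
        return caa_action(percept)   # safety-critical agent takes over
    return lka_action(percept)       # otherwise keep the lane

print(arbitrate({"obstacle_distance": "close"}))   # brake
print(arbitrate({"obstacle_distance": "far"}))     # steer-toward-lane-center
```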
Conclusion
» Intelligent agent: anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals.
» Rational action: the action that maximizes the expected value of the performance measure given the percept sequence to date.

Questions & Comments
Abdel-Rahman Hedar
[email protected]
https://shorturl.at/ntI2x