DAB106 Introduction to Artificial Intelligence PDF

Loyalist College


Summary

This document introduces artificial intelligence (AI) and intelligent agents (IA). It defines IA as an autonomous entity acting towards achieving goals based on observations through sensors and consequent actuators. It introduces various types of agents and concepts like Percepts, Percept Sequence, Agent Function, Agent Program, and architecture. The document uses examples like a vacuum cleaner and other AI agent examples.

Full Transcript


DAB106 Introduction to Artificial Intelligence

What is an Intelligent Agent?

In AI, an intelligent agent (IA) is an autonomous entity which acts, directing its activity towards achieving goals (i.e. it is an agent), upon an environment using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. Example: a reflex machine, such as a thermostat.

What is a Percept?

– Percept: the agent's perceptual input at any given instant.
– Percept Sequence: the complete history of everything the agent has ever perceived.

An agent's choice of action at any given instant can depend on the entire percept sequence observed to date, but not on anything it hasn't perceived. An agent is fully specified by its choice of action for every possible percept sequence.

Example (vacuum world):
– Percepts: location and contents, e.g. [A, Dirty]
– Actions: Left, Right, Suck, NoOp

The Structure of Intelligent Agents

Internally, an artificial agent's function is carried out by an Agent Program. Here is how it works:

– Architecture: the hardware or machinery that the intelligent agent operates on, including sensors and actuators. Example: a personal computer running a software agent, a self-driving car with sensors, or even a camera that responds to environmental inputs.
– Agent Function: the abstract function that maps percepts (what the agent perceives) to actions.
– Percept Sequence: the history of all the information the agent has sensed, which helps it decide what actions to take. Example: a robot perceiving its environment and deciding whether to move forward or stop based on sensor data.
– Agent Program: the concrete implementation of the agent function. The agent program takes percepts from sensors and computes the appropriate action. Example: the software controlling a vacuum cleaner robot that receives information about the room layout and navigates it accordingly.
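The percept-sequence idea can be sketched as a table-driven agent program. This is an illustrative sketch in Python (not part of the course material); the lookup table here is deliberately partial, since a complete table would need an entry for every possible percept sequence:

```python
# Sketch: an agent program whose action depends on the entire percept
# sequence observed so far, looked up in a table.

def table_driven_agent(table):
    """Return an agent program that looks up the full percept sequence."""
    percepts = []  # the percept sequence observed so far

    def program(percept):
        percepts.append(percept)
        # The action depends on everything perceived to date, nothing more.
        return table.get(tuple(percepts), "NoOp")

    return program

# Partial table for the two-square vacuum world; percept = (location, contents).
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept sequence, which is why practical agent programs compute actions instead of looking them up.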
Structure of Intelligent Agents

An agent:
– perceives its environment,
– through its sensors,
– then achieves its goals
– by acting on its environment via actuators.

An agent's structure can be viewed as:
– Agent = Architecture + Agent Program
– Architecture = the machinery that the agent executes on.
– Agent Program = an implementation of an agent function.

Agents and Environments

– Agent Function: an agent's behavior is described by the agent function, which maps every percept sequence (the history of what the agent perceives) to an action.
– Agent Program: the implementation of the agent function. The agent program is the actual software that runs on the agent's architecture, processing input from the environment and deciding the next action.

The agent function maps percept histories to actions, forming the decision-making process. The agent program runs on the physical architecture (hardware) to produce the agent's actions. Agent = Architecture + Agent Program.

Mathematical representation of the agent function: f : P* → A. This represents how the function f maps percept sequences (P*) to actions (A).

Terms in A.I. Agents

– Perception: what the agent observes in the environment through its sensors.
– Percept History: the history of all past perceptions that an agent has encountered over a period of time.
– Actuators: mechanisms that translate the agent's decisions into actions. They enable the agent to move, manipulate, or change its environment.
– Effector: the actual physical components, like motors or limbs, which carry out the actions dictated by the actuators.

Robotics Agent Example: Vacuum Cleaner

In this artificial world there are two locations, Square A and Square B. The vacuum cleaner senses:
– which square it is in, and
– whether there is any dirt or not.

Then it can pick one of 4 possible actions:
– Move Left
– Move Right
– Suck up the dirt
– Do nothing

Specification:
– Percepts: location and contents, e.g. (A, Dirty)
– Actions: Left, Right, Suck, NoOp

A simple agent function is: if the current square is dirty, then suck; otherwise, move to the other square.

How do we know if this is a good agent function? What is the best function? Is there one? Who decides this?

Agent Terminology

– Performance Measure of an Agent: the criteria that determine how successful an agent is in achieving its goals. For example, how effectively a vacuum robot cleans the room.
– Behavior of Agent: the set of actions that an agent performs based on a sequence of percepts. For instance, if a robot senses an obstacle (percept), it turns right (action).
– Percept: the input an agent receives from its environment at any given moment. This could be visual, auditory, or sensor-based data.
– Percept Sequence: the complete history of everything an agent has perceived up to a certain point. For example, for a vacuum cleaner: percept = dirty floor, action = suck; percept = obstacle, action = turn left.
– Agent Function: a mapping from percept sequences to actions. In simpler terms, it defines how the agent responds to what it perceives over time.

Rational Agents (Well-Behaved Agents)

A rational agent makes decisions that maximize its chances of achieving its goals, based on its knowledge and perceptions of the environment.

Goal-Oriented Action: it acts to achieve the best possible outcome or, when faced with uncertainty, the best expected outcome. Example: consider a navigation app (the agent). It analyzes the available routes from point A to point B and chooses the quickest route based on real-time traffic data. In this scenario, the app is making a rational decision to optimize time efficiency for the driver.

When we define a rational agent, we group these properties under PEAS, the problem specification for the task environment. The rational agent we want to design for this task environment is the solution.
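The simple vacuum agent function can be sketched in a few lines, together with a toy simulation that scores it. This is an illustrative sketch: the environment dynamics and the scoring scheme (one point per clean square per step) are assumptions, not part of the course definition.

```python
# Sketch: the "if dirty then suck, else move" vacuum agent, plus a toy
# performance measure so we can ask how good the agent function is.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(world, steps=4):
    """Simulate the two-square world; score one point per clean square per step."""
    location, score = "A", 0
    for _ in range(steps):
        action = reflex_vacuum_agent((location, world[location]))
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        score += sum(1 for s in world.values() if s == "Clean")
    return score

print(run({"A": "Dirty", "B": "Dirty"}))  # 6
```

Changing the performance measure (e.g. penalizing movement) can change which agent function counts as "best" — which is exactly the point of the questions above.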
PEAS

PEAS stands for:
– Performance
– Environment
– Actuators
– Sensors

PEAS – Self-Driving Car

What is PEAS for a self-driving car?
– Performance measure: safety, time, legal driving, comfort
– Environment: roads, other cars, pedestrians, road signs
– Actuators: steering, accelerator, brake, signal, horn
– Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

PEAS – Vacuum Cleaner

How about a vacuum cleaner?
– Performance: cleanliness; efficiency (distance traveled to clean); battery life; security
– Environment: room, table, wood floor, carpet, different obstacles
– Actuators: wheels, different brushes, vacuum extractor
– Sensors: camera, dirt detection sensor, cliff sensor, bump sensors, infrared wall sensors

PEAS – Automated Taxi Problem

– Performance measure: safe, fast, legal, maximize revenue, minimize cost, minimize fuel, …
– Environment: city roads, traffic, pedestrians, bikers, construction, …
– Actuators: car controls (steering, gas pedal) and human interface
– Sensors: cameras, radar, laser rangefinder, GPS, mapping, engine sensors, human input devices

PEAS – Medical Diagnosis System

– Performance measure: healthy patient, minimize costs, lawsuits
– Environment: patient, hospital, staff
– Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
– Sensors: keyboard (entry of symptoms, findings, patient's answers)

PEAS – Part-Picking Robot

– Performance measure: percentage of parts in correct bins
– Environment: conveyor belt with parts, bins
– Actuators: jointed arm and hand
– Sensors: camera, joint angle sensors

PEAS – Interactive English Tutor

– Performance measure: maximize student's score on test
– Environment: set of students
– Actuators: screen display (exercises, suggestions, corrections)
– Sensors: keyboard
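A PEAS specification is a description format, not code, but it can be captured in a small data structure when you want to compare task environments programmatically. A minimal sketch (the field values below are taken from the automated-taxi example above):

```python
# Sketch: PEAS as a simple record type.
from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

taxi = PEAS(
    performance=["safe", "fast", "legal", "max revenue", "min cost", "min fuel"],
    environment=["city roads", "traffic", "pedestrians", "bikers", "construction"],
    actuators=["steering", "gas pedal", "human interface"],
    sensors=["cameras", "radar", "laser rangefinder", "GPS", "engine sensors"],
)
print(taxi.performance[0])  # safe
```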
Human agent:
– Sensors: eyes, ears, and other organs.
– Actuators: hands, legs, mouth, and other body parts.

Robotic agent:
– Sensors: cameras and infrared range finders.
– Actuators: various motors.

Agents Everywhere

Agents operate in environments all around us:
– Thermostat
– Cell phone
– Vacuum cleaner
– Robot
– Alexa Echo
– Self-driving car
– Human
– etc.

Agent Types

Basic types, in order of increasing generality:
– Simple reflex agents
– Model-based reflex agents
– Goal-based agents
– Utility-based agents
– Learning agents

All of these can be generalized into learning agents that can improve their performance and generate better actions.

Simple Reflex Agents

A simple reflex agent is a type of agent in artificial intelligence that selects actions based solely on the current percept, rather than using a model of the world or planning. The agent's behavior is determined by a set of condition-action rules, called production rules or "if-then" rules. These rules are evaluated in a specific order, and the first one whose condition is met is selected for execution. Simple reflex agents work in fully observable environments.

– Perception: the agent perceives its environment through sensors.
– Rules: it follows simple "if-then" rules, like a flowchart.
– Action: based on the rules, it takes an action without considering past interactions.
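Condition-action rules evaluated in order, first match wins, can be sketched directly. This illustration uses the thermostat mentioned earlier as a reflex machine; the temperature thresholds and action names are made-up values for the example:

```python
# Sketch: a simple reflex agent as an ordered list of condition-action rules.

RULES = [
    (lambda temp: temp < 18, "heat_on"),   # too cold -> heat
    (lambda temp: temp > 24, "heat_off"),  # too warm -> stop heating
]

def thermostat_agent(temp):
    # Rules are tried in order; the first matching condition fires.
    for condition, action in RULES:
        if condition(temp):
            return action
    return "no_op"  # no rule matched: do nothing

print(thermostat_agent(15))  # heat_on
print(thermostat_agent(21))  # no_op
```

Note the agent keeps no state at all: the same percept always produces the same action, which is exactly what makes it a simple reflex agent.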
Very Simple Example: imagine a light bulb that's connected to a light sensor.
– Perception: the sensor detects if it's dark or light.
– Rule: if it's dark, then turn on the light.
– Action: the light bulb turns on when it's dark and turns off when it's light.

This light bulb is a simple reflex agent because it reacts immediately to its perception (darkness) with a pre-set action (turning on), without thinking about past light levels or future conditions.

Model-Based Reflex Agents

A model-based reflex agent is a type of agent in artificial intelligence that uses a model of the world to choose its actions. The agent's behavior is still determined by a set of condition-action rules, but these rules are evaluated in the context of the agent's internal model of the world, rather than the current percept alone. The agent uses the internal model to predict the effects of its actions before selecting an action. The internal model can include information about the state of the world, the effects of actions, and the goals of the agent.

– Perception: they sense the environment using sensors.
– Internal State: they keep track of what they've sensed before (their percept history).
– Model: they have a model of the world which tells them how things generally work.
– Decision: they use their model and history to decide what to do next, even if they can't see everything happening around them.

Think about a video game character controlled by AI:
– Perception: the character sees an obstacle ahead.
– Internal State: it remembers that last time it jumped too late, it didn't clear the obstacle.
– Model: it knows that jumping earlier helps to clear obstacles.
– Decision: next time it approaches a similar obstacle, it jumps earlier than before, clearing it successfully.
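A minimal sketch of such an agent in the spirit of the jumping game character. This is an illustration, not course code: the internal state is just "how far ahead to jump", and the crude "model" update (jump earlier after a failure) uses made-up numbers.

```python
# Sketch: a model-based reflex agent with persistent internal state.

class JumpingAgent:
    def __init__(self):
        self.jump_distance = 1  # internal state: how far ahead to jump

    def act(self, percept):
        # percept = (distance_to_obstacle, cleared_last_jump)
        distance, cleared_last = percept
        if not cleared_last:
            self.jump_distance += 1  # model says: jumping earlier helps
        return "Jump" if distance <= self.jump_distance else "Run"

agent = JumpingAgent()
print(agent.act((2, True)))   # Run  (obstacle still too far for current state)
print(agent.act((2, False)))  # Jump (last jump failed, so jump earlier now)
```

Unlike the simple reflex thermostat, the same percept can produce different actions here, because the decision also depends on the stored internal state.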
This game character is a model-based reflex agent because it uses its experience (percept history) and knowledge of the game world (model) to improve its actions over time.

Goal-Based Agents

A goal-based agent is a type of agent in artificial intelligence that selects actions based on the pursuit of specific goals. The agent's behavior is determined by a set of goals and a set of actions that can be taken to achieve them. The agent uses its internal model of the world to reason about the effects of its actions in order to achieve its goals. A goal-based agent differs from a simple reflex agent or a model-based reflex agent in that it is not simply reacting to the current percept or using an internal model to make predictions: it has a specific objective that it is trying to achieve.

– Goals: they have defined objectives they are trying to accomplish.
– Flexibility: they can adjust their actions to be more effective in reaching their goals.
– Decision-making: they consider various possible actions and select the one that seems most likely to achieve their goals.

Consider playing a video game where the goal is to build a house:
– Goal: the desired outcome is a completed house.
– Actions: the game character gathers resources, plans the structure, and constructs the house.
– Decisions: the character must decide what resources to gather first, where to build, and in what order to construct the house parts.

Utility-Based Agents

A utility-based agent is a type of agent in artificial intelligence that selects actions based on their expected utility, which is a measure of how much the agent values the outcomes of its actions. The agent's behavior is determined by a utility function, which assigns a value or "utility" to each possible state of the world. The agent uses its internal model of the world to reason about the effects of its actions in order to maximize its expected utility.
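One way to sketch utility-based action selection: assign each outcome state a numeric utility and pick the action whose predicted outcome scores highest. The utility values and the deterministic outcome model below are made-up illustrations, not course material; a real agent would weigh utilities by outcome probabilities.

```python
# Sketch: choosing the action that leads to the highest-utility state.

UTILITY = {"coffee_at_60C": 0.9, "coffee_at_90C": 0.4, "no_coffee": 0.0}

OUTCOME = {  # assumed deterministic action -> resulting state model
    "brew_and_cool": "coffee_at_60C",
    "brew_hot": "coffee_at_90C",
    "wait": "no_coffee",
}

def utility_based_agent(actions):
    # Pick the action whose predicted outcome has the highest utility.
    return max(actions, key=lambda a: UTILITY[OUTCOME[a]])

print(utility_based_agent(["brew_hot", "brew_and_cool", "wait"]))  # brew_and_cool
```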
A utility-based agent is similar to a goal-based agent in that it has a specific objective in mind (to maximize its expected utility), but it differs in the way it measures the success of its actions. A goal-based agent defines success in terms of achieving a specific goal, while a utility-based agent defines success in terms of the value it assigns to different states of the world.

– Utility Function: a utility-based agent has a utility function that assigns a numerical value to each possible state of the world.
– Decision Making: the agent chooses actions that it believes will maximize its utility, taking into account the likelihood of different outcomes.
– Flexibility and Adaptation: these agents can adapt to changes in the environment and revise their actions to continue maximizing utility.

Consider an automatic coffee machine programmed to make coffee:
– Utility Function: it evaluates the utility of serving coffee at different temperatures and times.
– Decision Making: the machine will aim to serve the coffee at the optimal temperature for taste and safety, and at the time it's usually needed.
– Adaptation: if it learns that the user prefers a cooler temperature or a different brew time on weekends, it adjusts its settings to maximize the utility for the user's preferences.

This coffee machine is a utility-based agent because it's trying to make coffee that's not just drinkable (goal) but is made exactly how the user likes it best (maximum utility).

Learning Agents

A learning agent is a type of agent in artificial intelligence that can improve its behavior over time by learning from experience. Learning can take many forms, such as adjusting the agent's internal model of the world, adjusting the parameters of its decision-making process, or adapting its goals and objectives.
There are two main types of learning agents:
– Reactive agents: use machine learning algorithms to learn how to react to new situations based on past experiences. They do not maintain an internal model of the world, but they can adjust their behavior based on their past experiences.
– Deliberative agents: use machine learning algorithms to learn how to plan and reason about the effects of their actions based on past experiences. They maintain an internal model of the world and use it to improve their decision-making over time.

Components of a learning agent:
– Learning Element: improves the agent by learning from experiences.
– Performance Element: executes actions in the environment.
– Critic: measures the agent's performance and provides feedback.
– Problem Generator: suggests actions that lead to new or informative experiences.

Imagine a music streaming service that learns from your listening habits:
– Learning Element: notices which songs and genres you skip or listen to fully.
– Performance Element: plays music based on your preferences.
– Critic: uses your interactions (like skips or replays) to evaluate how well the recommendations match your taste.
– Problem Generator: might suggest a new genre or artist to see if you like them, enhancing your listening experience and its own data on your preferences.

This streaming service is a learning agent because it adapts its music recommendations to improve your listening experience over time based on your behavior.

Another example: a robot that learns to navigate through a maze by adjusting its movements based on the feedback it receives from hitting walls or reaching the end of the maze.
More examples of learning agents:
– A self-driving car that learns to drive safely by adjusting its steering, acceleration, and braking based on the feedback it receives from cameras and sensors.
– A machine learning algorithm that learns to classify images by adjusting the parameters of its neural network based on the feedback it receives from labeled training data.

These are some examples of how a learning agent can improve its performance over time by using the experience it gains. The main advantage of a learning agent over other types of agents is that it can adapt to new situations and improve its performance over time.

Introduction to No-Code AI

What is No-Code AI? No-code AI is a set of technology tools that allows business users to build AI models without writing any code. As the name suggests, no coding is required: no Python, no R, no technical programming at all.

How Does It Work? No-code AI platforms provide an easy-to-use web or mobile interface, similar to a spreadsheet. Users simply enter data to train AI models, and these models can be built quickly, saving time and effort.

What Makes No-Code AI Powerful?
– Accessible for Non-Technical Users: no-code AI democratizes AI, making it accessible to people who don't have coding skills, such as business analysts or marketers.
– Quick Model Building: AI models can be built and deployed faster with drag-and-drop functionality and simple data inputs, saving time compared to traditional coding approaches.
– Collaboration Across Departments: teams from different departments can collaborate on AI projects since no technical expertise is required. This makes AI implementation a cross-functional activity.
– Cost-Effective: by reducing the need for technical resources, no-code AI cuts down costs related to hiring developers or data scientists.

Types of Problems No-Code AI Solves

– Prescriptive Problems (no AI needed): tasks where steps can be repeated to get the same result, like adding sales figures in a spreadsheet.
These can be done in Excel and don't require AI.
– Predictive Problems (ideal for AI): AI is perfect for problems where there is no set of repeatable steps and predictions need to be made based on patterns in data. For example, predicting sales for next year based on market trends, seasonality, and customer behavior.

Examples of Predictive Problems Solved by AI:
– Forecasting future sales based on historical data.
– Identifying customers who are most likely to buy your product.
– Predicting when equipment is likely to fail, so it can be replaced in advance.

Hands-On: Class Activity with No-Code AI

Steps to Build an Image Classification AI Model:
1. Go to Teachable Machine (https://teachablemachine.withgoogle.com/).
2. Select the 'Image Project' option from the main menu.
3. Download the provided dataset from Blackboard (apple, orange, and banana images).
4. Upload images from the dataset into the project:
   a. Upload apple images into one class labeled "Apple."
   b. Upload orange images into a class labeled "Orange."
   c. Upload banana images into a class labeled "Banana."
5. Train the model by clicking the "Train Model" button, allowing Teachable Machine to learn from the uploaded images.
6. Test the model by uploading images from the "Test Data" folder in the dataset and observing if the AI correctly classifies them as apples, oranges, or bananas.

Outcome: you will have built a simple image classification AI model capable of identifying apples, oranges, and bananas from images, using a no-code approach.
