Artificial Intelligence: Types & Capabilities


Questions and Answers

Which of the following best describes Narrow AI?

  • AI that has self-awareness and consciousness.
  • AI that can perform a dedicated task with intelligence. (correct)
  • AI that can perform any intellectual task that a human can.
  • AI that is capable of surpassing human intelligence.

General AI systems are currently available and can perform tasks as perfectly as humans.

False (B)

Which of the following is a key characteristic of Super AI?

  • Inability to learn and adapt to new environments.
  • Ability to perform only a specific set of tasks.
  • Capacity to surpass human intelligence and perform any task better than humans. (correct)
  • Dependence on human input for decision-making.

Purely ______ machines are the most basic types of Artificial Intelligence and do not store memories or past experiences.

reactive

Which type of AI can store past experiences for a short period of time?

Limited Memory AI (A)

Theory of Mind AI is fully developed and can understand human emotions and interact socially like humans.

False (B)

Which of the following is considered the future of Artificial Intelligence?

Self-Awareness AI (B)

What is the primary difference between Narrow AI and General AI?

Narrow AI is designed for specific tasks, while General AI can perform any intellectual task a human can.

What is the main functionality of actuators in the context of AI agents?

To act on the environment. (A)

An AI agent can only have physical components, not mental properties like knowledge or belief.

False (B)

Which of the following is an example of an effector?

Wheels (C)

A device which detects change in the environment and sends the information to other electronic devices is a ______.

sensor

Which rule is NOT one of the four main rules for an AI agent?

The AI agent must be able to communicate with humans effectively. (B)

A rational agent always performs the 'right' action to maximize its performance measure.

True (A)

Which factor is NOT used to judge the rationality of an agent?

The agent's physical appearance. (D)

In the context of AI agents, what is the role of 'sensors'?

Sensors allow the agent to perceive its environment by detecting changes and sending information.

The structure of an intelligent agent is a combination of ______ and agent program.

architecture

In the structure of AI agents, what is the role of the 'agent function'?

To map a percept to an action. (B)

The agent program executes on the logical component to produce agent function f.

False (B)

What does PEAS stand for in the context of AI agents?

Performance, Environment, Actuators, Sensors (B)

In PEAS representation, 'P' stands for ______.

performance

What is the purpose of 'Performance measure' in the PEAS representation?

It defines the objective for the success of an agent's behavior.

Which of the following is an example of an 'Actuator' for a self-driving car in PEAS representation?

Steering wheel (D)

In a medical diagnosis agent, 'Treatments' would be categorized as 'Sensors' in PEAS representation.

False (B)

Which of the following is a limitation of Simple Reflex Agents?

They have very limited intelligence and are not adaptive to changes. (B)

The Simple reflex agent works on ______-action rule, which means it maps the current state to action.

condition

What does a Model-based agent use to track the situation in a partially observable environment?

They have a model to track the situation.

Which of the following agent types expands the capabilities of the model-based agent by having the 'goal' information?

Goal-based Agent (C)

A Goal-based agent always chooses the action with the highest utility, regardless of whether it achieves the goal.

False (B)

What is the primary characteristic that differentiates a Utility-based agent from a Goal-based agent?

Utility-based agents add an extra component that measures utility, providing a measure of success at a given state. (A)

The ______ function maps each state to a real number to check how efficiently each action achieves the goals.

utility

What is the primary capability that defines a 'Learning Agent'?

The ability to learn from past experiences and adapt automatically through learning.

Which component of a learning agent is responsible for suggesting actions that will lead to new and informative experiences?

Problem generator (C)

According to AI definitions, the environment is part of the agent itself.

False (B)

What is one of the key features that defines the Environment of an AI agent?

Whether it is fully or partially observable. (D)

If an agent's sensors can access the complete state of an environment at each point in time, the environment is considered fully ______.

observable

Explain the difference between a deterministic and a stochastic environment.

In a deterministic environment, the next state is completely determined by the current state and action; in a stochastic environment, the next state is random and cannot be fully determined.

In which type of environment is only the current percept required for action, without needing memory of past actions?

Episodic environment (B)

Known vs Unknown is actually a property of the environment itself, NOT related to the agent's information.

False (B)

Which of the following environments is described as an environment that can change itself while an agent is deliberating?

Dynamic environment (A)

If an agent can obtain complete and accurate information about the state's environment, then such an environment is called ______ environment.

accessible

Why is a taxi driving considered a 'dynamic' environment?

Taxi driving is considered a dynamic environment because the traffic, pedestrians, and other variables are changing while the agent is deliberating.

Imagine an AI designed to play chess. According to the provided context, how would its environment be classified?

Discrete (C)

Explain what makes AI-driven navigation in a self-driving car a far more complex task environment than pathfinding for the same car within a pre-defined parking lot.

Compared to parking, general navigation poses a more complex task environment because it is continuous rather than discrete, dynamic rather than static, and generally partially observable and stochastic, demanding more complex processing and adaptation.

It is impossible for a known environment to be partially observable.

False (B)

Flashcards

Narrow AI

AI that performs a dedicated task with intelligence, but doesn't go beyond limitations.

General AI

AI type that can perform any intellectual task with human-level efficiency.

Super AI

A hypothetical AI that surpasses human intelligence and cognitive abilities.

Reactive Machines

Basic AI systems that react to present situations without memory.

Limited Memory

AI that stores past experiences for a short duration.

Theory of Mind AI

AI that understands human emotions & interacts socially.

Self-Awareness AI

Future AI with consciousness and self-awareness.

AI System

The study of rational agents and their environments.

Agent Definition

Any entity perceiving and acting in an environment using sensors/actuators.

Sensor

A device that detects changes in the environment and sends the information to other devices; how an agent observes its environment.

Actuator

A machine component that converts energy into motion; how an agent acts on its environment.

Effectors

Devices that affect the environment, such as legs, wheels, arms, and fingers.

Intelligent Agent

Autonomous entity acting upon an environment using sensors/actuators.

Rational Agent

Agent that chooses among its possible actions to maximize its performance measure.

AI Agent's Task

Implements the agent function: combines architecture and agent program.

Architecture

Machinery where AI agent operates.

Agent Function

Maps a percept to an action.

Agent Program

Implementation of agent function.

PEAS Representation

Model to define AI/rational agent properties.

Simple Reflex Agent

Simplest agent that makes decisions based only on current percepts.

Model-Based Agent

Agent that uses a model of the world to track the situation in a partially observable environment.

Goal-Based Agent

Agent knowing its goal: expands model-based capability.

Utility-Based Agent

Agent measuring success and taking best action.

Learning Agent

Agent that learns from past experiences and adapts automatically.

Agent's Environment

Everything that surrounds an agent but is external.

Fully observable

Agent can access the environment completely at all times.

Partially observable

Environment whose complete state the agent cannot access at all times.

Deterministic

Environment in which the current state and selected action determine the next state.

Stochastic

Environment that is random in nature and cannot be determined completely.

Episodic

Current percept sufficient for action.

Sequential

Agent requires memory of past actions.

Single-Agent

Environment in which only one agent is involved, operating by itself.

Multi-Agent

Multiple agents are operating in environment.

Dynamic

Environment changes while the agent is deliberating.

Static

Environment that stays the same while the agent deliberates; no need to keep looking.

Discrete

Environment has finite number of percepts/actions.

Continuous

Environment that does not have a finite number of percepts/actions.

Known

Results for all actions are known to the agent.

Unknown

Environment in which the agent must first learn how its actions perform.

Accessible

Agent has complete, accurate environmental information.

Inaccessible

Agent can not obtain complete, accurate environmental information.

Study Notes

  • This course covers Artificial Intelligence & Expert Systems (ITE 153), 2nd Semester of AY 2024-2025, taught by Dr. Lumer Jude Doce of the IT & Physics Department.

Types of Artificial Intelligence

  • Artificial Intelligence is categorized based on capabilities and functionality.

AI Based on Capabilities (Type-1)

Weak AI or Narrow AI

  • Narrow AI performs dedicated tasks with intelligence and is the most common AI.
  • It cannot operate beyond its limitations and is trained only for specific tasks.
  • Also termed weak AI; it can fail when given tasks beyond its limits.
  • Apple Siri exemplifies Narrow AI with a limited pre-defined range of functions.
  • IBM's Watson supercomputer falls under Narrow AI with expert systems, machine learning, and natural language processing.
  • Narrow AI is used in playing chess, e-commerce suggestions, self-driving cars, and image recognition.

General AI

  • General AI could perform any intellectual task with human-level efficiency, thinking like a human on its own.
  • The idea is to make a system smart enough to think like a human by itself.
  • Currently, no system exists that can perform any task as perfectly as a human.
  • Researchers worldwide are focused on developing General AI, but it will require great time and effort.

Super AI

  • Super AI is a level of intelligence at which machines surpass human intelligence, including cognitive properties; it is envisioned as an outcome of General AI.
  • Super AI's key characteristics are the capability to think, reason, solve puzzles, make judgments, plan, learn, and communicate.
  • Super AI is still a hypothetical concept; developing such systems would be a world-changing task.

AI Based on Functionality (Type-2)

Reactive Machines

  • Reactive machines are basic AI types.
  • These AI systems do not store memories or past experiences for future actions, they focus on current scenarios and react accordingly.
  • IBM's Deep Blue and Google's AlphaGo are examples of reactive machines.

Limited Memory

  • Limited memory machines store past experiences or short-term data.
  • They can use stored data only for a limited time.
  • Self-driving cars exemplify limited-memory systems; they store the recent speed, the distance of nearby cars, the speed limit, and other information.

Theory of Mind

  • Theory of Mind AI should understand human emotions, beliefs, and interact socially.
  • This type of AI is still under development.

Self-awareness

  • Self-awareness AI is the future of Artificial Intelligence.
  • These machines will be highly intelligent with their own consciousness, sentiments, and self-awareness while being smarter than human minds.
  • Self-Awareness AI is a hypothetical concept.

Agents in AI

  • An AI system can be defined as the study of rational agents and their environments.
  • Agents perceive their environment through sensors and act through actuators.
  • AI agents can possess mental properties like knowledge, belief, and intention.
  • An agent runs in a cycle of perceiving, thinking, and acting.
  • Human-Agents have eyes, ears, and limbs.
  • Robotic Agents use cameras, infrared range finders, NLP, and motors.
  • Software Agents use keystrokes and file contents as sensory input and display output.
  • The world is full of agents: thermostats, cellphones, cameras, and even humans.
  • Sensors detect changes in the environment and send that information to other devices, allowing agents to observe their environment.
  • Actuators convert energy into motion, for movement and control.
  • Effectors affect the environment, such as legs, wheels, arms, fingers, etc.

Intelligent Agents

  • An intelligent agent is an autonomous entity that acts upon an environment using sensors and actuators to achieve goals, and it learns from the environment.
  • Thermostats are examples of intelligent agents.
  • The four rules for an AI agent: it must be able to perceive the environment; its observations must be used to make decisions; its decisions must result in actions; and those actions must be rational.

Rational Agents

  • Rational agents have clear preferences, model uncertainty, and maximize performance.
  • They perform the right actions; the rational-agent concept underlies tools in game theory and decision theory.
  • Rational actions are necessary in reinforcement learning, where correct actions yield rewards and incorrect actions yield negative rewards.
  • Rational agents in AI are similar to intelligent agents.
  • Rationality is judged by the performance measure, the agent's prior knowledge, the actions it can perform, and its percept sequence to date.

Structure of AI Agents

  • The task of AI is to design an agent program that implements the agent function.
  • An intelligent agent is a combination of architecture and an agent program: Agent = Architecture + Agent Program.
  • Architecture is the machinery that the agent program executes on.
  • Agent functions map a percept sequence to an action; an agent program running on the physical architecture produces the agent function f: P* → A.
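The mapping f: P* → A can be made concrete with a small sketch. The two-square vacuum world, its percepts, and the rule table below are illustrative assumptions, not part of the lesson:

```python
# Sketch of an agent function f: P* -> A for a two-square vacuum world.
# A percept is (location, status); the table-driven agent maps the
# percept sequence seen so far to an action. All names are illustrative.

def table_driven_agent():
    percepts = []                      # P*: the growing percept sequence
    table = {                          # f, written out explicitly
        (("A", "Dirty"),): "Suck",
        (("A", "Clean"),): "Right",
        (("B", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
    }

    def agent_function(percept):
        percepts.append(percept)
        key = tuple(percepts)
        # Fall back to the most recent percept when the full sequence
        # is not in the table (keeps the sketch finite).
        if key not in table:
            key = (percepts[-1],)
        return table[key]

    return agent_function

f = table_driven_agent()
```

A full table over P* grows without bound, which is why practical agent programs compute the function rather than tabulate it.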

PEAS Representation

  • PEAS is a model for defining the properties of an AI/rational agent; it is an acronym of four terms.
  • P: Performance measure, the objective that defines success.
  • E: Environment.
  • A: Actuators.
  • S: Sensors.
  • For a self-driving car: the performance measure is safety, time, legal compliance, and comfort; the environment is roads with other vehicles, pedestrians, etc.; the actuators are steering wheel, accelerator, brake, and horn; the sensors are camera, GPS, speedometer, and others.
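As a rough sketch, a PEAS description can be captured in a small data structure; the field names are assumptions, while the self-driving-car values follow the lesson's example:

```python
# A PEAS description as a small dataclass; values mirror the lesson's
# self-driving-car example, and the field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class PEAS:
    performance: list = field(default_factory=list)
    environment: list = field(default_factory=list)
    actuators: list = field(default_factory=list)
    sensors: list = field(default_factory=list)

self_driving_car = PEAS(
    performance=["safety", "time", "legal compliance", "comfort"],
    environment=["roads", "other vehicles", "pedestrians", "road signs"],
    actuators=["steering wheel", "accelerator", "brake", "horn"],
    sensors=["camera", "GPS", "speedometer"],
)
```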

Types of AI Agents

  • Agents fall into five classes based on their degree of perceived intelligence and capability, and all can improve their performance over time.
  • Simple Reflex Agent
  • Model-based reflex agent
  • Goal-based agents
  • Utility-based agent
  • Learning agent

Simple Reflex Agent

  • Simple reflex agents act only on the current percept, ignoring percept history, and are effective only in a fully observable environment.
  • The agent works on condition-action rules, mapping the current state to an action.
  • A room-cleaning agent, for example, cleans whenever it perceives dirt in the room.
  • However, this simple design has very limited intelligence, no knowledge of unperceived parts of the state, rule tables that are too large to generate and store, and no ability to adapt to changes in the environment.
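A minimal sketch of the condition-action idea, assuming a two-square world with percepts of the form (location, status); the rule set is illustrative:

```python
# Simple reflex agent for the room-cleaner example: it ignores percept
# history and applies condition-action rules to the current percept only.

def simple_reflex_agent(percept):
    location, status = percept
    if status == "Dirty":       # condition-action rule: dirt -> Suck
        return "Suck"
    if location == "A":         # clean at A -> move Right
        return "Right"
    return "Left"               # clean at B -> move Left
```

Because nothing is stored between calls, the agent cannot act on anything it is not currently perceiving.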

Model-Based Reflex Agent

  • Model-based agents can work in a partially observable environment by tracking the situation with a model (knowledge of how the world works) plus an internal state that represents the current situation based on percept history.
  • These agents use the model, i.e., their knowledge of the world, to choose actions.
  • Updating the internal state requires information about how the world evolves independently of the agent and how the agent's own actions affect the world.
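One way to sketch this, reusing the same toy vacuum world as an assumed setting: the agent updates an internal state from each percept and then applies its rules to that state rather than to the raw percept.

```python
# Model-based reflex agent sketch: the internal state remembers the last
# observed status of each square, even when it is no longer perceived.
# All names are illustrative.

class ModelBasedAgent:
    def __init__(self):
        self.state = {}                 # internal picture of the world

    def update_state(self, percept):
        location, status = percept
        self.state["location"] = location
        self.state[location] = status   # model: world knowledge persists

    def act(self, percept):
        self.update_state(percept)
        if self.state[self.state["location"]] == "Dirty":
            return "Suck"
        return "Right" if self.state["location"] == "A" else "Left"
```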

Goal-Based Agent

  • Knowing the current state of the environment is not always sufficient; the agent also needs to know its goal, which describes a desirable situation.
  • Goal-based agents have the capabilities of model-based agents plus goal information.
  • They choose actions so as to achieve the goal.
  • Before deciding, the agent considers different scenarios and whether each would achieve the goal, which makes its behavior proactive.
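A hedged sketch of the scenario-checking idea: the agent simulates each candidate action against a model and picks one whose predicted result passes the goal test. The one-dimensional toy world is an assumption:

```python
# Goal-based agent sketch: try each action against a model of the world
# and return the first one whose predicted outcome satisfies the goal.

def goal_based_agent(state, actions, result, goal_test):
    for action in actions:
        if goal_test(result(state, action)):
            return action
    return None  # no single action reaches the goal

# Toy world: the state is a position x, and the goal is to reach x == 2.
result = lambda x, a: x + 1 if a == "Right" else x - 1
```

In general the agent would search over action sequences, not single actions; the sketch checks one step only.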

Utility-Based Agent

  • Utility-based agents add a measure of how successful the agent is at a given state and act on how best to achieve it.
  • Utility-based agents are useful for choosing the best action among multiple options.
  • Utility functions map states to real numbers to assess action efficiency.
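The same kind of toy world can illustrate the utility function: map each predicted successor state to a real number and choose the action that maximizes it. All names here are illustrative assumptions:

```python
# Utility-based agent sketch: choose the action whose predicted
# successor state has the highest utility.

def utility_based_agent(state, actions, result, utility):
    return max(actions, key=lambda a: utility(result(state, a)))

# Toy example: states are positions; utility prefers being near 0.
utility = lambda x: -abs(x)
result = lambda x, a: x + (1 if a == "Right" else -1)
best = utility_based_agent(3, ["Left", "Right"], result, utility)
```

Unlike a bare goal test, the utility function ranks every state, so the agent can still choose sensibly when several actions make progress.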

Learning Agent

  • Learning agents can use past experiences or data to adapt automatically.
  • They start with basic knowledge and evolve through learning automatically.
  • Key components are a learning element for improvements, the critic to provide feedback, a performance element, and a problem generator for new experimental actions.
  • Learning agents can learn, analyze, and seek new improvements.
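The four components can be wired together in a toy sketch; the value-update rule and critic below are illustrative assumptions, not a real learning algorithm:

```python
# Learning-agent sketch wiring the named components: performance element
# (acts), critic (scores the outcome), learning element (updates the
# learned values), problem generator (proposes exploratory actions).
import random

class LearningAgent:
    def __init__(self):
        self.values = {"Left": 0.0, "Right": 0.0}   # learned knowledge

    def performance_element(self):
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        return random.choice(list(self.values))      # try something new

    def learning_element(self, action, feedback):
        self.values[action] += 0.1 * (feedback - self.values[action])

    def step(self, critic, explore=False):
        action = self.problem_generator() if explore else self.performance_element()
        self.learning_element(action, critic(action))
        return action
```

After enough exploratory steps with a critic that rewards one action, the performance element settles on that action.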

Agents Environment

  • The environment is everything that surrounds the agent but is not part of the agent itself; it can be described as the situation in which the agent is present.
  • The environment is what the agent senses and acts upon, and it is often said to be non-deterministic.
  • Based on Russell and Norvig, an environment can be fully or partially observable, static or dynamic, discrete or continuous, deterministic or stochastic, single-agent or multi-agent, episodic or sequential, known or unknown, and accessible or inaccessible.
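These properties can be summarized as boolean flags. The classifications of chess and taxi driving below follow the lesson's examples, with the remaining flags filled in as plausible assumptions:

```python
# Russell & Norvig environment properties as flags on a small dataclass;
# the field names and the exact flag choices are illustrative.
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    fully_observable: bool
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

chess = TaskEnvironment(
    fully_observable=True, deterministic=True, episodic=False,
    static=True, discrete=True, single_agent=False,
)
taxi_driving = TaskEnvironment(
    fully_observable=False, deterministic=False, episodic=False,
    static=False, discrete=False, single_agent=False,
)
```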

Fully Observable vs Partially Observable

  • A fully observable environment is one in which the agent's sensors can access the complete state of the environment at each point in time; this is convenient because there is no need to keep track of history.
  • A partially observable environment cannot be completely sensed by the agent at all times.
  • An environment is unobservable when the agent has no sensors at all.

Deterministic vs Stochastic

  • The environment is deterministic if an agent's current state and action determine the next state.
  • A stochastic environment is random and unpredictable by the agent.
  • Agents need not worry about uncertainty in a deterministic, fully observable environment.

Episodic vs Sequential

  • Episodic environments involve one-shot actions with the current percept.
  • Sequential environments require a memory of past actions to determine the next action.

Single-Agent vs Multi-Agent

  • A single-agent environment involves only one agent operating by itself.
  • A multi-agent environment involves multiple agents.
  • The agent-design problems differ in multi-agent environments.

Static vs Dynamic

  • A static environment remains same while agents deliberate.
  • A dynamic environment can change even as the agent is acting.
  • Taxi driving exemplifies a dynamic environment, while a crossword puzzle is static.

Discrete vs Continuous

  • Discrete means there is a finite number of percepts and actions.
  • Continuous means there is no finite number of percepts and actions.
  • Chess is a discrete environment, while self-driving is continuous.

Known vs Unknown

  • Known/Unknown describe an agent's knowledge to act in an environment, not the environment itself.
  • In a known environment, the results of the agent's actions are known; in an unknown environment, the agent must learn them in order to act.
  • A known environment can be partially observable, and an unknown environment can be fully observable.

Accessible vs Inaccessible

  • Accessible environments happen when an agent can get complete and accurate information.
  • In an inaccessible environment, complete and accurate information about the state cannot be obtained.
  • An empty room, whose temperature and state are easily determined, is an example of an accessible environment.
  • Information about an event happening elsewhere on Earth is an example of an inaccessible environment.
