Foundations of Artificial Intelligence

Questions and Answers

What is the primary characteristic of Greedy Best-First Search?

  • It keeps track of a fixed number of nodes at each level.
  • It expands the node with the smallest heuristic value. (correct)
  • It combines depth-first search with heuristic guidance.
  • It uses the cost to reach a node as the heuristic value.

In A* Search, what does the function f(n) represent?

  • The total number of nodes expanded.
  • The total cost from the start node to the current node.
  • The combination of the path cost and the heuristic cost. (correct)
  • The estimated cost to reach the goal.

Which of the following local search algorithms allows worse moves initially to escape local maxima?

  • Beam Search
  • Simulated Annealing (correct)
  • Genetic Algorithms
  • Hill-Climb Search

What role do axioms play in a knowledge base?

They are specific sentences that are not derived from others. (A)

What is the purpose of TELL operations in a knowledge base?

To add new sentences to the knowledge base. (D)

Which of the following statements about inference is true?

It is the process of deriving conclusions from known information. (C)

What defines a satisfiable sentence in the context of a knowledge base?

It is true in at least one model of the environment. (D)

What is the primary focus of local search algorithms compared to informed search algorithms?

They focus on exploring the solution space. (A)

What is represented by a chance node in game trees?

Random events (A)

What defines a dominant strategy?

The best strategy regardless of other players' choices (A)

What is a Nash equilibrium?

An outcome where players cannot unilaterally improve their payoffs (A)

What results in a weak Nash equilibrium?

There are multiple optimal strategies with similar payoffs (C)

Under what conditions does every game have at least one Nash equilibrium?

It has a finite number of players and strategies with mixed strategies allowed (A)

What is a mixed strategy in game theory?

When players randomize their choices instead of consistently picking the same one (B)

Which of the following is NOT an issue with mixed strategies?

Relying on the predictability of opponents (C)

What is the main goal of mechanism design in game theory?

To create incentives for strategic behavior and ensure fairness (B)

What is the primary goal of satisficing planning?

To produce a valid plan quickly under constraints. (A)

Which statement best describes optimal planning?

It aims for the best solution based on specific criteria, even if it's time-consuming. (A)

What is a characteristic of the STRIPS automated planner?

It only deals with closed systems with no external factors. (A)

Which of the following issues does an AI planning agent need to consider regarding stochasticity?

Assessing if actions always produce the same outcomes. (A)

What does contingent planning allow an agent to do?

Develop plans that account for multiple potential outcomes. (A)

In optimal planning, what is a critical aspect that is emphasized?

An admissible heuristic that guarantees optimality must be used. (D)

What is a precondition in the context of STRIPS?

A specific state of knowledge required to execute an action. (A)

Which of the following best defines partial observability for an AI planning agent?

Lacking some information about the state that affects decision-making. (B)

What is included in the model of the environment in First Order Logic (FOL)?

A set of functions, relations, and a set of objects (A)

How do constant symbols in First Order Logic (FOL) function?

They refer directly to an object (D)

What distinguishes planning from scheduling in the context of achieving objectives?

Planning involves identifying tasks, while scheduling chooses the timing of those actions (D)

In what way is AI planning characterized in intelligent systems?

It involves autonomous decision-making leading to action sequences for goal achievement (D)

Which of the following represents domain-specific planning?

A game AI planner tailored for chess strategies (C)

What does domain-independent planning imply?

It contains techniques applicable across different domains (C)

How do functions in First Order Logic (FOL) differ from constant symbols?

Functions represent properties related to objects, while constant symbols refer to objects (D)

What is a key aspect of planning in artificial intelligence?

It aims to maximize the probability of obtaining desired outcomes (A)

What term describes the strategy of maximizing rewards based on acquired knowledge?

Exploitation (C)

What is the main objective of a Markov Decision Process (MDP)?

To find the best action that maximizes rewards over time (C)

    In a benign multi-agent system, what characterizes the interaction between agents?

Agents pursue their own goals without competing against each other (A)

Which situation exemplifies a zero-sum game?

Chess match between two players (A)

Which of the following best describes Passive Reinforcement Learning?

The agent follows a fixed policy and evaluates its effectiveness (D)

What does the minimax algorithm primarily calculate?

The best move by simulating moves and countermoves (D)

What does the exploration vs exploitation trade-off entail in Reinforcement Learning?

Balancing the decision to explore uncharted states and exploiting known rewards (A)

What is the purpose of an evaluation function in minimax algorithms?

To estimate how good a state is when the tree depth is limited (B)

Which of the following accurately describes Active Reinforcement Learning?

It allows the agent to learn both the policy and value function through exploration (B)

How does Greedy Temporal Difference (TD) Learning function?

The agent always chooses the action it believes is the best based on prior learning (A)

What advantage does Alpha-Beta pruning offer compared to the regular minimax algorithm?

It skips nodes that won't affect the final decision (A)

What is typically evaluated in Passive Reinforcement Learning?

The efficacy of a predefined policy (D)

How do stochastic games differ from deterministic games?

They can lead to multiple states based on probabilities (D)

What is a possible limitation of Passive Reinforcement Learning?

The potential to miss discovering new states due to a fixed policy (B)

What is one potential issue with deep game trees when using the minimax algorithm?

They can be too deep to fully explore or may have infinite depth (A)

Which of the following terms is associated with a Markov Decision Process (MDP)?

States, Actions, and Rewards (C)

    Flashcards

    Informed Search Algorithms

    A search algorithm that uses additional knowledge, called heuristics, to guide its search, making it more efficient.

    Greedy Best-First Search

    Expands the node with the smallest heuristic value at each step.

    A* Search

    Uses a formula f(n) = g(n) + h(n) to estimate the total cost of a path, combining cost to reach a node with the estimated cost to the goal.

    Iterative Deepening A*

    Combines the efficient space exploration of Depth-First Search with the heuristic guidance of A*. It explores the search space in gradually increasing depth.


    Local Search Algorithms

    Algorithms that explore the space of possible solutions rather than the space of states, often used for optimization problems.


    Hill-Climb Search

    Moves to the neighboring solution with the highest improvement in the heuristic value at each step.


    Simulated Annealing

    A probabilistic approach that allows worse moves early in the search to escape local maxima and find better solutions.


    Genetic Algorithms

    Evolves a population of candidate solutions by applying selection, crossover, and mutation.


    What is a valid sentence in First Order Logic (FOL)?

    A valid sentence in FOL is true in all possible situations or interpretations of the world.


    What is a 'model' in FOL?

    In FOL, the model represents the world or environment being modeled. It includes objects, functions, and relationships between them.


    What is a 'term' in FOL?

    A term in FOL is a logical expression that refers to a specific object. It can be a constant, a function, or a variable.


    What are constant symbols in FOL?

    Constant symbols in FOL directly represent specific objects in the world. For example, 'John' is a constant representing the object John.


    What are variables in FOL?

    Variables in FOL are placeholders for objects. They can represent any object in the world.


    What are functions in FOL?

    Functions in FOL represent relationships between objects. They take one or more objects as input and return another object as output.


    What is planning?

    Planning involves determining a sequence of actions to achieve a desired goal state while considering potential outcomes.


    What is AI planning?

    AI planning involves designing intelligent systems that can autonomously make decisions and choose actions to achieve their goals. It's about decision-making in various circumstances.


    Satisficing Planning

    A type of AI planning where the goal is to find a solution that meets the basic requirements of the problem, but not necessarily the best or most efficient solution. This approach prioritizes speed and resource efficiency over finding the optimal solution.


    Optimal Planning

    A type of AI planning where the goal is to find the best possible solution based on specific criteria, such as minimizing steps or maximizing efficiency. This approach may take longer to compute but ensures the solution is the most optimal.


    State Representation in AI Planning

    The representation of the current world state in AI planning. It includes a set of true facts about the world and follows the closed-world assumption, where everything not explicitly stated as true is considered false.


    Plan Space Search

    A common type of AI planning where the goal is to find a valid plan by searching through a graph of partial plans. Each node in the graph represents a possible plan, and the planner explores the graph to find a valid path from the initial state to the goal state.


    State Space Search

    A common type of AI planning where the goal is to find a valid state by searching through a space of possible world states. The planner explores the state space to find a path from the initial state to a state that satisfies the goal condition.


    Contingent Planning

    A type of planning in AI that addresses the issue of uncertainty in the environment or action outcomes. It involves creating plans that account for different possible outcomes and incorporating conditional statements to react to real-time observations.


    Partial Observability

    A situation where the planner does not have complete knowledge of the world state. This can be caused by limitations in the information available or by the inherent complexity of the environment.


    Stochasticity

    A situation where the outcomes of actions are not deterministic and can vary depending on external factors. This introduces uncertainty and makes planning more challenging.


    Markov Decision Process (MDP)

    A mathematical framework for making decisions in situations where outcomes are uncertain, especially when aiming to maximize rewards over time.


    State

    The current state of the system. It represents where the agent is in the environment at a given moment.


    Actions

    The actions an agent can take within a given state.


    Transitions

    The probability of transitioning to a new state after taking an action.


    Rewards

    A numerical value assigned to a state that represents its desirability. The agent aims to maximize rewards.


    Passive Reinforcement Learning

    A type of reinforcement learning where the agent follows a fixed policy and learns to evaluate its performance. It learns how good the existing policy is, without changing it.


    Active Reinforcement Learning

    A type of reinforcement learning where the agent learns both the policy and the value function. It explores the environment to find the best actions to maximize rewards.


    Greedy TD Learning

    A technique used in active reinforcement learning where the agent always chooses the action that it believes will lead to the highest immediate reward, based on its current knowledge.


    Dominant Strategy

    A strategy that is the best choice for a player, no matter what others do.


    Equilibrium

    A situation where no player can improve their payoff by changing their strategy alone.


    Pareto Optimal Outcome

    An outcome where no player can be better off without making someone else worse off.


    Nash Equilibrium

    A strategy assignment for each player, where each player cannot do better by switching strategy unilaterally.


    Strict Nash Equilibrium

    When every player would suffer a loss by changing their strategy, assuming the other players' strategies remain unchanged.


    Weak Nash Equilibrium

    When a player has an alternative strategy that gives the same payoff, but might have a better outcome in some scenarios.


    Mixed Strategy

    A player chooses based on probability distribution over pure strategies, rather than always selecting the same strategy.


    Mechanism Design

    Designing the rules of a game to ensure fairness and incentivize players to behave in a desired way.


    Exploration

    An agent that explores the environment to learn about potential rewards and improve its strategy.


    Exploitation

    An agent that focuses on using its knowledge to maximize rewards based on its current understanding of the environment.


    Multi-Agent Systems

    When multiple intelligent agents interact in a shared environment, each with its own goals and actions.


    Adversarial Agents

    Agents with conflicting objectives, where one agent's gain equals another's loss.


    Minimax Algorithm

    A strategy for finding the best move in a game by considering all possible future moves and countermoves.


    Evaluation Function

    A technique that estimates the value of a game state, used when exploring the entire game tree is impossible.


    Alpha-Beta Pruning

    A method that optimizes the Minimax Algorithm by pruning branches of the game tree that are guaranteed not to affect the final decision.


    Stochastic Games

    A game where outcomes are uncertain due to random events, making the game tree more complex.


    Study Notes

    Foundations of Artificial Intelligence

    • Artificial Intelligence (AI) is the study of creating computer systems that act intelligently.
    • AI focuses on creating computers that perform tasks normally requiring human intelligence.
    • Examples of AI include logical reasoning, problem-solving, creativity, and planning.
    • Narrow AI is a type of AI configured to solve a specific task. This type of AI has seen significant advancement in areas like chess, speech recognition, and facial recognition, but it requires new algorithms for new problems.
    • General AI applies intelligent systems to any problem. General AI can understand, learn and apply knowledge across a range of tasks.
    • Major research areas in AI include reasoning, learning, problem-solving, and perception.
    • Applications of AI are many and include robotics (industrial, autonomous, domestic types), industrial automation, health, game AI, and areas like education or personal assistants, among many others.

    Intelligent Agents

    • An agent is anything that perceives its environment through sensors and acts upon its environment through actuators.
    • Intelligent agents act autonomously to achieve goals based on their perceptions and actions.
    • The percept sequence is the complete history of information received by the agent from its sensors.
    • An agent's function determines what actions it takes based on its perceived history.
    • Rational agents act in a way that maximizes their performance based on the knowledge of the environment and the agent's goal.
    • Observability refers to how much information an agent has available to act. If an agent does not have all information needed, it is called partially-observable.

    Stochasticity and discrete vs continuous environments

    • Stochasticity refers to randomness and unpredictability in a system or process.
    • Actions in a system or environment can be deterministic (predictable) or stochastic (unpredictable). For example, playing chess is deterministic, rolling a die is stochastic.
    • A discrete environment has a finite number of possible action choices and states (e.g., a chess game).
    • A continuous environment has endless possibilities of states (e.g., a game of tennis).
    • An adversarial environment has agents competing against each other. A benign environment does not have competing agents.

    Search Techniques

    • Search is a technique in AI that finds a solution to a problem represented by a graph.
    • A directed graph is utilized to represent the problem and find the optimal path from start to end.
    • Uninformed search algorithms, also called blind search, explore the entire search space without knowledge beyond the initial problem statement. Strategies include breadth-first search and depth-first search.
    • Informed search algorithms rely on heuristics to guide the search. Examples include Greedy best-first search and A*.
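The informed strategy above can be sketched in a few lines. Below is a minimal, illustrative A* implementation (the toy graph, heuristic table, and function names are invented for the example); at each step it expands the frontier node with the smallest f(n) = g(n) + h(n):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the node with smallest f(n) = g(n) + h(n).
    neighbors(node) yields (next_node, step_cost); h(node) is the heuristic."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical graph: a -> b -> c (cost 1 each) with a direct a -> c edge of cost 5.
graph = {"a": [("b", 1), ("c", 5)], "b": [("c", 1)], "c": []}
heuristic = {"a": 2, "b": 1, "c": 0}  # admissible estimates to the goal "c"
path, cost = a_star("a", "c", lambda n: graph[n], lambda n: heuristic[n])
print(path, cost)  # ['a', 'b', 'c'] 2
```

Note that A* avoids the tempting direct edge because f combines the path cost so far with the heuristic estimate, unlike greedy best-first search, which looks at h alone.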

    Modeling Challenges

    • Algorithmic complexity is the measure of the resources an algorithm uses in relation to the input's size, commonly represented using Big-O notation. Time and space complexities are considered in this calculation.
    • Blind or uninformed search algorithms don't utilize any knowledge beyond the problem definition.
    • Informed search algorithms utilize heuristics to improve the speed of finding the desired state.

    Knowledge and Reasoning

    • Knowledge in AI refers to the facts, information, and concepts on which an AI system is based.
    • Reasoning is the process of using knowledge to make conclusions or decisions.
    • A knowledge base is a collection of statements (sentences) in knowledge representation language.
    • Axioms are sentences in a knowledge base not derived from other sentences.
    • TELL adds sentences to a knowledge base, and ASK queries a base.
    • Inference rules derive new sentences; inference is the process of deriving conclusions using reasoning.
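The TELL/ASK interface above can be sketched as a toy knowledge base. The class and method names below are illustrative; ASK is implemented with a simple forward-chaining inference loop over Horn-style rules:

```python
class KnowledgeBase:
    """Toy knowledge base: atomic facts plus Horn rules (premises -> conclusion)."""
    def __init__(self):
        self.facts = set()   # axioms and told sentences
        self.rules = []      # (frozenset_of_premises, conclusion)

    def tell(self, fact):
        self.facts.add(fact)

    def tell_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def ask(self, query):
        """Inference: derive new sentences until nothing more follows."""
        derived = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return query in derived

kb = KnowledgeBase()
kb.tell("rainy")                       # an axiom: not derived from other sentences
kb.tell_rule({"rainy"}, "wet_ground")  # rule: rainy => wet_ground
print(kb.ask("wet_ground"))  # True
```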

    First-Order Logic

    • First-order logic (FOL): a knowledge representation language that efficiently represents objects, their relationships and functions.
    • The model of the environment includes a set of objects, together with functions and relations over them.
    • Key uses of FOL include representing objects, their characteristics, functions and their relationships. This is used for the development of knowledge-based agents.

    Planning

    • Planning is a technique in AI for creating a strategy to achieve a particular goal from a given start-state.
    • Planning differs from scheduling in that planning focuses on identifying the required task or actions to achieve a goal while scheduling is about determining the best time to perform these actions.
    • Planning in AI is the creation of a sequence of actions to achieve a desired result.
    • Domain-specific planning is designed for a specific domain or problem (e.g., a game-playing AI).
    • Domain-independent planning is applicable to a broad range of applications (e.g., personal assistants).

    AI Planning Agent Requirements

    • Agent robustness includes being resilient to unexpected events or situations. One critical consideration is stochasticity, i.e., how much randomness the environment contains: does an action always produce the same outcome?

    Contingent Planning

    • Contingent planning deals with uncertainty in the environment or actions. The goal is to make plans that work regardless of possible outcomes.

    Time, Resources, and Exogenous Events

    • Temporal constraints, numeric resource constraints, and relationships between numeric properties and time are important considerations in real-world planning problems. An agent needs to account for exogenous (external) events that affect the environment.

    Probability

    • Probability theory is useful in AI because of uncertainty.
    • Partial observability is a factor to consider: the agent must reason about which hidden states are most likely given what it can observe.
    • Stochasticity is another factor, in that the outcome of an action is unpredictable.
    • Bayes Networks are graphical models used to compactly represent probabilistic distributions, used in prediction or decision-making.
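As a worked example of reasoning under uncertainty, Bayes' rule combines a prior belief with the likelihood of the observed evidence. The diagnostic numbers below are hypothetical, chosen only to illustrate the calculation:

```python
def bayes(prior, likelihood, false_positive):
    """P(H | evidence) via Bayes' rule for a binary hypothesis H."""
    evidence = likelihood * prior + false_positive * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical test: 1% base rate, 90% true-positive rate, 5% false-positive rate.
posterior = bayes(prior=0.01, likelihood=0.90, false_positive=0.05)
print(round(posterior, 3))  # 0.154
```

Even with a fairly accurate test, the low prior keeps the posterior small, which is exactly the kind of non-obvious conclusion probabilistic reasoning is used for.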

    Machine Learning

    • Machine learning (ML) is a subfield of AI focused on discovering models from data.
    • ML uses statistics to identify patterns and make predictions.
    • Different types of ML include supervised (data already classified), unsupervised (data not pre-classified), and reinforcement learning (learning through trial and error).

    Classification and Regression

    • Classification predicts categories (e.g., spam detection).
    • Regression predicts continuous values (e.g., house pricing).

    Reinforcement Learning

    • Reinforcement learning (RL) aims to determine actions or decisions that produce the most reward in environments.
    • RL uses different methods such as learning from the real environment, historical data, or simulation environments.
    • The goal is to find a policy for taking actions that maximizes accumulated reward over time.

    Markov Decision Processes

    • Markov Decision Processes (MDPs) are used for decision-making in situations where outcomes are probabilistic.
    • MDPs are used to figure out the best way to take actions to maximize rewards.
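A standard way to solve an MDP is value iteration, sketched below under the simplifying assumption that rewards depend only on the current state; the two-state MDP is invented for illustration:

```python
def value_iteration(states, P, R, gamma=0.9, eps=1e-6):
    """P[s][a] = list of (prob, next_state); R[s] = immediate reward in s.
    Bellman update: V(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) * V(s')."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(sum(p * V[s2] for p, s2 in outcomes)
                       for outcomes in P[s].values())
            v = R[s] + gamma * best
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < eps:  # stop once values have converged
            return V

# Hypothetical two-state MDP: "move" switches states, "stay" does not.
states = ["low", "high"]
P = {"low": {"stay": [(1.0, "low")], "move": [(1.0, "high")]},
     "high": {"stay": [(1.0, "high")], "move": [(1.0, "low")]}}
R = {"low": 0.0, "high": 1.0}
V = value_iteration(states, P, R)
print(V["high"] > V["low"])  # True
```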

    Passive vs Active RL

    • Passive RL follows fixed policies, learns the value function for the policy, and does not change the policy (e.g., a pre-programmed robot route).
    • Active RL learns a policy and value function. Learning occurs by exploring the environment (e.g., a robot tries different paths) and maximizing rewards.
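The passive case can be sketched with the TD(0) update, which nudges each state's value toward the observed reward plus the discounted value of the next state. The episode data below are hypothetical:

```python
def td0_evaluate(episodes, alpha=0.1, gamma=0.9):
    """Passive RL: estimate the value function of a fixed policy from
    observed episodes, each a list of (state, reward, next_state) steps."""
    V = {}
    for episode in episodes:
        for s, r, s2 in episode:
            v, v2 = V.get(s, 0.0), V.get(s2, 0.0)
            V[s] = v + alpha * (r + gamma * v2 - v)  # TD(0) update
    return V

# Hypothetical chain the fixed policy always follows: a -> b -> terminal,
# with a reward of 1 received on the final step.
episodes = [[("a", 0.0, "b"), ("b", 1.0, "end")]] * 200
V = td0_evaluate(episodes)
print(V["b"] > V["a"] > 0)  # True
```

Because the policy never changes, the agent only learns how good the existing route is; an active learner would also vary its actions to search for better routes.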

    Multi-Agent Systems

    • Multi-agent systems have multiple intelligent agents in the same environment.
    • Agent relationships can be beneficial (e.g., Cooperative agents working together to achieve a goal), or competitive (e.g., Adversarial agents).
    • Zero-sum games have one agent's gain equal to another's loss (e.g., chess).

    Minimax and Alpha Beta Pruning

    • Minimax algorithms are used to determine the best move in a game.
    • Alpha-beta pruning is an optimization method for minimax to make computations faster.
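A compact sketch of minimax with alpha-beta pruning is shown below. The nested-list tree encoding and helper names are illustrative; in a real game, a depth limit plus an evaluation function would replace exact leaf values:

```python
def alphabeta(state, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning: branches that cannot affect the
    final decision (outside the (alpha, beta) window) are skipped."""
    kids = children(state)
    if not kids:  # leaf: apply the evaluation function
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, alpha, beta, False,
                                         children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: MIN would never allow this branch
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, alpha, beta, True,
                                     children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # prune: MAX would never choose into this branch
    return value

# Toy game tree as nested lists; numeric leaves are evaluated positions.
tree = [[3, 5], [2, 9]]
children = lambda s: s if isinstance(s, list) else []
best = alphabeta(tree, float("-inf"), float("inf"), True, children,
                 evaluate=lambda s: s)
print(best)  # 3
```

In this tree the leaf 9 is never evaluated: once MIN finds the 2 in the second branch, MAX already knows it prefers the first branch.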

    Game Theory

    • Game theory studies strategies in different types of situations.
    • Zero-sum games have one player's gain equal to another's loss, while cooperative game situations have multiple agents needing to work together to get the desired reward.
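For a small game given as a payoff matrix, pure-strategy Nash equilibria can be found by brute force, checking whether either player could gain by deviating alone. The Prisoner's Dilemma payoffs below are the standard textbook values:

```python
from itertools import product

def pure_nash(payoffs):
    """payoffs[(i, j)] = (row payoff, col payoff) for row strategy i, col j.
    Returns strategy pairs where neither player gains by unilateral deviation."""
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    equilibria = []
    for i, j in product(rows, cols):
        r, c = payoffs[(i, j)]
        row_best = all(payoffs[(i2, j)][0] <= r for i2 in rows)
        col_best = all(payoffs[(i, j2)][1] <= c for j2 in cols)
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: defecting (D) is the dominant strategy for both players.
pd = {("C", "C"): (-1, -1), ("C", "D"): (-3, 0),
      ("D", "C"): (0, -3), ("D", "D"): (-2, -2)}
print(pure_nash(pd))  # [('D', 'D')]
```

Note the equilibrium (D, D) is not Pareto optimal: both players would do better at (C, C), which is why the game is a dilemma.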

    Mechanism Design

    • Mechanism Design focuses on creating the rules of the game (e.g., auctions, bids, etc.) in a way to achieve desired outcomes.

    Online State Estimation

    • Figuring out the most likely current state of a system in real time.
    • Filters are algorithms utilized to estimate a robot's belief about the environment or a system's likely state.

    Particle Filters and Computer Vision

    • Particle filters use small "guesses" called "particles" to determine where an object is located in an environment.
    • Computer vision interprets raw data and images to make decisions.

    State Representation

    • State representation is a description of the current state of a system or environment, used for making decisions about how to proceed with the given situation.
    • Kinematic states describe movement without considering forces or masses.
    • Dynamic states are descriptions that include forces and masses.

    Planning with Uncertainty

    • Conformant planning constructs a single plan that succeeds in every possible state, without relying on sensing during execution.
    • Contingency planning builds branching plans whose actions depend on observed outcomes.

    Thresholding

    • Thresholding is a decision-making tool where a value is compared to a set threshold, and a decision is made based on the result of the comparison.
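A minimal illustration of thresholding (the values and threshold are arbitrary, in the style of binarizing a row of grayscale pixels):

```python
def threshold(values, t):
    """Binary thresholding: 1 if a value meets the threshold, else 0."""
    return [1 if v >= t else 0 for v in values]

# Hypothetical grayscale pixel row, thresholded at 128.
print(threshold([10, 200, 128, 90], 128))  # [0, 1, 1, 0]
```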

    Sensing Actions

    • Sensing actions involve the use of sensors to collect data from the environment, for example, cameras, sensors, etc.

    Computer Vision

    • Computer vision is the process of extracting, analyzing, and interpreting images and videos. This is done using algorithms.

    Law and Ethics of AI

    • Critical considerations around fairness, transparency, privacy, and accountability in AI systems.
    • The EU AI Act gives rules and regulations concerning risk and high-risk applications of AI.


    Description

    Explore the fundamental concepts of Artificial Intelligence, including the distinctions between narrow and general AI. This quiz covers key areas such as logical reasoning, problem-solving, and the diverse applications of AI in various fields. Test your knowledge on the advancements and challenges within the realm of AI.
