Foundations of Artificial Intelligence

Questions and Answers

What is the primary characteristic of Greedy Best-First Search?

  • It keeps track of a fixed number of nodes at each level.
  • It expands the node with the smallest heuristic value. (correct)
  • It combines depth-first search with heuristic guidance.
  • It uses the cost to reach a node as the heuristic value.

In A* Search, what does the function f(n) represent?

  • The total number of nodes expanded.
  • The total cost from the start node to the current node.
  • The combination of the path cost and the heuristic cost. (correct)
  • The estimated cost to reach the goal.

Which of the following local search algorithms allows worse moves initially to escape local maxima?

  • Beam Search
  • Simulated Annealing (correct)
  • Genetic Algorithms
  • Hill-Climb Search

What role do axioms play in a knowledge base?

They are specific sentences that are not derived from others.

    What is the purpose of TELL operations in a knowledge base?

    To add new sentences to the knowledge base.

    Which of the following statements about inference is true?

    It is the process of deriving conclusions from known information.

    What defines a satisfiable sentence in the context of a knowledge base?

    It is true in at least one model of the environment.

    What is the primary focus of local search algorithms compared to informed search algorithms?

    They focus on exploring the solution space.

    What is represented by a chance node in game trees?

    Random events

    What defines a dominant strategy?

    The best strategy regardless of other players' choices

    What is a Nash equilibrium?

    An outcome where players cannot unilaterally improve their payoffs

    What results in a weak Nash equilibrium?

    There are multiple optimal strategies with similar payoffs

    Under what conditions does every game have at least one Nash equilibrium?

    It has a finite number of players and strategies with mixed strategies allowed

    What is a mixed strategy in game theory?

    When players randomize their choices instead of consistently picking the same one

    Which of the following is NOT an issue with mixed strategies?

    Relying on the predictability of opponents

    What is the main goal of mechanism design in game theory?

    To create incentives for strategic behavior and ensure fairness

    What is the primary goal of satisficing planning?

    To produce a valid plan quickly under constraints.

    Which statement best describes optimal planning?

    It aims for the best solution based on specific criteria, even if it's time-consuming.

    What is a characteristic of the STRIPS automated planner?

    It only deals with closed systems with no external factors.

    Which of the following issues does an AI planning agent need to consider regarding stochasticity?

    Assessing if actions always produce the same outcomes.

    What does contingent planning allow an agent to do?

    Develop plans that account for multiple potential outcomes.

    In optimal planning, what is a critical aspect that is emphasized?

    An admissible heuristic that guarantees optimality must be used.

    What is a precondition in the context of STRIPS?

    A specific state of knowledge required to execute an action.

    Which of the following best defines partial observability for an AI planning agent?

    Lacking some information about the state that affects decision-making.

    What is included in the model of the environment in First Order Logic (FOL)?

    A set of functions, relations, and a set of objects

    How do constant symbols in First Order Logic (FOL) function?

    They refer directly to an object

    What distinguishes planning from scheduling in the context of achieving objectives?

    Planning involves identifying tasks, while scheduling chooses the timing of those actions

    In what way is AI planning characterized in intelligent systems?

    It involves autonomous decision-making leading to action sequences for goal achievement

    Which of the following represents domain-specific planning?

    A game AI planner tailored for chess strategies

    What does domain-independent planning imply?

    It contains techniques applicable across different domains

    How do functions in First Order Logic (FOL) differ from constant symbols?

    Functions represent properties related to objects, while constant symbols refer to objects

    What is a key aspect of planning in artificial intelligence?

    It aims to maximize the probability of obtaining desired outcomes

    What term describes the strategy of maximizing rewards based on acquired knowledge?

    Exploitation

    What is the main objective of a Markov Decision Process (MDP)?

    To find the best action that maximizes rewards over time

    In a benign multi-agent system, what characterizes the interaction between agents?

    Agents do not interfere with each other's performance

    Which situation exemplifies a zero-sum game?

    Chess match between two players

    Which of the following best describes Passive Reinforcement Learning?

    The agent follows a fixed policy and evaluates its effectiveness

    What does the minimax algorithm primarily calculate?

    The best move by simulating moves and countermoves

    What does the exploration vs exploitation trade-off entail in Reinforcement Learning?

    Balancing the decision to explore uncharted states and exploiting known rewards

    What is the purpose of an evaluation function in minimax algorithms?

    To estimate how good a state is when the tree depth is limited

    Which of the following accurately describes Active Reinforcement Learning?

    It allows the agent to learn both the policy and value function through exploration

    How does Greedy Temporal Difference (TD) Learning function?

    The agent always chooses the action it believes is the best based on prior learning

    What advantage does Alpha-Beta pruning offer compared to the regular minimax algorithm?

    It skips nodes that won't affect the final decision

    What is typically evaluated in Passive Reinforcement Learning?

    The efficacy of a predefined policy

    How do stochastic games differ from deterministic games?

    They can lead to multiple states based on probabilities

    What is a possible limitation of Passive Reinforcement Learning?

    The potential to miss discovering new states due to a fixed policy

    What is one potential issue with deep game trees when using the minimax algorithm?

    They can be too deep to fully explore or may have infinite depth

    Which of the following terms is associated with a Markov Decision Process (MDP)?

    States, Actions, and Rewards

    Study Notes

    Foundations of Artificial Intelligence

    • Artificial Intelligence (AI) is the study of creating computer systems that act intelligently.
    • AI focuses on creating computers that perform tasks normally requiring human intelligence.
    • Examples of AI include logical reasoning, problem-solving, creativity, and planning.
    • Narrow AI is a type of AI built to solve specific tasks and requires its algorithms to be reconfigured for new problems. This type of AI has seen significant advancement in areas like chess, speech recognition, and facial recognition.
    • General AI applies intelligent systems to any problem. General AI can understand, learn and apply knowledge across a range of tasks.
    • Major research areas in AI include reasoning, learning, problem-solving, and perception.
    • Applications of AI are many and include robotics (industrial, autonomous, domestic types), industrial automation, health, game AI, and areas like education or personal assistants, among many others.

    Intelligent Agents

    • An agent is anything that perceives its environment through sensors and acts upon its environment through actuators.
    • Intelligent agents act autonomously to achieve goals based on their perceptions and actions.
    • The percept sequence is the complete history of information received by the agent from its sensors.
    • An agent's function determines what actions it takes based on its perceived history.
    • Rational agents act in a way that maximizes their performance based on the knowledge of the environment and the agent's goal.
    • Observability refers to how much information about the environment an agent has available when it acts. If the agent does not have all the information it needs, the environment is said to be partially observable.

    Stochasticity and discrete vs continuous environments

    • Stochasticity refers to randomness and unpredictability in a system or process.
    • Actions in a system or environment can be deterministic (predictable) or stochastic (unpredictable). For example, playing chess is deterministic, while rolling a die is stochastic.
    • A discrete environment has a finite number of possible action choices and states (e.g., a chess game).
    • A continuous environment has an infinite number of possible states (e.g., a game of tennis).
    • An adversarial environment has agents competing against each other. A benign environment does not have competing agents.

    Search Techniques

    • Search is a technique in AI that finds a solution to a problem represented by a graph.
    • A directed graph is utilized to represent the problem and find the optimal path from start to end.
    • Uninformed search algorithms, also called blind search, explore the entire search space without knowledge beyond the initial problem statement. Strategies include breadth-first search and depth-first search.
    • Informed search algorithms rely on heuristics to guide the search. Examples include Greedy best-first search and A*.
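
    As a concrete illustration of informed search, here is a minimal A* sketch; the graph, step costs, and heuristic values are invented for the example, and f(n) = g(n) + h(n) combines the path cost with the heuristic. Ordering the frontier on h(n) alone would give greedy best-first search instead.

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: expand the node with the smallest f(n) = g(n) + h(n)."""
    # Frontier is a priority queue of (f, g, node, path).
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour, path + [neighbour]))
    return None, float("inf")

# Hypothetical graph: node -> list of (neighbour, step cost), plus heuristic estimates.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}
print(a_star(graph, h, "S", "G"))   # -> (['S', 'A', 'B', 'G'], 4)
```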

    Modeling Challenges

    • Algorithmic complexity is the measure of the resources an algorithm uses in relation to the input's size, commonly represented using Big-O notation. Time and space complexities are considered in this calculation.
    • Blind or uninformed search algorithms don't utilize any knowledge.
    • Informed search algorithms utilize heuristics to improve the speed of finding the desired state.

    Knowledge and Reasoning

    • Knowledge in AI refers to the facts, information, and concepts on which an AI system is based.
    • Reasoning is the process of using knowledge to make conclusions or decisions.
    • A knowledge base is a collection of statements (sentences) in knowledge representation language.
    • Axioms are sentences in a knowledge base not derived from other sentences.
    • TELL adds sentences to a knowledge base, and ASK queries a base.
    • Inference rules derive new sentences; inference is the process of deriving conclusions using reasoning.
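
    A minimal sketch of the TELL/ASK interface described above, assuming a toy propositional representation with simple if-then rules; the rule format and forward-chaining loop are illustrative, not a full inference engine.

```python
class KnowledgeBase:
    """Toy knowledge base: TELL adds sentences, ASK queries via forward chaining."""

    def __init__(self, axioms=()):
        self.facts = set(axioms)      # axioms: sentences not derived from others
        self.rules = []               # rules of the form (premises, conclusion)

    def tell(self, fact):
        self.facts.add(fact)

    def tell_rule(self, premises, conclusion):
        self.rules.append((set(premises), conclusion))

    def ask(self, query):
        # Forward chaining: derive new sentences until nothing changes.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return query in self.facts

kb = KnowledgeBase(axioms={"rainy"})
kb.tell_rule({"rainy"}, "wet_ground")
print(kb.ask("wet_ground"))   # True: derived by inference, not told directly
```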

    First-Order Logic

    • First-order logic (FOL): a knowledge representation language that efficiently represents objects, their relationships and functions.
    • The model of the environment in FOL consists of a set of objects, a set of relations, and a set of functions; constant symbols refer directly to objects, while function symbols refer to objects via other objects.
    • Key uses of FOL include representing objects, their properties, relationships, and functions, which supports the development of knowledge-based agents (an example sentence is sketched below).
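
    An illustrative, textbook-style pair of FOL sentences (not from the original notes): the constant symbol John names an object directly, the function symbol Mother refers to an object via another object, and quantifiers and relations tie them together.

```latex
\text{Parent}(\text{Mother}(\text{John}), \text{John})
\qquad
\forall x\,\forall y\;\big(\text{Parent}(x, y) \Rightarrow \text{Loves}(x, y)\big)
```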

    Planning

    • Planning is a technique in AI for creating a strategy to achieve a particular goal from a given start-state.
    • Planning differs from scheduling in that planning focuses on identifying the required task or actions to achieve a goal while scheduling is about determining the best time to perform these actions.
    • Planning in AI is the creation of a sequence of actions to achieve a desired result.
    • Domain-specific planning is designed for a specific domain or problem (e.g., a game-playing AI).
    • Domain-independent planning is applicable to a broad range of applications (e.g., personal assistants).
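
    The STRIPS representation mentioned in the quiz above models an action as preconditions plus add and delete effects. Below is a rough Python sketch under that assumption; the robot/room predicates are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """STRIPS-style action: preconditions, facts added, facts deleted."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    del_effects: frozenset

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.del_effects) | self.add_effects

# Illustrative action in a toy domain.
move = Action(
    name="move(robot, roomA, roomB)",
    preconditions=frozenset({"at(robot, roomA)"}),
    add_effects=frozenset({"at(robot, roomB)"}),
    del_effects=frozenset({"at(robot, roomA)"}),
)

state = frozenset({"at(robot, roomA)", "clean(roomA)"})
if move.applicable(state):
    state = move.apply(state)
print(state)   # contains at(robot, roomB) and clean(roomA)
```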

    AI Planning Agent Requirements

    • Agent robustness includes being resilient to unexpected events or situations. One critical consideration is stochasticity, i.e., how much randomness there is in the environment and whether actions are deterministic.

    Contingent Planning

    • Contingent planning deals with uncertainty in the environment or actions. The goal is to make plans that work regardless of possible outcomes.

    Time, Resources, and Exogenous Events

    • Temporal constraints, numeric resource constraints, and relationships between numeric properties and time are important considerations in real-world planning problems. An agent needs to account for exogenous (external) events that affect the environment.

    Probability

    • Probability theory is useful in AI because agents must reason and act under uncertainty.
    • Partial observability is one source of uncertainty: the agent must reason about which states of the unobserved parts of the environment are most likely.
    • Stochasticity is another source: the outcome of an action may be unpredictable.
    • Bayes networks are graphical models that compactly represent probability distributions and are used in prediction and decision-making.
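
    As a minimal illustration of the kind of probabilistic reasoning Bayes networks make compact, the snippet below applies Bayes' rule to a two-variable Disease -> Test model with invented numbers.

```python
# Hypothetical numbers: P(disease), P(positive | disease), P(positive | no disease).
p_d = 0.01
p_pos_given_d = 0.95
p_pos_given_not_d = 0.05

# P(positive) by marginalising over the parent variable.
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)

# Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(round(p_d_given_pos, 3))   # ~0.161
```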

    Machine Learning

    • Machine learning (ML) is a subfield of AI focused on discovering models from data.
    • ML uses statistics to identify patterns and make predictions.
    • Different types of ML include supervised (data already classified), unsupervised (data not pre-classified), and reinforcement learning (learning through trial and error).

    Classification and Regression

    • Classification predicts categories (e.g., spam detection).
    • Regression predicts continuous values (e.g., house pricing).
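
    A toy regression example (predicting a continuous value such as a house price): the sketch fits a straight line by ordinary least squares with numpy; the data points are invented.

```python
import numpy as np

# Invented training data: house size (m^2) vs price (thousands).
x = np.array([50.0, 80.0, 100.0, 120.0])
y = np.array([150.0, 230.0, 290.0, 350.0])

# Design matrix with a bias column; solve least squares for [intercept, slope].
X = np.column_stack([np.ones_like(x), x])
(intercept, slope), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"predicted price for 90 m^2: {intercept + slope * 90:.1f}")
```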

    Reinforcement Learning

    • Reinforcement learning (RL) aims to determine actions or decisions that produce the most reward in environments.
    • RL uses different methods such as learning from the real environment, historical data, or simulation environments.
    • The goal is to find a policy for taking actions that maximizes accumulated reward over time.

    Markov Decision Processes

    • Markov Decision Processes (MDPs) are used for decision-making in situations where outcomes are probabilistic.
    • MDPs are used to figure out the best way to take actions to maximize rewards.
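
    A compact value-iteration sketch for a tiny, made-up two-state MDP; the transition probabilities and rewards are invented, and the update is the standard Bellman optimality backup.

```python
# Hypothetical MDP: transitions[state][action] = list of (probability, next_state, reward).
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 5.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 1.0)],
           "go":   [(1.0, "s0", 0.0)]},
}
gamma = 0.9

V = {s: 0.0 for s in transitions}
for _ in range(100):   # value iteration: repeat Bellman optimality backups
    V = {
        s: max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
            for outcomes in actions.values()
        )
        for s, actions in transitions.items()
    }

# Greedy policy with respect to the converged values.
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in actions[a]))
    for s, actions in transitions.items()
}
print(V, policy)
```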

    Passive vs Active RL

    • Passive RL follows fixed policies, learns the value function for the policy, and does not change the policy (e.g., a pre-programmed robot route).
    • Active RL learns a policy and value function. Learning occurs by exploring the environment (e.g., a robot tries different paths) and maximizing rewards.
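
    A sketch of passive RL in the TD(0) style: the agent follows a fixed policy and only learns the value function from sampled transitions. The two-state chain, its rewards, and the learning rate are all invented for illustration.

```python
import random

random.seed(0)
gamma, alpha = 0.9, 0.1
V = {"s0": 0.0, "s1": 0.0, "end": 0.0}

def step(state):
    """Environment under the fixed policy: returns (next_state, reward)."""
    if state == "s0":
        return ("s1", 0.0) if random.random() < 0.8 else ("s0", 0.0)
    return ("end", 1.0)                      # s1 always reaches the goal

for _ in range(1000):                        # many episodes under the fixed policy
    state = "s0"
    while state != "end":
        next_state, reward = step(state)
        # TD(0) update toward the bootstrapped target r + gamma * V(s')
        V[state] += alpha * (reward + gamma * V[next_state] - V[state])
        state = next_state

print({s: round(v, 2) for s, v in V.items()})
```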

    Multi-Agent Systems

    • Multi-agent systems have multiple intelligent agents in the same environment.
    • Agent relationships can be beneficial (e.g., Cooperative agents working together to achieve a goal), or competitive (e.g., Adversarial agents).
    • Zero-sum games have one agent's gain equal to another's loss (e.g., chess).

    Minimax and Alpha Beta Pruning

    • Minimax algorithms are used to determine the best move in a game.
    • Alpha-beta pruning is an optimization method for minimax to make computations faster.
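
    A minimal minimax-with-alpha-beta sketch over a hand-built game tree: leaves hold invented evaluation scores, internal nodes are lists of children, and branches that cannot affect the final decision are skipped.

```python
import math

def alphabeta(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning on a nested-list game tree."""
    if not isinstance(node, list):          # leaf: evaluation score
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # remaining children cannot change the result
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Hypothetical depth-2 tree: MAX to move, each sublist is a MIN node over leaf scores.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))     # 3
```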

    Game Theory

    • Game theory studies strategies in different types of situations.
    • Zero-sum games have one player's gain equal to another's loss, while cooperative game situations have multiple agents needing to work together to get the desired reward.
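
    To make the equilibrium idea concrete, the sketch below checks every pure-strategy profile of a small two-player payoff matrix for profitable unilateral deviations; the Prisoner's Dilemma payoffs are the usual illustrative ones, not from the original notes.

```python
# payoffs[(row_action, col_action)] = (row player's payoff, column player's payoff)
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
actions = ["cooperate", "defect"]

def is_nash(row, col):
    """Pure Nash equilibrium: neither player can improve by deviating alone."""
    r_pay, c_pay = payoffs[(row, col)]
    best_row = all(payoffs[(r, col)][0] <= r_pay for r in actions)
    best_col = all(payoffs[(row, c)][1] <= c_pay for c in actions)
    return best_row and best_col

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
print(equilibria)   # [('defect', 'defect')] for these payoffs
```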

    Mechanism Design

    • Mechanism Design focuses on creating the rules of the game (e.g., auctions, bids, etc.) in a way to achieve desired outcomes.

    Online State Estimation

    • Figuring out the most likely current state of a system in real time.
    • Filters are algorithms utilized to estimate a robot's belief about the environment or a system's likely state.

    Particle Filters and Computer Vision

    • Particle filters use small "guesses" called "particles" to determine where an object is located in an environment.
    • Computer vision interprets raw data and images to make decisions.
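
    A very small 1-D particle filter sketch in the spirit of the description above: particles are guesses about an object's position, moved by a noisy motion model and re-weighted and resampled using a noisy position measurement. The noise levels and trajectory are invented.

```python
import math
import random

random.seed(1)
N = 500                                   # number of particles ("guesses")
particles = [random.uniform(0.0, 10.0) for _ in range(N)]
true_pos = 2.0

def measurement_likelihood(z, x, sigma=0.5):
    """How well a particle x explains measurement z (Gaussian noise model)."""
    return math.exp(-((z - x) ** 2) / (2 * sigma ** 2))

for _ in range(20):
    # True object and particles move by +0.3 per step (with motion noise for particles).
    true_pos += 0.3
    particles = [x + 0.3 + random.gauss(0.0, 0.1) for x in particles]

    # Noisy measurement of the true position.
    z = true_pos + random.gauss(0.0, 0.5)

    # Weight particles by measurement likelihood, then resample proportionally.
    weights = [measurement_likelihood(z, x) for x in particles]
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N
print(f"true position: {true_pos:.2f}, estimate: {estimate:.2f}")
```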

    State Representation

    • State representation is a description of the current state of a system or environment, used for making decisions about how to proceed with the given situation.
    • Kinematic states describe movement without considering forces or masses.
    • Dynamic states are descriptions that include forces and masses.
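
    A small sketch contrasting the two descriptions: a kinematic state carries only pose and velocity, while a dynamic state adds mass and applied force; the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class KinematicState:
    """Describes motion only: no forces or masses."""
    x: float
    y: float
    heading: float
    speed: float

@dataclass
class DynamicState(KinematicState):
    """Adds the quantities needed to reason about forces."""
    mass: float
    applied_force: float

print(DynamicState(x=0.0, y=0.0, heading=0.0, speed=1.0, mass=2.0, applied_force=0.5))
```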

    Planning with Uncertainty

    • Conformant planning produces plans that succeed despite uncertainty, for every possible state outcome.
    • Contingent planning allows the agent to choose its actions based on the possible outcomes and their relationships.

    Thresholding

    • Thresholding is a decision-making tool where a value is compared to a set threshold, and a decision is made based on the result of the comparison.
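
    A trivial thresholding sketch: each measured value is compared against a fixed threshold and a binary decision is made; the readings and the threshold value are invented.

```python
THRESHOLD = 0.75    # hypothetical confidence threshold

def decide(confidence, threshold=THRESHOLD):
    """Return a binary decision by comparing a value against the threshold."""
    return "obstacle" if confidence >= threshold else "clear"

readings = [0.12, 0.83, 0.74, 0.91]
print([decide(c) for c in readings])   # ['clear', 'obstacle', 'clear', 'obstacle']
```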

    Sensing Actions

    • Sensing actions involve the use of sensors (for example, cameras) to collect data from the environment.

    Computer Vision

    • Computer vision is the process of extracting, analyzing, and interpreting images and videos. This is done using algorithms.

    Law and Ethics of AI

    • Critical considerations around fairness, transparency, privacy, and accountability in AI systems.
    • The EU AI Act sets out rules and regulations for AI applications according to their level of risk, with particular requirements for high-risk applications.


    Description

    Explore the fundamental concepts of Artificial Intelligence, including the distinctions between narrow and general AI. This quiz covers key areas such as logical reasoning, problem-solving, and the diverse applications of AI in various fields. Test your knowledge on the advancements and challenges within the realm of AI.
