Foundations of Artificial Intelligence PDF

Summary

This document covers the fundamental concepts of Artificial Intelligence (AI), including definitions, agent types, and various search techniques. It's a detailed overview of AI's core principles for those interested in learning about or delving into the field.

Full Transcript

**Foundations of Artificial Intelligence.** Artificial Intelligence (AI) is the field that studies the synthesis and analysis of computational agents that act intelligently. In other words, AI focuses on creating computers and programs that can act intelligently.

**What is AI?** According to the Oxford Dictionary, AI is "*the theory and development of computer systems able to perform tasks normally requiring human intelligence*". Examples of intelligence include:

- Logical reasoning
- Problem solving
- Creativity
- Planning

**Narrow vs General AI**

- Narrow AI requires reconfiguration or a new algorithm to solve a different task, for example playing chess, speech recognition or facial recognition. This is the type of AI in which we see the most advancements.
- General AI applies an intelligent system to any problem. General AI can understand, learn and apply knowledge across a wide range of tasks.

The main research areas of AI include:

1. Reasoning
2. Learning
3. Problem solving
4. Perception

Applications of AI include:

1. Robotics: industrial robots, autonomous systems, domestic robots
2. Industrial automation: intelligent control, safety and security
3. Health: drug design and development, operating theatre robotics
4. Games: NPCs, virtual and augmented reality
5. Others such as education, agriculture, personal assistance, etc.

**Intelligent Agents:** An agent is anything that can be viewed as *perceiving its environment* through sensors and acting upon that environment through actuators. An intelligent agent perceives, reasons and acts autonomously to achieve goals (e.g. Siri). It is an entity which acts, directing its activity towards achieving goals, upon an environment using observations through sensors and consequent actuators.

**The Percept Sequence.** The percept sequence is the complete history of all the data the agent has received from its sensors.

Definitions:

1. Agent: something that acts; an agent is judged by how it acts.
2. Rational agent: an entity that acts to achieve the best possible outcome based on its knowledge and goals. A rational agent perceives its environment and makes decisions to maximise its performance based on a specific set of criteria.

**Agent Rationality** depends on:

- Prior knowledge
- Performable actions
- The percept sequence to date
- The success criterion

**Observability:** when not all the information an agent needs in order to decide is available, we call the environment **partially observable**.

**Stochasticity** refers to randomness or unpredictability in a system or process: the outcome of an agent's actions is not entirely predictable.

- An action is **deterministic** if applying it always results in the same outcome (a chess-playing agent making a move).
- An action is **stochastic** if applying it may lead to different outcomes (rolling a die in a board game).

**Discrete vs Continuous:** An environment is discrete if there is a finite number of action choices and a finite number of states to represent it; a chess game, for example, has a finite number of board positions and a finite number of possible moves. An environment is continuous if the space of possible states or actions may be *infinite*; for example, a game of tennis can be played in infinitely many combinations of states and possible player actions. An environment is **adversarial** if the agent is competing against other agents (possibly human) to achieve its objectives.
For example, an agent playing a game of chess is adversarial, whereas an agent trying to predict the weather is **benign**.

**Types of Agents:**

1. Reflex agent: uses only the current percept, assumes the environment is fully observable and does not consider history, e.g. a smoke detector.
2. Model-based reflex agent: keeps track of the part of the world it cannot see by maintaining an internal state that depends on the percept history, a model of how the environment evolves, and the effects the applied actions had on the environment, e.g. a robot vacuum cleaner.
3. Goal-based agent: plans into the future and selects actions according to whether they eventually lead to its goals, e.g. a GPS navigator.
4. Utility-based agent: plans into the future and selects actions according to whether they maximise some utility (a measure of solution quality), e.g. an autonomous car.
5. Learning agent: improves its performance over time by learning from experience and feedback, e.g. a self-driving car.

**Search Techniques:** Search is one of the most powerful techniques for solving AI problems. The problem is formulated as a directed graph, and the system must find the right action sequence to reach the goal condition; the solution must achieve the goal and satisfy all constraints.

**Modelling Challenges:** Algorithmic complexity is the measure of how much computational resource an algorithm needs in proportion to the size of its input, usually expressed using big O notation. We distinguish time complexity (will the algorithm take a long time to complete?) and space complexity (will the algorithm need a lot of memory to run?).

**Blind Search.** Also known as uninformed search, this refers to search algorithms that explore the solution space without any knowledge beyond the initial problem definition. It works by expanding each state to find all its successors, and we keep expanding until the desired state is found. Different strategies determine the order in which states are expanded:

- Breadth-first search
- Depth-first search
- Variants of the above

**Informed Search.** The evaluation function can involve multiple components, typically the actual cost g(n) and the estimated cost h(n) to reach a goal node from node *n*. A **heuristic** is a criterion that helps us decide which course of action to take when we have incomplete information; we as humans use heuristics all the time, for example when choosing which study units to take or choosing a career path. A heuristic function ***h(n)*** estimates the cost of the cheapest path from the state at node *n* to a goal state.

**Greedy Best-First Search:** This algorithm selects the node that appears to be closest to the goal based on a heuristic function. It does not consider the actual cost to reach the node. It is greedy because, at each step, it tries to get as close to the goal as it can.

**A\* Search:** This is a very popular best-first search strategy. It combines the benefits of Uniform Cost Search (which explores the least costly path) and Greedy Best-First Search (which explores the most promising path), expanding nodes in order of f(n) = g(n) + h(n). A\* is a very good strategy when coupled with a good heuristic function; it is both complete and optimal (given an admissible heuristic). Variants of A\* include **Iterative Deepening A\* (IDA\*)**, which performs a depth-first search until the total cost f(n) = g(n) + h(n) exceeds a given threshold, and **Weighted A\***, which gives a weight to the heuristic component of the evaluation function.
In simpler words:

- **Iterative Deepening A\***: a search algorithm that combines depth-first search's space efficiency with breadth-first search's completeness by repeatedly performing depth-limited searches with increasing limits until the goal is found.
- **Weighted A\***: a variant of A\* in which the heuristic is multiplied by a weight (greater than 1) to prioritise exploration of more promising paths, potentially speeding up the search at the cost of optimality.

**Local (Neighbourhood) Search:** Also known as meta-heuristics. This is an optimisation technique where the algorithm starts from an initial solution and iteratively explores neighbouring solutions to find a better one. It focuses on making small changes to the current solution to improve it, rather than exploring the entire search space. These techniques typically use a minimal amount of memory and are often more effective in very large search spaces.

**Hill Climbing (Greedy Local Search):** This is the simplest local search algorithm; it chooses the successor with the best value and repeats until no successor has a better value. Hill climbing starts at an initial state and moves to the neighbouring state with the best heuristic value (closest to the goal), continuing until it reaches a local maximum or the goal. To combat the problem of getting stuck at a local maximum or minimum, we have **Simulated Annealing**, which combines hill climbing with a **random walk** to get a chance to escape these local optima. It works by randomly choosing a successor: if it is better, accept it; if not, accept it with some probability less than 1. (Small sketches of A\* and simulated annealing follow the rundown below.)

**Rundown of all search techniques:**

1. Uninformed (Blind) Search Algorithms: these explore the search space without any domain-specific knowledge, relying solely on the problem definition.
   - **Breadth-First Search:** explores all nodes at the current depth before moving to the next level. Guaranteed to find the shortest path in terms of the number of steps if costs are uniform.
   - **Depth-First Search:** explores as far down a branch as possible before backtracking.
   - **Uniform-Cost Search:** expands the least-cost node first. Guaranteed to find the optimal solution if costs are non-negative.
   - **Iterative Deepening Search:** combines BFS and DFS by performing DFS with progressively increasing depth limits.
2. Informed Search Algorithms: these use additional knowledge, known as heuristics, to guide the search, making it more efficient.
   - **Greedy Best-First Search:** expands the node with the smallest heuristic value.
   - **A\* Search:** uses f(n) = g(n) + h(n), where g(n) is the cost to reach n and h(n) is the estimated cost to the goal.
   - **Iterative Deepening A\*:** combines the space efficiency of depth-first search with the heuristic guidance of A\*.
3. Local Search Algorithms: these focus on exploring the solution space rather than the state space, and are typically used for optimisation problems.
   - **Hill-Climbing Search:** moves to the neighbour with the highest improvement in heuristic value.
   - **Simulated Annealing:** uses a probabilistic approach to escape a local maximum by allowing worse moves early on.
   - **Genetic Algorithms:** evolve a population of candidate solutions through selection, crossover and mutation.
   - **Beam Search:** keeps track of a fixed number of the most promising nodes at each level.
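To tie together the informed-search ideas above, here is a minimal sketch of A\* in Python. The toy graph, edge costs and heuristic values are invented purely for illustration and are not from the notes.

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search: expands nodes in order of f(n) = g(n) + h(n).

    neighbours(n) yields (successor, step_cost) pairs;
    h(n) is the heuristic estimate of the cost from n to the goal.
    """
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for succ, cost in neighbours(node):
            new_g = g + cost
            if new_g < best_g.get(succ, float("inf")):
                best_g[succ] = new_g
                heapq.heappush(frontier, (new_g + h(succ), new_g, succ, path + [succ]))
    return None, float("inf")

# Hypothetical toy graph with edge costs and an admissible heuristic.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 2)], "D": []}
heur = {"A": 3, "B": 2, "C": 2, "D": 0}

path, cost = a_star("A", "D", lambda n: graph[n], lambda n: heur[n])
print(path, cost)   # ['A', 'B', 'C', 'D'] 4
```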
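And a minimal sketch of escaping local optima with simulated annealing, using a one-dimensional toy objective that is likewise invented for illustration.

```python
import math
import random

def simulated_annealing(objective, x0, steps=10_000, temp0=1.0):
    """Simulated annealing: accept a worse neighbour with probability
    exp(delta / T), so the search has a chance to escape local maxima."""
    x, best = x0, x0
    for t in range(1, steps + 1):
        temperature = temp0 / t                     # simple cooling schedule
        candidate = x + random.uniform(-0.1, 0.1)   # random neighbour of x
        delta = objective(candidate) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate          # accept improvements, or worse moves with prob < 1
        if objective(x) > objective(best):
            best = x
    return best

# Toy objective with several local maxima (purely illustrative).
f = lambda x: math.sin(3 * x) + 0.5 * math.sin(x)
print(round(simulated_annealing(f, x0=0.0), 3))
```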
**Knowledge and Reasoning:** Knowledge in AI refers to the data, information and concepts that an AI system uses to understand the world and solve problems. Reasoning, on the other hand, refers to the process of drawing conclusions or making decisions based on the available knowledge. A knowledge-based agent is an AI system that uses a knowledge base to make decisions, reason and solve problems. It maintains an up-to-date knowledge base and uses an inference engine to deduce and update knowledge, and hence choose the best action.

- A **knowledge base** is a collection of statements (known as sentences) in some knowledge representation language.
- **Axioms** are sentences that were not derived from other sentences.
- A **TELL** operation adds new sentences to the knowledge base.
- An **ASK** operation queries the knowledge base.
- **Inference rules** derive new sentences from other sentences.
- **Inference** is the process by which conclusions are reached based on known evidence and reasoning.

An inference procedure is **sound** if it derives only entailed sentences (it does not make up things that are not true), and **complete** if it can derive any sentence that is entailed (it can prove all the true sentences that can be proved).

- **Satisfiable:** a satisfiable sentence is true in at least one model of the environment.
- **Valid:** a valid sentence is true in all models of the environment.

**FOL (First-Order Logic):** Can represent objects in the environment, functions of these objects, and relationships between objects. In FOL, the model of the environment includes:

- A set of objects; every model has at least one object, and one can refer to a specific object with a constant symbol.
- A set of functions that map from an object to another object.
- A set of relations over the objects.

A **term in FOL** is a logical expression that refers to an object. **Constant symbols** are terms that refer directly to an object, **functions** refer to an object which is a property of other objects, and **variables** are placeholders for objects.

**Planning:** Planning is devising a strategy to achieve some desired goal state, by choosing actions to maximise the probability of obtaining some outcome. Planning is an indicator of high intelligence.

**The difference between Planning and Scheduling.**

- Planning: identifying the tasks or actions that need to take place to achieve your objective.
- Scheduling: choosing the right time when such actions should take place.

The two are very related: a plan without a schedule is not actionable.

**AI Planning:** The ability of an intelligent system to make autonomous decisions that lead to a sequence of actions to achieve its goals. It is the study of decision making under different circumstances. Some real-world applications include:

- Industrial automation in oil and gas drilling operations
- Autonomous driving
- Automation of unmanned underwater vehicles

**Domain-Specific vs Domain-Independent Planning:** Domain-specific planning refers to techniques tailored to solving problems within a specific domain or area, for example game AI: a game AI planner may be designed to make decisions in a particular game like chess, with detailed strategies and move sequences for that game. The system is designed for only one application. Domain-independent planning refers to techniques that are generalised and can be applied across different domains.
These planners do not assume specific knowledge of the domain and are designed to work with any problem that can be described in terms of actions and goals. Examples include personal assistants like Google Assistant or Amazon Alexa, which use domain-independent planning to schedule meetings, create to-do lists or provide reminders.

A planning problem is typically solved using some flavour of combinatorial search:

1. State-space search: find a state that satisfies the goal condition and extract the path from the initial state to the goal state.
2. Plan-space search: find a valid plan from a graph of partial plans.

In the context of AI planning, two key strategies are often discussed when generating plans: satisficing and optimal planning. Both aim to find a solution to a planning problem, but they differ in the quality and efficiency of the solution.

1. **Satisficing Planning:** The planner aims to find a solution that is good enough, i.e. satisfies the basic requirements of the problem, but not necessarily the best or most efficient solution. The goal is to produce a valid plan quickly, often under resource or time constraints.
2. **Optimal Planning:** The planner aims to find the best possible solution according to some criteria, such as the least number of steps, minimal cost or maximum efficiency. The goal is to ensure that the solution is the best among all possible solutions, though it may take longer to compute. (A\* search guarantees optimality if the problem has an admissible heuristic.)

**STRIPS (Stanford Research Institute Problem Solver) Automated Planner:** The most basic form of representation is propositional STRIPS:

- Facts can be true or false.
- A state is represented as the set of true facts.
- Closed-world assumption.
- Actions have preconditions, add effects and delete effects. (A small sketch of this representation follows at the end of this planning section.)

An AI planning agent needs to be robust enough to recover from scenarios such as:

1. Stochasticity: do the actions always have a deterministic outcome?
2. Other agents: can they interfere with the plan? Are they competitive agents?
3. Partial observability: state information we don't know about.
4. Incorrect knowledge: issues with the data that might affect our decisions, e.g. wrong GPS coordinates.

**Contingent Planning:** A type of planning used in artificial intelligence when there is uncertainty about the environment or the outcomes of actions.

- Plan for different possible outcomes of an action.
- Enrich plans with conditional statements that depend on the evaluation of logical statements at run time.

**Conformant Planning:** A type of planning in artificial intelligence that deals with situations where an agent must plan its actions without knowing the exact state of the world. It assumes that the agent has no access to current state information, so it must plan in a way that is robust to any possible state the world could be in.
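Returning to the propositional STRIPS representation above (facts, a state as the set of true facts, and actions with preconditions, add effects and delete effects), here is a minimal sketch of how such an action could be encoded and applied. The robot-and-door facts are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A propositional STRIPS action: preconditions plus add and delete effects."""
    name: str
    preconditions: frozenset
    add_effects: frozenset
    delete_effects: frozenset

def applicable(state: frozenset, action: Action) -> bool:
    # An action is applicable when all its preconditions are true in the state.
    return action.preconditions <= state

def apply(state: frozenset, action: Action) -> frozenset:
    # Successor state = (state minus delete effects) plus add effects.
    return (state - action.delete_effects) | action.add_effects

# Hypothetical example: a robot moving from room A to room B through an open door.
move_a_b = Action(
    name="move(A, B)",
    preconditions=frozenset({"at(robot, A)", "open(door)"}),
    add_effects=frozenset({"at(robot, B)"}),
    delete_effects=frozenset({"at(robot, A)"}),
)

state = frozenset({"at(robot, A)", "open(door)"})
if applicable(state, move_a_b):
    print(apply(state, move_a_b))   # {'open(door)', 'at(robot, B)'}
```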
**Probability:** Probability theory is one of the most important tools used in AI to handle uncertainty.

- Partial observability: which facts are most likely given the data I am observing?
- Stochasticity: what is the most likely outcome of some action?

**Bayes Network:** A graphical model that represents a set of variables and their conditional dependencies using a specific type of graph. It is widely used in probabilistic reasoning, machine learning, and AI for tasks such as prediction, diagnosis and decision-making. In another definition, Bayes networks are compact representations of probability distributions over graphs of random variables. They are used in:

- Diagnostics
- Predictions
- Decision making
- Pattern observations
- Detecting anomalies

Bayes networks are also the basic building blocks of more advanced AI techniques, including causal networks, Markov decision processes, particle filters and more. Variables are categorised into three types:

1. Query: what we want to know.
2. Evidence: what we can observe.
3. Hidden: variables that have some kind of influence on our model.

We typically want to find the probability distribution of our query variables given the evidence, or the most likely explanation of the evidence. Probability plays an important role in AI to manage uncertainty: we can use probability to infer knowledge from observable data, and Bayes' rule allows us to compute the posterior probability. Variables can be dependent, independent or conditionally independent, and Bayes networks can be used to determine independence given the causal relationships of known and unknown variables.

**Machine Learning:** ML is a sub-field of AI. It is about **discovering models from data** and makes heavy use of statistics. Applications of machine learning include:

- Customer purchase patterns, which lead to product recommendations.
- Robotics, where information is extracted from noisy sensors.
- Fintech, e.g. stock market patterns for trading.

What does the system actually learn?

1. Model parameters, for example: if it is cloudy in December, what is the likelihood of rain? (probability distributions for certain events)
2. Structure, for example: air pressure is correlated with the weather (relationships between variables).
3. Hidden concepts, for example: customers who have certain tastes in music (identifiers of certain clusters or groups).

With these in mind, we can perform tasks such as:

1. Classification: assign categories to data.
2. Regression: predict continuous numerical values.
3. Prediction: forecast future data outcomes.
4. Recommendation: suggest relevant items intelligently.
5. Anomaly detection: identify unusual data patterns.

**Classification vs Regression:** The goal of classification is to predict a category or class label, with outputs that are discrete values (dog, cat); examples include email spam detection, image recognition and medical diagnostics. Regression predicts a continuous numerical value, with an output that is a real number such as a price or a temperature; an example is predicting house prices based on features such as size and location.

Linear regression is one of the simplest and most widely used machine learning algorithms for regression tasks. It models the relationship between a dependent variable and one or more independent variables by fitting a straight line: it finds the straight line that best fits a set of points, and is used to predict one number based on another. The loss function measures the residual error of the regression line with respect to the data; the objective is to minimise this function over the training data set. Linear regression is very fast to train and works well if there is a clear linear correlation between the input variables and the target values. If the data contains a lot of noise, the noisy points will steer the regression line away from the real trend line.
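As a concrete illustration of linear regression and its loss function, here is a minimal sketch that fits a straight line by minimising the mean squared error in closed form. The house-size and price numbers are made up.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = w*x + b with one input variable.

    Minimises the loss  L(w, b) = (1/n) * sum((w*x_i + b - y_i)^2).
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form solution: w = cov(x, y) / var(x),  b = mean_y - w * mean_x
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    w = cov_xy / var_x
    b = mean_y - w * mean_x
    return w, b

# Hypothetical training data: house size (in 100 m^2) vs price (in 100k).
sizes = [0.5, 1.0, 1.5, 2.0, 2.5]
prices = [1.1, 1.9, 3.2, 3.9, 5.1]
w, b = fit_line(sizes, prices)
print(f"price ~= {w:.2f} * size + {b:.2f}")
```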
There are seven steps in a typical machine learning project:

1. Collect data: gather the data you need to solve your problem.
2. Prepare data: clean, format and organise the data for analysis.
3. Choose a model: for example, linear regression for predicting house prices.
4. Train the model: use your data to teach the model how to make predictions.
5. Evaluate the model: test the model to see how well it performs on unseen data.
6. Tune the model: adjust its settings.
7. Deploy the model: put the model into action.

**Reinforcement Learning:** A machine learning technique to determine which **decisions or actions** deliver the most reward; in a sense, it is very related to planning. The agent learns by exploring and observing the effects of its actions on the environment through:

- Training in the real environment
- Training from historical data
- Training from simulations

The objective is to find a policy that recommends the actions that maximise some reward.

**Markov Decision Process:** An MDP is a way to make decisions in situations where outcomes are uncertain. It is used to figure out the best action to take to maximise rewards over time. Key concepts include:

- States (where you are)
- Actions (choices you can make)
- Transitions (what happens after each action)
- Rewards (points you earn for reaching a state)

The objective of an MDP is to maximise the rewards as we proceed through all time steps from some state. Since the model is probabilistic, we do not know for sure which state we are going to end up in, nor what reward we are going to receive.

Reinforcement learning provides techniques that discover:

- the utility of a state, if the transition probabilities are already known;
- the utility of applying an action in each state, if we don't know any other information;
- which action is best to apply in a given state.

**Passive RL:**

- The agent follows a fixed policy and learns to evaluate it.
- It learns the value function for the given policy.
- Example: a robot that evaluates how good its preprogrammed route is but doesn't change the route (TD Learning; see the sketch at the end of this section).

**Active RL:**

- The agent learns both the policy and the value function.
- It explores the environment to find the best actions to maximise rewards.
- Example: a robot that tries different paths to find the best route to the goal.

In short, passive RL follows the rules and learns how good they are: it sticks to **the same policy**, is limited to updating the states determined by that fixed policy, and some states will never be discovered. Active RL experiments, learns the best rules and acts to maximise rewards.

Active RL: **Greedy TD Learning** is a method where the agent always chooses the best action to maximise the reward while learning using Temporal Difference updates (the agent always picks the action it thinks is best right now to get the most reward, based on what it has learned so far).

**Exploration vs Exploitation Trade-Off:** Reinforcement learning algorithms always have a trade-off between:

1. **Exploration:** allows the agent to traverse states that have not yet been analysed sufficiently to compute their utility.
2. **Exploitation:** allows the agent to take advantage of the knowledge it has acquired to collect the rewards.

We want an agent that explores other states to discover alternative paths that could carry more reward and updates the policy accordingly: it explores more when it is less certain about the environment, and exploits the policy when it becomes more certain about the environment.
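To make the reinforcement-learning ideas above more concrete, here is a minimal sketch of passive TD(0) policy evaluation on a tiny, made-up Markov chain; the states, slip probability and rewards are all hypothetical.

```python
import random

# Hypothetical 4-state corridor: the fixed policy always moves right;
# with probability 0.2 the move "slips" and the agent stays put.
STATES = [0, 1, 2, 3]          # state 3 is terminal, reached with reward +1
GAMMA = 0.9                    # discount factor
ALPHA = 0.1                    # learning rate

def step(state):
    """Environment dynamics under the fixed policy (stochastic)."""
    if random.random() < 0.2:
        return state, 0.0                     # slip: stay put, no reward
    nxt = state + 1
    return nxt, (1.0 if nxt == 3 else 0.0)    # reaching the goal gives reward 1

def td_policy_evaluation(episodes=5_000):
    V = {s: 0.0 for s in STATES}
    for _ in range(episodes):
        s = 0
        while s != 3:
            s2, r = step(s)
            # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s')
            V[s] += ALPHA * (r + GAMMA * V[s2] - V[s])
            s = s2
    return V

print(td_policy_evaluation())
```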
**Multi-Agent Systems:** What happens if there are multiple intelligent agents operating in the environment? The dynamics between agents can be:

- Benign: each agent is minding its own business but could still interfere.
- Cooperative: agents are working towards a common set of goals.
- Adversarial: agents are competing against other agents.

**Zero-Sum Games:** A situation where the gain of some participants is equivalent to the loss of the others, for example chess. The objective of each player is to maximise their own utility.

**Minimax Algorithm Complexity:** The minimax algorithm is used in games like chess or tic-tac-toe to decide the best move by simulating all possible moves and countermoves. Its time complexity is determined by:

1. The branching factor (the average number of possible moves per turn)
2. The depth of the tree (the number of turns the algorithm looks ahead)

Reducing the depth in game trees:

1. Problem: some game trees are too deep to fully explore, or have infinite depth.
2. Solution: stop at a certain depth (d) and use an evaluation function to estimate how good a state is.
3. Evaluation function: provides a numerical estimate of the value of a game state at depth (d), for example by assigning weights to pieces in chess.

In short, when the game tree is too large, stop early and use an evaluation function to estimate the value of states instead of exploring to the end.

**Alpha-Beta Pruning:** An optimisation technique for the minimax algorithm. It reduces the number of nodes the algorithm evaluates in a game tree, allowing it to look deeper without increasing computation time: it prunes (skips) branches of the game tree that don't affect the final decision, focusing only on the most promising moves and ignoring others that are guaranteed to be worse. Alpha-beta pruning is an optimised version of the minimax algorithm.

**Stochastic Games:** Some games involve stochastic actions, which can lead to multiple states with some probability.

- A game tree for a stochastic game represents all possible states, actions, outcomes and transitions, including random events. It combines elements of traditional game trees with probabilities to handle uncertainty in state transitions.
- Chance nodes represent random events like a dice roll or coin flip.

**Game Theory.** Studies the strategies to apply in different kinds of situations:

- Sequential (turn-based) vs simultaneous games
- Cooperative vs non-cooperative games
- Zero-sum vs non-zero-sum games

Dominant strategy: the best strategy no matter what the other players choose; a strategy that is the best choice for a player, no matter what others do. Equilibrium: an outcome where no player can benefit from switching; a situation where no player can improve their payoff by changing their strategy alone. Pareto optimal outcome: there is no other outcome that ALL players would agree to switch to; an outcome where no player can be made better off without making someone else worse off.

Game theory therefore studies strategic interactions between players, helping us understand and predict decision-making in competitive and cooperative scenarios across various fields. A Nash equilibrium is a strategy assignment for each player, where no player can do better by switching strategy unilaterally.

- "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?"
- If the answer of every player is yes, then it is a **Strict Nash Equilibrium**.
- If there is an alternative which gives the same payout, then it is a **Weak Nash Equilibrium**.
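The question quoted above ("would I suffer a loss by changing my strategy?") can be checked mechanically on a small payoff table. Below is a minimal sketch that finds the pure-strategy Nash equilibria of a two-player game; the Prisoner's-Dilemma-style payoffs are a made-up example, not from the notes.

```python
# payoffs[(row_strategy, col_strategy)] = (row player's payoff, column player's payoff)
# Hypothetical Prisoner's-Dilemma-style payoff table.
ROW_STRATEGIES = ["cooperate", "defect"]
COL_STRATEGIES = ["cooperate", "defect"]
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def pure_nash_equilibria():
    """A cell is a Nash equilibrium if neither player gains by deviating unilaterally."""
    equilibria = []
    for r in ROW_STRATEGIES:
        for c in COL_STRATEGIES:
            row_pay, col_pay = payoffs[(r, c)]
            row_ok = all(payoffs[(r2, c)][0] <= row_pay for r2 in ROW_STRATEGIES)
            col_ok = all(payoffs[(r, c2)][1] <= col_pay for c2 in COL_STRATEGIES)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria())   # [('defect', 'defect')]
```

In this particular table, defecting is also a dominant strategy for both players, so the single equilibrium found is strict.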
Every game has at least one Nash equilibrium if:

1. It has a finite number of players.
2. It has a finite number of pure strategies.
3. Mixed strategies are allowed, i.e. a player can choose based on a probability distribution over pure strategies.

In the absence of a dominant strategy, we must resort to a **Mixed Strategy**: choosing based on some probability (p). A mixed strategy is when players randomise their choices instead of always picking the same one. This prevents the opponent from predicting your moves and gaining an advantage; mixed strategies make it impossible for opponents to exploit your choices. Issues with mixed strategies include:

- They assume that the agents always want to be rational.
- The source of randomness needs to be kept secret.

**Mechanism Design:** The set of techniques involved in designing the rules of the games. It ensures fairness and incentivises agents to behave in a certain way. It is the **reverse of game theory**: instead of analysing existing games, it designs games to achieve specific goals. For example, in auctions, it designs rules to ensure fair bidding and maximise seller revenue. The goal is to align individual incentives with the system's objectives for optimal outcomes.

**Second-Price Auctions:** The price paid by the highest bidder is that of the second-highest bid; this incentivises bidders to bid the actual value they are ready to pay.

**AI and Robotics.** Characteristics of robotics environments include:

- Partially observable: there is a limit to how much information the sensors can perceive from the environment.
- Stochastic
- Continuous: measurements are not discrete (distance, speed)
- Noisy input

**Perception:** The process where machines or systems interpret data from the environment using sensors. Various issues include:

- Reliability: can we verify this information?
- Observability: can we infer unobservable facts from observable information?
- Persistence: how long can readings be assumed to remain valid?

The perception pipeline works as follows:

1. Sensors capture raw data from the environment.
2. AI algorithms process this raw data to extract meaningful information.
3. The system interprets the processed data to make decisions or take actions.

**Online State Estimation:**

- Figuring out the most likely current state of a system in real time.
- Filter: an algorithm that estimates the belief state, which is a probability distribution over all possible states ("the robot is 90% likely to be in room A and 10% likely to be in room B").
- Bayes Filter: uses Bayes' rule to update the belief state. Its inputs include the most recent sensor measurement, the most recent action taken, and the previous belief state.

**Particle Filters:** Help a robot figure out where it is by using many small guesses (particles) and refining them over time.

1. Belief state: the robot's possible locations are represented by particles; each particle represents a guess about where the robot might be.
2. Using sensor data: the robot uses its ultrasonic range sensor to measure distances to nearby walls or objects.
3. Particles that are consistent with the sensor data are kept and duplicated; over time these particles cluster around the robot's actual location.
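Here is a minimal sketch of the particle-filter idea just described, for a robot localising itself along a one-dimensional corridor; the corridor length, motion noise and sensor model are invented for illustration.

```python
import math
import random

WORLD_LENGTH = 10.0      # hypothetical 1-D corridor with a wall at x = WORLD_LENGTH
N_PARTICLES = 1000

def move(particles, distance):
    # Motion update: shift every particle, adding noise to model stochastic motion.
    return [p + distance + random.gauss(0, 0.1) for p in particles]

def weight(particle, measured_range):
    # Sensor model: the range sensor reads the distance to the wall, with Gaussian noise.
    expected = WORLD_LENGTH - particle
    error = measured_range - expected
    return math.exp(-0.5 * (error / 0.2) ** 2)

def resample(particles, weights):
    # Resampling: keep likely particles (drawn in proportion to their weight), drop unlikely ones.
    return random.choices(particles, weights=weights, k=len(particles))

# Start with no idea where the robot is: particles spread uniformly over the corridor.
particles = [random.uniform(0, WORLD_LENGTH) for _ in range(N_PARTICLES)]

# One motion + sensing cycle: the robot moves 1 m, then its range sensor reads ~6 m to the wall.
particles = move(particles, 1.0)
weights = [weight(p, measured_range=6.0) for p in particles]
particles = resample(particles, weights)

print(f"estimated position: {sum(particles) / len(particles):.2f} m")   # clusters near x = 4
```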
"Where is it and how fast?" Useful for planning motion trajectories (finding a path) 2. Dynamic: Includes the same kinematic information, but also accounts for the forces and masses causing the motion. "Why is it moving this way?" Essential for controlling actual movement (applying forces to follow the path) **Monte Carlo Localization:** A probabilistic algorithm used in robotics to estimate a robot's location in a map by using particles to represent possible positions and updating them based on sensor data. - Applies Particle Filters to approximate the kinematic state of a robot as it moves. - Each particle represents a possible state of the robot. - When the robot moves, each particles position and orientation are updated. - Noise is added to simulate uncertainty in the robot's movement. (stochasticity) - After the motion update, particles that align better with sensor measurements are given higher weights. - Resampling: Keeps particles that are likely and removes unlikely ones. Planning with Uncertainty: - Conformant Planning: Find a plan that is successful for all possibilities in our belief state. - Contingent Planning: Builds a decision tree for possible outcomes - Markov Decision Processes: Models actions with probabilities. - Plan Supervision and Monitoring: Determine if the plan is still valid. Planning to Perceive: 1. Sensing Actions: Actions taken to gather information from the environment. 2. Examples: Configuring sensors or running an object recognition algorithm on camera data. 3. Importance: Critical in robotic systems for accurate decision-making and interaction Anchoring -- Is the process of maintaining the correspondence of the symbolic state of an object to the sensor data that refers to the same physical object. - Anchoring is about keeping the link between a robot's symbolic representation of an object (what it "knows") and the real-world data from its sensors (what is "sees") **Computer Vision** Computer Vision is the automatic extraction, analysis, and interpretation of images or videos. Computer Vision converts photos and videos into numerical arrays, enabling ML algorithms to draw inferences, make predictions and even generate new images based on user-defined inputs. Applications of Computer Vision include: - Preview of a digital image - MRI - Processing Scanned Images - Image Database Query Stages of Computer Vision Include: - Acquisition - Pre-Processing - Low- and High-Level Processing - Decision Making **Thresholding:** is a simple method to make decisions or classify data by comparing it against a predefined value. It is widely used in applications like image processing, classification and anomaly detection. Examples of Computer Vision Applications: 1. Manufacturing Sector - Defect Detection - Product Assembly Automation - Barcode Analysis 2. Transportation - Detecting Traffic and Traffic Signs - Pedestrian Detection **Law and Ethics in AI:** Key Concepts in AI: - Fairness and algorithmic bias - Transparency and Explainability - Privacy and Data Rights - Accountability and Responsibility 1. Fairness and bias: AI systems can perpetuate and amplify existing societal biases: In 2023, multiple studies found that large language models showed gender and racial biases in their outputs. 2. Transparency and Explainability: Modern AI systems, especially deep learning models, often operate as "black boxes". This raises concerns about: Understanding how decisions are made etc. 3. 
**Law and Ethics in AI:** Key concepts include:

- Fairness and algorithmic bias
- Transparency and explainability
- Privacy and data rights
- Accountability and responsibility

1. Fairness and bias: AI systems can perpetuate and amplify existing societal biases; in 2023, multiple studies found that large language models showed gender and racial biases in their outputs.
2. Transparency and explainability: modern AI systems, especially deep learning models, often operate as "black boxes". This raises concerns about understanding how decisions are made, among other things.
3. Privacy and data rights: AI systems require vast amounts of data, raising questions about data collection and consent, the protection of personal information, and so on.
4. Accountability and responsibility: when AI systems make mistakes or cause harm, who is responsible? The software developers? The companies deploying the AI?

**The EU AI Act:**

- Risk-based approach to AI regulation
- Strict rules for high-risk AI applications
- Transparency requirements
- Penalties for non-compliance
- Implementation timeline and requirements
