[CSCI111] Reviewer.pdf
Introduction to Artificial Intelligence CSCI 111 - K Notes

AI and Agents
★ Human performance vs rationality
○ Rationality = doing the right thing
★ Machine learning vs AI
○ ML: a subfield of AI; the ability to improve based on experience
○ Some AI systems use machine learning, some don't
★ Artificial intelligence is the study of systems that:
○ Act like humans
  [Alan] Turing test: testing machines by whether they can respond like humans
  Natural language processing: communication
  Knowledge representation: memory storage
  Automated reasoning: answer questions, draw conclusions
  Machine learning: adapt and find patterns
○ Think (two approaches)
  Think like humans
    Cognitive science and psychology: introspection (personal thoughts), psychological experiments (the person in action)
    Cognitive neuroscience: brain imaging (the brain in action)
  Think rationally
    Rational is the ideal intelligence
    Precise laws of thought
    Syllogisms, e.g. premise: all birds fly; Tweety is a bird; conclusion: Tweety flies
    Notation and logic: knowledge of the world that is certain
    Uncertainty and probability
○ Act rationally
  Actions to achieve the best outcome
  May or may not involve rational thinking (e.g., reflex actions)
  Our main definition
★ Working definition
○ AI is the study of systems that act rationally
○ AI is the study and construction of agents that do the right thing
★ AI: the study of rational agents
○ A rational agent carries out the action with the best outcome after considering past and current percepts
  Percept: information that was perceived
★ Agent: anything that perceives and acts on its environment
○ Agent = architecture + program
  Architecture: the device, with sensors and actuators
  Program: the implementation that runs on the architecture
★ Agent function: a = F(p)
○ p: current percept
○ a: action carried out
○ F: agent function
★ F: P → A
○ P: the set of all percepts
○ A: the set of all actions
○ An action depends on all percepts observed so far (not just the current one)
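The agent function a = F(p) above can be sketched in code. This is a toy illustration only: the two-room vacuum percepts ("A"/"B", "Dirty"/"Clean") and the condition-action rules are assumptions for the example, not part of the notes.

```python
class Agent:
    """Minimal agent skeleton: a = F(p), where F may consult the whole
    percept history, not just the current percept."""

    def __init__(self):
        self.percepts = []  # all percepts observed so far

    def F(self, percept):
        """Agent function: maps the percept sequence to an action a."""
        self.percepts.append(percept)
        # Toy condition-action rules for a two-room vacuum world (assumed):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"


agent = Agent()
print(agent.F(("A", "Dirty")))  # -> Suck
print(agent.F(("A", "Clean")))  # -> Right
```

Keeping the percept list inside the agent is what lets F depend on all percepts observed, matching F: P → A above.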
PEAS: Specifying the task environment
★ Performance measure: the agent's aspiration
★ Environment: context and restrictions
○ Access
  Fully observable: the agent can detect all aspects that are relevant to the choice of action (e.g., vacuum world)
  Partially observable: parts of the state are simply missing (e.g., poker)
○ Agents
  Single: e.g., a cleaning vacuum, an individual puzzle solver
  Multi: one entity's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior (taxi driving is a partially cooperative multiagent environment)
○ Determinism
  Deterministic: the next state of the environment is completely determined by the current state and the action executed by the agent(s) (e.g., vacuum world)
  Stochastic: the current state and action do not fully determine the next state (e.g., poker)
○ Episodes
  Episodic: one episode = one percept and a single action; the next episode does not depend on previous episodes; no thinking ahead
  Sequential: the current decision affects future decisions; thinks ahead (e.g., taxi driving)
○ Changes in the environment
  Static: the agent need not keep looking at the world or the time (e.g., crossword puzzles)
  Semidynamic: agent performance changes over time but the environment doesn't (e.g., chess with a clock)
  Dynamic: the environment changes while the agent is perceiving (e.g., taxi driving)
○ Measurements of state, time, and percepts
  Discrete: finite moves
  Continuous: infinite moves
○ Rules of the environment
  Known: outcomes for all actions are given
  Unknown: the agent learns the environment
★ Actuators: what can be moved
○ Action: what can be done
○ Action (for a part-picking robot): pick up and drop a part
★ Sensors: what can perceive
○ Percept: what can be perceived
○ Percept (for a part-picking robot): the type and position of a part

Agent              | P                                   | E                                          | A                                          | S
Part-picking robot | Percentage of parts in correct bins | Conveyor belt with parts; bins             | Jointed arm and hand                       | Camera, tactile, joint angle sensors
Taxi driver        | Safe, fast, legal, comfortable trip | Roads, other traffic, police, pedestrians  | Steering wheel, accelerator, brake, signal | Cameras, radar, speedometer, GPS

Environment        | Observable | Agents | Deterministic | Episodic   | Static  | Discrete
Part-picking robot | Partially  | Single | Stochastic    | Episodic   | Dynamic | Continuous
Taxi driver        | Partially  | Multi  | Stochastic    | Sequential | Dynamic | Continuous
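A PEAS row can be written down as structured data, which makes the four components explicit. A minimal sketch only: the `PEAS` class and its field names are assumptions, not a standard API.

```python
from dataclasses import dataclass


@dataclass
class PEAS:
    performance: str  # Performance measure: the agent's aspiration
    environment: str  # Environment: context and restrictions
    actuators: str    # Actuators: what can be moved
    sensors: str      # Sensors: what can perceive


# The part-picking robot row from the table above:
part_picker = PEAS(
    performance="Percentage of parts in correct bins",
    environment="Conveyor belt with parts; bins",
    actuators="Jointed arm and hand",
    sensors="Camera, tactile, joint angle sensors",
)
print(part_picker.performance)  # -> Percentage of parts in correct bins
```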
Types of Agents
★ Reflex agent
○ Acts solely based on the current percept (what it senses right now)
○ Decision-making: simple, rule-based responses (condition-action pairs)
○ Example: a thermostat, which turns heating on or off based on the current temperature
○ Limitation: lacks memory or awareness of the world beyond immediate perceptions, so it may make suboptimal decisions in complex environments
★ Reflex agent with state (model-based reflex agent)
○ Maintains part of the environment's history
○ Decision-making: uses both current percepts and a model of how the world evolves to make decisions
○ Example: a robot vacuum that remembers where it has cleaned so far and avoids those areas
○ Advantage: can handle environments where the agent's actions affect future states, by keeping track of past events
★ Goal-based agent
○ Focuses on achieving a specific goal and selects actions that lead to that goal
○ Decision-making: evaluates different action sequences and chooses the one that is expected to achieve the goal
○ Example: a navigation system that plans a route to a destination (the goal) by evaluating paths from the current location
★ Utility-based agent
○ Maximizes a utility function, which quantifies how "desirable" different states are
○ Decision-making: doesn't just aim for any goal, but strives for the "best" outcome, maximizing satisfaction or utility
○ Example: a self-driving car that not only aims to reach a destination but also factors in speed, safety, and comfort to optimize the trip
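The contrast between the first two agent types can be shown side by side. Both are toy sketches: the thermostat setpoint and the vacuum's state names are assumptions for illustration.

```python
def thermostat(current_temp, setpoint=21.0):
    """Simple reflex agent: one condition-action rule on the
    current percept only; no memory of past temperatures."""
    return "heat_on" if current_temp < setpoint else "heat_off"


class ModelBasedVacuum:
    """Model-based reflex agent: keeps internal state (the set of
    squares it has already cleaned) and consults it when acting."""

    def __init__(self):
        self.cleaned = set()  # part of the environment's history

    def act(self, square, dirty):
        if dirty:
            self.cleaned.add(square)
            return "Suck"
        if square in self.cleaned:
            return "Move"  # avoid re-cleaning a known-clean square
        return "Inspect"


print(thermostat(18.0))  # -> heat_on
v = ModelBasedVacuum()
print(v.act("A", True))   # -> Suck
print(v.act("A", False))  # -> Move (remembered that A is clean)
```

The thermostat would loop forever in an environment where its own actions matter, which is exactly the limitation the notes point out; the vacuum's internal state is what the "with state" in the name refers to.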
○ Goal-based agent advantage: more flexible and adaptable, as it can make decisions by considering future states and long-term outcomes
○ Utility-based agent advantage: handles trade-offs between different competing objectives, leading to the most desirable overall outcome
★ Learning agent
○ Improves its performance over time by learning from experience
○ Decision-making: starts with some initial rules but adapts its behavior as it gains more knowledge about the environment
○ Example: a recommendation system that learns user preferences and improves suggestions over time
○ Advantage: can perform well in dynamic or unknown environments, where the agent's knowledge evolves with experience

Problem Solving
★ Occurs in an environment where there is a goal but the correct action is not obvious
○ Requires a method to search for a sequence of actions
★ State: the environment together with the agent
○ State space: the set of all possible states
  Must have clearly defined initial and goal states
○ Actions: the set of actions available from each state
○ Transition model: the result of an action (the successor function)
○ Action cost function: the cost of applying an action
○ Solution space: the set of all goal states
★ Problem-solving process
○ Goal formulation: establish the goal
○ Problem formulation: define the states, actions, and transition model
○ Search: simulate sequences of actions to find a solution that brings the agent from the initial state to a goal state
○ Execution: execute the actions
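The problem-formulation step above can be sketched concretely. A minimal two-room vacuum-world formulation, assuming a (location, dirty-rooms) state encoding chosen for this example:

```python
# State: (location, frozenset of dirty rooms). Initial state (assumed):
# the agent is in room "A" and both rooms are dirty.
INITIAL = ("A", frozenset({"A", "B"}))
ACTIONS = ["Left", "Right", "Suck"]


def result(state, action):
    """Transition model (successor function): the state that results
    from applying an action in a state."""
    loc, dirty = state
    if action == "Suck":
        return (loc, dirty - {loc})
    return ("A", dirty) if action == "Left" else ("B", dirty)


def action_cost(state, action, next_state):
    """Action cost function: uniform here, every action costs 1."""
    return 1


def is_goal(state):
    """Goal test: no dirty rooms remain."""
    return not state[1]


s = result(INITIAL, "Suck")  # clean room A
s = result(s, "Right")       # move to room B
s = result(s, "Suck")        # clean room B
print(is_goal(s))            # -> True
```

Once a problem is written this way, the search step only needs `result`, `action_cost`, and `is_goal`; it never needs to know the problem is about vacuuming.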
★ State space graph: states and their successors (no definite initial state)

Agent        | P                     | E             | A                     | S
Vacuum world | Clean the environment | 2 rooms, dirt | Wheels, vacuum suction | Dirt sensor

★ Search tree
○ Has an initial state (the root node)
○ Initial state, then successors of the initial state, then successors of the successors, and so on
○ Terminates once a goal node is found
★ State space: the collection of all possible states of an agent in an environment
★ Problem solving: seeking a solution that brings the agent from an initial state to a goal state
★ Root node: the initial state
★ Frontier: a data structure (stack or queue) of nodes to be expanded
★ Strategy: specifies the order of node expansion; evaluated on
○ Completeness
○ Time complexity
○ Space complexity
○ Optimality

Uninformed Search Strategies
★ No additional information beyond states and successors
★ Breadth-first search
○ Frontier is a FIFO queue
○ Complete
○ Optimal if path cost = node depth
★ Uniform-cost search
○ For different action costs
○ Prioritizes the least path cost
○ Frontier is a priority queue ordered by path cost
★ Depth-first search
○ Frontier is a stack (LIFO)
○ Goes as deep as possible; recursive
○ Might not terminate
★ Depth-limited search
○ DFS, but with a pre-determined depth limit (L)
★ Iterative deepening search
○ Depth-limited search for L = 0, 1, 2, and so on
★ Bidirectional search
○ Runs two simultaneous searches
  Forward from the initial state
  Backward from the goal state
○ Stops when the frontiers meet

Informed Search Strategies
★ Informed or heuristic search: expands "more promising" states according to problem-specific information
★ Heuristic: a learning method, interactive method, or process
★ Greedy best-first search
○ Expands the node closest to the goal
○ Route finding: direct (straight-line) distance from the node to the goal
○ Incomplete; not always optimal
★ A* search
○ Expands the node with the smallest g + h (path cost so far plus heuristic)
★ Local search
○ Incremental improvements
○ Example: n-queens

★ Quiz
1. Think like a human being
2. Because they think rationally
3. Frontier/fringe
4. A route-finding agent that would like to go from A to B: it roves, drives around, and senses the roads; it becomes utility-based depending on what it prefers
5. State space: all states; solution space: all goal states
6. Goal: what to achieve; solution: the steps to the goal
7. Nodes
8. No.
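The uninformed strategies in the notes differ mainly in the frontier data structure. A minimal breadth-first search sketch; the `successors` adjacency map here is a hypothetical stand-in for a real transition model:

```python
from collections import deque


def bfs(initial, goal, successors):
    """Breadth-first search: the frontier is a FIFO queue, so nodes are
    expanded shallowest-first; complete, and optimal when every step
    costs the same (path cost = node depth)."""
    frontier = deque([(initial, [initial])])  # queue of (state, path)
    reached = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for s in successors.get(state, []):
            if s not in reached:
                reached.add(s)
                frontier.append((s, path + [s]))
    return None  # frontier exhausted: no solution exists


graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs("a", "d", graph))  # -> ['a', 'b', 'd']
```

Swapping the deque for a stack (LIFO) turns this into depth-first search; keying a priority queue on path cost g gives uniform-cost search, on h alone gives greedy best-first, and on g + h gives A*.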