Search Algorithms
Summary
These lecture notes cover search algorithms: the basic search framework, search trees ("what-if" trees), and backtracking; blind strategies such as depth-first, breadth-first, depth-limited, iterative-deepening, bidirectional, and uniform-cost search; heuristic (best-first) search; and adversarial search with minimax and alpha-beta pruning. The examples focus on finding solutions within a given state space.
Full Transcript
Lec 10 Search: Basic Idea
1. You begin from a start state, which expands into a list of all possible successor states.
2. Successor states: the states reachable directly (in one step) from a given state.
3. The successor states are put in something called the frontier, sometimes called the fringe.
4. Frontier: the boundary of the states I can currently reach.
5. Frontier: states that I can reach but have not yet visited; I do not know whether they are the goal state, and I do not know who their children are, or even whether they have children at all.
6. Each search algorithm selects a state from the fringe differently from the other search algorithms.
7. If the selected state is the goal, we return the goal with the complete path from start to end; if it is not the goal, we expand its children.
8. The main aim of any search is to avoid visiting many states several times on the way to the goal.

Search Tree (the "what-if" tree)
1. The difference between search algorithms is how each one selects the node (state) from the fringe.
2. What-if tree: these are not actions you are performing in the environment; you are thinking through the possibilities.
3. You build a tree of all your possibilities in order to reach the goal.
4. It is a "what if" tree of sequences of actions and outcomes.
5. The root node corresponds to the starting state.
6. The children of a node correspond to the successor states of that node's state.
7. A path through the tree corresponds to a sequence of actions.
8. A solution is a path ending in the goal state.
9. Nodes vs. states? (very important) A state is a representation of the world; a node is a data structure that is part of the search tree.
10. A node must keep a pointer to its parent, the path cost, and possibly other info.
11. A state is the current representation of the world → in tic-tac-toe, the configuration of X's and O's on the board is the state (state → action → successor state).
12. A node is the representation of the state → when I represent the state as a matrix or board, with 1's for the X's and 0's for the O's, that data structure is the node.

Tree Search Algorithm Outline (a code sketch of this loop is given below, after the backtracking overview)
1. Initialize the frontier using the starting state.
2. While the frontier is not empty: choose a frontier node according to the search strategy and take it off the frontier. If the node contains the goal state, return the solution; else expand the node and add its children to the frontier.
3. The stopping criteria are therefore two: → first, reaching the goal; → second, the frontier (fringe) becoming empty.

Handling repeated states:
1. Every time you expand a node, add that state to the explored set; do not put explored states on the frontier again.
2. Every time you add a node to the frontier, check whether the same state already exists there with a higher path cost, and if so, replace that node with the new one.

Backtracking Search
1. Backtracking is a technique for systematically trying all paths through a state space.
2. Backtracking search is considered a blind (uninformed) search algorithm.
3. Backtracking begins at the start state and follows a path until it reaches a goal, then it stops and returns the solution path.
4. If backtracking finds a dead end, it backtracks to the most recent node with unexamined alternatives and continues down one of its branches.
5. The backtracking algorithm is essentially depth-first search.
6. The backtracking algorithm and depth-first search both move through the tree in the same way.
7. The difference between the backtracking algorithm and depth-first search lies in the data structures used to build the algorithm.
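A minimal Python sketch of the tree-search outline above. The helper names `expand`, `is_goal`, and `choose` are assumptions introduced here for illustration, not part of the lecture: `expand(state)` is assumed to yield (action, successor state, step cost) triples, and `choose(frontier)` picks and removes one node, which is exactly the point where the individual strategies differ.

```python
from collections import namedtuple

# A node is a data structure in the search tree: it wraps a state plus
# bookkeeping (parent pointer, action, path cost), while the state itself
# is just the representation of the world.
Node = namedtuple("Node", ["state", "parent", "action", "path_cost"])

def solution_path(node):
    """Follow parent pointers back to the root to recover the path."""
    path = []
    while node is not None:
        path.append(node.state)
        node = node.parent
    return list(reversed(path))

def tree_search(start_state, is_goal, expand, choose):
    """Generic tree search; `choose` and `expand` are hypothetical helpers."""
    frontier = [Node(start_state, None, None, 0)]
    while frontier:                      # second stopping criterion: frontier empty -> failure
        node = choose(frontier)          # strategy-specific selection from the fringe
        if is_goal(node.state):          # first stopping criterion: goal reached
            return solution_path(node)
        for action, succ, cost in expand(node.state):
            frontier.append(Node(succ, node, action, node.path_cost + cost))
    return None

# choose = lambda f: f.pop()   -> fringe behaves as a stack (depth-first)
# choose = lambda f: f.pop(0)  -> fringe behaves as a queue (breadth-first)
```

The two `choose` variants in the closing comments match the later note that the fringe acts as a stack in depth-first search and as a queue in breadth-first search.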
Backtracking Implementation
1. Backtracking is implemented using three lists plus a pointer (a code sketch is given after the applications below):
   → State list (SL): the states on the path currently being tried; if the goal is found, the state list is the path of your solution.
   → New state list (NSL): nodes awaiting evaluation, i.e. the fringe or frontier, sometimes called the open list.
   → Dead-end list (DE): states whose descendants have failed to contain the goal node.
   → Current state (CS): a pointer to the current state.
2. The dead-end list is important because, before adding a state to the new state list, we first check whether it is already a dead end; this prevents cycles.

Backtracking Applications
1. Parsing problems: checking whether a statement in a programming language matches the grammar of the language.
2. Constraint satisfaction problems: problems with several constraints that you try to satisfy, such as Sudoku or crosswords.
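A minimal sketch, assuming hypothetical `is_goal` and `successors` helpers, of the backtracking procedure with the three lists described above (SL, NSL, DE plus the CS pointer); this is essentially depth-first search expressed with explicit list bookkeeping.

```python
def backtrack(start, is_goal, successors):
    """Backtracking search with the lists from the notes:
    SL  - states on the current path (the solution path if the goal is found)
    NSL - new state list / open list: states awaiting evaluation
    DE  - dead ends: states whose descendants failed to contain the goal
    CS  - the current state under examination."""
    SL, NSL, DE = [start], [start], []
    CS = start
    while NSL:
        if is_goal(CS):
            return list(reversed(SL))            # SL (reversed) is the solution path
        # children not already on SL, NSL, or DE -> prevents cycles
        children = [c for c in successors(CS) if c not in SL + NSL + DE]
        if not children:
            # dead end: back up to the most recent state with unexamined children
            while SL and CS == SL[0]:
                DE.append(CS)                    # record CS as a dead end
                SL.pop(0)
                NSL.pop(0)
                if not NSL:
                    return None                  # nothing left to try -> failure
                CS = NSL[0]
            SL.insert(0, CS)
        else:
            NSL = children + NSL                 # push children onto the open list
            CS = NSL[0]
            SL.insert(0, CS)
    return None

# Example on a hypothetical toy graph:
# graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
# backtrack("A", lambda s: s == "D", lambda s: graph.get(s, []))  # -> ["A", "B", "D"]
```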
Lec 11 Blind vs. Heuristic Strategies (very important)
Blind (or uninformed / exhaustive / brute-force) strategies do not exploit any of the information contained in a state.
1. They are called blind, uninformed, exhaustive, or brute-force.
2. Blind: the search moves in a blind way.
3. Uninformed: because you move with no guiding information → you do everything in a systematic way, following a fixed sequence of steps.
4. Exhaustive: because you try every possible option → every possible node.
5. Brute-force: it takes a large amount of computational power to run these strategies over the state space.

Heuristic (or informed) strategies exploit such information to assess that one node is "more promising" than another.
1. You have information that comes from the domain.
2. Heuristics: information that tells you which node is more promising than another.
3. Unlike blind search, it does not guarantee the optimal solution.
4. You reduce the amount of computational power needed.
5. In most cases you do not find the optimal solution, but the sub-optimal solution you reach may come very close to it.
6. In blind search you move along all the paths; in contrast, with informed strategies you ignore some paths.

The following notes are very, very important:
1. Depth-first search moves over the nodes in the same sequence as backtracking; the difference is the data structure inside each of them.
2. Backtracking consists of three lists and a pointer to the current node; depth-first search does not include that.
3. Depth-first search has two main variations: depth-limited search and iterative-deepening search.
4. The behaviour of iterative-deepening search can be seen as a mix between depth-first and breadth-first.
5. Bidirectional search does not fall exactly under breadth-first search, but its complexity is smaller than that of depth-first and breadth-first search.
6. In breadth-first search, depth-first search, and their variations, the cost of every step is one.
7. In uniform-cost search the cost differs from one edge to another (not all edges have the same cost), as in the travelling-salesman problem we took before.
8. The edges have different costs: the cost of moving from one node to another differs per edge.
9. Bidirectional search is especially useful on trees with a high branching factor.
10. In depth-first search, new nodes are inserted at the front of the fringe.
11. In breadth-first search, new nodes are inserted at the end of the fringe.
12. Breadth-first search is more costly than depth-first search.
13. The fringe in breadth-first search is bigger than the fringe in depth-first search; this is one of the disadvantages of breadth-first search → because breadth-first search does not move on to the next level until it has placed all the nodes of the current level in its fringe.
14. The fringe (frontier) can grow exponentially, especially if the branching factor is high.
15. In real-world problems a node can have 1, 2, 3, or 5 children; in such cases we take the average as the branching factor.
16. The main difference between the depth-first and breadth-first strategies is that in depth-first the fringe acts as a stack, while in breadth-first the fringe acts as a queue.

Lec 12 Depth-Limited Strategy
1. You are working depth-first, but only down to a limited level: your depth is limited.
2. Depth-first search with a depth cutoff k (the maximal depth below which nodes are not expanded).
3. You never exceed the maximum depth.
4. Three possible outcomes:
   → Solution (found within the range of k).
   → Failure (no solution exists at all).
   → Cutoff (no solution within the cutoff, but one may be found with a bigger value of k).

Iterative-Deepening Strategy (a code sketch is given after the complexity discussion below)
1. In this strategy you apply depth-first search down to a specific level.
2. You do not go further into the subtrees; you restrict yourself to a specific level, and if you do not find the goal you move on to the neighbouring nodes.
3. It is as if you are moving along the breadth of the tree.
4. The iterative-deepening strategy is a mix between the depth-first and breadth-first strategies.
5. You start with, say, k = 0: is the node the goal? No. Then increase k (to 5, for example): did you find the goal? No. Increase k again (to 8, for example): did you find the goal? No. Keep increasing k until you find the goal.

Comparing Depth, Breadth, & Iterative-Deepening Search Algorithms
1. Breadth-first and iterative deepening guarantee the shortest solution.
2. Breadth-first: high space complexity.
3. Depth-first: low space complexity, but it may search below the solution depth.
4. Iterative deepening: the best performance in terms of order of complexity.

Bidirectional Search
1. Bidirectional search is a graph search algorithm that finds a shortest path from an initial start state to a goal state in a directed graph.
2. It runs two simultaneous searches: one forward from the initial state and one backward from the goal, stopping when the two meet.
3. The reason for this approach is that in many cases it is faster than depth-first and breadth-first search.
4. The number of nodes investigated by bidirectional search is smaller than the number of nodes depth-first or breadth-first search investigates.
5. You start searching from the start state, expand the children, and move in one direction; at the same time you move from the goal, finding the parents and the parents of those parents, until the two searches meet at a specific node.

Complexity of Basic Search Algorithms
1. One way is to calculate the number of nodes at the last level.
2. Consider a tree with branching factor b (each node has b children); if each node has two children, the branching factor is 2.
3. The height of the tree is called the solution depth d.
4. The number of nodes at level d is b^d.
5. The total complexity of bidirectional search is b^(d/2) + b^(d/2), which is less than the b^d complexity of breadth-first search.
6. For example, if b = 2 and d = 8, breadth-first search has 2^8 = 256 nodes at the last level of the tree, while for bidirectional search it is 2^4 + 2^4 = 16 + 16 = 32.
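A minimal sketch of depth-limited search and the iterative-deepening wrapper described above, using the same hypothetical `is_goal`/`successors` helpers as the earlier sketches. The notes increase k in arbitrary steps (5, 8, ...); here the limit is simply increased by 1 each round.

```python
CUTOFF = "cutoff"   # third outcome: no solution within limit k, but a larger k may succeed

def depth_limited(state, is_goal, successors, limit, path=None):
    """Depth-first search that never expands nodes below depth `limit` (cutoff k)."""
    path = [state] if path is None else path + [state]
    if is_goal(state):
        return path                       # outcome 1: solution found within the limit
    if limit == 0:
        return CUTOFF                     # outcome 3: cut off at depth k
    cut_off_occurred = False
    for succ in successors(state):
        result = depth_limited(succ, is_goal, successors, limit - 1, path)
        if result == CUTOFF:
            cut_off_occurred = True
        elif result is not None:
            return result
    return CUTOFF if cut_off_occurred else None   # outcome 2: failure (no solution at all)

def iterative_deepening(start, is_goal, successors, max_limit=50):
    """Run depth-limited search with k = 0, 1, 2, ... until a solution is found:
    depth-first space usage with a breadth-first-like shortest-solution guarantee."""
    for k in range(max_limit + 1):
        result = depth_limited(start, is_goal, successors, k)
        if result not in (None, CUTOFF):
            return result
    return None
```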
Avoiding Repeated States (depth-first and breadth-first strategies)
Solution 1: keep track of all states associated with nodes on the current path; if the state of a new node already exists there, discard the node.
Solution 2: keep track of all states generated so far; if the state of a new node has already been generated, discard the node.

Uniform-Cost Strategy
1. How do we search a state space or tree when the values on the edges are not all equal to 1, but instead represent different costs of moving from one node to another?
2. Each cost must be a positive number, greater than zero.
3. Each step has some cost > 0.
4. The total cost from the initial state to any other state is the sum of the costs of the steps.
5. The cost of the path to a fringe node N is g(N) = the sum of the step costs along that path.
6. The aim is to find a path with minimal cost from the initial state to the goal state; there can be several paths, but we need the one with the lowest cost.
7. The fringe is a queue, but sorted by increasing cost (a priority queue).
8. The lowest-cost node sits at the front of the queue and the highest-cost node at the end.

Lec 13 Best-First Search (a code sketch follows at the end of this lecture's notes)
1. Define a function f : node N → real number. f(N) is called the evaluation function; its value depends on the contents of the state associated with N.
2. To apply heuristic search we need a heuristic value, i.e. information we can use: information that gives us guidance as we walk through the state space or the tree.
3. This function is called the heuristic function or evaluation function. It depends on the contents of the state: it is a function you apply to each state in the state space, and it returns a value.
4. Based on this value you can guide your search algorithm.
5. Best-first search is like uniform-cost search in its strategy.
6. In uniform-cost search we sort the nodes based on their path cost.
7. In best-first search each node is associated with a heuristic value that represents how promising the node is, i.e. how well moving through it will help reach the goal. We sort the nodes of the tree based on this heuristic value.
8. Order the nodes in the fringe by increasing values of f(N).
9. The fringe in best-first search is a priority queue: the nodes are sorted by priority, and high priority means a better heuristic value.
10. f(N) can be any function you want.
11. The heuristic function can be a penalty (something you want to decrease) or a profit or accuracy (something you want to increase).

Heuristic Function
1. We sometimes need to build complex heuristic functions.
2. You cannot tell perfectly, in advance, how much better one heuristic function will be than another during your search.
3. You must understand the domain you are searching in carefully so that you can find one or more heuristic functions.
4. In most cases this is based on trial and error: to see the effect of a heuristic function you obtained, search using it and observe whether it makes you reach the goal in less time or in fewer steps.
5. Finally, you can compare the different heuristic functions and check the differences between them.
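A minimal sketch of best-first search with the fringe as a priority queue ordered by increasing f(N), assuming the same hypothetical `is_goal` helper, a `successors(state)` helper yielding (successor, step cost) pairs, and hashable states; the evaluation function f is passed in as a parameter, and the closing comments list the standard choices discussed next.

```python
import heapq
from itertools import count

def best_first_search(start, is_goal, successors, f):
    """Best-first search: the fringe is a priority queue sorted by f(N).
    f(state, g) receives the state and the path cost g(N) accumulated so far."""
    tie = count()                                  # tie-breaker so heapq never compares states
    frontier = [(f(start, 0), next(tie), start, 0, [start])]
    explored = set()                               # avoid re-expanding repeated states
    while frontier:
        _, _, state, g, path = heapq.heappop(frontier)   # lowest f(N) first
        if is_goal(state):
            return path, g
        if state in explored:
            continue
        explored.add(state)
        for succ, cost in successors(state):
            if succ not in explored:
                g2 = g + cost
                heapq.heappush(frontier, (f(succ, g2), next(tie), succ, g2, path + [succ]))
    return None

# Choices of f(N), with g the path cost so far and h a domain heuristic:
#   f = lambda s, g: g           -> uniform-cost behaviour (order by path cost)
#   f = lambda s, g: h(s)        -> greedy best-first
#   f = lambda s, g: g + h(s)    -> A*
```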
This part is important:
1. Let g(N) be the cost of the best path found so far between the initial node and N.
2. f(N) = h(N) → greedy best-first.
3. f(N) = g(N) + h(N) → A*.
4. Depending only on the heuristic function h(N), and ignoring g(N), gives greedy best-first search.
5. Each node has a heuristic value obtained from h(N); if two nodes have the same heuristic value but different cost values from g(N), you select the node N with the lower cost from the initial node.

More on Heuristic Search & Functions: Best-First Search
1. Use an open list to keep track of the current edge of the search and a closed list to record states already visited.
2. Order the states on the open list according to some heuristic estimate of their closeness to a goal; the most promising state is considered in each iteration.
3. The best state to be examined in an iteration may come from any level of the state space.
4. The sorted open list is represented as a priority queue.

Chapter 4, Lec 14: Adversarial Search (Heuristics in Two-Person Games)
1. How to search in a tree when you have an opponent (an adversary).
2. You have a player who plays against you, an opponent; this player makes decisions that try to minimize your profit.
3. The type of search used in these cases can be applied in several applications, especially games and game playing.
4. To build a game player (a computer or artificial game player), these adversarial search algorithms are the basic algorithms used.
(In short: this type of search works over a tree in which two players play against each other, each trying to reduce the other's chance of winning, like two football teams in a match where each team wants to win by scoring goals and preventing the other team from scoring. This type of search is used in games and in computer games that contain artificial intelligence.)
5. Imagine you have two players.
6. These two players are you and your opponent.
7. Both of them have the same knowledge of the state space; they see the same state space.
8. Both of them try to apply that knowledge to win the game.
9. The idea is how to account for the actions of both the player and the opponent in the what-if tree that is built.
10. You do an action, the opponent does another action, and based on the actions he performs you do further actions, and so on.
11. How can we search a tree like this to determine the best possible option for the player?
(Same idea as above; take tic-tac-toe as an example: two players face each other, both see the same game environment and the same rules, and each tries to apply his knowledge in order to win. Every decision a player makes depends on the other's play. I study the expected moves of each player so that I can build my what-if tree and extract the best options for my player, me, so that I win.)
12. Two heuristic pruning procedures:
   → Minimax procedure (exhaustive minimax for Nim (splitting stacks); minimax to a fixed ply depth).
   → Alpha-beta procedure (alpha-beta example).

Minimax Procedure
13. The opponent is referred to as MIN; you, the player, are called MAX.
14. The MAX player tries to maximize his opportunity to win.
15. The MIN player tries to minimize MAX's opportunity to win.

Implementation:
16. If you are working exhaustively:
   → Build the tree entirely, down to the leaves.
   → Assign the leaves 0 or 1 depending on who won: 1 if MAX wins, 0 if MAX loses.
   → Then move backward, assigning the 0's and 1's to the nodes above, using depth-first search.
   → The values are based on the actions of MAX and MIN together.
17. If you build the tree only to a specific depth (you do not complete the entire tree), you assign values that represent the probability that MAX will win; you cannot assign 0's or 1's, because a 0 or 1 would guarantee that MAX wins or loses.
18. If you do not have the entire tree, these values are an indication that, had you completed or continued down the tree, you would win.
19. They are heuristic values: they indicate how good these paths or subtrees are for MAX.
20. Because MAX wants to win, MAX always selects the best option among his children; and because MIN wants MAX to lose, MIN selects the smallest option among his children.
21. In other words, consider such a tree: if it is MIN's turn, MIN selects the minimum value to reduce MAX's chance of winning; if it is MAX's turn, MAX selects the maximum value in order to win.
(So how do I perform this search? I build my tree down to the leaves and assign a value of 1 if MAX wins and 0 if MAX loses, then I go back up, assigning the 0's and 1's to the rest of the nodes in my tree. If my tree is not complete, I assign values that indicate how likely MAX is to win. MAX always chooses the best option for himself because he wants to win, and MIN always chooses the worst option for MAX because he wants MAX to lose. A code sketch of the procedure follows below.)
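A minimal recursive sketch of the minimax procedure to a fixed ply depth. The helpers `successors(state, maximizing)` and `heuristic(state)` are assumptions for illustration: the heuristic returns how favourable the state is for MAX, as described above; in the exhaustive case the leaves would simply carry 1 (MAX wins) or 0 (MAX loses).

```python
def minimax(state, depth, maximizing, successors, heuristic):
    """Minimax to a fixed ply depth.
    MAX picks the child with the largest value; MIN picks the smallest."""
    children = successors(state, maximizing)
    if depth == 0 or not children:
        return heuristic(state)          # leaf or depth cutoff: heuristic estimate for MAX
    if maximizing:                       # MAX's turn: maximize his chance of winning
        return max(minimax(c, depth - 1, False, successors, heuristic) for c in children)
    else:                                # MIN's turn: minimize MAX's chance of winning
        return min(minimax(c, depth - 1, True, successors, heuristic) for c in children)
```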
Minimax to a Fixed Ply Depth
22. This is still minimax with a two-player game, but you build the tree to 4 plies only, from level 0 to level 4; you stop after 4 levels.
23. We then apply a heuristic value.
24. The heuristic value represents how favourable (how good) the state is for player MAX.
25. MAX tries to find the good states for himself, while MIN tries to find the worst states for MAX.
26. There are cases where we cannot build the tree to the end, as we have seen in games such as chess or Go.

He attained the heights by his perfection; he dispelled the darkness by his beauty; beautiful are all his qualities; send blessings upon him and his family.

Lec 15 Alpha-Beta Pruning Procedure
1. Alpha-beta pruning is an adversarial search algorithm applied to the same problems as minimax.
2. Alpha-beta pruning is applied using minimax; the difference is that it works on a smaller number of states.
3. It prunes the branches that do not influence the final decision.
4. It returns the same final choices that minimax returns, but with a smaller amount of investigation, because it moves over a smaller number of states.
5. The aim of alpha-beta is to decrease the number of nodes evaluated by the minimax algorithm.
6. It is done by searching the state space using depth-first search.
7. Then, as we apply minimax, we keep two values, alpha and beta.
8. Alpha values are related to the MAX nodes, and beta values are related to the MIN nodes.
9. Alpha represents the best option found so far along a specific path for MAX. Alpha cannot decrease, because MAX always tries to maximize; it can only increase.
10. Beta cannot increase; it can only decrease. It represents the best option for MIN found so far, which is the worst option for MAX.
11. Alpha and beta are used to determine whether there are parts of the tree we do not need to move through or investigate at all, because they will not affect the final decision. (A code sketch follows below.)
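A minimal sketch of minimax with alpha-beta pruning, using the same hypothetical `successors`/`heuristic` helpers as the minimax sketch above; it returns the same value as plain minimax while skipping branches that cannot influence the final decision.

```python
def alphabeta(state, depth, alpha, beta, maximizing, successors, heuristic):
    """Minimax with alpha-beta pruning.
    alpha: best value found so far for MAX (only ever increases).
    beta:  best value found so far for MIN (only ever decreases).
    When alpha >= beta, the remaining siblings cannot change the decision, so prune."""
    children = successors(state, maximizing)
    if depth == 0 or not children:
        return heuristic(state)
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         successors, heuristic))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                     # beta cutoff: MIN will never allow this branch
        return value
    else:
        value = float("inf")
        for child in children:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                         successors, heuristic))
            beta = min(beta, value)
            if alpha >= beta:
                break                     # alpha cutoff: MAX already has a better option
        return value

# Typical initial call (hypothetical helpers assumed):
# alphabeta(start_state, 4, float("-inf"), float("inf"), True, successors, heuristic)
```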
(In short: this algorithm also uses minimax, but over a small part of the state space, because it cuts away all the nodes in the tree that do not affect the game. You will find two values here, alpha and beta. Alpha (not the alpha coefficient from lab tests, focus here) belongs to MAX: it always increases and never decreases, and it gives you the best option for MAX. Beta belongs to MIN: it always decreases and never increases, and it gives you the best option for MIN, which is the worst option for MAX.)

Search... What's the Issue?
12. There exist more advanced search algorithms, such as genetic algorithms and simulated annealing.
13. Search is an iterative local procedure: you iteratively search locally for good solutions.
14. Good heuristics should provide some global look-ahead, i.e. try to tell from the beginning whether the path you are moving through is promising and potentially leads to a solution.
15. When to use search techniques?
   → The search space is small, and there is no other available technique, or it is not worth the effort to develop a more efficient technique.
   → The search space is large, and there is no other available technique, and "good" heuristics exist.

And God is the Knower of what is hidden. Do not forget us in your righteous prayers, and do not forget Dr. .... The end. Ahmed Ezz, may God have mercy on him.