AI: Solving Problems by Searching (Informed) Lecture Notes

Summary

This document presents a lecture on informed search algorithms in artificial intelligence. Topics covered include best-first search, greedy best-first search, and A* search, along with local search methods such as hill climbing and simulated annealing. The material also covers admissible and consistent heuristics, heuristic dominance, and relaxed problems.

Full Transcript

AI: Solving Problems by Searching (Informed)
Prof. Satchidananda Dehuri, Ph.D. (CS), Dept. of Computer Science, Fakir Mohan University, Vyasa Vihar, Balasore-756019, ODISHA, INDIA.

Outline
– Best-first search
– Greedy best-first search
– A* search
– Hill-climbing search
– Simulated annealing search

Review: Tree search
A search strategy is defined by picking the order of node expansion.

Best-first search
Idea: use an evaluation function f(n) for each node as an estimate of "desirability", and expand the most desirable unexpanded node.
Implementation: order the nodes in the fringe in decreasing order of desirability.
Special cases:
– greedy best-first search
– A* search

Romania with step costs in km
(Slide: the Romania road map with step costs in km and straight-line distances to Bucharest.)

Greedy best-first search
Evaluation function f(n) = h(n) (heuristic) = estimate of cost from n to the goal; e.g., hSLD(n) = straight-line distance from n to Bucharest.
Greedy best-first search expands the node that appears to be closest to the goal.
(Four slides work through the greedy best-first search example on the Romania map.)

Properties of greedy best-first search
– Complete? No: it can get stuck in loops, e.g., Iasi → Neamt → Iasi → Neamt.
– Time? O(b^m), but a good heuristic can give dramatic improvement.
– Space? O(b^m): keeps all nodes in memory.
– Optimal? No.

A* search
Idea: avoid expanding paths that are already expensive.
Evaluation function f(n) = g(n) + h(n), where
– g(n) = cost so far to reach n
– h(n) = estimated cost from n to the goal
– f(n) = estimated total cost of the path through n to the goal
(Six slides work through the A* search example on the Romania map.)

Admissible heuristics
A heuristic h(n) is admissible if for every node n, h(n) ≤ h*(n), where h*(n) is the true cost to reach the goal state from n.
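The evaluation function f(n) = g(n) + h(n) drives the whole algorithm. A minimal Python sketch of A* on a small fragment of the Romania map (the city names, road distances, and hSLD values are the standard textbook figures the slides draw on, reproduced here as an assumption since the transcript omits the map):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: expand the frontier node with the lowest
    f(n) = g(n) + h(n).  neighbors(n) returns (successor, step_cost)
    pairs; h(n) estimates the cost from n to the goal.
    Returns (cost, path), or (None, None) if the goal is unreachable."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for succ, step in neighbors(node):
            g2 = g + step
            if g2 < best_g.get(succ, float("inf")):   # cheaper route found
                best_g[succ] = g2
                heapq.heappush(frontier,
                               (g2 + h(succ), g2, succ, path + [succ]))
    return None, None

# A fragment of the Romania map (step costs in km) and the
# straight-line-distance heuristic hSLD to Bucharest.
roads = {
    "Arad": [("Sibiu", 140)],
    "Sibiu": [("Fagaras", 99), ("Rimnicu Vilcea", 80)],
    "Fagaras": [("Bucharest", 211)],
    "Rimnicu Vilcea": [("Pitesti", 97)],
    "Pitesti": [("Bucharest", 101)],
    "Bucharest": [],
}
h_sld = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
         "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

cost, path = a_star("Arad", "Bucharest",
                    lambda n: roads[n], lambda n: h_sld[n])
print(cost, path)   # 418 km, via Sibiu, Rimnicu Vilcea and Pitesti
```

Note how greedy best-first search (f = h only) would commit to Fagaras (h = 176 < 193) and reach Bucharest at cost 140 + 99 + 211 = 450, while A* backs off and finds the cheaper 418 km route through Pitesti.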
An admissible heuristic never overestimates the cost to reach the goal, i.e., it is optimistic. Example: hSLD(n) never overestimates the actual road distance.
Theorem: If h(n) is admissible, A* using TREE-SEARCH is optimal.

Optimality of A* (proof)
Suppose some suboptimal goal G2 has been generated and is in the fringe. Let n be an unexpanded node in the fringe such that n is on a shortest path to an optimal goal G.
– f(G2) = g(G2), since h(G2) = 0
– g(G2) > g(G), since G2 is suboptimal
– f(G) = g(G), since h(G) = 0
– f(G2) > f(G), from the above
Furthermore:
– h(n) ≤ h*(n), since h is admissible
– g(n) + h(n) ≤ g(n) + h*(n), and since n lies on an optimal path to G, g(n) + h*(n) = g(G) = f(G), hence f(n) ≤ f(G)
Hence f(G2) > f(n), and A* will never select G2 for expansion.

Consistent heuristics
A heuristic h is consistent if, for every node n and every successor n' of n generated by any action a,
h(n) ≤ c(n, a, n') + h(n').
If h is consistent, we have
f(n') = g(n') + h(n') = g(n) + c(n, a, n') + h(n') ≥ g(n) + h(n) = f(n),
i.e., f(n) is non-decreasing along any path.
Theorem: If h(n) is consistent, A* using GRAPH-SEARCH is optimal.

Optimality of A*
A* expands nodes in order of increasing f value, gradually adding "f-contours" of nodes: contour i contains all nodes with f = fi, where fi < fi+1.

Properties of A*
– Complete? Yes (unless there are infinitely many nodes with f ≤ f(G)).
– Time? Exponential.
– Space? Keeps all nodes in memory.
– Optimal? Yes.

Admissible heuristics
E.g., for the 8-puzzle:
– h1(n) = number of misplaced tiles
– h2(n) = total Manhattan distance (i.e., the number of squares each tile is from its desired location)
For the example start state S on the slide: h1(S) = 8; h2(S) = ?
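The transcript does not reproduce the slide's start state S, but the answers it gives (h1(S) = 8, h2(S) = 3+1+2+2+2+3+3+2 = 18) match the standard 8-puzzle example from Russell and Norvig, so that state is used below as an assumption. A minimal sketch of the two heuristics:

```python
def h1(state, goal):
    """Number of misplaced tiles (the blank, 0, is not counted)."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def h2(state, goal):
    """Total Manhattan distance of each tile from its goal square."""
    goal_pos = {v: divmod(i, 3) for i, v in enumerate(goal)}
    total = 0
    for i, v in enumerate(state):
        if v != 0:                       # skip the blank
            r, c = divmod(i, 3)
            gr, gc = goal_pos[v]
            total += abs(r - gr) + abs(c - gc)
    return total

# Boards as flat 9-tuples, row by row; 0 is the blank.
start = (7, 2, 4,
         5, 0, 6,
         8, 3, 1)
goal = (0, 1, 2,
        3, 4, 5,
        6, 7, 8)

print(h1(start, goal))  # 8  (every tile is misplaced)
print(h2(start, goal))  # 18 (= 3+1+2+2+2+3+3+2)
```

Both heuristics are admissible: each misplaced tile needs at least one move (h1), and each tile needs at least its Manhattan distance in moves (h2).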
h2(S) = 3+1+2+2+2+3+3+2 = 18.

Dominance
If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1 and is better for search.
Typical search costs (average number of nodes expanded):
– d = 12: IDS = 3,644,035 nodes; A*(h1) = 227 nodes; A*(h2) = 73 nodes
– d = 24: IDS = too many nodes; A*(h1) = 39,135 nodes; A*(h2) = 1,641 nodes

Relaxed problems
A problem with fewer restrictions on the actions is called a relaxed problem. The cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.
– If the rules of the 8-puzzle are relaxed so that a tile can move anywhere, then h1(n) gives the shortest solution.
– If the rules are relaxed so that a tile can move to any adjacent square, then h2(n) gives the shortest solution.

Local Search
A general method for solving combinatorial optimization problems; integer programming can also benefit from it.
Principle: start with an initial configuration; repeatedly search the neighborhood and select a neighbor as candidate; evaluate a cost function (or fitness function) and accept the candidate if it is "better", otherwise select another neighbor. Stop if the quality is sufficiently high, if no improvement can be found, or after some fixed time.
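The local-search loop just described, specialized to simple hill climbing (accept a neighbor only if it strictly lowers the cost), can be sketched as follows; the toy cost function is a made-up illustration, not from the lecture:

```python
def hill_climb(initial, neighbors, cost):
    """Iterative improvement: repeatedly move to the best neighbor,
    stopping when no neighbor has strictly lower cost (a local optimum)."""
    current = initial
    while True:
        best = min(neighbors(current), key=cost, default=current)
        if cost(best) >= cost(current):
            return current            # no improving neighbor: local optimum
        current = best

# Toy example: minimize (x - 3)^2 over the integers,
# with neighbors x - 1 and x + 1.
print(hill_climb(10, lambda x: [x - 1, x + 1], lambda x: (x - 3) ** 2))  # 3
```

Because the loop only ever moves downhill, on a cost function with several basins the result depends on the starting point, which is exactly the "local optimum depends on initial configuration" disadvantage listed in the lecture.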
Local Search: Parametric Requirements
– a method to generate an initial configuration
– a transition (generation) function to find a neighbor as the next candidate
– a cost function
– an evaluation criterion
– a stopping criterion

Hill Climbing
Simple iterative improvement, or hill climbing: a candidate is accepted if and only if its cost is lower (or its fitness higher) than that of the current configuration. Stop when no neighbor with lower cost (higher fitness) can be found.
Disadvantages:
– the best result is only a local optimum
– the local optimum reached depends on the initial configuration
– generally, there is no upper bound on the iteration length

How to cope with these disadvantages:
– repeat the algorithm many times with different initial configurations
– use information gathered in previous runs
– use a more complex generation function to jump out of local optima
– use a more complex evaluation criterion that sometimes (randomly) also accepts solutions away from the (local) optimum

Simulated Annealing
Use a more complex evaluation function: sometimes accept candidates with higher cost in order to escape from a local optimum, and adapt the parameters of this evaluation function during execution. The method is based on an analogy with the simulation of the annealing of solids.

An Illustration
(Figure: cost function C versus number of iterations. At INIT_TEMP acceptance is nearly unconditional; as the temperature falls the behaviour approaches pure hill climbing; at FINAL_TEMP only improving moves remain. An uphill move is accepted with probability e^(−ΔC/temp).)

Other Names
– Monte Carlo Annealing
– Statistical Cooling
– Probabilistic Hill Climbing
– Stochastic Relaxation
– Probabilistic Exchange Algorithm

Annealing
Annealing is a thermal process for obtaining low-energy states of a solid in a heat bath. The process contains two steps:
– increase the temperature of the heat bath to a maximum value at which the solid melts;
– carefully decrease the temperature of the heat bath until the particles arrange themselves in the ground state of the solid.
The ground state is a minimum-energy state of the solid.
The ground state of the solid is obtained only if the maximum temperature is high enough and the cooling is done slowly.

Simulated Annealing
The process of annealing can be simulated with the Metropolis algorithm, which is based on Monte Carlo techniques. We can apply this algorithm to generate solutions to combinatorial optimization problems by assuming an analogy between them and physical many-particle systems, with the following equivalences:
– solutions of the problem correspond to states of the physical system;
– the cost of a solution corresponds to the "energy" of a state.

Analogy
Slowly cool down a heated solid so that all particles arrange themselves in the ground energy state; at each temperature, wait until the solid reaches its thermal equilibrium. The probability of being in a state with energy E is
Pr{energy = E} = (1/Z(T)) · exp(−E / (kB·T)),
where E is the energy, T the temperature, kB the Boltzmann constant, and Z(T) a (temperature-dependent) normalization factor.

Simulation of cooling (Metropolis, 1953)
At a fixed temperature T:
– perturb (randomly) the current state into a new state;
– let ΔE be the difference in energy between the new and the current state;
– if ΔE < 0 (the new state is lower), accept the new state as the current state;
– if ΔE ≥ 0, accept the new state with probability Pr(accepted) = exp(−ΔE / (kB·T)).
Eventually the system evolves into thermal equilibrium at temperature T, at which point the distribution above holds. When equilibrium is reached, the temperature T can be lowered and the process repeated.

Simulated Annealing
The same algorithm can be used for combinatorial optimization problems:
– the energy E corresponds to the cost function C;
– the temperature T corresponds to a control parameter c.
Pr{configuration = i} = (1/Q(c)) · exp(−C(i) / c),
where C is the cost, c the control parameter, and Q(c) a normalization factor (not important).

Notations
Simulated annealing allows moves to inferior solutions in order not to get stuck in a poor local optimum.
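A single Metropolis acceptance decision, in a minimal Python sketch. As in the optimization analogy on the slide, the Boltzmann constant kB is absorbed into the temperature (control parameter); the deterministic stand-in for the random source is only there to make the illustration reproducible:

```python
import math
import random

def metropolis_accept(delta_e, temp, rng=random):
    """Metropolis criterion: always accept a downhill move (delta_e < 0);
    accept an uphill move with probability exp(-delta_e / temp)."""
    if delta_e < 0:
        return True
    return rng.random() < math.exp(-delta_e / temp)

class FixedRng:
    """Deterministic stand-in for random, always returning 0.5."""
    def random(self):
        return 0.5

# At high temperature almost every uphill move is accepted;
# at low temperature uphill moves are nearly always rejected.
print(metropolis_accept(1.0, 10.0, FixedRng()))  # True:  exp(-0.1) ~ 0.90 > 0.5
print(metropolis_accept(1.0, 0.1, FixedRng()))   # False: exp(-10)  ~ 5e-5 < 0.5
```

Lowering `temp` between batches of such decisions is exactly the cooling loop: the same acceptance rule gradually turns from near-random walking into pure hill climbing.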
Δc = F(Snew) − F(Sold), where F is to be minimized.
An inferior solution (Δc > 0) is still accepted if U ≤ e^(−Δc/t), where U is a random number from the (0, 1) interval and t is a cooling parameter:
– t is initially high: many moves are accepted;
– t is decreasing: inferior moves are nearly always rejected.
As the temperature decreases, the probability of accepting worse moves decreases.

Algorithm
Step 1. Set k = 1. Select an initial schedule S1 using some heuristic and set Sbest = S1. Select an initial temperature t0 > 0 and a temperature reduction function α(t).
Step 2. Select Sc ∈ N(Sk).
If F(Sbest) < F(Sc):
  if F(Sc) < F(Sk), then Sk+1 = Sc;
  else generate a random uniform number Uk; if Uk < e^(−(F(Sc) − F(Sk))/t), then Sk+1 = Sc, else Sk+1 = Sk.
Else: Sbest = Sc and Sk+1 = Sc.
Step 3. Set tk = α(t) and k = k + 1. If the stopping condition is true, then STOP; else go to Step 2.

Exercise. Consider the following scheduling problem of minimizing the total weighted tardiness (tardiness is the amount by which a job's completion exceeds its deadline).

jobs                  1    2    3    4
pj (processing time)  9    9   12    3
dj (deadline)        10    8    5   28
wj (weight)          14   12    1   12

Apply simulated annealing to this problem starting from 3, 1, 4, 2 as the initial sequence. Neighbourhood: all schedules that can be obtained through adjacent pairwise interchanges; select neighbours within the neighbourhood at random. Choose α(t) = 0.9·t and t0 = 0.9, and use the following random numbers: 0.17, 0.91, ...

Sbest = S1 = 3, 1, 4, 2; F(S1) = Σ wjTj = 1·7 + 14·11 + 12·0 + 12·25 = 461 = F(Sbest); t0 = 0.9.
Sc = 1, 3, 4, 2; F(Sc) = 316 < F(Sbest), so Sbest = 1, 3, 4, 2 with F(Sbest) = 316, and S2 = 1, 3, 4, 2; t = 0.9 · 0.9 = 0.81.
Sc = 1, 3, 2, 4; F(Sc) = 340 > F(S2) = 316; U = 0.17 > e^(−(340 − 316)/0.81) = 1.35×10⁻¹³, so the move is rejected and S3 = 1, 3, 4, 2; t = 0.9 · 0.81 = 0.729.
Sc = 1, 4, 3, 2; F(Sc) = 319 > F(S3) = 316; U = 0.91 > e^(−(319 − 316)/0.729) = 0.016, so the move is rejected and S4 = 1, 3, 4, 2; t = 0.6561 ...
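The hand trace above can be carried further by machine. Below is a sketch of the whole annealing loop applied to the weighted-tardiness instance, with random adjacent pairwise interchanges and the geometric cooling α(t) = 0.9·t from the exercise. The random-number stream (and hence the exact trajectory) differs from the hand trace, so only the cost-function values are directly comparable:

```python
import math
import random

p = [9, 9, 12, 3]     # processing times of jobs 1..4
d = [10, 8, 5, 28]    # deadlines
w = [14, 12, 1, 12]   # weights

def F(seq):
    """Total weighted tardiness of a job sequence (jobs numbered from 1)."""
    t = total = 0
    for j in seq:
        t += p[j - 1]                          # completion time C_j
        total += w[j - 1] * max(0, t - d[j - 1])
    return total

def anneal(s1, t0=0.9, alpha=0.9, iters=200, rng=random.Random(0)):
    """Simulated annealing: random adjacent pairwise interchange,
    Metropolis acceptance, geometric cooling t <- alpha * t."""
    best = cur = list(s1)
    t = t0
    for _ in range(iters):
        i = rng.randrange(len(cur) - 1)
        cand = cur[:]
        cand[i], cand[i + 1] = cand[i + 1], cand[i]   # swap adjacent jobs
        delta = F(cand) - F(cur)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = cand                                # move accepted
            if F(cur) < F(best):
                best = cur                            # new incumbent
        t *= alpha
    return best, F(best)

print(F([3, 1, 4, 2]))   # 461, as in the hand trace
print(F([1, 3, 4, 2]))   # 316
best, cost = anneal([3, 1, 4, 2])
print(best, cost)        # a sequence at least as good as the initial one
```

The incumbent `best` only ever improves, so the returned cost is at most F(S1) = 461; with enough iterations the loop typically does considerably better than the 316 reached in the short hand trace.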
Practical Considerations
– The initial temperature must be "high": an acceptance rate of 40%-60% seems to give good results in many situations.
– Cooling schedule: perform a number of moves at each temperature, or a single move at each temperature. Common reduction rules are t ← α·t, with α typically in the interval [0.9, 0.99], and t ← t / (1 + β·t), with β typically close to 0.
– Stopping condition: a given number of iterations, or no improvement obtained for a given number of iterations.

Thank You
