Smart Systems Lec 10 PDF
Alexandria University
Summary
This document is a lecture on smart systems and computational intelligence, focusing on swarm algorithms and particle swarm optimization (PSO). It covers the basic concepts, rules, and characteristics of swarm behaviour and describes the PSO algorithm with mathematical examples and interpretations, along with its advantages and disadvantages. It also touches on Ant Colony Optimization and the Bee Algorithm.
Full Transcript
# Smart Systems and Computational Intelligence

## Lecture 10

Smart Systems and Computational Intelligence Course

### Swarm Algorithms

#### Swarming - The Definition
- Aggregation of similar animals, generally cruising in the same direction
- Termites swarm to build colonies
- Birds swarm to find food
- Bees swarm to reproduce

#### Swarming is Powerful
- Swarms can achieve things that an individual cannot

#### Powerful... but simple
- All evidence suggests:
  - No central control
  - Only simple rules for each individual
  - Emergent phenomena
  - Self-organization

#### Harness this Power out of Simplicity
- Technical systems are getting larger and more complex
- Global control is hard to define and program
- Larger systems lead to more errors
- Swarm intelligence systems are:
  - Robust
  - Relatively simple

#### Swarming - Example - Bird Flocking
- The "Boids" model was proposed by Reynolds
- Boids = Bird-oids (bird-like)
- Only three simple rules

#### Collision Avoidance
- Rule 1: Avoid collision with neighboring birds

#### Velocity Matching
- Rule 2: Match the velocity of neighboring birds

#### Flock Centering
- Rule 3: Stay near neighboring birds

#### Define the neighborhood
- Model the view of a bird
- Only local knowledge, only local interaction
- Affects the swarm behavior (fish vs. birds)

#### Swarming - Characteristics
- Simple rules for each individual
- No central control
- Decentralized and hence robust
- Emergent: performs complex functions

### Swarm Algorithms
- Inspiration from swarm intelligence has led to some highly successful optimization algorithms.
- Ant Colony (-based) Optimization - a way to solve optimization problems based on the way that ants indirectly communicate directions to each other.
- Particle Swarm Optimization - a different way to solve optimization problems, based on the swarming behaviour of several kinds of organisms.
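The three boids rules can be sketched in a few lines of code. This is a minimal illustrative sketch, not Reynolds' original model: the function name, the weights `w_sep`/`w_align`/`w_coh`, the neighborhood `radius`, and the 2-D tuple representation are all assumptions made for the example.

```python
# Minimal sketch of the three boids rules over a local neighborhood.
# Each boid combines separation (rule 1), velocity matching (rule 2),
# and flock centering (rule 3) using only local information.

def boids_step(positions, velocities, radius=5.0,
               w_sep=1.5, w_align=1.0, w_coh=1.0, dt=0.1):
    """One update step; positions/velocities are lists of (x, y) tuples."""
    new_vel = []
    for i, (px, py) in enumerate(positions):
        sep = [0.0, 0.0]; avg_v = [0.0, 0.0]; center = [0.0, 0.0]; n = 0
        for j, (qx, qy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            if (dx * dx + dy * dy) ** 0.5 < radius:  # local knowledge only
                n += 1
                sep[0] -= dx; sep[1] -= dy           # rule 1: avoid collision
                avg_v[0] += velocities[j][0]         # rule 2: match velocity
                avg_v[1] += velocities[j][1]
                center[0] += qx; center[1] += qy     # rule 3: stay near others
        vx, vy = velocities[i]
        if n:
            vx += (w_sep * sep[0] / n + w_align * (avg_v[0] / n - vx)
                   + w_coh * (center[0] / n - px))
            vy += (w_sep * sep[1] / n + w_align * (avg_v[1] / n - vy)
                   + w_coh * (center[1] / n - py))
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

Note that a boid with no neighbors inside its radius simply keeps flying with its current velocity; the emergent flocking comes only from the three local rules, with no central control.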
### Particle Swarm Optimization

### Introduction to Optimization
- Optimization can be defined as a mechanism through which the maximum or minimum value of a given function or process can be found.
- The function that we try to minimize or maximize is called the objective function.
- Variables and parameters
- Statement of an optimization problem:
  - Minimize f(x)
  - Subject to g(x) <= 0, h(x) = 0

### Introduction to Optimization
- Application to optimization: Particle Swarm Optimization
- Proposed by James Kennedy & Russell Eberhart (1995)
- Combines self-experiences with social experiences

### Particle Swarm Optimization (PSO)
- Inspired by the social behavior and dynamic movements, with communication, of insects, birds, and fish.
- Uses agents (particles) that constitute a swarm moving around in the search space looking for the best solution.
- Each particle in the search space adjusts its "flying" according to its own flying experience as well as the flying experience of other particles.
- Each particle has three parameters: position, velocity, and previous best position; the position of the particle with the best fitness value is called the global best position.

### Particle Swarm Optimization (PSO)
- Collection of flying particles (swarm) - changing solutions
- Search area - possible solutions
- Movement towards a promising area to get the global optimum.
- Each particle adjusts its traveling speed dynamically according to the flying experiences of itself and its colleagues.
- Each particle keeps track of:
  - its best solution, personal best, pbest
  - the best value of any particle, global best, gbest
- Each particle modifies its position according to:
  - its current position
  - its current velocity
  - the distance between its current position and pbest
  - the distance between its current position and gbest

### Algorithm - Parameters
- f: objective function
- Xi: position of the particle or agent
- Vi: velocity of the particle or agent
- A: population of agents
- C1: cognitive constant
- R1, R2: random numbers
- C2: social constant

### Algorithm - Steps
1. Create a 'population' of agents (particles) uniformly distributed over X.
2. Evaluate each particle's position according to the objective function (say y = f(x) = -x^2 + 5x + 20).
3. If a particle's current position is better than its previous best position, update it.
4. Determine the best particle (according to the particles' previous best positions).

### Algorithm - Steps
5. Update the particles' velocities:
   - v_i(t+1) = v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))
6. Move the particles to their new positions:
   - x_i(t+1) = x_i(t) + v_i(t+1)
7. Go to step 2 until the stopping criteria are satisfied.

### Contd…
- Particle's velocity: v_i(t+1) = v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))
- 1. Inertia:
  - Makes the particle move in the same direction and with the same velocity.
- 2. Personal influence:
  - Improves the individual.
  - Makes the particle return to a previous position better than the current one.
  - Conservative.
- 3. Social influence:
  - Makes the particle follow the direction of the best neighbors.

### Acceleration coefficients
- When c1 = c2 = 0, all particles continue flying at their current speed until they hit the search space's boundary. The velocity update equation reduces to: v_i(t+1) = v_i(t)
- When c1 > 0 and c2 = 0, all particles are independent. The velocity update equation becomes: v_i(t+1) = v_i(t) + c1*r1*(pbest_i - x_i(t))
- When c1 = 0 and c2 > 0, all particles are attracted to a single point in the entire swarm, and the velocity update becomes: v_i(t+1) = v_i(t) + c2*r2*(gbest - x_i(t))
- When c1 = c2, all particles are attracted towards the average of pbest and gbest.

### Contd…
- Intensification: explores the previous solutions and finds the best solution for a given region.
- Diversification: searches new solutions and finds the regions with potentially the best solutions.
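The algorithm steps above can be sketched directly in code. This is a minimal sketch, maximizing the lecture's example objective f(x) = -x^2 + 5x + 20; the `rand_pairs` argument is an addition of this sketch (not part of the standard algorithm) that lets one fixed (r1, r2) pair be used per iteration, matching how the worked example later in the lecture draws its random numbers.

```python
import random

def f(x):
    # The lecture's example objective; true maximum is f(2.5) = 26.25.
    return -x**2 + 5*x + 20

def pso(x0, iters, c1=1.0, c2=1.0, rand_pairs=None, seed=0):
    rng = random.Random(seed)
    x = list(x0)                      # step 1: initial positions
    v = [0.0] * len(x)                # zero initial velocities
    pbest = list(x)                   # personal bests start at the initial positions
    gbest = max(pbest, key=f)         # step 4: global best
    for t in range(iters):
        if rand_pairs is not None:    # one (r1, r2) pair per iteration
            r1, r2 = rand_pairs[t]
        else:
            r1, r2 = rng.random(), rng.random()
        for i in range(len(x)):
            # step 5: v_i <- v_i + c1*r1*(pbest_i - x_i) + c2*r2*(gbest - x_i)
            v[i] += c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]              # step 6: move
            if f(x[i]) > f(pbest[i]):
                pbest[i] = x[i]       # step 3: update personal best
        gbest = max(pbest, key=f)     # step 4: update global best
    return x, v, gbest

# Reproducing iteration 1 of the worked example below (r1 = 0.213,
# r2 = 0.876) gives v1 = 10.4244, x1 = 0.8244, and gbest = 2.362.
x0 = [-9.6, -6, -2.6, -1.1, 0.6, 2.3, 2.8, 8.3, 10]
x, v, gbest = pso(x0, iters=1, rand_pairs=[(0.213, 0.876)])
```

Because the initial velocities are zero and pbest starts at the initial positions, the cognitive term vanishes in the first iteration and every particle is pulled purely toward gbest = 2.3, exactly as in the worked example.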
- In PSO: v_i(t+1) = v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))

### Flow chart of Algorithm
The following diagram is a flowchart of the algorithm: ![Flowchart of Algorithm](flowchart.png)

### Mathematical Example and Interpretation

#### Example 1
- Find the maximum of the function f(x) = -x^2 + 5x + 20 using the PSO algorithm. Use 9 particles with initial positions X1 = -9.6, X2 = -6, X3 = -2.6, X4 = -1.1, X5 = 0.6, X6 = 2.3, X7 = 2.8, X8 = 8.3, X9 = 10.
- Solution: Choose the number of particles: X1 = -9.6, X2 = -6, X3 = -2.6, X4 = -1.1, X5 = 0.6, X6 = 2.3, X7 = 2.8, X8 = 8.3, X9 = 10
- Evaluate the objective function:
  - f1^0 = -120.16, f2^0 = -46, f3^0 = 0.24
  - f4^0 = 13.29, f5^0 = 22.64, f6^0 = 26.21
  - f7^0 = 26.16, f8^0 = -7.39, f9^0 = -30
- Let c1 = c2 = 1 and set the initial velocities of the particles to zero:
  - v1^0 = 0, v2^0 = 0, v3^0 = 0, v4^0 = 0, v5^0 = 0, v6^0 = 0, v7^0 = 0, v8^0 = 0, v9^0 = 0
- Step 2: Set the iteration number t = 0 + 1 = 1 and go to step 3.
- Step 3: Find the personal best for each particle by:
  - pbest_i(t+1) = { x_i(t+1) if f(x_i(t+1)) > f(pbest_i(t)); pbest_i(t) otherwise }
  - So pbest_1^1 = -9.6, pbest_2^1 = -6, pbest_3^1 = -2.6
  - pbest_4^1 = -1.1, pbest_5^1 = 0.6, pbest_6^1 = 2.3
  - pbest_7^1 = 2.8, pbest_8^1 = 8.3, pbest_9^1 = 10
- Step 4: gbest is the pbest with the highest objective value, so gbest = 2.3.
- Step 5: Update the velocities of the particles, taking the random numbers r1 = 0.213, r2 = 0.876, and c1 = c2 = 1:
  - v_i(t+1) = v_i(t) + c1*r1*(pbest_i - x_i) + c2*r2*(gbest - x_i); i = 1, ..., 9.
- v1^1 = 0 + 0.213(-9.6 - (-9.6)) + 0.876(2.3 - (-9.6)) = 10.4244
- v2^1 = 7.2708, v3^1 = 4.2924, v4^1 = 2.9784, v5^1 = 1.4892, v6^1 = 0, v7^1 = -0.4380, v8^1 = -5.256, v9^1 = -6.7452
- Step 6: Update the positions as well: x_i(t+1) = x_i(t) + v_i(t+1)
  - So x1^1 = 0.8244, x2^1 = 1.2708, x3^1 = 1.6924
  - x4^1 = 1.8784, x5^1 = 2.0892, x6^1 = 2.3
  - x7^1 = 2.362, x8^1 = 3.044, x9^1 = 3.2548
- Step 7: Find the objective function values:
  - f1^1 = 23.4424, f2^1 = 24.739, f3^1 = 25.5978
  - f4^1 = 25.8636, f5^1 = 26.0812, f6^1 = 26.21
  - f7^1 = 26.231, f8^1 = 25.9541, f9^1 = 25.6803
- Step 8: Stopping criterion - if the terminal rule is satisfied, stop the iteration and output the results; otherwise go to step 2.
- Step 2: Set the iteration number t = 1 + 1 = 2 and go to step 3.
- Step 3: Find the personal best for each particle:
  - pbest_1^2 = 0.8244, pbest_2^2 = 1.2708, pbest_3^2 = 1.6924
  - pbest_4^2 = 1.8784, pbest_5^2 = 2.0892, pbest_6^2 = 2.3
  - pbest_7^2 = 2.362, pbest_8^2 = 3.044, pbest_9^2 = 3.2548
- Step 4: Find the global best: gbest = 2.362
- Step 5: Taking the random numbers in the range (0, 1) as r1 = 0.113, r2 = 0.706, find the velocities of the particles:
  - v_i(t+1) = v_i(t) + c1*r1*(pbest_i - x_i) + c2*r2*(gbest - x_i); i = 1, ..., 9.
  - v1^2 = 11.5099, v2^2 = 8.0412, v3^2 = 4.7651, v4^2 = 3.3198
  - v5^2 = 1.6818, v6^2 = 0.0438, v7^2 = -0.4380, v8^2 = -5.7375, v9^2 = -7.3755
- Step 6: Update the positions as well:
  - x1^2 = 12.3343, x2^2 = 9.312, x3^2 = 6.4575
  - x4^2 = 5.1982, x5^2 = 3.7710, x6^2 = 2.3438
  - x7^2 = 1.9240, x8^2 = -2.6935, x9^2 = -4.1207
- Step 7: Find the objective function values:
  - f1^2 = -70.4644, f2^2 = -20.1532, f3^2 = 10.5882
  - f4^2 = 18.9696, f5^2 = 24.6346, f6^2 = 26.2256
  - f7^2 = 25.9182, f8^2 = -0.7224, f9^2 = -17.5839
- Step 8: Stopping criterion - if the terminal rule is satisfied, stop the iteration and output the results; otherwise go to step 2.
- Step 2: Set the iteration number t = 2 + 1 = 3 and go to step 3.
- Step 3: Find the personal best for each particle:
  - pbest_1^3 = 0.8244, pbest_2^3 = 1.2708, pbest_3^3 = 1.6924
  - pbest_4^3 = 1.8784, pbest_5^3 = 2.0892, pbest_6^3 = 2.3
  - pbest_7^3 = 2.362, pbest_8^3 = 3.044, pbest_9^3 = 3.2548
- Step 4: Find the global best: gbest = 2.362
- Step 5: Taking the random numbers in the range (0, 1) as r1 = 0.178, r2 = 0.507, find the velocities of the particles:
  - v1^3 = 4.4052, v2^3 = 3.0862, v3^3 = 1.8405, v4^3 = 1.2909
  - v5^3 = 0.6681, v6^3 = 0.053, v7^3 = -0.1380, v8^3 = -2.1531, v9^3 = -2.7759
- Step 6: Update the positions as well:
  - x1^3 = 16.7395, x2^3 = 12.3982, x3^3 = 8.298
  - x4^3 = 6.4862, x5^3 = 4.4391, x6^3 = 2.3968
  - x7^3 = 1.786, x8^3 = -4.8466, x9^3 = -6.8967
- Step 7: Find the objective function values:
  - f1^3 = -176.5145, f2^3 = -71.7244, f3^3 = -7.3673
  - f4^3 = 10.3367, f5^3 = 22.49, f6^3 = 26.2393
  - f7^3 = 25.7402, f8^3 = -27.7222, f9^3 = -62.0471
- Step 8: Stopping criterion - if the terminal rule is satisfied, stop the iteration and output the results; otherwise go to step 2.

### Pseudocode
- P = particle initialization()
- for t = 1 to max_iterations
  - for each particle p in P do
    - fp = f(p)
    - if fp is better than f(pbest_p)
      - pbest_p = p
    - end
  - end
  - gbest = best pbest in P
  - for each particle p in P do
    - v_p(t+1) = v_p(t) + c1*r1*(pbest_p - p) + c2*r2*(gbest - p)
    - p(t+1) = p(t) + v_p(t+1)
  - end
- end

### Advantages and Disadvantages of PSO
- Advantages
  - Insensitive to scaling of design variables.
  - Simple implementation.
  - Easily parallelized for concurrent processing.
  - Derivative-free.
  - Very few algorithm parameters.
  - Very efficient global search algorithm.
- Disadvantages
  - Slow convergence in the refined search stage (weak local search ability).

### Ant Colony Optimization

#### Metaheuristics
- Metaheuristics are semi-stochastic approaches to solving a variety of hard optimization problems. They are iterative algorithms that improve candidate solutions.
- Simulated annealing (SA)
- Tabu search (TS)
- Genetic algorithms (GA)
- Ant Colony Optimization (ACO)

#### Overview
- "Ant Colony Optimization (ACO) studies artificial systems that take inspiration from the behavior of real ant colonies and are used to solve discrete optimization problems."

#### Real Ants
- Almost blind.
- Incapable of achieving complex tasks alone.
- Rely on the phenomenon of swarm intelligence for survival.
- Capable of establishing shortest-route paths from their colony to feeding sources and back.
- Use stigmergic communication via pheromone trails.

#### Follow existing pheromone trails with high probability
- What emerges is a form of autocatalytic behavior: the more ants follow a trail, the more attractive that trail becomes to follow.
- The process is thus characterized by a positive feedback loop, where the probability of a discrete path choice increases with the number of times the same path was chosen before.

#### Naturally Observed Ant Behavior
The following image shows naturally observed ant behavior: ![Naturally Observed Ant Behavior](ant_behavior.png)

#### Naturally Observed Ant Behavior
The following image shows naturally observed ant behavior: ![Naturally Observed Ant Behavior](ant_behavior_with_obstacle.png)

#### Naturally Observed Ant Behavior
The following image shows naturally observed ant behavior: ![Naturally Observed Ant Behavior](ant_behavior_with_obstacle_and_coin.png)

#### Naturally Observed Ant Behavior
The following image shows naturally observed ant behavior: ![Naturally Observed Ant Behavior](ant_behavior_with_obstacle_and_shorter_path.png)

#### Foraging behavior of Ants
The following image shows the foraging behavior of ants: ![Foraging behavior of Ants](foraging_behavior_1.png)

#### Foraging behavior of Ants
The following image shows the foraging behavior of ants: ![Foraging behavior of Ants](foraging_behavior_2.png)

#### Foraging behavior of Ants
The following image shows the foraging behavior of ants: ![Foraging behavior of Ants](foraging_behavior_3.png)

#### Foraging behavior of Ants
The following image shows the foraging behavior of ants: ![Foraging behavior of Ants](foraging_behavior_4.png)

#### Foraging behavior of Ants
The following image shows the foraging behavior of ants: ![Foraging behavior of Ants](foraging_behavior_5.png)

#### Foraging behavior of Ants
The following image shows the foraging behavior of ants: ![Foraging behavior of Ants](foraging_behavior_6.png)

#### "Stigmergic?"
- Stigmergy, a term coined by the French biologist Pierre-Paul Grassé, is interaction through the environment.
- Two individuals interact indirectly when one of them modifies the environment and the other responds to the new environment at a later time. This is stigmergy.

#### A key concept: Stigmergy
- Stigmergy is indirect communication via interaction with the environment.
- A problem gets solved bit by bit…
- Individuals communicate in this way, affecting what each other does on the task.
  - E.g. sorting things into piles
- Individuals leave markers or messages - these don't solve the problem in themselves, but they affect other individuals in a way that helps them solve the problem…
  - E.g. this is how ants find shortest paths.

#### Stigmergy in Ants
- Ants are behaviourally unsophisticated, but collectively they can perform complex tasks.
- Ants have highly developed, sophisticated sign-based stigmergy:
  - They communicate using pheromones;
  - They lay trails of pheromone that other ants can follow.
- If an ant has a choice of two pheromone trails to follow, one to the NW and one to the NE, but the NW one is stronger, which one will it follow?

#### Stigmergy
- Real ants use stigmergy. How??? PHEROMONES!!!

#### Pheromone Trails
- Individual ants lay pheromone trails while travelling from the nest, to the nest, or possibly in both directions.
- The pheromone trail gradually evaporates over time.
- But pheromone trail strength accumulates as more ants use the path.
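The positive-feedback loop of trail laying, reinforcement, and evaporation can be sketched as a toy simulation. This is an illustrative model only, not a full ACO implementation: the function names, the ant and step counts, the evaporation rate `rho`, and the deposit rule `q / length` are all assumptions of the sketch.

```python
import random

def choose_path(pheromone, rng):
    """Pick a path index with probability proportional to its pheromone level."""
    total = sum(pheromone)
    r = rng.random() * total
    for i, tau in enumerate(pheromone):
        r -= tau
        if r <= 0:
            return i
    return len(pheromone) - 1

def simulate(lengths, n_ants=100, steps=50, rho=0.1, q=1.0, seed=1):
    """Stigmergic trail dynamics over alternative paths of given lengths."""
    rng = random.Random(seed)
    tau = [1.0] * len(lengths)             # all trails start equally attractive
    for _ in range(steps):
        deposits = [0.0] * len(lengths)
        for _ in range(n_ants):
            k = choose_path(tau, rng)      # follow stronger trails more often
            deposits[k] += q / lengths[k]  # shorter path -> more pheromone per trip
        # evaporation, then reinforcement by this step's ants
        tau = [(1 - rho) * t + d for t, d in zip(tau, deposits)]
    return tau

# Two paths to the food source, the second twice as long: the short
# path's trail comes to dominate through positive feedback.
trails = simulate([1.0, 2.0])
```

Evaporation is essential here: without it, early random fluctuations would be frozen in forever, whereas decaying old pheromone lets the colony keep converging on the shorter path.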
#### Initial state: no ants
The following diagram shows the initial state of an ant colony: ![Ant colony initial state](ant_colony_initial_state.png)

#### Ant Colony Optimisation Algorithms: Basic Ideas
- Ants are agents that:
  - Move between nodes in a graph.
  - Choose where to go based on pheromone strength (and maybe other things).
- An ant's path represents a specific candidate solution.
- When an ant has finished a solution, pheromone is laid on its path, according to the quality of the solution.
- This pheromone trail affects the behaviour of other ants by 'stigmergy'…

#### Ant Colony Metaheuristic
- Algorithm 1: The Ant Colony Optimization Metaheuristic
  - Set parameters, initialize pheromone trails
  - while termination condition not met do
    - ConstructAntSolutions
    - ApplyLocalSearch (optional)
    - UpdatePheromones
  - end while
- Construct Ant Solutions: a partial solution is extended by adding an edge based on stochastic and pheromone considerations.
- Apply Local Search: problem-specific, used in state-of-the-art ACO algorithms.
- Update Pheromones: increase the pheromone of good solutions, decrease that of bad solutions (pheromone evaporation).

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_initial_state.png)

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_ant_placed.png)

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_ant_chooses_edge.png)

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_ant_chooses_next_edge.png)

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_ant_almost_done.png)

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_ant_done_and_pheromone_increased.png)

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_pheromone_decreased.png)

#### E.g. A 4-city TSP
The following image is a simplified example of a Traveling Salesman Problem (TSP) with four cities: ![Simplified TSP](tsp_another_ant.png)

#### Principle
- Thanks to vaporization, paths which are not the shortest become less attractive to the ants.
- In the TSP, the probability that ant k at node r will choose destination node s is:
  - p_k(r, s) = [τ(r,s)]^α [η(r,s)]^β / Σ_{u ∈ M_k} [τ(r,u)]^α [η(r,u)]^β, if s ∈ M_k; 0 otherwise
  - where τ(r,s) is the pheromone level on edge (r,s), η(r,s) is the heuristic desirability of the edge (typically the inverse of its length), and M_k is the set of nodes ant k has not yet visited.

#### Principle
The following equation is the principle for pheromone in the TSP: ![Pheromone principle](pheromone_principle.png)

#### Principle of Pheromones recalculation I
- When all ants have finished their journey, the pheromones must be recalculated:
  - 1. Vaporization: τ(r,s) ← τ(r,s) · ρ (here ρ acts as a trail persistence factor)
  - 2. Deposit of pheromones left by all ants during their journey: τ(r,s) ← τ(r,s) + Q / f(s)
- Where:
  - Q is a constant (usually 1)
  - f(s) is the value of the objective function of the solution s

#### Principle of Pheromones recalculation II
- When ant k passes an edge, it leaves pheromone on that edge. The amount of pheromone contained in segment (i, j) after being passed by ant k is given by:
  - τ(i,j) ← τ(i,j) + Δτ_k(i,j)
- Where:
  - Δτ_k(i,j) = c · f_best if (i,j) ∈ the best global path; 0 otherwise
- Where:
  - f_best is the best value of the objective function
  - c is a constant
  - f_worst is the worst value of the objective function

#### Principle of Pheromones recalculation II
- With increasing pheromone value on segment (i, j), the probability of this segment being chosen by the ants at the next iteration increases.
- When a node is passed, pheromone evaporation occurs using the following equation:
  - τ(i,j) ← (1 - ρ) · τ(i,j), ∀(i, j) ∈ A
  - ρ ∈ [0, 1] is the evaporation rate parameter
  - A is the set of segments that have been passed by ant k as part of the path from the nest to the food

### Bee Algorithm

#### Unload from source A
The following image shows a bee algorithm with bees unloading from a source: ![Bee algorithm](bee_algorithm.png)

#### Bees Foraging
- Recruitment behaviour:
  - Waggle dancing - a series of alternating left and right loops
  - Direction of dancing
  - Duration of dancing
- Navigation behaviour:
  - A path vector represents the insect's knowledge representation of the path
  - Construction of the PI (path integration) vector

#### Algorithm
- It has two steps:
  - ManageBeesActivity()
  - CalculateVectors()
- ManageBeesActivity: handles the agents' activities based on their internal state; that is, it decides the action an agent has to take depending on the knowledge it has.
- CalculateVectors: used for administrative purposes; it calculates the PI vectors for the agents.

#### Uses of Bee Algorithm
- Training neural networks for pattern recognition
- Forming manufacturing cells
- Scheduling jobs for a production machine
- Data clustering

#### Comparisons
- Ants use pheromones for backtracking the route to a food source.
- Bees instead use path integration: bees are able to continuously compute their present location from their past trajectory.
- So bees can return home through a direct route instead of backtracking their original route.
- Thus a path emerges faster in this algorithm.

#### Results
- Experiments with different test cases on these algorithms show that:
  - The bees algorithm is more efficient when finding and collecting food; that is, it takes fewer steps.
  - The bees algorithm is more scalable; it requires less computation time to complete the task.
  - The bees algorithm is less adaptive than ACO.

#### Any Questions???

#### Thanks