Optimisation Methods
University of Luxembourg
Summary
This document provides an overview of optimization methods, covering exact, approximate, and heuristic algorithms. It discusses concepts such as the time complexity of algorithms and the trade-offs between solution quality and computation time.
Full Transcript
Optimisation methods
◼ Exact methods obtain optimal solutions and guarantee their optimality.
◼ For NP-complete problems, exact algorithms are non-polynomial-time algorithms.

Optimisation methods
◼ Approximate methods generate high-quality solutions in a reasonable time for practical use.
◼ However, there is no guarantee of finding a globally optimal solution.

Approximate algorithms
◼ Approximate algorithms or Heuristics
- The word heuristic has its origin in the old Greek word heuriskein: the art of discovering new strategies (rules) to solve problems.
- Generate "high quality" solutions in a reasonable time for practical use.
- No guarantee of finding the globally optimal solution.
◼ Meta (introduced by F. Glover in 1986)
- "upper level methodology"
◼ Metaheuristic:
- "Upper level general methodologies that can be used as guiding strategies in designing underlying heuristics to solve specific optimization problems"

Genealogy of Metaheuristics

Metaheuristics
◼ No "super method" exists for all problems -> NFL (No Free Lunch) theorem.
◼ Exploitation vs. Exploration (Intensification vs. Diversification)
- Exploitation of the best solutions found (intensification): good solutions are clues to promising regions.
- Exploration of the search space (diversification) to discover unseen regions.

Metaheuristics – Classification based on population size

Metaheuristics – classification criteria
◼ Nature-inspired / non-nature-inspired
◼ Single-solution based vs. Population-based
◼ Deterministic vs. Stochastic
- Deterministic: the optimization problem is solved by making deterministic decisions (rules). The same initial solution will lead to the same final solution.
- Stochastic: the optimization problem is solved with some random rules. Different final solutions may be obtained from the same initial solution.
◼ Iterative vs. Greedy

How to search?
◼ Greedy Heuristic:
- starts from an empty solution and constructs a complete solution through specific stages
- makes locally optimal choices at each stage
- intends to find a locally optimal solution
◼ Local Search Method:
- starts from a single complete solution
- manipulates the current solution with the hope of improvement
- moves from a complete solution to another complete solution
- moves only to neighbouring solutions (locally optimal solutions)
◼ Iterated Local Search (ILS) Metaheuristic:
- starts from a locally optimal solution
- perturbs the solution to escape local optima
- uses a Local Search method to find the new local optimum
- accepts non-improving solutions along the process

Greedy Heuristic
◼ Two design questions
- Definition of the set of elements
- Element selection heuristic: which element gives the best "profit" at each stage
◼ Assignment of decision variables one by one
◼ At each iteration, choose the optimal decision for the current decision variable
◼ Taking an optimal decision at each step does not guarantee global optimality
◼ Popular (simplicity)
◼ Reduced complexity

Greedy Heuristic - Template

TSP example
◼ Size of the search space
◼ Classical example – Traveling Salesman Problem (TSP)
- N cities
- All cities must be visited exactly once before returning home
- Find the tour that minimises the total distance
- i.e. find the Hamiltonian circuit with the least cost

Greedy Heuristic for TSP
◼ Nearest Neighbour Algorithm
Step 1: Select a node randomly (if not given) and name it the current node
Step 2: Go to the nearest unvisited node
Step 3: Update the current node
Step 4: If there is no unvisited node, go to Step 5; otherwise go to Step 2
Step 5: Connect the current node to the starting node and return the tour
◼ Complexity: O(n²)
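A minimal Python sketch of the nearest-neighbour heuristic above, assuming the instance is given as a symmetric distance matrix (that encoding is an assumption of the illustration, not part of the slides):

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy nearest-neighbour heuristic for the TSP.

    dist: n x n symmetric matrix of pairwise distances.
    Returns a tour (list of city indices) starting at `start`.
    Complexity O(n^2): each step scans the remaining unvisited cities.
    """
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    current = start
    while unvisited:  # Steps 2-4: repeatedly move to the nearest unvisited node
        nxt = min(unvisited, key=lambda j: dist[current][j])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return tour  # Step 5: the tour implicitly closes back to `start`

def tour_length(dist, tour):
    """Total length of the closed tour (returning to the first city)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
```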
Greedy Heuristic: Constructive Mechanism

Greedy heuristics for TSP
◼ Other heuristics

Greedy algorithm for bin packing

When to use Metaheuristics?
◼ An easy problem (e.g. P class) with VERY large instances
◼ An easy problem with hard real-time constraints
◼ A difficult problem (e.g. NP-complete) with moderate size and/or difficult input structures
◼ Optimization problems with time-consuming objective functions and/or constraints
◼ Non-analytical models of optimization problems: black-box scenario (objective function)
◼ Non-deterministic complex models: uncertainty, robust optimization, dynamic, multi-objective, …

Common design concepts for iterative metaheuristics
◼ Representation (encoding) = search space
◼ Objective function
◼ Search space + objective function = landscape

Representation: Characteristics
◼ Completeness: all solutions associated with the problem must be representable.
◼ Connexity: a search path must exist between any two solutions (related also to the search operators), in particular to the globally optimal solution.
◼ Efficiency: easy to manipulate by the search operators (time and space complexities).

Types of Representations
◼ Linear representation: strings of symbols of a given alphabet

Non-linear Representation
◼ Non-linear representation: generally based on graph structures (e.g. trees)

Representation – TSP Example (Permutation)
◼ TSP problem with n cities: a permutation of the integers 1, …, n.
◼ Element = city.
◼ Size of the search space = (n−1)!/2 (fixed starting city, symmetric TSP)

Exercise: 8 Queens
◼ Problem: place 8 queens on an 8x8 chessboard such that no two queens attack each other.

Exercise: 8 Queens
◼ A permutation of the numbers from 1 to 8: 8 2 5 3 1 7 4 6

Exercise: 8 Queens – Reducing the search space
◼ If the encoding uses Cartesian positions, the number of possibilities with all 8 queens on the board is 64⁸ ≈ 2.81 × 10¹⁴.
◼ The solution of the problem prohibits more than one queen per row, so we may assign each queen to a separate row; now we have 8⁸ > 16 million possibilities.
◼ The same goes for not allowing 2 queens in the same column either; this reduces the space to 8! = 40,320 possibilities.

Representation / solution mapping
◼ Transforms the encoding (genotype) into a problem solution (phenotype).
◼ One-to-one: one solution = one encoding
◼ One-to-many: one solution = multiple encodings (redundancy), e.g. symmetry in grouping problems
◼ Many-to-one: one encoding = multiple solutions

Objective function
◼ The objective function f formulates the goal to achieve.
◼ It associates with each solution of the search space a real value that describes the quality or fitness of the solution: f : S → ℝ
◼ It represents an absolute value and allows a complete ordering of all solutions of the search space.

Self-sufficient objective functions
◼ Straightforward objective function = original formulation of the problem
◼ TSP: minimize the total distance.
◼ SAT: boolean formula satisfied (TRUE).
◼ NLP: minimize the function G2(x).

Guiding objective functions
◼ Transform the original goal for better convergence (guide the search).
◼ SAT: a non-optimal solution = FALSE → no information on the quality of solutions, no indication of the improvement of solutions to guide the search towards good solutions, …
◼ New objective function = number of satisfied clauses
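To make the SAT example concrete, here is a small sketch of the guiding objective: instead of a TRUE/FALSE verdict, count the satisfied clauses. The signed-integer clause encoding is an assumption of the illustration:

```python
def count_satisfied_clauses(clauses, assignment):
    """Guiding objective for SAT: number of satisfied clauses.

    clauses: CNF as a list of clauses; each clause is a list of non-zero ints,
             where literal k means variable k is True and -k means it is False.
    assignment: dict mapping variable index to bool.
    The plain "is the formula satisfied?" objective returns only TRUE/FALSE and
    gives the search no guidance; this count orders solutions by quality.
    """
    return sum(
        1 for clause in clauses
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause)
    )

# Example: (x1 v not-x2) and (x2 v x3) and (not-x1 v not-x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
assignment = {1: True, 2: True, 3: False}
print(count_satisfied_clauses(clauses, assignment))  # -> 3 (all satisfied)
```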
Constraint handling -> not trivial
◼ Reject strategies: only feasible solutions are kept
◼ Penalizing strategies: penalty functions
◼ Repairing strategies: repair infeasible solutions
◼ Decoding strategies: only feasible solutions are generated
◼ Preserving strategies: specific representations and search operators which preserve feasibility

Parameter Tuning
◼ Cf. the lecture of Hedieh Haddad (HPO)

Performance analysis of metaheuristics
◼ Experimental design: goals of the experiments, selected instances (real-life, constructed) and factors to be defined
◼ Measurements: measures to compute → statistical analysis
◼ Reporting in a comprehensive way

Measurements - Criteria
◼ Quality of solutions
◼ Computational effort: search time, …
◼ Robustness (instances, problems, parameters, …)
◼ Ease of use, simplicity, flexibility, development cost, …

Measurements – Quality of Solutions
◼ Global optimal solution: found via exact algorithms or "constructed" instances with known solutions (e.g., TSP).
◼ Bounds: obtained using relaxation techniques (e.g., Lagrangian, continuous relaxations).
◼ Best known solution: libraries of standard problem instances track and update the best solutions available online.
◼ Requirements: the decision maker (in real-life problems) may define a requirement on the quality of the solution to obtain.

Measurements – Statistical Analysis
◼ Performance assessment and estimation of the confidence of the results, so that they are scientifically valid.

Reporting
◼ Interaction plots: analyse the effect of two factors (parameters, e.g., mutation probability and population size in evolutionary algorithms) on the obtained results (e.g., solution quality, time).
◼ Scatter plots: analyse the trade-off between the different performance indicators (e.g., quality of solutions, search time, robustness).

Single-solution based metaheuristics
◼ "Improvement" of a single solution
◼ Walks through neighbourhoods or search trajectories in the landscape
◼ Exploitation oriented: iterative exploration of the neighbourhood (intensification)

Common concepts for S-metaheuristics
◼ Neighbourhoods
◼ Initial solution

Common concepts for S-metaheuristics - Neighbourhood
◼ Fundamental property: locality, i.e. the effect on the solution (phenotype) when performing a move (small perturbation) on the representation (genotype).
◼ Strong locality -> small changes in the representation imply small changes in the solution -> the S-metaheuristic will perform a meaningful search.
◼ Weak locality -> a large effect on the solution when a small change is made in the representation. In the extreme case of weak locality, the search degenerates into a random search of the search space.

Common concepts for S-metaheuristics - Neighbourhood
◼ Permutation-based representations -> the usual neighbourhood is based on the swap operator.
◼ For a permutation of size n, the size of this neighbourhood is n(n − 1)/2.
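A short sketch of the swap neighbourhood just described; it enumerates all n(n − 1)/2 neighbours of a permutation:

```python
from itertools import combinations

def swap_neighbourhood(perm):
    """All neighbours of `perm` under the swap operator.

    Swapping the elements at every pair of positions (i, j), i < j,
    yields exactly n(n-1)/2 neighbours for a permutation of size n.
    """
    neighbours = []
    for i, j in combinations(range(len(perm)), 2):
        neighbour = list(perm)
        neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
        neighbours.append(neighbour)
    return neighbours

print(len(swap_neighbourhood([1, 2, 3, 4])))  # -> 6, i.e. 4*3/2
```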
Local optimum

Common concepts for S-metaheuristics – Initial Solution
◼ Two main strategies: random solution / heuristic solution (e.g. greedy)
◼ Trade-off: quality vs. computational time
◼ Random initial solution: a quick operation, but the metaheuristic may take a much larger number of iterations to converge
◼ Using greedy heuristics often leads to better-quality local optima
◼ Using better initial solutions will not always lead to better local optima
◼ Generating random solutions may be difficult for highly constrained problems

Local Search Method – History of the Hikers
◼ 4 hikers are lost in a foggy mountain at night
◼ No map, and only a headlamp to see the immediate surroundings
◼ An altimeter and a compass
◼ Objective: to reach the valley of lowest altitude
◼ At each trail crossing, they can try only one trail at a time

Local Search Method – History of the Hikers
◼ Hiker 1
◼ Strategy: "I am sporty and I explore all the possible trails."
◼ Result: he hikes infinitely in the mountain.
◼ Note: blind enumeration is not a sensible search strategy.

Local Search Method – History of the Hikers
◼ Hiker 2
◼ Strategy: "As long as I can go down, I do it."
◼ Result: he gets stuck in a local valley.
◼ Note: greedy approaches lead to local optima.

Local Search Method – History of the Hikers
◼ Hiker 3
◼ Strategy: "I go down, but sometimes I may take paths that do not go up too much."
◼ Result: he reaches lower valleys than Hiker 2, but he may still get stuck in a local valley.
◼ Note: accepting worse solutions may help us escape from local optima.

Local Search Method – History of the Hikers
◼ Hiker 4
◼ Strategy: "As long as possible, I take the paths with the steepest descent, but sometimes I may take paths that do not go up too much. I also memorize and avoid the trails already taken."
◼ Result: same as Hiker 3, but he may spend less time getting there.
◼ Note: a short memory avoids infinite loops and may save time.

Local Search Method
◼ Local search methods are heuristic methods for solving computationally hard optimization problems.
◼ Local search algorithms move from solution to solution in the search space by applying local changes, until a solution deemed optimal is found or a time bound has elapsed.
◼ Requirements:
- Initial solution
- Local search operators (searching in a neighbourhood)

Local Search Method – Advantages vs. Disadvantages
◼ Advantages:
- Powerful in intensification (they exploit the neighbourhood carefully)
- Use very little memory
- Provide better solutions than greedy methods
◼ Disadvantages:
- Get stuck in local optima (weak in diversification)
- Require problem-specific operators (sometimes)
- Their performance highly depends on the initial solution (they need a good enough initial solution to work well)

Local Search Method – Major components of an LS Method
◼ 1: Initial Solution:
- The solution from which the search starts
- Solutions can be represented in any form: matrices, vectors, lists, etc.
◼ Example on a TSP with 10 cities:
- Solution: a permutation of 10 cities: [1, 4, 8, 7, 5, 2, 10, 9, 3, 6]
- TSP route: 1 ➣ 4 ➣ 8 ➣ 7 ➣ 5 ➣ 2 ➣ 10 ➣ 9 ➣ 3 ➣ 6
◼ 2: Intensification Operator(s):
- When a promising solution is found, these operators MOVE the solution to search its neighbourhood
- MOVEs are small changes to the solution
- Each MOVE creates a new solution

Local Search Method – Intensification MOVEs for TSP
◼ 2Opt-Swap MOVE:
- Selects two cities i and j and reverses the order of the cities between i and j ([i : j] = [j : i : -1])
- Solution before 2Opt-Swap (i = 8; j = 10): [1, 4, 8, 7, 5, 2, 10, 9, 3, 6]
- Solution after 2Opt-Swap (i = 8; j = 10): [1, 4, 10, 2, 5, 7, 8, 9, 3, 6]
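A minimal sketch of the 2Opt-Swap move above, assuming i and j are city labels whose segment (inclusive) is reversed, as in the slide's example:

```python
def two_opt_swap(tour, i, j):
    """2Opt-Swap: reverse the tour segment between cities i and j (inclusive).

    i and j are city labels, not positions; this matches the slide's
    [i : j] = [j : i : -1] notation.
    """
    a, b = tour.index(i), tour.index(j)
    if a > b:
        a, b = b, a
    return tour[:a] + tour[a:b + 1][::-1] + tour[b + 1:]

tour = [1, 4, 8, 7, 5, 2, 10, 9, 3, 6]
print(two_opt_swap(tour, 8, 10))  # -> [1, 4, 10, 2, 5, 7, 8, 9, 3, 6]
```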
◼ Insertion MOVE (also called Relocate MOVE):
- Selects two cities i and j and inserts j next to i
- Solution before Insertion (i = 8; j = 10): [1, 4, 8, 7, 5, 2, 10, 9, 3, 6]
- Solution after Insertion (i = 8; j = 10): [1, 4, 8, 10, 7, 5, 2, 9, 3, 6]

Local Search Method – Intensification MOVEs for TSP
◼ Reversion MOVE:
- Selects two cities i and j and exchanges their positions.
- Solution before Reversion (i = 8; j = 10): [1, 4, 8, 7, 5, 2, 10, 9, 3, 6]
- Solution after Reversion (i = 8; j = 10): [1, 4, 10, 7, 5, 2, 8, 9, 3, 6]

Local Search Method – Major components of an LS Method
◼ 3: Diversification Operator(s):
- When the algorithm gets stuck in a local optimum, it PERTURBS the solution to escape it
- The perturbation should be strong enough to kick the solution out of the local optimum
◼ 4: Acceptance Policy:
- Decides whether to ACCEPT the new solution or not
- Sometimes it is good to accept solutions that (moderately) worsen the objective function
- Accepting worse solutions may help to escape local optima (diversification aspect)

Local Search Method – Diversification PERTURBATION for TSP
◼ Double Bridge PERTURBATION:
- Selects 2 edges at random and changes their position in the solution.
- Solution before Double Bridge: [1, 2, 3, 4, 5, 6, 7, 8]
- Solution after Double Bridge: [1, 6, 7, 4, 5, 2, 3, 8]

S-metaheuristics – Escaping from Local Optima

ILS – Three basic elements
◼ Local search: any S-metaheuristic (deterministic or stochastic)
◼ Perturbation method: may be seen as a large random move of the current solution
◼ Acceptance criterion: defines the conditions the new local optimum must satisfy to replace the current solution

Iterated local search
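Tying the three ILS elements together, a minimal skeleton; the "accept if better" criterion and the double-bridge variant shown here are illustrative assumptions, not the slides' exact specification:

```python
import random

def iterated_local_search(initial, local_search, perturb, cost, iterations=100):
    """ILS skeleton: local search + perturbation + acceptance criterion.

    local_search(s) -> locally optimal solution starting from s
    perturb(s)      -> large random move applied to s (e.g. double bridge)
    cost(s)         -> objective value to minimise
    """
    best = current = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(current))
        if cost(candidate) < cost(current):  # acceptance criterion
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best

def double_bridge(tour):
    """One common double-bridge variant: cut the tour into four parts
    A|B|C|D at three random points and reconnect them as A C B D."""
    n = len(tour)
    a, b, c = sorted(random.sample(range(1, n), 3))
    return tour[:a] + tour[b:c] + tour[a:b] + tour[c:]
```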
Extra resource – Symmetric TSP search space size
For n cities:
1. There are n! possible permutations of n cities.
2. However, because the TSP is symmetric, any tour and its reverse are identical (e.g., A→B→C→A is the same as A→C→B→A). This reduces the number of unique tours by a factor of 2.
3. Additionally, the starting city is arbitrary (any city can serve as the start). Fixing one city as the starting point eliminates the n redundant rotations of each tour.
◼ Thus, the size of the solution space for the symmetric TSP is (n−1)!/2
◼ Example:
- For n = 4 cities: (4−1)!/2 = 6/2 = 3 unique tours
- For n = 10 cities: (10−1)!/2 = 362,880/2 = 181,440 unique tours

Where are we – Population-based algorithms
Main framework of population-based algorithms

Outline
◼ Common concepts of population-based metaheuristics
◼ Evolutionary algorithms
◼ Swarm intelligence

Nature-inspired algorithms
◼ Simulated annealing, tabu search, quantum computing, neural networks, …
◼ Evolutionary algorithms
◼ Swarm intelligence

Common concepts of population-based metaheuristics
◼ Initial population: overview of existing initialization methods, pseudo-random vs. parallel diversification
◼ Stopping criteria:
- Static procedures: number of iterations, computation time, …
- Adaptive procedures: number of iterations without improvement, diversity of the population, an optimal or satisfactory solution is reached

Evolutionary algorithms – Principle of evolution
◼ Evolution through mutations and crossovers at each generation → the best offspring are kept for the next generations

A bit of history
◼ Evolutionary Programming: L. Fogel (1962)
◼ Genetic Algorithms: J. Holland (1962)
◼ Evolution Strategies: I. Rechenberg & H.-P. Schwefel (1965)
◼ Genetic Programming: J. Koza (1989)

Evolutionary algorithms – Domains of application
◼ Numerical and combinatorial optimisation
◼ System modeling
◼ Planning and control
◼ Data mining
◼ Machine learning
◼ …

Performance
◼ Acceptable performance at acceptable cost on a wide range of problems
◼ Intrinsic parallelism (robustness, fault tolerance)
◼ Superior to other techniques on complex problems with:
- lots of data, many free parameters
- complex relationships between parameters
- many (local) optima
- adaptive, dynamic aspects

Advantages
◼ No presumptions about the problem space
◼ Widely applicable
◼ Low development and application costs
◼ Easy to incorporate other methods (hybridization)
◼ Solutions are interpretable (unlike NNs)
◼ Provide many alternative solutions
◼ Robust to changes in the environment (data, objectives, etc.)

Disadvantages
◼ No guarantee of an optimal solution within finite time (in general)
◼ May need parameter tuning
◼ Often computationally expensive, i.e. slow (when fitness evaluation is expensive)

Components of an EA
◼ Representation of an individual
◼ Initialization method
◼ Objective function
◼ Selection strategy
◼ Reproduction strategy
◼ Replacement strategy
◼ Termination criterion

Types of EA representations
◼ Genetic Algorithm: one individual is a list of genes; used in discrete optimisation
◼ Genetic Programming: one individual is a program, represented as a tree of operators and terminals, e.g. x × (−3) + (1 − y)
◼ Evolution strategies, evolutionary programming, differential evolution, …

Objective function
◼ Quantifies the quality of an individual
◼ SUPER IMPORTANT: it represents the desired traits of an individual and is the discriminating factor during selection.

Selection strategy
◼ Roulette wheel
◼ Tournament of size k, e.g. k = 3: i vs. j vs. k → best fitness(i, j, k)
◼ Stochastic universal sampling, rank-based selection

Reproduction strategy
◼ Depends highly on the representation of an individual
◼ Mutation: modifies an individual
- Ergodicity: every solution in the search space should be reachable
- Locality: minimal change (related to the neighbourhood)
- Validity: provides a valid solution
- Low probability: 0.001 ≤ p ≤ 0.01
◼ Crossover: combines two or more individuals to generate new ones
- Heritability: offspring should inherit characteristics from both parents
- Validity: provides a valid solution
- High probability: 0.45 ≤ p ≤ 0.95

Usual mutations of GA
◼ Binary representation: flip operator
◼ Discrete representation: changing the value associated with an element to another value, for instance x′ᵢ = xᵢ + N(0, σ)
◼ Tree representation: growing or shrinking the tree
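A short sketch of the usual binary-GA operators just listed: bit-flip mutation applied with a low per-gene probability and one-point crossover applied with a high probability (the specific rates are assumptions picked from within the slide's suggested ranges):

```python
import random

def bit_flip_mutation(individual, p_mut=0.01):
    """Flip each bit independently with a low probability (slide range: 0.001-0.01)."""
    return [1 - bit if random.random() < p_mut else bit for bit in individual]

def one_point_crossover(parent1, parent2, p_cross=0.9):
    """One-point crossover, applied with a high probability (slide range: 0.45-0.95)."""
    if random.random() >= p_cross:
        return parent1[:], parent2[:]  # no crossover: copy the parents
    point = random.randint(1, len(parent1) - 1)  # cut point between genes
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```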
Crossovers in binary representation

Arithmetic crossover for real numbers

Crossover in tree representation

Replacement strategy
◼ Represents the survivor selection among both the parent and the offspring populations.
◼ Generational replacement: the replacement concerns the whole population; the offspring population systematically replaces the parent population.
◼ Steady-state replacement: at each generation of the EA, only one offspring is generated; for instance, it replaces the worst individual of the parent population.
◼ Can include elitism: reintroducing the best solution found so far.

Evolutionary algorithms – Things to keep in mind
◼ EAs are stochastic: don't draw any conclusions from a single run.
◼ An EA's core evolutionary component is comparison: make the competitions fair.
◼ The objective function should be intelligently chosen.
◼ Offspring should have a chance to be better than their parents.

Small exercise 1
Suppose a genetic algorithm uses chromosomes of the form x = abcdefgh: a fixed length of eight genes, each gene being any digit between 0 and 9. Let the fitness of individual x be calculated as:
f(x) = (a + b) − (c + d) + (e + f) − (g + h)
The initial population is:
x1 = 65413281
x2 = 12342901
1. Evaluate the fitness of x1 and x2.
2. Evaluate the two offspring obtained by crossover:
1. using one-point crossover at the middle point;
2. using two-point crossover at the b-c and e-f points.

Small exercise 2
The knapsack problem is defined by:
Model the problem and define a solution of the problem:
◼ How is a gene and an individual defined?
◼ How is the fitness defined?

Small exercise 2 – SOLUTION
◼ Set of items (xᵢ), 1 ≤ i ≤ n; each item xᵢ has a weight wᵢ and a value vᵢ. Select S ⊆ {1, …, n}, the set (xᵢ), i ∈ S: maximize Σ_{i∈S} vᵢ such that Σ_{i∈S} wᵢ ≤ W, the maximum weight.
◼ The representation of a solution is a binary list, where 1 means the item is included in the bag and 0 otherwise.
◼ The fitness is the value of the selected items if the weight constraint is respected, and 0 otherwise.

Small exercise 2
How to define a mutation and a crossover?

Small exercise 2 – SOLUTION
◼ Mutation is bit flipping.
◼ For crossover, the classical crossovers on binary lists work.

Small exercise 3
What does an EA look like if an individual represents a permutation of the n first alphabetical letters (for the Travelling Salesman Problem, for instance)?
◼ Representation of an individual
◼ Proposal of a crossover and a mutation operator

Small exercise 3 – SOLUTION
◼ One solution can be defined by an ordering of the alphabetical list: abcdefghij
◼ Mutation can be defined by a swap, or by removing an element and placing it elsewhere
◼ Order crossover for permutations. Properties: from parent 1, the relative order, the adjacency, and the absolute positions are preserved; from parent 2, only the relative order is preserved.
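A sketch of an order crossover (OX) for permutations as in exercise 3; the random choice of cut points and the left-to-right fill are assumptions of this illustration (OX has several common variants):

```python
import random

def order_crossover(parent1, parent2):
    """Order crossover (OX) for permutations.

    Copy a slice of parent1 into the child at the same positions (preserving
    absolute positions, adjacency and relative order from parent 1), then fill
    the remaining positions with parent2's missing elements in their relative
    order (preserving only relative order from parent 2).
    """
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]              # slice copied from parent 1
    fill = [g for g in parent2 if g not in child]  # parent 2's relative order
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

print(order_crossover(list("abcdefghij"), list("jihgfedcba")))
```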
Outline
◼ Common concepts of population-based metaheuristics: initial population, stopping criteria
◼ Evolutionary algorithms
◼ Swarm intelligence: ant colonies, particle swarm optimization

What is swarm intelligence?
◼ A collective system capable of accomplishing difficult tasks in dynamic and varied environments, without any external guidance or control and with no central coordination
◼ Simple elements that move in the decision space
◼ Indirect communication with each other at each generation
◼ Inherent features: inherent parallelism, stochastic nature, adaptivity, use of positive feedback (reinforcement learning)

Ant colonies
◼ Proposed by Dorigo (1992)
◼ Imitate the cooperative behavior of ant colonies to solve optimization problems
◼ Use a very simple communication mechanism: pheromone

Ant colonies framework – Definition of ant behavior
◼ An ant travels in a graph, with pheromone τ_ij(t) on edge (i, j) at time t.
◼ At time t, ant k at position i has the set of possible next nodes N_i^k. The probability of going to node j is:
p_ij^k(t) = τ_ij(t) / Σ_{l ∈ N_i^k} τ_il(t)  if j ∈ N_i^k, and 0 otherwise.
◼ The next move is made randomly according to these probabilities.

Updates of pheromones
◼ Initially, a constant amount of pheromone is assigned to all arcs. Then:
- update of the pheromones according to ant behaviour
- evaporation of the pheromones: τ_ij = (1 − ρ) τ_ij

Updates of the pheromones
◼ Multiple strategies, according to the problem:
- Online, step-by-step: the pheromone trail is updated by an ant at each step of the solution construction.
- Off-line: the pheromone trail update is applied once all ants have generated a complete solution. This is the most popular approach, where different strategies can be used, e.g. quality-based: the k best candidates add Λ to all edges traversed: τ_ij = τ_ij + Λ.

Simple ant-colony construction
◼ A graph
◼ A mission to accomplish (e.g., going to one or multiple points)
◼ A reward according to the path taken (duration of the travel)
◼ Initially, a constant amount of pheromone is assigned to all arcs

Main issues in the design
◼ Pheromone information: should reflect the relevant information in the construction of the solution for a given problem.
◼ Pheromone update: the reinforcement learning strategy for the pheromone information has to be defined so that it guides the search without leading to premature convergence.
◼ Solution construction: after the run of the algorithm, how to build the solution output. This can be done using a greedy method: the ant follows, at each step, the path that has the most pheromone.
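A minimal sketch of the transition rule and evaporation step above, assuming the pheromones are stored as a matrix (that encoding is an assumption of the illustration):

```python
import random

def choose_next_node(tau, current, allowed):
    """Pick node j with probability tau[current][j] / sum of tau[current][l]
    over the allowed nodes l, as in the slide's transition rule."""
    weights = [tau[current][j] for j in allowed]
    return random.choices(allowed, weights=weights, k=1)[0]

def evaporate(tau, rho):
    """Pheromone evaporation: tau_ij <- (1 - rho) * tau_ij on every edge."""
    for row in tau:
        for j in range(len(row)):
            row[j] *= (1.0 - rho)

# Example: 4 nodes, uniform initial pheromone, ant at node 0, nodes 1-3 allowed
tau = [[1.0] * 4 for _ in range(4)]
print(choose_next_node(tau, 0, [1, 2, 3]))
evaporate(tau, rho=0.1)
```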
Particle Swarm
◼ Proposed by Dr. Eberhart and Dr. Kennedy (1995)
◼ Inspired by the social behavior of bird flocking and fish schooling
◼ Represents an element by its position and velocity

Particle swarm – Representation of a particle
◼ PSO solves problems whose solutions can be represented by a vector of k dimensions (analogous to a genetic algorithm solution).
◼ A particle is composed of:
- The x-vector: the current position of the particle, x_i(t − 1)
- The p-vector: the best solution found so far by the particle, p_i
- The v-vector: the velocity along which the particle will travel if undisturbed, v_i(t − 1)
- The g-vector: the position of the best candidate (locally or globally), p_g

Template of the PSO algorithm – Update of a particle
v_i(t) = v_i(t − 1) + ρ₁ (p_i − x_i(t − 1)) + ρ₂ (p_g − x_i(t − 1))
(see the code sketch at the end of this transcript)

Particle swarms: keep in mind
◼ These methods are more constrained by the structure of the solution in their vanilla form
→ They can inspire more advanced methods defined for specific problems.

How to build a metaheuristic: takeaways
◼ Lots of methods exist, but each depends on:
- the structure of the solution (binary, list, tree, other?)
- how a solution is evaluated (costly, explicit, …)
- the link between the components of a solution
- the influence of one good solution on others
◼ One strategy won't win in all cases:
- not all methods fit a given problem (a crossover can be hard to define, for instance)
- try multiple approaches
- for one approach, try multiple settings
◼ These methods are vanilla methods: they should work in most cases. Adapting the method to a precise problem can improve performance.
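A sketch of the particle update rule referenced above. The position update x_i(t) = x_i(t − 1) + v_i(t) and the random scaling of ρ₁ and ρ₂ are standard PSO choices assumed here; they are not spelled out in the transcript:

```python
import random

def update_particle(x, v, p_best, g_best, rho1=2.0, rho2=2.0):
    """One PSO step per the slide's rule:
    v_i(t) = v_i(t-1) + rho1*(p_i - x_i(t-1)) + rho2*(p_g - x_i(t-1)).

    rho1 and rho2 are scaled by uniform random draws, as is standard in PSO;
    the position update x <- x + v is the usual complement to the velocity rule.
    """
    k = len(x)
    new_v = [v[d]
             + rho1 * random.random() * (p_best[d] - x[d])
             + rho2 * random.random() * (g_best[d] - x[d])
             for d in range(k)]
    new_x = [x[d] + new_v[d] for d in range(k)]
    return new_x, new_v

x, v = [0.0, 0.0], [0.1, -0.2]
x, v = update_particle(x, v, p_best=[1.0, 1.0], g_best=[2.0, -1.0])
```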