Swarm Intelligence
Anand Nayyar and Nhu Gia Nguyen
Summary
This document introduces the concept of swarm intelligence, including biological foundations and various algorithms related to swarm intelligence, such as metaheuristics. The text explores the concept of swarm and collective intelligence of natural animals, presenting different types of collective intelligence and how they work. It further discusses the principles and characteristics of collective intelligence relating to social insects and the difference between social intelligence and collective intelligence. The document also contains information on self-organization, adaptability, and diversity in swarm intelligence, along with future topics such as swarm robotics and related issues.
Full Transcript
Chapter Seven: The Particle Swarm

This chapter introduces the particle swarm in its binary and real-numbered forms. The book so far has been preparing a context, describing related paradigms in computer science and social science, discussing culture and norms and language and other scientific and philosophical developments that, if we have been successful, will make the particle swarm seem like an obvious thing to propose.

The Adaptive Culture Model in the previous chapter hints at what can happen as a result of the simplest imaginable interactions of the simplest imaginable agents—if these can even be called "agents." Given a large space of possibilities, the population is often able to find multivariate solutions, patterns that solve problems, through a stripped-down form of social interaction.

It is worth emphasizing that individuals in the culture model are not trying to solve problems. They are only following the simple rules of the algorithm, which say nothing about the existence of a problem or how to solve it. Yet through reciprocal social influence each individual betters its "fitness" (the term is less appropriate here than in discussion of evolutionary algorithms), and the performance of the population improves. We would not say that the adaptive culture algorithm is an especially powerful way to solve problems, but it is a good introduction to some social algorithms that are.

The particle swarm algorithm is introduced here in terms of social and cognitive behavior, though it is widely used as a problem-solving method in engineering and computer science. We have discussed binary encoding of problems, and the first version of the particle swarm we present here is designed to work in a binary search space. Later in the chapter we introduce the more commonly used version, which operates in a space of real numbers.

Sociocognitive Underpinnings: Evaluate, Compare, and Imitate

A very simple sociocognitive theory underlies the Adaptive Culture Model and particle swarms. We theorize that the process of cultural adaptation comprises a high-level component, seen in the formation of patterns across individuals and the ability to solve problems, and a low-level component, the actual and probably universal behaviors of individuals, which can be summarized in terms of three principles (Kennedy, 1998): evaluate, compare, and imitate.

Evaluate

The tendency to evaluate stimuli—to rate them as positive or negative, attractive or repulsive—is perhaps the most ubiquitous behavioral characteristic of living organisms. Even the bacterium becomes agitated, running and tumbling, when the environment is noxious. Learning cannot occur unless the organism can evaluate, can distinguish features of the environment that attract and features that repel, can tell good from bad. From this point of view, learning could even be defined as a change that enables the organism to improve the average evaluation of its environment.

Compare

Festinger's social comparison theory (1954) described some of the ways that people use others as a standard for measuring themselves, and how the comparisons to others may serve as a kind of motivation to learn and change. Festinger's theory in its original form was not stated in a way that was easily tested or falsified, and a few of the predictions generated by the theory have not been confirmed, but in general it has served as a backbone for subsequent social-psychological theories.
In almost everything we think and do, we judge ourselves through comparison with others, whether in evaluating our looks, wealth, humor, intelligence (note that IQ scales are normed to a population average; in other words, your score tells you how you compare to others—which is really the point, isn't it?), or other aspects of opinion and ability. Individuals in the Adaptive Culture Model—and in particle swarms—compare themselves with their neighbors on the critical measure and imitate only those neighbors who are superior to themselves. The standards for social behaviors are set by comparison to others.

Imitate

You would think that imitation would be everywhere in nature; it is such an effective way to learn to do things. Yet, as Lorenz has pointed out, very few animals are capable of real imitation; in fact, he asserts that only humans and some birds are capable of it. Some slight variations of social learning are found among other species, but none compare to our ability to mimic one another. While "monkey see, monkey do" well describes the imitative behavior of our cousins, human imitation comprises taking the perspective of the other person, not only imitating a behavior but realizing its purpose, executing the behavior when it is appropriate. In The Cultural Origins of Human Cognition, Michael Tomasello argues that social learning of several kinds occurs in chimpanzees, but true imitation learning, if it occurs at all, is rare. For instance, an individual's use of an object as a tool may call another individual's attention to the object; this second individual may use the same object, but in a different way. True imitation is central to human sociality, and it is central to the acquisition and maintenance of mental abilities.

The three principles of evaluating, comparing, and imitating may be combined, even in simplified social beings in computer programs, enabling them to adapt to complex environmental challenges, solving extremely hard problems. Our view diverges from the cognitive viewpoint in that nothing besides evaluation, comparison, and imitation takes place within the individual; mind is not found in covert, private chambers hidden away inside the individual, but exists out in the open; it is a public phenomenon.

A Model of Binary Decision

Consider a bare-bones individual, a simple being with only one thing on its mind, one set of decisions to make, yes/no or true/false, binary decisions, but very subtle decisions, where it is hard to decide which choices to make. For each decision, this supersimplified individual can be in one state or the other, either in the yes state, which we will represent with a 1, or the no = 0 state. It is surrounded by other yes/no individuals, who are also trying to decide. Should I say yes? Should I say no? They all want to make the best choices.

Two important kinds of information are available to these primitive beings. The first is their own experience; that is, they have tried the choices and know which state has been better so far, and they know how good it was. But these social beings have a second consideration; they have knowledge of how the other individuals around them have performed. In fact they are so simple that all they know is which choices their neighbors have found most positive so far and how positive the best pattern of choices was.
If these stripped-down beings are anything like people, they know how their neighbors have done by observing them and by talking with them about their experiences. These two types of information correspond to Boyd and Richerson's individual learning and cultural transmission. The probability that the individual will choose "yes" for any of the decisions is a function of how successful the "yes" choice has been for them in the past relative to "no." The decision is also affected by social influence, though the exact rule in humans is admittedly not so clear. Social impact theory states that the individual's binary decisions will tend to agree with the opinion held by the majority of others, weighted by strength and proximity. But even that rule is somewhat vague, given ambiguities in the concepts of strength and proximity. For the present introductory model we will just say that individuals tend to be influenced by the best success of anyone they are connected to, the member of their sociometric neighborhood that has had the most success so far. While we admit this is an oversimplification, it has a kernel of truth that justifies the parsimony it brings to the model.

Individuals can be connected to one another according to a great number of schemes, some of which will be mentioned in Chapter 8. Most particle swarm implementations use one of two simple sociometric principles (see Figure 7.1). The first, called gbest, conceptually connects all members of the population to one another. The effect of this is that each particle is influenced by the very best performance of any member of the entire population. The second, called lbest (g and l stand for "global" and "local"), creates a neighborhood for each individual comprising itself and its k nearest neighbors in the population. For instance, if k = 2, then each individual i will be influenced by the best performance among a group made up of particles i − 1, i, and i + 1. Different neighborhood topologies may result in somewhat different kinds of effects. Unless stated otherwise, the following discussions will presume lbest neighborhoods with k = 2 (sometimes described as "neighborhood = 3").

[Figure 7.1 The two most common types of neighborhoods. The lbest neighborhood with k = 2: each individual's neighborhood contains itself and its two adjacent neighbors, and the first and last individuals are connected; if individual #3 has found the best position so far in #4's neighborhood, #4's velocity will be adjusted toward #3's previous best position and #4's own previous best position. The gbest neighborhood: assuming that #3 has found the best fitness so far in the entire population, all others' velocities will be attracted toward its previous best position.]

In a sociocognitive instance the individual must arrange an array of decisions or judgments in such a way that they all fit together, what we call "making sense" or "understanding" things. The individual must be able to evaluate, compare, and imitate a number of binary choices simultaneously. Evaluation of binary strings can be accomplished in one step. In the psychological case, that is, if we are talking about humans, we can again use the concept of cognitive dissonance to evoke the sense of tension that exists when an array of decisions contains inconsistencies.
We experience the state as discomfort and are motivated to change something to reduce the tension, to improve the evaluation. Dissonance as described by Festinger provides a single measure of cognitive evaluation, exactly as "fitness" is a single measure of genetic or phenotypic goodness. How do we improve cognitive fitness? Of course there are plenty of theories about this. In Ajzen and Fishbein's Reasoned Action Model (1980), intent is seen as a function of two kinds of things that should be getting familiar by now (see Figure 7.2). On the one hand, intent is affected by the person's attitude toward the behavior; for instance, if they believe violence is harmful or immoral, then they may intend not to act violently. This attitude is formed, in Ajzen (pronounced "eye-zen") and Fishbein's theory, by a linear combination of beliefs that the behavior will result in some outcomes (b_i) times the individual's evaluation of those outcomes (e_i):

    A_o = Σ_{i=1}^{n} b_i e_i

This kind of expectancy-value model of attitude has existed in some form for many years, and we will not criticize its linearity or asymptotic issues here (never mind the decades-old debate about summing versus averaging). We are interested in the fact that intent has a second cause, which Ajzen and Fishbein call the subjective norm. The subjective norm regarding a behavior is also built up, in their theory, as a linear sum of products, but this time the factors entering into the formula are social. The individual's subjective norm toward a behavior is a sum of the products of their beliefs that certain others think they should or should not perform the behavior, multiplied by the motivation to comply with each of those others:

    SN_o = Σ_{i=1}^{n} b_i m_i

[Figure 7.2 According to the Reasoned Action Model, behavior is a function of intention and only remotely of the individual's attitude about the behavior: attitude toward the behavior and the subjective norm both feed into intention, which in turn drives behavior.]

To point out the obvious, these two components of the theory of reasoned action map easily onto the components of Boyd and Richerson's cultural transmission model; that is, there is an individual term (individual learning or attitude toward a behavior) and a social term (cultural transmission or subjective norm). These two kinds of concepts are found in other theories as well and are represented in our decision model as the two terms that make up the change formula. We theorize that the coexistence of these two modes of knowledge, that is, knowledge acquired by the senses through experience in the world and knowledge acquired from others, gives humans the intellectual advantage; it is the source of our intelligence.

Besides their past experience and inputs from the social environment, another factor that affects the individual's decision is their current propensity or position regarding the issue. They may start with a strongly negative attitude and have subsequent positive experiences regarding the choice or attitude object—but still have a negative feeling about it. The positive experiences may make the individual more likely to choose the positive alternative, but in order to shift the individual's general propensity into the positive domain, the decision threshold would still have to shift upwards. If the individual's initial position is extreme, the probability is lower of its changing—for one thing, the individual is less likely to try the other alternative.
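To make the two expectancy-value sums above concrete, here is a small worked instance with hypothetical numbers (the beliefs, evaluations, and motivations below are invented for illustration; they are not taken from Ajzen and Fishbein):

    # Hypothetical expectancy-value arithmetic for one behavior.
    # b: belief strengths (0..1), e: outcome evaluations (-3..+3), m: motivations to comply (0..1)
    beliefs_outcomes  = [0.9, 0.4, 0.7]   # b_i entering A_o
    evaluations       = [+2, -1, +3]      # e_i
    beliefs_referents = [0.8, 0.3]        # b_i entering SN_o
    motivations       = [0.9, 0.5]        # m_i

    A_o  = sum(b * e for b, e in zip(beliefs_outcomes, evaluations))    # 1.8 - 0.4 + 2.1 = 3.5
    SN_o = sum(b * m for b, m in zip(beliefs_referents, motivations))   # 0.72 + 0.15 = 0.87
    print(A_o, SN_o)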
In mathematical terms, we are proposing a model wherein the probability of an individual's deciding yes or no, true or false, or making some other binary decision, is a function of personal and social factors (Kennedy and Eberhart, 1997):

    P(x_id(t) = 1) = f(x_id(t − 1), v_id(t − 1), p_id, p_gd)

where
P(x_id(t) = 1) is the probability that individual i will choose 1 (of course the probability of their making the zero choice is 1 − P) for the bit at the dth site on the bitstring,
x_id(t) is the current state of bitstring site d of individual i,
t means the current time step, and t − 1 is the previous step,
v_id(t − 1) is a measure of the individual's predisposition or current probability of deciding 1,
p_id is the best state found so far, for example, it is 1 if the individual's best success occurred when x_id was 1 and 0 if it was 0,
p_gd is the neighborhood best, again 1 if the best success attained by any member of the neighborhood was when it was in the 1 state and 0 otherwise.

The decisions themselves will be stochastic, if for no better theoretical reason than that we never know all the forces involved—it is very unlikely that any decision is made based solely on isolated facts pertaining directly to that decision alone. A lot of randomness allows exploration of new possibilities, and a little bit allows exploitation by testing patterns similar to the best one found so far; thus we can balance between those two modes of search by adjusting the uncertainty of decisions.

The parameter v_id(t), an individual's predisposition to make one or the other choice, will determine a probability threshold. If v_id(t) is higher, the individual is more likely to choose 1, and lower values favor the 0 choice. Such a threshold needs to stay in the range [0.0, 1.0]. We have already seen one straightforward function for accomplishing this, when we talked about neural networks. The sigmoid function

    s(v_id) = 1 / (1 + exp(−v_id))

squashes its input into the requisite range and has properties that make it agreeable to being used as a probability threshold (though there is nothing magical about this particular function).

We wish to adjust the individual's disposition toward the successes of the individual and the community. To do that we construct a formula for each v_id in the current time step that will be some function of the difference between the individual's current state or position and the best points found so far by itself and by its neighbors. We want to favor the best position, but not so much that the individual ceases searching prematurely. If we simply added (p_id − x_id(t − 1)) and (p_gd − x_id(t − 1)) to v_id(t), it would move upward when the difference between the individual's previous best and most recent states, or the difference between the neighborhood's best and the individual's most recent states, equaled 1, and would be attracted downward if either difference equaled −1. The probability threshold moves upward when the bests are ones and downward when they are zeroes.

In any situation we do not know whether the individual-learning or the social-influence terms should be stronger; if we weight them both with random numbers, then sometimes the effect of one, and sometimes the other, will be stronger. We use the symbol φ (the Greek letter phi) to represent a positive random number drawn from a uniform distribution with a predefined upper limit.
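As a preview of the update rule given in full just below, here is a minimal Python sketch (ours, not from the book) of one stochastic bit decision for a single site of a single particle: two uniform random weights play the role of φ1 and φ2, the sigmoid squashes the predisposition into (0, 1), and a uniform draw decides the bit. The function and variable names, and the default limits, are chosen for illustration only.

    import math
    import random

    def sigmoid(v):
        # Squash the predisposition into (0, 1) so it can act as a probability threshold.
        return 1.0 / (1.0 + math.exp(-v))

    def update_bit(x_prev, v_prev, p_i, p_g, phi_max=2.0, v_max=4.0):
        # x_prev : previous state of the bit (0 or 1)
        # v_prev : previous predisposition toward choosing 1
        # p_i    : bit value at the individual's best position so far
        # p_g    : bit value at the neighborhood's best position so far
        phi1 = random.uniform(0.0, phi_max)          # weight on individual learning
        phi2 = random.uniform(0.0, phi_max)          # weight on social influence
        v = v_prev + phi1 * (p_i - x_prev) + phi2 * (p_g - x_prev)
        v = max(-v_max, min(v_max, v))               # keep s(v) away from 0.0 and 1.0
        x = 1 if random.random() < sigmoid(v) else 0
        return x, v

    # Example call: x, v = update_bit(x_prev=0, v_prev=0.0, p_i=1, p_g=1)

With phi_max set to 2.0, the two φ limits sum to 4.0, and the clamp on v corresponds to the Vmax parameter discussed next.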
In the binary version the limit is somewhat arbitrary, and it is often set so that the two φ limits sum to 4.0. Thus the formula for binary decision is

    v_id(t) = v_id(t − 1) + φ1 (p_id − x_id(t − 1)) + φ2 (p_gd − x_id(t − 1))
    if ρ_id < s(v_id(t)) then x_id(t) = 1; else x_id(t) = 0

where ρ_id is a vector of random numbers drawn from a uniform distribution between 0.0 and 1.0. These formulas are iterated repeatedly over each dimension of each individual, testing every time to see if the current value of x_id results in a better evaluation than p_id, which will be updated if it does. Boyd and Richerson varied the relative weighting of individual experience and social transmission according to some theoretical suggestions; the current model acknowledges the differential effects of the two forces without preconceptions about their relative importance. Sometimes decisions are based more on an individual's personal experience and sometimes on their perception of what other people believe, and either kind of information will dominate sometimes.

One more thing: we can limit v_id so that s(v_id) does not approach too closely to 0.0 or 1.0; this ensures that there is always some chance of a bit flipping (we also don't want v_i moving toward infinity and overloading the exponential function!). A constant parameter Vmax can be set at the start of a trial to limit the range of v_id. In practice, Vmax is often set at ±4.0, so that there is always at least a chance of s(Vmax) ≈ 0.0180 that a bit will change state. In this binary model, Vmax functions similarly to mutation rate in genetic algorithms.

Individuals make their decision in a population, where they are influenced by the successes of their neighbors. As each individual's decision is affected by (p_gd − x_id(t − 1)), that is, (usually) some other individual's success, they influence one another and tend to move toward a common position. As an individual begins to approximate its neighbor's best position, it may perform better and influence its neighbors, and on and on; good decisions spread through the population. We are comfortable calling this the formation of a culture in a computational population.

In this section we have developed an extremely parsimonious model of binary choice as a function of individual learning and social influence. Individuals tend to gravitate probabilistically toward the decisions that have resulted in successes for themselves and their colleagues. The result is optimization of each individual's decision vector and convergence of the population on an optimal pattern of choices. The entire algorithm, maximizing goodness, is shown in pseudocode:

    Loop
        For i = 1 to number of individuals
            if G(x_i) > G(p_i) then do                      // G() evaluates goodness
                For d = 1 to dimensions
                    p_id = x_id                             // p_id is best so far
                Next d
            End do
            g = i                                           // arbitrary
            For j = indexes of neighbors
                If G(p_j) > G(p_g) then g = j               // g is index of best performer in the neighborhood
            Next j
            For d = 1 to number of dimensions
                v_id(t) = v_id(t − 1) + φ1 (p_id − x_id(t − 1)) + φ2 (p_gd − x_id(t − 1))
                v_id ∈ (−Vmax, +Vmax)
                if ρ_id < s(v_id(t)) then x_id(t) = 1; else x_id(t) = 0
            Next d
        Next i
    Until criterion

3 Introduction to Swarm Intelligence
Anand Nayyar and Nhu Gia Nguyen

CONTENTS
3.1 Biological Foundations of Swarm Intelligence
3.2 Metaheuristics
3.2.1 The History of Metaheuristics
3.2.2 Introduction to Metaheuristics
3.2.3 Characteristics of Metaheuristics
3.2.4 Classification of Metaheuristics
3.3 Concept of Swarm
3.4 Collective Intelligence of Natural Animals
3.4.1 Principles of Collective Intelligence
3.4.2 Characteristics of Collective Intelligence
3.4.3 Types of Collective Intelligence
3.4.4 Difference between Social Intelligence and Collective Intelligence
3.5 Concept of Self-Organization in Social Insects
3.6 Adaptability and Diversity in Swarm Intelligence
3.7 Issues Concerning Swarm Intelligence
3.8 Future Swarm Intelligence in Robotics – Swarm Robotics
3.8.1 Advantages of Swarm Robotics
3.8.2 Difference between Swarm Robotics and Multi-Robot Systems
3.9 Conclusion

3.1 Biological Foundations of Swarm Intelligence

Swarm intelligence has attracted lots of interest from researchers in the last two decades. Many swarm intelligence-based algorithms have been introduced to various areas of computer science for optimization, gaining lots of popularity. The reasons behind the success of swarm intelligence-based algorithms are their dynamic and flexible ability and that they are highly efficient in solving nonlinear problems in the real world. Bonabeau, Dorigo, & Theraulaz (1999) defined swarm intelligence as "The Emergent Collective Intelligence of groups of simple agents". Swarm intelligence is regarded as a study of intelligent computational systems derived from "collective intelligence". Collective intelligence (CI) is defined as the group intelligence that emerges from the cooperation or collaboration of a large number of swarm agents, acting out the same protocol within the same environment. Examples of homogeneous swarm agents are ants, fish, termites, cockroaches, bats, and more. The two fundamental aspects that are necessary properties of swarm intelligence are self-organization and division of labour.

Self-organization is regarded as a significant property of natural systems; it is organization that takes place without any requirement of a central coordinating authority to perform the required tasks. Swarm agents exploit self-organization to attain the best work coordination, the agility to perform work at high speed and, above all, fault tolerance. Self-organization is based on four fundamental elements, as defined by Bonabeau et al. (1999): positive feedback, negative feedback, fluctuations, and multiple interactions. Positive feedback (amplification) is basically defined as the promotion of smart solutions by allocating more agents to the same work. Negative feedback (counterbalance) enables the swarm agents to avoid the same behaviour or involvement in the same state. Fluctuations are highly useful for randomness, errors, and random walks. Multiple interactions (sharing information) is the condition of sharing information among all other agents of the search area.
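As a toy numerical illustration of how these elements interact (our own sketch, not taken from the chapter), the Python fragment below reinforces two alternative paths in proportion to how many agents choose them (positive feedback), lets the deposited pheromone evaporate (negative feedback), and injects fluctuations through the random choices; the constants are invented for illustration only.

    import random

    # Two alternative paths to a food source; the shorter one (index 0) receives a
    # larger deposit per choosing agent. All values are illustrative.
    pheromone = [1.0, 1.0]
    evaporation = 0.1          # negative feedback: counterbalances the build-up
    deposit = [1.0, 0.6]       # positive feedback: the better path is reinforced more

    for step in range(100):
        total = sum(pheromone)
        # Each of 10 agents picks a path with probability proportional to pheromone;
        # the random draws are the "fluctuation" element.
        choices = [0 if random.random() < pheromone[0] / total else 1 for _ in range(10)]
        for p in range(2):
            pheromone[p] = (1 - evaporation) * pheromone[p] + deposit[p] * choices.count(p)

    print(pheromone)   # path 0 typically ends up carrying most of the pheromone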
There exists some continuous agitation between positive and negative feedback, and it usually happens in various self-organizing conditions such as market analysis, the study of complex networks, neural networks, etc. The second property of swarm intelligence is termed division of labour, which is regarded as the parallel execution of various tasks by swarm agents.

In general terms, swarm intelligence, which is regarded as a sub-area of artificial intelligence (AI), is primarily concerned with the design and implementation of intelligent multi-agent systems that follow in the footsteps of the behaviour of natural social insects and animals like ants, termites, bees, bats, cockroaches, fish, and more. Every single member of the colony is unsophisticated on its own, but the members tend to accomplish complex tasks by cooperating with other swarm agents. Example: ants roam in an arbitrary fashion in an open environment searching for food and, after locating a food source, return to their nest to communicate with other members of the colony regarding the food source. The members of the colony then lay a pheromone trail from the nest to the food source, forming a suitable, optimized path. Termites and wasps coordinate with each other to build sophisticated nests. The same is true of the ability of bees to orient themselves in an environment by generating awareness among other honey bees regarding a food source via the waggle dance.

The term swarm intelligence was first coined by Beni (1988) with regard to cellular robotic systems, where simple agents utilize the principle of self-organization via nearest-neighbour interaction. Swarm intelligence has led to the development of various models. From the point of view of computational intelligence, swarm intelligence models are computing models that have formulated algorithms for designing optimized solutions for distributed optimization problems. The primary objective of swarm intelligence is to design probability-based search algorithms. All the swarm agents working collectively share a number of general properties (Bai and Zhao, 2006): communication among swarm agents is highly indirect and of short duration; agents communicate in a distributed manner without any central coordinating authority; and there is a high degree of robustness among agents working in an orderly fashion.

Swarm intelligence models and methods are highly successful, especially in the area of optimization, which makes them suitable for various applications in computer science, industry, and other scientific fields. Examples: student timetable management systems, train transportation scheduling, the design of efficient telecommunication networks, computational biology, neural networks, artificial intelligence, machine learning, and deep learning problems. The most basic optimization problem, which has eased the construction of various test cases, is the travelling salesman problem (TSP) (Lenstra, Kan, and Shmoys, 1985). The problem is simple: a salesman has to traverse a large number of cities, ensuring that the total distance between all traversed cities is minimal. It has been applied to various problems of biology, chemistry, and physics, and the results were successful.

In a general methodology, any optimization problem P can be defined as the integration of three components (S, Ω, f), where:

S: a search space defined over a set of decision variables. These variables can deal with discrete or continuous optimization problems.
Ω: the set of constraints on the variables.

f: S → IR+, the objective function, which assigns a positive value to each element of S.

The main objective of this chapter is to start with a brief discussion of the biological foundations and the concept of swarm intelligence. After that, various concepts revolving around swarm intelligence, such as metaheuristics, the collective intelligence of animals, and swarm self-organization, will be discussed. The chapter then gives comprehensive coverage of the issues concerning swarm intelligence and concludes with a future direction for swarm intelligence: applying it to robotics, i.e., swarm robotics.

3.2 Metaheuristics

3.2.1 The History of Metaheuristics

Metaheuristics is a somewhat under-appreciated but widely used and adapted term. It is also regarded as a "search" or "optimization" algorithm. The true "metaheuristic" is regarded as a specific design pattern which instills knowledge and is applied to develop various optimization algorithms. Metaheuristics is a sub-area of "stochastic optimization". Stochastic optimization is a combination of algorithms and techniques deploying a high degree of randomness to find optimal solutions to hard problems. It is highly important to understand the history of metaheuristics (Sörensen, Sevaux, & Glover, 2018). Since the 1940s, many different metaheuristics have been proposed, and new ones are still in development today. Different problem-solving techniques have evolved since the inception of human beings, but using metaheuristics is regarded as a scientific approach to solving problems. The first technique under metaheuristics was designed and developed by Ingo Rechenberg, Hans-Paul Schwefel, and their colleagues from Germany, who called it "evolution strategy" (Osman and Laporte, 1996). The main objective of this technique was to find solutions to various optimization problems via computers. In the 1960s, Lawrence Fogel and his colleagues proposed "evolutionary programming", using simulated evolution as a learning process. In the 1970s, the genetic algorithm was proposed in a book titled Adaptation in Natural and Artificial Systems (Holland, 1992). Later in the 1970s, Fred Glover (1977) proposed the scatter search method, which created a significant foundation for the design and development of various methods derived from recourse to randomization. These developments are collectively called "evolutionary algorithms" or "evolutionary computation". The decades of the 1980s and 1990s saw rapid development of various metaheuristic algorithms. Fred Glover, in 1986, proposed a metaheuristic algorithm called "tabu search" (Talbi, 2009), and "simulated annealing" was proposed by S. Kirkpatrick, Gelatt, and Vecchi (1983). The artificial immune system (AIS) technique was developed by Farmer, Packard, and Perelson (1986). A major breakthrough in the area of metaheuristics came in 1992: Marco Dorigo proposed ant colony optimization (ACO), a nature-inspired technique for the optimization of problems, in his PhD thesis (1992). The technique is based on ants using a pheromone-based chemical trail to find the optimal path from their nest to a food source. In 1995, James Kennedy and Russell Eberhart proposed "particle swarm optimization (PSO)". It is based on the movement of organisms such as a flock of birds or a school of fish.
The PSO algorithm works on the basis of a population, called a swarm, of candidate solutions, known as particles. In 1994, in a book titled Genetic Programming (Koza, 1994), John Koza laid the foundations of the optimization technique called "machine learning", which is highly utilized today in various branches of computer science. In 1997, R. Storn proposed "differential evolution" (Storn & Price, 1997), which is regarded as one of the best optimization techniques in various applications compared to genetic algorithms. Many more metaheuristic-based algorithms have been proposed by various researchers from 2000 onward. Table 3.1 lists metaheuristic-based and swarm intelligence-based algorithms proposed by researchers from 1983 to 2017.

3.2.2 Introduction to Metaheuristics

The word "metaheuristics" is a combination of two words, "meta" and "heuristics". The Greek word "meta" means "beyond" or "upper level". The word "heuristics" is derived from the Greek word "heuriskein", which means "to find" or "to discover". The term metaheuristics can thus be read as "higher-level discovery", which can perform better than "simple heuristics/discovery". The word "metaheuristics" was coined by Dr. Fred Glover in his paper "Heuristics for integer programming using surrogate constraints" in Decision Sciences (Glover, 1977). In this article, he proposed "metaheuristics" as a powerful strategy which lays the foundation for other heuristics to produce better solutions than normal ones. Before his research, algorithms having stochastic components were called heuristics, but these days they are termed "metaheuristics". To date, the clearest and most comprehensive definition of metaheuristics was proposed by Sörensen and Glover (2013), as:

A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms. The term is also used to refer to a problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in such a framework.

The word "metaheuristic" is utilized for two different purposes. One is a high-level framework in which varied concepts and methodologies are combined and lay a strong foundation for the development of various novel optimization algorithms. The second denotes a specific implementation of an algorithm based on such a framework to find a solution to a specific problem.

TABLE 3.1 Metaheuristics and swarm intelligence based algorithms

Year  Name of Algorithm                         Proposed by
1983  Simulated annealing                       Kirkpatrick et al.
1986  Tabu search                               Glover
1989  Memetic algorithms                        Moscato
1992  Ant colony optimization (ACO)             Dorigo
1995  Particle swarm optimization (PSO)         Kennedy & Eberhart
2000  Bacteria foraging algorithm               Kevin Passino
2001  Harmony search                            Geem, Kim & Loganathan
2005  Artificial bee colony algorithm           Karaboga
2005  Glowworm swarm optimization               Krishnanand & Ghose
2005  Bees algorithm                            Pham
2006  Shuffled frog leaping algorithm           Eusuff, Lansey & Pasha
2007  Imperialist competitive algorithm         Atashpaz-Gargari & Lucas
2007  River formation dynamics                  Rabanal, Rodriguez & Rubio
2007  Intelligent water drops algorithm         Shah-Hoseini
2007  Monkey search                             Mucherino & Seref
2009  Gravitational search algorithm            Rashedi, Nezamabadi-pour & Saryazdi
2009  Cuckoo search                             Yang & Deb
2009  Group search optimizer                    He, Wu & Saunders
2010  Bat algorithm                             Yang
2011  Spiral optimization algorithm (SPO)       Tamura & Yasuda
2011  Galaxy based search algorithm             Hameed Shah-Hosseini
2012  Flower pollination algorithm              Yang
2012  Differential search algorithm (DS)        Civicioglu
2013  Artificial cooperative search algorithm   Civicioglu
2013  Cuttlefish optimization algorithm         Eesa, Mohsin, Brifcani & Orman
2014  Artificial swarm intelligence             Rosenberg
2016  Duelist algorithm                         Biyanto
2016  Killer whale algorithm                    Biyanto
2017  Rain water algorithm                      Biyanto
2017  Mass and energy balances algorithm        Biyanto
2017  Hydrological cycle algorithm              Wedyan et al.

According to Osman & Laporte (1996), metaheuristics is formally defined as

an iterative generation process that guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space; different learning strategies are used to structure the information in order to find efficient near-optimal solutions.

In the last few decades, there has been significant development and innovation in approximate solution methods – heuristics and metaheuristics. Various algorithms have been proposed on the basis of metaheuristics, such as simulated annealing (SA), tabu search (TS), the memetic algorithm (MA), the artificial immune system (AIS), ant colony optimization (ACO), genetic algorithms (GA), particle swarm optimization (PSO), and differential evolution (DE), which are utilized by researchers for optimizing and finding solutions to various problems in computer science. A metaheuristic algorithm is regarded as an iterative process that lays out paths and guides the operations of subordinate heuristics to find efficient solutions. It manipulates a complete or incomplete single solution or a set of solutions at each iteration. Today, the scope of metaheuristics has widened, and various methods have been proposed, such as ant colony systems, greedy randomized adaptive search, cuckoo search, genetic algorithms, differential evolution, bee algorithms, particle swarm optimization (PSO), etc.

3.2.3 Characteristics of Metaheuristics

In the olden days, problem solving was done via heuristic or metaheuristic approaches – by trial-and-error methodology. Many important discoveries, even those still utilized today, are the result of "thinking outside of the box", and some were merely arrived at by simple accident; this approach is called heuristics. In our day-to-day life, we handle almost every problem via heuristics. But now, heuristics are being taken to the next level, i.e. metaheuristics.
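To make the step from heuristic to metaheuristic concrete, here is a small Python sketch using the travelling salesman problem introduced earlier: the nearest-neighbour construction is a plain heuristic, while wrapping it in randomized restarts that keep the best tour found is a bare-bones multi-start scheme, loosely in the spirit of the greedy randomized adaptive search mentioned above. This is our own illustration, not an algorithm from the chapter, and the coordinates and parameters are invented.

    import math
    import random

    # Hypothetical city coordinates, for illustration only.
    cities = [(random.random(), random.random()) for _ in range(30)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def tour_length(tour):
        return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def greedy_tour(start):
        # Plain heuristic: always move to the nearest unvisited city.
        unvisited = set(range(len(cities))) - {start}
        tour = [start]
        while unvisited:
            nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    # Minimal "metaheuristic" wrapper: randomized restarts keep the best tour found.
    best = min((greedy_tour(random.randrange(len(cities))) for _ in range(20)),
               key=tour_length)
    print(round(tour_length(best), 3))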
The major reason behind the popularity of metaheuristics is that all the algorithms are designed and developed by considering nature-based systems, i.e., swarm agents, which include biological systems and physical and chemical processes. The most important characteristic of metaheuristics is that all the algorithms imitate the best features of nature.

The two major components of any metaheuristic algorithm are intensification and diversification, also called exploitation and exploration. The intensification, or exploitation, phase revolves around searching among the best solutions found and selecting the best candidate among the available options. The diversification, or exploration, phase ensures that the search space is explored more thoroughly. The overall efficiency of an algorithm depends on the balance of these two components. If there is too little exploration, the search will be confined within limited boundaries and it will be difficult to locate the global optimum. If there is too much exploration, the system will have trouble converging, and the overall search performance of the algorithm will be reduced. The basic principle of best search is "only the best survives", so a proper mechanism is required to achieve the best algorithm. The mechanisms should be run repeatedly in order to test the existing solutions and locate new options, verifying whether or not they are the best solutions.

The primary foundations of metaheuristic algorithms are nature based, and they apply either a population or a single solution to explore the search space. Methods using a single solution are local search-based metaheuristics, such as tabu search and simulated annealing, which share the property of describing a single state in the search space during the search process. By contrast, population-based metaheuristics, such as genetic algorithms, explore the search space through the evolution of a set of solutions.

The following are some fundamental properties which characterize metaheuristics in a very simple manner:

Metaheuristics lay the stepping stones that "guide" the entire search process. The ultimate goal is to explore the search space efficiently in order to determine the optimal solutions.
Metaheuristic algorithms are approximate and non-deterministic in nature.
Techniques integrating metaheuristic algorithms range from simple local search procedures to high-end complex learning processes.
Metaheuristics are not problem-specific and avoid getting trapped in confined areas of the search space.
Metaheuristic algorithms make use of domain-specific knowledge via heuristics and are controlled by an upper-level strategy; today's metaheuristics also make use of search experience to achieve optimal solutions.

3.2.4 Classification of Metaheuristics

A metaheuristic algorithm can provide a highly optimized and efficient solution to any optimization problem, but only if there exists a proper balance between the exploitation of the search experience and the exploration of the search space to identify areas with high-quality solutions with regard to the problem. The only difference between the various metaheuristic techniques is how they provide this balance. According to Talbi (2009), the following are different ways to classify metaheuristic algorithms.

Nature-inspired versus non-nature-inspired algorithms

The most active field of research in the area of metaheuristics is the design and development of nature-based algorithms.
The most recently proposed metaheuristics are "evolutionary computation-based algorithms", which are based on natural systems. Examples include simulated annealing, ant colony optimization (ACO), particle swarm optimization (PSO), and evolutionary algorithms. Non-nature-inspired metaheuristic algorithms include tabu search (TS) and iterated local search (ILS).

Population-based versus single-point search

Another important basis used for the classification of metaheuristic algorithms is the total number of solutions used at the same time. Algorithms working on single solutions are termed "trajectory methods" and include local search-based metaheuristics. Examples: tabu search, variable neighbourhood search, and iterated local search. Population-based methods are highly focused on the exploration of the search space. Examples: genetic algorithms and particle swarm optimization (PSO).

Dynamic versus static objective function

Metaheuristics can also be classified on the basis of the objective function they use: dynamic versus static. Algorithms with a static objective function, such as PSO, keep the objective function as given in the problem representation. Algorithms with a dynamic objective function, like guided local search, can modify the function during the search process.

One versus multiple neighbourhood structures

Most metaheuristic algorithms work on one single neighbourhood structure and do not change their fitness landscape topology during the search. Example: iterated local search. Some metaheuristics make use of a set of neighbourhood structures which can change the fitness landscape topology during the search. Example: variable neighbourhood search (VNS).

Iterative versus greedy

Iterative metaheuristic algorithms are those that start from a particular set of solutions and keep manipulating it as the search process continues, such as particle swarm optimization (PSO) and simulated annealing (SA). Greedy metaheuristic algorithms are those that start from an empty solution, and at every stage a decision variable is assigned a value and added to the solution set. Example: ant colony optimization.

Memory usage versus memory-less

Another significant parameter used to classify metaheuristic algorithms is whether or not they make use of memory. Metaheuristic algorithms can exploit the search history, and it is important to know whether they do so. Memory-less algorithms are those that use a Markov process to determine the next course of action from the current state. Example: simulated annealing. Memory-using algorithms are those that keep track of the moves and decisions undertaken. Nowadays, most metaheuristic algorithms make use of memory, and they are regarded as powerful algorithms. Example: tabu search.

3.3 Concept of Swarm

The concept of swarm is primarily a biological one, and its deep roots can be found in ethology (the study of animal behaviour), which has been pursued since the nineteenth century. Ethology is a mixture of diverse fields, including zoology (classification), anatomy (structure), and the study of ecosystems (context). Ethology brings together these three areas in order to understand how living beings exist and interact with one another. Ethology involves not only studying specific animals, but also interactions within groups and between individuals, and also how they live with regard to environmental constraints.
Ethology is primarily concerned with the study of natural living organisms such as ants, bees, fireflies, bats, and many more. The study of social insects is somewhat complicated, but they create "intelligent social interactions and structures" through division of labour, hierarchy of control, and task coordination. In ant colonies and bee colonies there is an ant queen or a queen bee, respectively, as well as proper top-down management of task allocation and all sorts of social roles: army ants, worker ants, and search ants, and likewise, in the case of bees, onlooker bees, worker bees, etc. In ant colonies, various functions are performed, such as foraging, laying pheromone trails, and following optimal paths. Other intelligent swarms also perform varied functions, like nest building by wasps, coordinated flashing by fireflies, and the waggle dance in the case of honey bees. In the same manner, the study of bird flocking is another example of how efficient organization happens without any sort of centralized control. It is the combination of ethology and computer science which leads to the study of the behaviour of birds and to finding suitable applications for proposing solutions to complex problems.

A swarm (Glover, 1977; Lévy, 1997) is generally regarded as a group of similar animals having the qualities of self-organization and decentralized control. According to Bonabeau et al. (1999), the following are typical characteristics of a swarm:

A swarm consists of many individuals working collectively and has the characteristic of self-organization.
The individuals of a swarm are homogeneous (identical), such as a robotic swarm or some species of the same animal.
The individuals of the swarm, when compared with the abilities of the whole swarm, are of limited intelligence and possess fewer capabilities; the swarm of combined individuals is much more powerful than its individual members.
Individuals in a swarm act and perform their respective tasks as per the rules and tasks allocated. The rules sometimes create complex behaviour, as in bird flocks, where each individual stays within a certain distance of its neighbouring bird and flies slightly to the rear of it.

According to Bonabeau and Meyer (2001), the following are the three main reasons for the success of swarms:

1. Flexibility
2. Robustness
3. Self-organization

Ants are highly flexible living organisms in terms of adapting to changing environments, and ants soon find an optimal path between a food source and the nest if any obstacle arises. Due to the large number of individuals in a swarm, the swarm has the characteristic of robustness: the swarm as a whole continues to exist even if single individuals fail or become extinct. Self-organization is a property of many complex systems that consist of a great number of interacting subsystems. One characteristic of self-organization is the spontaneous forming of structures and patterns under specific conditions, which is why it is suitable to start analyzing self-organizing processes by studying these so-called "pattern formations". In the honeybee colony it is easy to find a multiplicity of such patterns. The organization in the hive arises from the partial systems as well, because the co-operating bees (the sub-systems) give structure to the system even though they neither have knowledge of the global situation nor have a blueprint for the complete pattern in mind.
Each individual works (blind to the holistic pattern) according to local rules; the global pattern results from the interaction of these single individuals. Every single bee has the ability to perceive only a small part of the entirety of influencing stimuli, but the bees as a whole appear as a "collective intelligence".

3.4 Collective Intelligence of Natural Animals

The concept of "collective intelligence" is very hard to understand. In order to develop and innovate information systems, it is highly important to gain complete control over the environment. The term "collective intelligence" is related to "intelligence" as a phenomenon. In order to develop a theory of collective intelligence, it is highly desirable to understand the general terminology of intelligence. A system is said to have "collective intelligence" when two or more "intelligent" sub-systems combine and interact with each other in an intelligent manner.

The word intelligence is closely related to intelligent behaviour. Intelligent behaviour is regarded as the various actions and results produced in response to queries that ensure the system's reliability. Collective intelligence systems must exhibit intelligent behaviour. The most obvious examples of collective intelligence are the collective efforts of social organisms like ants, bees, fish, wasps, and termites when doing a particular task like collecting food, building nests, etc.

Collective intelligence has two dimensions: the quality of intelligence of the individual swarm agents operating in an environment, which is also termed "bio-intelligence", and the number of swarm agents, which can range from billions to trillions depending on the society. In addition to these, collective intelligence is also shaped by one more dimension, which involves the methodology of the systems integrated with and operating in an environment.

The collective intelligence of an insect colony is not determined by the genetic behaviour of its members; rather, it emerges from the continuous interactions of a large number of individuals that exhibit simple, stochastic behaviour and limited knowledge of the environment. Examples: the collective intelligence of ants – how the ants act as workers to locate food dynamically in an environment, then come back to the nest, while other ants follow the trail from the nest to the food source in an efficient manner; the collective intelligence of bees – the waggle dance, which conveys the direction and distance of food from the nest, and its quality. The most important point is how chemical messages change the operating environment of insects in colonies, and how they repeat the behaviour to accomplish a particular task.

Therefore, collective behaviour is not simply a sum of individual actions; it shows new characteristics that emerge from self-amplifying interactions among individuals and between individuals and the environment. Another important aspect of collective intelligence is self-organization – how an individual swarm agent knows which work to perform without any coordinating authority.

3.4.1 Principles of Collective Intelligence

According to Don Tapscott and Anthony D. Williams (2008), collective intelligence is also termed "mass collaboration". The following are the four principles of collective intelligence:

Openness: The foremost principle of collective intelligence is sharing ideas.
All the swarm agents share common information with their respective colonies, which provides them with an edge and more benefits from sharing information; agents gain significant control and improvement via collaboration.

Peering: Peering lays a strong foundation for "self-organization", a unique style of production that works more effectively than a hierarchy for performing all sorts of complex tasks like ant trailing, optimal route selection, etc.

Sharing: With the sharing of ideas among swarm agents, such as the waggle dance of honeybees, the glowing capacity of fireflies, etc., the operations in the colonies become more effective and easy.

Acting globally: With global integration there exist no geographical boundaries, allowing swarm agents to operate freely in an environment and to be more effective in collaborating with other agents for food search, trailing, and food collection from source to nest.

3.4.2 Characteristics of Collective Intelligence

The behaviour and work of colonies of social insects integrate many characteristics of collective intelligence, listed here (Wolpert & Tumer, 1999):

1. Flexibility: The environment changes dynamically, so individuals should adapt to changing environmental conditions via amplification by positive feedback, so that the entire system adapts to new circumstances.
2. Complexity: The entire colony behaviour adapts to an ever-changing environment, which can be quite complex in nature, and also involves solving the problem of coordination among different swarm agents so that they operate effectively in the environment.
3. Reliability: The system can function in an efficient manner even though some units fail and even without any central coordination utility to reassign duties in the colony.

The following are some real-world examples of distributed intelligent computational systems that cannot rely on centralized communication and can operate effectively using collective intelligence:

Control systems for operating communication satellites (GPS satellites, weather forecasting satellites, or military satellites) for collecting and communicating scientific data.
Control systems for effective routing of data in large communication networks (WAN or MAN).
Parallel algorithms for solving complex numerical optimization problems, such as scientific calculations, mainframe computing, or parallel systems for effective operations.
Traffic control systems for large cities with an enormous number of vehicles operating at any time of day.
Information grid control.
Control of nuclear reactors or chemical plants for all sorts of logistics and control operations.
Management of power electronics in large power grids based on wind mills, hydro power, etc.

3.4.3 Types of Collective Intelligence

Research on collective intelligence revolves around the design of intelligent infrastructures that enable agents in colonies to think and act more intelligently and intriguingly than individual agents. The following are the four different forms of collective intelligence which co-exist in natural animals and humans (Lévy, 1997):

1. Swarm
2. Original
3. Pyramidal
4. Holomidal

Swarm collective intelligence

Swarm collective intelligence applies to a large number of individuals, from a few hundred to thousands to billions. The individuals do not differ much from one another by design or by context.
Their individual margin of action remains limited and in most cases remains predictable. Swarm collective intelligence can nevertheless show immense adaptive and learning capacities. Examples: various social insects (ants, bees, termites, wasps), schools of fish, flocks of birds, and even traffic jams.

Original collective intelligence

Original collective intelligence consists of small groups composed of specialized individuals, such as a wolf swarm, dolphin swarm, cat swarm, monkey swarm, etc.

Pyramidal collective intelligence

Pyramidal collective intelligence comprises a hierarchy-based intelligence structure, which provides the best coordination and effectively utilizes the power of the masses. It is mostly found in human organizations because of their civilization. In civilization, human intelligence has defined criteria, like urban concentrations, work specialization, accounting and memorizing, large geographical areas, and centralized power.

Pyramidal collective intelligence has the following four dynamic principles (Bastiaens, Baumöl, & Krämer, 2010):

1. Labour division: Everyone in a civilization has a predefined role and can be interchangeable.
2. Authority: Determines the rules; assigns rights and prerogatives.
3. Limited currency: A medium of exchange and a store of value.
4. Standards and norms: Circulation and interoperability of knowledge within the community.

Holomidal collective intelligence

Holomidal collective intelligence is a kind of intelligence that is still at a developing stage; it succeeds pyramidal collective intelligence, which has become powerless to address tomorrow's high-stakes complexity. Holomidal collective intelligence is showing an enormous amount of growth in various forms of technical and scientific applications and in innovations in the arts.

3.4.4 Difference between Social Intelligence and Collective Intelligence

Collective intelligence, as mentioned above, highlights two main points. First, there exist some species of natural animals whose collective intelligence is regarded as much more sophisticated than that of human beings. Second, some animals, such as dolphins and monkeys, share a sense of group behaviour, i.e., social intelligence. So it is highly important to draw a clear distinction between the two terms "social intelligence" and "collective intelligence".

Social intelligence is regarded as an advanced form of intelligence which makes an animal capable of performing complex behaviour patterns to analyze and respond to situations correctly with other members of the group. Social intelligence is regarded as a situation in which mammals behave in a way that is termed "collective consciousness". Collective intelligence is also regarded as an advanced intelligence, in which the group intelligence of individual swarm agents copes correctly with the social, physical, and biological environment.

Ants have a higher degree of collective intelligence than social intelligence. Social intelligence is a genetic capability to respond to all sorts of signals from conspecifics. The low degree of social intelligence of ants allows interlopers to confuse the ants completely. Example: beetle begging behaviour causes ants to regurgitate droplets of food for them. The parasites can even perform the best mimicry of ants, not only in terms of behaviour but also with pheromone interference, to take food to their nests. Low social intelligence can also be found in bee hives.
Italian and Australian bees, for instance, share almost the same signals and can confuse one another. Yet social insects that rank very low in social intelligence rank at the top in collective intelligence, because collective intelligence rests mostly on mutual understanding and the self-organizing behaviour of agents.

3.5 Concept of Self-Organization in Social Insects

In recent years, the concept of self-organization (Bell & Cardé, 2013; Bonabeau, Theraulaz, Deneubourg, Aron, & Camazine, 1997) has been used to explain the emergence and maintenance of complex stable states from simple component systems. It was originally introduced in physics and chemistry to describe how microscopic processes give rise to macroscopic structures in out-of-equilibrium systems. The theory of self-organization can be extended to ethological systems such as social insects and other animals (ants, bees, bats, fish, etc.) to show how complex collective behaviour can emerge from interactions among individuals that exhibit only simple behaviours.
Self-organization (Uyehara, 2008) is a process in which complex patterns originate without any internal direction or supervising source. It is observed in systems as varied as phase transitions, chemical reactions, body muscles, and schools of fish.
In biology, self-organizing systems are composed of groups of individuals working collectively to create complex patterns via positive and negative feedback interactions. Much social insect behaviour derives from self-organization, since the insects lack the cognitive power to generate their observed complex structures and activities individually. Examples include ant foraging, honey bee thermoregulation, and ant corpse aggregation.
Examples of self-organization in the real world:
Oarsmen in a rowing team work collectively, pulling their oars in perfect synchronization with one another, constantly adjusting and keeping the same frequency.
The rhythmic contraction of muscle fibers in the heart is another classic example of self-organization.
Self-organization in physical systems differs from that in biological systems in two ways:
Complexity of sub-units: In physical systems the sub-units are inanimate objects, whereas in biological systems the sub-units are living organisms such as ants, bees, or fish, which have far greater inherent complexity.
Rules governing interactions: In physical systems, patterns are created through interactions based on physical laws alone. Biological systems also obey the laws of physics, but in addition involve physiological and behavioural interactions among living components.
The four constituents of self-organization in social insects are the following (a minimal code sketch of their interplay appears after this list):
Positive feedback (amplification): Simple behavioural patterns that lead to the creation of structures, for example recruitment and reinforcement. In ants, a food source is found by random search, after which a trail is laid and followed between source and nest; honey bees indicate the location of a food source via the waggle dance.
Negative feedback: Counterbalances positive feedback and stabilizes collective patterns through mechanisms such as saturation, exhaustion, and competition. Examples: a limited pool of foragers, depletion of a food source, or crowding at a food source.
Amplification of fluctuations: Self-organization builds on the amplification of fluctuations such as random walks, errors, and random task-switching. Example: the loss of foragers in an ant colony.
Multiple interactions: All self-organization relies on multiple interactions among individuals.
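To make this interplay concrete, the following minimal Python sketch models a two-path ant foraging scenario: trail reinforcement acts as positive feedback, pheromone evaporation as negative feedback, and a probabilistic path choice supplies the fluctuations that the colony can amplify. The path lengths, evaporation rate, and deposit amount are illustrative assumptions, not parameters taken from any particular study.

```python
import random

# Illustrative two-path foraging model (all parameters are assumed values).
EVAPORATION = 0.05                            # negative feedback: trails decay each step
DEPOSIT = 1.0                                 # positive feedback: ants reinforce the trail they used
PATH_LENGTHS = {"short": 1.0, "long": 2.0}    # the shorter path is reinforced more strongly

pheromone = {"short": 1.0, "long": 1.0}       # start with no bias between the two trails

def choose_path():
    """Probabilistic choice: stronger trails attract more ants (amplification of fluctuations)."""
    total = sum(pheromone.values())
    r = random.uniform(0.0, total)
    return "short" if r < pheromone["short"] else "long"

for step in range(200):
    path = choose_path()
    # Deposit pheromone inversely proportional to path length (shorter trips reinforce faster).
    pheromone[path] += DEPOSIT / PATH_LENGTHS[path]
    # Evaporation keeps trail strengths bounded and lets the colony forget poor choices.
    for p in pheromone:
        pheromone[p] *= (1.0 - EVAPORATION)

print(pheromone)   # the short path typically ends up with a much stronger trail
```

Running the loop many times usually drives almost all traffic onto the shorter trail, even though no individual ant ever compares the two paths: the collective decision emerges from feedback alone.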
3.6 Adaptability and Diversity in Swarm Intelligence

Swarm intelligence is the collective behaviour of decentralized, self-organized systems, natural or artificial (Bonabeau et al., 1999): the emergent collective intelligence of groups of simple agents. Individually these agents are not intelligent, but self-organization and division of labor are part of their nature; collectively they follow simple rules while interacting with one another, without any centralized control.
Adaptation and diversity in swarm-intelligence-based algorithms can be seen as striking a proper balance between intensification and diversification, controlling and fine-tuning parameters, and identifying new solutions while handling uncertainty precisely. The success of a swarm-intelligence-based algorithm depends on two contradictory processes: exploration of the search space and exploitation of the best feasible solutions. The first process is called diversification and the second intensification, and both are driving forces of swarm intelligence. Most population-based techniques are diversification oriented, whereas single-solution-based techniques are intensification oriented; local search strategies lead to better intensification, while random search strategies lead to better diversification.
The main criterion to address when initializing a population-based swarm intelligence strategy is diversification: a weakly diversified initial population leads to premature convergence. In order of decreasing diversification capability, the initialization strategies are (Talbi, 2009): parallel diversification, sequential diversification, the quasi-random approach, the pseudo-random approach, and the heuristic approach. It is therefore essential to maintain a balance between intensification and diversification. Swarm-based algorithms have parameters that directly or indirectly control the diversity of the population, and fine-tuning these control parameters is essential, along with adaptation to the environment.
The crucial model of adaptation in swarm intelligence is this balance between intensification and diversification, and diversity is inherently associated with it. A problem-specific search process with strong exploitation produces an intensive, concentrated search that rapidly loses diversity and then converges prematurely, whereas a completely randomized process enhances exploration and locates more diversified solutions but may mislead the search and converge very slowly. That is why it is crucial to make the search process adaptive.
Most swarm-intelligence-based algorithms use a real-numbered solution vector. The population size may be static or variable, and a varying population size can improve the performance of an algorithm. Adaptation provides the capability to adjust algorithm-dependent parameters as the environment requires. Likewise, changing the radius of the search space according to the iteration number can lead to better search efficiency. This type of parameter tuning is known as real parameter adaptation (Yang, 2015); for example, Bansal et al. (2014) proposed a self-adaptive ABC with adaptation of the step size and of the limit parameter.
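As a concrete illustration of real parameter adaptation, the sketch below shrinks the search radius as the iteration count grows, so that early iterations favour diversification and later iterations favour intensification. The decay schedule, bounds, and toy objective are illustrative assumptions; this is a generic random-search sketch, not the self-adaptive ABC of Bansal et al. (2014).

```python
import random

def sphere(x):
    """Toy objective to minimise: the sum of squares."""
    return sum(v * v for v in x)

def adaptive_random_search(dim=5, iters=500, radius0=1.0, decay=0.99):
    # Start from a random point in the assumed bounds [-5, 5]^dim.
    best = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    best_f = sphere(best)
    radius = radius0
    for t in range(iters):
        # Candidate = best solution perturbed within the current search radius.
        candidate = [v + random.uniform(-radius, radius) for v in best]
        f = sphere(candidate)
        if f < best_f:          # keep the candidate only if it improves the objective
            best, best_f = candidate, f
        radius *= decay         # real parameter adaptation: step size shrinks with iteration number
    return best, best_f

print(adaptive_random_search())
```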
In the same way, diversity in swarm-intelligence-based algorithms takes various forms. Most swarm-based algorithms update a solution using specific equations that contain random components, and these components control the step size, on which the performance of the algorithm depends. If the step size is too large, the search may skip over true solutions; if it is very small, convergence becomes slow. It is therefore crucial to maintain a proper amount of randomness. To maintain proper diversification, Bansal et al. (2013) added a local search phase to the ABC algorithm.
Adaptation and diversity in swarm intelligence are also related to solution selection, i.e., deciding which solutions are carried forward to the next iteration and which are abandoned. Population-based algorithms perform this task using a fitness function: highly fit solutions are selected and the worst-fit solutions discarded. Here a good adaptation policy is needed that can also maintain the diversity of solutions. The self-balanced particle swarm optimization developed by Bhambu et al. (2017) maintains a proper balance between the diversification and convergence abilities of the population, and Xin-She Yang (2015) discusses adaptation and diversity, exploration and exploitation, and attraction and diffusion mechanisms in detail. Adaptation and diversity cannot be studied independently; they are interlinked, and it is essential to retain an appropriate balance between them.
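The selection step described above can be made diversity-aware. The following sketch, under assumed parameter values and a toy objective, keeps the fitter of each parent/candidate pair but also measures population spread and re-randomises part of the population when diversity collapses. It is a generic illustration of such an adaptation policy, not the self-balanced PSO of Bhambu et al. (2017).

```python
import random

def fitness(x):
    return sum(v * v for v in x)              # toy objective to minimise

def diversity(population):
    """Average distance of individuals from the population centroid."""
    dim = len(population[0])
    centroid = [sum(ind[d] for ind in population) / len(population) for d in range(dim)]
    return sum(
        sum((ind[d] - centroid[d]) ** 2 for d in range(dim)) ** 0.5
        for ind in population
    ) / len(population)

def evolve(pop_size=20, dim=5, iters=300, step=0.3, min_diversity=0.05):
    population = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(population, key=fitness)
    for _ in range(iters):
        for i, ind in enumerate(population):
            candidate = [v + random.uniform(-step, step) for v in ind]
            if fitness(candidate) < fitness(ind):      # greedy, fitness-based selection
                population[i] = candidate
                if fitness(candidate) < fitness(best):
                    best = candidate
        if diversity(population) < min_diversity:      # adaptation policy: restore lost diversity
            for i in random.sample(range(pop_size), pop_size // 4):
                population[i] = [random.uniform(-5, 5) for _ in range(dim)]
    return best

best = evolve()
print(best, fitness(best))
```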
3.7 Issues Concerning Swarm Intelligence

Swarm intelligence has become an important subject for addressing many engineering research problems, with wide applications in wireless communications, robotics, surveillance, biomedical applications, and more. When adopting swarm intelligence principles and applying them to a particular domain, it is very important to understand the issues that arise.
Modern engineering applications adopt swarm intelligence to improve the overall reliability of system performance. Swarm intelligence is a technology in which many members are associated in order to achieve the prime objective of overall standardization of the system (Banzi, Pozo, & Duarte, 2011). Naturally, when many members are associated with each other, security issues related to their information arise: an intruder joining the swarm may seriously disrupt data communication between members and may corrupt the data (Dai, Liu, & Ying, 2011).
It is especially important to consider the security challenges that may arise in areas such as communications, robotics, and cloud-computing-based Internet of Things. Swarm intelligence is implemented on the basis of data accumulated from the many swarm members associated in the network, and because this information must be carried from one member to others, security must be considered. Data communicated over the channels between swarm members should be protected with suitable encryption and message authentication (a short sketch appears at the end of this section); otherwise an intruder might alter or control the data. In mobile sensor networks such as MANETs, a set of devices or nodes is connected by a wireless network. Flying robots such as drones are associated with these sensor networks in some security applications and use swarm intelligence for association and effective decision making. Communication in such applications takes place over a wireless medium, and here swarm-intelligent networks go beyond the capabilities of mobile sensor networks, because the latter cannot duplicate the emergent, developing behaviour of swarm intelligence networks.
The confidentiality of the system members and of the data distributed in the swarm is becoming an important issue, and protecting this data is the first and foremost priority in swarm-intelligence-based applications. The next concern is protecting the integrity of the data: data communicated among the members should be accessible only in authorized ways. Service availability is also important for applications of swarm intelligence; denial of service (DoS) means losing the accessibility of the service.
Most of the security issues that arise are the same as those seen in other systems, but resource constraints matter more in the case of swarm intelligence: the smaller the system, the harder it is to secure, because of constraints on memory, communication bandwidth, computational power, and energy.
In a robotic swarm, an attack on a particular robot may affect the behaviour of the entire swarm: if a robot is tampered with and then reintroduced into the swarm, the attacker may end up influencing the whole swarm. Controlling an object is therefore a very sensitive issue in swarm intelligence, because there is no central control and members of the swarm depend entirely on one another's intelligence. The identity and authentication of each robot consequently play a major role; different authentication schemes are in use, such as group identity and individual identity. Intruder identification within the swarm is also very important. An intruder in the group can often be identified readily, because its behaviour differs abnormally from that of the other members; once identified, it must be segregated from the system. Finally, a swarm member should have an efficient learning ability.
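To ground the integrity and authentication concerns above, the short sketch below uses Python's standard hmac and hashlib modules to tag and verify messages exchanged between swarm members that share a pre-distributed key. The key-distribution step, the message format, and the field names are assumptions made for illustration; a deployed swarm would additionally need encryption for confidentiality, replay protection, and a proper key-management scheme.

```python
import hmac
import hashlib

# Pre-shared swarm key (assumed to be distributed securely before deployment).
SWARM_KEY = b"example-pre-shared-swarm-key"

def tag_message(sender_id: str, payload: bytes) -> bytes:
    """Attach an HMAC tag so receivers can check integrity and group membership."""
    msg = sender_id.encode() + b"|" + payload
    tag = hmac.new(SWARM_KEY, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def verify_message(wire: bytes):
    """Return the payload if the tag checks out, otherwise None (drop the message)."""
    msg, _, tag = wire.rpartition(b"|")
    expected = hmac.new(SWARM_KEY, msg, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None                            # tampered or forged message
    _, _, payload = msg.partition(b"|")
    return payload

wire = tag_message("robot-07", b"pheromone-level=0.42")
print(verify_message(wire))                    # b'pheromone-level=0.42'
print(verify_message(wire[:-1] + b"x"))        # None: tag mismatch is detected
```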
3.8 Future Swarm Intelligence in Robotics – Swarm Robotics

Swarm robotics (Şahin, 2004; Brambilla et al., 2013; Gogna & Tayal, 2013) is a novel approach concerned with the coordination and cooperation of large numbers of simple yet collectively powerful robots that are autonomous, not controlled centrally, capable of local communication, and fully operational on the basis of a biologically inspired approach. The main characteristics that make up a swarm robotic system are:
Autonomy of the individual robots.
Operation in a specific environment, with the ability to adapt to changing environmental conditions.
Sensing and local communication capabilities.
No centralized control mechanism and no global knowledge.
Cooperation with other robots to accomplish tasks in a coordinated manner.
To elaborate further on the definition of swarm robotics, we can decompose the notion of a robot as follows:
Machine: An entity having mechanical operational behaviour.
Automation: An entity capable of informational behaviour, i.e., information transfer as well as processing.
Robot: A mechanical automation, i.e., an entity having both mechanical and informational behaviour.

3.8.1 Advantages of Swarm Robotics

To understand the concept and advantages of swarm robotics more deeply, we compare a robotic swarm with a single robot (Tan & Zheng, 2013):
1. Accomplishing a Tedious Task: Designing and building a single robot to accomplish a tedious task can lead to a complicated structure requiring heavy maintenance, and such a robot is vulnerable to component failure; if a single component fails at any point, the entire robot can come to a standstill and the task is halted. A robotic swarm can perform the same task with greater agility, in less time, with more robust operation and less maintenance.
2. Parallel Task Execution: A single robot handles one task at a time, like a batch processing system in a computer. A robotic swarm, by contrast, has a large population in which many robots work at the same time, accomplishing multiple tasks in a coordinated manner.
3. Scalability: Interaction in a swarm is local, so individuals can join or leave a task at any time without interrupting the whole swarm. The swarm can easily adapt to environmental change and reallocate units from one task to other operations.
4. Stability: Compared with a single-robot system, a robotic swarm is much more stable and reliable, and its work is not halted when a single member quits. The swarm keeps working even with only a few members remaining, although having few members does reduce overall performance and degrade the task.
5. Economical: Building an intelligent, self-coordinating robotic swarm costs much less in design, manufacturing, and maintenance than building a comparably capable single robot. Swarm robots can also be mass-produced, whereas developing a single complex robot is a high-precision task that takes much more time.
6. Energy Efficient: The members of a swarm are smaller and simpler in operation than a comparable single robot, so their energy consumption is far lower relative to their battery size. Swarm robots can therefore operate for long periods on complex tasks, whereas a single large robot requires battery recharging after a certain period.
Given all these benefits over single-robot systems, swarm robotics can be applied to demanding applications such as military operations, disaster relief, mining, geological surveys, transportation, and environmental monitoring and control.

3.8.2 Difference between Swarm Robotics and Multi-Robot Systems

Table 3.2 highlights the differences between swarm robotics and multi-robot systems: