Article

Dual-Population Adaptive Differential Evolution Algorithm L-NTADE

by Vladimir Stanovov 1,2,*, Shakhnaz Akhmedova 3 and Eugene Semenkin 1,2
1 School of Space and Information Technologies, Siberian Federal University, 660074 Krasnoyarsk, Russia
2 Institute of Informatics and Telecommunication, Reshetnev Siberian State University of Science and Technology, 660037 Krasnoyarsk, Russia
3 Independent Researcher, 12489 Berlin, Germany
* Author to whom correspondence should be addressed.
Submission received: 21 November 2022 / Revised: 5 December 2022 / Accepted: 6 December 2022 / Published: 9 December 2022
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

Abstract:
This study proposes a dual-population algorithmic scheme for differential evolution and a specific mutation strategy. The first population contains the newest individuals and is continuously updated, whereas the other keeps the top individuals found throughout the whole search process. The proposed mutation strategy combines information from both populations. The proposed L-NTADE algorithm (Linear population size reduction Newest and Top Adaptive Differential Evolution) follows the L-SHADE approach by utilizing its parameter adaptation scheme and linear population size reduction. L-NTADE is tested on two benchmark sets, namely CEC 2017 and CEC 2022, and demonstrates highly competitive results compared to state-of-the-art methods. A deeper analysis of the results shows that it displays different properties compared to known DE schemes. The simplicity of L-NTADE, coupled with its high efficiency, makes it a promising approach.

1. Introduction

Currently, the area of evolutionary algorithms (EA) is rapidly developing along with other computational intelligence (CI) methods, such as neural networks (NN) and fuzzy logic systems (FL). The heuristic optimization approaches proposed within EA and swarm intelligence (SI) frameworks are aimed at finding the best possible algorithmic schemes capable of solving complex global optimization problems [1]. Specific versions of algorithms are developed for constrained, multi-objective, many-objective, Boolean, integer and bilevel optimization [2]. Nevertheless, the algorithms proposed for single-objective numerical problems often serve as a foundation for other directions of study and are often applied to solving complex engineering problems [3,4].
In recent years, differential evolution (DE) [5] has attracted the attention of many researchers as, unlike other EA and swarm intelligence methods, such as genetic algorithms (GA) [6], evolutionary strategies (ES) [7], particle swarm optimization (PSO) [8] and many others [9,10], it is characterized by high efficiency and simplicity in implementation. This is reflected in the number of participants in the recent optimization competitions, such as the IEEE Congress on Evolutionary Computation (CEC), where most of the submitted algorithms and winners are DE-based methods [11].
The studies on differential evolution are mainly concentrated on the problem of parameter adaptation as DE is known to be highly sensitive [12,13] to the three main parameters, namely scaling factor, crossover rate and population size. Various adaptation schemes were proposed, starting with the SaDE [14] algorithm, where the scaling factor was sampled with normal distribution, while crossover rate was learned based on experience. Other approaches, such as [15,16,17], used a predefined pool of parameter values. A relatively simple randomization of parameter values has been shown to perform well, as jDE [18] has demonstrated, followed by similar approaches. The development of the JADE algorithm [19], where memory cells were used to store successful values, was followed by SHADE [20] as the most popular and the L-SHADE [21] with population size reduction, as well as many others, such as [22]. Recent studies on differential evolution have resulted in many approaches, such as TVDE (with time-varying strategy) [23], CSDE (with combined mutation strategies) [24], qlDE (with Q-learning based parameter tuning strategy) [25], MPPCEDE (with multi-population and multi-strategy) [26] and RL-HPSDE (with adaptation based on reinforcement learning) [27]. Attempts have been also made to realize the automatic design of parameter adaptation in DE using genetic programming [28] and neuroevolution [29].
However, the main algorithmic scheme of DE remains the same. In most studies, it has a single population and an optional external archive, and the replacement occurs only if an offspring is better than a parent. In some studies, there have been attempts to deviate from the prevalent schemes by introducing hierarchical archives in HARD-DE [30], big and small populations in j21 [31], junior and senior individuals in a DE-like AGSK [32], and global replacement in GRDE [33]. Recently, the Unbounded DE (UDE) has been proposed in [34], where the population may infinitely grow, and specific selection mechanisms are applied to drive the search.
In this study, we further develop the ideas of UDE and propose a two-population DE algorithm, with the first population called newest and the second called top. The newest population has a specific update rule and keeps the last good solutions, while the top population keeps the best solutions found during the entire search. The resulting L-NTADE algorithm (Linear population size reduction Newest and Top Adaptive Differential Evolution) is considered in several modifications with various mutation strategies. The algorithm is tested on the CEC 2017 [35] and CEC 2022 [36] benchmark sets and demonstrates high efficiency and specific properties on some of the functions. The main features of this study can be outlined as follows:
1. The new dual-population DE scheme with a version of the current-to-pbest mutation strategy, using individuals from the top population as one of the p% best, is superior compared to other strategies;
2. The new selection (replacement) rule for the newest population allows the algorithm to significantly improve performance compared to the case when classical selection is used;
3. The proposed L-NTADE algorithm performs better on complex multimodal test problems.
Section 2 contains an overview of related work, Section 3 describes the proposed approach, Section 4 contains the experimental setup and results, followed by a discussion of the results, and Section 5 concludes the paper.

2. Related Work

2.1. Differential Evolution

Differential evolution is a popular heuristic numerical optimization method, originally proposed by Storn and Price [37]. DE is a population-based method, so it starts by randomly initializing a set of $N$ individuals $x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$, $i = 1, \ldots, N$, within the search range:

$$S = \{ x_i \in \mathbb{R}^D \mid x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D}) : x_{i,j} \in [x_{lb,j}, x_{ub,j}] \},$$

where $j = 1, \ldots, D$ and $D$ is the search space dimensionality. Each individual is generated using the uniform distribution:

$$x_{i,j} = x_{lb,j} + rand \times (x_{ub,j} - x_{lb,j}).$$
Although DE was proposed for numerical single-objective unconstrained problems, it can be modified for other types of problems [12]. The main feature of DE is the difference-based mutation operator, which is a key component of the search process. There exist several variants of mutation strategies, including rand/1, rand/2, best/1, best/2, current-to-best/1 and current-to-pbest/1 [13]. The original version, rand/1, generates a new solution as follows:
$$v_{i,j} = x_{r1,j} + F \times (x_{r2,j} - x_{r3,j}),$$

where $v_i$ is called the mutant or donor vector, $x_{i,j}$ is the $j$-th coordinate of the $i$-th candidate solution, the indexes $i$, $r1$, $r2$ and $r3$ are all mutually different, and $F$ is the scaling factor chosen from $[0, 2]$. The scaling factor parameter is one of the most important for DE, as the algorithm was shown to be highly sensitive to its values [5].
After the mutation, the crossover step is performed, which combines the generated donor vector and the target vector used as a baseline in the mutation, i.e., the i-th individual in the population. The resulting trial vector u i is usually generated with a binomial crossover operator:
$$u_{i,j} = \begin{cases} v_{i,j}, & \text{if } rand(0,1) < Cr \text{ or } j = j_{rand}, \\ x_{i,j}, & \text{otherwise}. \end{cases}$$
In this formula, C r [ 0 , 1 ] is the crossover rate, and j r a n d is a randomly chosen index from [ 1 , D ] . The j r a n d index is required to make sure that at least one component is inherited from the donor vector. Otherwise, evaluating a copy of an individual would be a waste of computational resources. A recent study has shown that despite this fix, the problem of duplicate individuals may still occur in DE [38].
Applying a mutation operator may result in solutions that are outside of the search space boundaries. Hence, a specific bound constraint handling method (BCHM) should be applied in DE. In particular, a popular method for this is called midpoint-target, where each j-th ( j = 1 , , D ) coordinate of the i-th ( i = 1 , , N ) vector is returned to the interval [ x l b , j , x u b , j ] as follows:
$$u_{i,j} = \begin{cases} \dfrac{x_{lb,j} + x_{i,j}}{2}, & \text{if } v_{i,j} < x_{lb,j}, \\ \dfrac{x_{ub,j} + x_{i,j}}{2}, & \text{if } v_{i,j} > x_{ub,j}. \end{cases}$$
Here, if the j-th component of the mutant vector is greater than the upper boundary or smaller than the lower boundary of the corresponding interval [ x l b , j , x u b , j ] , then its parent vector x i is used to set the new value for this component. Note that this step can be applied after mutation or after crossover.
The last step in the classical DE scheme is called selection, but unlike selection in a genetic algorithm, it plays the role of a replacement operator. If the newly generated trial vector u i is better than the corresponding target vector, then the replacement occurs:
$$x_i = \begin{cases} u_i, & \text{if } f(u_i) \le f(x_i), \\ x_i, & \text{if } f(u_i) > f(x_i). \end{cases}$$
Although this selection mechanism is known to be simple and efficient, there have been some attempts to improve it, for example, by using the information about neighborhoods [39].
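The classical DE loop described in this subsection (rand/1 mutation, binomial crossover, midpoint-target bound handling and one-to-one replacement) can be sketched as follows. This is a minimal illustrative sketch in Python rather than an optimized implementation, and the function name is only for illustration:

```python
import numpy as np

def de_generation(pop, fitness, f, lb, ub, F=0.5, Cr=0.9, rng=None):
    """One generation of classical DE (rand/1/bin) with midpoint-target
    bound handling and one-to-one replacement. Illustrative sketch."""
    rng = np.random.default_rng() if rng is None else rng
    N, D = pop.shape
    for i in range(N):
        # three mutually different indexes, all different from i
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i],
                                size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])          # rand/1 mutation
        # midpoint-target: out-of-range components move halfway to the parent
        v = np.where(v < lb, (lb + pop[i]) / 2, v)
        v = np.where(v > ub, (ub + pop[i]) / 2, v)
        # binomial crossover with a guaranteed component j_rand
        j_rand = rng.integers(D)
        mask = rng.random(D) < Cr
        mask[j_rand] = True
        u = np.where(mask, v, pop[i])
        fu = f(u)
        if fu <= fitness[i]:                            # selection (replacement)
            pop[i] = u
            fitness[i] = fu
    return pop, fitness
```

Running several such generations on a simple test function monotonically improves the best fitness while keeping all individuals within the bounds.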

2.2. DE Modifications

Due to the high popularity of DE variants in evolutionary computation and the huge number of studies, a comprehensive survey of all existing methods here would be impractical. Interested readers are therefore advised to refer to surveys such as [12,13,40], specialized studies about certain types of DE, for example [22], or operators [41], as well as some of our previous studies on selective pressure [42] and parameter adaptation [43]. Nevertheless, here we will focus on some studies that are of particular interest for the current work.
One of the important milestones of DE development was the JADE algorithm, proposed by Zhang and Sanderson [44]. JADE introduced one of the most efficient mutation strategies current-to-pbest/1, which is used in most DE variants today, and can be described as follows:
$$v_{i,j} = x_{i,j} + F \times (x_{pbest,j} - x_{i,j}) + F \times (x_{r1,j} - x_{r2,j}),$$
where $pbest$ is the index of one of the $pb \cdot 100\%$ best individuals, different from $i$, $r1$ and $r2$. The two difference terms implement the two main features: exploitation, by moving towards one of the best solutions, and exploration, by adding a difference vector between two randomly chosen solutions. Moreover, increasing F towards 1 means generating solutions closer to the best and at the same time making a larger step with the second difference, while smaller F values mean exploration close to the target vector. JADE also introduced the concept of an external archive A, a set of solutions that were replaced by better ones during selection. The solutions from the archive are used in current-to-pbest/1 instead of the last vector $x_{r2}$. The archive was shown to be of major importance for improving search efficiency, and archive handling techniques are an important field of study [30,45].
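A sketch of current-to-pbest/1 with an external archive may clarify the strategy. The helper below is illustrative; the index-distinctness checks required by the full algorithm are omitted for brevity:

```python
import numpy as np

def current_to_pbest_1(pop, archive, fitness, i, F, pb=0.1, rng=None):
    """current-to-pbest/1 mutation with an external archive: the second
    difference vector r2 is drawn from the union of the population and
    the archive. Illustrative sketch (distinctness checks omitted)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(pop)
    n_best = max(1, int(pb * N))
    order = np.argsort(fitness)            # best-first indexes (minimization)
    pbest = rng.choice(order[:n_best])     # one of the pb*100% best
    r1 = rng.integers(N)
    union = list(pop) + list(archive)      # population plus archive
    r2 = rng.integers(len(union))
    return pop[i] + F * (pop[pbest] - pop[i]) + F * (pop[r1] - union[r2])
```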
The efficiency of JADE has inspired other researchers to develop its improved versions. In particular, the SHADE algorithm proposed by Tanabe and Fukunaga [20] has improved the parameter adaptation of JADE by introducing a set of H memory cells ( M F , h , M C r , h ) , each containing a couple of F and C r values. For every mutation and crossover operator, the parameter values are sampled as follows:
$$F = randc(M_{F,k}, 0.1), \quad Cr = randn(M_{Cr,k}, 0.1).$$
Here, r a n d c is a Cauchy distributed random value, r a n d n is a normally distributed random number, and k is chosen from [ 1 , H ] for each individual. If the generated C r value is outside the [ 0 , 1 ] range, it is truncated to this range. If F is larger than 1, it is set to 1, and if F is smaller than 0, it is generated again until it becomes positive. At the end of every generation, the memory cell with the index h (iterated from 1 to H every generation) is updated using the successful F and C r values. The successful parameter values are the ones which delivered an improvement in terms of fitness, i.e., if an offspring replaced a parent, then F and C r are stored in the S F and S C r arrays, and the improvement value Δ f = | f ( u j ) f ( x j ) | is stored in S Δ f . The update of the memory cell is performed by first calculating the weighted Lehmer mean [46]:
$$mean_{wL} = \frac{\sum_{j=1}^{|S|} w_j S_j^2}{\sum_{j=1}^{|S|} w_j S_j},$$

where $w_j = \frac{S_{\Delta f_j}}{\sum_{k=1}^{|S|} S_{\Delta f_k}}$, and $S$ is either $S_{Cr}$ or $S_F$.
The values in the memory cell are updated as follows:
$$M_{F,k}^{t+1} = 0.5\,(M_{F,k}^{t} + mean_{wL,F}), \quad M_{Cr,k}^{t+1} = 0.5\,(M_{Cr,k}^{t} + mean_{wL,Cr}),$$
where t is the current iteration number.
In [43], the biased parameter adaptation was proposed, modifying the Lehmer mean by introducing an additional parameter p m :
$$mean_{wL} = \frac{\sum_{j=1}^{|S|} w_j S_j^{p_m}}{\sum_{j=1}^{|S|} w_j S_j^{p_m - 1}}.$$
The additional parameter allows the adaptation of either F or C r to be skewed towards smaller or larger values. The standard setting in L-SHADE is p m = 2 , and in [43], it is shown that increasing this value and generating a larger F may lead to much better results in high-dimensional problems.
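The SHADE-style sampling rules and the (optionally biased) weighted Lehmer mean update can be sketched as follows. Function names are illustrative, and the sketch assumes minimization with fitness improvements as weights:

```python
import math
import random

def sample_parameters(M_F, M_Cr, rng):
    """Sample (F, Cr) from a random memory cell, following the rules above:
    Cr ~ N(M_Cr[k], 0.1) truncated to [0, 1]; F ~ Cauchy(M_F[k], 0.1),
    regenerated while non-positive and clipped at 1."""
    k = rng.randrange(len(M_F))
    Cr = min(1.0, max(0.0, rng.normalvariate(M_Cr[k], 0.1)))
    F = 0.0
    while F <= 0.0:
        # Cauchy sample via the inverse CDF
        F = M_F[k] + 0.1 * math.tan(math.pi * (rng.random() - 0.5))
    return min(1.0, F), Cr, k

def weighted_lehmer_mean(S, S_df, pm=2.0):
    """Weighted Lehmer mean of successful values S, weighted by fitness
    improvements S_df; pm = 2 is the standard L-SHADE setting, and larger
    pm biases the mean towards larger values."""
    total = sum(S_df)
    w = [d / total for d in S_df]
    num = sum(wj * sj ** pm for wj, sj in zip(w, S))
    den = sum(wj * sj ** (pm - 1) for wj, sj in zip(w, S))
    return num / den

def update_cell(M, h, S, S_df, pm=2.0):
    """Memory cell update: midpoint of the old value and the weighted mean."""
    M[h] = 0.5 * (M[h] + weighted_lehmer_mean(S, S_df, pm))
```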
Another important modification of SHADE was L-SHADE [21], which introduced a simple control strategy for population size, called Linear Population Size Reduction (LPSR). The algorithm starts with N P m a x individuals in the population, and gradually reduces their number to N P m i n individuals:
$$NP_{g+1} = round\left( \frac{NP_{min} - NP_{max}}{NFE_{max}} \cdot NFE + NP_{max} \right),$$
where N P m i n = 4 , N F E and N F E m a x are the current and total number of available function evaluations, respectively. At the end of every generation, the worst solutions are removed from the population if it is required, and the archive size is also decreased. Some recent studies have proposed L-SHADE variants with a very large population initialized by orthogonal design [47].
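The LPSR schedule itself is a one-line function; the sketch below assumes the rounding convention of the formula above:

```python
def lpsr_size(nfe, nfe_max, np_max, np_min=4):
    """Linear Population Size Reduction: the target population size after
    nfe of nfe_max function evaluations, per the formula above."""
    return round((np_min - np_max) / nfe_max * nfe + np_max)
```

The schedule starts at np_max, ends at np_min, and never increases between evaluations.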
In [42], the effects of selective pressure on DE performance were studied, and it was shown that adding tournament or rank-based selection strategies may be beneficial. The exponential rank-based selection was implemented by selecting an individual depending on its fitness in a sorted array, with the ranks assigned as follows:
$$rank_i = e^{\frac{k_p \cdot i}{NP}},$$
where k p is the parameter controlling the pressure, and i is the individual number. Larger ranks are assigned to better individuals, and a discrete distribution is used for selection.
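A possible reading of the exponential rank-based selection is sketched below; it assumes individuals are sorted best-first, so that larger ranks go to better individuals, and kp = 0 reduces to a uniform choice:

```python
import math
import random

def rank_based_index(NP, kp=3.0, rng=None):
    """Sample an index with exponential rank-based selective pressure.
    Assumption: the population is sorted best-first, so position 0 must
    receive the largest rank. Illustrative sketch."""
    rng = rng or random.Random()
    # rank of position i (0 = best): e^(kp * (NP - i) / NP)
    ranks = [math.exp(kp * (NP - i) / NP) for i in range(NP)]
    # discrete distribution over indexes, proportional to the ranks
    return rng.choices(range(NP), weights=ranks, k=1)[0]
```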
The importance of the L-SHADE algorithm is confirmed by the number of modifications proposed for it. For example, jSO [48] proposed a modified mutation strategy and specific rules for parameter adaptation depending on the stage of the search, L-SHADE-RSP [49] introduced rank-based selective pressure, LSHADE-SPACMA [50] used a hybridization with CMA-ES, NL-SHADE-RSP [51] proposed non-linear population size reduction, crossover rate sorting and adaptive archive usage, MLS-LSHADE [52] added multi-start local search, and DB-LSHADE proposed distance-based parameter adaptation [53]. Although all these studies have shown different possibilities of modern DE methods, according to a recent study on Unbounded DE [34], "The notion of a population with individuals which are replaced by newly generated individuals is a pervasive idea in differential evolution". In the next section, an algorithmic scheme that deviates from this concept is proposed.

3. Proposed Approach

Inspired by the experiments with unbounded population in UDE and supported by several preliminary tests with this setup, the L-NTADE algorithm is proposed. The L-NTADE maintains two populations, the first called the newest population, and the second named the top population. Unlike UDE, both populations are limited in size as the preliminary tests have shown that handling very large populations requires significant computational efforts.
The L-NTADE algorithm starts by initializing a population of N m a x individuals x i n e w , i = 1 , , N m a x . After that, the individuals in this population are copied to the top population x t o p .
The L-NTADE uses variants of the current-to-pbest mutation strategy and the parameter adaptation from SHADE algorithm, but without an external archive. The mutation strategies considered in this study are the following:
1. r-new-to-ptop/t/t: $v_{i,j} = x_{r1,j}^{new} + F \times (x_{pbest,j}^{top} - x_{i,j}^{new}) + F \times (x_{r2,j}^{top} - x_{r3,j}^{top})$;
2. r-new-to-ptop/t/n: $v_{i,j} = x_{r1,j}^{new} + F \times (x_{pbest,j}^{top} - x_{i,j}^{new}) + F \times (x_{r2,j}^{top} - x_{r3,j}^{new})$;
3. r-new-to-ptop/n/t: $v_{i,j} = x_{r1,j}^{new} + F \times (x_{pbest,j}^{top} - x_{i,j}^{new}) + F \times (x_{r2,j}^{new} - x_{r3,j}^{top})$;
4. r-new-to-ptop/n/n: $v_{i,j} = x_{r1,j}^{new} + F \times (x_{pbest,j}^{top} - x_{i,j}^{new}) + F \times (x_{r2,j}^{new} - x_{r3,j}^{new})$.
Here, the following notation is used. The terms r-new and r-top stand for the choice of a random individual from the newest population or top population as a target solution, pnew and ptop for the choice of one of the pb% best individuals in the newest or top populations, and the /t and /n suffixes indicate the usage of individuals from either the top or new population in the second difference. Note that the target vector is not the i-th vector as in most DE, but a randomly chosen vector from one of the populations. These mutation strategies were chosen as they represent different scenarios of applying individuals from one of the populations: the variants in which the second difference uses only one of the populations are the extreme cases, and the others are intermediate. The number of possible combinations here is significant, and we only consider the cases for which the efficiency level is unclear. Additionally, note that as the indexes are chosen from different populations, equal indexes should be checked only if they are from the same population.
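The four mutation variants can be expressed compactly. The helper below is illustrative and omits the index-distinctness checks discussed above; the index conventions follow the equations as listed:

```python
import numpy as np

def ntade_mutation(x_new, x_top, i, r1, r2, r3, pbest, F, strategy="n/t"):
    """The four r-new-to-ptop variants: the /t|n suffixes pick the source
    population (top or newest) of the two vectors in the second difference.
    Illustrative sketch; distinctness checks omitted."""
    src = {"t/t": (x_top[r2], x_top[r3]),
           "t/n": (x_top[r2], x_new[r3]),
           "n/t": (x_new[r2], x_top[r3]),
           "n/n": (x_new[r2], x_new[r3])}[strategy]
    a, b = src
    return x_new[r1] + F * (x_top[pbest] - x_new[i]) + F * (a - b)
```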
The control of the p b parameter is performed in the following way. At the beginning of the search, p b is set to p b m a x and it is linearly reduced down to p b m i n :
$$pb_{g+1} = round\left( \frac{pb_{min} - pb_{max}}{NFE_{max}} \cdot NFE \right) + pb_{max},$$
where g is the current generation number, and will be omitted further for simplicity. Additionally, if the number of best individuals to choose from is less than 2, then it is set to 2. The linear decrease of p b during the search leads to increased greediness closer to the end of the search.
The crossover step in L-NTADE is unchanged, i.e., the classical binomial crossover is used to generate u i as in the L-SHADE algorithm. The bound constraint handling method used is the midpoint target, described in the previous subsection.
The selection step, however, is one of the main features of the L-NTADE algorithm. The main idea here is to imitate the behavior of the unbounded population from which the newest and best individuals are chosen by maintaining two populations. The selection step depends on the mutation strategy, in particular, if the target solution was chosen from the top or newest population. The main idea for an update is still the same: if the trial vector is better than the target, it should be saved. However, the new solutions are always saved to the newest population. The index of an individual n c to which the trial vector is copied is iterated from 1 to N c u r after every successful solution, and reset to 1 once it reaches N c u r . The selection step can be described as follows:
$$x_{nc} = \begin{cases} u_i, & \text{if } f(u_i) \le f(x_{r1}^{t|n}), \\ x_{nc}, & \text{if } f(u_i) > f(x_{r1}^{t|n}). \end{cases}$$
Here, t | n means that the target vector can be chosen from either the top or newest population according to the used mutation strategy. The successful trial vectors are copied to the current newest population immediately, and due to the random choice of solutions for mutation, can be used for generating other vectors within the same generation. Although the choice of the p b % best individuals’ indexes is performed only once a generation, we believe that there is not much sense in sorting and finding best solutions after every successful selection. Moreover, this could possibly be a problem only for the 5th mutation variant. All successful solutions are additionally stored in a temporary pool x t e m p , and at the end of the generation x t o p and x t e m p are joined, sorted and only N c u r individuals are saved to x t o p , where N c u r is the current size of both populations. In this way, the top population always contains N c u r individuals from the whole search.
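The selection step and the top-population update described above can be sketched as follows. These are illustrative helper functions, not the authors' implementation:

```python
import numpy as np

def ntade_selection(u, fu, f_target, x_new, f_new, nc, temp_pool):
    """Replacement rule sketched from the description above: a successful
    trial overwrites slot nc of the newest population (the index cycles),
    and is also stored in a temporary pool for the top-population update."""
    if fu <= f_target:
        x_new[nc] = u
        f_new[nc] = fu
        temp_pool.append((fu, np.copy(u)))
        nc = (nc + 1) % len(x_new)          # next slot to overwrite
    return nc

def update_top(x_top, f_top, temp_pool, n_cur):
    """Join the top population with the successful trials, sort, and keep
    only the n_cur best individuals (minimization assumed)."""
    merged = list(zip(f_top, x_top)) + temp_pool
    merged.sort(key=lambda p: p[0])
    kept = merged[:n_cur]
    return [x for _, x in kept], [fv for fv, _ in kept]
```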
The population size control strategy is the same as in L-SHADE, with the only difference being that the newest and top populations are both linearly decreasing their size, and with the same initial and final size. The following equation is true for both populations, with N m a x and N m i n :
$$N_{cur}^{g+1} = round\left( \frac{N_{min} - N_{max}}{NFE_{max}} \cdot NFE \right) + N_{max}.$$
The pseudocode of the L-NTADE algorithm is shown in Algorithm 1.
Algorithm 1 L-NTADE
1:  Input: $D$, $NFE_{max}$, $N_{max}$, goal function $f(x)$
2:  Output: $x_{best}^{top}$, $f(x_{best}^{top})$
3:  Set $N_{cur}^{0} = N_{max}$, $N_{min} = 4$, $H = 5$, $M_{F,r} = 0.3$, $M_{Cr,r} = 1$
4:  Set $pb = 0.3$, $k = 1$, $g = 0$, $nc = 1$, $k_p = 0$, $p_m = 2$
5:  Initialize population $(x_{1}^{new}, \ldots, x_{N_{max}}^{new})$ randomly, calculate $f(x^{new})$
6:  Copy $x^{new}$ to $x^{top}$, $f(x^{new})$ to $f(x^{top})$
7:  while $NFE < NFE_{max}$ do
8:      $S_F = \emptyset$, $S_{Cr} = \emptyset$, $S_{\Delta f} = \emptyset$
9:      Rank $x^{new}$ by $f(x^{new})$
10:     for $i = 1$ to $N_{cur}^{g}$ do
11:         $r1 = randInt(N_{cur}^{g})$
12:         Current memory index $r = randInt[1, H+1]$
13:         Crossover rate $Cr_i = randn(M_{Cr,r}, 0.1)$
14:         $Cr_i = min(1, max(0, Cr_i))$
15:         repeat
16:             $F_i = randc(M_{F,r}, 0.1)$
17:         until $F_i > 0$
18:         $F_i = min(1, F_i)$
19:         repeat
20:             $pbest = randInt(1, N_{cur}^{g} \cdot pb)$
21:             $r2 = randInt(1, N_{cur}^{g})$ or with rank-based selection
22:             $r3 = randInt(1, N_{cur}^{g})$
23:         until indexes $r1$, $r2$, $r3$ and $pbest$ are different
24:         Apply mutation to produce $v_i$ with $F_i$
25:         Apply binomial crossover to produce $u_i$ with $Cr_i$
26:         Apply bound constraint handling method
27:         Calculate $f(u_i)$
28:         if $f(u_i) < f(x_{r1}^{new})$ then
29:             $u_i \rightarrow x^{temp}$
30:             $F \rightarrow S_F$, $Cr \rightarrow S_{Cr}$
31:             $\Delta f = f(x_{r1}^{new}) - f(u_i)$
32:             $\Delta f \rightarrow S_{\Delta f}$
33:             $x_{nc}^{new} = u_i$
34:             $nc = mod(nc + 1, N_{cur}^{g})$
35:         end if
36:     end for
37:     Get $N_{cur}^{g+1}$ with LPSR
38:     Join together $x^{top}$ and $x^{temp}$, sort and copy best $N_{cur}^{g+1}$ to $x^{top}$
39:     if $N_{cur}^{g} > N_{cur}^{g+1}$ then
40:         Remove worst individuals from $x^{new}$
41:     end if
42:     Update $M_{F,k}$, $M_{Cr,k}$
43:     $k = mod(k + 1, H)$
44:     $g = g + 1$
45: end while
46: Return $x_{best}^{top}$, $f(x_{best}^{top})$
The algorithm requires a goal function, problem dimension, total computational resource and initial population size to run, and returns the best solution along with its value, as the first two lines of Algorithm 1 show. After this, the main parameters are set in lines 3 and 4. Line 5 describes the initialization step, where the newest population is filled with random individuals. In line 6, the new population is copied to the top population, together with goal function values. Next, the main loop is started, where at the beginning of each generation in line 8, the sets of successful F, C r and Δ f values are emptied. In line 9, the population of new individuals is sorted according to the fitness values, and the ranks are assigned to the individuals. After this, the loop over individuals is started in line 10. As the mutation strategy requires random indexes, in line 11 the first of them is generated. Next, the current memory index is randomly chosen in line 12, and this index is used to generate C r and F values in lines 13–14 and 15–18, respectively. In lines 19–23, the rest of the indexes for mutation are generated until they are mutually different. In line 21, the r 2 index can be generated randomly or by using ranks calculated in line 9 depending on the mutation strategy used. Lines 24, 25 and 26 implement the main search operators of DE, such as mutation and crossover, as well as a bound-constraint handling method. After this, once the trial vector is generated, its fitness value is calculated in line 27. In lines 28–35, the selection is performed, i.e., if the trial vector is better than the randomly chosen one with index r 1 , then it is saved in the temporary population, and the current F and C r values are saved together with Δ f . Additionally, in line 33, one of the individuals from the new population is replaced by a trial vector, and the index of the individual to be replaced is updated in line 34. 
Line 36 finishes the loop over individuals, and in line 37 the population size is updated according to LPSR. Before shrinking the populations in lines 39–41, the top population and temporary population are joined together and sorted in line 38, so that the best individuals are saved in the top population. At the end of the generation, the memory cells are updated using $S_F$, $S_{Cr}$ and $S_{\Delta f}$ in line 42, the memory cell index to be updated is incremented in line 43, and the generation number is incremented in line 44. Finally, line 45 finishes the main loop over function evaluations, and line 46 returns the result.
The flow chart of the L-NTADE algorithm is shown in Figure 1.

4. Experimental Setup and Results

4.1. Benchmark Functions and Parameters

The main idea of this study is to propose a different algorithmic scheme for DE, so the experiments with L-NTADE are aimed at evaluating the sensitivity to the newest and top populations’ size as well as to the used mutation strategy. The experiments are performed on two benchmark suites, namely the CEC 2017 [35] and 2022 [36] Single Objective Bound Constrained Numerical Optimization problems. These two benchmarks were chosen as they have different settings, in particular, the available number of function evaluations in CEC 2017 is smaller compared to CEC 2022, which makes it possible to evaluate the efficiency of the proposed algorithm in different usage scenarios.
The CEC 2017 benchmark consists of 30 test functions with dimensions 10, 30, 50 and 100, the computational resource is set to 10,000D function evaluations ( 1 × 10 5 , 3 × 10 5 , 5 × 10 5 and 1 × 10 6 correspondingly), and 51 independent runs are made for every dimension and function.
The CEC 2022 benchmark consists of 12 test functions with dimensions 10 and 20, with computational resource set to 2 × 10 5 and 1 × 10 6 evaluations, and 30 independent runs are made for every test function and dimension.
The proposed algorithm was implemented in C++, compiled with GCC, and run on eight AMD Ryzen 3700 PRO and seven AMD Ryzen 1700 machines with 8 cores each under Ubuntu Linux 20.04. The computations were parallelized using OpenMPI 4.0.3, and the Network File System (NFS) was used to store the results. The post-processing of the results, statistical tests and visualizations were performed in Python 3.6.

4.2. Numerical Results

To test the L-NTADE algorithm, the initial population size $N_{max}$, the mutation strategy, the selective pressure parameter $k_p$ and the scaling factor adaptation bias $p_m$ were varied. $N_{max}$ was changed from $15D$ to $25D$ with step $5D$, which corresponds to 150 to 250 individuals for $D = 10$ and 1500 to 2500 individuals for $D = 100$, and four mutation strategies were considered. Further increases of the population size parameter resulted in performance deterioration in most cases. The selective pressure was applied to the $r2$ index in all mutation strategies; the ranking procedure was applied to the top population in r-new-to-ptop/t/t and r-new-to-ptop/t/n, and to the newest population in the two other mutations. The selective pressure was applied only to $r2$, as the preliminary tests have shown that applying it to other indexes does not bring any benefits. Two controlling values were used, $k_p = 0$ and $k_p = 3$, with the former resulting in zero selective pressure (uniform distribution). The biased parameter adaptation was applied only to the scaling factor F, as previous studies have shown that it has little effect on Cr. The tested values are $p_m = 2$, the same as in L-SHADE, and $p_m = 4$, resulting in larger F values.
To compare the efficiency of different variants of L-NTADE, the main instrument used was the Mann–Whitney rank-sum statistical test with normal approximation and tie-breaking, applied to compare a pair of variants. The normal approximation in the Mann–Whitney test means that the resulting statistic is the standard score (Z-score). This simplifies reasoning and allows Z-scores to be used directly to evaluate the level of difference between a pair of algorithms. As the number of experiments for every function and dimension is relatively large, 51 for CEC 2017 and 30 for CEC 2022, the usage of the normal approximation is justified. Therefore, in later tables and figures, the standard score values and the total standard score over all test functions will be used to compare the efficiency of two algorithms, along with the number of wins, ties and/or losses. In cases where a conclusion about the significance of the difference is required, the significance level will be set to 0.01. In Table 1, each cell contains the number of wins/ties/losses and the total standard score summed over all test functions when comparing L-NTADE and NL-SHADE-LBC, which took second place in the CEC 2022 competition.
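The Z-score computation used for the comparisons can be sketched as follows. This is an illustrative implementation of the Mann–Whitney test with normal approximation and midrank tie handling; in practice, a library routine such as scipy.stats.mannwhitneyu would typically be used:

```python
import math

def mann_whitney_z(a, b):
    """Mann-Whitney rank-sum test with normal approximation and midrank
    tie handling, returning the Z-score (standard score). A negative Z
    means sample a tends to have smaller (better, for minimization)
    values. Illustrative sketch."""
    n1, n2 = len(a), len(b)
    n = n1 + n2
    allv = sorted(list(a) + list(b))
    # assign average (mid) ranks to tied groups; collect the tie term
    ranks, tie_term, i = {}, 0.0, 0
    while i < n:
        j = i
        while j < n and allv[j] == allv[i]:
            j += 1
        ranks[allv[i]] = (i + 1 + j) / 2    # average rank of the tied group
        t = j - i
        tie_term += t ** 3 - t
        i = j
    R1 = sum(ranks[v] for v in a)
    U = R1 - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    # variance with tie correction
    sigma = math.sqrt(n1 * n2 / 12 * (n + 1 - tie_term / (n * (n - 1))))
    return (U - mu) / sigma
```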
The comparison in Table 1 shows that the proposed L-NTADE algorithm can be better or worse than NL-SHADE-LBC depending on the parameters used. For example, applying biased parameter adaptation always gives much worse performance, while the selective pressure has a positive effect if the population size and problem dimension are larger. As for the mutation strategies, in the 10D case, r-new-to-ptop/t/n has shown the best results, but other strategies have shown similar efficiency with and without selective pressure. In the 20D case, the best strategy is r-new-to-ptop/n/t combined with selective pressure and increased population size. However, without both modifications, it also performs better than other strategies.
Table 2 and Table 3 contain the results of testing L-NTADE on the CEC 2017 benchmark. The competitor chosen for L-NTADE is the L-SHADE-RSP algorithm, the second-best approach in the CEC 2018 competition, which used the same benchmark. The same combinations of parameters were tested as in Table 1.
The results in Table 2 and Table 3 show that the r-new-to-ptop/n/t strategy combined with selective pressure and biased parameter adaptation has the best performance in most cases. However, in the 10D case, the r-new-to-ptop/t/t strategy performs best with Nmax = 20D, and the difference between variants with smaller and larger population sizes is rather small; the other strategies here give much worse results. Nevertheless, L-NTADE is mostly superior to the L-SHADE-RSP algorithm. In the 30D case, r-new-to-ptop/n/t with Nmax = 20D, selective pressure and pm = 4 has the best performance, beating L-SHADE-RSP on 17 functions out of 30 and losing in only two cases. The second-best strategy here is r-new-to-ptop/n/n, which uses individuals only from the newest population in the second difference. For the other mutation strategies, it can be observed that biased parameter adaptation significantly increases the performance of L-NTADE.
In the 50D case, the best-performing variant is exactly the same, i.e., r-new-to-ptop/n/t with Nmax = 20D, selective pressure and pm = 4. Here, the effect of the population size is minor, unlike in the experiments on the CEC 2022 benchmark. The exponential rank-based selective pressure improves the performance in most cases, but this improvement is rather limited compared to the improvement due to biased parameter adaptation. In the 100D case, the same conclusions apply, although selective pressure has a much larger effect here, especially for the r-new-to-ptop/n/t mutation strategy.
Considering the results above, the following parameters were chosen for the next experiments. For CEC 2022: Nmax = 15D, kp = 0, pm = 2, and the r-new-to-ptop/n/t mutation strategy. For CEC 2017: Nmax = 20D, kp = 3, pm = 4, and the same r-new-to-ptop/n/t mutation strategy. Table 4 shows the comparison of L-NTADE with alternative approaches on the CEC 2022 benchmark, including the top three algorithms (EA4eig [54], NL-SHADE-LBC [55] and NL-SHADE-RSP-MID [56]). The values in the table are the number of wins/ties/losses (total standard score).
As Table 4 shows, L-NTADE is capable of outperforming the best algorithms from the CEC 2022 competition as well as other approaches. The summed standard score gives additional information, allowing different performance levels to be distinguished even when the numbers of wins and losses are similar. Table 5 shows the comparison on the CEC 2017 benchmark, using the same notation.
According to Table 5, the performance of L-NTADE in the 10D case is inferior to the other approaches, while for the other dimensions it is much better. This can be explained by the fact that the best mutation strategy for 10D is not the one used in the experiments in Table 5: a single configuration was kept across all dimensions for versatility. In the 30D case, L-NTADE outperforms all other methods, but in 50D it performs similarly to LSHADE-SPACMA. In 100D, the only algorithm with better performance is again LSHADE-SPACMA, probably due to its hybridization with CMA-ES.
To evaluate the computational efficiency of L-NTADE, the CEC 2022 benchmark was used, for which the time complexity is estimated by measuring the time required to calculate a fixed set of mathematical expressions (T0), the time to evaluate the first test function (T1), and the time to run the algorithm on this test function, averaged over five runs (T2). The resulting value is calculated as (T2 − T1)/T0 [36]. The comparison of L-NTADE with NL-SHADE-RSP and NL-SHADE-LBC is given in Table 6.
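The structure of this measurement can be sketched as follows. All three timed routines below are hypothetical stand-ins (the real T0 loop and test function are specified in the competition report [36], and the real run is the optimizer itself); only the (T2 − T1)/T0 computation is illustrated.

```python
import math
import random
import time

def timed(fn):
    """Wall-clock time of a single call to fn()."""
    t = time.perf_counter()
    fn()
    return time.perf_counter() - t

def t0_ops():
    # Stand-in for the fixed set of elementary mathematical expressions (T0).
    x = 0.55
    for _ in range(100_000):
        x = x + x; x = x / 2.0; x = x * x
        x = math.sqrt(x); x = math.log(x + 1.0); x = math.exp(x)

def evaluate_function(evals=20_000, dim=10):
    # Stand-in for evaluating the first test function `evals` times (T1).
    for _ in range(evals):
        sum((random.random() - 0.5) ** 2 for _ in range(dim))

def run_algorithm():
    # Stand-in for one optimizer run on the same evaluation budget (T2);
    # a real run adds the algorithm's own overhead on top of the evaluations.
    evaluate_function()

T0 = timed(t0_ops)
T1 = timed(evaluate_function)
T2 = sum(timed(run_algorithm) for _ in range(5)) / 5.0  # average of 5 runs
complexity = (T2 - T1) / T0
```

With these stand-ins the overhead term T2 − T1 is near zero; for a real algorithm it captures the per-generation bookkeeping cost normalized by machine speed.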
The comparison in Table 6 demonstrates that the complexity of L-NTADE is comparable with that of similar approaches, and that the use of an additional population does not result in significant additional computational effort.
The presented experimental results have shown that the algorithmic scheme of L-NTADE has certain potential, but for a better understanding of the reasons for its performance, several additional tests were performed. For a deeper dive into the algorithm, histograms of pairwise Euclidean distances between all individuals were built at every generation for both the top and the newest population. These histograms were color-coded and combined into a heatmap together with the average distance. Figure 2 shows these histograms on F5, CEC 2017, the shifted and rotated Rastrigin's function, 10D; Figure 3 shows F17, hybrid function 7 (Katsuura, Ackley's, expanded Griewank's plus Rosenbrock's, modified Schwefel's, Rastrigin's); and Figure 4 shows F29, composition function 9 (hybrid functions 5, 8 and 9) [35].
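The construction of such a heatmap can be sketched as follows; the bin count and the common distance scale are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def pairwise(pop):
    """Condensed vector of pairwise Euclidean distances in one population."""
    diff = pop[:, None, :] - pop[None, :, :]
    dmat = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(pop), k=1)  # upper triangle: each pair once
    return dmat[iu]

def distance_histograms(populations, bins=50, d_max=None):
    """Stack per-generation histograms of pairwise distances into a heatmap.

    populations: list of (N, D) population snapshots, one per generation.
    Returns (H, avg): H[g] is the distance histogram at generation g
    (one row of the heatmap) and avg[g] is the mean pairwise distance.
    """
    dists = [pairwise(p) for p in populations]
    if d_max is None:
        d_max = max(d.max() for d in dists)  # common scale across all rows
    H = np.zeros((len(populations), bins))
    avg = np.zeros(len(populations))
    for g, d in enumerate(dists):
        H[g], _ = np.histogram(d, bins=bins, range=(0.0, d_max))
        avg[g] = d.mean()
    return H, avg
```

Plotting H with a color map then gives the pictures discussed below: a converged population concentrates its mass in a few bins (horizontal lines), while a continuously updated one spreads it (a noise-like image).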
The distance histograms in Figure 2 demonstrate a significant difference between the top and newest populations. In particular, after the initial convergence process, the top population remains in a relatively steady state, as can be seen from the many horizontal lines, each corresponding to a point near a local minimum (which are intrinsic to Rastrigin's function). The newest population, on the other hand, is continuously updated, resulting in a noise-like image. After the first third of the search, when the populations are still relatively similar, their roles split: one keeps the information about potentially interesting solutions, while the other actively searches for better ones. At the end of the search, the top population settles on the four best solutions, while the newest population continues its attempts to improve.
Figure 3 shows a similar situation, where both populations tend to converge in the first 500 generations and then split into many groups corresponding to local optima. At around generation 900, one of the areas of local search dominates the others, which are deleted, and another phase of active convergence begins. At a certain point after generation 1300, the top population becomes stuck, which is again visible as horizontal lines, but the newest population continues the search, giving a similar noise-like picture in the second half of the search process.
In Figure 4, similar trends can be observed. However, there are now stages where even the newest population may get stuck, for example around generation 1700. Nevertheless, the search process continues, generating different solutions in the newest population and transferring them into the top population.

5. Discussion

The experimental results in the previous section have demonstrated that the concepts proposed in the UDE algorithm [34] can be efficiently utilized, for example, in the way it is done in L-NTADE. The proposed approach uses a specific update rule for the newest population, which constantly replaces solutions with more efficient ones; unlike classical DE selection, however, there is a chance that a better solution will be replaced by a worse one. At the same time, all solutions with high fitness are always stored in the top population. The ongoing update of the newest population is probably one of the reasons for the different behavior of L-NTADE compared to NL-SHADE-LBC, L-SHADE-RSP and other methods, which was observed in the distance histograms. Additionally, utilizing two populations instead of one allows L-NTADE to solve some of the problems more efficiently, especially the relatively complex hybrid and composition functions.
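A minimal sketch of the described update step is given below. Which slot a successful trial overwrites and how the replacement pointer moves are illustrative assumptions (the exact rule is defined in the algorithm description earlier in the paper); the sketch only captures the two properties discussed here: the newest population is overwritten in a rolling fashion, so an improved slot can later receive a worse (but still successful) trial, while the top population only ever accepts solutions better than its current worst member.

```python
import numpy as np

def update_populations(newest, newest_f, top, top_f, trial, trial_f, nc):
    """One update step of the dual-population scheme (illustrative sketch).

    A successful trial overwrites the slot at the running counter `nc`
    of the newest population and the counter advances cyclically, so a
    good solution stored there earlier may later be replaced by a worse
    one.  The top population keeps the best-so-far solutions: the trial
    only replaces its current worst member.  Returns the updated counter.
    """
    if trial_f <= newest_f[nc]:  # success test (comparison target assumed)
        newest[nc], newest_f[nc] = trial, trial_f
        nc = (nc + 1) % len(newest)          # rolling replacement pointer
        worst = int(np.argmax(top_f))        # minimization: worst = largest f
        if trial_f < top_f[worst]:
            top[worst], top_f[worst] = trial, trial_f
    return nc
```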
As for the mutation strategies tested, r-new-to-ptop/n/t, r-new-to-ptop/t/n and r-new-to-ptop/n/n are of interest. Combining top and newest vectors in the second difference appears to have a positive effect on the final efficiency. Additionally, the r-new-to-ptop/n/t strategy, which performed best overall, uses a directed second difference. Adding the scaled first difference to the base vector yields a point on the line connecting the randomly chosen newest vector and one of the pb% top vectors, with the position of this point controlled by F. The second difference does a similar thing: it makes a step from a newest vector toward one of the top vectors, i.e., towards better solutions. In the experiments with rank-based selective pressure in L-SHADE-RSP, NL-SHADE-RSP and NL-SHADE-LBC, a similar structure of mutation strategy was used, i.e., in the second difference the step direction is mainly towards better solutions. This is also supported by the fact that r-new-to-ptop/n/t performed better than most other methods in the 100D case of the CEC 2017 functions, and a study on selective pressure effects [42] showed that larger selective pressure has a positive effect in high-dimensional cases. At the same time, going in the opposite direction, from better solutions to randomly chosen ones, as in r-new-to-ptop/t/n, does not bring any benefits. Of course, other mutation strategies can be proposed for L-NTADE, but testing all possible variants is beyond the scope of the current study.
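Based on this description, the r-new-to-ptop/n/t mutation can be sketched as follows. The index conventions (how r1, r2, r3 are drawn and whether they must be distinct) are assumptions for illustration and may differ from the exact definition given earlier in the paper; only the structure described in the text is reproduced: a newest base vector, a first difference toward one of the pb% best of the top population, and a second difference stepping from a newest vector toward a top vector.

```python
import numpy as np

def r_new_to_ptop_n_t(newest, top, top_f, F, pb, rng):
    """Illustrative sketch of the r-new-to-ptop/n/t mutation (minimization).

    newest: (N1, D) newest population; top: (N2, D) top population with
    fitness values top_f; F is the scaling factor; pb is the fraction of
    the top population considered "pbest".  Distinctness of the random
    indices is not enforced here (an assumption of this sketch)."""
    n_new, n_top = len(newest), len(top)
    pbest_pool = np.argsort(top_f)[:max(1, int(pb * n_top))]
    pbest = rng.choice(pbest_pool)          # one of the pb% top vectors
    r1, r2 = rng.integers(n_new, size=2)    # indices in the newest population
    r3 = rng.integers(n_top)                # index in the top population
    base = newest[r1]
    # first difference: toward a pbest top vector; second: newest -> top
    return base + F * (top[pbest] - base) + F * (top[r3] - newest[r2])
```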
One of the disadvantages of L-NTADE is that it can be very sensitive to the population size, as the experiments on CEC 2022 have shown; for CEC 2017, however, this was not the case. Moreover, for CEC 2017, selective pressure and biased scaling factor adaptation worked well, but failed for CEC 2022. Considering that these benchmarks mainly differ in the amount of available computational resource, a conclusion can be drawn: if the computational budget is relatively small, around 10,000·D function evaluations, then selective pressure and biased parameter adaptation should be used; otherwise, the population size should be chosen carefully, although selective pressure may help to achieve better results with a large population size. The tested version of L-NTADE is a baseline, and it can be further improved by introducing modifications proposed for other DE-based approaches. For example:
1. Adding an archive set and a specifically developed update strategy for it;
2. Adding crossover rate sorting;
3. Introducing a control strategy for the pb% parameter;
4. Developing new parameter adaptation strategies suitable for L-NTADE;
5. Developing adaptive mechanisms for switching between mutation strategies during the algorithm run;
6. Creating hybrids of L-NTADE with other approaches.
The mentioned possible ways of improving L-NTADE are subjects for further studies.

6. Conclusions

This study proposed a new algorithmic scheme for differential evolution, which uses two populations and new mutation strategies. The experiments performed have shown that the developed L-NTADE is a highly competitive approach, able to outperform some state-of-the-art algorithms on the popular CEC 2017 and CEC 2022 benchmarks, especially on complex multi-modal functions. The proposed algorithm is relatively easy to implement, as it is a non-hybrid method, and it can be further improved by adding modifications proposed for other DE methods. Unlike most DE versions, L-NTADE does not use the greedy selection strategy; instead, it maintains two populations, one keeping the best solutions and the other continuously updated. The results and the analysis of the algorithm's behavior have demonstrated the advantages of such a scheme, i.e., the algorithm keeps the search process running all the time. One of the drawbacks of L-NTADE is its sensitivity to the population size parameter; however, all known DE algorithms share this problem. Further studies of the proposed algorithmic scheme may include experimenting with replacement strategies in the population of new individuals and setting different sizes for the new and top populations with specific control strategies.

Author Contributions

Conceptualization, V.S. and S.A.; methodology, V.S., S.A. and E.S.; software, V.S. and E.S.; validation, V.S., S.A. and E.S.; formal analysis, S.A.; investigation, V.S.; resources, E.S. and V.S.; data curation, E.S.; writing—original draft preparation, V.S. and S.A.; writing—review and editing, V.S.; visualization, S.A. and V.S.; supervision, E.S.; project administration, E.S.; funding acquisition, S.A. and V.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Ministry of Science and Higher Education of the Russian Federation, Grant No. 075-15-2022-1121.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

CI: Computational Intelligence
NN: Neural Networks
FL: Fuzzy Logic
SI: Swarm Intelligence
EA: Evolutionary Algorithms
GA: Genetic Algorithms
ES: Evolutionary Strategies
PSO: Particle Swarm Optimization
DE: Differential Evolution
CEC: Congress on Evolutionary Computation
SHADE: Success-History Adaptive Differential Evolution
LPSR: Linear Population Size Reduction
NLPSR: Non-Linear Population Size Reduction
LBC: Linear Bias Change
UDE: Unbounded Differential Evolution
L-NTADE: Linear population size reduction Newest and Top Adaptive Differential Evolution

References

1. Sloss, A.N.; Gustafson, S. 2019 Evolutionary Algorithms Review. In Proceedings of the Genetic Programming Theory and Practice, East Lansing, MI, USA, 16–19 May 2019.
2. Sinha, A.; Malo, P.; Deb, K. A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications. IEEE Trans. Evol. Comput. 2018, 22, 276–295.
3. Alkayem, N.F.; Cao, M.; Shen, L.; Fu, R.; Sumarac, D. The combined social engineering particle swarm optimization for real-world engineering problems: A case study of model-based structural health monitoring. Appl. Soft Comput. 2022, 123, 108919.
4. Alkayem, N.F.; Shen, L.; Al-hababi, T.; Qian, X.; Cao, M. Inverse Analysis of Structural Damage Based on the Modal Kinetic and Strain Energies with the Novel Oppositional Unified Particle Swarm Gradient-Based Optimizer. Appl. Sci. 2022, 12, 11689.
5. Price, K.; Storn, R.; Lampinen, J. Differential Evolution: A Practical Approach to Global Optimization; Springer: Berlin/Heidelberg, Germany, 2005.
6. Ali, M.; Awad, N.H.; Suganthan, P.; Shatnawi, A.; Reynolds, R. An improved class of real-coded Genetic Algorithms for numerical optimization. Neurocomputing 2018, 275, 155–166.
7. Maheswaranathan, N.; Metz, L.; Tucker, G.; Sohl-Dickstein, J. Guided Evolutionary Strategies: Escaping the Curse of Dimensionality in Random Search. 2018. Available online: https://openreview.net/forum?id=B1xFxh0cKX (accessed on 5 November 2022).
8. Bonyadi, M.; Michalewicz, Z. Particle Swarm Optimization for Single Objective Continuous Space Problems: A Review. Evol. Comput. 2017, 25, 1–54.
9. Beyer, H.; Sendhoff, B. Simplify your covariance matrix adaptation evolution strategy. IEEE Trans. Evol. Comput. 2017, 21, 746–759.
10. Kar, A. Bio inspired computing—A review of algorithms and scope of applications. Expert Syst. Appl. 2016, 59, 20–32.
11. Skvorc, U.; Eftimov, T.; Korosec, P. CEC Real-Parameter Optimization Competitions: Progress from 2013 to 2018. In Proceedings of the 2019 IEEE Congress on Evolutionary Computation (CEC), Wellington, New Zealand, 10–13 June 2019; pp. 3126–3133.
12. Das, S.; Suganthan, P. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2011, 15, 4–31.
13. Das, S.; Mullick, S.; Suganthan, P. Recent advances in differential evolution—An updated survey. Swarm Evol. Comput. 2016, 27, 1–30.
14. Qin, A.; Suganthan, P. Self-adaptive differential evolution algorithm for numerical optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, Edinburgh, UK, 2–5 September 2005; pp. 1785–1791.
15. dos Santos Coelho, L.; Ayala, H.V.H.; Mariani, V.C. A self-adaptive chaotic differential evolution algorithm using gamma distribution for unconstrained global optimization. Appl. Math. Comput. 2014, 234, 452–459.
16. Mallipeddi, R.; Suganthan, P.N.; Pan, Q.; Tasgetiren, M.F. Differential evolution algorithm with ensemble of parameters and mutation strategies. Appl. Soft Comput. 2011, 11, 1679–1696.
17. Gong, W.; Fialho, Á.; Cai, Z.; Li, H. Adaptive strategy selection in differential evolution for numerical optimization: An empirical study. Inf. Sci. 2011, 181, 5364–5386.
18. Brest, J.; Greiner, S.; Boškovic, B.; Mernik, M.; Žumer, V. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans. Evol. Comput. 2006, 10, 646–657.
19. Zhang, J.; Sanderson, A.C. JADE: Self-adaptive differential evolution with fast and reliable convergence performance. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 2251–2258.
20. Tanabe, R.; Fukunaga, A. Success-history based parameter adaptation for differential evolution. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; IEEE Press: Piscataway, NJ, USA, 2013; pp. 71–78.
21. Tanabe, R.; Fukunaga, A. Improving the search performance of SHADE using linear population size reduction. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC, Beijing, China, 6–11 July 2014; pp. 1658–1665.
22. Piotrowski, A.P.; Napiorkowski, J.J. Step-by-step improvement of JADE and SHADE-based algorithms: Success or failure? Swarm Evol. Comput. 2018, 43, 88–108.
23. Sun, G.; Xu, G.; Jiang, N. A simple differential evolution with time-varying strategy for continuous optimization. Soft Comput. 2020, 24, 2727–2747.
24. Sun, G.; Yang, B.; Yang, Z.; Xu, G. An adaptive differential evolution with combined strategy for global numerical optimization. Soft Comput. 2020, 24, 6277–6296.
25. Huynh, T.N.; Do, D.T.T.; Lee, J. Q-Learning-based parameter control in differential evolution for structural optimization. Appl. Soft Comput. 2021, 107, 107464.
26. Song, Y.; Wu, D.; Deng, W.; Zhi Gao, X.; Li, T.; Zhang, B.; Li, Y. MPPCEDE: Multi-population parallel co-evolutionary differential evolution for parameter optimization. Energy Convers. Manag. 2021, 228, 113661.
27. Tan, Z.; Tang, Y.; Li, K.; Huang, H.; Luo, S. Differential evolution with hybrid parameters and mutation strategies based on reinforcement learning. Swarm Evol. Comput. 2022, 75, 101194.
28. Stanovov, V.; Akhmedova, S.; Semenkin, E. The automatic design of parameter adaptation techniques for differential evolution with genetic programming. Knowl. Based Syst. 2022, 239, 108070.
29. Stanovov, V.; Akhmedova, S.; Semenkin, E. Neuroevolution for parameter adaptation in differential evolution. Algorithms 2022, 15, 122.
30. Meng, Z.; Pan, J.S. HARD-DE: Hierarchical ARchive Based Mutation Strategy With Depth Information of Evolution for the Enhancement of Differential Evolution on Numerical Optimization. IEEE Access 2019, 7, 12832–12854.
31. Brest, J.; Maucec, M.S.; Boškovic, B. Self-adaptive Differential Evolution Algorithm with Population Size Reduction for Single Objective Bound-Constrained Optimization: Algorithm j21. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021; pp. 817–824.
32. Mohamed, A.; Hadi, A.A.; Mohamed, A.K.; Awad, N.H. Evaluating the Performance of Adaptive GainingSharing Knowledge Based Algorithm on CEC 2020 Benchmark Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–8.
33. Zhu, Z.; Chen, L.; Yuan, C.; Xia, C. Global replacement-based differential evolution with neighbor-based memory for dynamic optimization. Appl. Intell. 2018, 48, 3280–3294.
34. Kitamura, T.; Fukunaga, A. Differential Evolution with an Unbounded Population. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022.
35. Awad, N.; Ali, M.; Liang, J.; Qu, B.; Suganthan, P. Problem Definitions and Evaluation Criteria for the CEC 2017 Special Session and Competition on Single Objective Bound Constrained Real-Parameter Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2016.
36. Kumar, A.; Price, K.; Mohamed, A.; Hadi, A.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2022 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2021.
37. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
38. Kitamura, T.; Fukunaga, A. Duplicate Individuals in Differential Evolution. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022.
39. Kumar, A.; Biswas, P.P.; Suganthan, P.N. Differential evolution with orthogonal array-based initialization and a novel selection strategy. Swarm Evol. Comput. 2022, 68, 101010.
40. Al-Dabbagh, R.D.; Neri, F.; Idris, N.; Baba, M.S.B. Algorithmic design issues in adaptive differential evolution schemes: Review and taxonomy. Swarm Evol. Comput. 2018, 43, 284–311.
41. Biedrzycki, R.; Arabas, J.; Jagodziński, D. Bound constraints handling in Differential Evolution: An experimental study. Swarm Evol. Comput. 2019, 50, 100453.
42. Stanovov, V.; Akhmedova, S.; Semenkin, E. Selective Pressure Strategy in differential evolution: Exploitation improvement in solving global optimization problems. Swarm Evol. Comput. 2019, 50, 100463.
43. Stanovov, V.; Akhmedova, S.; Semenkin, E. Biased Parameter Adaptation in Differential Evolution. Inf. Sci. 2021, 566, 215–238.
44. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans. Evol. Comput. 2009, 13, 945–958.
45. Stanovov, V.; Akhmedova, S.; Semenkin, E. Archive update strategy influences differential evolution performance. Adv. Swarm Intell. 2020, 12145, 397–404.
46. Bullen, P. Handbook of Means and Their Inequalities; Springer: Dordrecht, The Netherlands, 2003.
47. Biswas, P.P.; Suganthan, P.N. Large Initial Population and Neighborhood Search incorporated in LSHADE to solve CEC2020 Benchmark Problems. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020; pp. 1–7.
48. Brest, J.; Maučec, M.; Boškovic, B. Single objective real-parameter optimization algorithm jSO. In Proceedings of the IEEE Congress on Evolutionary Computation, San Sebastian, Spain, 5–8 June 2017; IEEE Press: Piscataway, NJ, USA, 2017; pp. 1311–1318.
49. Stanovov, V.; Akhmedova, S.; Semenkin, E. LSHADE Algorithm with Rank-Based Selective Pressure Strategy for Solving CEC 2017 Benchmark Problems. In Proceedings of the 2018 IEEE Congress on Evolutionary Computation (CEC), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8.
50. Mohamed, A.; Hadi, A.A.; Fattouh, A.; Jambi, K. LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia-San Sebastián, Spain, 5–8 June 2017; pp. 145–152.
51. Stanovov, V.; Akhmedova, S.; Semenkin, E. NL-SHADE-RSP Algorithm with Adaptive Archive and Selective Pressure for CEC 2021 Numerical Optimization. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021; pp. 809–816.
52. Cuong, L.V.; Bao, N.N.; Binh, H.T.T. Technical Report: A Multi-Start Local Search Algorithm with L-SHADE for Single Objective Bound Constrained Optimization; Technical Report; SoICT, Hanoi University of Science and Technology: Hanoi, Vietnam, 2021.
53. Viktorin, A.; Senkerik, R.; Pluhacek, M.; Kadavy, T.; Zamuda, A. Distance based parameter adaptation for Success-History based Differential Evolution. Swarm Evol. Comput. 2019, 50, 100462.
54. Bujok, P.; Kolenovsky, P. Eigen Crossover in Cooperative Model of Evolutionary Algorithms Applied to CEC 2022 Single Objective Numerical Optimisation. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022.
55. Stanovov, V.; Akhmedova, S.; Semenkin, E. NL-SHADE-LBC algorithm with linear parameter adaptation bias change for CEC 2022 Numerical Optimization. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022.
56. Biedrzycki, R.; Arabas, J.; Warchulski, E. A Version of NL-SHADE-RSP Algorithm with Midpoint for CEC 2022 Single Objective Bound Constrained Problems. In Proceedings of the 2022 IEEE Congress on Evolutionary Computation (CEC), Padua, Italy, 18–23 July 2022.
57. Mohamed, A.W.; Hadi, A.A.; Agrawal, P.; Sallam, K.M.; Mohamed, A.K. Gaining-Sharing Knowledge Based Algorithm with Adaptive Parameters Hybrid with IMODE Algorithm for Solving CEC 2021 Benchmark Problems. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021; pp. 841–848.
58. Biswas, S.; Saha, D.; De, S.; Cobb, A.D.; Das, S.; Jalaian, B. Improving Differential Evolution through Bayesian Hyperparameter Optimization. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Krakow, Poland, 28 June–1 July 2021; pp. 832–840.
59. Kumar, A.; Misra, R.K.; Singh, D. Improving the local search capability of Effective Butterfly Optimizer using Covariance Matrix Adapted Retreat Phase. In Proceedings of the 2017 IEEE Congress on Evolutionary Computation (CEC), Donostia-San Sebastián, Spain, 5–8 June 2017; pp. 1835–1842.
Figure 1. Flow chart of the L-NTADE algorithm.
Figure 2. Heatmap of pairwise distance histograms on every generation, F5, CEC 2017, 10D.
Figure 3. Heatmap of pairwise distance histograms on every generation, F17, CEC 2017, 10D.
Figure 4. Heatmap of pairwise distance histograms on every generation, F29, CEC 2017, 10D.
Table 1. L-NTADE vs. NL-SHADE-LBC, CEC 2022, Mann–Whitney tests and total standard score. Each cell shows wins/ties/losses (total standard score); kp is the selective pressure parameter and pm is the scaling factor adaptation bias.

D = 10

| Nmax | Mutation strategy | kp = 0, pm = 2 | kp = 0, pm = 4 | kp = 3, pm = 2 | kp = 3, pm = 4 |
|---|---|---|---|---|---|
| 15D | r-new-to-ptop/t/t | 6/3/3 (19.8) | 0/5/7 (−45.2) | 6/2/4 (20.2) | 0/6/6 (−44.5) |
| 15D | r-new-to-ptop/t/n | 6/4/2 (21.9) | 1/4/7 (−40.2) | 5/4/3 (19.9) | 1/4/7 (−42.5) |
| 15D | r-new-to-ptop/n/t | 6/2/4 (18.4) | 1/2/9 (−42.6) | 6/3/3 (17.4) | 1/3/8 (−41.1) |
| 15D | r-new-to-ptop/n/n | 6/3/3 (19.4) | 1/3/8 (−40.3) | 6/3/3 (20.4) | 1/4/7 (−42.2) |
| 20D | r-new-to-ptop/t/t | 4/5/3 (3.9) | 0/4/8 (−48.7) | 4/5/3 (3.7) | 0/6/6 (−40.7) |
| 20D | r-new-to-ptop/t/n | 3/6/3 (2.7) | 0/5/7 (−44.8) | 3/5/4 (3.8) | 0/4/8 (−44.9) |
| 20D | r-new-to-ptop/n/t | 3/3/6 (−20.3) | 0/3/9 (−48.0) | 5/4/3 (14.8) | 1/4/7 (−40.2) |
| 20D | r-new-to-ptop/n/n | 3/4/5 (−13.4) | 0/5/7 (−43.3) | 5/5/2 (15.9) | 0/5/7 (−39.1) |
| 25D | r-new-to-ptop/t/t | 0/4/8 (−41.7) | 0/4/8 (−47.9) | 0/4/8 (−39.5) | 0/5/7 (−44.0) |
| 25D | r-new-to-ptop/t/n | 0/6/6 (−38.1) | 0/5/7 (−44.3) | 0/5/7 (−40.5) | 0/4/8 (−49.3) |
| 25D | r-new-to-ptop/n/t | 0/3/9 (−51.0) | 0/4/8 (−52.3) | 2/2/8 (−28.3) | 0/3/9 (−50.4) |
| 25D | r-new-to-ptop/n/n | 0/6/6 (−42.3) | 0/4/8 (−50.2) | 2/6/4 (−25.4) | 0/4/8 (−46.3) |

D = 20

| Nmax | Mutation strategy | kp = 0, pm = 2 | kp = 0, pm = 4 | kp = 3, pm = 2 | kp = 3, pm = 4 |
|---|---|---|---|---|---|
| 15D | r-new-to-ptop/t/t | 5/4/3 (8.8) | 4/4/4 (−4.8) | 5/5/2 (13.1) | 4/4/4 (−2.9) |
| 15D | r-new-to-ptop/t/n | 5/5/2 (18.8) | 2/5/5 (−17.9) | 6/3/3 (18.0) | 3/4/5 (−17.1) |
| 15D | r-new-to-ptop/n/t | 6/5/1 (25.6) | 3/4/5 (−18.0) | 5/6/1 (21.8) | 4/4/4 (−6.2) |
| 15D | r-new-to-ptop/n/n | 5/5/2 (19.5) | 4/3/5 (−10.1) | 6/3/3 (20.8) | 3/4/5 (−8.2) |
| 20D | r-new-to-ptop/t/t | 5/4/3 (15.0) | 3/4/5 (−16.3) | 5/4/3 (10.4) | 3/4/5 (−14.5) |
| 20D | r-new-to-ptop/t/n | 6/3/3 (21.0) | 3/4/5 (−15.8) | 6/3/3 (23.0) | 2/5/5 (−18.1) |
| 20D | r-new-to-ptop/n/t | 6/4/2 (25.0) | 3/4/5 (−13.0) | 5/5/2 (26.7) | 3/4/5 (−13.4) |
| 20D | r-new-to-ptop/n/n | 7/2/3 (24.4) | 4/3/5 (−10.9) | 6/3/3 (25.5) | 4/3/5 (−7.5) |
| 25D | r-new-to-ptop/t/t | 5/4/3 (10.1) | 3/4/5 (−14.1) | 5/4/3 (7.0) | 3/4/5 (−14.6) |
| 25D | r-new-to-ptop/t/n | 5/4/3 (21.3) | 3/4/5 (−14.3) | 7/2/3 (24.7) | 3/4/5 (−16.0) |
| 25D | r-new-to-ptop/n/t | 4/5/3 (3.9) | 3/4/5 (−11.9) | 6/4/2 (27.5) | 3/5/4 (−12.1) |
| 25D | r-new-to-ptop/n/n | 6/4/2 (23.1) | 4/3/5 (−7.5) | 5/5/2 (25.4) | 4/3/5 (−5.4) |
Table 2. L-NTADE vs. L-SHADE-RSP, CEC 2017, 10D and 30D, Mann–Whitney tests and total standard score. Each cell shows wins/ties/losses (total standard score); kp is the selective pressure parameter and pm is the scaling factor adaptation bias.

D = 10

| Nmax | Mutation strategy | kp = 0, pm = 2 | kp = 0, pm = 4 | kp = 3, pm = 2 | kp = 3, pm = 4 |
|---|---|---|---|---|---|
| 15D | r-new-to-ptop/t/t | 8/16/6 (8.9) | 9/16/5 (20.5) | 7/18/5 (5.9) | 9/15/6 (16.9) |
| 15D | r-new-to-ptop/t/n | 7/16/7 (−6.2) | 8/13/9 (−5.0) | 6/17/7 (−7.4) | 6/15/9 (−9.7) |
| 15D | r-new-to-ptop/n/t | 5/19/6 (−2.2) | 7/15/8 (1.5) | 4/18/8 (−20.2) | 5/14/11 (−24.0) |
| 15D | r-new-to-ptop/n/n | 7/18/5 (0.7) | 6/19/5 (8.4) | 7/18/5 (−0.9) | 7/16/7 (0.2) |
| 20D | r-new-to-ptop/t/t | 7/18/5 (18.4) | 9/16/5 (27.2) | 9/16/5 (20.7) | 10/14/6 (28.9) |
| 20D | r-new-to-ptop/t/n | 7/17/6 (0.4) | 8/13/9 (4.6) | 7/16/7 (−0.3) | 8/15/7 (9.2) |
| 20D | r-new-to-ptop/n/t | 7/17/6 (12.2) | 7/15/8 (5.3) | 8/14/8 (−7.3) | 8/12/10 (−12.8) |
| 20D | r-new-to-ptop/n/n | 7/19/4 (2.6) | 7/18/5 (16.3) | 5/20/5 (5.9) | 7/16/7 (3.7) |
| 25D | r-new-to-ptop/t/t | 8/17/5 (20.5) | 9/16/5 (22.1) | 9/16/5 (24.2) | 9/17/4 (25.1) |
| 25D | r-new-to-ptop/t/n | 8/17/5 (14.3) | 7/16/7 (8.7) | 7/19/4 (14.4) | 7/14/9 (4.7) |
| 25D | r-new-to-ptop/n/t | 8/18/4 (17.7) | 8/15/7 (11.1) | 7/17/6 (4.6) | 8/13/9 (−11.2) |
| 25D | r-new-to-ptop/n/n | 8/20/2 (13.4) | 6/19/5 (12.4) | 8/14/8 (6.3) | 6/20/4 (7.1) |

D = 30

| Nmax | Mutation strategy | kp = 0, pm = 2 | kp = 0, pm = 4 | kp = 3, pm = 2 | kp = 3, pm = 4 |
|---|---|---|---|---|---|
| 15D | r-new-to-ptop/t/t | 15/5/10 (45.1) | 17/9/4 (118.8) | 15/5/10 (43.5) | 17/10/3 (117.0) |
| 15D | r-new-to-ptop/t/n | 14/7/9 (33.5) | 17/10/3 (108.2) | 15/7/8 (33.7) | 17/10/3 (109.8) |
| 15D | r-new-to-ptop/n/t | 17/6/7 (84.4) | 18/10/2 (120.8) | 18/7/5 (103.4) | 18/10/2 (121.2) |
| 15D | r-new-to-ptop/n/n | 15/8/7 (57.3) | 19/8/3 (117.7) | 16/8/6 (76.7) | 19/8/3 (124.0) |
| 20D | r-new-to-ptop/t/t | 15/5/10 (47.5) | 17/10/3 (118.0) | 15/6/9 (45.8) | 17/10/3 (116.5) |
| 20D | r-new-to-ptop/t/n | 14/8/8 (39.5) | 17/10/3 (104.9) | 15/6/9 (44.0) | 17/10/3 (102.2) |
| 20D | r-new-to-ptop/n/t | 17/6/7 (85.0) | 18/9/3 (119.9) | 18/8/4 (112.6) | 17/11/2 (124.7) |
| 20D | r-new-to-ptop/n/n | 16/7/7 (69.1) | 19/8/3 (119.4) | 16/8/6 (87.2) | 19/8/3 (123.2) |
| 25D | r-new-to-ptop/t/t | 15/4/11 (49.0) | 16/11/3 (106.3) | 15/6/9 (52.8) | 17/10/3 (116.7) |
| 25D | r-new-to-ptop/t/n | 15/6/9 (43.7) | 16/11/3 (94.0) | 16/5/9 (43.1) | 17/10/3 (102.1) |
| 25D | r-new-to-ptop/n/t | 18/5/7 (91.4) | 16/11/3 (111.1) | 18/8/4 (110.6) | 17/11/2 (117.0) |
| 25D | r-new-to-ptop/n/n | 17/6/7 (74.7) | 17/10/3 (109.2) | 17/8/5 (93.8) | 17/10/3 (114.7) |
Table 3. L-NTADE vs. L-SHADE-RSP, CEC 2017, 50 D and 100 D , Mann–Whitney tests and total standard score.
Table 3. L-NTADE vs. L-SHADE-RSP, CEC 2017, 50 D and 100 D , Mann–Whitney tests and total standard score.
D = 50

| N_max | Mutation | k_p = 0, p_m = 2 | k_p = 0, p_m = 4 | k_p = 3, p_m = 2 | k_p = 3, p_m = 4 |
|---|---|---|---|---|---|
| 15D | r-new-to-ptop/t/t | 13/3/14 (−2.0) | 17/8/5 (94.4) | 13/2/15 (−2.2) | 16/8/6 (94.1) |
| | r-new-to-ptop/t/n | 12/1/17 (−25.9) | 16/6/8 (68.4) | 13/0/17 (−26.1) | 16/6/8 (67.2) |
| | r-new-to-ptop/n/t | 13/6/11 (18.6) | 19/7/4 (119.2) | 15/5/10 (48.0) | 21/7/2 (140.3) |
| | r-new-to-ptop/n/n | 13/1/16 (−12.2) | 17/7/6 (82.7) | 12/4/14 (−0.5) | 17/7/6 (98.2) |
| 20D | r-new-to-ptop/t/t | 13/4/13 (3.4) | 18/8/4 (103.1) | 13/4/13 (3.6) | 18/8/4 (98.5) |
| | r-new-to-ptop/t/n | 13/2/15 (−17.5) | 16/6/8 (69.4) | 13/2/15 (−16.9) | 16/7/7 (71.1) |
| | r-new-to-ptop/n/t | 13/7/10 (33.5) | 19/7/4 (122.3) | 15/7/8 (54.2) | 20/8/2 (141.8) |
| | r-new-to-ptop/n/n | 13/3/14 (−8.5) | 17/7/6 (87.4) | 12/5/13 (4.9) | 18/7/5 (105.9) |
| 25D | r-new-to-ptop/t/t | 13/3/14 (1.8) | 17/10/3 (96.2) | 13/4/13 (7.3) | 16/10/4 (94.9) |
| | r-new-to-ptop/t/n | 13/3/14 (−10.5) | 16/7/7 (72.2) | 13/3/14 (−13.4) | 15/7/8 (71.2) |
| | r-new-to-ptop/n/t | 13/6/11 (33.4) | 19/7/4 (112.6) | 16/5/9 (62.8) | 21/7/2 (141.5) |
| | r-new-to-ptop/n/n | 13/2/15 (−6.3) | 18/5/7 (87.4) | 12/7/11 (6.0) | 18/6/6 (106.5) |
D = 100

| N_max | Mutation | k_p = 0, p_m = 2 | k_p = 0, p_m = 4 | k_p = 3, p_m = 2 | k_p = 3, p_m = 4 |
|---|---|---|---|---|---|
| 15D | r-new-to-ptop/t/t | 11/0/19 (−65.8) | 14/7/9 (50.7) | 11/0/19 (−66.2) | 14/7/9 (46.1) |
| | r-new-to-ptop/t/n | 11/0/19 (−68.7) | 12/2/16 (−2.5) | 10/1/19 (−73.1) | 12/3/15 (1.3) |
| | r-new-to-ptop/n/t | 11/1/18 (−49.8) | 16/6/8 (77.2) | 11/3/16 (−37.3) | 19/5/6 (107.3) |
| | r-new-to-ptop/n/n | 10/1/19 (−67.2) | 14/3/13 (26.0) | 10/1/19 (−63.5) | 15/5/10 (54.6) |
| 20D | r-new-to-ptop/t/t | 11/0/19 (−64.8) | 15/6/9 (62.4) | 11/0/19 (−63.7) | 16/5/9 (60.8) |
| | r-new-to-ptop/t/n | 10/1/19 (−69.1) | 12/5/13 (11.4) | 10/1/19 (−69.9) | 13/4/13 (13.4) |
| | r-new-to-ptop/n/t | 12/1/17 (−40.3) | 19/3/8 (86.6) | 12/1/17 (−31.6) | 20/4/6 (120.5) |
| | r-new-to-ptop/n/n | 10/1/19 (−68.2) | 14/6/10 (32.8) | 10/1/19 (−63.1) | 15/7/8 (65.1) |
| 25D | r-new-to-ptop/t/t | 11/0/19 (−64.5) | 17/4/9 (63.1) | 11/0/19 (−65.5) | 17/4/9 (64.5) |
| | r-new-to-ptop/t/n | 10/1/19 (−70.5) | 13/5/12 (19.4) | 10/1/19 (−67.1) | 12/5/13 (14.1) |
| | r-new-to-ptop/n/t | 12/1/17 (−43.3) | 18/4/8 (88.7) | 12/0/18 (−32.8) | 22/3/5 (126.3) |
| | r-new-to-ptop/n/n | 10/1/19 (−66.7) | 14/6/10 (32.4) | 10/0/20 (−66.1) | 17/5/8 (76.8) |
Table 4. Mann–Whitney tests of L-NTADE against the top three entries of the CEC 2022 competition and other approaches: number of wins/ties/losses and total standard score.
| Algorithm | 10D | 20D |
|---|---|---|
| L-NTADE | 0/12/0 (0.0) | 0/12/0 (0.0) |
| EA4eig [54] | 6/2/4 (6.94) | 6/2/4 (9.38) |
| NL-SHADE-LBC [55] | 6/2/4 (18.40) | 6/5/1 (25.63) |
| NL-SHADE-RSP-MID [56] | 5/3/4 (8.69) | 8/1/3 (36.29) |
| L-SHADE-RSP [49] | 7/1/4 (25.19) | 5/5/2 (17.71) |
| NL-SHADE-RSP [51] | 7/2/3 (26.78) | 8/3/1 (39.26) |
| MLS-LSHADE [52] | 8/1/3 (31.93) | 6/2/4 (20.11) |
| APGSK-IMODE [57] | 7/3/2 (38.99) | 9/1/2 (47.44) |
| MadDE [58] | 9/2/1 (46.54) | 8/2/2 (37.17) |
Table 5. Mann–Whitney tests of L-NTADE against other approaches, CEC 2017, number of wins/ties/losses and total standard score.
| Algorithm | 10D | 30D | 50D | 100D |
|---|---|---|---|---|
| L-NTADE | 0/30/0 (0.0) | 0/30/0 (0.0) | 0/30/0 (0.0) | 0/30/0 (0.0) |
| L-SHADE-RSP [49] | 8/12/10 (−12.81) | 17/11/2 (124.67) | 20/8/2 (141.76) | 20/4/6 (120.46) |
| LSHADE-SPACMA [50] | 9/12/9 (−17.63) | 13/10/7 (50.41) | 13/5/12 (2.26) | 11/3/16 (−33.34) |
| jSO [48] | 7/13/10 (−14.98) | 18/11/1 (133.69) | 21/7/2 (158.53) | 24/2/4 (147.53) |
| EBOwithCMAR [59] | 6/13/11 (−46.46) | 15/9/6 (66.78) | 19/6/5 (114.56) | 19/4/7 (94.46) |
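The wins/ties/losses entries in the tables above come from pairwise Mann–Whitney (rank-sum) tests on the final error values of independent runs of two algorithms on each benchmark function. A minimal pure-Python sketch of one such per-function comparison follows; the run data, the 0.01 significance level, the normal approximation, and the simplified tie handling are illustrative assumptions, not the authors' exact implementation:

```python
import math
from statistics import median

def ranks_with_ties(values):
    """Average 1-based ranks; tied values share the mean rank of their group."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_decision(err_a, err_b, alpha=0.01):
    """'win' if err_a is significantly lower than err_b, 'loss' if higher, else 'tie'."""
    n1, n2 = len(err_a), len(err_b)
    r = ranks_with_ties(list(err_a) + list(err_b))
    u1 = sum(r[:n1]) - n1 * (n1 + 1) / 2          # U statistic for sample A
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # no tie correction (simplification)
    z = (u1 - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided, normal approx.
    if p >= alpha:
        return "tie"
    return "win" if median(err_a) < median(err_b) else "loss"

# 30 independent runs per algorithm (hypothetical final error values)
a = [0.10 + 0.01 * i for i in range(30)]
b = [0.50 + 0.01 * i for i in range(30)]
print(mann_whitney_decision(a, b))  # -> win
```

Counting these per-function decisions over the whole benchmark produces summary entries such as 17/11/2.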
Table 6. Computational complexity of L-NTADE compared with NL-SHADE-LBC and NL-SHADE-RSP on the CEC 2022 benchmark.
| Algorithm | D | T0 | T1 | T2 | (T2 − T1)/T0 |
|---|---|---|---|---|---|
| L-NTADE | 10 | 8 × 10^6 | 2.4 × 10^7 | 1.268 × 10^8 | 12.85 |
| L-NTADE | 20 | 8 × 10^6 | 7.2 × 10^7 | 2.052 × 10^8 | 16.65 |
| NL-SHADE-LBC | 10 | 8 × 10^6 | 2.4 × 10^7 | 1.330 × 10^8 | 13.63 |
| NL-SHADE-LBC | 20 | 8 × 10^6 | 7.2 × 10^7 | 2.570 × 10^8 | 23.15 |
| NL-SHADE-RSP | 10 | 8 × 10^6 | 2.3 × 10^7 | 1.110 × 10^8 | 11.00 |
| NL-SHADE-RSP | 20 | 8 × 10^6 | 7.3 × 10^7 | 1.774 × 10^8 | 13.05 |
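The last column follows the standard CEC complexity protocol: T0 times a fixed loop of basic arithmetic operations, T1 times a fixed budget of benchmark-function evaluations alone, and T2 times a complete algorithm run with the same evaluation budget, so (T2 − T1)/T0 estimates the algorithm's own overhead in units of T0. A minimal sketch, with illustrative timing values (in a common unit, e.g. nanoseconds) chosen to reproduce the 12.85 reported for L-NTADE at D = 10:

```python
def complexity(t0, t1, t2):
    """CEC-style algorithm complexity: overhead beyond pure function
    evaluation (t2 - t1), normalized by the basic-operation time t0."""
    return (t2 - t1) / t0

t0, t1, t2 = 8.0e6, 2.4e7, 1.268e8  # hypothetical timings, same unit for all three
print(round(complexity(t0, t1, t2), 2))  # -> 12.85
```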
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Stanovov, V.; Akhmedova, S.; Semenkin, E. Dual-Population Adaptive Differential Evolution Algorithm L-NTADE. Mathematics 2022, 10, 4666. https://0-doi-org.brum.beds.ac.uk/10.3390/math10244666
