Article

Mountaineering Team-Based Optimization: A Novel Human-Based Metaheuristic Algorithm

by Iman Faridmehr 1, Moncef L. Nehdi 2,*, Iraj Faraji Davoudkhani 3 and Alireza Poolad 4
1 Department of Construction Production and Theory of Structures, Institute of Architecture and Construction, South Ural State University, 454080 Chelyabinsk, Russia
2 Department of Civil Engineering, McMaster University, Hamilton, ON L8S 4M6, Canada
3 Energy Management Research Center, University of Mohaghegh Ardabili, Ardabil 5619911367, Iran
4 Department of Electrical Engineering, Islamic Azad University, Bushehr Branch, Bushehr 7519619555, Iran
* Author to whom correspondence should be addressed.
Submission received: 7 January 2023 / Revised: 1 March 2023 / Accepted: 3 March 2023 / Published: 6 March 2023
(This article belongs to the Special Issue Advances in Machine Learning, Optimization, and Control Applications)

Abstract:
This paper proposes the mountaineering team-based optimization (MTBO) algorithm, a novel method for solving real-world optimization problems inspired by a cooperative human phenomenon. Proposed for the first time, the MTBO algorithm is mathematically modeled to achieve a robust optimizer based on the social behavior and cooperation of a mountaineering team contending with natural phenomena to reach a mountaintop, which represents the global optimal solution. To solve optimization problems, the proposed MTBO algorithm captures the phases of the regular and guided movement of climbers based on the leader's experience, the obstacles against reaching the peak and becoming trapped in local optima, and the coordinated social cooperation of the group to save members from natural hazards. The performance of the MTBO algorithm was tested on the 30 CEC 2014 benchmark functions, as well as on classical engineering design problems, and the results were compared with those of well-known methods. The MTBO algorithm is shown to be very competitive with state-of-the-art metaheuristic methods. Its superiority is further confirmed by statistical validation and by the Wilcoxon signed-rank test against advanced optimization algorithms. Compared with other algorithms, the MTBO algorithm is more robust, easier to implement, exhibits effective optimization performance on a wide range of real-world test functions, and converges faster to the global optimal solution.

1. Introduction

Numerous metaheuristic optimization algorithms have been proposed in recent years, several of which have been deployed in solving engineering problems. The main performance features of such methods include a simple structure with easy implementation, not requiring gradient data, and not getting caught in premature convergence [1,2]. Metaheuristic algorithms inspired by nature solve optimization problems by imitating biological or physical phenomena. They are generally divided into four categories (Figure 1), including evolutionary algorithms, methods based on swarm intelligence, physics-based algorithms, and human-based algorithms.
Evolution-based methods are modeled on the laws of natural selection. In these methods, a population is randomly generated, and the search starts and evolves over subsequent generations. The advantage of these methods is that the fittest individuals are combined to form the next generation, improving the population. The most well-known of these methods include the genetic algorithm (GA) [3], the evolution strategy [4], biogeography-based optimization (BBO) [5], and genetic programming (GP) [6]. The second group of metaheuristic methods comprises swarm intelligence methods based on the social behavior of animals. The most popular of these include the particle swarm optimization (PSO) algorithm [7], the ant colony optimization (ACO) algorithm [8], the artificial bee colony (ABC) algorithm [9], the glowworm swarm optimization (GSO) algorithm [10], the grey wolf optimization (GWO) algorithm [11], the firefly algorithm (FA) [12], and the spotted hyena optimization (SHO) algorithm [13]. In Ref. [14], the optimization of the non-linear Hammerstein model is evaluated via the marine predator algorithm (MPA), a population-based optimizer modeled on the strategy predators use to catch prey. In Ref. [15], an optimization method based on the dwarf mongoose optimization algorithm (DMOA) is presented to estimate the autoregressive exogenous (ARX) model parameters. In Ref. [16], a metaheuristic algorithm named the Aquila optimizer (AO) is used to determine the control autoregressive (CAR) model parameters. In Ref. [17], the parameter estimation of power system harmonics is investigated through the swarm intelligence-based optimization strength of the cuckoo search algorithm (CSA). In Ref. [18], a fractional hierarchical gradient descent (FHGD) algorithm, a fractional-order generalization of standard hierarchical gradient descent, is presented to solve non-linear system problems. In Ref. [19], an optimization method named the flower pollination algorithm is applied to identification problems in non-linear active noise control systems.
Physics-based algorithms are inspired by nature’s physical laws. The most popular of these methods include the gravitational search algorithm (GSA) [20], the simulated annealing algorithm (SA) [21], the atom search optimization (ASO) algorithm [22], the artificial electric field algorithm (AEFA) [23], the big bang–big crunch (BBBC) algorithm [24], the small world optimization algorithm (SWOA) [25], the galaxy-based search algorithm (GbSA) [26], the black hole (BH) algorithm [27], the vortex search algorithm (VSA) [28], and the electromagnetism-like mechanism (EM) algorithm [29]. The fourth category includes metaheuristic algorithms inspired by human behavior. The most popular of these methods include teaching–learning-based optimization (TLBO) [30], the harmony search (HS) [31], the tabu search (TS) [32], the group search optimizer (GSO) [33], the imperialist competitive algorithm (ICA) [34], the league championship algorithm (LCA) [35], the firework algorithm (FA) [36], the soccer league competition (SLC) [37], the seeker optimization algorithm (SOA) [38], the exchange market algorithm (EMA) [39], group counseling optimization (GCO) [40], and the driving training-based optimization (DTBO) algorithm [41].
Population-based metaheuristic algorithms share a common characteristic regardless of their source of inspiration: they divide the search process into two phases, exploration and exploitation [42,43,44,45,46]. In the exploration phase, the algorithm must have operators that explore the search space broadly to locate the region of the global optimum. In the exploitation phase, the algorithm performs a local search within the promising area of the search space discovered during exploration. Striking a balance between these two phases, given the randomness inherent in the search, is an essential challenge in developing any metaheuristic algorithm. A question that arises is: considering all these algorithms, is there a real need for new ones?
The significance of optimization in numerous disciplines of science has become more obvious with the development of science and technology; hence, useful tools are required to address the resulting optimization issues. Faced with the diversity of challenging real-world problems in different areas of science and engineering [47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63], scientists must solve a wide range of complex problems with different objective functions, including linear or non-linear and single-objective or multi-objective functions, which are unlike each other. According to the no-free-lunch (NFL) theorem [64], no single algorithm can solve all such optimization problems. Moreover, the inherent nature of metaheuristic algorithms is such that an algorithm may perform best on several functions while performing poorly on functions of a different type, so each algorithm can cover only a certain set of test functions well. Most scientific branches have therefore widely recognized the need for a comprehensive and robust algorithm that is versatile enough to handle a broad set of functions with various objectives.
In the present study, a new metaheuristic algorithm, the mountaineering team-based optimization (MTBO) algorithm, inspired by humans' social performance and cooperation in the face of natural phenomena, is presented. This algorithm is novel; to the best of the authors' knowledge, it has not appeared in previous optimization studies. The performance of the MTBO algorithm is investigated herein on real-world functions, basic and common standard test functions, the CEC 2014 benchmark functions (unimodal, simple multimodal, hybrid, and composition test functions), and a wide range of common engineering design problems. At each optimization stage, the MTBO algorithm's performance is compared with that of several modern and standard algorithms. The optimization results show that the MTBO algorithm is very competitive with common optimization methods. The advantages of the proposed MTBO algorithm are as follows:
  • A novel metaheuristic algorithm inspired by the social performance and cooperation of humans in the face of natural phenomena;
  • Proper and effective optimization performance for a wide range of real-world functions compared to other well-known algorithms;
  • A simple optimization process with superior robustness compared to other algorithms;
  • Fast and appropriate convergence to an acceptable, globally optimal solution compared to modern algorithms;
  • Effective optimization of real and modern test functions.
The remainder of this paper is structured as follows. In Section 2, optimization through a mountaineering team-based algorithm is formulated. In Section 3, Appendix A, and Appendix B, the proposed algorithm is evaluated on test functions, and the optimization results are presented. In Section 4, the performance of the proposed algorithm in solving real-world engineering problems is evaluated, and the results are analyzed. The findings obtained from the proposed algorithm and suggestions for future work are presented in Section 5.

2. Mountaineering Team-Based Optimization (MTBO)

2.1. Inspiration

Common optimization algorithms can be classified into local optimization and global optimization methods. Evolutionary methods are often used for global optimization. Intellectual and environmental evolution through coordinated human behavior takes place much faster than physical and genetic evolution. Cultural evolution and the human perspective have therefore not been ignored, and a group of algorithms known as cultural algorithms has been introduced. Cultural algorithms are not an entirely new category of algorithms; rather, the main idea is that adding the possibility of cultural evolution (by allowing the exchange of information between members of the population) to existing algorithms increases the speed of convergence, as expected.
In this paper, a new optimization algorithm based on intellectual and environmental evolution with coordinated human behavior is introduced in the field of evolutionary computation. A mountaineering team consists of a number of mountaineers led by an experienced, professional leader, whose goal is to conquer the mountaintop of the region; the mountaintop is considered the final global solution to the optimization problem [64,65,66,67]. Like other evolutionary optimization methods, the developed algorithm starts with an initial population, in which each member is called a mountaineering team member or mountaineer. The core of the algorithm is the mountaineers' regular and coordinated movement, together with the consideration of natural phenomena. In the regular and coordinated movement phase, each mountaineer is guided by teammates and by the group leader, who in optimization terms corresponds to the best solution in the current iteration, toward the goal of conquering the mountaintop, i.e., reaching the global optimum or best solution. Natural disasters such as avalanches are also considered, since they can hinder the progress of the mountaineers and even endanger their lives. The main inspiration of the MTBO algorithm is thus the team's orderly and coordinated movement to conquer the mountaintop in the presence of natural disasters, formulated below in rational steps.

2.2. Mathematical Model

2.2.1. First Phase: Coordinated Mountaineering

In a mountaineering team, the group's most experienced member is always chosen as the leader who heads the group; in optimization terms, this corresponds to the best solution in the current iteration of the algorithm. Here, the best member of the algorithm's population, or equivalently of the mountaineering group, assumes this role and leads the whole group towards the destination, i.e., conquering the mountaintop or, equivalently, reaching the global optimal solution. Therefore, the members move toward the group leader as follows:
X_i^new = X_i + rand × (X_Leader − X_i)
It should be noted that in a mountaineering team, the movement is organized under the supervision of the group leader, and the members are usually ordered from best to worst; each member, in addition to being guided by the group leader, is also guided and directed by the member just in front. The equivalent in the MTBO algorithm is that after each iteration, the population is ordered from best to worst, and each individual is guided by both the group leader and the individual in front of him/her. Therefore, the equation of regular movement towards the mountaintop is modified as follows:
X_i^new = X_i + rand × (X_Leader − X_i) + rand × (X_ii − X_i)
where X_ii is the position of the member just in front of member i.
On the other hand, in the optimization world, every action happens randomly, and the probability of this phase is assumed to be equal to Li, and hence the pseudo-code of this phase is as follows:
if rand < Li
    X_i^new = X_i + rand × (X_Leader − X_i) + rand × (X_ii − X_i)
end
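As a minimal illustration, the phase-1 update can be sketched in NumPy. This is our own sketch, not the authors' code; the helper name `coordinated_move` and the `rng` generator argument are assumptions for illustration:

```python
import numpy as np

def coordinated_move(x_i, x_leader, x_front, rng):
    """Phase-1 update: member i is pulled toward the group leader and
    toward the member just in front, each scaled by a fresh random factor."""
    return (x_i
            + rng.random() * (x_leader - x_i)
            + rng.random() * (x_front - x_i))
```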

2.2.2. Second Phase: Effect of Natural Disasters

Several natural disasters can threaten the lives of the mountaineers and prevent them from reaching the mountaintop; in optimization terms, they can trap the population in local optima at any moment. Figure 2 shows the common threats that mountaineers may face when conquering the mountain peak; the most important ones considered in the MTBO algorithm are the occurrence of an avalanche and falling off a cliff. In the MTBO, the optimization process is mostly based on natural disasters, i.e., avalanches, so the probability of this phase is higher than that of the other conditions. The critical situation in the event of a randomly occurring avalanche is considered equivalent to the worst position of the algorithm, X_Worst, or equivalently X_Avalanche. In the event of an avalanche or other calamity, the ith individual tries to get away from the calamity position X_Avalanche and save himself/herself through the equation below. In optimization terms, the individual thereby escapes from a local optimal solution and moves towards the best possible global solution.
X_i^new = X_i − rand × (X_Avalanche − X_i)
The probability of avalanche occurrence is assumed to be equal to Ai, and the pseudo-code of this phase is presented as follows:
if rand < Ai
    X_i^new = X_i − rand × (X_Avalanche − X_i)
end

2.2.3. Third Phase: Coordinated and Group Effort against Disasters

The main difference between human groups and other phenomena and beings is that humans help and guide each other in an informed, organized, and highly effective manner. This social and cooperative behavior is a vital skill in a mountaineering team: when any calamity occurs, the entire team tries to save the trapped member. The MTBO is therefore inspired by the concerted, social effort and cooperation of the group to save a trapped member. The collective position of the members is represented by their average position, X_mean or X_Team, and the ith individual moves toward this position. This behavior is modeled as follows:
X_i^new = X_i + rand × (X_Team − X_i)
The probability of saving an individual trapped by an avalanche, or in other words, trapped in the optimal local solution, is assumed to be equal to Mi, and the pseudo-code of this phase is presented as follows:
if rand < Mi
    X_i^new = X_i + rand × (X_Team − X_i)
end

2.2.4. Fourth Phase: Possible Death of Members

Unfortunately, it has been observed that sometimes, due to an avalanche's intensity, a mountaineering team member is killed; that is, none of the above phases can save the mountaineer. In the MTBO algorithm, this phase is modeled by removing that member from the group and randomly generating a replacement using the following equation:
X_i^new = X_min + rand × (X_max − X_min)
Finally, the overall pseudo-code of the optimization process of the proposed MTBO algorithm is depicted in Algorithm 1. Additionally, the MTBO algorithm flowchart in the optimization process is illustrated in Figure 3.
Algorithm 1: The MTBO Algorithm. https://github.com/Irajfaraji/MTBO (accessed on 6 January 2023)
1: Set the control parameters of the MTBO algorithm: the scaling factors Li, Ai, and Mi, the maximum number of iterations Itermax, and the population size NP; set the iteration counter Iter = 0;
2: Generate the initial random population of NP individuals (i = 1, 2, …, NP):
3:     X_i = X_min + rand × (X_max − X_min)
4: Evaluate the fitness of each individual;
5: while Iter < Itermax do
6:     Set Iter = Iter + 1;
7:     for i = 1 to NP do
8:         Select X_Leader, X_ii, and X_Avalanche;
9:         if rand < Li
10:            X_i^new = X_i + rand × (X_Leader − X_i) + rand × (X_ii − X_i)
11:        else if rand < Ai
12:            X_i^new = X_i − rand × (X_Avalanche − X_i)
13:        else if rand < Mi
14:            X_i^new = X_i + rand × (X_Team − X_i)
15:        else
16:            X_i^new = X_min + rand × (X_max − X_min);
17:        end if
18:        if f(X_i^new) < f(X_i)
19:            X_i = X_i^new and f(X_i) = f(X_i^new);
20:        end if
21:        if f(X_i) < f(X_Leader), i.e., f(X_Best)
22:            X_Leader (X_Best) = X_i and f(X_Leader) = f(X_i);
23:        end if
24:    end for
25: end while
Return the best solution achieved by the MTBO algorithm: X_Leader or X_Best
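For readers who prefer code, the loop of Algorithm 1 can be sketched in Python. This is a minimal illustration under stated assumptions, not the authors' reference implementation (the official code is linked above): a fresh random number is drawn for each phase test, following the pseudo-code, and the phase thresholds use the tuned values reported in Section 3.1.2 (Li = 0.25 + 0.25 × rand, Ai = 0.75 + 0.25 × rand, Mi = 0.75 + 0.25 × rand). The function name `mtbo` and its signature are our own.

```python
import numpy as np

def mtbo(fobj, lb, ub, dim, pop=40, iters=300, seed=0):
    """Minimal MTBO sketch: minimize fobj over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((pop, dim)) * (ub - lb)      # random initial team
    fit = np.array([fobj(x) for x in X])
    for _ in range(iters):
        order = np.argsort(fit)                      # order team best -> worst
        X, fit = X[order], fit[order]
        leader, avalanche, team = X[0], X[-1], X.mean(axis=0)
        for i in range(pop):
            front = X[i - 1] if i > 0 else X[0]      # member just in front
            if rng.random() < 0.25 + 0.25 * rng.random():    # phase 1: coordinated move
                Xn = (X[i] + rng.random() * (leader - X[i])
                           + rng.random() * (front - X[i]))
            elif rng.random() < 0.75 + 0.25 * rng.random():  # phase 2: avalanche escape
                Xn = X[i] - rng.random() * (avalanche - X[i])
            elif rng.random() < 0.75 + 0.25 * rng.random():  # phase 3: team rescue
                Xn = X[i] + rng.random() * (team - X[i])
            else:                                            # phase 4: member replaced
                Xn = lb + rng.random(dim) * (ub - lb)
            Xn = np.clip(Xn, lb, ub)
            fn = fobj(Xn)
            if fn < fit[i]:                          # greedy replacement (step 18)
                X[i], fit[i] = Xn, fn
    b = int(np.argmin(fit))
    return X[b], float(fit[b])
```

On a convex test function such as the sphere, this sketch steadily contracts the team toward the leader while the greedy replacement guarantees the best member never worsens.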

2.3. Computational Complexity of the MTBO

The computational complexity of the MTBO mainly depends on three processes: initialization, fitness evaluation, and updating of the population. With NP individuals, the computational complexity of the initialization process is O(NP). The computational complexity of the updating mechanism is O(Itermax × NP) + O(Itermax × NP × D), which comprises searching for the best location and updating the location vectors of the whole population, where Itermax is the maximum number of iterations and D is the dimension of the specific problem. Therefore, the computational complexity of the MTBO is defined by:
O(MTBO) = O(NP × (Itermax + Itermax × D + 1))

3. Results and Discussion

3.1. Understanding MTBO Performance

First, to describe the performance of the MTBO algorithm based on different populations and identify the best values of factors Li, Ai, and Mi for the MTBO optimization performance, three classic and diverse optimization functions [67] are considered, according to Table 1. In addition, a three-dimensional specification of these functions is provided in Figure 4.

3.1.1. Investigation of MTBO Population Changes

This section uses different population sizes, from 15 to 90, for the MTBO algorithm to solve three test functions with a dimension of 30 over 1000 iterations. The mean value and standard deviation (Std.) over 30 independent runs for each test function are given in Table 2. It can be observed that a population between 60 and 75 is a suitable choice for this algorithm at dimension 30. Moreover, the convergence characteristics of the MTBO with populations from 15 to 90, corresponding to the numerical results in Table 2, are shown in Figure 5. The symbol R indicates the rank of the obtained result among all results.

3.1.2. Determining Desirable Factors of MTBO

The performance of a metaheuristic algorithm depends on three factors: (1) the specific optimization problem at hand, (2) the values of the control parameters, and (3) the random variability inherent to stochastic algorithms. Therefore, the following aspects are taken into account.
(i)
The regular and coordinated natural movement of the climbing team.
This algorithm chooses the group's most experienced member as the leader; in other words, each member is guided by the group leader and the member in front. In Table 3, various probability values for continuing the regular movement of the population are examined. The number of iterations is 1000, the population of the algorithm is 60, and the dimension of the problem is 30. According to the optimization results in Table 3, it can be concluded that the suitable and desirable value for Li is (0.25 + 0.25 × rand).
(ii)
Avalanche occurrence probability as a model of natural disasters.
In this algorithm, the optimization process is based on the avalanche; therefore, the probability of avalanche occurrence is higher than that of the other conditions, as analyzed in Table 4. The number of iterations is 1000, the population of the algorithm is 60, and the dimension of the problem is 30. According to the optimization results in Table 4, it can be concluded that the appropriate and desirable value for Ai is (0.75 + 0.25 × rand).
(iii)
The possibility of rescuing an individual by the mountaineering team.
This algorithm establishes the probability of saving a person after the occurrence of an avalanche. The probability of saving a person cannot be very high compared to the previous two processes, and it applies only when neither of the previous two processes happens to the person in question. In Table 5, different probability values for the rescue of an individual are examined. The number of iterations is 1000, the population of the algorithm is 60, and the dimension of the problem is 30. According to the results in Table 5, the appropriate and desirable value for Mi is (0.75 + 0.25 × rand).
Appendix A examines the performance comparison of the MTBO based on basic test functions, while Appendix B discusses the performance comparison of the MTBO based on the CEC 2014 test functions.

4. MTBO for Real Engineering Problems

In this section, the performance of the MTBO algorithm is evaluated on three constrained engineering design problems with equality and inequality constraints [68]: the tension/compression spring design [69], the three-bar truss design [70,71], and pressure vessel optimization [72,73]. Because of the constraints of these problems, a constraint-handling method is required. Previous optimization studies have used different types of penalty functions, such as static, dynamic, adaptive, co-evolutionary, and death penalties, as well as special operators and goal separation methods. The death penalty is the simplest method, with easy implementation and low computational cost: it assigns a large numerical value to infeasible solutions in a minimization problem, causing the algorithm to move away from them during the optimization process. In this study, the MTBO algorithm is equipped with a death penalty to satisfy the constraints.
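The death penalty described above is straightforward to implement. The sketch below is our own illustration (the function name `death_penalty` and the fixed penalty value are assumptions): it wraps an objective so that any candidate violating one of the g(x) ≤ 0 constraints receives a prohibitively large fitness.

```python
def death_penalty(fobj, constraints, penalty=1e10):
    """Wrap a minimization objective with death-penalty constraint handling:
    any point violating one of the g(x) <= 0 constraints gets a huge fitness,
    so the optimizer is driven away from infeasible regions."""
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return penalty
        return fobj(x)
    return penalized
```

The wrapped function can then be passed to any of the algorithms compared in this section in place of the raw objective.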

4.1. Tension/Compression Spring Design Problem

The aim of the studied problem is the minimization of the tension/compression spring weight. The design problem is depicted in Figure 6. The appropriate design should satisfy the constraints of the shear stress, deflection, and surge frequency. In this design, the problem has three variables, including the diameter of the wire (d), the average diameter of the coil (D), and the number of active coils (N).
The optimization problem is presented as follows [69]:
Minimize:
F_1(X) = (x_3 + 2) x_2 x_1^2.
Subject to:
g_1(X) = 1 − (x_2^3 x_3)/(71785 x_1^4) ≤ 0,
g_2(X) = (4 x_2^2 − x_1 x_2)/(12566 (x_2 x_1^3 − x_1^4)) + 1/(5108 x_1^2) − 1 ≤ 0,
g_3(X) = 1 − (140.45 x_1)/(x_2^2 x_3) ≤ 0,
g_4(X) = (x_1 + x_2)/1.5 − 1 ≤ 0.
Variable range: 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, 2 ≤ x3 ≤ 15.
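The objective and constraints of this spring design problem translate directly into Python; the sketch below is our own, with (x1, x2, x3) mapped to (d, D, N) as defined above. Feasibility requires every constraint value to be ≤ 0.

```python
def spring_weight(x):
    d, D, N = x  # wire diameter, mean coil diameter, number of active coils
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    d, D, N = x
    return [                                   # feasible when every entry <= 0
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
            + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1,
    ]
```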
This design problem is solved using the MTBO algorithm, as well as the Rao-1, BA, PSO, and WOA algorithms for comparison. The results, including the decision variables, constraints, and function values, are given in Table 6 and Table 7, and indicate that the MTBO algorithm obtained better results than the other algorithms. Also, the convergence process for the tension/compression spring problem using the different algorithms is demonstrated in Figure 7, which shows that the MTBO algorithm achieved lower mean and best values of the objective function.

4.2. Three-Bar Truss Design Problem

This section presents the three-bar truss design problem, whose aim is to minimize the truss weight. The objective function is bounded, and, as is typical of structural design problems, there are many constraints, including stress, deflection, and buckling. Figure 8 shows the three-bar truss design problem.
This design problem is defined as follows [70,71]:
Minimize:
F_2(X) = (2√2 x_1 + x_2) × 100.
Subject to:
g_1(X) = ((√2 x_1 + x_2)/(√2 x_1^2 + 2 x_1 x_2)) P − σ ≤ 0,
g_2(X) = (x_2/(√2 x_1^2 + 2 x_1 x_2)) P − σ ≤ 0,
g_3(X) = (1/(x_1 + √2 x_2)) P − σ ≤ 0,
Variable range: 0 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 1. This problem is solved via the MTBO, as well as the Rao-1, BA, PSO, and WOA algorithms. The obtained optimal decision variables, constraints, and function values are presented in Table 8 and Table 9, which clearly indicate the superiority of the MTBO algorithm over the Rao-1, BA, PSO, and WOA algorithms. Also, the convergence curves of the different algorithms for this design problem are depicted in Figure 9, which shows that the MTBO algorithm obtained lower mean and best values.
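The truss problem also translates directly into Python. The load P and stress limit σ are not stated in this excerpt; the values P = σ = 2 commonly used for this benchmark in the literature are assumed here, as is the member length of 100 in the objective.

```python
import math

P, SIGMA = 2.0, 2.0  # assumed load and allowable stress (common literature values)

def truss_weight(x):
    x1, x2 = x
    return (2 * math.sqrt(2) * x1 + x2) * 100  # member length l = 100 assumed

def truss_constraints(x):
    x1, x2 = x
    s2 = math.sqrt(2)
    denom = s2 * x1 ** 2 + 2 * x1 * x2
    return [                                   # feasible when every entry <= 0
        P * (s2 * x1 + x2) / denom - SIGMA,
        P * x2 / denom - SIGMA,
        P / (x1 + s2 * x2) - SIGMA,
    ]
```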

4.3. Pressure Vessel Optimization Problem

In pressure vessel optimization, the objective is to minimize the total cost, including the materials, shaping, and welding of the cylindrical pressure vessel, as shown in Figure 10. Decision variables include shell thickness (Ts), head thickness (Th), inner radius (R), and length of cylindrical section excluding head (L).
The pressure vessel optimization problem is presented as follows [72,73]:
Minimize:
F_3(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3.
Subject to:
g_1(X) = −x_1 + 0.0193 x_3 ≤ 0,
g_2(X) = −x_2 + 0.00954 x_3 ≤ 0,
g_3(X) = −π x_3^2 x_4 − (4/3)π x_3^3 + 1296000 ≤ 0,
g_4(X) = x_4 − 240 ≤ 0,
Variable range: 0 ≤ x1 ≤ 100, 0 ≤ x2 ≤ 100, 10 ≤ x3 ≤ 200, 10 ≤ x4 ≤ 200.
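For completeness, a Python sketch of this cost function and its constraints (our own illustration, with (x1, x2, x3, x4) mapped to (Ts, Th, R, L) as defined above):

```python
import math

def vessel_cost(x):
    Ts, Th, R, L = x  # shell thickness, head thickness, inner radius, length
    return (0.6224 * Ts * R * L + 1.7781 * Th * R ** 2
            + 3.1661 * Ts ** 2 * L + 19.84 * Ts ** 2 * R)

def vessel_constraints(x):
    Ts, Th, R, L = x
    return [                                   # feasible when every entry <= 0
        -Ts + 0.0193 * R,
        -Th + 0.00954 * R,
        -math.pi * R ** 2 * L - (4.0 / 3.0) * math.pi * R ** 3 + 1296000,
        L - 240,
    ]
```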
This pressure vessel optimization problem is solved via the MTBO, as well as the Rao-1, BA, PSO, and WOA algorithms. The decision variables, constraints, and function values are given in Table 10 and Table 11, which demonstrate the superiority of the MTBO over the Rao-1, BA, PSO, and WOA algorithms. Moreover, the convergence process of the algorithms for this design problem is demonstrated in Figure 11, which shows that the MTBO obtained lower mean and best values.

5. Conclusions

This paper has established a novel optimization algorithm named the mountaineering team-based optimization (MTBO) algorithm, based on intellectual and environmental evolution with coordinated human behavior. The proposed algorithm is formulated in four phases: coordinated mountaineering, the effect of natural disasters, coordinated and group effort against disasters, and the possible death of members due to avalanches. The capability of the MTBO algorithm was investigated with different populations to identify the best values of its factors on classic functions. The performance of the MTBO was further evaluated on 23 basic unimodal, multimodal, and fixed-dimension multimodal benchmark test functions. Statistical analysis and Wilcoxon test results proved the superior and competitive performance of the MTBO algorithm in comparison with the genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), artificial bee colony (ABC), and simulated annealing (SA) algorithms (see Appendix A). Moreover, the MTBO algorithm's effectiveness was investigated on the CEC 2014 test functions, comprising unimodal, simple multimodal, hybrid, and composition functions, where it provided very competitive results compared to the well-known Rao-1, BA, PSO, and WOA algorithms (see Appendix B). Furthermore, three engineering problems, including tension/compression spring design, three-bar truss design, and pressure vessel optimization, were solved, proving that the MTBO is very competitive with the Rao-1, BA, PSO, and WOA algorithms and performs better. Hybridization of the MTBO algorithm with well-known evolutionary algorithms is suggested for future work.

Author Contributions

Conceptualization, I.F.D. and A.P.; methodology, I.F.D.; software, A.P.; validation, I.F.D.; formal analysis, I.F.; investigation, I.F.D.; resources, I.F.D.; data curation, A.P.; writing—original draft preparation, I.F.; writing—review and editing, I.F. and I.F.D.; visualization, A.P.; supervision, M.L.N.; project administration, M.L.N.; funding acquisition, M.L.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Performance Comparison of MTBO Based on Basic Test Functions

In the first part of the comparative study, to assess the proposed algorithm's capability, its performance is compared with five standard algorithms on 23 basic unimodal, multimodal, and fixed-dimension multimodal benchmark test functions [74,75,76,77,78,79], according to Table A1.
Table A1. The 23 essential functions based on unimodal, multimodal, and fixed-dimension multimodal benchmark test functions [79].

Unimodal benchmark functions (Function | Dim | Range | fmin):
$f_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | $[-100, 100]$ | 0
$f_2(x)=\sum_{i=1}^{n}\left|x_i\right|+\prod_{i=1}^{n}\left|x_i\right|$ | 30 | $[-10, 10]$ | 0
$f_3(x)=\sum_{i=1}^{n}\big(\sum_{j=1}^{i}x_j\big)^2$ | 30 | $[-100, 100]$ | 0
$f_4(x)=\max_i\{\left|x_i\right|,\ 1\le i\le n\}$ | 30 | $[-100, 100]$ | 0
$f_5(x)=\sum_{i=1}^{n-1}\big[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\big]$ | 30 | $[-30, 30]$ | 0
$f_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$ | 30 | $[-100, 100]$ | 0
$f_7(x)=\sum_{i=1}^{n} i\,x_i^4+\mathrm{rand}[0,1)$ | 30 | $[-1.28, 1.28]$ | 0

Multimodal benchmark functions (Function | Dim | Range | fmin):
$f_8(x)=\sum_{i=1}^{n}-x_i\sin\big(\sqrt{\left|x_i\right|}\big)$ | 30 | $[-500, 500]$ | $-418.9829\times 5$
$f_9(x)=\sum_{i=1}^{n}\big[x_i^2-10\cos(2\pi x_i)+10\big]$ | 30 | $[-5.12, 5.12]$ | 0
$f_{10}(x)=-20\exp\big(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\big)-\exp\big(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\big)+20+e$ | 30 | $[-32, 32]$ | 0
$f_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\big(\tfrac{x_i}{\sqrt{i}}\big)+1$ | 30 | $[-600, 600]$ | 0
$f_{12}(x)=\tfrac{\pi}{n}\big\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\big[1+10\sin^2(\pi y_{i+1})\big]+(y_n-1)^2\big\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=k(x_i-a)^m$ if $x_i>a$; $0$ if $-a<x_i<a$; $k(-x_i-a)^m$ if $x_i<-a$ | 30 | $[-50, 50]$ | 0
$f_{13}(x)=0.1\big\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\big[1+\sin^2(3\pi x_i+1)\big]+(x_n-1)^2\big[1+\sin^2(2\pi x_n)\big]\big\}+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30 | $[-50, 50]$ | 0
$f_{14}(x)=-\sum_{i=1}^{n}\sin(x_i)\big[\sin\big(\tfrac{i\,x_i^2}{\pi}\big)\big]^{2m}$, $m=5$ | 30 | $[0, \pi]$ | $-4.687$
$f_{15}(x)=\big[e^{-\sum_{i=1}^{n}(x_i/\beta)^{2m}}-2e^{-\sum_{i=1}^{n}x_i^2}\big]\prod_{i=1}^{n}\cos^2 x_i$, $m=5$ | 30 | $[-20, 20]$ | $-1$
$f_{16}(x)=\big[\sum_{i=1}^{n}\sin^2 x_i-\exp\big(-\sum_{i=1}^{n}x_i^2\big)\big]\exp\big(-\sum_{i=1}^{n}\sin^2\sqrt{\left|x_i\right|}\big)$ | 30 | $[-10, 10]$ | $-1$

Fixed-dimension multimodal benchmark functions (Function | Dim | Range | fmin):
$f_{14}(x)=\big(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\big)^{-1}$ | 2 | $[-65, 65]$ | 1
$f_{15}(x)=\sum_{i=1}^{11}\big[a_i-\tfrac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\big]^2$ | 4 | $[-5, 5]$ | 0.00030
$f_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | $[-5, 5]$ | $-1.0316$
$f_{17}(x)=\big(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\big)^2+10\big(1-\tfrac{1}{8\pi}\big)\cos x_1+10$ | 2 | $[-5, 5]$ | 0.398
$f_{18}(x)=\big[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\big]\times\big[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\big]$ | 2 | $[-2, 2]$ | 3
$f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\big(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\big)$ | 3 | $[1, 3]$ | $-3.86$
$f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\big(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\big)$ | 6 | $[0, 1]$ | $-3.32$
$f_{21}(x)=-\sum_{i=1}^{5}\big[(X-a_i)(X-a_i)^{T}+c_i\big]^{-1}$ | 4 | $[0, 10]$ | $-10.1532$
$f_{22}(x)=-\sum_{i=1}^{7}\big[(X-a_i)(X-a_i)^{T}+c_i\big]^{-1}$ | 4 | $[0, 10]$ | $-10.4028$
$f_{23}(x)=-\sum_{i=1}^{10}\big[(X-a_i)(X-a_i)^{T}+c_i\big]^{-1}$ | 4 | $[0, 10]$ | $-10.5363$
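Several of the classical benchmarks in Table A1 translate directly into code. As a straightforward sketch, the following implements four of them (the sphere f1, Rosenbrock f5, Rastrigin f9, and Ackley f10) in plain Python for an arbitrary dimension.

```python
import math

def sphere(x):
    """f1: unimodal; global minimum 0 at x = (0, ..., 0)."""
    return sum(v * v for v in x)

def rosenbrock(x):
    """f5: narrow curved valley; global minimum 0 at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    """f9: highly multimodal; global minimum 0 at x = (0, ..., 0)."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):
    """f10: multimodal with a nearly flat outer region; minimum 0 at the origin."""
    n = len(x)
    s1 = sum(v * v for v in x) / n
    s2 = sum(math.cos(2.0 * math.pi * v) for v in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e
```

Evaluating each function at its known optimum (e.g., `sphere([0.0] * 30)`) returns the tabulated fmin, which is a quick sanity check before running any optimizer against it.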
The performance of the MTBO algorithm is compared with the GA, DE, PSO, ABC, and SA algorithms. The control parameters of these algorithms have been selected and determined based on their reference article, according to Table A2.
Table A2. The control parameters of different algorithms.

Algorithm | Parameter | Value
Genetic algorithm (GA) [74] | Crossover factor | 0.7
 | Mutation factor | 0.3
Differential evolution (DE) [75] | Crossover probability | 0.1
 | Scaling factor | 0.9
Particle swarm optimization (PSO) [76,77] | Constriction factor χ | 0.729
 | Acceleration control coefficient c1 | 2.05
 | Acceleration control coefficient c2 | 2.05
Artificial bee colony (ABC) [68] | Onlooker number no | 50% of the colony
 | Employed bee number ne | 50% of the colony
 | Scout number ns | 1
 | Limit | ns × D (D: dimension of the problem)
Simulated annealing (SA) [78] | Cooling rate α | 0.8
 | Initial temperature T0 | 1
Moreover, the code that each algorithm's authors have made publicly available was used without modification, except that for all algorithms the number of iterations was set to 1000, the population size to 60, and, for this comparison, the dimension to 30. A visualization of some basic benchmark functions in 2D is depicted in Figure A1.
Figure A1. Visualization of some basic benchmark functions in 2D.
Moreover, the optimization performance of the MTBO for these benchmark functions is depicted in Figure A2.
Figure A2. Optimization performance of the MTBO for some basic benchmark functions.
The numerical results based on the mean value, the best value, the standard deviation, and the Wilcoxon test results are given in Table A3, Table A4, Table A5, Table A6 and Table A7.
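The mean, best, and standard-deviation values reported in these tables are statistics over repeated independent runs of each algorithm. A minimal sketch of such an aggregation is shown below; the `dummy_optimizer` stand-in is hypothetical, used only to make the example self-contained.

```python
import random
import statistics

def summarize_runs(optimizer, objective, runs=30):
    """Run an optimizer several times and report the per-run statistics
    used in the comparison tables: mean, best, and standard deviation."""
    finals = [optimizer(objective, seed=s) for s in range(runs)]
    return {"mean": statistics.mean(finals),
            "best": min(finals),
            "std": statistics.stdev(finals)}

def dummy_optimizer(objective, seed):
    # Hypothetical stand-in: evaluates the objective at one random point.
    rng = random.Random(seed)
    return objective([rng.uniform(-1.0, 1.0) for _ in range(30)])

stats = summarize_runs(dummy_optimizer, lambda x: sum(v * v for v in x))
print(stats["best"] <= stats["mean"])  # True: the best run never exceeds the mean
```

Seeding each run separately, as above, makes the whole comparison reproducible, which matters when a rank-based test such as Wilcoxon's is applied to the per-run results afterwards.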
Table A3. Summary of the mean results for the test functions for the classic and MTBO algorithms. The signs (+), (−), and (=) indicate that the corresponding algorithm performed better than, worse than, or equal to the MTBO algorithm, respectively.

Function | GA | DE | PSO | ABC | SA | MTBO
F1 | 3.53 × 10−2 (−) | 1.26 × 10−11 (−) | 2.96 × 10−8 (−) | 5.14 × 10−14 (−) | 1.59 × 10−7 (−) | 7.70 × 10−19
F2 | 1.39 (−) | 7.07 × 10−2 (+) | 5.59 × 10−1 (−) | 2.38 × 10−1 (−) | 8.95 × 10−1 (−) | 1.72 × 10−1
F3 | 4.50 × 10+2 (−) | 3.77 × 10+1 (−) | 1.67 × 10+2 (−) | 3.64 × 10+1 (−) | 6.21 × 10+1 (−) | 3.55 × 10+1
F4 | 1.42 × 10+1 (−) | 6.19 (−) | 9.61 (−) | 4.42 (−) | 6.56 (−) | 2.36
F5 | 9.56 × 10+1 (−) | 3.80 × 10+1 (−) | 3.44 × 10+1 (−) | 3.07 × 10+1 (−) | 5.49 × 10+1 (−) | 2.73 × 10+1
F6 | 6.38 × 10−3 (−) | 6.67 × 10−13 (−) | 4.74 × 10−7 (−) | 2.55 × 10−15 (−) | 2.99 × 10−5 (−) | 2.71 × 10−17
F7 | 1.15 × 10−1 (−) | 3.06 × 10−2 (−) | 4.75 × 10−2 (−) | 2.45 × 10−2 (−) | 3.89 × 10−2 (−) | 1.69 × 10−2
F8 | −7.57 × 10+3 (−) | −8.10 × 10+3 (+) | −8.24 × 10+3 (+) | −7.96 × 10+3 (−) | −7.51 × 10+3 (−) | −8.09 × 10+3
F9 | 4.87 × 10+1 (−) | 3.21 × 10+1 (−) | 3.35 × 10+1 (−) | 3.30 × 10+1 (−) | 3.44 × 10+1 (−) | 2.97 × 10+1
F10 | 2.94 (−) | 1.92 (−) | 2.40 (−) | 2.29 (−) | 2.57 (−) | 1.34
F11 | 5.80 × 10−1 (−) | 8.32 × 10−2 (−) | 1.72 × 10−1 (−) | 5.02 × 10−2 (−) | 3.97 × 10−2 (−) | 3.73 × 10−2
F12 | 4.58 (−) | 9.51 × 10−1 (−) | 1.96 (−) | 1.37 (−) | 1.21 (−) | 2.08 × 10−1
F13 | 2.09 × 10+1 (−) | 1.03 × 10+1 (−) | 1.57 × 10+1 (−) | 1.21 × 10+1 (−) | 1.34 × 10+1 (−) | 2.59
F14 | 9.98 × 10−1 (=) | 9.98 × 10−1 (=) | 9.98 × 10−1 (=) | 9.98 × 10−1 (=) | 9.98 × 10−1 (=) | 9.98 × 10−1
F15 | 2.55 × 10−3 (−) | 2.66 × 10−3 (−) | 2.50 × 10−3 (−) | 5.39 × 10−4 (+) | 7.85 × 10−4 (−) | 6.73 × 10−4
F16 | −1.03 (=) | −1.03 (=) | −1.03 (=) | −1.03 (=) | −1.03 (=) | −1.03
F17 | 3.98 × 10−1 (=) | 3.98 × 10−1 (=) | 3.98 × 10−1 (=) | 3.98 × 10−1 (=) | 3.98 × 10−1 (=) | 3.98 × 10−1
F18 | 3.00 (=) | 3.00 (=) | 3.00 (=) | 3.00 (=) | 3.00 (=) | 3.00
F19 | −3.86 (=) | −3.86 (=) | −3.86 (=) | −3.86 (=) | −3.86 (=) | −3.86
F20 | −3.29 (=) | −3.28 (−) | −3.27 (−) | −3.24 (−) | −3.27 (−) | −3.29
F21 | −5.78 (−) | −6.28 (−) | −6.53 (−) | −7.03 (−) | −6.86 (−) | −8.65
F22 | −6.68 (−) | −7.68 (−) | −5.53 (−) | −8.25 (−) | −7.82 (−) | −8.80
F23 | −6.37 (−) | −8.76 (−) | −9.48 (=) | −8.67 (−) | −7.75 (−) | −9.48
Nm | 6 | 6 | 7 | 6 | 5 | 20
Final rank | 3 | 3 | 2 | 3 | 6 | 1
Nm denotes the number of functions on which the algorithm attained the best (or tied-best) mean value.
The Wilcoxon test, performed following Refs. [80,81,82,83,84], shows that the proposed algorithm decisively outperformed all the compared algorithms.
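For readers unfamiliar with the procedure, the signed-rank statistic underlying the pairwise comparisons in Table A4 and Table A5 can be computed as follows. This is a minimal pure-Python sketch for two paired samples; production work would normally use a library routine such as `scipy.stats.wilcoxon`, and the sample data below are made up.

```python
def wilcoxon_signed_rank(a, b):
    """Return (W+, W-, W): the rank sums of positive and negative paired
    differences and the test statistic W = min(W+, W-).
    Zero differences are dropped, as in the standard procedure."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    # Rank the absolute differences, averaging ranks over tie groups.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1                  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return w_plus, w_minus, min(w_plus, w_minus)

# Hypothetical per-run errors of two algorithms on the same 5 seeds:
alg_a = [0.10, 0.30, 0.20, 0.50, 0.40]
alg_b = [0.11, 0.28, 0.23, 0.46, 0.45]
print(wilcoxon_signed_rank(alg_a, alg_b))  # (6.0, 9.0, 6.0)
```

A small W (strongly unbalanced rank sums) yields the small p-values reported in Table A5, i.e., the differences between the paired algorithms are unlikely to be due to chance.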
Table A4. Wilcoxon’s test rank summary of the statistical assessment results for the classic and MTBO algorithms.
Table A4. Wilcoxon’s test rank summary of the statistical assessment results for the classic and MTBO algorithms.
FunctionGADEPSOABCSAMTBO
F1634251
F2614352
F3635241
F4635241
F5643251
F6634251
F7635241
F8521463
F9624351
F10624351
F11645321
F12625431
F13625341
F143.53.53.53.53.53.5
F15564132
F163.53.53.53.53.53.5
F173.53.53.53.53.53.5
F183.53.53.53.53.53.5
F193.53.53.53.53.53.5
F201.534.564.51.5
F21654231
F22546231
F23631.5451.5
Total11872.5091.5067.509340.50
Rank mean5.13043.15223.97832.93484.04351.7609
Final rank634251
Table A5. The competitive results of the Wilcoxon's test.

MTBO versus | p-Value | Better | Worse | Equal
SA | 1.9644 × 10−4 | 18 | 0 | 5
ABC | 3.2701 × 10−4 | 17 | 1 | 5
PSO | 0.0049 | 16 | 1 | 6
DE | 0.0074 | 16 | 2 | 5
GA | 2.9305 × 10−4 | 17 | 0 | 6
Table A6. Summary of the best results for the test functions for the classic and MTBO algorithms.

Function | GA | DE | PSO | ABC | SA | MTBO
F1 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F2 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F3 | 18.70 | 2.57 | 5.96 | 2.27 | 5.42 | 0.32
F4 | 9.26 | 2.73 | 4.90 | 2.16 | 4.07 | 0.27
F5 | 22.90 | 9.35 | 4.81 | 11.50 | 7.06 | 4.03
F6 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F7 | 0.04 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01
F8 | −9.23 × 10+3 | −9.16 × 10+3 | −9.14 × 10+3 | −8.46 × 10+3 | −9.23 × 10+3 | −9.06 × 10+3
F9 | 21.90 | 18.90 | 14.90 | 16.00 | 16.20 | 15.90
F10 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F11 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F12 | 0.31 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F13 | 9.15 | 0.01 | 0.56 | 0.00 | 0.02 | 0.00
F14 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00
F15 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F16 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03 | −1.03
F17 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40 | 0.40
F18 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00 | 3.00
F19 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86 | −3.86
F20 | −3.32 | −3.32 | −3.32 | −3.32 | −3.32 | −3.32
F21 | −1.02 × 10+1 | −1.02 × 10+1 | −1.02 × 10+1 | −1.02 × 10+1 | −1.02 × 10+1 | −1.02 × 10+1
F22 | −1.04 × 10+1 | −1.04 × 10+1 | −1.04 × 10+1 | −1.04 × 10+1 | −1.04 × 10+1 | −1.04 × 10+1
F23 | −1.05 × 10+1 | −1.05 × 10+1 | −1.05 × 10+1 | −1.05 × 10+1 | −1.05 × 10+1 | −1.05 × 10+1
Nb | 11 | 12 | 13 | 12 | 10 | 18
Final rank | 5 | 3 | 2 | 3 | 6 | 1
Nb represents the number of times with a ranking higher than the best value obtained.
Table A7. Summary of the Std. results for the test functions for the classic and MTBO algorithms.

Function | GA | DE | PSO | ABC | SA | MTBO
F1 | 0.08 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F2 | 1.80 | 0.12 | 0.73 | 0.42 | 0.37 | 0.45
F3 | 407.00 | 199.00 | 202.00 | 58.10 | 46.40 | 60.30
F4 | 2.80 | 2.50 | 2.54 | 1.27 | 2.23 | 1.34
F5 | 79.00 | 31.90 | 42.40 | 40.50 | 44.20 | 23.00
F6 | 2.85 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F7 | 0.07 | 0.01 | 0.02 | 0.01 | 0.02 | 0.01
F8 | 1080.00 | 533.00 | 684.00 | 346.00 | 1350.00 | 603.00
F9 | 14.90 | 9.02 | 9.46 | 14.60 | 16.00 | 6.52
F10 | 1.27 | 0.78 | 1.47 | 1.46 | 1.18 | 1.03
F11 | 0.95 | 0.09 | 0.24 | 0.05 | 0.05 | 0.03
F12 | 4.80 | 0.96 | 2.33 | 2.43 | 0.82 | 0.30
F13 | 12.20 | 9.28 | 11.30 | 12.50 | 12.30 | 5.10
F14 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F15 | 0.01 | 0.01 | 0.01 | 0.00 | 0.00 | 0.00
F16 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F17 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F18 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F19 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
F20 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06 | 0.06
F21 | 3.70 | 3.66 | 3.75 | 3.61 | 3.17 | 2.74
F22 | 3.83 | 3.48 | 3.68 | 3.38 | 3.62 | 2.86
F23 | 3.89 | 3.16 | 2.58 | 3.33 | 3.57 | 2.58

Appendix B. Performance Comparison of MTBO Based on CEC 2014 Test Functions

This section evaluates the MTBO algorithm's performance in solving the CEC 2014 test functions [80] (see Table A8), which comprise unimodal, simple multimodal, hybrid, and composition functions. The unimodal test functions have a single optimum and evaluate an algorithm's convergence and exploitation. Multimodal test functions have more than one optimum, only one of which is the global optimum while the rest are local optima; this poses an important challenge and makes them suitable for evaluating how well an algorithm escapes local optima to reach the global optimum. Composition test functions are combined or transformed versions of the unimodal and multimodal functions.
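The "shifted and rotated" variants in the CEC 2014 suite are built by composing a base function g with a shift vector o and an orthogonal matrix M, i.e., f(x) = g(M(x − o)), so the optimum moves to o and the variable interactions change while the optimal value is preserved. The sketch below illustrates the construction with a 2-D rotation applied to the sphere function; the angle and shift are arbitrary choices for demonstration.

```python
import math

def make_shifted_rotated(g, shift, theta):
    """Compose a 2-D base function g with a shift and a rotation:
    f(x) = g(M (x - shift)), where M is the rotation by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    def f(x):
        u = (x[0] - shift[0], x[1] - shift[1])   # shift the optimum to `shift`
        z = (c * u[0] - s * u[1], s * u[0] + c * u[1])  # rotate the coordinates
        return g(z)
    return f

sphere = lambda z: z[0] ** 2 + z[1] ** 2
f = make_shifted_rotated(sphere, shift=(3.0, -1.0), theta=math.pi / 6)
print(f((3.0, -1.0)))      # 0.0: the optimum now sits at the shift vector
print(f((0.0, 0.0)) > 0)   # True: the origin is no longer optimal
```

Because M is orthogonal, distances are preserved, so the shifted-rotated function has exactly the same optimal value as the base function; only the location and orientation of the landscape change.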
Table A8. CEC 2014 benchmark test functions.

Number | Category | Function | [Min, Max]
F1 | Unimodal | Rotated high conditioned elliptic function | [−100, 100]
F2 | Unimodal | Rotated bent cigar function | [−100, 100]
F3 | Unimodal | Rotated discus function | [−100, 100]
F4 | Simple multimodal | Shifted and rotated Rosenbrock's function | [−100, 100]
F5 | Simple multimodal | Shifted and rotated Ackley's function | [−100, 100]
F6 | Simple multimodal | Shifted and rotated Weierstrass function | [−100, 100]
F7 | Simple multimodal | Shifted and rotated Griewank's function | [−100, 100]
F8 | Simple multimodal | Shifted Rastrigin's function | [−100, 100]
F9 | Simple multimodal | Shifted and rotated Rastrigin's function | [−100, 100]
F10 | Simple multimodal | Shifted Schwefel's function | [−100, 100]
F11 | Simple multimodal | Shifted and rotated Schwefel's function | [−100, 100]
F12 | Simple multimodal | Shifted and rotated Katsuura function | [−100, 100]
F13 | Simple multimodal | Shifted and rotated HappyCat function | [−100, 100]
F14 | Simple multimodal | Shifted and rotated HGBat function | [−100, 100]
F15 | Simple multimodal | Shifted and rotated expanded Griewank's plus Rosenbrock's function | [−100, 100]
F16 | Simple multimodal | Shifted and rotated expanded Scaffer's F6 function | [−100, 100]
F17 | Hybrid | Hybrid function 1 (N = 3) | [−100, 100]
F18 | Hybrid | Hybrid function 2 (N = 3) | [−100, 100]
F19 | Hybrid | Hybrid function 3 (N = 4) | [−100, 100]
F20 | Hybrid | Hybrid function 4 (N = 4) | [−100, 100]
F21 | Hybrid | Hybrid function 5 (N = 5) | [−100, 100]
F22 | Hybrid | Hybrid function 6 (N = 5) | [−100, 100]
F23 | Composition | Composition function 1 (N = 5) | [−100, 100]
F24 | Composition | Composition function 2 (N = 3) | [−100, 100]
F25 | Composition | Composition function 3 (N = 3) | [−100, 100]
F26 | Composition | Composition function 4 (N = 5) | [−100, 100]
F27 | Composition | Composition function 5 (N = 5) | [−100, 100]
F28 | Composition | Composition function 6 (N = 5) | [−100, 100]
F29 | Composition | Composition function 7 (N = 3) | [−100, 100]
F30 | Composition | Composition function 8 (N = 3) | [−100, 100]
To solve the CEC 2014 test functions, the effectiveness of the MTBO algorithm is compared with that of the Rao-1 [81], BA [82,83], PSO [75,76], and WOA [84] algorithms. The control parameters of these algorithms were taken from their reference articles, as given in Table A9. The number of iterations is set to 5000, the population size to 60, and the dimension to 30.
Table A9. The control parameters of different algorithms for solving the CEC 2014 test functions.

Algorithm | Parameter | Value
Rao-1 [81] | Without any control parameter | –
Bat algorithm (BA) [82,83] | Loudness A | 0.25
 | Pulse rate r | 0.5
 | Scaling factor ε | 0.1
 | Minimum frequency fmin | 0.7
 | Maximum frequency fmax | 0.9
Particle swarm optimization (PSO) [75,76] | Constriction factor χ | 0.729
 | Acceleration control coefficient c1 | 2.05
 | Acceleration control coefficient c2 | 2.05
Whale optimization algorithm (WOA) [84] | Scaling factor a | [0, 2]
 | Scaling factor b | 1
 | Scaling factor l | [−1, 1]
The numerical results based on the mean value are given in Table A10. The results show the MTBO method’s superiority and competitiveness compared to other algorithms.
Table A10. Summary of the mean results for CEC 2014 test functions for different algorithms with D = 30. Categories: U = unimodal, SM = simple multimodal, H = hybrid, C = composition. The signs (+), (−), and (=) indicate that the corresponding algorithm performed better than, worse than, or equal to the MTBO algorithm, respectively.

Function | Category | Rao-1 | BA | PSO | WOA | MTBO
F1 | U | 1.77 × 10+7 (−) | 3.71 × 10+7 (−) | 3.02 × 10+7 (−) | 2.94 × 10+7 (−) | 3.30 × 10+6
F2 | U | 8.05 × 10+3 (−) | 1.91 × 10+7 (−) | 1.23 × 10+6 (−) | 4.81 × 10+6 (−) | 1.99 × 10+1
F3 | U | 3.04 × 10+4 (−) | 5.12 × 10+4 (−) | 2.87 × 10+4 (−) | 3.32 × 10+4 (−) | 1.50 × 10+3
F4 | SM | 1.04 × 10+2 (−) | 1.88 × 10+2 (−) | 1.74 × 10+2 (−) | 1.84 × 10+2 (−) | 4.47 × 10+1
F5 | SM | 2.09 × 10+1 (=) | 2.11 × 10+1 (−) | 2.09 × 10+1 (=) | 2.04 × 10+1 (+) | 2.09 × 10+1
F6 | SM | 2.83 × 10+1 (−) | 3.60 × 10+1 (−) | 3.45 × 10+1 (−) | 3.50 × 10+1 (−) | 2.29 × 10+1
F7 | SM | 6.33 × 10−2 (−) | 1.08 (−) | 8.78 × 10−1 (−) | 9.74 × 10−1 (−) | 3.14 × 10−2
F8 | SM | 1.75 × 10+2 (−) | 1.66 × 10+2 (−) | 1.93 × 10+2 (−) | 1.90 × 10+2 (−) | 9.52 × 10+1
F9 | SM | 2.16 × 10+2 (−) | 2.61 × 10+2 (−) | 2.18 × 10+2 (−) | 2.36 × 10+2 (−) | 1.02 × 10+2
F10 | SM | 6.05 × 10+3 (−) | 3.60 × 10+3 (−) | 3.95 × 10+3 (−) | 4.07 × 10+3 (−) | 1.99 × 10+3
F11 | SM | 6.90 × 10+3 (−) | 5.67 × 10+3 (−) | 4.78 × 10+3 (+) | 4.85 × 10+3 (−) | 5.37 × 10+3
F12 | SM | 2.42 (−) | 2.73 (−) | 2.88 (−) | 1.67 (+) | 2.35
F13 | SM | 4.63 × 10−1 (−) | 5.20 × 10−1 (−) | 4.91 × 10−1 (−) | 5.02 × 10−1 (−) | 4.31 × 10−1
F14 | SM | 5.44 × 10−1 (−) | 3.90 × 10−1 (−) | 3.43 × 10−1 (−) | 2.39 × 10−1 (+) | 3.03 × 10−1
F15 | SM | 1.79 × 10+1 (−) | 6.85 × 10+1 (−) | 5.71 × 10+1 (−) | 9.34 × 10+1 (−) | 1.63 × 10+1
F16 | SM | 1.28 × 10+1 (−) | 1.29 × 10+1 (−) | 1.24 × 10+1 (−) | 1.27 × 10+1 (−) | 1.13 × 10+1
F17 | H | 1.76 × 10+6 (−) | 5.77 × 10+6 (−) | 2.48 × 10+6 (−) | 3.58 × 10+6 (−) | 2.06 × 10+5
F18 | H | 4.73 × 10+6 (−) | 5.81 × 10+3 (−) | 4.45 × 10+3 (−) | 3.51 × 10+4 (−) | 2.18 × 10+3
F19 | H | 8.99 (+) | 4.57 × 10+1 (−) | 3.18 × 10+1 (−) | 4.48 × 10+1 (−) | 1.20 × 10+1
F20 | H | 5.61 × 10+3 (−) | 4.15 × 10+4 (−) | 2.24 × 10+4 (−) | 2.22 × 10+4 (−) | 7.36 × 10+2
F21 | H | 5.23 × 10+5 (−) | 8.17 × 10+5 (−) | 9.41 × 10+5 (−) | 1.05 × 10+6 (−) | 1.18 × 10+5
F22 | H | 4.55 × 10+2 (−) | 7.21 × 10+2 (−) | 7.50 × 10+2 (−) | 7.23 × 10+2 (−) | 3.82 × 10+2
F23 | C | 3.15 × 10+2 (=) | 3.44 × 10+2 (−) | 3.31 × 10+2 (−) | 3.30 × 10+2 (−) | 3.15 × 10+2
F24 | C | 2.40 × 10+2 (−) | 2.37 × 10+2 (−) | 2.34 × 10+2 (−) | 2.08 × 10+2 (+) | 2.31 × 10+2
F25 | C | 2.09 × 10+2 (+) | 2.12 × 10+2 (+) | 2.25 × 10+2 (−) | 2.20 × 10+2 (−) | 2.13 × 10+2
F26 | C | 1.01 × 10+2 (−) | 1.00 × 10+2 (=) | 1.01 × 10+2 (−) | 1.00 × 10+2 (=) | 1.00 × 10+2
F27 | C | 9.23 × 10+2 (−) | 1.15 × 10+3 (−) | 1.08 × 10+3 (−) | 9.78 × 10+2 (−) | 8.44 × 10+2
F28 | C | 1.51 × 10+3 (−) | 2.31 × 10+3 (−) | 2.26 × 10+3 (−) | 2.33 × 10+3 (−) | 1.40 × 10+3
F29 | C | 1.89 × 10+6 (+) | 4.02 × 10+6 (−) | 5.79 × 10+6 (−) | 4.88 × 10+6 (−) | 3.52 × 10+6
F30 | C | 4.09 × 10+3 (+) | 9.09 × 10+4 (−) | 6.90 × 10+4 (−) | 9.76 × 10+4 (−) | 7.06 × 10+3
Nm | | 5 | 1 | 1 | 5 | 21
Final rank | | 2.5 | 4.5 | 4.5 | 2.5 | 1
Nm denotes the number of functions on which the algorithm attained the best (or tied-best) mean value.
The Wilcoxon’s test is implemented based on Ref. [79], and the results are presented in Table A11, Table A12, Table A13 and Table A14. The resssults are clear that the MTBO algorithm has obtained the best results among 30 execution times decisively compared to all the other algorithms.
Table A11. Wilcoxon’s test rank summary of the statistical assessment results for the CEC 2014 test functions for the different algorithms with D = 30.
Table A11. Wilcoxon’s test rank summary of the statistical assessment results for the CEC 2014 test functions for the different algorithms with D = 30.
FunctionRao-1BAPSOWOAMTBO
F1Unimodal25431
F225341
F335241
F4Simple
Multimodal
25341
F535313
F625341
F725341
F832541
F925341
F1052341
F1154123
F1234512
F1325341
F1454312
F1524351
F1645231
F17Hybrid25341
F1853241
F1915342
F2025431
F2123451
F2223541
F23Composition1.55431.5
F2454312
F2512543
F264.524.522
F2725431
F2823451
F2913542
F3014352
Total79122102.510343.5
Rank mean2.63334.06673.41673.43331.4500
Final rank25341
Table A12. The competitive results of the Wilcoxon's test.

MTBO versus | p-Value | Better | Worse | Equal
WOA | 9.1900 × 10−8 | 25 | 4 | 1
PSO | 2.8088 × 10−9 | 28 | 1 | 1
BA | 2.4151 × 10−10 | 28 | 1 | 1
Rao-1 | 5.0240 × 10−5 | 24 | 4 | 2
Table A13. Summary of the best results for the CEC 2014 test functions for the different algorithms with D = 30. Categories: U = unimodal, SM = simple multimodal, H = hybrid, C = composition.

Function | Category | Rao-1 | BA | PSO | WOA | MTBO
F1 | U | 1.080 × 10+7 | 1.220 × 10+7 | 1.160 × 10+7 | 1.050 × 10+7 | 3.130 × 10+5
F2 | U | 2.680 × 10+2 | 2.080 × 10+6 | 3.670 × 10+5 | 1.140 × 10+6 | 0.58
F3 | U | 1.510 × 10+4 | 2.040 × 10+4 | 1.000 × 10+4 | 1.090 × 10+4 | 99.80
F4 | SM | 3.71 | 104.00 | 114.00 | 117.00 | 0.01
F5 | SM | 20.80 | 20.30 | 20.50 | 20.30 | 20.20
F6 | SM | 17.00 | 32.20 | 29.00 | 27.90 | 5.74
F7 | SM | 0.00 | 0.99 | 0.79 | 0.80 | 0.00
F8 | SM | 130.00 | 114.00 | 131.00 | 141.00 | 40.70
F9 | SM | 198.00 | 182.00 | 129.00 | 187.00 | 70.60
F10 | SM | 5.410 × 10+3 | 2.220 × 10+3 | 3.370 × 10+3 | 3.560 × 10+3 | 712.00
F11 | SM | 6.620 × 10+3 | 4.490 × 10+3 | 2.900 × 10+3 | 3.130 × 10+3 | 1.690 × 10+3
F12 | SM | 1.87 | 1.43 | 1.30 | 0.91 | 1.83
F13 | SM | 0.36 | 0.38 | 0.40 | 0.33 | 0.23
F14 | SM | 0.25 | 0.23 | 0.16 | 0.18 | 0.21
F15 | SM | 16.50 | 27.20 | 31.50 | 49.20 | 4.69
F16 | SM | 12.30 | 12.10 | 11.60 | 12.00 | 10.50
F17 | H | 1.070 × 10+6 | 1.560 × 10+6 | 5.550 × 10+5 | 8.400 × 10+5 | 2.620 × 10+4
F18 | H | 25,600.00 | 596.00 | 394.00 | 384.00 | 78.90
F19 | H | 5.80 | 16.70 | 17.50 | 17.70 | 8.27
F20 | H | 2.520 × 10+3 | 1.330 × 10+4 | 6.790 × 10+3 | 5.590 × 10+3 | 184.00
F21 | H | 3.450 × 10+5 | 3.480 × 10+5 | 5.250 × 10+4 | 2.880 × 10+5 | 1.010 × 10+4
F22 | H | 290.00 | 487.00 | 512.00 | 502.00 | 47.50
F23 | C | 315.00 | 322.00 | 322.00 | 322.00 | 315.00
F24 | C | 220.00 | 203.00 | 200.00 | 201.00 | 200.00
F25 | C | 205.00 | 200.00 | 200.00 | 200.00 | 200.00
F26 | C | 100.00 | 100.00 | 100.00 | 100.00 | 100.00
F27 | C | 415.00 | 440.00 | 417.00 | 420.00 | 408.00
F28 | C | 750.00 | 1740.00 | 200.00 | 1630.00 | 1020.00
F29 | C | 2.930 × 10+3 | 3.420 × 10+4 | 9.360 × 10+3 | 1.130 × 10+4 | 1.330 × 10+3
F30 | C | 1.500 × 10+3 | 2.290 × 10+4 | 2.870 × 10+4 | 2.070 × 10+4 | 1.140 × 10+3
Nm | | 3 | 3.00 | 2.00 | 5.00 | 3.00
Final rank | | 3.5 | 3.50 | 5.00 | 2.00 | 3.50
Nm represents the number of times with a ranking higher than the best value obtained.
Table A14. Summary of the Std. results for the CEC 2014 test functions for the different algorithms with D = 30. Categories: U = unimodal, SM = simple multimodal, H = hybrid, C = composition.

Function | Category | Rao-1 | BA | PSO | WOA | MTBO
F1 | U | 7.81 × 10+6 | 2.350 × 10+7 | 1.230 × 10+7 | 1.220 × 10+7 | 6.20 × 10+6
F2 | U | 6490.00 | 6.720 × 10+6 | 8.060 × 10+5 | 6.080 × 10+6 | 26.20
F3 | U | 9110.00 | 3.920 × 10+4 | 2.270 × 10+4 | 2.330 × 10+4 | 1480.00
F4 | SM | 56.70 | 69.60 | 48.80 | 65.10 | 30.40
F5 | SM | 0.06 | 0.20 | 0.15 | 0.19 | 0.04
F6 | SM | 7.43 | 5.16 | 3.43 | 4.06 | 2.82
F7 | SM | 0.11 | 0.10 | 0.07 | 0.10 | 0.03
F8 | SM | 29.00 | 32.50 | 50.80 | 30.10 | 23.80
F9 | SM | 15.50 | 82.40 | 57.20 | 56.10 | 19.10
F10 | SM | 478.00 | 702.00 | 315.00 | 568.00 | 491.00
F11 | SM | 251.00 | 838.00 | 1140.00 | 802.00 | 1940.00
F12 | SM | 0.31 | 0.59 | 0.35 | 0.56 | 0.28
F13 | SM | 0.07 | 0.24 | 0.07 | 0.16 | 0.08
F14 | SM | 0.30 | 0.06 | 0.05 | 0.05 | 0.05
F15 | SM | 0.96 | 22.80 | 14.80 | 32.80 | 7.51
F16 | SM | 0.29 | 0.67 | 0.50 | 0.44 | 0.54
F17 | H | 469,000.00 | 2.27 × 10+6 | 1.33 × 10+6 | 1.52 × 10+6 | 1.94 × 10+5
F18 | H | 8,490,000.00 | 94,400.00 | 5260.00 | 95,100.00 | 3110.00
F19 | H | 2.13 | 31.60 | 19.00 | 40.60 | 2.81
F20 | H | 2080.00 | 14,100.00 | 22,100.00 | 10,800.00 | 910.00
F21 | H | 175,000.00 | 4.450 × 10+5 | 7.120 × 10+5 | 8.790 × 10+5 | 1.200 × 10+5
F22 | H | 130.00 | 299.00 | 134.00 | 215.00 | 144.00
F23 | C | 0.00 | 5.08 | 7.27 | 7.16 | 0.00
F24 | C | 8.36 | 9.25 | 2.66 | 4.96 | 7.99
F25 | C | 2.77 | 18.30 | 21.10 | 16.50 | 3.45
F26 | C | 0.08 | 0.24 | 0.14 | 0.11 | 0.12
F27 | C | 206.00 | 600.00 | 360.00 | 387.00 | 228.00
F28 | C | 331.00 | 282.00 | 858.00 | 684.00 | 300.00
F29 | C | 3,990,000.00 | 6.80 × 10+6 | 5.03 × 10+6 | 5.16 × 10+6 | 4.95 × 10+6
F30 | C | 2360.00 | 3.740 × 10+5 | 32,200.00 | 1.19 × 10+5 | 12,300.00
Figure A3 also shows the convergence comparisons of the algorithms for multimodal functions.
Figure A3. Convergence comparisons of the algorithms for multimodal functions.

References

1. Abualigah, L.; Elaziz, M.A.; Khasawneh, A.M.; Alshinwan, M.; Ibrahim, R.A.; Al-qaness, M.A.; Mirjalili, S.; Sumari, P.; Gandomi, A.H. Meta-heuristic optimization algorithms for solving real-world mechanical engineering design problems: A comprehensive survey, applications, comparative analysis, and results. Neural Comput. Appl. 2022, 34, 1–30.
2. Yang, X.S.; He, X. Nature-inspired optimization algorithms in engineering: Overview and applications. Nat. Inspired Comput. Eng. 2016, 637, 1–20.
3. Whitley, D. A genetic algorithm tutorial. Stat. Comput. 1994, 4, 65–85.
4. Rechenberg, I. Evolutionsstrategien. In Simulationsmethoden in Der Medizin und Biologie; Springer: Berlin/Heidelberg, Germany, 1978; pp. 83–114.
5. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
6. Koza, J.R.; Poli, R. Genetic programming. In Search Methodologies; Springer: Boston, MA, USA, 2005; pp. 127–164.
7. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN'95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; IEEE: New York, NY, USA, 1995; Volume 4, pp. 1942–1948.
8. Al Salami, N.M. Ant colony optimization algorithm. UbiCC J. 2009, 4, 823–826.
9. Karaboga, D. Artificial bee colony algorithm. Scholarpedia 2010, 5, 6915.
10. Krishnanand, K.N.; Ghose, D. Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell. 2009, 3, 87–124.
11. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
12. Yang, X.S.; He, X. Firefly algorithm: Recent advances and applications. arXiv 2013, arXiv:1308.3898.
13. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70.
14. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Nonlinear Hammerstein System Identification: A Novel Application of Marine Predator Optimization Using the Key Term Separation Technique. Mathematics 2022, 10, 4217.
15. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Dwarf Mongoose Optimization Metaheuristics for Autoregressive Exogenous Model Identification. Mathematics 2022, 10, 3821.
16. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Raja, M.A.Z.; Cheema, K.M.; Milyani, A.H. Design of Aquila Optimization Heuristic for Identification of Control Autoregressive Systems. Mathematics 2022, 10, 1749.
17. Malik, N.A.; Chang, C.-L.; Chaudhary, N.I.; Khan, Z.A.; Raja, M.A.Z.; Kiani, A.K.; Milyani, A.H.; Azhari, A.A. Parameter estimation of harmonics arising in electrical instruments of smart grids using cuckoo search heuristics. Front. Energy Res. 2022, 10, 1733.
18. Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Mehmood, A.; Shah, S.M. Design of fractional hierarchical gradient descent algorithm for parameter estimation of nonlinear control autoregressive systems. Chaos Solitons Fractals 2022, 157, 111913.
19. Khan, W.U.; He, Y.; Raja, M.A.Z.; Chaudhary, N.I.; Khan, Z.A.; Shah, S.M. Flower Pollination Heuristics for Nonlinear Active Noise Control Systems. CMC-Comput. Mater. Contin. 2021, 67, 815–834.
20. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
21. Van Laarhoven, P.J.; Aarts, E.H. Simulated annealing. In Simulated Annealing: Theory and Applications; Springer: Dordrecht, The Netherlands, 1987; pp. 7–15.
22. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl. Based Syst. 2019, 163, 283–304.
23. Yadav, A. AEFA: Artificial electric field algorithm for global optimization. Swarm Evol. Comput. 2019, 48, 93–108.
24. Rezaee Jordehi, A. A chaotic-based big bang–big crunch algorithm for solving global optimisation problems. Neural Comput. Appl. 2014, 25, 1329–1335.
25. Du, H.; Wu, X.; Zhuang, J. Small-world optimization algorithm for function optimization. In Proceedings of the International Conference on Natural Computation, Xi'an, China, 24–28 September 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 264–273.
26. Shah-Hosseini, H. Principal components analysis by the galaxy-based search algorithm: A novel metaheuristic for continuous optimisation. Int. J. Comput. Sci. Eng. 2011, 6, 132–140.
27. Hatamlou, A. Black hole: A new heuristic optimization approach for data clustering. Inf. Sci. 2013, 222, 175–184.
28. Doğan, B.; Ölmez, T. A new metaheuristic for numerical function optimization: Vortex Search algorithm. Inf. Sci. 2015, 293, 125–145.
29. Birbil, Ş.İ.; Fang, S.C. An electromagnetism-like mechanism for global optimization. J. Glob. Optim. 2003, 25, 263–282.
30. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
31. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68.
32. Glover, F. Tabu search—Part I. ORSA J. Comput. 1989, 1, 190–206.
33. He, S.; Wu, Q.H.; Saunders, J.R. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Trans. Evol. Comput. 2009, 13, 973–990.
34. Xing, B.; Gao, W.J. Imperialist competitive algorithm. In Innovative Computational Intelligence: A Rough Guide to 134 Clever Algorithms; Springer: Cham, Switzerland, 2014; pp. 203–209.
35. Kashan, A.H. League championship algorithm: A new algorithm for numerical function optimization. In Proceedings of the International Conference of Soft Computing and Pattern Recognition, Malacca, Malaysia, 4–7 December 2009; IEEE: New York, NY, USA, 2009; pp. 43–48.
36. Tan, Y.; Zhu, Y. Fireworks algorithm for optimization. In Proceedings of the International Conference in Swarm Intelligence, Brussels, Belgium, 8–10 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 355–364.
37. Moosavian, N.; Roodsari, B.K. Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks. Swarm Evol. Comput. 2014, 17, 14–24.
38. Dai, C.; Zhu, Y.; Chen, W. Seeker optimization algorithm. In Proceedings of the International Conference on Computational and Information Science, Reading, UK, 28–31 May 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 167–176.
39. Ghorbani, N.; Babaei, E. Exchange market algorithm. Appl. Soft Comput. 2014, 19, 177–187.
40. Eita, M.A.; Fahmy, M.M. Group counseling optimization. Appl. Soft Comput. 2014, 22, 585–604.
41. Dehghani, M.; Trojovská, E.; Trojovský, P. A new human-based metaheuristic algorithm for solving optimization problems on the base of simulation of driving training process. Sci. Rep. 2022, 12, 9924.
42. Alba, E.; Dorronsoro, B. The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. IEEE Trans. Evol. Comput. 2005, 9, 126–142.
43. Lin, L.; Gen, M. Auto-tuning strategy for evolutionary algorithms: Balancing between exploration and exploitation. Soft Comput. 2009, 13, 157–168.
44. Chen, J.; Xin, B.; Peng, Z.; Dou, L.; Zhang, J. Optimal contraction theorem for exploration–exploitation tradeoff in search and optimization. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2009, 39, 680–691.
45. Macready, W.G.; Wolpert, D.H. Bandit problems and the exploration/exploitation tradeoff. IEEE Trans. Evol. Comput. 1998, 2, 2–22.
46. Sadoughi, M.K.; Hu, C.; MacKenzie, C.A.; Eshghi, A.T.; Lee, S. Sequential exploration-exploitation with dynamic trade-off for efficient reliability analysis of complex engineered systems. Struct. Multidiscip. Optim. 2018, 57, 235–250.
47. Xiong, N.; Molina, D.; Ortiz, M.L.; Herrera, F. A walk into metaheuristics for engineering optimization: Principles methods and recent trends. Int. J. Comput. Intell. Syst. 2015, 8, 606–636.
48. Sun, H.; Ebadi, A.G.; Toughani, M.; Nowdeh, S.A.; Naderipour, A.; Abdullah, A. Designing framework of hybrid photovoltaic-biowaste energy system with hydrogen storage considering economic and technical indices using whale optimization algorithm. Energy 2022, 238, 121555.
49. Naderipour, A.; Nowdeh, S.A.; Saftjani, P.B.; Abdul-Malek, Z.; Mustafa, M.W.B.; Kamyab, H.; Davoudkhani, I.F. Deterministic and probabilistic multi-objective placement and sizing of wind renewable energy sources using improved spotted hyena optimizer. J. Clean. Prod. 2021, 286, 124941.
50. Naderipour, A.; Abdul-Malek, Z.; Noorden, Z.A.; Davoudkhani, I.F.; Nowdeh, S.A.; Kamyab, H.; Chelliapan, S.; Ghiasi, S.M.S. Carrier wave optimization for multi-level photovoltaic system to improvement of power quality in industrial environments based on Salp swarm algorithm. Environ. Technol. Innov. 2021, 21, 101197.
51. Babanezhad, M.; Nowdeh, S.A.; Abdelaziz, A.Y.; AboRas, K.M.; Kotb, H. Reactive power based capacitors allocation in distribution network using mathematical remora optimization algorithm considering operation cost and loading conditions. Alex. Eng. J. 2022, 61, 10511–10526.
52. Alanazi, A.; Alanazi, M.; Nowdeh, S.A.; Abdelaziz, A.Y.; El-Shahat, A. An optimal sizing framework for autonomous photovoltaic/hydrokinetic/hydrogen energy system considering cost, reliability and forced outage rate using horse herd optimization. Energy Rep. 2022, 8, 7154–7175.
53. Moghaddam, M.J.H.; Kalam, A.; Nowdeh, S.A.; Ahmadi, A.; Babanezhad, M.; Saha, S. Optimal sizing and energy management of stand-alone hybrid photovoltaic/wind system based on hydrogen storage considering LOEE and LOLE reliability indices using flower pollination algorithm. Renew. Energy 2019, 135, 1412–1434.
54. Nowdeh, S.A.; Davoudkhani, I.F.; Moghaddam, M.H.; Najmi, E.S.; Abdelaziz, A.Y.; Ahmadi, A.; Razavi, S.; Gandoman, F.H. Fuzzy multi-objective placement of renewable energy sources in distribution system with objective of loss reduction and reliability improvement using a novel hybrid method. Appl. Soft Comput. 2019, 77, 761–779.
55. Jahannoosh, M.; Nowdeh, S.A.; Naderipour, A.; Kamyab, H.; Davoudkhani, I.F.; Klemeš, J.J. New hybrid meta-heuristic algorithm for reliable and cost-effective designing of photovoltaic/wind/fuel cell energy system considering load interruption probability. J. Clean. Prod. 2021, 278, 123406.
56. Jafar-Nowdeh, A.; Babanezhad, M.; Arabi-Nowdeh, S.; Naderipour, A.; Kamyab, H.; Abdul-Malek, Z.; Ramachandaramurthy, V.K. Meta-heuristic matrix moth–flame algorithm for optimal reconfiguration of distribution networks and placement of solar and wind renewable sources considering reliability. Environ. Technol. Innov. 2020, 20, 101118.
57. Jahannoush, M.; Nowdeh, S.A. Optimal designing and management of a stand-alone hybrid energy system using meta-heuristic improved sine–cosine algorithm for Recreational Center case study for Iran country. Appl. Soft Comput. 2020, 96, 106611.
58. Naderipour, A.; Abdul-Malek, Z.; Vahid, M.Z.; Seifabad, Z.M.; Hajivand, M.; Arabi-Nowdeh, S. Optimal reliable and cost-effective framework of photovoltaic-wind-battery energy system design considering outage concept using grey wolf optimizer algorithm—Case study for Iran. IEEE Access 2019, 7, 182611–182623.
59. Davoodkhani, F.; Arabi Nowdeh, S.; Abdelaziz, A.Y.; Mansoori, S.; Nasri, S.; Alijani, M. A new hybrid method based on gray wolf optimizer-crow search algorithm for maximum power point tracking of photovoltaic energy system. In Modern Maximum Power Point Tracking Techniques for Photovoltaic Energy Systems; Springer: Cham, Switzerland, 2020; pp. 421–438.
60. Arabi Nowdeh, S.; Moghaddam, M.J.H.; Nasri, S.; Abdelaziz, A.Y.; Ghanbari, M.; Faraji, I. A new hybrid moth flame optimizer-perturb and observe method for maximum power point tracking in photovoltaic energy system. In Modern Maximum Power Point Tracking Techniques for Photovoltaic Energy Systems; Springer: Cham, Switzerland, 2020; pp. 401–420.
61. Shakarami, M.R.; Davoudkhani, I.F. Wide-area power system stabilizer design based on grey wolf optimization algorithm considering the time delay. Electr. Power Syst. Res. 2016, 133, 149–159.
62. Ghasemi, M.; Davoudkhani, I.F.; Akbari, E.; Rahimnejad, A.; Ghavidel, S.; Li, L. A novel and effective optimization algorithm for global optimization and its engineering applications: Turbulent Flow of Water-based Optimization (TFWO). Eng. Appl. Artif. Intell. 2020, 92, 103666.
  63. Hadidian-Moghaddam, M.J.; Arabi-Nowdeh, S.; Bigdeli, M.; Azizian, D. A multi-objective optimal sizing and siting of distributed generation using ant lion optimization technique. Ain Shams Eng. J. 2018, 9, 2101–2109. [Google Scholar] [CrossRef]
  64. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  65. Bedogni, V.; Manes, A. A constitutive equation for the behaviour of a mountaineering rope under stretching during a climber’s fall. Procedia Eng. 2011, 10, 3353–3358. [Google Scholar] [CrossRef] [Green Version]
  66. Landrø, M.; Pfuhl, G.; Engeset, R.; Jackson, M.; Hetland, A. Avalanche decision-making frameworks: Classification and description of underlying factors. Cold Reg. Sci. Technol. 2020, 169, 102903. [Google Scholar] [CrossRef]
  67. Wickens, C.D.; Keller, J.W.; Shaw, C. Human factors in high-altitude mountaineering. J. Hum. Perform. Extrem. Environ. 2015, 12, 1. [Google Scholar] [CrossRef]
  68. Karaboga, D.; Basturk, B. On the performance of artificial bee colony (ABC) algorithm. Appl. Soft Comput. 2008, 8, 687–697. [Google Scholar] [CrossRef]
  69. Belegundu, A.D.; Arora, J.S. A study of mathematical programming methods for structural optimization. Part I: Theory. Int. J. Numer. Methods Eng. 1985, 21, 1583–1599. [Google Scholar] [CrossRef]
  70. Nowacki, H. Optimization in pre-contract ship design. In Proceedings of the International Conference on Computer Applications in the Automation of Shipyard Operation and Ship Design, Tokyo, Japan, 28–30 August 1973. [Google Scholar]
  71. Sadollah, A.; Bahreininejad, A.; Eskandar, H.; Hamdi, M. Mine blast algorithm: A new population based algorithm for solving constrained engineering optimization problems. Appl. Soft Comput. 2013, 13, 2592–2612. [Google Scholar] [CrossRef]
  72. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  73. Kannan, B.K.; Kramer, S.N. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J. Mech. Des. 1994, 116, 405–411. [Google Scholar] [CrossRef]
  74. Goldberg, D.E.; Holland, J.H. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99. [Google Scholar] [CrossRef]
  75. Storn, R.; Price, K. Differential evolution–A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  76. Clerc, M.; Kennedy, J. The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [Google Scholar] [CrossRef] [Green Version]
  77. Ghasemi, M.; Akbari, E.; Rahimnejad, A.; Razavi, S.E.; Ghavidel, S.; Li, L. Phasor particle swarm optimization: A simple and efficient variant of PSO. Soft Comput. 2019, 23, 9701–9718. [Google Scholar] [CrossRef]
  78. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef]
  79. Wang, H.; Rahnamayan, S.; Sun, H.; Omran, M.G. Gaussian bare-bones differential evolution. IEEE Trans. Cybern. 2013, 43, 634–647. [Google Scholar] [CrossRef]
  80. Liang, J.J.; Qu, B.Y.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the CEC 2014 Special Session and Competition on Single Objective Real-Parameter Numerical Optimization; Technical Report; Computational Intelligence Laboratory, Zhengzhou University: Zhengzhou, China; Nanyang Technological University: Singapore, 2013. [Google Scholar]
  81. Rao, R. Rao algorithms: Three metaphor-less simple algorithms for solving optimization problems. Int. J. Ind. Eng. Comput. 2020, 11, 107–130. [Google Scholar] [CrossRef]
  82. Yang, X.-S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Cruz, C., Gonzalez, J., Krasnogor, N., Terraza, G., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74, SCI 284. [Google Scholar]
  83. Yang, X.-S.; Gandomi, A.H. Bat Algorithm: A Novel Approach for Global Engineering Optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef] [Green Version]
  84. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
Figure 1. Categories of metaheuristic optimization algorithms.
Figure 2. Common threats that mountaineers may face when attempting to reach the mountain peak.
Figure 3. Flowchart of the proposed MTBO algorithm.
Figure 4. Three-dimensional specification of (a) F1, (b) F2, and (c) F3.
Figure 5. Convergence characteristics of the MTBO with population sizes from 15 to 90 for (a) F1, (b) F2, and (c) F3.
Figure 6. Problem of tension/compression spring design.
Figure 7. Convergence process for the tension/compression spring problem using different algorithms.
Figure 8. Problem of three-bar truss design. The numbers 1 to 3 label the truss elements.
Figure 9. Convergence graphs for the three-bar truss design problem using different algorithms.
Figure 10. Problem of the pressure vessel optimization.
Figure 11. Convergence graphs for the pressure vessel optimization problem using different algorithms.
Table 1. Summary of the selected test functions with fmin = 0.

Test Function | Search Range
f1(x) = Σ_{i=1}^{D} x_i^2 | [−100, 100]
f2(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | [−2.048, 2.048]
f3(x) = Σ_{i=1}^{D} [x_i^2 − 10 cos(2πx_i) + 10] | [−5.12, 5.12]
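The three test functions above can be sketched directly in Python. This is an illustrative implementation of the formulas in Table 1 (these forms are the classical Sphere, Rosenbrock, and Rastrigin benchmarks), not code from the paper:

```python
import math

def f1(x):
    # Sphere function: unimodal, global minimum 0 at x = (0, ..., 0)
    return sum(v * v for v in x)

def f2(x):
    # Rosenbrock function: narrow curved valley, global minimum 0 at x = (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def f3(x):
    # Rastrigin function: highly multimodal, global minimum 0 at x = (0, ..., 0)
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Each function is evaluated on a D-dimensional point restricted to the search range listed in the table.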
Table 2. Mean value and standard deviation for 30 independent executions for each test function (R = rank).

Population | F1 Mean | F1 Std. | R | F2 Mean | F2 Std. | R | F3 Mean | F3 Std. | R
15 | 0.5449 | 0.8326 | 6 | 32.9596 | 8.1957 | 6 | 49.9483 | 9.9646 | 6
30 | 9.81 × 10−7 | 2.62 × 10−6 | 5 | 23.8511 | 2.3849 | 5 | 33.6374 | 17.2550 | 5
45 | 1.67 × 10−11 | 2.19 × 10−11 | 4 | 22.8453 | 2.0032 | 4 | 25.9691 | 11.4754 | 4
60 | 1.33 × 10−15 | 3.95 × 10−15 | 3 | 21.9909 | 0.6714 | 3 | 19.2027 | 5.3488 | 3
75 | 4.76 × 10−20 | 1.09 × 10−19 | 2 | 21.6596 | 0.9834 | 1 | 16.9143 | 4.2987 | 1
90 | 1.83 × 10−23 | 5.52 × 10−23 | 1 | 21.8772 | 0.9459 | 2 | 18.0088 | 5.1476 | 2
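The Mean, Std., and rank (R) columns reported in this and the following parameter-tuning tables can be reproduced from the raw per-run results with a small helper. The function and dictionary names below are illustrative, not from the paper:

```python
import statistics

def summarize(results_by_setting):
    """Given {setting: [best objective value of each independent run, ...]},
    return {setting: (mean, std, rank)}, where rank 1 is the lowest mean."""
    stats = {s: (statistics.mean(runs), statistics.stdev(runs))
             for s, runs in results_by_setting.items()}
    # rank settings by ascending mean (minimization problems)
    ordered = sorted(stats, key=lambda s: stats[s][0])
    return {s: (*stats[s], ordered.index(s) + 1) for s in stats}
```

For example, collecting 30 runs of MTBO per population size on F1 and passing the dictionary to `summarize` yields the three columns shown under F1 above.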
Table 3. Various possibilities to continue the regular movement of the population (R = rank).

Li | F1 Mean | F1 Std. | R | F2 Mean | F2 Std. | R | F3 Mean | F3 Std. | R
(0.25 + 0.25 × rand) | 1.33 × 10−15 | 3.95 × 10−15 | 1 | 21.9909 | 0.6714 | 1 | 19.2027 | 5.3488 | 3
(0.5 + 0.5 × rand) | 8.6845 | 9.4542 | 6 | 31.0271 | 4.4540 | 6 | 41.7866 | 11.7440 | 6
(0.25 + 0.5 × rand) | 8.41 × 10−15 | 1.51 × 10−14 | 2 | 22.3998 | 1.2361 | 3 | 29.0528 | 12.9879 | 4
0.5 × rand | 3.42 × 10−12 | 8.85 × 10−12 | 4 | 22.3536 | 1.1654 | 2 | 16.5163 | 8.3666 | 2
rand | 1.41 × 10−14 | 4.21 × 10−14 | 3 | 23.1929 | 1.7868 | 4 | 29.6497 | 6.9378 | 5
0.1 | 1.09 × 10−7 | 1.70 × 10−7 | 5 | 23.4122 | 1.0828 | 5 | 14.6635 | 7.4747 | 1
0.9 | 389.8292 | 365.6368 | 7 | 67.3013 | 33.3700 | 7 | 57.9233 | 17.7395 | 7
Table 4. Various possibilities for the occurrence of an avalanche (R = rank).

Ai | F1 Mean | F1 Std. | R | F2 Mean | F2 Std. | R | F3 Mean | F3 Std. | R
(0.5 + 0.5 × rand) | 1.33 × 10−15 | 3.95 × 10−15 | 4 | 21.9909 | 0.6714 | 3 | 19.2027 | 5.3488 | 1
(0.5 + 0.25 × rand) | 4.85 × 10−12 | 7.99 × 10−12 | 5 | 22.6942 | 0.9995 | 4 | 23.6800 | 12.5486 | 2
(0.75 + 0.25 × rand) | 2.28 × 10−18 | 6.09 × 10−18 | 3 | 21.7711 | 1.3563 | 2 | 26.7644 | 5.1901 | 3
(0.9 + 0.1 × rand) | 4.70 × 10−21 | 1.13 × 10−20 | 1 | 22.7972 | 2.6426 | 5 | 51.6382 | 22.0165 | 6
0.1 | 284.0440 | 242.0450 | 6 | 41.3510 | 6.7514 | 6 | 34.0024 | 12.2198 | 5
0.9 | 2.38 × 10−20 | 6.09 × 10−20 | 2 | 21.2388 | 1.1805 | 1 | 32.3361 | 11.3176 | 4
Table 5. Various possibilities for the rescue of an individual (R = rank).

Mi | F1 Mean | F1 Std. | R | F2 Mean | F2 Std. | R | F3 Mean | F3 Std. | R
(0.5 + 0.5 × rand) | 1.33 × 10−15 | 3.95 × 10−15 | 4 | 21.9909 | 0.6714 | 3 | 19.2027 | 5.3488 | 1
(0.5 + 0.25 × rand) | 6.64 × 10−15 | 2.10 × 10−14 | 6 | 22.0837 | 0.9729 | 4 | 28.5554 | 9.8614 | 5
(0.75 + 0.25 × rand) | 8.66 × 10−17 | 1.43 × 10−16 | 2 | 21.9980 | 1.2873 | 2 | 26.7644 | 16.5657 | 3
(0.9 + 0.1 × rand) | 2.84 × 10−15 | 6.05 × 10−15 | 5 | 21.9107 | 1.1549 | 1 | 27.5603 | 15.6338 | 4
0.1 | 1.51 × 10−18 | 3.27 × 10−18 | 1 | 22.5299 | 0.5712 | 6 | 40.5943 | 10.7754 | 6
0.9 | 1.29 × 10−15 | 3.96 × 10−15 | 3 | 22.2802 | 0.8322 | 5 | 24.3765 | 6.1735 | 2
Table 6. Tension/compression spring optimal design problem.

Variable | MTBO
x1 | 0.05173
x2 | 0.35771
x3 | 11.2312
g1 | −1.02 × 10−5
g2 | −3.49 × 10−6
g3 | −4.06
g4 | −0.73
Best | 0.012665
Mean | 0.012684
Worst | 0.012702
Std. | 5.60 × 10−5
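The reported solution can be sanity-checked against the standard formulation of the tension/compression spring problem (objective and constraints as commonly given in the structural-optimization literature; the paper's own statement of the problem is not reproduced in this table, so the formulas below are an assumption). Because the decision variables are printed to only five decimals, the near-active constraints g1 and g2 may deviate from zero by roughly 10−5:

```python
def spring_weight(x1, x2, x3):
    # Objective: minimize (x3 + 2) * x2 * x1^2
    # x1 = wire diameter, x2 = mean coil diameter, x3 = number of active coils
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x1, x2, x3):
    # Standard constraint set, each required to satisfy g_i(x) <= 0
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return g1, g2, g3, g4

# MTBO solution from Table 6, at the printed precision
x = (0.05173, 0.35771, 11.2312)
f = spring_weight(*x)        # close to the reported best of 0.012665
g = spring_constraints(*x)   # g1, g2 near-active; g3, g4 comfortably negative
```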
Table 7. Statistical results for the tension/compression spring problem by the studied algorithms.

Algorithm | Best | Mean | Worst | Std. | p-Values
MTBO | 0.012665 | 0.012684 | 0.012702 | 5.60 × 10−5 | --
Rao-1 | 0.012666 | 0.012725 | 0.012875 | 7.94 × 10−5 | 0.020840
BA | 0.012666 | 0.013495 | 0.016673 | 9.18 × 10−3 | 0.009918
PSO | 0.012675 | 0.012728 | 0.012899 | 4.26 × 10−4 | 0.009235
WOA | 0.012672 | 0.012711 | 0.012946 | 1.84 × 10−3 | 0.008167
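The p-values in this and the later comparison tables come from the Wilcoxon signed-rank test applied to the paired per-run results of MTBO versus each competitor. A minimal self-contained sketch of that test, using the normal approximation for the two-sided p-value, is shown below; the authors' exact procedure (and production implementations such as `scipy.stats.wilcoxon`) may handle ties and small samples more carefully:

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test for paired samples a, b.
    Zero differences are dropped; p-value uses the normal approximation."""
    d = [x - y for x, y in zip(a, b) if x != y]
    n = len(d)
    # Rank |d|, assigning average ranks to ties
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    w_minus = sum(r for r, v in zip(ranks, d) if v < 0)
    w = min(w_plus, w_minus)
    # Normal approximation: z-score of W against its null mean and variance
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mu) / sigma
    p = 1.0 + math.erf(-abs(z) / math.sqrt(2.0))  # 2 * Phi(-|z|)
    return w, p
```

A small p-value (e.g., below 0.05, as in every row above) indicates that the paired difference between MTBO and the competing algorithm is statistically significant.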
Table 8. Three-bar truss structure optimal design problem using MTBO.

Variable | MTBO
x1 | 0.78868
x2 | 0.40825
g1(X) | −2.52
g2(X) | −1.4639
g3(X) | −0.5360
Best | 263.8958434
Mean | 263.895844
Worst | 263.8958442
Std. | 7.19 × 10−7
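The objective value in Table 8 can be verified against the standard three-bar truss formulation (bar length l = 100 cm, load P = 2 kN/cm², allowable stress σ = 2 kN/cm²); these constants and the stress constraint below are assumptions taken from the usual statement of the problem, since the table does not reproduce the objective itself:

```python
import math

def truss_volume(x1, x2, l=100.0):
    # Objective: minimize (2*sqrt(2)*x1 + x2) * l, where x1 and x2 are the
    # cross-sectional areas of the outer and middle bars
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

def truss_stress_constraint(x1, x2, P=2.0, sigma=2.0):
    # First stress constraint, g(X) <= 0: stress in the most loaded bar
    num = math.sqrt(2.0) * x1 + x2
    den = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    return num / den * P - sigma

x1, x2 = 0.78868, 0.40825             # MTBO solution from Table 8
f = truss_volume(x1, x2)              # close to the reported best 263.8958
g = truss_stress_constraint(x1, x2)   # near-active at the optimum
```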
Table 9. Statistical results for the three-bar truss structure optimal design problem by the studied algorithms.

Algorithm | Best | Mean | Worst | Std. | p-Values
MTBO | 263.8958434 | 263.895844 | 263.8958444 | 7.19 × 10−7 | --
Rao-1 | 263.8958441 | 263.897012 | 263.897528 | 2.58 × 10−3 | 0.00481
BA | 263.8958449 | 263.910134 | 263.931824 | 8.29 × 10−3 | 2.8772 × 10−4
PSO | 263.8958608 | 263.898057 | 263.941265 | 1.67 × 10−2 | 9.3245 × 10−5
WOA | 263.8958505 | 263.897963 | 263.928429 | 6.43 × 10−3 | 8.4932 × 10−5
Table 10. Pressure vessel optimal design problem using MTBO.

Variable | MTBO
x1 | 0.8125
x2 | 0.4375
x3 | 42.09845
x4 | 1.76637
g1(X) | 0.0
g2(X) | −0.036
g3(X) | −3.5 × 10−10
g4(X) | −63.40
Best | 6059.714335
Mean | 6168.7825
Worst | 6304.2583
Std. | 95.37
Table 11. Statistical results for the pressure vessel optimal design problem by the studied algorithms.

Algorithm | Best | Mean | Worst | Std. | p-Values
MTBO | 6059.714335 | 6168.7825 | 6304.2583 | 95.37 | --
Rao-1 | 6059.714335 | 6182.7054 | 6391.1278 | 242.93 | 0.020568
BA | 6059.714335 | 6195.1006 | 6325.3192 | 308.64 | 0.0164401
PSO | 6061.592462 | 7982.6379 | 9296.1815 | 693.51 | 0.0065754
WOA | 6059.715963 | 6314.8562 | 7142.5356 | 500.78 | 0.0081667
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Faridmehr, I.; Nehdi, M.L.; Davoudkhani, I.F.; Poolad, A. Mountaineering Team-Based Optimization: A Novel Human-Based Metaheuristic Algorithm. Mathematics 2023, 11, 1273. https://0-doi-org.brum.beds.ac.uk/10.3390/math11051273
