3/23/2023

Sequential sampling

In this paper, we propose a simple global optimisation algorithm inspired by Pareto's principle. The algorithm samples most of its solutions within prominent search domains and is equipped with a self-adaptive mechanism that controls the dynamic tightening of the prominent domains as the greediness of the algorithm increases over time (iterations). Unlike traditional metaheuristics, the proposed method has no direct mutation- or crossover-like operations. It depends solely on sequential random sampling, used for both diversification and intensification, while keeping the information flow between generations and the structural bias at a minimum. By using a simple topology, the algorithm avoids premature convergence by sampling new solutions every generation. A simple theoretical derivation revealed that the exploration of this approach is unbiased and that the rate of diversification is constant during the runtime. The trade-off between diversification and intensification is explained theoretically and experimentally. The proposed approach has been benchmarked against standard optimisation problems as well as a selected set of simple and complex engineering applications. We used 26 standard benchmarks with different properties that cover most classes of optimisation problems, three traditional engineering problems, and one complex real-world engineering problem from the state-of-the-art literature. The algorithm performs well in finding global minima for nonconvex and multimodal functions, especially in high-dimensional problems, and it proved very competitive in comparison with recent algorithmic proposals. Moreover, the algorithm outperforms and scales better than recent algorithms when benchmarked under a limited number of iterations on the composite CEC2017 problems. Its design is kept simple so that it can be easily coupled or hybridised with other search paradigms. The code of the algorithm is provided in C++14, Python3.7, and Octave (Matlab).

Over the past few decades, global optimisation techniques for solving combinatorial problems have flourished. The ever-increasing complexity of engineering applications means that variable sets are growing larger, and the landscapes that these optimisation problems must explore are becoming increasingly complicated. Many metaheuristic algorithms have been developed to solve optimisation problems by mimicking biological and physical analogies (Ser et al. 2019), including genetic algorithms (GAs) (Holland 1992), particle swarm optimisation (PSO) (Kennedy and Eberhart 1995) and its recent generalised version (GEPSO) (Sedighizadeh et al. 2021), and harmony search (HS) (Geem et al. 2002) with its later modifications by Geem and Sim (2010), Shaqfa and Orbán (2019), and Jeong et al. (2020), as well as the recent whale optimisation algorithm (WOA) (Mirjalili and Lewis 2016) and the pathfinder algorithm (PFA) (Yapici and Cetinkaya 2019), to mention but a few. The candidate problems usually range from continuous differentiable problems to discrete, noisy, and even loosely defined objectives, such as those arising in engineering applications. Sequential sampling was used intensively in the second half of the last century for estimating the distributions of unknown functions, and the influence of such methods still echoes strongly in current engineering metamodels (Jin et al.). Holistically, metaheuristics can be seen as performance-driven sequential sampling processes (Markovian processes) (Yang et al.). As the purpose of this paper is not to review all the optimisation algorithms that employ sequential sampling techniques, we refer readers to de Mello and Bayraksan (2014) for a comprehensive review of algorithms that use the Monte Carlo sampling approach for solving optimisation problems.
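The sampling scheme described above can be sketched in Python. This is a minimal illustration under stated assumptions, not the paper's actual formulation: the parameter name alpha (the fraction of samples drawn inside the prominent domain), the linear tightening schedule, and the function and argument names are all hypothetical choices introduced here for clarity.

```python
import random

def pareto_like_sampling(f, bounds, pop_size=20, iters=200, alpha=0.8, seed=0):
    """Minimise f over box constraints with a Pareto-like sequential sampler.

    Hypothetical parameters (assumptions, not the paper's notation):
      alpha - fraction of samples drawn inside the 'prominent' domain;
      the prominent domain tightens linearly as iterations progress,
      modelling the increasing greediness over time.
    """
    rng = random.Random(seed)
    # Start from one uniformly sampled incumbent solution.
    best_x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_f = f(best_x)
    for t in range(iters):
        # Tighten the prominent domain around the current best as t grows.
        width = [(hi - lo) * (1.0 - t / iters) * 0.5 for lo, hi in bounds]
        for _ in range(pop_size):
            x = []
            for d, (lo, hi) in enumerate(bounds):
                if rng.random() < alpha:
                    # Most samples fall inside the prominent (tightened) domain,
                    # clamped back into the feasible box.
                    v = best_x[d] + rng.uniform(-width[d], width[d])
                    x.append(min(hi, max(lo, v)))
                else:
                    # The remainder are uniform over the full domain,
                    # keeping exploration unbiased at a constant rate.
                    x.append(rng.uniform(lo, hi))
            fx = f(x)
            if fx < best_f:
                best_f, best_x = fx, x
    return best_x, best_f
```

Because every generation resamples fresh solutions rather than mutating stored ones, the only information carried between generations in this sketch is the location of the prominent domain, which mirrors the minimal information flow the method aims for.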