Browsing by Author "Song, Wei"
Now showing 1-3 of 3
Item (Open Access): Learning to guide particle search for dynamic multi-objective optimization (IEEE, 2024-02-23)
Song, Wei; Liu, Shaocong; Wang, Xinjie; Yang, Shengxiang; Jin, Yaochu

Dynamic multiobjective optimization problems (DMOPs) are characterized by multiple objectives that change over time in varying environments. More specifically, environmental changes can be described as various dynamics. However, it is difficult for existing dynamic multiobjective algorithms (DMOAs) to handle DMOPs because they cannot learn in different environments to guide the search. Moreover, solving DMOPs is typically an online task, requiring a DMOA with low computational cost. To address these challenges, we propose a particle search guidance network (PSGN) capable of directing individuals' search actions, including learning-target selection and acceleration-coefficient control. PSGN can learn the actions that should be taken in each environment by rewarding or punishing the network through reinforcement learning, and is thus capable of tackling DMOPs with various dynamics. Additionally, we efficiently adjust the PSGN hidden nodes and update the output weights in an incremental learning manner, enabling PSGN to direct particle search at low computational cost. We compare the proposed PSGN with seven state-of-the-art algorithms, and its excellent performance verifies that it can handle DMOPs with various dynamics in a highly efficient way.

Item (Open Access): Multi-region trend prediction strategy with online sequential extreme learning machine for dynamic multi-objective optimization (IEEE, 2024-08-07)
Song, Wei; Liu, Shaocong; Yu, Hongbin; Guo, Yinan; Yang, Shengxiang

Dynamic multi-objective optimization problems (DMOPs) involve multiple conflicting and time-varying objectives, requiring dynamic multi-objective algorithms (DMOAs) to track changing Pareto-optimal fronts. Over the past decade, prediction-based DMOAs have shown promise in handling DMOPs.
However, existing prediction-based DMOAs generally rely on a few specific solutions from a small number of prior environments, making it difficult for them to capture changes in the Pareto-optimal set (POS) accurately. In addition, gaps may exist in some objective subspaces due to uneven population distribution, making these subspaces hard to search. Faced with these difficulties, this article proposes a multi-region trend prediction strategy-based dynamic multi-objective evolutionary algorithm (MTPS-DMOEA) to handle DMOPs. MTPS-DMOEA divides the objective space into multiple subspaces and predicts POS moving trends using POS center points from multiple objective subspaces, which helps capture POS changes accurately. In MTPS-DMOEA, the parameters of the prediction model are continuously updated via an online sequential extreme learning machine, making full use of the information in historical environments and hence enhancing the generalization performance of the prediction. To fill gaps in some objective subspaces, MTPS-DMOEA introduces diverse solutions generated from the previous POS in adjacent subspaces. We compare the proposed MTPS-DMOEA with six state-of-the-art DMOAs on fourteen benchmark test problems, and the experimental results demonstrate the excellent performance of MTPS-DMOEA in handling DMOPs.

Item (Open Access): Particle search control network for dynamic optimization (IEEE, 2024-08)
Song, Wei; Liu, Zhi; Liu, Shaocong; Ding, Xiaofeng; Guo, Yinan; Yang, Shengxiang

In dynamic optimization problems (DOPs), environmental changes can be characterized as various dynamics. Faced with different dynamics, existing dynamic optimization algorithms (DOAs) struggle because they are incapable of learning in each environment to control the search. Moreover, diversity loss is a critical issue in solving DOPs.
Maintaining high diversity over dynamic environments is reasonable because it can address this issue automatically. In this paper we propose a particle search control network (PSCN) to maintain high diversity over time and to control two key search actions of each input individual: locating the local learning target and adjusting the local acceleration coefficient. Specifically, PSCN takes diversity into account when generating subpopulations located by hidden-node centers, where each center is assessed by significance-based and distance-based criteria. The former enforce a small intra-subpopulation distance and a large search scope (subpopulation width) for each subpopulation, while the latter keep each center distant from the other existing centers. In each subpopulation, the best position found is selected as the local learning target. In the output layer, PSCN determines how to adjust the local acceleration coefficient of each individual. Reinforcement learning is introduced to obtain the desired output of PSCN, enabling the network to control the search by learning across the iterations of each environment. The experimental results, especially the performance comparisons with eight state-of-the-art DOAs, demonstrate that PSCN brings significant improvements in solving DOPs.
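The first and third abstracts both describe networks that control two particle-search actions: the choice of a local learning target and the value of a local acceleration coefficient. The following is a minimal illustrative sketch, not the authors' implementation, of where those two quantities enter a standard PSO-style velocity update; the function name `pso_velocity_update` and the fixed personal-best coefficient are assumptions for illustration only.

```python
import random

def pso_velocity_update(v, x, pbest, ltarget, w, c_local):
    """One velocity update for a single dimension of one particle.

    `ltarget` (the local learning target) and `c_local` (the local
    acceleration coefficient) are the two search actions the abstracts
    describe the networks as controlling; how they are chosen per
    particle is the learned part and is not modeled here.
    """
    c_personal = 1.5  # fixed personal-best coefficient (illustrative assumption)
    r1, r2 = random.random(), random.random()
    return (w * v
            + c_personal * r1 * (pbest - x)
            + c_local * r2 * (ltarget - x))
```

In this sketch, replacing the usual fixed global-best term with a per-particle `ltarget` drawn from a subpopulation, and letting `c_local` vary per particle, is what allows an external controller to steer each individual's search differently in each environment.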
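The second abstract states that the prediction model's parameters are updated continuously via an online sequential extreme learning machine (OS-ELM). As a hedged sketch of that general mechanism (the standard recursive least-squares form of the OS-ELM output-weight update, not the specific model in MTPS-DMOEA), assuming hidden-layer outputs are already computed:

```python
import numpy as np

def oselm_update(beta, P, H, T):
    """One OS-ELM output-weight update for a new chunk of data.

    beta: (n_hidden, n_out) current output weights
    P:    (n_hidden, n_hidden) current inverse-covariance matrix
    H:    (n_chunk, n_hidden) hidden-layer outputs for the new chunk
    T:    (n_chunk, n_out) targets for the new chunk
    Returns the updated (beta, P).
    """
    # Recursive least-squares update of the inverse covariance.
    K = np.eye(H.shape[0]) + H @ P @ H.T
    P = P - P @ H.T @ np.linalg.solve(K, H @ P)
    # Correct the output weights using the prediction residual on the chunk.
    beta = beta + P @ H.T @ (T - H @ beta)
    return beta, P
```

Because each chunk only refines `beta` and `P` rather than refitting from scratch, the model can absorb information from every historical environment at low per-update cost, which is the property the abstract attributes to the online sequential update.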