Browsing by Author "Hou, Zhanglu"
Now showing 1 - 5 of 5
Item Embargo
A cluster prediction strategy with the induced mutation for dynamic multi-objective optimization (Elsevier, 2024-01-25)
Xu, Kangyu; Xia, Yizhang; Zou, Juan; Hou, Zhanglu; Yang, Shengxiang; Hu, Yaru; Liu, Yuan
Dynamic multi-objective optimization problems (DMOPs) are multi-objective optimization problems in which at least one objective and/or its related parameters vary over time. The challenge of solving DMOPs is to track the true Pareto-optimal set efficiently and accurately when the environment changes. However, many existing prediction-based methods overlook the distinct movement directions of individuals and the information available in the objective space, leading to biased predictions that mislead the subsequent search process. To address this issue, this paper proposes a prediction method called IMDMOEA, which relies on cluster center points and induced mutation. Specifically, linear prediction based on cluster center points in the decision space enables the algorithm to rapidly capture the population's evolutionary direction and distributional shape. Additionally, to enhance the algorithm's adaptability to significant environmental changes, the induced mutation strategy corrects the population's evolutionary direction by selecting promising individuals for mutation based on the predicted Pareto front in the objective space. These two complementary strategies enable the algorithm to respond faster and more effectively to environmental changes. Finally, the proposed algorithm is evaluated on the JY, dMOP, FDA, and F test suites. The experimental results demonstrate that IMDMOEA competes favorably with other state-of-the-art algorithms.

Item Open Access
A dynamic preference-driven evolutionary algorithm for solving dynamic multi-objective problems (ACM, 2024-07-01)
Wang, Xueqing; Zheng, Jinhua; Zou, Juan; Hou, Zhanglu; Liu, Yuan; Yang, Shengxiang
Considering the decision-maker's preference information in static multi-objective optimization problems (MOPs) has been extensively studied. However, incorporating dynamic preference information into dynamic MOPs is a relatively less explored area. This paper introduces a preference-information-driven dynamic multi-objective evolutionary algorithm (DMOEA) and proposes a preference-based prediction method. Specifically, a preference-based inverse model is designed to respond to the time-varying preference information, and the model is used to predict an initial population for tracking the changing region of interest (ROI). Furthermore, a hybrid prediction strategy, which combines a linear prediction model with estimation of the population manifold in the ROI, is proposed to ensure the convergence and distribution of the population when the preference remains constant. Experimental tests on 19 common test problems show that the proposed algorithm has significant advantages over existing representative DMOEAs.

Item Embargo
A novel preference-driven dynamic multi-objective evolutionary algorithm for solving dynamic multi-objective problems (Elsevier, 2024-06-30)
Wang, Xueqing; Zheng, Jinhua; Hou, Zhanglu; Liu, Yuan; Zou, Juan; Xia, Yizhang; Yang, Shengxiang
Most studies in dynamic multi-objective optimization have predominantly focused on rapidly and accurately tracking changes in the Pareto optimal front (POF) and Pareto optimal set (POS) when the environment changes. However, there are real-world scenarios where it is necessary to simultaneously handle changing objective functions and satisfy the preferences of decision makers (DMs). In particular, the DMs may be interested only in a partial region of the POF, known as the region of interest (ROI), rather than the entire POF. To meet the challenge of simultaneously predicting a changing POF and/or POS and a dynamic ROI, this paper proposes a new preference-based dynamic multi-objective evolutionary algorithm (DMOEA). The proposed algorithm consists of three key components: an evolutionary direction adjustment strategy based on changing reference points to accommodate shifts in preferences, an angle-based search strategy for tracking the varying ROI, and a hybrid prediction strategy that combines linear prediction models and population manifold estimation within the ROI to ensure convergence and distribution when preferences remain unchanged. Experimental studies were conducted on 30 widely used benchmark problems, on which the proposed algorithm outperforms the contrasting algorithms on 71% of the test suites. The empirical results demonstrate its significant advantages over existing state-of-the-art DMOEAs.
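The three abstracts above all rely on some form of linear prediction of where the population (or its preferred region) will move after an environmental change. The sketch below illustrates that shared idea with a centroid-based translation of clusters; it is a minimal, illustrative reading of the abstracts, and the function name, operator pool size, and noise level are assumptions rather than details taken from the papers.

```python
# Minimal sketch of centroid-based linear prediction after an environmental
# change (illustrative only; names and parameters are assumptions).
import numpy as np
from sklearn.cluster import KMeans

def predict_next_population(pop_prev, pop_curr, n_clusters=3, noise=0.01):
    """Translate each cluster of the current population along the movement
    of its centre relative to the previous environment, plus small noise
    to preserve diversity in the predicted initial population."""
    pop_prev = np.asarray(pop_prev, dtype=float)
    pop_curr = np.asarray(pop_curr, dtype=float)

    km_prev = KMeans(n_clusters=n_clusters, n_init=10).fit(pop_prev)
    km_curr = KMeans(n_clusters=n_clusters, n_init=10).fit(pop_curr)

    predicted = pop_curr.copy()
    for k in range(n_clusters):
        centre = km_curr.cluster_centers_[k]
        # Match this cluster to the nearest centre in the previous environment.
        j = np.argmin(np.linalg.norm(km_prev.cluster_centers_ - centre, axis=1))
        step = centre - km_prev.cluster_centers_[j]   # estimated movement
        predicted[km_curr.labels_ == k] += step
    return predicted + np.random.normal(0.0, noise, predicted.shape)
```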
Item Embargo
Continuous variation operator configuration for decomposition-based evolutionary multi-objective optimization (Elsevier, 2024-07-10)
Liu, Yuan; Li, Jiazheng; Zou, Juan; Hou, Zhanglu; Yang, Shengxiang; Zheng, Jinhua
There are various multi-objective evolutionary algorithms (MOEAs) for solving multi-objective optimization problems (MOPs), and the significant difference between them lies in the way they generate offspring, i.e., the so-called variation operators. Since different variation operators have their own characteristics, it is often tedious to select a suitable EA for a given MOP. Even if the optimal operator is assigned, its fixed form and hyper-parameters make it difficult to balance exploration and exploitation during the evolutionary process. It is therefore desirable to configure variation operators and hyper-parameters automatically during the evolutionary process, which can improve search efficiency. However, many existing configuration approaches only consider operators or discretize the hyper-parameters, making it difficult to achieve satisfactory results. In this paper, we formulate operator configuration as a continuous Markov Decision Process (MDP) and use a suitable Reinforcement Learning (RL) paradigm to realize the online configuration of EAs. To simplify the deployment of the MDP, we adopt a decomposition-based framework and use a one-dimensional vector combining weights and objectives as the state space. In addition, we take the selection of crossover and mutation operators and the fine-tuning of their hyper-parameters as a joint action space. With an RL technique, we expect to achieve the maximum improvement in the performance of offspring on each preference by selecting an action in a given state. We further explore the effectiveness of the proposed methodology on MOPs with different characteristics. Experimental results show that our method is more competitive than other configurations and state-of-the-art EAs.
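The abstract above describes a continuous MDP whose state concatenates a subproblem's weight vector with its solution's objectives, and whose action jointly selects a variation operator and its hyper-parameters. The sketch below shows one plausible encoding of that interface; the operator pool, scaling ranges, and function names are assumptions for illustration, not the paper's actual design.

```python
# Illustrative state/action encoding for RL-driven operator configuration in a
# decomposition-based EA (assumed names and ranges, not the paper's design).
import numpy as np

OPERATORS = ["sbx_crossover", "de_rand_1", "polynomial_mutation"]  # assumed pool

def build_state(weight, objectives, ideal, nadir):
    """One-dimensional state: subproblem weight vector + normalised objectives."""
    norm_obj = (objectives - ideal) / np.maximum(nadir - ideal, 1e-12)
    return np.concatenate([weight, norm_obj])

def decode_action(action):
    """Map a continuous action in [0, 1]^3 to an operator and hyper-parameters.

    action[0] selects the operator; action[1:] are rescaled to, e.g., a
    crossover/scaling rate and a distribution index.
    """
    op = OPERATORS[min(int(action[0] * len(OPERATORS)), len(OPERATORS) - 1)]
    rate = 0.1 + 0.9 * action[1]          # e.g. CR or crossover probability
    dist_index = 5.0 + 45.0 * action[2]   # e.g. SBX/mutation distribution index
    return op, {"rate": rate, "dist_index": dist_index}
```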
Item Open Access
A performance indicator for reference-point-based multiobjective evolutionary optimization (2018-11)
Hou, Zhanglu; Yang, Shengxiang; Zou, Juan; Zheng, Jinhua; Yu, Guo; Ruan, Gan
Aiming at the difficulty of evaluating preference-based evolutionary multiobjective optimization, this paper proposes a new performance indicator. The main idea is to project the preferred solutions onto a constructed hyperplane that is perpendicular to the vector from the reference (aspiration) point to the origin. Then the distance from the preferred solutions to the origin and the standard deviation of the distance from each mapping point to its nearest point are calculated. The former is used to measure the convergence of the obtained solutions, and the latter is used to assess the diversity of the preferred solutions in the region of interest. The indicator is used to assess different algorithms on a series of benchmark problems with various features. The results show that the proposed indicator is able to properly evaluate the performance of preference-based multiobjective evolutionary algorithms.
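The two quantities described in this abstract can be sketched as follows. This is a minimal reading of the abstract only: it assumes the hyperplane passes through the origin and that the convergence term is averaged over the preferred solutions, details the abstract leaves open, so the paper's exact construction may differ.

```python
# Sketch of the projection-based indicator (assumptions: hyperplane through the
# origin, convergence averaged over solutions).
import numpy as np

def indicator(preferred, reference_point):
    """preferred: (N, M) objective vectors in the ROI; reference_point: (M,)."""
    F = np.asarray(preferred, dtype=float)
    n = reference_point / np.linalg.norm(reference_point)  # hyperplane normal

    # Convergence: average distance of the preferred solutions to the origin.
    convergence = np.linalg.norm(F, axis=1).mean()

    # Project each solution onto the hyperplane orthogonal to n.
    proj = F - np.outer(F @ n, n)

    # Diversity: std of each projected point's distance to its nearest neighbour.
    d = np.linalg.norm(proj[:, None, :] - proj[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    diversity = d.min(axis=1).std()
    return convergence, diversity
```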