Browsing by Author "Wang, Junchen"
Now showing 1 - 4 of 4
Item Open Access
History-guided hill exploration for evolutionary computation (IEEE Press, 2023-03)
Wang, Junchen; Li, Changhe; Zeng, Sanyou; Yang, Shengxiang

Although evolutionary computation (EC) methods are stochastic optimization methods, restarting them after the population converges to a local optimum rarely finds the global optimum. A major reason is that many optimization problems have basins of attraction (BoAs) that differ widely in shape and size, and the population tends to converge toward BoAs that are easy to search. Although heuristic restart based on tabu search is a theoretically feasible remedy, existing EC methods with heuristic restart struggle to avoid repeating earlier search results while maintaining search efficiency. This paper tries to overcome the dilemma by learning the BoAs online and proposes a search mode called history-guided hill exploration (HGHE). In this mode, evaluated solutions are used to separate the search space into hill regions that correspond to the BoAs, and a classical EC method locates the optimum within each hill region. An instance algorithm for continuous optimization, HGHE differential evolution (HGHE-DE), is proposed to verify the effectiveness of HGHE. Experimental results show that HGHE-DE can continuously discover unidentified BoAs and locate optima in identified BoAs.

Item Open Access
Learning to search promising regions by a Monte-Carlo tree model (IEEE Press, 2022-07)
Xia, Hai; Li, Changhe; Zeng, Sanyou; Tan, Qingshan; Wang, Junchen; Yang, Shengxiang

In complex optimization problems, learning where to search is a difficult but critical decision for all search algorithms. Evolutionary computation methods likewise face a dilemma about where to explore and where to exploit. In this paper, a Monte-Carlo tree is constructed to guide evolutionary algorithms to search multiple promising regions simultaneously.
In the Monte-Carlo tree model, a root node containing all historical solutions represents the whole solution space. At each node of the tree, the k-means clustering method partitions solutions into groups, and the group labels of the solutions are used to train a support vector regression model, which learns a boundary that partitions a region into sub-regions. According to the state values of the nodes, the reproduction operators of evolutionary algorithms are strengthened by selecting solutions from the most promising regions. Experimental results on multimodal problems show that the proposed algorithm is competitive and indicate great potential for application to other kinds of optimization problems.

Item Embargo
Modeling and evolutionary optimization for multi-objective vehicle routing problem with real-time traffic conditions (Association for Computing Machinery, 2020-02)
Xiao, Long; Li, Changhe; Wang, Junchen; Mavrovouniotis, Michalis; Yang, Shengxiang; Dan, Xiaorong

The study of the vehicle routing problem (VRP) is of great significance for reducing logistics costs. Few existing VRP formulations consider real-time traffic conditions. In this paper, we propose a more realistic and challenging multi-objective VRP that incorporates real-time traffic conditions. In addition, we present an adaptive local search algorithm combined with a dynamic constrained multi-objective evolutionary framework. In the algorithm, we design eight local search operators and select them adaptively to optimize the initial solutions.
Experimental results show that our algorithm can obtain excellent solutions that satisfy the constraints of the vehicle routing problem with real-time traffic conditions.

Item Open Access
A reinforcement-learning-based evolutionary algorithm using solution space clustering for multimodal optimization problems (IEEE Press, 2021-06)
Xia, Hai; Li, Changhe; Zeng, Sanyou; Tan, Qingshan; Wang, Junchen; Yang, Shengxiang

In evolutionary algorithms, effectively selecting interactive solutions (parents) for generating offspring is a challenging problem. Although many operators have been proposed, most select parents randomly, with no specificity for the landscape features of different problems. To address this issue, this paper proposes a reinforcement-learning-based evolutionary algorithm that selects solutions within approximated basins of attraction. In the algorithm, the solution space is partitioned by a k-dimensional tree, and the features of subspaces are approximated in two respects: objective values and uncertainties. Accordingly, two reinforcement learning (RL) systems determine where to search: the objective-based RL exploits basins of attraction (clustered subspaces), while the uncertainty-based RL explores subspaces that have been searched comparatively less. Experiments on widely used benchmark functions demonstrate that the algorithm outperforms three other popular multimodal optimization algorithms.
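A partitioning step recurs in the second and fourth abstracts above: cluster the evaluated solutions, then learn a boundary between the resulting groups so that new solutions can be assigned to a sub-region. The sketch below is an illustrative approximation of that step only, not any of the papers' implementations: the function name `partition_region` and the use of scikit-learn are assumptions, and an SVM classifier (`SVC`) stands in for the support vector regression trained on group labels that the Monte-Carlo tree paper describes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def partition_region(solutions, n_subregions=2, seed=0):
    """Cluster evaluated solutions with k-means, then fit a support-vector
    model on the cluster labels so that new points can be assigned to a
    sub-region. Illustrative sketch only, not the papers' implementation."""
    labels = KMeans(n_clusters=n_subregions, n_init=10,
                    random_state=seed).fit_predict(solutions)
    # SVC learns a boundary separating the clustered groups; the paper
    # instead trains support vector regression on the group labels.
    boundary = SVC(kernel="rbf").fit(solutions, labels)
    return labels, boundary

rng = np.random.default_rng(0)
# Two synthetic "basins": evaluated solutions around two separated centres.
pts = np.vstack([rng.normal(-2.0, 0.3, (30, 2)),
                 rng.normal(2.0, 0.3, (30, 2))])
labels, model = partition_region(pts)
# A new candidate solution is assigned to the sub-region whose boundary
# encloses it, so search effort can be directed per region.
print(model.predict(np.array([[2.1, 1.9]])))
```

In the papers this assignment feeds back into the search loop: a region's label (hill region, tree node, or subspace) determines which solutions the reproduction operators may select as parents.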