Keynote talk: Prof Xiaodong Li, School of Computing Technologies, RMIT University, Melbourne, Australia
Title: Decision Making in Evolutionary Optimization and Beyond
Abstract: In real-world settings, optimization is rarely carried out in isolation, without any decisions being made along the way. These decisions may take the form of preferences supplied by a decision maker or knowledge learnt from prior experience in solving similar problem instances, and they often play a crucial role in obtaining the kind of optimal solutions we ultimately desire. When such decisions cannot be made automatically, a human-in-the-loop approach is often taken to inject the preference information needed to guide the search. In recent years, machine learning has become increasingly popular for facilitating and automating this kind of decision making during optimization, with impact well beyond evolutionary optimization alone. In this talk, I will present several optimization approaches driven by such decision making, e.g., using Bayesian optimization to learn a decision maker's preferences interactively within an evolutionary multiobjective optimization algorithm [1]; multimodal optimization using a niching method guided by preference information [2]; and employing machine learning to learn from previously solved problem instances (typically combinatorial optimization problems such as the traveling salesman problem) and using that knowledge to build a model that predicts optimal solutions on unseen and much larger problem instances [3]. Our "solution prediction via machine learning" approach can be used as a generic problem reduction method for solving some large-scale combinatorial optimization problems [4] and as a warm-start method for meta-heuristics such as ant colony optimization [5].
References:
[1] Taylor, K., Ha, H., Li, M., Chan, J. and Li, X. (2021), "Bayesian Preference Learning for Interactive Multi-objective Optimisation", in Proceedings of the 2021 Genetic and Evolutionary Computation Conference (GECCO), Lille, France, ACM, pp. 466-475.
[2] Miessen, A., Najman, J. and Li, X. (2021), "Finding Representative Solutions in Multimodal Optimization for Enhanced Decision-Making", in Metaheuristics for Finding Multiple Solutions, Springer, pp. 57-88.
[3] Sun, Y., Ernst, A.T., Li, X. and Weiner, J. (2021), "Generalization of Machine Learning for Problem Reduction: A Case Study on Travelling Salesman Problems", OR Spectrum, 43:607-633.
[4] Sun, Y., Li, X. and Ernst, A. (2021), "Using Statistical Measures and Machine Learning for Graph Reduction to Solve Maximum Weight Clique Problems", IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(5):1746-1760, May 2021.
[5] Sun, Y., Wang, S., Shen, Y., Li, X., Ernst, A.T. and Kirley, M. (2022), "Boosting Ant Colony Optimization via Solution Prediction and Machine Learning", Computers and Operations Research, Vol. 143, 105769, July 2022.
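To make the "solution prediction via machine learning" idea from the abstract concrete, below is a minimal, hypothetical sketch (not the code from [3]-[5]): a classifier is trained on per-edge features from small, solved TSP instances to predict which edges belong to the optimal tour, and its predictions are then used to prune the edge set of a larger, unseen instance (problem reduction) or to seed a meta-heuristic. The use of scikit-learn, the random-forest model, the two edge features, and the placeholder tours are all illustrative assumptions.

```python
# Illustrative sketch of solution prediction for problem reduction on TSP.
# Assumes numpy and scikit-learn; training tours below are placeholders that
# would come from an exact solver on small instances in practice.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(cities):
    """Per-edge features: edge length and nearest-neighbour rank at its endpoints."""
    n = len(cities)
    dist = np.linalg.norm(cities[:, None, :] - cities[None, :, :], axis=-1)
    ranks = dist.argsort(axis=1).argsort(axis=1)  # rank of j among i's neighbours
    feats, pairs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            feats.append([dist[i, j], min(ranks[i, j], ranks[j, i])])
            pairs.append((i, j))
    return np.array(feats), pairs

rng = np.random.default_rng(0)

# "Training" on small solved instances (dummy tours stand in for optimal ones).
X_train, y_train = [], []
for _ in range(20):
    cities = rng.random((30, 2))
    tour = list(range(30))  # placeholder: optimal tour from an exact solver
    in_tour = {(min(a, b), max(a, b)) for a, b in zip(tour, tour[1:] + tour[:1])}
    feats, pairs = edge_features(cities)
    X_train.append(feats)
    y_train.append([1 if p in in_tour else 0 for p in pairs])
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(np.vstack(X_train), np.concatenate(y_train))

# Problem reduction on a larger, unseen instance: keep only edges the model
# predicts are likely to appear in the optimal tour; the reduced edge set can
# also warm-start a meta-heuristic such as ant colony optimization.
big_instance = rng.random((200, 2))
feats, pairs = edge_features(big_instance)
p_in_optimal = clf.predict_proba(feats)[:, 1]
kept_edges = [p for p, prob in zip(pairs, p_in_optimal) if prob > 0.5]
print(f"kept {len(kept_edges)} of {len(pairs)} edges")
```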
Bio:
Xiaodong Li received his B.Sc. degree from Xidian University, Xi'an, China, and his Ph.D. degree in information science from the University of Otago, Dunedin, New Zealand. He is currently a Professor in Artificial Intelligence with the School of Computing Technologies, RMIT University, Melbourne, Australia. His research interests include machine learning, evolutionary computation, data mining/analytics, multiobjective optimization, multimodal optimization, large-scale optimization, deep learning, matheuristic methods, and swarm intelligence. He serves as an Associate Editor of journals including IEEE Transactions on Evolutionary Computation, Swarm Intelligence (Springer), and the International Journal of Swarm Intelligence Research. He is a founding member of the IEEE CIS Task Force on Swarm Intelligence, a former vice-chair of the IEEE Task Force on Multi-modal Optimization, and a former chair of the IEEE CIS Task Force on Large Scale Global Optimization. He is the recipient of the 2013 ACM SIGEVO Impact Award and the 2017 IEEE CIS "IEEE Transactions on Evolutionary Computation Outstanding Paper Award". He was elevated to IEEE Fellow in 2020 ("for contributions to large-scale and particle swarm optimization"). His h-index is 59, with more than 14,000 citations in total (according to Google Scholar).