A Review of Multi-objective Test Problems and a Scalable Test Problem Toolkit
Document Type
Journal Article
Publisher
IEEE
Faculty
Faculty of Computing, Health and Science
School
School of Computer and Information Science
RAS ID
4817
Abstract
When attempting to better understand the strengths and weaknesses of an algorithm, it is important to have a strong understanding of the problem at hand. This is as true for the field of multiobjective evolutionary algorithms (EAs) as it is for any other field. Many of the multiobjective test problems employed in the EA literature have not been rigorously analyzed, which makes it difficult to draw accurate conclusions about the strengths and weaknesses of the algorithms tested on them. In this paper, we systematically review and analyze many problems from the EA literature, each belonging to the important class of real-valued, unconstrained, multiobjective test problems. To support this, we first introduce a set of test problem criteria, which are in turn supported by a set of definitions. Our analysis of test problems highlights a number of areas requiring attention. Not only are many test problems poorly constructed, but the important class of nonseparable problems, particularly nonseparable multimodal problems, is poorly represented. Motivated by these findings, we present a flexible toolkit for constructing well-designed test problems. We also present empirical results demonstrating how the toolkit can be used to test an optimizer in ways that existing test suites do not.
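For concreteness, one well-known member of the problem class the abstract describes (real-valued, unconstrained, multiobjective) is the bi-objective ZDT1 problem from the EA literature. The abstract does not single out any particular problem, so the choice of ZDT1 here is purely illustrative; a minimal sketch of its evaluation:

```python
import math

def zdt1(x):
    """Evaluate the bi-objective ZDT1 test problem.

    x: sequence of floats in [0, 1] (at least 2 variables).
    Returns (f1, f2), both to be minimized.
    """
    f1 = x[0]
    # g aggregates all variables after the first; g == 1 on the Pareto front.
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2

# Pareto-optimal solutions have every variable after the first equal to 0,
# so g = 1 and the front is f2 = 1 - sqrt(f1).
print(zdt1([0.5] + [0.0] * 29))
```

ZDT1 is separable and unimodal, which is exactly the kind of gap the abstract points to: nonseparable and multimodal problems are underrepresented in existing suites, motivating the toolkit the paper proposes.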
DOI
10.1109/TEVC.2005.861417
Access Rights
subscription content
Comments
Huband, S., Hingston, P. F., Barone, L., & While, L. (2006). A Review of Multi-objective Test Problems and a Scalable Test Problem Toolkit. IEEE Transactions on Evolutionary Computation, 10(5), 477-506. http://dx.doi.org/10.1109/TEVC.2005.861417