Trial-based dominance for comparing both the speed and accuracy of stochastic optimizers with standard non-parametric tests
Format: Article
Language: English
Published: 2024
Online Access: https://hdl.handle.net/10356/174585
Institution: Nanyang Technological University
Summary: Non-parametric tests can determine the better of two stochastic optimization algorithms when benchmarking results are ordinal—like the final fitness values of multiple trials—but for many benchmarks, a trial can also terminate once it reaches a prespecified target value. In such cases, both the time that a trial takes to reach the target value (or not) and its final fitness value characterize its outcome. This paper describes how trial-based dominance can totally order this two-variable dataset of outcomes so that traditional non-parametric methods can determine the better of two algorithms when one is faster but less accurate than the other, i.e., when neither algorithm dominates. After describing trial-based dominance, we outline its benefits. We then review other attempts to compare stochastic optimizers before illustrating our method with the Mann-Whitney U test. Simulations demonstrate that "U-scores" are much more effective than dominance when tasked with identifying the better of two algorithms. We validate U-scores by having them determine the winners of the CEC 2022 competition on single objective, bound-constrained numerical optimization.
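The summary describes ordering each trial's two-variable outcome (time to reach the target, or final fitness if the target is missed) so that a Mann-Whitney U test can compare two optimizers. The sketch below is one plausible reading of that idea, not the paper's exact definition: it assumes minimization, that a trial which reaches the target beats one that does not, that two successful trials are compared by time, and that two unsuccessful trials are compared by final fitness. The names Trial, better, and u_score are illustrative, not taken from the paper.

```python
# Hedged sketch of pairwise "U-scores" under an assumed trial-based-dominance ordering.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Trial:
    hit_target: bool                  # did this trial reach the prespecified target value?
    time_to_target: Optional[float]   # time (or evaluations) to the target if hit, else None
    final_fitness: float              # fitness at termination (minimization assumed)


def better(a: Trial, b: Trial) -> Optional[bool]:
    """True if a beats b, False if b beats a, None if tied (assumed ordering)."""
    if a.hit_target and not b.hit_target:
        return True
    if b.hit_target and not a.hit_target:
        return False
    if a.hit_target:  # both reached the target: the faster trial wins
        if a.time_to_target == b.time_to_target:
            return None
        return a.time_to_target < b.time_to_target
    # neither reached the target: the more accurate (lower final fitness) trial wins
    if a.final_fitness == b.final_fitness:
        return None
    return a.final_fitness < b.final_fitness


def u_score(trials_a: List[Trial], trials_b: List[Trial]) -> float:
    """Mann-Whitney-style U statistic for algorithm A over B (ties count 0.5)."""
    u = 0.0
    for a in trials_a:
        for b in trials_b:
            outcome = better(a, b)
            u += 0.5 if outcome is None else (1.0 if outcome else 0.0)
    return u  # compare against len(trials_a) * len(trials_b) / 2; larger favours A


if __name__ == "__main__":
    # Tiny illustration: A is faster when both trials hit the target,
    # B is more accurate when both miss it, so neither dominates.
    a = [Trial(True, 120.0, 1e-8), Trial(False, None, 3.2)]
    b = [Trial(True, 250.0, 1e-8), Trial(False, None, 1.7)]
    print(u_score(a, b))  # 2.0, exactly half of the 4 pairwise comparisons
```

Because the pairwise comparison totally orders the outcomes, the resulting U statistic can be fed into the usual Mann-Whitney significance machinery, which is the role the abstract assigns to "U-scores".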