An algorithm for optimizing a noisy function of a few variables is given. A function is considered noisy if its evaluations are imprecise due to stochastic measurement errors. The authors were working in the domain of chemical experiments, where each evaluation of the function was expensive and time-consuming, and the number of parameters was kept below six.
The objective function is assumed to be smooth but noisy. The algorithm first passes through a scaling phase to set up trust regions, then moves to a descent phase in which the objective is approximated by a quadratic model function. The algorithm is shown to be robust and convergent in practice, although the authors admit that they did not attempt a theoretical proof of convergence. They compare their algorithm with that of Nelder and Mead [1] and show that theirs is more robust and twice as fast.
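The two-phase scheme the paper describes can be illustrated in miniature. The sketch below is not the authors' algorithm; it is a hypothetical toy version of the descent phase, assuming a two-variable noisy objective, a least-squares quadratic model fitted to samples inside a box-shaped trust region, and a crude radius-shrinking schedule:

```python
import numpy as np

def noisy_f(x, rng):
    # Illustrative noisy objective (not from the paper):
    # a quadratic bowl with minimum at (1, 1) plus Gaussian noise.
    return np.sum((x - 1.0) ** 2) + 0.01 * rng.standard_normal()

def quad_features(X):
    # Design matrix for a full quadratic model in two variables.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

def trust_region_step(f, x, radius, rng, n_samples=30):
    # Sample the trust region, fit a quadratic model by least squares,
    # then move to the sampled point where the model is smallest.
    X = x + radius * rng.uniform(-1, 1, size=(n_samples, 2))
    y = np.array([f(xi, rng) for xi in X])
    coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    model_vals = quad_features(X) @ coef
    return X[np.argmin(model_vals)]

rng = np.random.default_rng(0)
x = np.array([4.0, -3.0])   # starting guess, far from the minimum
radius = 2.0
for _ in range(25):
    x = trust_region_step(noisy_f, x, radius, rng)
    # Shrink the trust region as iterations proceed (an ad hoc schedule
    # chosen for this sketch, not the paper's scaling phase).
    radius = max(0.9 * radius, 0.05)
```

Averaging over many samples when fitting the model is what gives quadratic-model methods their robustness to noise: the least-squares fit smooths out measurement error that would mislead a method relying on individual function values.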
The introduction to the paper is exceptionally good. The pseudocode for the algorithm is clean, and the empirical examples lend credence to the method. The majority of the 16 references are from before 1990.