A local and global convergence theory for the alternating method is presented in this paper. This method is an iterative procedure for minimizing a nonlinear function: at each step, a subset of the variables is selected and the function is minimized with respect to only these variables, with all remaining variables held fixed at their current values.
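The procedure just described can be sketched in a few lines. The following is a minimal illustration (not from the paper) on a hypothetical strictly convex quadratic f(x, y) = x^2 - xy + y^2 - 3y, chosen so that each one-variable subproblem has a closed-form minimizer; real uses of the method would replace the two update formulas with numerical subproblem solves.

```python
def f(x, y):
    # Hypothetical smooth, strictly convex objective (illustration only):
    # f(x, y) = x^2 - x*y + y^2 - 3y, minimized at (x, y) = (1, 2).
    return x * x - x * y + y * y - 3.0 * y

def alternating_minimize(x, y, tol=1e-12, max_iter=1000):
    """Alternating method: minimize over one variable at a time,
    holding the other fixed at its current value."""
    for _ in range(max_iter):
        x_new = y / 2.0               # argmin over x: solves 2x - y = 0
        y_new = (x_new + 3.0) / 2.0   # argmin over y: solves -x + 2y - 3 = 0
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

x_star, y_star = alternating_minimize(0.0, 0.0)
```

For this example the iteration satisfies y_{k+1} = y_k/4 + 3/2, a contraction, so the iterates converge linearly to the minimizer (1, 2); the convergence theory reviewed below concerns when such behavior can be guaranteed in general.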
Local convergence of the algorithm is established under the following assumptions:
- (1) Continuity of the minimizing function and of its first and second derivatives.
- (2) Positive definiteness of the Hessian of the minimizing function at the optimal point.
- (3) Strict convexity of the minimizing function in a neighborhood of the optimal solution point.
- (4) In a neighborhood of the optimal solution point, for each subset of variables, local minimizers of the function (with the variables not in the subset held fixed) are also global minimizers.
The authors note that assumption (4) is quite atypical and discuss its significance in some detail; even so, the assumption remains rather obscure.
Global convergence of the algorithm is established by a point-of-attraction argument, under the (strong) assumption that the feasible region is compact.
The paper is somewhat difficult to read, mostly because of notational problems. It also lacks any comparison (in terms of the assumptions used) with other papers presenting similar convergence theories for alternating methods. The main contribution of the paper is a self-contained analysis of the algorithm, which should be useful to practitioners, who often apply this method to real-world problems.