Computing Reviews

Low-rank decomposition meets kernel learning
Lan L., Zhang K., Ge H., Cheng W., Liu J., Rauber A., Li X., Wang J., Zha H. Artificial Intelligence 250: 1-15, 2017. Type: Article
Date Reviewed: 01/17/18

This paper describes how low-rank kernel learning can be modified to make use of side information such as class labels on some of the data. In low-rank kernel learning, the kernel can be approximated using the Nyström method, in which a selection of sample points, called landmark points, is used to estimate the kernel matrix. The matrix produced by this Nyström method is called the dictionary kernel, and similarity between samples is measured via their distances to the nearest landmark points. To make use of the labeled classes, the authors use two matrices: the first is the one obtained from the standard Nyström method, which is suitably scaled when combined with the second, which is derived from the labeled classes. The authors formulate a minimization problem for finding a suitable combination of these two matrices. The algorithm is described in some detail, and its time and space complexity, both linear in the sample size, is determined. The authors describe experiments in which their algorithm is compared to seven standard algorithms for low-rank kernel learning, using benchmark datasets from standard semi-supervised learning (SSL) collections and from the LIBSVM repository.
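For readers unfamiliar with the underlying technique, the following is a minimal sketch of the standard Nyström low-rank kernel approximation on which the dictionary kernel is built. The RBF kernel, the uniform random choice of landmarks, and all function names here are illustrative assumptions on my part, not the authors' implementation, which additionally incorporates the label-derived matrix.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def nystrom_features(X, m, gamma=1.0, seed=None):
    """Rank-m Nystrom approximation K ~= C W^+ C^T from m random landmarks.

    Returns a feature matrix Phi such that Phi @ Phi.T approximates the
    full n x n kernel matrix without ever forming it.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    landmarks = X[rng.choice(n, size=m, replace=False)]
    C = rbf_kernel(X, landmarks, gamma)          # n x m cross-kernel
    W = rbf_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel
    # The pseudo-inverse square root of W yields the Nystrom feature map.
    vals, vecs = np.linalg.eigh(W)
    keep = vals > 1e-10                          # drop numerically null directions
    W_inv_sqrt = vecs[:, keep] / np.sqrt(vals[keep])
    return C @ W_inv_sqrt                        # n x rank features

# Usage: check the approximation against the exact kernel on toy data.
X = np.random.default_rng(0).normal(size=(500, 10))
Phi = nystrom_features(X, m=50, gamma=0.1, seed=0)
K_approx = Phi @ Phi.T
K_exact = rbf_kernel(X, X, gamma=0.1)
print("relative error:", np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact))
```

Because only the n x m cross-kernel and the m x m landmark kernel are ever materialized, the cost is linear in the number of samples for fixed m, which is what makes the linear time and space complexity reported by the authors plausible.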

The authors’ method is found to be competitive with these baselines, and superior in several cases. They also consider the effect of varying the number of labeled samples; as one might expect, increasing the number of labeled samples improves the results. This kind of mixing of techniques seems to be a promising direction in machine learning.

Reviewer: J. P. E. Hodgson | Review #: CR145776 (1805-0256)
