Computing Reviews
Leave one out error, stability, and generalization of voting combinations of classifiers
Evgeniou T., Pontil M., Elisseeff A. Machine Learning 55(1): 71-97, 2004. Type: Article
Date Reviewed: Mar 11 2005

Assessing and explaining why, when, and by how much a combination of classifiers outperforms a single classifier is one of the most persistent topics in machine learning. In this paper, the authors study the generalization error of a variant of bagging, using the leave-one-out error and algorithmic stability as theoretical tools for deriving error bounds, in order to compare single and combined support vector machines (SVMs).
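The paper's exact bagging variant and its bounds are not reproduced here; purely as an illustration of the quantities involved, the following sketch (assuming scikit-learn is available; the variable names are mine, not the authors') estimates the leave-one-out error of a single SVM and of a bagged SVM ensemble on synthetic data:

```python
# Illustrative only: leave-one-out error of a single SVM vs. a bagged
# SVM ensemble on a small synthetic dataset. This is generic bagging,
# not the specific variant analyzed in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=10, random_state=0)

single_svm = SVC(kernel="rbf", C=1.0)
bagged_svm = BaggingClassifier(SVC(kernel="rbf", C=1.0),
                               n_estimators=25, random_state=0)

# Leave-one-out: train on n-1 points, test on the held-out point,
# average over all n splits.
loo = LeaveOneOut()
err_single = 1 - cross_val_score(single_svm, X, y, cv=loo).mean()
err_bagged = 1 - cross_val_score(bagged_svm, X, y, cv=loo).mean()
print(f"LOO error, single SVM:  {err_single:.3f}")
print(f"LOO error, bagged SVMs: {err_bagged:.3f}")
```

The leave-one-out estimate is what the authors relate to stability: the less a single held-out example can change the learned hypothesis, the tighter the resulting bound.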

The authors show, theoretically and experimentally, that the bounds are tighter for ensembles than for single classifiers when the learning algorithm is not stable, as Breiman claimed in his original paper on bagging [1]. This is because bagging increases stability, which does not necessarily mean that the test error is always lower. The increase in stability also explains why bagging reaches a saturation point when too many classifiers are combined.

Some results support combining several models trained on small subsets of the data, rather than learning a single model from the whole dataset (a clear connection to the modern technique known as chunking), in order to increase accuracy. This would also be more efficient, since learning time usually grows nonlinearly with the number of examples.
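The subset idea can be sketched as follows (again assuming scikit-learn; the parameter choices are mine and only for illustration): each base SVM is fit on a small random fraction of the training set, and the ensemble's vote is compared against one SVM trained on everything.

```python
# Sketch of subset-based combination: 30 SVMs, each trained on 20% of
# the training data, combined by voting, vs. one SVM on the full set.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# max_samples=0.2: each base learner sees only a fifth of the data,
# so each individual fit is much cheaper than a full-data fit.
subset_ensemble = BaggingClassifier(SVC(), n_estimators=30,
                                    max_samples=0.2, random_state=1)
subset_ensemble.fit(X_tr, y_tr)
single = SVC().fit(X_tr, y_tr)

print(f"single SVM accuracy:      {single.score(X_te, y_te):.3f}")
print(f"subset ensemble accuracy: {subset_ensemble.score(X_te, y_te):.3f}")
```

Since SVM training cost grows faster than linearly in the number of examples, many small fits can be cheaper in total than one large fit, which is the efficiency argument made above.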

In this paper, only SVMs and a variant of bagging are considered, and the conclusions leave many open questions. Some of the theoretical results are difficult to follow without the proper background on leave-one-out error estimates and kernel machines. Overall, though, the work represents a step forward in our understanding of ensembles.

Reviewer: Jose Hernandez-Orallo. Review #: CR130973 (0507-0838)
1) Breiman, L. Bagging predictors. Machine Learning 24(2): 123-140, 1996.
Categories: Classifier Design And Evaluation (I.5.2); Induction (I.2.6); Structural (I.5.1); Learning (I.2.6); Models (I.5.1)
