Finite mixture models (FMMs), convex combinations of density functions from a parametric family, have long been a useful and versatile statistical tool for density estimation, typically fitted with the popular expectation-maximization (EM) algorithm. The fuzzy c-means (FCM) algorithm, on the other hand, is widely used to assign membership degrees of observations to classes. FCM minimizes a fuzzy objective function to obtain the membership functions and the means of the derived clusters.
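For readers unfamiliar with FCM, the alternating minimization it performs can be sketched as follows. This is a minimal, generic illustration (not the paper's fuzzified FMM objective), assuming the standard objective J = Σᵢ Σₖ uᵢₖ^m ‖xᵢ − vₖ‖², with fuzzifier m > 1; all names here are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-6, seed=0):
    """Sketch of standard fuzzy c-means: alternately update the membership
    matrix U (n x c) and the cluster centers V (c x d) to minimize
    J = sum_i sum_k u_ik^m * ||x_i - v_k||^2."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)  # each row of memberships sums to 1
    for _ in range(max_iter):
        Um = U ** m
        # center update: weighted means of the data under fuzzified memberships
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances from every point to every center, shape (n, c)
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)  # guard against division by zero
        # membership update: u_ik proportional to d_ik^(-2/(m-1)), rows renormalized
        inv = d2 ** (-1.0 / (m - 1))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return U, V
```

On well-separated data, the memberships concentrate near 0 or 1 and the centers approach the group means; the fuzzifier m controls how soft the partition remains.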
In this paper, Chatzis proposes an FCM-based method for fitting FMMs, combining the benefits of the FMM framework with the fuzzy paradigm in the training process. In this approach, fuzzified elements are introduced into the objective function. The paper gives a detailed algorithm for this process, based on carefully elaborated theory and proven propositions. After the theoretical exposition, the author presents applications of the algorithm to mixtures of Gaussian factor analyzers, probabilistic principal component analyzers, and Student's t-factor analyzers.
The paper concludes with an experimental evaluation of the approach on datasets from the University of California, Irvine's Machine Learning Repository (Crabs, Iris, Pima, Magic, and Wine), which have become de facto benchmarks for this purpose. Both unsupervised and supervised classification are examined. In general, the algorithm outperforms comparable algorithms.
Further investigation is needed to determine the types of data for which this approach works best. Nevertheless, the paper suggests that this particular blend of classical statistics and fuzzy elements may prove fruitful.