@MISC{Jiang_thevc, author = {Wenxin Jiang}, title = {The VC Dimension for Mixtures of Binary Classifiers}, year = {} }


Abstract

The mixtures-of-experts (ME) methodology provides a tool for classification in which experts, given by logistic regression models or Bernoulli models, are mixed according to a set of local weights. We show that the Vapnik-Chervonenkis (VC) dimension of the mixtures-of-experts architecture is bounded below by the number of experts m and bounded above by O(m^4 s^2), where s is the dimension of the input. For mixtures of Bernoulli experts with a scalar input, we show that the lower bound m is attained, yielding the exact result that the VC dimension equals the number of experts.

1 Introduction

The Vapnik-Chervonenkis (VC) dimension is a central concept in recent developments of computational learning theory (see Anthony and Biggs 1992). The VC dimension is a combinatorial parameter defined on a set of binary functions, or a system of classifiers, which measures the expressive power of the system. In risk minimization (Vapnik 1982, 1992 and 1998), the VC dimension ...
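As a concrete illustration of the architecture the abstract describes, the following is a minimal sketch of an ME binary classifier with logistic-regression experts and softmax gating weights. The parameterization (affine expert and gate scores) is an assumption for illustration, not taken from the paper; the function names are hypothetical.

```python
import numpy as np

def me_probability(x, expert_params, gate_params):
    """Mixture-of-experts probability that the label is 1.

    Assumed parameterization (illustrative, not from the paper):
    - each of the m experts is a logistic regression
      p_j(x) = sigmoid(a_j . x + b_j);
    - the local weights come from a softmax over gate scores
      v_j . x + c_j, so they are positive and sum to one.

    expert_params, gate_params: arrays of shape (m, s + 1),
    where the last column is the intercept and s = len(x).
    """
    x = np.asarray(x, dtype=float)
    # Expert outputs: m logistic regressions on the input x.
    expert_logits = expert_params[:, :-1] @ x + expert_params[:, -1]
    p = 1.0 / (1.0 + np.exp(-expert_logits))
    # Local weights: softmax of the gate scores (numerically stable).
    gate_logits = gate_params[:, :-1] @ x + gate_params[:, -1]
    g = np.exp(gate_logits - gate_logits.max())
    g /= g.sum()
    # Mixture probability; classify as 1 when this is >= 1/2.
    return float(g @ p)

# Example: m = 2 experts, scalar input (s = 1), the case where the
# paper shows the VC dimension equals m for Bernoulli experts.
prob = me_probability([0.3],
                      expert_params=np.array([[2.0, -0.5],
                                              [-1.0, 1.0]]),
                      gate_params=np.array([[1.0, 0.0],
                                            [-1.0, 0.0]]))
```

The induced classifier is the indicator of `me_probability(x) >= 0.5`; the VC dimension discussed in the abstract is that of the family of such indicator functions as the expert and gate parameters range over all values.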