Results 1 - 4 of 4
Inner Product Spaces for Bayesian Networks
- Journal of Machine Learning Research, 2005
"... Bayesian networks have become one of the major models used for statistical inference. We study the question whether the decisions computed by a Bayesian network can be represented within a low-dimensional inner product space. We focus on two-label classification tasks over the Boolean domain. As mai ..."
Abstract - Cited by 2 (0 self)
Bayesian networks have become one of the major models used for statistical inference. We study the question of whether the decisions computed by a Bayesian network can be represented within a low-dimensional inner product space. We focus on two-label classification tasks over the Boolean domain. As main results we establish upper and lower bounds on the dimension of the inner product space for Bayesian networks with an explicitly given (full or reduced) parameter collection. In particular, these bounds are tight up to a factor of 2. For some nontrivial cases of Bayesian networks we even determine the exact values of this dimension. We further consider logistic autoregressive Bayesian networks and show that every sufficiently expressive inner product space must have dimension at least Ω(n²), where n is the number of network nodes. We also derive the bound 2^Ω(n) for an artificial variant of this network, thereby demonstrating the limits of our approach and raising an interesting open question. As a major technical contribution, this work reveals combinatorial and algebraic structures within Bayesian networks such that known methods for the derivation of lower bounds on the dimension of inner product spaces can be brought into play. Keywords: Bayesian network, inner product space, embedding, linear arrangement, Euclidean dimension
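To make the notion of an inner product (linear arrangement) representation concrete, the sketch below shows the simplest case: a naive Bayes classifier over Boolean inputs, whose log-odds decision is an affine function of the input bits and therefore embeds in dimension n + 1. This is an illustration with made-up parameters, not code or analysis from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): a naive Bayes classifier over
# Boolean inputs x in {0,1}^n decides sign(log P(y=1|x) - log P(y=0|x)),
# which is an affine function of x, i.e. an inner product representation
# of dimension n + 1. All parameter values below are made up.

rng = np.random.default_rng(0)
n = 5
prior = np.array([0.4, 0.6])                 # P(y=0), P(y=1)
theta = rng.uniform(0.1, 0.9, size=(2, n))   # theta[y, i] = P(x_i = 1 | y)

def naive_bayes_decision(x):
    """Boolean decision computed directly from the generative model."""
    log_joint = np.log(prior).copy()
    for y in (0, 1):
        log_joint[y] += np.sum(x * np.log(theta[y]) + (1 - x) * np.log(1 - theta[y]))
    return int(log_joint[1] > log_joint[0])

# Equivalent linear arrangement: weights w and bias b with
# decision(x) = [w @ x + b > 0].
w = np.log(theta[1] / theta[0]) - np.log((1 - theta[1]) / (1 - theta[0]))
b = np.log(prior[1] / prior[0]) + np.sum(np.log((1 - theta[1]) / (1 - theta[0])))

for x in (rng.integers(0, 2, size=n) for _ in range(10)):
    assert naive_bayes_decision(x) == int(w @ x + b > 0)
print("naive Bayes decisions match the (n+1)-dimensional linear arrangement")
```

The paper's contribution concerns how small such an embedding can be made for richer network topologies; the sketch only illustrates what "representing the decisions within an inner product space" means.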
DATA EXPLORATION WITH LEARNING METRICS
"... ISBN 951-22-7344-6 (printed version) ISBN 951-22-7345-4 (electronic version) ..."
Abstract - Cited by 1 (0 self)
ISBN 951-22-7344-6 (printed version) ISBN 951-22-7345-4 (electronic version)
A hybrid generative/discriminative method for EEG evoked potential detection
"... I. INTRODUCTION Generative and discriminative learning approaches are two prevailing and powerful, yet different, paradigms in machine leaning. Generative learning models, such as Bayesian inference [1] attempt to model the underlying distributions of the variables in order to compute classificatio ..."
Abstract
I. INTRODUCTION
Generative and discriminative learning approaches are two prevailing and powerful, yet different, paradigms in machine learning. Generative learning models, such as Bayesian inference [1], attempt to model the underlying distributions of the variables in order to compute classification and regression functions. These methods provide a rich framework for learning from prior knowledge. Discriminative learning models, such as support vector machines (SVMs) [2], avoid generative modeling by directly optimizing a mapping from the inputs to the desired outputs by adjusting the resulting classification boundary. These latter methods commonly demonstrate superior performance in classification. Recently, researchers have investigated the relationship between these two learning paradigms and have attempted to combine their complementary strengths.
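As a hedged illustration of the contrast described above (this is not the paper's EEG method), the sketch below fits a generative model (Gaussian naive Bayes) and a discriminative model (an RBF-kernel SVM) on the same synthetic data; the dataset and hyperparameters are arbitrary.

```python
# Illustrative comparison of the two paradigms (not the paper's EEG method):
# a generative classifier models class-conditional densities, while a
# discriminative SVM directly optimizes the classification boundary.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

generative = GaussianNB().fit(X_tr, y_tr)                   # models P(x | y) and P(y)
discriminative = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)   # models the boundary

print("Gaussian naive Bayes accuracy:", generative.score(X_te, y_te))
print("RBF-kernel SVM accuracy:      ", discriminative.score(X_te, y_te))
```

Hybrid approaches, such as kernels derived from a fitted generative model and then used inside a discriminative classifier, try to keep the prior-knowledge benefits of the first paradigm while retaining the classification accuracy of the second.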
Positive Definite Kernels in Machine Learning, 2009
"... This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hibert spaces, the natural extension of the set ..."
Abstract
This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hilbert spaces, the natural extension of the set of functions {k(x, ·), x ∈ X} associated with a kernel k defined on a space X. We discuss at length the construction of kernel functions that take advantage of well-known statistical models. We provide an overview of numerous data-analysis methods which take advantage of reproducing kernel Hilbert spaces and discuss the idea of combining several kernels to improve the performance on certain tasks. We also provide a short cookbook of different kernels which are particularly useful for certain data types such as images, graphs or speech segments.
Remark: This report is a draft. Comments and suggestions will be highly appreciated.
Summary: We provide in this survey a short introduction to positive definite kernels and the set of methods they have inspired in machine learning, also known as kernel methods. The main idea behind kernel methods is the following. Most data-inference tasks aim at defining an appropriate decision function f on a set of objects of interest X. When X is a vector space of dimension d, say R^d, linear functions f_a(x) = aᵀx are among the simplest and best understood choices, notably for regression, classification or dimensionality reduction. Given a positive definite kernel k on X, that is, a real-valued function on X × X which quantifies how similar two points x and y are through the value k(x, y), kernel methods are algorithms which estimate functions f of the form
f : x ∈ X → f(x) = ∑_{i∈I} α_i k(x_i, x).   (1)
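To make equation (1) concrete, here is a minimal sketch (my own illustration, not code from the survey) of kernel ridge regression with a Gaussian RBF kernel: the coefficients α solve (K + λI)α = y, and the learned function is exactly f(x) = ∑_i α_i k(x_i, x). The kernel, bandwidth, and regularization constant are arbitrary choices.

```python
import numpy as np

# Illustrative sketch of equation (1): kernel ridge regression with a
# Gaussian RBF kernel. The fitted function has the form
#   f(x) = sum_i alpha_i * k(x_i, x),
# i.e. an expansion over the training points. The kernel bandwidth and the
# regularization constant lam are arbitrary choices for illustration.

def rbf_kernel(A, B, gamma=1.0):
    """k(a, b) = exp(-gamma * ||a - b||^2), computed for all row pairs."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))               # training inputs x_i
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)    # noisy targets

lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # (K + lam*I) alpha = y

def f(x_new):
    """Evaluate f(x) = sum_i alpha_i k(x_i, x) at the rows of x_new."""
    return rbf_kernel(x_new, X) @ alpha

x_grid = np.linspace(-3, 3, 5).reshape(-1, 1)
print(np.c_[np.sin(x_grid[:, 0]), f(x_grid)])   # true values vs. kernel fit
```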