Results 1–10 of 14
A tutorial on support vector regression
, 2004
Cited by 828 (3 self)
Abstract
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
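The quadratic-programming view the abstract mentions can be illustrated with a toy primal sketch: a linear SVR trained by subgradient descent on the ε-insensitive loss. This is a deliberately simplified stand-in for the dual QP solvers the tutorial surveys, and every name and constant here is illustrative, not the tutorial's notation.

```python
# Toy linear support vector regression: subgradient descent on the primal
#   (1/2) w^2 + C * sum_i max(0, |y_i - (w*x_i + b)| - eps)
# Illustrative sketch only; real SV training solves the dual QP the tutorial covers.

def svr_fit(xs, ys, C=10.0, eps=0.1, lr=0.002, iters=20000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        gw, gb = w, 0.0                 # gradient of the (1/2) w^2 regularizer
        for x, y in zip(xs, ys):
            r = y - (w * x + b)
            if r > eps:                 # point lies above the eps-tube
                gw -= C * x
                gb -= C
            elif r < -eps:              # point lies below the eps-tube
                gw += C * x
                gb += C
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Noisy samples of y = 2x; the fit should recover a slope near 2
# while ignoring deviations smaller than the eps-tube.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.9, 4.1, 5.9, 8.0]
w, b = svr_fit(xs, ys)
```

Points inside the ε-tube contribute no loss gradient, which is the mechanism behind the sparse support-vector expansion the tutorial derives from the dual.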
The connection between regularization operators and support vector kernels
, 1998
Cited by 179 (42 self)
Abstract
In this paper a correspondence is derived between regularization operators used in regularization networks and support vector kernels. We prove that the Green's functions associated with regularization operators are suitable support vector kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used support vector kernels from the viewpoint of regularization theory, and of the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodical domains. As a byproduct we show that a large number of radial basis functions, namely conditionally positive definite ...
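The correspondence the abstract states can be summarized compactly. The following is a generic sketch of the standard result in generic notation (P a regularization operator with adjoint P*), not necessarily the paper's exact statement:

```latex
% Generic sketch of the operator--kernel correspondence. A kernel k is an
% admissible SV kernel with equivalent regularization properties when it is
% the Green's function of P^* P:
(P^{*}P\,k)(x, \cdot) = \delta_{x}
\quad\Longrightarrow\quad
\langle (Pk)(x,\cdot),\,(Pk)(y,\cdot)\rangle = k(x,y),
% so the SV flatness term and the regularization-network penalty coincide:
\tfrac{1}{2}\lVert w\rVert^{2} \;=\; \tfrac{1}{2}\lVert Pf\rVert^{2}.
```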
On a Kernel-based Method for Pattern Recognition, Regression, Approximation, and Operator Inversion
, 1997
Cited by 94 (24 self)
Abstract
We present a kernel-based framework for Pattern Recognition, Regression Estimation, Function Approximation and multiple Operator Inversion. Previous approaches such as ridge regression, Support Vector methods and regression by Smoothing Kernels are included as special cases. We will show connections between the cost function and some properties up to now believed to apply to Support Vector Machines only. The optimal solution of all the problems described above can be found by solving a simple quadratic programming problem. The paper closes with a proof of the equivalence between Support Vector kernels and Green's functions of regularization operators.
A probabilistic framework for SVM regression and error bar estimation
 Machine Learning
, 2002
Cited by 22 (1 self)
Abstract
In this paper, we elaborate on the well-known relationship between Gaussian Processes (GP) and Support Vector Machines (SVM) under some convex assumptions for the loss functions. This paper concentrates on the derivation of the evidence and error bar approximation for regression problems. An error bar formula is derived based on the ε-insensitive loss function.
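For orientation, the quantity an "error bar" approximates has a familiar closed form in the standard GP setting. This is the generic GP expression under a Gaussian noise model, not the paper's ε-insensitive derivation:

```latex
% Standard GP predictive mean and error bar at a test input x_*, with kernel
% matrix K, noise variance \sigma_n^2, and k_* = (k(x_*, x_1), ..., k(x_*, x_N)):
\mu(x_{*}) = \mathbf{k}_{*}^{\top}(K + \sigma_{n}^{2} I)^{-1}\mathbf{y},
\qquad
\sigma^{2}(x_{*}) = k(x_{*}, x_{*}) - \mathbf{k}_{*}^{\top}(K + \sigma_{n}^{2} I)^{-1}\mathbf{k}_{*}.
```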
On a class of support vector kernels based on frames in function Hilbert spaces
 Neural Computation
, 2001
Cited by 9 (0 self)
Abstract
In recent years there has been an increasing interest in kernel-based techniques, such as Support Vector Techniques, Regularization Networks and Gaussian Processes. There are inner relationships among those techniques, with the kernel function playing a central role. This paper discusses a new class of kernel functions derived from the so-called frames in a function Hilbert space.
Bayesian Approaches to Segmenting a Simple Time Series
, 1997
Cited by 7 (1 self)
Abstract
The segmentation problem arises in many applications in data mining, A.I. and statistics. In this paper, we consider segmenting simple time series. We develop two Bayesian approaches for segmenting a time series, namely the Bayes Factor approach and the Minimum Message Length (MML) approach. We perform simulations comparing these Bayesian approaches, and then perform a comparison with other classical approaches, namely AIC, MDL and BIC. We conclude that the MML criterion is the preferred criterion. We then apply the segmentation method to financial time series data.
1 Introduction
In this paper, we consider the problem of segmenting simple time series. We consider time series of the form: y_{t+1} = y_t + μ_j + ε_t, where we are given N data points (y_1, ..., y_N) and we assume there are C + 1 segments (j ∈ {0, ..., C}), and that each ε_t is Gaussian with mean zero and variance σ_j². We wish to estimate the number of segments, C + 1, the segment boundaries, {v_1, ...
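The model in the introduction makes the differenced series d_t = y_{t+1} - y_t segment-wise Gaussian, so a segmentation can be scored by a penalized likelihood. Below is a toy single-changepoint search scored with BIC (one of the classical criteria the paper compares against MML); all names are illustrative, not the authors' implementation.

```python
import math
import random

def gauss_nll(xs):
    """Negative log-likelihood of xs under a Gaussian with MLE mean/variance."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    var = max(var, 1e-12)  # guard against a zero-variance segment
    return 0.5 * n * (math.log(2 * math.pi * var) + 1)

def best_changepoint(d):
    """Return the split index minimising BIC = 2*NLL + k*log(n), k = 4 params
    (a mean and a variance per segment)."""
    n = len(d)
    best = None
    for c in range(2, n - 2):  # keep at least two points in each segment
        bic = 2 * (gauss_nll(d[:c]) + gauss_nll(d[c:])) + 4 * math.log(n)
        if best is None or bic < best[1]:
            best = (c, bic)
    return best[0]

random.seed(0)
# Simulated differences: segment 0 drifts up (mu=1.0), segment 1 down (mu=-1.0).
d = [random.gauss(1.0, 0.1) for _ in range(30)] + \
    [random.gauss(-1.0, 0.1) for _ in range(30)]
cp = best_changepoint(d)
```

An MML treatment, as the paper argues, replaces the BIC penalty with an explicit two-part message length, but the search structure is the same.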
MML and Bayesianism: Similarities and Differences (Introduction to Minimum Encoding Inference, Part II)
, 1994
Cited by 6 (0 self)
Abstract
This paper continues the introduction to minimum encoding inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We examine the relationship between Bayesianism and Minimum Message Length (MML) inference. We argue that MML augments Bayesian methods by providing a sound Bayesian method for point estimation which is invariant under nonlinear transformations. We explore the issues of invariance of estimators under nonlinear transformations, the role of the Fisher Information matrix in MML inference, and the apparent similarity between MML and the adoption of a Jeffreys' Prior. We then compare MML to an approximate method of Bayesian Model Class Selection. Despite apparent similarities in their expressions, the properties of the two approaches can be different.
MDL and MML: Similarities and Differences (Introduction to Minimum Encoding Inference, Part III)
, 1994
Cited by 6 (0 self)
Abstract
This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed.
Information, Language, and Pixon-Based Image Reconstruction
 in Digital Image Recovery and Synthesis III, Proc. SPIE, P.S. Idell and
, 1996
Cited by 3 (0 self)
Abstract
From an information theoretic point of view, the inverse problem and the problem of data compression are intimately related. Optimal compression seeks the most concise representation of a data set, while Bayesian probability theory favors image reconstruction algorithms which minimally model the information present in the data. This should not be surprising. It is in keeping with a scientist's intuitive need to satisfy the precepts of Occam's Razor, i.e. not to over-interpret one's data. Information scientists might describe this process as quantifying the Algorithmic Information Content (AIC) of the image, and then using this "coordinate system" for optimal image reconstruction. The present paper describes pixon-based image reconstruction, a technique based upon AIC-minimal image models. Because AIC is language dependent (description length and language complexity are inversely related) we have based the practical implementation of our method on concise (descriptive) languages for gene...
Article URL
, 2009
Abstract
This Provisional PDF corresponds to the article as it appeared upon acceptance. Fully formatted PDF and full text (HTML) versions will be made available soon. How to integrate individual patient values and preferences in clinical practice guidelines? A research protocol. Implementation Science 2010, 5:10.