Results 1–10 of 11
The Connection between Regularization Operators and Support Vector Kernels
, 1998
Abstract

Cited by 148 (43 self)
In this paper a correspondence is derived between regularization operators used in Regularization Networks and Support Vector Kernels. We prove that the Green's Functions associated with regularization operators are suitable Support Vector Kernels with equivalent regularization properties. Moreover, the paper provides an analysis of currently used Support Vector Kernels from the viewpoint of regularization theory, together with the corresponding operators associated with the classes of both polynomial kernels and translation-invariant kernels. The latter are also analyzed on periodic domains. As a byproduct we show that a large number of Radial Basis Functions, namely conditionally positive definite functions, may be used as Support Vector kernels.
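As an illustrative sketch (not code from the paper): the basic requirement for a valid Support Vector kernel is a positive semi-definite Gram matrix, which can be checked numerically for a translation-invariant kernel such as the Gaussian RBF. The kernel choice and data here are assumptions for illustration only.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF: k(x, y) = exp(-gamma * ||x - y||^2), a translation-invariant kernel."""
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = rbf_kernel(X, X)

# A valid SV kernel yields a positive semi-definite Gram matrix:
# all eigenvalues non-negative, up to numerical tolerance.
print(np.linalg.eigvalsh(K).min() >= -1e-10)  # True
```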
On a Kernel-based Method for Pattern Recognition, Regression, Approximation, and Operator Inversion
, 1997
Abstract

Cited by 77 (25 self)
We present a kernel-based framework for Pattern Recognition, Regression Estimation, Function Approximation and multiple Operator Inversion. Previous approaches such as ridge regression, Support Vector methods and regression by smoothing kernels are included as special cases. We show connections between the cost function and some properties until now believed to apply to Support Vector Machines only. The optimal solution of all the problems described above can be found by solving a simple quadratic programming problem. The paper closes with a proof of the equivalence between Support Vector kernels and Green's functions of regularization operators.
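Since ridge regression appears above as a special case of the kernel framework, a minimal sketch of kernel ridge regression may help: the optimal coefficients solve the linear system (K + λI)α = y. The kernel, data and regularization constant below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))

rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

# Kernel ridge regression: the optimum of the regularized quadratic objective
# solves the linear system (K + lam * I) alpha = y.
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

y_fit = K @ alpha
print(float(np.mean((y_fit - y) ** 2)) < 0.05)  # True: close fit on training data
```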
A probabilistic framework for SVM regression and error bar estimation
 Machine Learning
, 2002
Abstract

Cited by 15 (1 self)
In this paper, we elaborate on the well-known relationship between Gaussian Processes (GP) and Support Vector Machines (SVM) under some convexity assumptions on the loss functions. This paper concentrates on the derivation of the evidence and error-bar approximations for regression problems. An error-bar formula is derived based on the ε-insensitive loss function.
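For concreteness, a small sketch of the ε-insensitive loss mentioned above (the parameter value is an illustrative choice): residuals inside the ε-tube incur no cost, and the loss grows linearly outside it.

```python
import numpy as np

def eps_insensitive(residual, eps=0.1):
    """L(r) = max(0, |r| - eps): zero inside the eps-tube, linear outside."""
    return np.maximum(np.abs(residual) - eps, 0.0)

print(eps_insensitive(0.05) == 0.0)   # True: inside the tube, no cost
print(float(eps_insensitive(0.30)))   # ~0.2: penalized linearly beyond the tube
```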
On a Class of Support Vector Kernels Based on Frames in Function Hilbert Spaces
 Neural Computation
, 2001
Abstract

Cited by 8 (0 self)
In recent years there has been increasing interest in kernel-based techniques, such as Support Vector techniques, Regularization Networks and Gaussian Processes. There are close relationships among these techniques, with the kernel function playing a central role. This paper discusses a new class of kernel functions derived from so-called frames in a function Hilbert space.
MDL and MML: Similarities and Differences (Introduction to Minimum Encoding Inference – Part III)
, 1994
Abstract

Cited by 6 (0 self)
This paper continues the introduction to minimum encoding inductive inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We describe the message length estimates used in Wallace's Minimum Message Length (MML) inference and Rissanen's Minimum Description Length (MDL) inference. The differences in the message length estimates of the two approaches are explained. The implications of these differences for applications are discussed.
MML and Bayesianism: Similarities and Differences (Introduction to Minimum Encoding Inference – Part II)
, 1994
Abstract

Cited by 6 (0 self)
This paper continues the introduction to minimum encoding inference given by Oliver and Hand. This series of papers was written with the objective of providing an introduction to this area for statisticians. We examine the relationship between Bayesianism and Minimum Message Length (MML) inference. We argue that MML augments Bayesian methods by providing a sound Bayesian method for point estimation which is invariant under nonlinear transformations. We explore the issues of invariance of estimators under nonlinear transformations, the role of the Fisher information matrix in MML inference, and the apparent similarity between MML and the adoption of a Jeffreys prior. We then compare MML to an approximate method of Bayesian model class selection. Despite apparent similarities in their expressions, the properties of the two approaches can be different.
Bayesian Approaches to Segmenting a Simple Time Series
, 1997
Abstract

Cited by 6 (1 self)
The segmentation problem arises in many applications in data mining, A.I. and statistics. In this paper, we consider segmenting simple time series. We develop two Bayesian approaches for segmenting a time series, namely the Bayes factor approach and the Minimum Message Length (MML) approach. We perform simulations comparing these Bayesian approaches, and then perform a comparison with other classical approaches, namely AIC, MDL and BIC. We conclude that the MML criterion is the preferred criterion. We then apply the segmentation method to financial time series data.

1 Introduction

In this paper, we consider the problem of segmenting simple time series. We consider time series of the form y_{t+1} = y_t + μ_j + ε_t, where we are given N data points (y_1, ..., y_N) and we assume there are C + 1 segments (j ∈ {0, ..., C}), and that each ε_t is Gaussian with mean zero and variance σ_j². We wish to estimate the number of segments, C + 1, the segment boundaries, {v_1, ...
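The piecewise random-walk model described above can be simulated directly. The segment drifts, noise levels and boundaries below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Model sketch: y_{t+1} = y_t + mu_j + eps_t, with eps_t ~ N(0, sigma_j^2)
# and j selecting the segment that time step t falls into.
rng = np.random.default_rng(42)
mus    = [0.5, -0.3, 0.1]   # per-segment drift mu_j (illustrative)
sigmas = [0.2, 0.5, 0.1]    # per-segment noise std sigma_j (illustrative)
bounds = [100, 200, 300]    # segment end points v_j (illustrative)

y = [0.0]
seg = 0
for t in range(bounds[-1] - 1):
    if t >= bounds[seg]:
        seg += 1            # crossed a segment boundary
    y.append(y[-1] + mus[seg] + sigmas[seg] * rng.normal())
y = np.array(y)
print(len(y))  # 300
```

A segmentation method such as the MML criterion discussed above would then try to recover the boundaries and per-segment parameters from y alone.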
Using Regularization to Derive Optimally Connected Architectures
Abstract
There are several ways to design neural networks so that they generalize better. Specifying an architecture with local connections is one of these, but the choice of the size and location of the neighborhoods involved in this method always seems to be ad hoc. We propose to add an extra term to the error function which forces a fully connected network to find optimal local connections or locally shared connections. We also propose methods to optimize the hyperparameters involved in this extra term. Results are shown on a compression task of handwritten digits with an autoencoder MLP.

1 Introduction

The number of free parameters in artificial neural networks is a well-known factor in generalization performance [Baum and Haussler 89, Moody 92]. Constraining this number is thus necessary to reach good performance. One can do so in basically two different ways: imposing a priori constraints on the network, or letting the data impose the constraints (through learning). The first m...
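A minimal sketch of the idea of an extra error term that penalizes long-range connections; the distance measure, penalty form and coefficient below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def locality_penalty(W, pos_in, pos_out, lam=0.1):
    """Extra error term: sum of squared weights, each scaled by the distance
    between the units it connects. Minimizing it alongside the usual error
    drives distant (non-local) connections toward zero."""
    d = np.abs(pos_out[:, None] - pos_in[None, :])  # d[i, j] = |position_i - position_j|
    return lam * np.sum(d * W ** 2)

# 8 input units and 4 hidden units laid out on a line:
pos_in = np.arange(8, dtype=float)
pos_out = np.linspace(0.0, 7.0, 4)

W_dense = np.ones((4, 8))                                       # fully connected
W_local = np.exp(-np.abs(pos_out[:, None] - pos_in[None, :]))   # mostly local

# A locally connected weight matrix incurs a smaller penalty:
print(locality_penalty(W_local, pos_in, pos_out)
      < locality_penalty(W_dense, pos_in, pos_out))  # True
```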
Nuclear Power Plant Components Condition Monitoring by Probabilistic Support Vector Machine
, 2013
Abstract
In this paper, an approach for predicting the condition of Nuclear Power Plant (NPP) components is proposed for the purposes of condition monitoring. It builds on a modified version of the Probabilistic Support Vector Regression (PSVR) method, which is based on the Bayesian probabilistic paradigm with a Gaussian prior. Specific techniques are introduced for tuning the PSVR hyperparameters, model identification and uncertainty analysis. A real case study is considered, regarding the prediction of a drifting process parameter of an NPP component.