Results 1–10 of 14
Regularization networks and support vector machines
 Advances in Computational Mathematics
, 2000
Abstract

Cited by 288 (34 self)
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik’s theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case.
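For the quadratic loss, the regression problem this abstract describes reduces to regularized least squares in a reproducing kernel Hilbert space: the solution is a kernel expansion over the data whose coefficients solve one linear system. A minimal sketch; the data, kernel width, and regularization parameter are illustrative, not taken from the paper:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=0.5):
    """Gaussian radial basis function kernel matrix."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_regularization_network(X, y, lam=1e-2):
    """Coefficients c of f(x) = sum_i c_i k(x, x_i), from (K + lam*n*I) c = y."""
    n = len(X)
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

# Sparse, noisy samples of a multivariate function (illustrative data)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1]) + 0.05 * rng.standard_normal(30)
c = fit_regularization_network(X, y)
f_hat = rbf_kernel(X, X) @ c               # fitted values at the training points
```

With an RBF kernel this is exactly the Radial Basis Functions special case the abstract mentions; other kernels give other members of the same family.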
An equivalence between sparse approximation and Support Vector Machines
 A.I. Memo 1606, MIT Artificial Intelligence Laboratory
, 1997
Abstract

Cited by 216 (7 self)
This publication can be retrieved by anonymous ftp to publications.ai.mit.edu. The pathname for this publication is: ai-publications/1500-1999/AIM-1606.ps.Z This paper shows a relationship between two different approximation techniques: the Support Vector Machines (SVM), proposed by V. Vapnik (1995), and a sparse approximation scheme that resembles the Basis Pursuit DeNoising algorithm (Chen, 1995; Chen, Donoho and Saunders, 1995). SVM is a technique which can be derived from the Structural Risk Minimization Principle (Vapnik, 1982) and can be used to estimate the parameters of several different approximation schemes, including Radial Basis Functions, algebraic/trigonometric polynomials, B-splines, and some forms of Multilayer Perceptrons. Basis Pursuit DeNoising is a sparse approximation technique, in which a function is reconstructed by using a small number of basis functions chosen from a large set (the dictionary). We show that, if the data are noiseless, the modified version of Basis Pursuit DeNoising proposed in this paper is equivalent to SVM in the following sense: if applied to the same data set the two techniques give the same solution, which is obtained by solving the same quadratic programming problem. In the appendix we also present a derivation of the SVM technique in the framework of regularization theory, rather than statistical learning theory, establishing a connection between SVM, sparse approximation and regularization theory.
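The sparse-approximation side of this equivalence can be illustrated with an L1-regularized least-squares solver. The sketch below uses iterative soft-thresholding (ISTA) on a random dictionary; the dictionary, sparsity pattern, and parameters are invented for illustration, and ISTA is only one of several ways to solve the Basis Pursuit DeNoising problem (the paper itself works with the quadratic programming formulation):

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=2000):
    """Iterative soft-thresholding for min_c 0.5*||y - D c||^2 + lam*||c||_1."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth part
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = c - D.T @ (D @ c - y) / L          # gradient step on the quadratic term
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(1)
D = rng.standard_normal((50, 200))             # overcomplete dictionary (the "large set")
c_true = np.zeros(200)
c_true[[3, 40, 111]] = [1.5, -2.0, 1.0]        # only three active basis functions
y = D @ c_true                                 # noiseless data, as in the equivalence result
c = ista(D, y)
```

The recovered coefficient vector is sparse: a few basis functions from the dictionary reconstruct the signal, which is the behaviour the abstract relates to the SVM solution.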
A unified framework for Regularization Networks and Support Vector Machines
, 1999
Abstract

Cited by 54 (13 self)
This report describes research done at the Center for Biological & Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. This research was sponsored by the National Science Foundation under contract No. IIS-9800032, the Office of Naval Research under contract No. N00014-93-1-0385 and contract No. N00014-95-1-0600. Partial support was also provided by Daimler-Benz AG, Eastman Kodak, Siemens Corporate Research, Inc., ATR and AT&T. Contents: 1 Introduction; 2 Overview of statistical learning theory (2.1 Uniform convergence and the Vapnik-Chervonenkis bound; 2.2 The method of Structural Risk Minimization; 2.3 ε-uniform convergence and the Vγ dimension; 2.4 Overview of our approach); 3 Reproducing Kernel Hilbert Spaces: a brief overview; 4 Regularization Networks (4.1 Radial Basis Functions; 4.2 Regularization, generalized splines and kernel smoothers; 4.3 Dual representation of Regularization Networks; 4.4 From regression to classification); 5 Support vector machines (5.1 SVM in RKHS; 5.2 From regression to classification); 6 SRM for RNs and SVMs (6.1 SRM for SVM Classification; 6.1.1 Distribution dependent bounds for SVMC); 7 A Bayesian Interpretation of Regularization and SRM? (7.1 Maximum A Posteriori Interpretation; 7.2 Bayesian interpretation of the stabilizer in the RN and SVM functionals; 7.3 Bayesian interpretation of the data term in the Regularization and SVM functionals; 7.4 Why a MAP interpretation may be misleading); Connections between SVMs and Sparse Ap...
A Signal-Processing Framework for Reflection
 ACM TRANSACTIONS ON GRAPHICS
, 2004
Abstract

Cited by 36 (4 self)
... In this paper, we formalize these notions, showing that the reflected light field can be thought of in a precise quantitative way as obtained by convolving the lighting and BRDF, i.e. by filtering the incident illumination using the BRDF. Mathematically, we are able to express the frequency-space coefficients of the reflected light field as a product of the spherical harmonic coefficients of the illumination and the BRDF. These results are of practical importance in determining the well-posedness and conditioning of problems in inverse rendering: estimation of BRDF and lighting parameters from real photographs. Furthermore, we are able to derive analytic formulae for the spherical harmonic coefficients of many common BRDF and lighting models. From this formal analysis, we are able to determine precise conditions under which estimation of BRDFs and lighting distributions is well posed and well conditioned. Our mathematical analysis also has implications for forward rendering, especially the efficient rendering of objects under complex lighting conditions specified by environment maps. The results, especially the analytic formulae derived for Lambertian surfaces, are also relevant in computer vision in the areas of recognition, photometric stereo and structure from motion.
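The convolution result quoted above says that, in frequency space, reflection multiplies each lighting coefficient by a per-band BRDF transfer coefficient. A sketch for the Lambertian case, using the well-known analytic constants for the first few bands; the lighting coefficients are hypothetical, and normalization conventions for the constants differ between papers:

```python
import numpy as np

# Analytic Lambertian transfer coefficients A_l for l = 0, 1, 2 (higher bands
# decay rapidly and odd bands above l = 1 vanish). Treat the exact constants
# as indicative: conventions vary across the literature.
A = np.array([np.pi, 2.0 * np.pi / 3.0, np.pi / 4.0])

def reflect_lambertian(L_lm):
    """Reflection as frequency-space filtering: E_lm = A_l * L_lm per band l,
    i.e. a product of BRDF and lighting spherical harmonic coefficients."""
    return [A[l] * np.asarray(band) for l, band in enumerate(L_lm)]

# Hypothetical lighting coefficients up to l = 2 (2l + 1 values per band)
L_lm = [np.array([1.0]),
        np.array([0.2, -0.1, 0.3]),
        np.array([0.05, 0.0, -0.02, 0.1, 0.0])]
E_lm = reflect_lambertian(L_lm)
```

The rapid decay of A_l is what makes lighting estimation from a Lambertian surface ill-conditioned beyond the first few bands, which is the well-posedness point the abstract raises.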
Learning with Kernel Machine Architectures
, 2000
Abstract

Cited by 2 (0 self)
This thesis studies the problem of supervised learning using a family of machines, namely kernel learning machines. A number of standard learning methods belong to this family, such as Regularization Networks (RN) and Support Vector Machines (SVM). The thesis presents a theoretical justification of these machines within a unified framework based on the statistical learning theory of Vapnik. The generalization performance of RN and SVM is studied within this framework, and bounds on the generalization error of these machines are proved. In the second part, the thesis goes beyond standard one-layer learning machines, and probes into the problem of learning using hierarchical learning schemes. In particular it investigates the question: what happens when instead of training one machine using the available examples we train many of them, each in a different way, and then combine the machines? Two types of ensembles are defined: voting combinations and adaptive combinations. The statistical...
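One of the ensemble schemes considered in this thesis, a voting combination, can be sketched by training several kernel machines on bootstrap resamples and taking a majority vote. The data, kernel, and base learner below are illustrative stand-ins, not the thesis's exact construction:

```python
import numpy as np

def rbf(X1, X2, s=0.5):
    """Gaussian RBF kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def train_machine(X, y, lam=0.1):
    """One kernel machine: regularized least squares, used as a classifier via sign()."""
    c = np.linalg.solve(rbf(X, X) + lam * np.eye(len(X)), y)
    return lambda Xq: np.sign(rbf(Xq, X) @ c)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (60, 2))
y = np.sign(X[:, 0] * X[:, 1] + 1e-9)      # XOR-like toy labels

# Voting combination: each machine sees a different bootstrap resample;
# the ensemble classifies by majority vote. (An adaptive combination would
# instead learn data-dependent weights for the individual machines.)
machines = []
for _ in range(7):
    idx = rng.integers(0, len(X), len(X))
    machines.append(train_machine(X[idx], y[idx]))
vote = np.sign(sum(m(X) for m in machines))
```

An odd number of voters avoids ties, since each base machine outputs a hard ±1 label.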
PUPT-1827 Eigenvalue Dynamics and the Matrix Chain
, 1999
Abstract
We introduce a general method for transforming the equations of motion following from a Das-Jevicki-Sakita Hamiltonian, with boundary conditions, into a boundary value problem in one-dimensional quantum mechanics. For the particular case of a one-dimensional chain of interacting N × N Hermitean matrices, the corresponding large-N boundary value problem is mapped into a linear Fredholm equation with a Hilbert-Schmidt type kernel. The equivalence of this kernel, in special cases, to a second order differential operator allows us to recover all previously known explicit solutions for the matrix eigenvalues. In the general case, the distribution of eigenvalues is formally derived through a series of saddle-point approximations. The critical behaviour of the system, including a previously observed Kosterlitz-Thouless transition, is interpreted in terms of the stationary points. In particular we show that a previously conjectured infinite series of subleading critical points is due to expansion about unstable stationary points and ... The utility of studying the statistical mechanics of systems which can be encoded in terms ...
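The Fredholm equation mentioned here is specific to the matrix chain, but the generic numerical approach to a second-kind Fredholm equation with a Hilbert-Schmidt kernel is Nyström discretization: replace the integral by a quadrature rule and solve the resulting linear system. A sketch with an invented separable kernel, not the paper's:

```python
import numpy as np

def nystrom_fredholm(K, f, a, b, lam=1.0, n=200):
    """Solve the second-kind Fredholm equation
    u(x) = f(x) + lam * int_a^b K(x, t) u(t) dt
    by Nystrom discretization with trapezoidal weights: (I - lam*K*W) u = f."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w   # weights act on the t-index
    return x, np.linalg.solve(A, f(x))

# Illustrative separable (hence Hilbert-Schmidt) kernel, not the matrix-chain one
x, u = nystrom_fredholm(lambda s, t: s * t, np.sin, 0.0, 1.0, lam=0.5)
```

For this separable kernel the exact solution is u(x) = sin(x) + c·x with a constant c fixed by one scalar equation, which makes the discretization easy to check.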
Intermittency in Cluster Models; Correlation and Fluctuation Approaches
Abstract
Intermittent correlations/fluctuations in the particle spectra of high-energy collisions are studied using correlation and fluctuation descriptions of cluster models. It is shown that in both methods a leading contribution to intermittency may be connected with a cluster structure of multiparticle processes. 1 Introduction The study of correlations of final-state particles emitted at various positions of rapidity in hadron-hadron interactions has revealed a tendency for particles to be grouped in clusters over a range of rapidity of about 1 to 2 units [1-4]. These short-range correlations in hadron-hadron collisions have been interpreted in terms of cluster models [5-8], in which the observed hadrons are decay products of clusters. The e+e− data show a similar effect [9]. Moreover, the cluster scheme is useful for Monte Carlo simulation of hadronization in e+e− annihilation (cluster fragmentation model [10]). In this paper we shall analyse the intermittent b...
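Intermittency analyses of this kind are conventionally phrased in terms of scaled factorial moments computed over shrinking rapidity bins. A sketch of the horizontally averaged moment F_q (normalization conventions differ slightly across the literature), with simulated independent Poisson bins standing in for real event data:

```python
import numpy as np

def scaled_factorial_moment(counts, q):
    """Horizontally averaged scaled factorial moment over M bins:
    F_q = M**(q-1) * < sum_m n_m (n_m - 1) ... (n_m - q + 1) > / <N>**q,
    where < . > averages over events and N is the event multiplicity."""
    counts = np.asarray(counts, dtype=float)   # shape: (events, M bins)
    M = counts.shape[1]
    fact = np.ones_like(counts)
    for k in range(q):
        fact *= np.clip(counts - k, 0.0, None)
    return M ** (q - 1) * fact.sum(axis=1).mean() / counts.sum(axis=1).mean() ** q

# Independent Poisson bins (no clustering): F_q stays near 1, whereas genuine
# intermittency would show F_q rising as a power of the number of bins M.
rng = np.random.default_rng(0)
counts = rng.poisson(5.0, size=(20000, 10))
F2 = scaled_factorial_moment(counts, 2)
```

A cluster model would be probed by repeating this at increasing M and fitting the power-law growth of ln F_q against ln M.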