Results 1–10 of 145
A training algorithm for optimal margin classifiers
Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
Cited by 1304 (43 self)
A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
Regularization networks and support vector machines
Advances in Computational Mathematics, 2000
Cited by 269 (33 self)
Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples – in particular the regression problem of approximating a multivariate function from sparse data. Radial Basis Functions, for example, are a special case of both regularization and Support Vector Machines. We review both formulations in the context of Vapnik’s theory of statistical learning which provides a general foundation for the learning problem, combining functional analysis and statistics. The emphasis is on regression: classification is treated as a special case.
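A minimal sketch of a regularization network in the RBF special case mentioned above: kernel ridge regression of a function from sparse 1-D data. The Gaussian kernel width, regularization weight, and data are invented for illustration.

```python
import numpy as np

# Sparse noisy samples of an unknown function.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)

def k(a, b, sigma=0.2):
    """Gaussian (RBF) kernel matrix between two sets of 1-D points."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

# Regularization network: f(x) = sum_i c_i K(x, x_i),
# with coefficients from the linear system (K + lam I) c = y.
lam = 1e-3
c = np.linalg.solve(k(x, x) + lam * np.eye(len(x)), y)

x_test = np.linspace(0, 1, 100)
f = k(x_test, x) @ c          # the reconstructed function on a dense grid
```

The regularizer lam trades data fit against smoothness; lam → 0 recovers strict RBF interpolation.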
An equivalence between sparse approximation and Support Vector Machines
A.I. Memo 1606, MIT Artificial Intelligence Laboratory, 1997
Cited by 205 (7 self)
This publication can be retrieved by anonymous ftp to publications.ai.mit.edu. The pathname for this publication is: ai-publications/1500-1999/AIM-1606.ps.Z. This paper shows a relationship between two different approximation techniques: the Support Vector Machines (SVM), proposed by V. Vapnik (1995), and a sparse approximation scheme that resembles the Basis Pursuit De-Noising algorithm (Chen, 1995; Chen, Donoho and Saunders, 1995). SVM is a technique which can be derived from the Structural Risk Minimization Principle (Vapnik, 1982) and can be used to estimate the parameters of several different approximation schemes, including Radial Basis Functions, algebraic/trigonometric polynomials, B-splines, and some forms of Multilayer Perceptrons. Basis Pursuit De-Noising is a sparse approximation technique, in which a function is reconstructed by using a small number of basis functions chosen from a large set (the dictionary). We show that, if the data are noiseless, the modified version of Basis Pursuit De-Noising proposed in this paper is equivalent to SVM in the following sense: if applied to the same data set the two techniques give the same solution, which is obtained by solving the same quadratic programming problem. In the appendix we also present a derivation of the SVM technique in the framework of regularization theory, rather than statistical learning theory, establishing a connection between SVM, sparse approximation and regularization theory.
A Theory of Networks for Approximation and Learning
Laboratory, Massachusetts Institute of Technology, 1989
Cited by 195 (24 self)
Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
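One way to read the prototype interpretation above is as a radial-basis expansion with fewer centers than examples; the sketch below fits such an expansion by plain least squares (the paper's GRBF scheme additionally moves the centers and regularizes, which is omitted here). The grid sizes, Gaussian width, and target function are invented for the example.

```python
import numpy as np

# Approximate a 1-D "hypersurface" y = |x| from 40 examples using only
# 7 fixed Gaussian prototypes -- far fewer centers than data points.
x = np.linspace(-1, 1, 40)
y = np.abs(x)
centers = np.linspace(-1, 1, 7)               # the prototypes
sigma = 0.2
Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

# Least-squares combination of the prototypes (no regularization here).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
err = np.max(np.abs(Phi @ w - y))
```

The fitted weights play the role of the "optimally combined" prototypes described in the abstract.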
Transport equations for elastic and other waves in random media
Wave Motion, 1996
Cited by 121 (34 self)
We derive and analyze transport equations for the energy density of waves of any kind in a random medium. The equations take account of nonuniformities of the background medium, scattering by random inhomogeneities, polarization effects, coupling of different types of waves, etc. We also show that diffusive behavior occurs on long time and distance scales, and we determine the diffusion coefficients. The results are specialized to acoustic, electromagnetic, and elastic waves. The analysis is based on the governing equations of motion and uses the Wigner distribution.
A Theoretical Framework for Convex Regularizers in PDE-Based Computation of Image Motion
2000
Cited by 77 (20 self)
Many differential methods for the recovery of the optic flow field from an image sequence can be expressed in terms of a variational problem where the optic flow minimizes some energy. Typically, these energy functionals consist of two terms: a data term, which requires, e.g., that a brightness constancy assumption holds, and a regularizer that encourages global or piecewise smoothness of the flow field. In this paper we present a systematic classification of rotation invariant convex regularizers by exploring their connection to diffusion filters for multichannel images. This taxonomy provides a unifying framework for data-driven and flow-driven, isotropic and anisotropic, as well as spatial and spatio-temporal regularizers. While some of these techniques are classic methods from the literature, others are derived here for the first time. We prove that all these methods are well-posed: they possess a unique solution that depends in a continuous way on the initial data. An interesting structural relation between isotropic and anisotropic flow-driven regularizers is identified, and a design criterion is proposed for constructing anisotropic flow-driven regularizers in a simple and direct way from isotropic ones. Its use is illustrated by several examples.
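A minimal sketch of the oldest member of this taxonomy, the homogeneous (isotropic, flow-independent) regularizer of Horn and Schunck, written as the standard fixed-point iteration; the image data and parameters are invented, and boundary handling is simplified to periodic wrap.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.1, n_iter=800):
    """Optic flow minimizing a brightness-constancy data term plus a
    homogeneous quadratic smoothness regularizer (Horn-Schunck)."""
    Iy, Ix = np.gradient(I1)          # spatial derivatives (axis 0 = y, axis 1 = x)
    It = I2 - I1                      # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):                       # 4-neighbour mean, periodic boundaries
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        ubar, vbar = avg(u), avg(v)
        t = (Ix * ubar + Iy * vbar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ubar - Ix * t
        v = vbar - Iy * t
    return u, v

# Periodic sinusoid shifted right by one pixel: true flow is (u, v) = (1, 0).
n = 32
col = np.sin(2 * np.pi * np.arange(n) / n)
I1 = col[None, :] * np.ones((n, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
```

In the paper's classification this is the data-driven isotropic case; flow-driven and anisotropic regularizers replace the fixed neighbour averaging by a diffusion tensor.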
Pointwise semigroup methods and stability of viscous shock waves
Indiana Univ. Math. J., 1998
Cited by 63 (32 self)
Abstract. Considered as rest points of ODEs on L^p, stationary viscous shock waves present a critical case for which standard semigroup methods do not suffice to determine stability. More precisely, there is no spectral gap between stationary modes and essential spectrum of the linearized operator about the wave, a fact which precludes the usual analysis by decomposition into invariant subspaces. For this reason, there have until recently been no results on shock stability from the semigroup perspective except in the scalar or totally compressive case ([Sat], [K.2], resp.), each of which can be reduced to the standard semigroup setting by Sattinger's method of weighted norms. We overcome this difficulty in the general case by the introduction of new, pointwise semigroup techniques, generalizing earlier work of Howard [H.1], Kapitula [K.1-2], and Zeng [Ze, LZe]. These techniques allow us to do "hard" analysis in PDE within the dynamical systems/semigroup framework: in particular, to obtain sharp, global pointwise bounds on the Green's function of the linearized operator around the wave, sufficient for the analysis of linear and nonlinear stability. The method is general, and should find applications ...
A unified framework for Regularization Networks and Support Vector Machines
1999
Cited by 50 (13 self)
This report describes research done at the Center for Biological & Computational Learning and the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. This research was sponsored by the National Science Foundation under contract No. IIS-9800032, the Office of Naval Research under contract No. N00014-93-1-0385 and contract No. N00014-95-1-0600. Partial support was also provided by Daimler-Benz AG, Eastman Kodak, Siemens Corporate Research, Inc., ATR and AT&T. Contents: 1 Introduction (3); 2 Overview of statistical learning theory (5); 2.1 Uniform convergence and the Vapnik-Chervonenkis bound (7); 2.2 The method of Structural Risk Minimization (10); 2.3 ε-uniform convergence and the Vγ dimension (10); 2.4 Overview of our approach (13); 3 Reproducing Kernel Hilbert Spaces: a brief overview (14); 4 Regularization Networks (16); 4.1 Radial Basis Functions (19); 4.2 Regularization, generalized splines and kernel smoothers (20); 4.3 Dual representation of Regularization Networks (21); 4.4 From regression to ...; 5 Support Vector Machines (22); 5.1 SVM in RKHS (22); 5.2 From regression to ...; 6 SRM for RNs and SVMs (26); 6.1 SRM for SVM Classification (28); 6.1.1 Distribution dependent bounds for SVMC (29); 7 A Bayesian Interpretation of Regularization and SRM? (30); 7.1 Maximum A Posteriori Interpretation of ... (30); 7.2 Bayesian interpretation of the stabilizer in the RN and SVM functionals (32); 7.3 Bayesian interpretation of the data term in the Regularization and SVM functionals (33); 7.4 Why a MAP interpretation may be misleading (33); Connections between SVMs and Sparse Ap...
Describing Surfaces
Computer Vision, Graphics, and Image Processing, 1985
Cited by 48 (3 self)
This paper continues our work on visual representations of three-dimensional surfaces [Brady and Yuille 1984b]. The theoretical component of our work is a study of classes of surface curves as a source of constraint on the surface on which they lie, and as a basis for describing it. We analyze bounding contours, surface intersections, lines of curvature, and asymptotes. Our experimental work investigates whether the information suggested by our theoretical study can be computed reliably and efficiently. We demonstrate algorithms that compute lines of curvature of a (Gaussian smoothed) surface; determine planar patches and umbilic regions; extract axes of surfaces of revolution and tube surfaces. We report preliminary results on adapting the curvature primal sketch algorithms of Asada and Brady [1984] to detect and describe surface intersections. (c) Massachusetts Institute of Technology, 1984. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory's Artificial Intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505, the Office of Naval Research under contract number N00014-77-C-0389, and the System Development Foundation. This work was done while Haruo Asada was a visiting scientist at MIT on leave from Toshiba Corporation, Japan, and while Jean Ponce was a visiting scientist on leave from INRIA, Paris, France.
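The umbilic-region detection mentioned above rests on the two principal curvatures of the surface; the sketch below estimates Gaussian and mean curvature of a Monge patch z = f(x, y) by finite differences and flags umbilic points, where the principal curvatures coincide (H² = K). The paraboloid test surface and tolerances are invented for the example, not taken from the paper.

```python
import numpy as np

# Monge patch z = f(x, y) sampled on a grid; the paraboloid below has an
# umbilic point at its apex (both principal curvatures equal 2 there).
n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
Z = X ** 2 + Y ** 2
h = x[1] - x[0]

# First and second derivatives by central differences.
Zy, Zx = np.gradient(Z, h)
Zxy, Zxx = np.gradient(Zx, h)
Zyy, _ = np.gradient(Zy, h)

# First (E, F, G) and second (L, M, N) fundamental forms of the patch.
E, F, G = 1 + Zx ** 2, Zx * Zy, 1 + Zy ** 2
denom = np.sqrt(1 + Zx ** 2 + Zy ** 2)
L, M, N = Zxx / denom, Zxy / denom, Zyy / denom

K = (L * N - M * M) / (E * G - F * F)                 # Gaussian curvature
H = (E * N - 2 * F * M + G * L) / (2 * (E * G - F * F))  # mean curvature

# Umbilic where k1 == k2, i.e. H^2 - K == 0; planar where H == K == 0.
umbilic = np.isclose(H * H - K, 0.0, atol=1e-3)
```

Lines of curvature would then follow the eigenvector fields of the shape operator built from these same forms.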
Tracking Leukocytes In Vivo With Shape And Size Constrained Active Contours
2002
Cited by 44 (12 self)
Inflammatory disease is initiated by leukocytes (white blood cells) rolling along the inner surface lining of small blood vessels called postcapillary venules. Studying the number and velocity of rolling leukocytes is essential to understanding and successfully treating inflammatory diseases. Potential inhibitors of leukocyte recruitment can be screened by leukocyte rolling assays, and successful inhibitors validated by intravital microscopy. In this paper we present an active contour or snake-based technique to automatically track the movement of the leukocytes. The novelty of the proposed method lies in the energy functional that constrains the shape and size of the active contour. This paper introduces a significant enhancement over existing gradient-based snakes in the form of a modified gradient vector flow. Using the gradient vector flow, we can track leukocytes rolling at high speeds that are not amenable to tracking with the existing edge-based techniques. We also propose a new energy-based implicit sampling method of the points on the active contour that replaces the computationally expensive explicit method. To enhance the performance of this shape and size constrained snake model we have coupled it with a Kalman filter, so that during coasting (when the leukocytes are completely occluded or obscured), the tracker may infer the location of the center of the leukocyte. Finally we have compared the performance of the proposed snake tracker with that of the correlation and centroid-based trackers. The proposed snake tracker results in superior performance measures such as reduced error in locating the leukocyte under tracking and improvements in the percentage of frames successfully tracked. For screening and drug validation, the tracker shows promise as an automat...
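The coasting behaviour described in this abstract can be illustrated with a minimal constant-velocity Kalman filter in one dimension: during occluded frames the filter only predicts, and measurement updates resume once the cell is visible again. The noise covariances, motion model, and measurement sequence below are invented for the sketch, not the paper's calibrated values.

```python
import numpy as np

# Constant-velocity model: state = [position, velocity], measure position only.
dt = 1.0
Fm = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
Hm = np.array([[1.0, 0.0]])              # measurement matrix
Q = 0.01 * np.eye(2)                     # process noise (assumed)
R = np.array([[0.25]])                   # measurement noise (assumed)

x = np.array([[0.0], [0.0]])
P = np.eye(2)

def step(x, P, z=None):
    # Predict with the motion model.
    x = Fm @ x
    P = Fm @ P @ Fm.T + Q
    if z is not None:                    # update only when the cell is visible
        S = Hm @ P @ Hm.T + R
        Kg = P @ Hm.T @ np.linalg.inv(S)
        x = x + Kg @ (np.array([[z]]) - Hm @ x)
        P = (np.eye(2) - Kg @ Hm) @ P
    return x, P

# Cell drifts at 2 px/frame; frames 5-7 are "coasting" (no measurement).
for t in range(10):
    z = None if 5 <= t <= 7 else 2.0 * (t + 1)
    x, P = step(x, P, z)
```

During the three occluded frames the state simply advances by the learned velocity, which is exactly how the tracker infers the leukocyte's center while it is obscured.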