Results 1–10 of 22
A tutorial on support vector machines for pattern recognition
Data Mining and Knowledge Discovery, 1998
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and nonseparable data, working through a nontrivial example in detail. We describe a mechanical analogy, and discuss when SV ..."
Abstract

Cited by 2272 (11 self)
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and nonseparable data, working through a nontrivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
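The kernel mapping technique the abstract refers to can be made concrete with a short sketch. This is not the tutorial's code; the choice of a Gaussian RBF kernel, the `gamma` parameter, and the toy points are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel: K(x, y) = exp(-gamma * ||x - y||^2).

    Computes inner products in a (potentially infinite-dimensional)
    feature space without forming the mapping explicitly -- the
    "kernel trick" used to build SVM solutions nonlinear in the data.
    """
    # Squared Euclidean distances between every row of X and every row of Y.
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * sq_dists)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K = rbf_kernel(X, X)
# K is symmetric with ones on the diagonal and is positive semidefinite,
# so it is a valid kernel (Gram) matrix.
```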
Object-Centered Surface Reconstruction: Combining Multi-Image Stereo and Shading
International Journal of Computer Vision, 1995
"... Our goal is to reconstruct both the shape and reflectance properties of surfaces from multiple images. We argue that an objectcentered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rig ..."
Abstract

Cited by 120 (19 self)
Our goal is to reconstruct both the shape and reflectance properties of surfaces from multiple images. We argue that an object-centered representation is most appropriate for this purpose because it naturally accommodates multiple sources of data, multiple images (including motion sequences of a rigid object), and self-occlusions. We then present a specific object-centered reconstruction method and its implementation. The method begins with an initial estimate of surface shape provided, for example, by triangulating the result of conventional stereo. The surface shape and reflectance properties are then iteratively adjusted to minimize an objective function that combines information from multiple input images. The objective function is a weighted sum of stereo, shading, and smoothness components, where the weight varies over the surface. For example, the stereo component is weighted more strongly where the surface projects onto highly textured areas in the images, and less strongly othe...
Predictive Application-Performance Modeling in a Computational Grid Environment
1999
"... This paper describes and evaluates the application of three local learning algorithms  nearestneighbor, weightedaverage, and locallyweighted polynomial regression  for the prediction of runspecific resourceusage on the basis of runtime input parameters supplied to tools. A twolevel knowl ..."
Abstract

Cited by 60 (12 self)
This paper describes and evaluates the application of three local learning algorithms (nearest-neighbor, weighted-average, and locally-weighted polynomial regression) for the prediction of run-specific resource usage on the basis of run-time input parameters supplied to tools. A two-level knowledge base allows the learning algorithms to track short-term fluctuations in the performance of computing systems, and the use of instance editing techniques improves the scalability of the performance-modeling system. The learning algorithms assist PUNCH, a network-computing system at Purdue University, in emulating an ideal user in terms of its resource management and usage policies.

1. Introduction

It is now recognized that the heterogeneous nature of the network-computing environment cannot be effectively exploited without some form of adaptive or demand-driven resource management (e.g., [10, 11, 12, 14, 18, 27]). A demand-driven resource management system can be characterized by its a...
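As a rough illustration of the third learner the abstract names, here is a minimal locally-weighted (degree-1) regression sketch. The function name, the bandwidth `tau`, and the toy run-time data are hypothetical, not drawn from the paper or from PUNCH.

```python
import numpy as np

def locally_weighted_regression(x_train, y_train, x_query, tau=0.5):
    """Predict y at x_query with a locally weighted linear (degree-1) fit.

    Each training point gets a Gaussian weight based on its distance from
    the query, so nearby observations dominate the prediction.
    """
    w = np.exp(-((x_train - x_query) ** 2) / (2.0 * tau**2))
    A = np.vstack([np.ones_like(x_train), x_train]).T   # design matrix [1, x]
    W = np.diag(w)
    # Weighted least squares: solve (A^T W A) beta = A^T W y.
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_train)
    return beta[0] + beta[1] * x_query

# Hypothetical resource-usage history: run time grows with input size.
sizes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
runtimes = np.array([1.1, 2.0, 2.9, 4.2, 5.1])
pred = locally_weighted_regression(sizes, runtimes, 3.5)
# pred interpolates between the nearby runs at sizes 3 and 4.
```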
Frequency analysis of gradient estimators in volume rendering
IEEE Transactions on Visualization and Computer Graphics, 1996
"... email � mark�nt.el.utwente.nl rendering �nal version ..."
Abstract

Cited by 59 (0 self)
email: mark@nt.el.utwente.nl, final version
Speaker Independent Audio-Visual Database For Bimodal ASR
Proc. Europ. Tut. Work. Audio-Visual Speech Proc., Rhodes, 1997
"... This paper describes the audiovisual database collected at AT&T Labs#Research for the study of bimodal speech recognition. To date, this database consists of twomultiple speaker parts, namely isolated confusable words and connected letters, thus allowing the study of some popular and relatively sim ..."
Abstract

Cited by 26 (3 self)
This paper describes the audio-visual database collected at AT&T Labs-Research for the study of bimodal speech recognition. To date, this database consists of two multiple-speaker parts, namely isolated confusable words and connected letters, thus allowing the study of some popular and relatively simple speaker independent audio-visual recognition tasks. In addition, a single speaker connected digits database is collected to facilitate speedy development and testing of various algorithms. Intentionally, no lip markings are used on the subjects during data collection. Development of robust and speaker independent algorithms for mouth location and lip contour extraction is thus necessary in order to obtain informative features about visual speech (the "visual front end"). We describe our approach to this problem, and we report our automatic speechreading and audio-visual speech recognition results on the single speaker connected digits task.
Using 3-Dimensional Meshes To Combine Image-Based and Geometry-Based Constraints
In European Conference on Computer Vision, 1994
"... A unified framework for 3D shape reconstruction allows us to combine imagebased and geometrybased information sources. The image information is akin to stereo and shapefromshading, while the geometric information may be provided in the form of 3D points, 3D features or 2D silhouettes. A form ..."
Abstract

Cited by 24 (4 self)
A unified framework for 3D shape reconstruction allows us to combine image-based and geometry-based information sources. The image information is akin to stereo and shape-from-shading, while the geometric information may be provided in the form of 3D points, 3D features or 2D silhouettes. A formal integration framework is critical in recovering complicated surfaces because the information from a single source is often insufficient to provide a unique answer. Our approach to shape recovery is to deform a generic object-centered 3D representation of the surface so as to minimize an objective function. This objective function is a weighted sum of the contributions of the various information sources. We describe these various terms individually, our weighting scheme, and our optimization method. Finally, we present results on a number of difficult images of real scenes for which a single source of information would have proved insufficient.
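The weighted-sum objective described here can be sketched in one dimension: a data term pulls the estimate toward observations while a smoothness term regularizes it. Everything below (the toy signal, the weight `lambda_smooth`, plain gradient descent) is an illustrative stand-in, not the paper's method.

```python
import numpy as np

# Toy stand-in: recover a 1-D "surface" z by minimizing a weighted sum of
# a data term (image-based observations) and a smoothness term
# (geometry-based prior).
rng = np.random.default_rng(0)
true_z = np.sin(np.linspace(0, np.pi, 50))
observed = true_z + 0.1 * rng.standard_normal(50)

lambda_smooth = 5.0            # relative weight of the smoothness term
z = observed.copy()
for _ in range(500):
    data_grad = z - observed                            # pulls toward data
    smooth_grad = np.zeros_like(z)
    smooth_grad[1:-1] = 2 * z[1:-1] - z[:-2] - z[2:]    # discrete Laplacian
    smooth_grad[0] = z[0] - z[1]                        # boundary terms
    smooth_grad[-1] = z[-1] - z[-2]
    z -= 0.05 * (data_grad + lambda_smooth * smooth_grad)

def objective(u):
    """Weighted sum of data-fidelity and smoothness components."""
    return (0.5 * np.sum((u - observed) ** 2)
            + 0.5 * lambda_smooth * np.sum(np.diff(u) ** 2))
# Descent strictly lowers the combined objective relative to the raw data.
```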
From Regular Images to Animated Heads: A Least Squares Approach
In European Conference on Computer Vision, 1998
"... We show that we can effectively fit arbitrarily complex animation models to noisy image data. Our approach is based on leastsquares adjustment using of a set of progressively finer control triangulations and takes advantage of three complementary sources of information: stereo data, silhouette edge ..."
Abstract

Cited by 23 (8 self)
We show that we can effectively fit arbitrarily complex animation models to noisy image data. Our approach is based on least-squares adjustment using a set of progressively finer control triangulations and takes advantage of three complementary sources of information: stereo data, silhouette edges and 2D feature points.
Analysis and Computation of Immersed Boundaries, With Application to Pulp Fibres
University of British Columbia, 1997
"... We accept this thesis as conforming ..."
A Mean Field Stochastic Theory for Species-Rich Assembled Communities
"... A dynamical model of an ecological community is analyzed within a "meanfield approximation" in which one of the species interacts with the combination of all of the other species in the community. Within this approximation the model may be formulated as a master equation describing a onestep stoch ..."
Abstract

Cited by 7 (2 self)
A dynamical model of an ecological community is analyzed within a "mean-field approximation" in which one of the species interacts with the combination of all of the other species in the community. Within this approximation the model may be formulated as a master equation describing a one-step stochastic process. The stationary distribution is obtained in closed form and is shown to reduce to a log-series or log-normal distribution, depending on the values that the parameters describing the model take on. A hyperbolic relationship between the connectance of the matrix of interspecies interactions and the average number of species exists for a range of parameter values. The time evolution of the model at short and intermediate times is analyzed using van Kampen's approximation, which is valid when the number of individuals in the community is large. Good agreement with numerical simulations is found. The large time behavior, and the approach to the stationary state, is obtained by solvi...
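The closed-form stationary distribution of a one-step (birth-death) process can be obtained from detailed balance, which a few lines make concrete. The birth and death rates below are illustrative choices, not the paper's model.

```python
import numpy as np

N_MAX = 200   # truncation of the population axis (illustrative)

def birth(n):
    """Birth rate g(n): immigration plus per-capita reproduction (toy choice)."""
    return 1.0 + 0.5 * n

def death(n):
    """Death rate r(n): per-capita mortality (toy choice)."""
    return 0.6 * n

# Detailed balance for a one-step process:
#   P(n) * g(n) = P(n+1) * r(n+1),
# so the stationary distribution follows by recursion from P(0),
# then normalization.
p = np.zeros(N_MAX + 1)
p[0] = 1.0
for n in range(N_MAX):
    p[n + 1] = p[n] * birth(n) / death(n + 1)
p /= p.sum()
# p is now a proper probability distribution; its mode sits where the
# birth rate first drops below the death rate.
```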
Bayesian Analysis IV: Noise And Computing Time Considerations
J. Magn. Reson., 1991
"... . Probability theory, when interpreted as logic, enables one to ask many questions not possible with the frequency interpretation of probability theory. Often, answering these questions can be computationally intensive. If these techniques are to find their way into general use in NMR, a way that al ..."
Abstract

Cited by 5 (4 self)
Probability theory, when interpreted as logic, enables one to ask many questions not possible with the frequency interpretation of probability theory. Often, answering these questions can be computationally intensive. If these techniques are to find their way into general use in NMR, a way that allows one to calculate the probability for the frequencies, amplitudes, and decay rate constants quickly and easily must be found. In this paper, a procedure that allows one to compute the posterior probability for the frequencies, amplitudes, and decay rate constants from a series of zero-padded discrete Fourier transforms of the complex FID data when the data have been multiplied by a decaying exponential is described. Additionally, the calculation is modified to include prior information about the noise, and it is shown that obtaining a sample of the noise is almost as important as obtaining a signal sample, because it allows one to investigate complicated spectra using simple models. Thre...
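The core numerical ingredient, a zero-padded discrete Fourier transform of a decaying complex FID, can be sketched as follows. The sampling interval, frequency, decay constant, and noise level are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(256)
dt = 1e-3        # sampling interval in seconds (illustrative)
f_true = 113.0   # true resonance frequency in Hz (illustrative)
alpha = 20.0     # decay rate constant in 1/s (illustrative)

# Complex FID: a decaying complex exponential plus complex Gaussian noise.
fid = np.exp((2j * np.pi * f_true - alpha) * n * dt)
fid += 0.05 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))

# Zero-pad to 4096 points: the DFT grid becomes finer
# (1/(4096*dt) ~ 0.24 Hz instead of 1/(256*dt) ~ 3.9 Hz),
# so the spectral peak locates the frequency more precisely.
padded = np.concatenate([fid, np.zeros(4096 - 256)])
spectrum = np.fft.fft(padded)
freqs = np.fft.fftfreq(4096, d=dt)
f_est = freqs[np.argmax(np.abs(spectrum))]
# f_est falls within one fine-grid bin of f_true.
```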