Results 21–30 of 496
An Adaptive Learning and Reduction of Uncertainty in Stochastic Volatility
"... The essential feature of the work is a reduction of uncertainty in latent volatility due to a Bayesian learning procedure. Starting from a discrete-time stochastic volatility model, we derive a recurrence equation for the variance of the innovation term in the latent volatility equation. This equation ..."
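The variance recurrence this abstract describes can be illustrated with a small sketch. Everything below is a hypothetical stand-in, not the paper's actual equations: under a linear-Gaussian approximation of the latent volatility dynamics, the posterior variance of the latent state follows a Kalman-style recursion whose fixed point quantifies the residual uncertainty after Bayesian learning.

```python
# Hypothetical sketch (not the paper's derivation): for latent dynamics
# h_t = a * h_{t-1} + eta_t with eta_t ~ N(0, q) and a noisy observation
# of variance r, the posterior variance p_t of h_t follows a Kalman-style
# recursion and shrinks toward a fixed point as observations arrive.
def variance_recursion(p_prev, a=0.95, q=0.1, r=0.5):
    p_pred = a * a * p_prev + q     # predict step: propagate uncertainty
    gain = p_pred / (p_pred + r)    # Bayesian learning weight
    return (1.0 - gain) * p_pred    # update step: uncertainty reduced

p = 1.0                             # large initial uncertainty
for _ in range(50):
    p = variance_recursion(p)
# p has converged close to the fixed point of the recursion
```

The recursion is a contraction, so the reduction of uncertainty is monotone from any starting variance above the fixed point.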
Cost Complexity of . . . Reduction to Realizable Active Learning
, 2009
"... Proactive Learning is a generalized form of active learning with multiple oracles exhibiting different reliabilities (label noise) and costs. We propose a general approach for Proactive Learning that explicitly addresses the cost vs. reliability tradeoff for oracle and instance selection. We formulate the problem in the PAC learning framework with bounded noise, and transform it into realizable active learning via a reduction technique, while keeping the overall query cost small. We propose two types of sequential hypothesis tests (denoted as SeqHT) that estimate the label of a given query from ..."
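The sequential hypothesis tests mentioned in this abstract can be sketched generically. What follows is not the paper's SeqHT procedure, just a minimal illustration of the idea: keep querying a noisy oracle about one label until a Hoeffding-style confidence radius separates the empirical majority from 1/2, trading query cost against label reliability.

```python
import math
import random

def seq_label(query_oracle, delta=0.05, max_queries=200):
    """Illustrative sequential test (not the paper's SeqHT): query a
    noisy binary oracle until the empirical frequency of 1s is separated
    from 1/2 by a Hoeffding-style radius, then commit to the majority."""
    ones = 0
    for n in range(1, max_queries + 1):
        ones += query_oracle()
        p_hat = ones / n
        radius = math.sqrt(math.log(2 * n * n / delta) / (2 * n))
        if abs(p_hat - 0.5) > radius:        # confident majority found
            return int(p_hat > 0.5), n       # label, queries spent
    return int(2 * ones > max_queries), max_queries  # fall back to majority

random.seed(0)
noisy = lambda: 1 if random.random() < 0.8 else 0  # oracle with 20% noise
label, cost = seq_label(noisy)
```

A cleaner oracle crosses the stopping threshold after fewer queries, which is exactly the cost-vs-reliability tradeoff the abstract refers to.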
Local Dimensionality Reduction for Locally Weighted Learning
"... Incremental learning of sensorimotor transformations in high dimensional spaces is one of the basic prerequisites for the success of autonomous robot devices as well as biological movement systems. So far, due to sparsity of data in high dimensional spaces, learning in such settings requires a significant ... systems are locally low dimensional and dense. Under this assumption, we derive a learning algorithm, Locally Adaptive Subspace Regression, that exploits this property by combining a local dimensionality reduction as a preprocessing step with a nonparametric learning technique, locally weighted regression ..."
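The combination this abstract names, local dimensionality reduction as a preprocessing step feeding a locally weighted regression, can be sketched in a few lines. This is an illustrative stand-in, not the paper's Locally Adaptive Subspace Regression: a Gaussian kernel weights the neighborhood of the query, a weighted PCA finds the local subspace, and a weighted affine fit predicts in the reduced coordinates.

```python
import numpy as np

def lwr_with_local_pca(X, y, query, k_dim=2, bandwidth=1.0):
    """Illustrative sketch: local PCA preprocessing + locally weighted
    affine regression in the reduced coordinates (hypothetical choices
    of kernel, bandwidth, and subspace dimension)."""
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2 * bandwidth ** 2))
    mu = (w @ X) / w.sum()                    # weighted local mean
    Xc = (X - mu) * np.sqrt(w)[:, None]       # weighted, centered data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k_dim]                            # local principal directions
    Z = (X - mu) @ P.T                        # reduced coordinates
    Zq = (query - mu) @ P.T
    A = np.hstack([Z, np.ones((len(Z), 1))])  # affine design matrix
    W = np.diag(w)
    beta = np.linalg.lstsq(W @ A, W @ y, rcond=None)[0]
    return np.append(Zq, 1.0) @ beta

rng = np.random.default_rng(0)
Z0 = rng.normal(size=(200, 2))                # true low-dimensional latent
M = rng.normal(size=(2, 5))
X = Z0 @ M                                    # inputs lie on a 2-D subspace of R^5
y = Z0 @ np.array([1.0, 0.5])                 # target depends only on the latent
pred = lwr_with_local_pca(X, y, X[50])
```

Because the synthetic data is exactly locally low dimensional, the 2-D local subspace captures everything and the weighted affine fit recovers the target at the query.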
Finding Optimal Derivation Strategies in Redundant Knowledge Bases
 Artificial Intelligence
, 1990
"... A backward chaining process uses a collection of rules to reduce a given goal to a sequence of database retrievals. A "derivation strategy" is an ordering on these steps, specifying when to use each rule and when to perform each retrieval. Given the costs of reductions and retrievals, and ..."
Cited by 30 (16 self)
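The cost-based ordering question in this abstract has a classical toy instance that is easy to demonstrate. This is a minimal sketch of the flavor of the problem, not the paper's algorithm: for a conjunction of independent steps, each with a cost and a success probability strictly below 1, and evaluation stopping at the first failure, expected cost is minimized by running steps in increasing order of cost / (1 − success probability).

```python
from itertools import permutations

def expected_cost(steps):
    """Expected cost of running (cost, success_prob) steps in order,
    stopping at the first failure."""
    cost, reach = 0.0, 1.0
    for c, p in steps:
        cost += reach * c   # pay c only if all earlier steps succeeded
        reach *= p
    return cost

def optimal_order(steps):
    # Classical exchange-argument result: sort by c / (1 - p).
    # Assumes independent steps with success probability p < 1.
    return sorted(steps, key=lambda s: s[0] / (1.0 - s[1]))

steps = [(5.0, 0.9), (1.0, 0.5), (3.0, 0.2)]
best = optimal_order(steps)
```

An adjacent-swap argument shows the ratio rule is optimal: putting step i before step j is better exactly when c_i(1 − p_j) < c_j(1 − p_i).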
Finite Sample Approximation Results for Principal Component Analysis: A Matrix Perturbation Approach
"... Principal Component Analysis (PCA) is a standard tool for dimensional reduction of a set of n observations (samples), each with p variables. In this paper, using a matrix perturbation approach, we study the non-asymptotic relation between the eigenvalues and eigenvectors of PCA computed on a finite ..."
Cited by 66 (15 self)
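The setting of this abstract, eigenvalues and eigenvectors of PCA computed from n finite samples, is easy to reproduce in a few lines. A minimal sketch (not the paper's perturbation analysis): the principal components are the eigenvectors of the p × p sample covariance matrix, and at finite n they are noisy estimates of the population ones.

```python
import numpy as np

def pca(X, k):
    """Top-k principal components via the sample covariance matrix."""
    Xc = X - X.mean(axis=0)                # center the n samples
    cov = Xc.T @ Xc / (len(X) - 1)         # p x p sample covariance
    evals, evecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(evals)[::-1][:k]    # take the top-k
    return evals[order], evecs[:, order]

rng = np.random.default_rng(1)
# population covariance diag(4, 1, 1): the first axis dominates
X = rng.normal(size=(500, 3)) * np.array([2.0, 1.0, 1.0])
evals, evecs = pca(X, 1)
# with n = 500 the sample eigenvalue is close to 4 and the leading
# eigenvector aligns closely with the first coordinate axis
```

Shrinking n makes both estimates noisier, which is exactly the finite-sample effect the paper quantifies.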
Continuous nonlinear dimensionality reduction by kernel eigenmaps
 Int. Joint Conf. Artif. Intell.
, 2003
"... We equate nonlinear dimensionality reduction (NLDR) to graph embedding with side information about the vertices, and derive a solution to either problem in the form of a kernel-based mixture of affine maps from the ambient space to the target space. Unlike most spectral NLDR methods, the central eig ..."
Cited by 19 (0 self)
Learning Object Intrinsic Structure for Robust Visual Tracking
 Proceedings of Int. Conf. on Computer Vision and Pattern Recognition
, 2003
"... In this paper, a novel method to learn the intrinsic object structure for robust visual tracking is proposed. The basic assumption is that the parameterized object state lies on a low dimensional manifold and can be learned from training data. Based on this assumption, firstly we derived the dimensionality reduction and density estimation algorithm for unsupervised learning of object intrinsic representation; the obtained non-rigid part of the object state reduces even to 2 dimensions. Secondly the dynamical model is derived and trained based on this intrinsic representation. Thirdly the learned ..."
Cited by 28 (0 self)
Deriving Probabilistic Databases with Inference Ensembles
"... Many real-world applications deal with uncertain or missing data, prompting a surge of activity in the area of probabilistic databases. A shortcoming of prior work is the assumption that an appropriate probabilistic model, along with the necessary probability distributions, is given. We address this shortcoming by presenting a framework for learning a set of inference ensembles, termed metarule semilattices, or MRSL, from the complete portion of the data. We use the MRSL to infer probability distributions for missing data, and demonstrate experimentally that high accuracy is achieved ..."
Cited by 3 (0 self)
Discriminant analysis with tensor representation
 in Proc. IEEE Conf. Comput. Vision Pattern Recognit.
, 2005
"... In this paper, we present a novel approach to solving the supervised dimensionality reduction problem by encoding an image object as a general tensor of 2nd or higher order. First, we propose a Discriminant Tensor Criterion (DTC), whereby multiple interrelated lower-dimensional discriminative subspaces are derived for feature selection. Then, a novel approach called k-mode Cluster-based Discriminant Analysis is presented to iteratively learn these subspaces by unfolding the tensor along different tensor dimensions. We call this algorithm Discriminant Analysis with Tensor Representation (DATER ..."
Cited by 53 (13 self)
Reduction of learning time for robots using automatic state abstraction
 in Proc. of the First European Symposium on Robotics
, 2006
"... The required learning time and the curse of dimensionality restrict the applicability of Reinforcement Learning (RL) on real robots. Difficulty in the inclusion of initial knowledge and in understanding the learned rules must be added to the mentioned problems. In this paper we address automatic state abstraction ... as an optimization problem and derive a new algorithm that adapts decision tree learning techniques to state abstraction. The proof of performance is supported by strong evidence from simulation results in nondeterministic environments. Simulation results show encouraging enhancements in the required number ..."
Cited by 3 (3 self)