Results 1–10 of 11
Image and video colorization using vector-valued reproducing kernel Hilbert spaces
, 2010
Abstract
Cited by 4 (1 self)
Motivated by the setting of reproducing kernel Hilbert spaces (RKHS) and their extensions considered in machine learning, we propose an RKHS framework for image and video colorization. We review and study RKHS, especially in the vector-valued case, and provide various extensions for colorization problems. Both theory and a practical algorithm are presented, together with a number of numerical experiments.
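One of the simplest vector-valued RKHS models is a separable kernel K(x, x′) = k(x, x′)·I, under which each color channel is fit by an independent scalar kernel ridge regression. The sketch below illustrates that special case on synthetic data; the Gaussian kernel, the bandwidth and regularization values, and the one-dimensional "gray feature" are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=0.2):
    # Gram matrix of a Gaussian (RBF) kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def colorize(feats_labeled, colors_labeled, feats_unlabeled, lam=1e-3, sigma=0.2):
    """Separable vector-valued kernel ridge regression: with K(x, x') = k(x, x') * I,
    each color channel is fit independently by the same scalar-kernel solve."""
    K = gaussian_kernel(feats_labeled, feats_labeled, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), colors_labeled)  # (n, channels)
    return gaussian_kernel(feats_unlabeled, feats_labeled, sigma) @ alpha

# Toy example: feature = gray intensity; "colors" = a 2-channel chrominance signal.
rng = np.random.default_rng(0)
x_l = rng.uniform(0, 1, (30, 1))
c_l = np.hstack([np.sin(3 * x_l), np.cos(3 * x_l)])   # synthetic color channels
x_u = np.array([[0.5]])                                # pixel to be colorized
pred = colorize(x_l, c_l, x_u)
```

The same solve extends to genuinely coupled channels by replacing the scalar Gram matrix with a block operator-valued one.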
Causal reasoning by evaluating the complexity of conditional densities with kernel methods
 Neurocomputing
Abstract
Cited by 3 (3 self)
We propose a method to quantify the complexity of conditional probability measures by a Hilbert space seminorm of the logarithm of its density. The concept of Reproducing Kernel Hilbert Spaces (RKHS) is a flexible tool to define such a seminorm by choosing an appropriate kernel. We present several examples with artificial datasets where our kernel-based complexity measure is consistent with our intuitive understanding of complexity of densities. The intention behind the complexity measure is to provide a new approach to inferring causal directions. The idea is that the factorization of the joint probability measure P(effect, cause) into P(effect | cause)P(cause) typically leads to “simpler” and “smoother” terms than the factorization into P(cause | effect)P(effect). Since the conventional constraint-based approach of causal discovery is not able to determine the causal direction between only two variables, our inference principle can in particular be helpful when combined with other existing methods. We provide several simple examples with real-world data where the true causal directions indeed lead to simpler (conditional) densities.
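As a rough illustration of the inference principle (not the paper's log-density seminorm), one can compare the RKHS norms of kernel ridge fits in both directions and prefer the direction that yields the "simpler" fit. Everything below — the kernel, bandwidth, regularization, and the synthetic tanh mechanism — is an assumption of this sketch, not the authors' estimator.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # Gaussian Gram matrix for 1-D samples.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def rkhs_complexity(x, y, lam=1e-2, sigma=1.0):
    """RKHS norm ||f||^2 = alpha^T K alpha of the kernel ridge fit y ~ f(x),
    used here as a crude proxy for the smoothness of the conditional."""
    K = rbf(x, x, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    return float(alpha @ K @ alpha)

rng = np.random.default_rng(1)
cause = rng.normal(0, 1, 200)
effect = np.tanh(cause) + 0.05 * rng.normal(0, 1, 200)  # smooth causal mechanism
# The causal direction should admit the simpler (smaller-norm) fit.
c_forward = rkhs_complexity(cause, effect)
c_backward = rkhs_complexity(effect, cause)
```

The anti-causal regression has to fit the steep, nearly non-functional inverse of tanh, so its fitted function carries a much larger RKHS norm.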
Local Linear Semi-Supervised Regression
, 2009
Abstract
Cited by 1 (0 self)
In many machine learning application domains, obtaining labeled data is expensive but obtaining unlabeled data is much cheaper. For this reason there has been growing interest in algorithms that are able to take advantage of unlabeled data. In this report we propose an algorithm for using unlabeled data in a regression problem. The idea behind the method is to do manifold regularization using local linear estimators. This is the first extension of local linear regression to the semi-supervised setting. We present experimental results on both synthetic and real data and show that this method tends to perform better than methods which only utilize the labeled data.
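Setting the report's local linear estimator aside, the core manifold-regularization idea can be sketched with the standard graph-Laplacian least-squares objective: penalize squared error on the labeled points plus λ·fᵀLf over all points, so predictions vary smoothly along the data graph. The kernel weights, λ, and toy data below are assumptions of this sketch.

```python
import numpy as np

def manifold_regression(X, y_labeled, labeled_idx, sigma=0.5, lam=1.0):
    """Graph-Laplacian regularized least squares over all points:
    minimize sum_{i in L} (f_i - y_i)^2 + lam * f^T L f."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))     # Gaussian similarity graph
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W              # unnormalized graph Laplacian
    J = np.zeros((n, n))
    J[labeled_idx, labeled_idx] = 1.0      # indicator of labeled points
    b = np.zeros(n)
    b[labeled_idx] = y_labeled
    # Normal equations of the convex objective: (J + lam * L) f = J y.
    return np.linalg.solve(J + lam * L, b)

# Toy problem: 9 points on a line, only the two endpoints labeled.
X = np.linspace(0, 1, 9)[:, None]
labeled_idx = np.array([0, 8])
f = manifold_regression(X, np.array([0.0, 1.0]), labeled_idx)
```

The unlabeled interior points receive values interpolated along the graph; by the symmetry of this toy setup the midpoint lands at 0.5.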
Active Learning by Spherical Subdivision
Abstract
We introduce a computationally feasible, “constructive” active learning method for binary classification. The learning algorithm is initially formulated for separable classification problems, for a hyperspherical data space with constant data density, and for great spheres as classifiers. In order to reduce computational complexity the version space is restricted to spherical simplices and learning proceeds by subdividing the edges of maximal length. We show that this procedure optimally reduces a tight upper bound on the generalization error. The method is then extended to other separable classification problems using products of spheres as data spaces and isometries induced by charts of the sphere. An upper bound is provided for the probability of disagreement between classifiers (hence the generalization error) for non-constant data densities on the sphere. The emphasis of this work lies on providing mathematically exact performance estimates for active learning strategies.
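A minimal geometric sketch of the subdivision step: find the longest edge of a spherical simplex (by geodesic angle) and bisect it through the renormalized chord midpoint, producing two child simplices. The representation of a simplex as an array of unit vectors, and the helper name, are assumptions of this sketch, not the paper's formulation.

```python
import numpy as np

def subdivide_longest_edge(simplex):
    """One subdivision step: locate the vertex pair with the largest geodesic
    distance and split the simplex at the renormalized edge midpoint."""
    n = len(simplex)
    best, pair = -1.0, None
    for i in range(n):
        for j in range(i + 1, n):
            # Geodesic distance on the unit sphere is the angle between vertices.
            ang = np.arccos(np.clip(simplex[i] @ simplex[j], -1.0, 1.0))
            if ang > best:
                best, pair = ang, (i, j)
    i, j = pair
    mid = simplex[i] + simplex[j]
    mid /= np.linalg.norm(mid)             # project midpoint back onto the sphere
    child_a = [mid if k == j else v for k, v in enumerate(simplex)]
    child_b = [mid if k == i else v for k, v in enumerate(simplex)]
    return np.array(child_a), np.array(child_b), best

# Toy case: the spherical triangle with vertices e1, e2, e3 on the 2-sphere.
tri = np.eye(3)
a, b, longest = subdivide_longest_edge(tri)
```

Repeatedly applying this step to the simplex containing the version space is what drives the error bound down in the paper's analysis.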
Learning by Combining Native Features with Similarity Functions
, 2009
Abstract
The notion of exploiting data-dependent hypothesis spaces is an exciting new direction in machine learning with strong theoretical foundations [66]. A very practical motivation for these techniques is that they allow us to exploit unlabeled data in new ways [2]. In this work we investigate a particular technique for combining “native” features with features derived from a similarity function. We also describe a novel …
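A minimal sketch of the feature-combination idea: concatenate the native feature vector with similarities to a set of landmark points (which may be drawn from unlabeled data), then hand the augmented representation to any standard learner. The Gaussian similarity, the bandwidth, and the landmark choice are illustrative assumptions, not the report's specific construction.

```python
import numpy as np

def augment_with_similarities(X, landmarks, sigma=1.0):
    """Concatenate native features with Gaussian similarities to landmark
    points, yielding a data-dependent feature representation."""
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    sims = np.exp(-d2 / (2 * sigma ** 2))   # one similarity column per landmark
    return np.hstack([X, sims])

rng = np.random.default_rng(2)
X = rng.normal(size=(10, 3))                # 10 points, 3 native features
landmarks = X[:4]                           # e.g. sampled from unlabeled data
Z = augment_with_similarities(X, landmarks)
```

Any off-the-shelf linear classifier trained on `Z` then implicitly mixes the native geometry with the similarity-function geometry.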
Techniques for Exploiting Unlabeled Data
, 2008
Abstract
In many machine learning application domains obtaining labeled data is expensive but obtaining unlabeled data is much cheaper. For this reason there has been growing interest in algorithms that are able to take advantage of unlabeled data. In this thesis we develop several methods for taking advantage of unlabeled data in classification and regression tasks. Specific contributions include:
• A method for improving the performance of the graph mincut algorithm of Blum and Chawla [12] by taking randomized mincuts. We give theoretical motivation for this approach and we present empirical results showing that randomized mincut tends to outperform the original graph mincut algorithm, especially when the number of labeled examples is very small.
• An algorithm for semi-supervised regression based on manifold regularization …
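The randomized-mincut contribution can be sketched as: perturb the graph's edge capacities, recompute an s–t minimum cut each round, and label each vertex by majority vote over the rounds. The sketch below uses a plain Edmonds–Karp max-flow to find each cut; the perturbation scheme, rounds, and noise level are assumptions, not the thesis's exact procedure.

```python
import numpy as np
from collections import deque

def min_cut_partition(cap, s, t):
    """s-t max-flow via BFS augmenting paths (Edmonds-Karp); the vertices
    reachable from s in the residual graph form the source side of a min cut."""
    n = len(cap)
    res = cap.astype(float).copy()
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break
        f, v = float('inf'), t                 # bottleneck along the path
        while v != s:
            f = min(f, res[parent[v]][v])
            v = parent[v]
        v = t                                  # push the flow
        while v != s:
            res[parent[v]][v] -= f
            res[v][parent[v]] += f
            v = parent[v]
    side = np.zeros(n, dtype=bool)
    side[s] = True
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(n):
            if not side[v] and res[u][v] > 1e-12:
                side[v] = True
                q.append(v)
    return side

def randomized_mincut_labels(cap, s, t, rounds=11, noise=0.1, seed=0):
    """Randomized mincut: perturb capacities, recompute the cut each round,
    and label each vertex by majority vote over the rounds."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(cap))
    for _ in range(rounds):
        pert = cap * (1 + noise * rng.random(cap.shape))
        pert = (pert + pert.T) / 2             # keep the graph symmetric
        votes += min_cut_partition(pert, s, t)
    return votes / rounds > 0.5                # True = source (positive) side

# Toy graph: chain 0 (labeled +) - 1 - 2 - 3 (labeled -), weak middle edge.
cap = np.zeros((4, 4))
for i, j, w in [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 2.0)]:
    cap[i][j] = cap[j][i] = w
labels = randomized_mincut_labels(cap, s=0, t=3)
```

Averaging over perturbed cuts is what smooths out the brittle, degenerate cuts that hurt the deterministic algorithm when labels are scarce.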
Image and Video Colorization Using Vector-Valued Reproducing Kernel Hilbert Spaces
, 2010
Abstract
Motivated by the setting of reproducing kernel Hilbert spaces (RKHS) and their extensions considered in machine learning, we propose an RKHS framework for image and video colorization. We review and study RKHS, especially in the vector-valued case, and provide various extensions for colorization problems. Both theory and a practical algorithm are presented, together with a number of numerical experiments.
Keywords: Function extension · Vector-valued reproducing kernel Hilbert spaces · Non-local kernels · Image colorization · Least squares regression · Color inpainting
Provenance of Exploratory Tasks in Scientific Visualization: Management and Applications
, 2009
Causal reasoning by evaluating the complexity of conditional densities with kernel methods (article in press, www.elsevier.com/locate/neucom)
, 2008
Nonlinear Robust Regression Using Kernel Principal Component Analysis and R-Estimators
Abstract
In recent years, many algorithms based on kernel principal component analysis (KPCA) have been proposed, including kernel principal component regression (KPCR). KPCR can be viewed as a nonlinearization of principal component regression (PCR), which uses ordinary least squares (OLS) for estimating its regression coefficients. PCR is used to mitigate the negative effects of multicollinearity in regression models. However, it is well known that the main disadvantage of OLS is its sensitivity to the presence of outliers. Therefore, KPCR can be inappropriate for data sets containing outliers. In this paper, we propose a novel nonlinear robust technique using a hybridization of KPCA and R-estimators. The proposed technique is compared to KPCR and gives better results than KPCR.
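A rough sketch of the hybrid idea: extract kernel principal component scores, then fit the regression on those scores robustly instead of by OLS. Here Huber-weighted iteratively reweighted least squares stands in for the paper's R-estimators (a loud simplification), and the kernel, bandwidth, and toy data are likewise assumptions.

```python
import numpy as np

def kpca_scores(X, n_comp=5, sigma=0.5):
    """Kernel PCA scores: eigenvectors of the double-centered Gaussian Gram
    matrix, scaled by the square roots of their eigenvalues."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    n = len(K)
    H = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(H @ K @ H)
    idx = np.argsort(w)[::-1][:n_comp]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 1e-12))

def huber_irls(Z, y, delta=1.0, iters=50):
    """Robust linear fit on the KPCA scores via iteratively reweighted least
    squares with Huber weights (a simple stand-in for R-estimators)."""
    Z1 = np.hstack([Z, np.ones((len(Z), 1))])          # add an intercept
    beta = np.linalg.lstsq(Z1, y, rcond=None)[0]
    for _ in range(iters):
        r = y - Z1 @ beta
        w = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        Wz = Z1 * w[:, None]                            # weighted design matrix
        beta = np.linalg.solve(Z1.T @ Wz, Z1.T @ (w * y))
    return beta

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (40, 1))
y = np.sin(2 * X[:, 0])
y[0] += 10.0                                            # inject one gross outlier
Z = kpca_scores(X)
Z1 = np.hstack([Z, np.ones((len(Z), 1))])
beta_robust = huber_irls(Z, y)
beta_ols = np.linalg.lstsq(Z1, y, rcond=None)[0]        # plain KPCR-style fit
err_robust = np.abs(y[1:] - (Z1 @ beta_robust)[1:]).mean()
err_ols = np.abs(y[1:] - (Z1 @ beta_ols)[1:]).mean()
```

The robust fit downweights the outlying observation, so its error on the clean points stays below that of the squared-loss fit.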