Results 1 – 10 of 244,356
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data
, 2008
"... We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying mater ..."
Cited by 533 (38 self)
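The snippet above describes deriving stable keys from noisy readings. A minimal sketch of one standard construction from this literature, a code-offset secure sketch built on a repetition code — an illustration only, not the authors' exact scheme; the function names `gen`/`rep` and the block size are choices made here:

```python
import hashlib
import secrets

def gen(w, block=5):
    """Toy code-offset construction: XOR a random repetition codeword into
    the noisy reading w and store the offset as public helper data.
    (Real fuzzy extractors use stronger codes and randomness extractors.)"""
    assert len(w) % block == 0
    key_bits = [secrets.randbelow(2) for _ in range(len(w) // block)]
    codeword = [b for b in key_bits for _ in range(block)]   # repetition code
    helper = [c ^ x for c, x in zip(codeword, w)]            # public sketch
    return hashlib.sha256(bytes(key_bits)).hexdigest(), helper

def rep(w_noisy, helper, block=5):
    """Recover the same key from any reading with fewer than block/2
    bit flips per block, by majority-decoding the repetition code."""
    c = [h ^ x for h, x in zip(helper, w_noisy)]
    key_bits = [int(sum(c[i:i + block]) > block // 2)
                for i in range(0, len(c), block)]
    return hashlib.sha256(bytes(key_bits)).hexdigest()
```

With 5-bit blocks, up to two bit flips per block still reproduce the identical key.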
Monotone Smoothing of Noisy Data
, 2014
"... We consider the problem of recovering monotonicity in noisy data. ..."
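A common way to recover monotonicity from noisy observations is isotonic regression via the pool-adjacent-violators algorithm; the snippet above does not say which method this paper uses, so the following is only a generic sketch of that classical baseline:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares nondecreasing fit to y.
    Maintains blocks as [sum, count]; merges while block means decrease."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # merge while the previous block's mean exceeds the current one's
        while (len(blocks) > 1 and
               blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)   # each block is replaced by its mean
    return out
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into its mean 2.5.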
RECONSTRUCTION OF DISCONTINUITIES IN NOISY DATA
"... Abstract. One is given noisy data of a discontinuous piecewise-smooth function along with a bound on its second derivative. The locations of the points of discontinuity of f and their jump sizes are not assumed known, but are instead retrieved stably from the noisy data. The novelty of this paper i ..."
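For intuition, a crude version of the jump-detection idea in this abstract: on a uniform grid with spacing h, the second difference of a C² function with |f''| ≤ m2 is at most m2·h², and noise of magnitude ≤ eps can add at most 4·eps, so anything larger must straddle a discontinuity. This is a simplified illustration under assumptions made here (uniform grid, known bounds), not the paper's actual estimator:

```python
def detect_jumps(y, h, m2, eps):
    """Flag interior indices whose second difference exceeds what a C^2
    function with |f''| <= m2, sampled with noise of magnitude <= eps
    on a uniform grid of spacing h, could possibly produce."""
    thresh = m2 * h * h + 4 * eps
    return [i for i in range(1, len(y) - 1)
            if abs(y[i - 1] - 2 * y[i] + y[i + 1]) > thresh]
```

A unit jump shows up at the two interior stencils that straddle it.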
Theory Refinement with Noisy Data
, 1992
"... This paper presents a method for revising an approximate domain theory based on noisy data. The basic idea is to avoid making changes to the theory that account for only a small amount of data. This method is implemented in the EITHER propositional Horn-clause theory revision system. The paper prese ..."
Cited by 10 (3 self)
REGULARIZED INTERPOLATION FOR NOISY DATA
"... Interpolation is a vital tool in biomedical signal processing. Although there exists a substantial literature dedicated to noise-free conditions, much less is known in the presence of noise. Here, we document the breakdown of standard interpolation for noisy data and study the performance improve ..."
Factorization with Missing and Noisy Data
"... Abstract. Several factorization techniques have been proposed for tackling the Structure from Motion problem. Most of them provide a good solution, while the amount of missing data is within an acceptable ratio. Focussing on this problem, we propose an incremental multiresolution scheme able to deal with a high rate of missing data, as well as noisy data. It is based on an iterative approach that applies a classical factorization technique in an incrementally reduced space. Information recovered following a coarse-to-fine strategy is used for both, filling in the missing entries of the input ..."
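A bare-bones version of the general fill-and-refactor idea behind such methods (plain iterative rank-r imputation with an SVD; the incremental multiresolution scheme described above is considerably more elaborate):

```python
import numpy as np

def factorize_missing(W, mask, rank, iters=200):
    """Fill missing entries of W (mask == False) with the observed mean,
    then alternate: take a rank-r SVD of the completed matrix and refill
    the missing entries from the low-rank reconstruction."""
    X = np.where(mask, W, W[mask].mean())
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, W, low)                   # keep observed, refill missing
    return low
```

On data that is genuinely low-rank, hidden entries are recovered almost exactly.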
Correcting Noisy Data
 Machine Learning
, 1999
"... Inductive learning aims at constructing a generalized description of a given set of data, so that future similar instances can be classified correctly. The performance on this task depends crucially on the quality of the data. We investigate here an approach to handling noise in the training data by identifying possible noisy attributes and/or class in each instance, and replacing such values with more appropriate ones. The resulting data set would preserve much of the original information, but conform more to the ideal noise-free case. A classifier built from this corrected data should have a ..."
Cited by 31 (1 self)
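As a toy stand-in for the identify-and-replace step sketched in this abstract (the paper's actual procedure differs; the distance choice and `k` here are illustrative), one can relabel an instance whose class disagrees with a strict majority of its k nearest neighbours:

```python
from collections import Counter

def correct_labels(X, y, k=3):
    """Replace an instance's class label when a strict majority of its
    k nearest neighbours (squared Euclidean distance) votes otherwise."""
    def d2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    corrected = []
    for i, yi in enumerate(y):
        nbrs = sorted((j for j in range(len(X)) if j != i),
                      key=lambda j: d2(X[i], X[j]))[:k]
        (label, votes), = Counter(y[j] for j in nbrs).most_common(1)
        corrected.append(label if votes > k // 2 else yi)
    return corrected
```

A single mislabeled point inside an otherwise consistent cluster gets flipped back; consistent labels are left alone.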
Online Learning of Noisy Data
"... Abstract—We study online learning of linear and kernel-based predictors, when individual examples are corrupted by random noise, and both examples and noise type can be chosen adversarially and change over time. We begin with the setting where some auxiliary information on the noise distribution is provided, and we wish to learn predictors with respect to the squared loss. Depending on the auxiliary information, we show how one can learn linear and kernel-based predictors, using just 1 or 2 noisy copies of each example. We then turn to discuss a general setting where virtually nothing is known about ..."
Cited by 5 (0 self)
Constructive approximations of the q = 1/2 MaxEnt distribution from redundant and noisy data
, 2004
limited noisy data
, 2008
"... Determining the number of components in a linear mixture model is a fundamental problem in many scientific fields, including chemometrics and signal processing. In this paper we present a new method to automatically determine the number of components from a limited number of (possibly) high dimensional noisy samples. The proposed method, based on the eigenvalues of the sample covariance matrix, combines a matrix perturbation approach for the interaction of signal and noise eigenvalues, with recent results from random matrix theory regarding the behavior of noise eigenvalues. We present ..."
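The eigenvalue-thresholding idea can be illustrated in its simplest known-noise-variance form, counting sample-covariance eigenvalues above the Marchenko–Pastur bulk edge. This baseline is only a sketch of the general approach; the estimator described above is more refined:

```python
import numpy as np

def num_components(X, sigma2):
    """Estimate the number of signal components in n x p data X as the
    count of sample-covariance eigenvalues above the Marchenko-Pastur
    bulk edge sigma2 * (1 + sqrt(p/n))^2 for known noise variance."""
    n, p = X.shape
    eigs = np.linalg.eigvalsh(X.T @ X / n)          # sample covariance spectrum
    edge = sigma2 * (1.0 + np.sqrt(p / n)) ** 2     # upper edge of the noise bulk
    return int(np.sum(eigs > edge))
```

Eigenvalues well inside the noise bulk are ignored; only those escaping the edge are counted as signal.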