Results 1 – 10 of 1,043
Regularized discriminant analysis
J. Amer. Statist. Assoc., 1989
"... Linear and quadratic discriminant analysis are considered in the small-sample, high-dimensional setting. Alternatives to the usual maximum likelihood (plug-in) estimates for the covariance matrices are proposed. These alternatives are characterized by two parameters, the values of which are customized ..."
Cited by 468 (2 self)
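The two-parameter shrinkage idea in this snippet can be sketched directly; a minimal illustration in the spirit of Friedman-style regularization (the function name and the uniform blending weights are my own simplifications, not the paper's exact estimator):

```python
import numpy as np

def rda_covariance(S_k, S_pooled, lam, gamma):
    """Two-parameter regularized covariance estimate (sketch).

    lam   in [0, 1] blends the class covariance S_k with the pooled
          covariance S_pooled (interpolating quadratic and linear
          discriminant analysis);
    gamma in [0, 1] further shrinks the blend toward a scaled identity,
          stabilizing the estimate when dimension exceeds sample size.
    """
    S = (1 - lam) * S_k + lam * S_pooled
    p = S.shape[0]
    return (1 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)
```

Setting lam = gamma = 0 recovers plain quadratic discriminant analysis; the extremes lam = 1 or gamma = 1 give the heavily regularized alternatives the abstract alludes to.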
Provable Data Possession at Untrusted Stores
, 2007
"... We introduce a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server ..."
Cited by 302 (9 self)
... in widely distributed storage systems. We present two provably secure PDP schemes that are more efficient than previous solutions, even when compared with schemes that achieve weaker guarantees. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data ...
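The probabilistic guarantee behind block sampling can be made concrete with a small calculation; this is a sketch of the detection-probability argument only, not of either of the paper's actual schemes:

```python
from math import comb

def detection_probability(n_blocks, n_corrupted, n_sampled):
    """Probability that a challenge over n_sampled randomly chosen
    blocks hits at least one of n_corrupted bad blocks out of n_blocks.

    P(detect) = 1 - C(n - c, s) / C(n, s)   (sampling without replacement)
    """
    return 1.0 - comb(n_blocks - n_corrupted, n_sampled) / comb(n_blocks, n_sampled)
```

For example, with 10,000 blocks of which 1% are corrupted, challenging a few hundred random blocks already detects misbehavior with probability above 99%, which is why per-challenge cost can stay constant rather than linear in the data size.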
Efficient learning of sparse representations with an energy-based model
Advances in Neural Information Processing Systems (NIPS 2006), 2006
"... We describe a novel unsupervised method for learning sparse, overcomplete features. The model uses a linear encoder, and a linear decoder preceded by a sparsifying nonlinearity that turns a code vector into a quasi-binary sparse code vector. Given an input, the optimal code minimizes the distance between ..."
Cited by 219 (15 self)
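The "sparsifying nonlinearity" the snippet mentions can be sketched as a steep logistic that pushes code components toward 0 or 1; the exact form used in the paper differs, and the threshold and steepness below are purely illustrative:

```python
import numpy as np

def quasi_binary_code(x, W, theta=0.5, beta=20.0):
    """Sketch of a linear encoder followed by a sparsifying nonlinearity.

    The linear code W @ x is passed through a steep logistic so that
    components saturate near 0 or 1, yielding a quasi-binary sparse
    code. theta (threshold) and beta (steepness) are illustrative
    choices, not the paper's.
    """
    z = W @ x
    return 1.0 / (1.0 + np.exp(-beta * (z - theta)))
```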
Distance transforms of sampled functions
Cornell Computing and Information Science, 2004
"... This paper provides linear-time algorithms for solving a class of minimization problems involving a cost function with both local and spatial terms. These problems can be viewed as a generalization of classical distance transforms of binary images, where the binary image is replaced by an arbitrary ..."
Cited by 175 (9 self)
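The minimization in question can be stated directly; here is a brute-force O(n²) sketch of the one-dimensional squared-distance case (the paper's lower-envelope algorithm achieves this in linear time, which this sketch deliberately does not attempt):

```python
def squared_distance_transform_1d(f):
    """Generalized squared distance transform, by definition.

    D(p) = min over q of ((p - q)**2 + f[q]); the classical
    binary-image distance transform is the special case where f is 0
    on object pixels and (effectively) infinite elsewhere.
    """
    n = len(f)
    return [min((p - q) ** 2 + f[q] for q in range(n)) for p in range(n)]
```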
Coil sensitivity encoding for fast MRI
Proceedings of the ISMRM 6th Annual Meeting, 1998
"... New theoretical and practical concepts are presented for considerably enhancing the performance of magnetic resonance imaging (MRI) by means of arrays of multiple receiver coils. Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementary ..."
Cited by 193 (3 self)
... reconstruction from multiple receiver data. Using the framework of linear algebra, two different reconstruction strategies have been derived. In their general forms the resulting formulae hold for arbitrary sampling patterns in k-space. A detailed discussion is dedicated to the most practical case, namely ...
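The linear-algebra reconstruction the snippet refers to can be sketched for the simplest case, Cartesian undersampling, where each pixel of the reduced-FOV image superimposes R equidistant full-FOV pixels and is unfolded by a per-pixel least-squares solve; this is a toy sketch, whereas the paper's general formulae also handle noise correlation and arbitrary k-space patterns:

```python
import numpy as np

def sense_unfold(aliased, sens):
    """Unfold one aliased pixel from multiple receiver coils.

    aliased : (n_coils,) complex values measured at this pixel of the
              reduced-FOV image
    sens    : (n_coils, R) coil sensitivities at the R superimposed
              full-FOV pixel positions

    Solves aliased = sens @ v in the least-squares sense; requires
    n_coils >= R with full column rank.
    """
    return np.linalg.pinv(sens) @ aliased
```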
Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria
IEEE Trans. on Audio, Speech and Lang. Processing, 2007
"... An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an input signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain ..."
Cited by 189 (30 self)
... values, and the gains and the spectra are then alternately updated using multiplicative update rules until the values converge. Simulation experiments were carried out using generated mixtures of pitched musical instrument samples and drum sounds. The performance of the proposed method was compared ...
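The multiplicative-update factorization can be sketched in its plain form; the Euclidean objective only, with the paper's temporal-continuity and sparseness terms omitted:

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, seed=0):
    """Factorize a nonnegative matrix V ~= W @ H (e.g. a magnitude
    spectrogram into r components) with Lee-Seung multiplicative
    updates for the Euclidean cost. Columns of W are component
    spectra; rows of H are their time-varying gains."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The updates preserve nonnegativity by construction, which is why they are initialized with random positive values and iterated until convergence, as the snippet describes.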
Phase retrieval using alternating minimization
In NIPS, 2013
"... Phase retrieval problems involve solving linear equations, but with missing sign (or phase, for complex numbers) information. Over the last two decades, a popular generic empirical approach to the many variants of this problem has been one of alternating minimization, i.e. alternating between estimating ..."
Cited by 24 (1 self)
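The alternating scheme can be sketched for the real (sign) case: fix the current sign estimates, solve a least-squares problem, and repeat. This is a toy sketch; the paper analyzes the complex case and uses a careful spectral initialization, replaced here by an optional starting point:

```python
import numpy as np

def altmin_phase(A, y, x0=None, n_iter=50, seed=0):
    """Recover x from sign-less measurements y = |A @ x| (real case)
    by alternating minimization: estimate the missing signs from the
    current iterate, then solve the resulting least-squares problem."""
    if x0 is None:
        x0 = np.random.default_rng(seed).standard_normal(A.shape[1])
    x = x0
    for _ in range(n_iter):
        signs = np.sign(A @ x)                      # estimate missing signs
        x, *_ = np.linalg.lstsq(A, signs * y, rcond=None)
    return x
```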
Fast surface reconstruction using the level set method
In VLSM '01: Proceedings of the IEEE Workshop on Variational and Level Set Methods, 2001
"... In this paper we describe new formulations and develop fast algorithms for implicit surface reconstruction based on variational and partial differential equation (PDE) methods. In particular we use the level set method and fast sweeping and tagging methods to reconstruct surfaces from scattered data ..."
Cited by 151 (12 self)
... data set. The data set might consist of points, curves and/or surface patches. A weighted minimal surface-like model is constructed and its variational level set formulation is implemented with optimal efficiency. The reconstructed surface is smoother than piecewise linear and has a natural scaling ...
Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization
Mathematics of Computation
"... The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising ..."
Cited by 29 (4 self)
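Both ALM and ADM for nuclear-norm problems reduce, at each iteration, to the proximal operator of the nuclear norm, i.e. soft-thresholding of singular values. A minimal sketch of that subproblem (the outer splitting/linearization loop is omitted):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the closed-form minimizer of
    tau * ||Z||_* + 0.5 * ||Z - X||_F**2, obtained by soft-thresholding
    the singular values of X. Thresholding zeroes small singular
    values, which is how the nuclear norm induces low-rank solutions."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```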
Stochastic linear optimization under bandit feedback
In submission, 2008
"... In the classical stochastic k-armed bandit problem, in each of a sequence of T rounds, a decision maker chooses one of k arms and incurs a cost chosen from an unknown distribution associated with that arm. The goal is to minimize regret, defined as the difference between the cost incurred by the algorithm and the optimal cost ..."
Cited by 100 (8 self)
... by the algorithm and the optimal cost. In the linear optimization version of this problem (first considered by Auer [2002]), we view the arms as vectors in R^n, and require that the costs be linear functions of the chosen vector. As before, it is assumed that the cost functions are sampled independently from ...
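For the classical k-armed setting the snippet starts from, a simple epsilon-greedy baseline (not the paper's linear-bandit algorithm) makes the regret definition concrete:

```python
import numpy as np

def epsilon_greedy(mean_costs, T=2000, eps=0.1, seed=0):
    """Play T rounds of a stochastic k-armed bandit with Gaussian
    costs: explore with probability eps, otherwise pull the arm with
    the lowest estimated mean cost. Returns (regret, pull_counts),
    where regret = total cost minus T times the best arm's mean."""
    rng = np.random.default_rng(seed)
    k = len(mean_costs)
    counts = np.zeros(k, dtype=int)
    estimates = np.zeros(k)
    total_cost = 0.0
    for t in range(T):
        if t < k:                      # pull every arm once first
            arm = t
        elif rng.random() < eps:       # explore uniformly
            arm = int(rng.integers(k))
        else:                          # exploit current estimates
            arm = int(np.argmin(estimates))
        cost = mean_costs[arm] + rng.standard_normal()
        counts[arm] += 1
        estimates[arm] += (cost - estimates[arm]) / counts[arm]
        total_cost += cost
    return total_cost - T * min(mean_costs), counts
```

The linear-bandit generalization studied in the paper replaces the finite arm set with vectors in R^n and exploits the linear structure of the cost function, which this finite-armed sketch ignores.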