Results 11–20 of 39
Mining discriminative components with low-rank and sparsity constraints for face recognition
In KDD, 2012
Cited by 3 (0 self)
This paper introduces a novel image decomposition approach for an ensemble of correlated images, using low-rank and sparsity constraints. Each image is decomposed as a combination of three components: one common component, one condition component, which is assumed to be a low-rank matrix, and a sparse residual. For a set of face images of N subjects, the decomposition finds N common components, one for each subject, K low-rank components, each capturing a different global condition of the set (e.g., different illumination conditions), and a sparse residual for each input image. Through this decomposition, the proposed approach recovers a clean face image (the common component) for each subject and discovers the conditions (the condition components and the sparse residuals) of the images in the set. The set of N + K images containing only the common and the low-rank components forms a compact and discriminative representation for the original images. We design a classifier using only these N + K images. Experiments on commonly used face data sets demonstrate the effectiveness of the approach for face recognition in comparison with the leading state of the art in the literature. The experiments further show good accuracy in classifying the condition of an input image, suggesting that the components from the proposed decomposition indeed capture physically meaningful features of the input.
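As a rough illustration of the kind of operators such low-rank-plus-sparse decompositions rely on (a sketch, not the paper's own algorithm, which this abstract does not specify), the following NumPy snippet applies singular value thresholding for a low-rank part and entrywise soft thresholding for a sparse residual:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

rng = np.random.default_rng(0)
# Synthetic ensemble: a rank-1 "condition" component plus a sparse residual.
L = np.outer(rng.standard_normal(50), rng.standard_normal(40))
S = np.zeros((50, 40))
S[rng.random((50, 40)) < 0.05] = 5.0
X = L + S

# One proximal step of each operator, pushing toward low rank and sparsity.
low_rank_part = svt(X, tau=1.0)
sparse_part = soft(X - low_rank_part, tau=0.5)
print(np.linalg.matrix_rank(low_rank_part, tol=1e-8), np.count_nonzero(sparse_part))
```

Iterating such steps, as robust-PCA-style solvers do, alternates between the two operators; the thresholds `tau` above are arbitrary illustrative values.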
ON THE FIELD OF VALUES OF OBLIQUE PROJECTIONS
2010
Cited by 3 (0 self)
Abstract. We highlight some properties of the field of values (or numerical range) W(P) of an oblique projector P on a Hilbert space, i.e., of an operator satisfying P^2 = P. If P is neither null nor the identity, we present a direct proof showing that W(P) = W(I − P), i.e., the field of values of an oblique projection coincides with that of its complementary projection. We also show that W(P) is an elliptical disk with foci at 0 and 1 and eccentricity 1/‖P‖. These two results combined provide a new proof of the identity ‖P‖ = ‖I − P‖. We discuss the relation between the minimal canonical angle between the range and the null space of P and the shape of W(P). In the finite-dimensional case, we show a relation between the eigenvalues of matrices related to these complementary projections and present a second proof of the fact that W(P) is an elliptical disk. Key words. Idempotent operators. Oblique projections. Field of values. Numerical range.
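The norm identity in this abstract is easy to check numerically. The sketch below (an illustration, not the paper's proof) builds a random oblique projector P = B(CᵀB)⁻¹Cᵀ, verifies idempotency, and confirms ‖P‖ = ‖I − P‖ in the spectral norm:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 3
B = rng.standard_normal((n, k))        # basis of the range of P
C = rng.standard_normal((n, k))        # null space of P is the orthogonal complement of span(C)
P = B @ np.linalg.solve(C.T @ B, C.T)  # oblique projector: P @ P == P

assert np.allclose(P @ P, P)           # idempotency
norm_P = np.linalg.norm(P, 2)
norm_comp = np.linalg.norm(np.eye(n) - P, 2)
print(norm_P, norm_comp)               # the two spectral norms agree
```

Note that ‖P‖ ≥ 1 for any nonzero projector, with equality exactly when P is orthogonal; the oblique case above generically gives ‖P‖ > 1.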
Reinterpretation and Enhancement of Signal-Subspace-Based Imaging Methods for Extended Scatterers
2009
Cited by 3 (1 self)
Interior sampling and exterior sampling (enclosure) signal-subspace-based imaging methodologies for extended scatterers derived in previous work are reformulated and reinterpreted in terms of the concepts of angles and distances between subspaces. The insight gained from this reformulation naturally paves the way for a broader, more encompassing inversion methodology based on a cross-coherence matrix associated with the singular vectors of the scattering or response matrix and the singular vectors intrinsic to a given, hypothesized support region for the scatterers (under a known background Green function associated with a known embedding medium where the scatterers reside). A number of new imaging functionals based on that cross-coherence matrix emerge; of particular interest are imaging functionals based on information-theoretic concepts applied to an interpretation of the entries in that matrix as probability amplitudes. The resulting approach is based on entropy minimization, and it has the enormous advantage of not requiring for its implementation the estimation of a cutoff in the singular value spectrum separating the signal and noise subspaces, which is a common computational difficulty in both imaging and shape reconstruction contexts. The theoretical and computational concepts developed in the paper are illustrated for electromagnetic scattering examples in two-dimensional space. Both imaging and shape reconstruction contexts are considered, and in the shape reconstruction context it is also shown how to combine the signal subspace approach with the level set method.
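As a loose illustration of an entropy criterion of this general flavor (a toy sketch under assumed definitions, not the paper's actual functionals), one can normalize the squared entries of a cross-coherence matrix between two orthonormal bases into a probability distribution and compute its entropy; aligned subspaces yield lower entropy than unrelated ones:

```python
import numpy as np

def coherence_entropy(U1, U2):
    """Entropy of the normalized squared entries of the cross-coherence
    matrix C = U1^T U2 (columns of U1, U2 are orthonormal)."""
    C = U1.T @ U2
    p = np.abs(C) ** 2
    p = p / p.sum()
    terms = p[p > 0]               # skip zero entries (0 * log 0 := 0)
    return -(terms * np.log(terms)).sum()

rng = np.random.default_rng(7)
U1 = np.linalg.qr(rng.standard_normal((10, 3)))[0]
U2 = np.linalg.qr(rng.standard_normal((10, 3)))[0]

h_aligned = coherence_entropy(U1, U1)  # identical subspaces: entropy is exactly log(3)
h_random = coherence_entropy(U1, U2)   # unrelated subspaces: entries spread out, higher entropy
print(h_aligned, h_random)
```

The point of such a criterion, as the abstract notes, is that it needs no signal/noise cutoff in the singular value spectrum: the entropy is computed from the full coherence matrix.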
THE CANONICAL DECOMPOSITION OF C^n_d AND NUMERICAL GRÖBNER AND BORDER BASES
Cited by 1 (0 self)
Abstract. This article introduces the canonical decomposition of the vector space of multivariate polynomials for a given monomial ordering. Its importance lies in solving multivariate polynomial systems, computing Gröbner bases, and solving the ideal membership problem. An SVD-based algorithm is presented that numerically computes the canonical decomposition. It is then shown how, by introducing the notion of divisibility into this algorithm, a numerical Gröbner basis can also be computed. In addition, we demonstrate how the canonical decomposition can be used to decide whether the affine solution set of a multivariate polynomial system is zero-dimensional and to solve the ideal membership problem numerically. The SVD-based canonical decomposition algorithm is also extended to numerically compute border bases. A tolerance for each of the algorithms is derived using perturbation theory of principal angles. This derivation shows that the condition number of computing the canonical decomposition and numerical Gröbner basis is essentially the condition number of the Macaulay matrix. Numerical experiments with both exact and noisy coefficients are presented and discussed.
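To make the Macaulay-matrix connection concrete, here is a small hedged sketch (a toy system chosen for illustration, not taken from the paper): the numerical corank of the degree-2 Macaulay matrix of f1 = x² + y² − 2, f2 = x − y, computed via the SVD, equals the number of affine solutions, (x, y) = (1, 1) and (−1, −1):

```python
import numpy as np

def monomials(d):
    """Exponent tuples (i, j) of all monomials x^i y^j of total degree <= d."""
    return [(i, j) for i in range(d + 1) for j in range(d + 1 - i)]

# Toy system: f1 = x^2 + y^2 - 2, f2 = x - y, as {exponent: coefficient} dicts.
f1 = {(2, 0): 1.0, (0, 2): 1.0, (0, 0): -2.0}
f2 = {(1, 0): 1.0, (0, 1): -1.0}

d = 2
cols = monomials(d)
col_index = {m: k for k, m in enumerate(cols)}

rows = []
for f, deg in [(f1, 2), (f2, 1)]:
    for shift in monomials(d - deg):   # multiply f by every monomial of degree <= d - deg(f)
        row = np.zeros(len(cols))
        for mono, c in f.items():
            row[col_index[(mono[0] + shift[0], mono[1] + shift[1])]] = c
        rows.append(row)
M = np.array(rows)                     # the degree-2 Macaulay matrix

# Number of negligible singular values = numerical corank; for this
# zero-dimensional system it equals the number of affine solutions.
s = np.linalg.svd(M, compute_uv=False)
corank = len(cols) - int(np.sum(s > 1e-8))
print(corank)  # 2
```

The SVD tolerance (`1e-8` here) is exactly the kind of threshold whose principled choice the paper derives from perturbation theory of principal angles.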
A fast incremental multilinear principal component analysis algorithm
 Int. J. Innov. Comput. Inf. Control
Cited by 1 (0 self)
Abstract. This study establishes the mathematical foundation for a fast incremental multilinear method which combines the traditional sequential Karhunen-Loeve (SKL) algorithm with the newly developed incremental modified fast Principal Component Analysis algorithm (IMFPCA). In accordance with the characteristics of the data structure, the proposed algorithm achieves both computational efficiency and high accuracy for incremental subspace updating. Moreover, the theoretical foundation is analyzed in detail with respect to the competing aspects of IMFPCA and SKL under the different data unfolding schemes. Besides the general experiments designed to test the performance of the proposed algorithm, an incremental face recognition system was developed as a real-world application of the proposed algorithm.
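A minimal sketch of one sequential Karhunen-Loeve (incremental SVD) update, the classical building block this abstract refers to; this is the generic textbook update, not the paper's IMFPCA:

```python
import numpy as np

def skl_update(U, s, X_new):
    """One sequential Karhunen-Loeve step: fold the new columns X_new
    into the current left singular vectors U and singular values s."""
    proj = U.T @ X_new                 # components of X_new inside the current subspace
    resid = X_new - U @ proj           # components orthogonal to it
    Q, R = np.linalg.qr(resid)
    k = s.size
    # SVD of the small augmented matrix gives the updated factors.
    K = np.block([[np.diag(s), proj],
                  [np.zeros((Q.shape[1], k)), R]])
    Uk, s_new, _ = np.linalg.svd(K, full_matrices=False)
    return np.hstack([U, Q]) @ Uk, s_new

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
# Incremental pass: start from the first 6 columns, then fold in the rest.
U, s, _ = np.linalg.svd(A[:, :6], full_matrices=False)
U, s = skl_update(U, s, A[:, 6:])
s_batch = np.linalg.svd(A, compute_uv=False)
print(np.allclose(np.sort(s)[::-1][:10], s_batch))  # True
```

Without truncation the update reproduces the batch singular values exactly; in practice one truncates U and s after each step, which is where the efficiency/accuracy trade-off discussed in the abstract arises.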
Rayleigh-Ritz approximation for the linear response eigenvalue problem
, 2013
Cited by 1 (1 self)
Large-scale eigenvalue computation is about approximating certain invariant subspaces associated with the part of the spectrum of interest; the eigenvalues of interest are then extracted by projecting the problem onto the approximate invariant subspaces, yielding a much smaller eigenvalue problem. In the case of the linear response eigenvalue problem (also known as the random phase eigenvalue problem), it is the pair of deflating subspaces associated with the first few smallest positive eigenvalues that needs to be computed. This paper is concerned with the approximation accuracy relationships between a pair of approximate deflating subspaces and the approximate eigenvalues extracted by the pair. Lower and upper bounds on eigenvalue approximation errors are obtained in terms of canonical angles between the exact and computed pairs of deflating subspaces. These bounds can also be interpreted as lower/upper bounds on the canonical angles in terms of eigenvalue approximation errors. They are useful in analyzing numerical solutions to linear response eigenvalue problems. Key words. Linear response eigenvalue problem, eigenvalue approximation, Rayleigh-Ritz
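The Rayleigh-Ritz extraction that the abstract analyzes can be sketched for a plain symmetric eigenproblem (a generic illustration, not the structured linear response variant):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric test matrix

# Rayleigh-Ritz: project A onto a subspace and solve the small eigenproblem.
# Using the exact invariant subspace of the 5 smallest eigenvalues, the
# Ritz values reproduce those eigenvalues exactly.
evals, evecs = np.linalg.eigh(A)
V = evecs[:, :5]                       # orthonormal basis of the subspace
ritz_vals = np.linalg.eigvalsh(V.T @ A @ V)
print(np.allclose(ritz_vals, evals[:5]))  # True
```

When the subspace is only approximately invariant, the Ritz values deviate from the true eigenvalues by amounts controlled by the canonical angles between the exact and approximate subspaces; bounds of that type, for the pair-of-deflating-subspaces setting, are what the paper develops.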
On security threats for robust perceptual hashing
, 2009
Cited by 1 (1 self)
Perceptual hashing has to deal with the constraints of robustness, accuracy and security. After modeling the process of hash extraction and the properties involved in this process, two different security threats are studied, namely the disclosure of the secret feature space and the tampering of the hash. Two different approaches for performing robust hashing are presented: Random-Based Hash (RBH), where the security is achieved using a random projection matrix, and Content-Based Hash (CBH), where the security relies on the difficulty of tampering with the hash. As in digital watermarking, different security setups are also devised: the Batch Hash Attack, the Group Hash Attack, the Unique Hash Attack and the Sensitivity Attack. A theoretical analysis of the information leakage in the context of Random-Based Hash is proposed. Finally, practical attacks are presented: (1) Minor Component Analysis is used to estimate the secret projection of Random-Based Hashes and (2) salient point tampering is used to tamper with the hash of Content-Based Hash systems.
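A minimal sketch of the Random-Based Hash idea as described here (hypothetical parameters; the paper's actual feature extraction and quantization are not specified in this abstract): the secret is a random projection matrix and the hash is the sign pattern of the projected features, which is robust to mild distortions:

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 256, 32                         # feature dimension, hash length in bits

# Hypothetical RBH sketch: the secret key is a random projection matrix;
# the hash is the sign pattern of the projected feature vector.
key = rng.standard_normal((m, d))

def rbh(features, key):
    return (key @ features > 0).astype(np.uint8)

x = rng.standard_normal(d)                      # stand-in for extracted image features
x_noisy = x + 0.01 * rng.standard_normal(d)     # mild distortion of the content

h, h_noisy = rbh(x, key), rbh(x_noisy, key)
print(np.sum(h != h_noisy))            # few (often zero) bits flip: robustness
```

An attacker who observes many feature/hash pairs can attempt to estimate the rows of `key`, which is the flavor of secret-space disclosure attack (via Minor Component Analysis) that the abstract mentions.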
Angles Between Subspaces and Their Tangents
, 2013
Principal angles between subspaces (PABS), also called canonical angles, serve as a classical tool in mathematics, statistics, and applications, e.g., data mining. Traditionally, PABS are introduced via their cosines. The cosines and sines of PABS are commonly defined using the singular value decomposition. We utilize the same idea for the tangents, i.e., we explicitly construct matrices whose singular values are equal to the tangents of PABS, using several approaches: orthonormal and non-orthonormal bases for subspaces, as well as projectors. Such a construction has applications, e.g., in the analysis of convergence of subspace iterations for eigenvalue problems.
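Following the abstract's description, the cosines, sines, and tangents of PABS can all be obtained as singular values of explicitly constructed matrices. A NumPy sketch (the projector-based tangent construction shown is one illustrative variant among the several the paper discusses):

```python
import numpy as np

rng = np.random.default_rng(5)
# Orthonormal bases for two 3-dimensional subspaces of R^10.
X = np.linalg.qr(rng.standard_normal((10, 3)))[0]
Y = np.linalg.qr(rng.standard_normal((10, 3)))[0]
P_perp = np.eye(10) - X @ X.T          # projector onto the orthogonal complement of span(X)

# Cosines: singular values of X^T Y.
cosines = np.linalg.svd(X.T @ Y, compute_uv=False)
# Sines: singular values of (I - X X^T) Y.
sines = np.linalg.svd(P_perp @ Y, compute_uv=False)
# Tangents: singular values of (I - X X^T) Y (X^T Y)^{-1}.
tangents = np.linalg.svd(P_perp @ Y @ np.linalg.inv(X.T @ Y), compute_uv=False)

# Angle by angle: cos^2 + sin^2 = 1 and tan = sin / cos
# (sort so all three sequences follow the same angle ordering).
cos_d, sin_d, tan_d = np.sort(cosines), np.sort(sines)[::-1], np.sort(tangents)[::-1]
print(np.allclose(cos_d**2 + sin_d**2, 1.0), np.allclose(tan_d, sin_d / cos_d))  # True True
```

The tangent construction requires X^T Y to be invertible, i.e., no principal angle equal to π/2; for generic subspaces, as here, that holds.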