Results 1 – 4 of 4
Face recognition from a single image per person: A survey
Pattern Recognition, 2006
Abstract
Cited by 54 (4 self)
One of the main challenges faced by current face recognition techniques lies in the difficulty of collecting samples. Fewer samples per person mean less laborious effort in collecting them and lower costs for storing and processing them. Unfortunately, many reported face recognition techniques rely heavily on the size and representativeness of the training set, and most of them suffer a serious performance drop, or even fail to work, if only one training sample per person is available to the system. This situation is called the “one sample per person” problem: given a stored database of faces, the goal is to identify a person from the database later in time, under different and unpredictable poses, lighting, etc., from just one image. Such a task is very challenging for most current algorithms because a single training sample has extremely limited representativeness. Numerous techniques have been developed to attack this problem, and the purpose of this paper is to categorize and evaluate these algorithms. The prominent algorithms are described and critically analyzed. Relevant issues such as data collection, the influence of small sample size, and system evaluation are discussed, and several promising directions for future research are also proposed in this paper.
Controlling the Magnification Factor of Self-Organizing Feature Maps
, 1995
Abstract
Cited by 45 (7 self)
The magnification exponents α occurring in adaptive map-formation algorithms like Kohonen's self-organizing feature map deviate from the information-theoretically optimal value α = 1 as well as from the values which optimize, e.g., the mean square distortion error (α = 1/3 for one-dimensional maps). At the same time, models for categorical perception such as the “perceptual magnet” effect, which are based on topographic maps, require negative magnification exponents α < 0. We present an extension of the self-organizing feature map algorithm which utilizes adaptive local learning step sizes to actually control the magnification properties of the map. By changing a single parameter, maps with optimal information transfer, with various minimal reconstruction errors, or with an inverted magnification can be generated. Analytic results on this new algorithm are complemented by numerical simulations.
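The core idea above — steering a map's magnification by giving each update an adaptive local step size — can be sketched in a few lines. This is an illustrative variant, not the authors' exact rule: here the winner's step is modulated by the local codebook spacing (a crude density estimate) raised to a control power m, which is an assumption; at m = 0 it reduces to a standard 1-D SOM.

```python
import numpy as np

def train_som(data, n_nodes=20, n_iter=4000, m=0.0, seed=0):
    """1-D SOM whose updates carry an extra local learning factor.
    m is the single control knob of the sketch: the base step is
    scaled by (local inter-node spacing)**m, a stand-in for the
    adaptive local step sizes discussed in the abstract (assumed
    form, not the published rule). m = 0 gives the plain SOM."""
    rng = np.random.default_rng(seed)
    w = np.sort(rng.uniform(data.min(), data.max(), n_nodes))
    idx = np.arange(n_nodes)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        s = int(np.argmin(np.abs(w - x)))            # winner node
        frac = t / n_iter
        eps = 0.5 * (1.0 - frac) + 0.01              # decaying base step
        sigma = 3.0 * (1.0 - frac) + 0.5             # decaying radius
        h = np.exp(-0.5 * ((idx - s) / sigma) ** 2)  # neighborhood
        # local spacing around the winner ~ 1 / codebook density
        left = w[max(s - 1, 0)]
        right = w[min(s + 1, n_nodes - 1)]
        spacing = max(abs(right - left) / 2.0, 1e-6)
        eps_local = eps * spacing ** m               # local step size
        w += eps_local * h * (x - w)
    return np.sort(w)
```

Varying m in such a scheme shifts how strongly the node density follows the input density, which is the "single parameter" style of control the abstract describes.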
Improving the effectiveness of self-organizing map networks using a circular Kohonen layer
Proc. of the 30th Hawaii Int. Conf. on System Sciences
, 1997
Abstract
Cited by 7 (0 self)
Kohonen's self-organizing map (SOM) network is one of the most important network architectures developed during the 1980s. The main function of SOM networks is to map input data from an n-dimensional space to a lower-dimensional (usually one- or two-dimensional) plot while maintaining the original topological relations. A well-known limitation of the Kohonen network is the “boundary effect” of nodes on or near the edge of the network. The boundary effect is responsible for retaining the undue influence of the initial random weights assigned to the nodes of the network, leading to ineffective topological representations. To overcome this limitation, we introduce and evaluate a modified, “circular” weight adjustment procedure. This procedure is applicable to a class of problems where the actual coordinates of the output map do not need to correspond to the original input topology. We tested the circular method on an example problem from the domain of group technology, typical of this class of problems.
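The circular layer removes the boundary effect by measuring distance between nodes the short way around a ring, so no node sits on an edge. A minimal sketch of that wrap-around distance and the resulting neighborhood function (the function names are illustrative, not from the paper):

```python
import numpy as np

def ring_distance(i, j, n):
    """Distance between nodes i and j on a circular Kohonen layer
    of n nodes: the shorter way around the ring, so every node has
    the same number of neighbors and none is a boundary node."""
    d = abs(i - j) % n
    return min(d, n - d)

def neighborhood(winner, n, sigma):
    """Gaussian neighborhood over the ring, using the wrap-around
    distance in place of the usual linear (open-chain) distance."""
    d = np.array([ring_distance(winner, j, n) for j in range(n)])
    return np.exp(-0.5 * (d / sigma) ** 2)
```

With this metric the neighborhood of node 0 reaches node n−1 just as easily as node 1, which is exactly why initial random weights at the "ends" no longer receive special treatment.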
Optimal Magnification Factors in Self-Organizing Feature Maps
 In Proc. ICANN'95
, 1995
Abstract
Cited by 2 (1 self)
Kohonen's self-organizing feature maps (SOFMs) [8] usually exhibit a selective magnification of often-stimulated regions of their input space. This amounts to a larger transmission of information about the stimulus ensemble than in maps with a constant resolution. Such a selective magnification is not only observed in biological maps, but is also often regarded as a desirable design objective in technical contexts. For at least three reasons, the magnification properties of SOFMs deserve further investigation: 1. An analysis by Ritter and Schulten [10] demonstrated that the SOFM algorithm does not yield a maximum-entropy map (i.e. does not transmit the maximum amount of information). 2. As a related argument, we observe that which magnification properties are to be regarded as optimal depends on the error criterion one applies. For example, a minimal worst-case error is achieved by maps with all receptive fields (or Voronoi polygons) being of equal extension, i.e. ...
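The Ritter–Schulten result referred to in point 1 is commonly stated, for a one-dimensional map in the limit of a narrow neighborhood, as a power law between codebook density and input density:

```latex
% Magnification law of the 1-D SOFM (Ritter & Schulten):
% the weight (codebook) density \rho follows the input density P
% with exponent 2/3, whereas a maximum-entropy (equiprobable) map
% would require exponent 1.
\rho(w) \;\propto\; P(w)^{2/3}
```

Because 2/3 < 1, dense input regions are under-represented relative to the information-theoretic optimum, which is what motivates the search for optimal or controllable magnification factors in these papers.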