Results 1 - 10 of 12
Face pose estimation in uncontrolled environments
- In BMVC, 2009
Cited by 11 (1 self)
Automatic estimation of head pose facilitates human facial analysis. It has widespread applications such as gaze direction detection, video teleconferencing and human computer interaction (HCI). It can also be integrated into a multi-view face detection and recognition system. Most current methods estimate pose in a limited range or treat pose as a classification problem by assigning the face to one of many discrete poses [1,2], and have mainly been tested on images taken in controlled environments, e.g. the FacePix dataset [3] (Fig. 1a).
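The patch-based representation the paper's abstract alludes to can be illustrated with a minimal sketch (function and variable names are illustrative, not from the paper): splitting a face image into a non-overlapping grid of fixed-size patches.

```python
import numpy as np

def to_patch_grid(image, patch_size):
    """Split a 2-D image into a non-overlapping grid of square patches.

    Trailing rows/columns that do not fill a whole patch are dropped.
    """
    h, w = image.shape
    gh, gw = h // patch_size, w // patch_size
    cropped = image[:gh * patch_size, :gw * patch_size]
    # Reshape so axes 0/1 index the grid cell and axes 2/3 the pixels inside it.
    grid = cropped.reshape(gh, patch_size, gw, patch_size).swapaxes(1, 2)
    return grid  # shape: (gh, gw, patch_size, patch_size)

face = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
grid = to_patch_grid(face, 16)
print(grid.shape)  # (4, 4, 16, 16)
```

Each grid cell can then be fed to a per-patch model, which is the kind of feature-free representation the abstract describes.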
Synthesizing structured image hybrids
- ACM TOG, 2010
Cited by 10 (0 self)
Figure 1: Image hybrids. Given a set of input images (left), our algorithm automatically produces arbitrarily many hybrid images (right). Example-based texture synthesis algorithms generate novel texture images from example data. A popular hierarchical pixel-based approach uses spatial jitter to introduce diversity, at the risk of breaking coarse structure beyond repair. We propose a multiscale descriptor that enables appearance-space jitter, which retains structure. This idea enables repurposing of existing texture synthesis implementations for a qualitatively different problem statement and class of inputs: generating hybrids of structured images.
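The "spatial jitter" step in hierarchical pixel-based synthesis that the abstract contrasts against can be sketched as follows (a toy version under stated assumptions; names are illustrative, not from the paper). Each cell of a coordinate field points into the exemplar; one synthesis level upsamples the field and perturbs the coordinates, which is exactly where strong jitter can break coarse structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def upsample_and_jitter(coords, strength, exemplar_size):
    """One level of hierarchical synthesis: upsample a coordinate field
    (each cell stores an (x, y) location in the exemplar) and add spatial
    jitter. Strong jitter adds diversity but can break coarse structure,
    the failure mode the paper's appearance-space jitter avoids.
    """
    h, w, _ = coords.shape
    up = np.zeros((h * 2, w * 2, 2), dtype=int)
    for dy in (0, 1):
        for dx in (0, 1):
            # Each parent cell spawns a 2x2 block of child coordinates.
            up[dy::2, dx::2] = coords * 2 + (dx, dy)
    jitter = rng.integers(-strength, strength + 1, size=up.shape)
    return (up + jitter) % exemplar_size  # wrap around the exemplar

coords = np.indices((4, 4)).transpose(1, 2, 0)[..., ::-1]  # identity (x, y) field
level1 = upsample_and_jitter(coords, strength=1, exemplar_size=8)
print(level1.shape)  # (8, 8, 2)
```

The paper's contribution, per the abstract, is to apply the perturbation in a multiscale descriptor (appearance) space rather than to these raw coordinates.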
Infrared Face Recognition: A Literature Review
- In Proceedings of the International Joint Conference on Neural Networks, 2013
Cited by 7 (2 self)
Abstract—Automatic face recognition (AFR) is an area with immense practical potential which includes a wide range of commercial and law enforcement applications, and it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in AFR continues to improve, benefiting from advances in a range of different fields including image processing, pattern recognition, computer graphics and physiology. However, systems based on visible spectrum images continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease their accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject.
High-Resolution Face Fusion for Gender Conversion
Cited by 2 (0 self)
Abstract—This paper presents an integrated face image fusion framework, which combines a hierarchical compositional paradigm with seamless image-editing techniques, for gender conversion. In our framework a high-resolution face is represented by a probabilistic graphical model that decomposes a human face into several parts (facial components) constrained by explicit spatial configurations (relationships). Benefiting from this representation, the proposed fusion strategy is able to largely preserve the face identity of each facial component while applying gender transformation. Given a face image, the basic idea is to select reference facial components from the opposite-gender group as templates and transform the appearance of the given image toward the selected facial components. Our fusion approach decomposes a face image into two parts—sketchable and nonsketchable ones. For the sketchable regions (e.g., the contours of facial components and wrinkle lines, etc.), we use a graph-matching algorithm to find the best templates and transform the structure (shape), while for the nonsketchable regions (e.g., the texture area of facial components, skin, etc.), we learn active appearance models and transform the texture attributes in the corresponding principal component analysis space. Both objective and subjective quantitative evaluation results on 200 Asian frontal-face images selected from the public Lotus Hill Image database show that the proposed approach is able to give plausible gender conversion results. Index Terms—And–Or graph, face fusion, gender conversion.
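The PCA-space texture transformation described for the nonsketchable regions can be sketched in a few lines (a minimal illustration, not the authors' implementation; the toy data and the `alpha` blending parameter are assumptions): project both textures into a learned subspace, blend their coefficients, and reconstruct.

```python
import numpy as np

def pca_fit(X, k):
    """Fit a k-component PCA model to row-vectorised texture samples."""
    mean = X.mean(axis=0)
    # Principal axes come from the SVD of the centred data matrix.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def transfer_texture(source, template, mean, components, alpha=0.7):
    """Blend PCA coefficients of a source texture toward a template's,
    then reconstruct. alpha controls the strength of the conversion."""
    cs = components @ (source - mean)
    ct = components @ (template - mean)
    mixed = (1 - alpha) * cs + alpha * ct
    return mean + components.T @ mixed

rng = np.random.default_rng(1)
textures = rng.normal(size=(50, 256))        # toy training set of texture vectors
mean, comps = pca_fit(textures, k=10)
out = transfer_texture(textures[0], textures[1], mean, comps)
print(out.shape)  # (256,)
```

In the paper the subspace is an active appearance model fitted per facial component; this sketch only shows the coefficient-space blending idea.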
Face Pose Estimation in Uncontrolled Environments (Aghajanian and Prince)
Automatic estimation of head pose from a face image is a sub-problem of human face analysis with widespread applications such as gaze direction detection and human computer interaction. Most current methods estimate pose in a limited range or treat pose as a classification problem by assigning the face to one of many discrete poses. Moreover, they have mainly been tested on images taken in controlled environments. We address the problem of estimating pose as a continuous regression problem on “real world” images with large variations in background, illumination and expression. We propose a probabilistic framework with a general representation that does not rely on locating facial features. Instead we represent a face with a non-overlapping grid of patches. This representation is used in a generative model for automatic estimation of head pose ranging from −90° to 90° in images taken in uncontrolled environments. Our methods achieve a correlation of 0.88 with the human estimates of pose.
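The evaluation measure quoted above (correlation with human pose estimates) is a plain Pearson correlation, which can be computed as follows (the toy pose values below are invented for illustration):

```python
import numpy as np

def pearson_correlation(estimates, human_labels):
    """Pearson correlation between automatic pose estimates and human
    judgements; the paper reports 0.88 on this measure."""
    e = estimates - estimates.mean()
    h = human_labels - human_labels.mean()
    return float((e @ h) / (np.linalg.norm(e) * np.linalg.norm(h)))

# Toy data: poses in degrees over the paper's -90..90 range (values invented).
est = np.array([-80.0, -30.0, 0.0, 25.0, 70.0])
ref = np.array([-75.0, -35.0, 5.0, 20.0, 65.0])
print(pearson_correlation(est, ref))
```

A value near 1 means the estimator preserves the ordering and relative spacing of human judgements, even if it has a constant bias or scale offset.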
[COMPUTER GRAPHICS]: Computational Geometry and Object
Figure 1: High resolution passive facial performance capture. From left to right: Acquisition setup; one reference frame; the reconstructed geometry (1 million polygons); final textured result; and two different frames. We introduce a purely passive facial capture approach that uses only an array of video cameras, but requires no template facial geometry, no special makeup or markers, and no active lighting. We obtain initial geometry using multi-view stereo, and then use a novel approach for automatically tracking texture detail across the frames. As a result, we obtain a high-resolution sequence of compatibly triangulated and parameterized meshes. The resulting sequence can be rendered with dynamically captured textures, while also consistently applying texture changes such as virtual makeup.
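The texture-detail tracking mentioned above can be illustrated with a much simpler stand-in (a toy sketch, not the paper's method, which operates on mesh parameterisations): track a patch from one frame to the next by maximising normalised cross-correlation over a small search window.

```python
import numpy as np

def ncc_track(prev_frame, next_frame, center, patch=7, search=5):
    """Track a texture patch between frames by maximising normalised
    cross-correlation over a small search window. A toy stand-in for
    the paper's texture-detail tracking."""
    r = patch // 2
    y, x = center
    tpl = prev_frame[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    tpl = (tpl - tpl.mean()) / (tpl.std() + 1e-9)  # z-score the template
    best, best_pos = -np.inf, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            win = next_frame[yy - r:yy + r + 1, xx - r:xx + r + 1].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-9)
            score = float((tpl * win).mean())  # NCC of z-scored patches
            if score > best:
                best, best_pos = score, (yy, xx)
    return best_pos

frame0 = np.zeros((40, 40))
frame0[18:22, 18:22] = 1.0                    # a bright texture feature
frame1 = np.roll(frame0, (2, 1), axis=(0, 1))  # feature moves down 2, right 1
print(ncc_track(frame0, frame1, (20, 20)))     # (22, 21)
```

The real system tracks dense detail consistently across an entire sequence to produce compatibly parameterized meshes; this only shows the basic matching criterion.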
2D/3D VIRTUAL FACE MODELING
We propose a novel and simple framework that solves two popular problems in digital photography: 2D face synthesis and 3D face modeling. 2D face synthesis aims at creating a new face, usually by mixing two or more portraits. We extend this notion to the combination of human and statue faces. The goal of 3D face modeling is to reconstruct a face in three dimensions from one or several images. These two tasks are often treated as separate problems although they both consider face modeling. In this paper, we propose a unified and general framework for both 2D and 3D cases that runs in a fully automatic manner. Our work also creates stereoscopic views for entertainment 3D display. Experimental results and subjective tests have confirmed the validity of our approach. Index Terms — face synthesis, 3D face, 3D display, stereoscopic view
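The "mixing two or more portraits" step can be reduced to its simplest possible form (a naive sketch; real systems, presumably including this one, warp the faces into correspondence before blending): a pixel-wise cross-dissolve of two pre-aligned portraits.

```python
import numpy as np

def mix_faces(face_a, face_b, weight=0.5):
    """Naive 2D face synthesis: a pixel-wise cross-dissolve of two
    pre-aligned portraits. Only illustrates the mixing step; feature
    alignment/warping is omitted."""
    assert face_a.shape == face_b.shape
    return (1.0 - weight) * face_a + weight * face_b

a = np.full((4, 4), 0.2)   # toy "portrait" intensities
b = np.full((4, 4), 0.8)
print(mix_faces(a, b)[0, 0])  # 0.5
```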
Interactive Sketch-Driven Image Synthesis
Figure 1: Our interactive system guides a user to specify pose and appearance using sketching, in order to synthesize novel images from a labeled collection of training images. The user first sketches elliptical “masses” (left), then contours (center), mimicking a traditional sketching workflow. Once the pose is specified, the artist can constrain the appearance and render a novel image (right). Top row: user sketch input and feedback guidelines; Bottom row: rendered previews. We present an interactive system for composing realistic images of an object under arbitrary pose and appearance specified by sketching. Our system draws inspiration from a traditional illustration workflow: The user first sketches rough “masses” of the object, as ellipses, to define an initial abstract pose that can then be refined with more detailed contours as desired. The system is made robust to partial or inaccurate sketches using a reduced-dimensionality model of pose space learnt from a labelled collection of photos. Throughout the composition process, interactive visual feedback is provided to guide the user. Finally, the user’s partial or complete sketch, complemented with appearance requirements, is used to constrain the automatic synthesis of a novel, high-quality, realistic image.
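One common way to make a system robust to partial input with a reduced-dimensionality model, in the spirit of the abstract, is to fit subspace coefficients only to the observed entries and reconstruct the rest (a minimal sketch under stated assumptions; the paper's actual model may differ):

```python
import numpy as np

def complete_pose(partial, observed, mean, components, lam=1e-3):
    """Complete a partially observed pose vector using a low-dimensional
    linear (PCA-style) subspace: fit the subspace coefficients to the
    observed entries only, then reconstruct the full vector.

    partial: full-length vector, valid only where `observed` is True.
    """
    A = components[:, observed].T            # maps coefficients -> observed dims
    b = partial[observed] - mean[observed]
    # Ridge-regularised least squares keeps the fit stable with few observations.
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
    return mean + components.T @ coeffs

rng = np.random.default_rng(2)
poses = rng.normal(size=(100, 20))           # toy training poses
mean = poses.mean(axis=0)
_, _, vt = np.linalg.svd(poses - mean, full_matrices=False)
components = vt[:5]                          # (5, 20) subspace basis
observed = np.zeros(20, dtype=bool)
observed[:8] = True                          # the user has sketched 8 of 20 dims
full = complete_pose(poses[0], observed, mean, components)
print(full.shape)  # (20,)
```

The reconstruction stays on the learned pose manifold, so even an inaccurate or incomplete sketch yields a plausible full pose.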
Modeling Object Appearance using Context-Conditioned Component Analysis
Subspace models have been very successful at modeling the appearance of structured image datasets when the visual objects have been aligned in the images (e.g., faces). Even with extensions that allow for global transformations or dense warps of the image, the set of visual objects whose appearance may be modeled by such methods is limited. They are unable to account for visual objects where occlusion leads to changing visibility of different object parts (without a strict layered structure) and where a one-to-one mapping between parts is not preserved. For example bunches of bananas contain different numbers of bananas but each individual banana shares an appearance subspace. In this work we remove the image space alignment limitations of existing subspace models by conditioning the models on a shape dependent context that allows for the complex, non-linear structure of the appearance of the visual object to be captured and shared. This allows us to exploit the advantages of subspace appearance models with non-rigid, deformable objects whilst also dealing with complex occlusions and varying numbers of parts. We demonstrate the effectiveness of our new model with examples of structured inpainting and appearance transfer.
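The conditioning idea can be caricatured with discrete context labels (a toy sketch; the paper conditions on a continuous, shape-dependent context, and all names below are illustrative): keep one linear subspace per context instead of a single global one.

```python
import numpy as np

class ContextConditionedSubspaces:
    """Toy version of context conditioning: one PCA-style subspace per
    discrete context label, so appearance statistics are shared only
    within a context."""

    def __init__(self, k):
        self.k = k
        self.models = {}

    def fit(self, X, contexts):
        contexts = np.asarray(contexts)
        for c in np.unique(contexts):
            Xc = X[contexts == c]
            mean = Xc.mean(axis=0)
            _, _, vt = np.linalg.svd(Xc - mean, full_matrices=False)
            self.models[c] = (mean, vt[:self.k])

    def reconstruct(self, x, context):
        # Project onto the subspace for this context, then reconstruct.
        mean, comps = self.models[context]
        return mean + comps.T @ (comps @ (x - mean))

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 16))                # toy appearance vectors
contexts = [0] * 30 + [1] * 30               # two discrete contexts
model = ContextConditionedSubspaces(k=4)
model.fit(X, contexts)
recon = model.reconstruct(X[0], 0)
print(recon.shape)  # (16,)
```

The paper's contribution is to make this conditioning continuous and shape-dependent, so that parts with changing visibility and varying counts can still share one appearance subspace.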