Results 1 - 10 of 32
Real-time Visual Tracking under Arbitrary Illumination Changes
"... In this paper, we investigate how to improve the robustness of visual tracking methods with respect to generic lighting changes. We propose a new approach to the direct image alignment of either Lambertian or non-Lambertian objects under shadows, inter-reflections, glints as well as ambient, diffuse ..."
Abstract - Cited by 33 (3 self)
In this paper, we investigate how to improve the robustness of visual tracking methods with respect to generic lighting changes. We propose a new approach to the direct image alignment of either Lambertian or non-Lambertian objects under shadows, inter-reflections and glints, as well as ambient, diffuse and specular reflections that may vary in power, type, number and spatial distribution. The method is based on a proposed model of illumination changes together with an appropriate geometric model of image motion. The parameters of these models are obtained through an efficient second-order optimization technique that directly minimizes the intensity discrepancies. Comparisons with existing direct methods show significant improvements in tracking performance, and extensive experiments confirm the robustness and reliability of our method.
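The paper's own illumination model and second-order optimizer are more elaborate than a snippet can show; the following is a deliberately simplified sketch of direct alignment under a photometric model (translation-only warp, global gain/bias, plain Gauss-Newton), with all names and the numpy/scipy dependencies assumed for illustration rather than taken from the paper.

import numpy as np
from scipy.ndimage import map_coordinates

def align_gain_bias(T, I, n_iters=50):
    """Estimate p = (tx, ty, alpha, beta) so that alpha*I(x+t) + beta ~= T(x)."""
    h, w = T.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(I)                      # image gradients (rows, cols)
    p = np.array([0.0, 0.0, 1.0, 0.0])           # tx, ty, alpha, beta
    for _ in range(n_iters):
        tx, ty, alpha, beta = p
        coords = np.vstack([(ys + ty).ravel(), (xs + tx).ravel()])
        Iw = map_coordinates(I,  coords, order=1).reshape(h, w)
        Ix = map_coordinates(gx, coords, order=1).reshape(h, w)
        Iy = map_coordinates(gy, coords, order=1).reshape(h, w)
        r = (alpha * Iw + beta - T).ravel()      # intensity discrepancies
        J = np.column_stack([(alpha * Ix).ravel(),   # d r / d tx
                             (alpha * Iy).ravel(),   # d r / d ty
                             Iw.ravel(),             # d r / d alpha
                             np.ones(h * w)])        # d r / d beta
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + dp
        if np.linalg.norm(dp) < 1e-6:
            break
    return p

Replacing the translation warp with a homography and the gain/bias pair with a richer reflection model is what separates such a toy from the approach the abstract describes.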
Unsupervised Face Alignment by Robust Nonrigid Mapping
- Proc. ICCV 2009, Kyoto
, 2009
"... We propose a novel approach to unsupervised facial im-age alignment. Differently from previous approaches, that are confined to affine transformations on either the entire face or separate patches, we extract a nonrigid mapping be-tween facial images. Based on a regularized face model, we frame unsu ..."
Abstract - Cited by 19 (8 self)
We propose a novel approach to unsupervised facial image alignment. Unlike previous approaches, which are confined to affine transformations on either the entire face or separate patches, we extract a nonrigid mapping between facial images. Based on a regularized face model, we frame unsupervised face alignment as Lucas-Kanade image registration. We propose a robust optimization scheme to handle appearance variations. The method is fully automatic and can cope with pose variations and expressions, all in an unsupervised manner. Experiments on a large set of images show that the approach is effective.
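The abstract does not spell out the robust scheme; as an illustrative sketch only (the Huber loss, the MAD scale estimate and all names are assumptions, not taken from the paper), a robust Lucas-Kanade step typically reweights the residuals before solving the normal equations:

import numpy as np

def huber_weights(r, k=1.345):
    """IRLS weights for the Huber loss: 1 inside the inlier band, k/|r| outside."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > k
    w[mask] = k / a[mask]
    return w

def robust_lk_step(J, r):
    """One robust Gauss-Newton / Lucas-Kanade update: solve (J^T W J) dp = -J^T W r."""
    scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust sigma (MAD)
    w = huber_weights(r / scale)
    JW = J * w[:, None]                          # apply diagonal weights to J
    return np.linalg.solve(JW.T @ J, -JW.T @ r)

Down-weighting large residuals in this way is one common means of tolerating the appearance variations the abstract mentions.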
Real-time dense visual tracking under large lighting variations
- in Proc. Conf
, 2011
"... This paper proposes a model for large illumination variations to improve direct 3D tracking techniques since they are highly prone to illumination changes. Within this context dense monocular and multi-camera tracking techniques are presented which each perform in real-time (45Hz). The proposed appr ..."
Abstract - Cited by 13 (7 self)
This paper proposes a model for large illumination variations to improve direct 3D tracking techniques, which are highly prone to illumination changes. Within this context, dense monocular and multi-camera tracking techniques are presented, each of which runs in real time (45 Hz). The proposed approach exploits the relative advantages of both model-based and visual odometry techniques for tracking. In direct model-based tracking, the photometric model is usually acquired under lighting conditions that differ significantly from those observed by the current camera view; however, model-based approaches avoid drift. Incremental visual odometry, on the other hand, faces relatively little lighting variation between frames but accumulates drift. To solve this problem, a hybrid approach is proposed that simultaneously minimises drift via a 3D model whilst using locally consistent illumination to correct large photometric differences. Direct 6-DOF tracking is performed by an accurate method which iteratively and directly minimises dense image measurements using non-linear optimisation. A stereo technique for automatically acquiring the 3D photometric model has also been optimised for the purpose of this paper. Real experiments are shown on complex 3D scenes for a hand-held camera undergoing fast 3D movement and various illumination changes, including daylight, artificial lights, significant shadows, non-Lambertian reflections, occlusions and saturations.
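The abstract describes the hybrid idea only at a high level; the toy sketch below (all names, the pre-warped inputs and the global gain/bias correction are assumptions for illustration) shows one way a drift-free model residual and a frame-to-frame visual odometry residual could be stacked into a single least-squares error:

import numpy as np

def hybrid_residual(I_cur, I_model_warped, I_prev_warped, alpha, beta, lam=1.0):
    """Stacked photometric error for a hybrid model-based / odometry tracker:
       - model term: current frame vs. the photometric 3D model warped into it,
         corrected by a global gain/bias (alpha, beta) for large lighting change;
       - odometry term: current frame vs. the previous frame warped into it,
         weighted by lam, trading drift-free anchoring against local consistency."""
    r_model = I_cur - (alpha * I_model_warped + beta)
    r_vo = I_cur - I_prev_warped
    return np.concatenate([r_model.ravel(), lam * r_vo.ravel()])

Minimising this stacked vector over the 6-DOF pose (and the photometric terms) captures the trade-off the abstract describes: the model term prevents drift while the odometry term remains locally consistent in illumination.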
Joint Estimation of Deformable Motion and Photometric Parameters in Single View Videos
- Computer Vision Workshops (ICCV Workshops), 2009 IEEE 12th International Conference on
, 2009
"... In this paper we present a method for joint deformation and illumination parameter estimation from monocular image sequences exploiting direct image information. We are particularly interested in augmented reality applications, where a new texture is rendered onto a moving and deforming surface in t ..."
Abstract - Cited by 11 (6 self)
In this paper we present a method for joint deformation and illumination parameter estimation from monocular image sequences, exploiting direct image information. We are particularly interested in augmented reality applications, where a new texture is rendered in real time onto a moving and deforming surface in the original video. Realistic retexturing requires not only geometric registration but also photometric parameter retrieval for a convincing illusion. The contribution of this paper is a method for deformable surface augmentation that combines an extended optical flow equation with a mesh-based shape and illumination prior, allowing simultaneous estimation of deformation and photometric parameters. Taking the photometric part into account by relaxing the brightness constancy assumption not only yields realistic augmentation results but also improves spatial tracking.
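The abstract refers to an extended optical flow equation but does not state it; as an illustration only (the paper's exact parametrization is not given here), brightness constancy is commonly relaxed by allowing a per-point gain \alpha and offset \beta:

\[
I\bigl(\mathbf{x} + \mathbf{u}(\mathbf{x}),\, t+1\bigr) \;=\; \alpha(\mathbf{x})\, I(\mathbf{x}, t) + \beta(\mathbf{x})
\quad\Longrightarrow\quad
\nabla I \cdot \mathbf{u} + I_t \;\approx\; (\alpha - 1)\, I + \beta ,
\]

where the linearized form follows from a first-order expansion of the left-hand side. The displacement field u(x) would be parameterized by mesh vertices and (\alpha, \beta) constrained by a shape and illumination prior of the kind the abstract mentions.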
A fast 2D shape recovery approach by fusing features and appearance
- IEEE Trans. Pattern Anal. Mach. Intell
"... ..."
(Show Context)
Direct Image Alignment of Projector-Camera Systems with Planar Surfaces
"... Projector-camera systems use computer vision to analyze their surroundings and display feedback directly onto real world objects, as embodied by spatial augmented reality. To be effective, the display must remain aligned even when the target object moves, but the added illumination causes problems f ..."
Abstract - Cited by 7 (0 self)
Projector-camera systems use computer vision to analyze their surroundings and display feedback directly onto real-world objects, as embodied by spatial augmented reality. To be effective, the display must remain aligned even when the target object moves, but the added illumination causes problems for traditional algorithms. Current solutions treat the displayed content as interference and largely depend on channels orthogonal to visible light. They cannot directly align projector images with real-world surfaces, even though this may be the actual goal. We propose instead to model the light emitted by projectors and reflected into cameras, and to treat the displayed content as additional information useful for direct alignment. We implemented a software algorithm that successfully runs on planar surfaces with diffuse reflectance at almost two frames per second with subpixel accuracy. Although slow, our work proves the viability of the concept, paving the way for future optimization and generalization.
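The abstract only sketches the idea of modelling projected light; as a hedged illustration (a diffuse planar surface, a single projector-to-camera homography, and the OpenCV dependency are all assumptions, not details from the paper), the camera image can be predicted from the projected content and used as the reference for direct alignment:

import numpy as np
import cv2

def predict_camera_image(projected, albedo, ambient, H_proj_to_cam, cam_shape):
    """Toy forward model for a planar diffuse surface:
       camera ~ albedo * (warped projector content + ambient light).
       H_proj_to_cam is the 3x3 homography mapping projector pixels to camera pixels."""
    warped = cv2.warpPerspective(projected, H_proj_to_cam,
                                 (cam_shape[1], cam_shape[0]))
    return albedo * (warped + ambient)

Direct alignment would then minimise, over the homography and photometric terms, the difference between this prediction and the observed camera frame, so the displayed content helps the registration rather than interfering with it.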
A Hands-On Approach to High-Dynamic-Range and Superresolution Fusion
"... This paper discusses a new framework to enhance image and video quality. Recent advances in high-dynamic-range image fusion and superresolution make it possible to extend the intensity range or to increase the resolution of the image beyond the limitations of the sensor. In this paper, we propose a ..."
Abstract - Cited by 6 (0 self)
This paper discusses a new framework to enhance image and video quality. Recent advances in high-dynamic-range image fusion and superresolution make it possible to extend the intensity range or to increase the resolution of an image beyond the limitations of the sensor. In this paper, we propose a new way to combine both of these fusion methods in a two-stage scheme. To achieve robust image enhancement in practical application scenarios, we adapt state-of-the-art methods for automatic photometric camera calibration, controlled image acquisition, image fusion and tone mapping. With respect to high-dynamic-range reconstruction, we show that only two input images are sufficient to capture the dynamic range of the scene. The usefulness and performance of this system are demonstrated on images taken with various types of cameras.
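The calibration, acquisition and superresolution stages are beyond a snippet, but the two-exposure HDR fusion step can be illustrated with a minimal sketch (a linear camera response, known exposure times and a hat-shaped weight are assumed here; they are not necessarily the choices made in the paper):

import numpy as np

def merge_two_exposures(img_short, img_long, t_short, t_long):
    """Merge two linear-response exposures (values in [0, 1]) into a radiance map.
       Each pixel is a weighted average of the radiance estimates img/t, where the
       hat-shaped weight down-weights under- and over-exposed pixels."""
    def hat(x):
        return 1.0 - np.abs(2.0 * x - 1.0)       # 0 at the extremes, 1 at mid-gray
    w1, w2 = hat(img_short), hat(img_long)
    return (w1 * img_short / t_short + w2 * img_long / t_long) / (w1 + w2 + 1e-8)

def tonemap_global(radiance):
    """Simple global operator mapping radiance back to a displayable [0, 1] range."""
    return radiance / (1.0 + radiance)

With only two exposures, the weights matter most at the extremes: the short exposure supplies the highlights, the long exposure supplies the shadows, and mid-tones average both.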
Real-time direct tracking of color images in the presence of illumination variation
- In IEEE International Conference on Robotics and Automation
, 2011
"... Abstract — This paper introduces a novel color tracking model for image registration that exploits directly the color information provided by standard color cameras. Furthermore, unlike previous approaches, the color tracking model is de-signed to handle both global and local illumination changes wi ..."
Abstract - Cited by 6 (3 self)
This paper introduces a novel color tracking model for image registration that directly exploits the color information provided by standard color cameras. Furthermore, unlike previous approaches, the color tracking model is designed to handle both global and local illumination changes within a robust framework that also rejects outliers such as occluding objects, shadows, etc. To demonstrate the proposed approach, a planar template tracking algorithm is used; however, the approach is also valid for a general class of direct tracking algorithms. In particular, the objective function is defined so as to be minimized directly in the CFA (color filter array) space instead of the common RGB space. It is shown that this not only takes advantage of the discernibility of color measurements but also drastically improves the efficiency of the vision processing pipeline, ultimately improving real-time performance. A robust global illumination model is then combined with a robust M-estimation technique, which is shown to handle global and local illumination changes without enlarging the state space of the estimator to non-real-time proportions in order to model every local illumination variation. Results from synthetic and real sequences are presented to demonstrate the proposed concept.
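As an illustrative sketch only (an RGGB Bayer layout and a simple per-channel gain are assumptions; the paper's actual illumination model is not given in the abstract), a direct photometric error can be evaluated on the raw CFA mosaic instead of demosaiced RGB:

import numpy as np

def cfa_residual(cfa_ref, cfa_warped, gains):
    """Photometric error computed directly on RGGB Bayer mosaics.
       cfa_ref, cfa_warped: raw single-channel mosaics of the same size.
       gains: (gR, gG, gB) global illumination gains, one per color channel."""
    gR, gG, gB = gains
    gain_map = np.empty_like(cfa_ref, dtype=float)
    gain_map[0::2, 0::2] = gR                    # R sites
    gain_map[0::2, 1::2] = gG                    # G sites (first row of each 2x2 cell)
    gain_map[1::2, 0::2] = gG                    # G sites (second row)
    gain_map[1::2, 1::2] = gB                    # B sites
    return (gain_map * cfa_warped - cfa_ref).ravel()

These residuals would then feed a robust M-estimator (e.g. Huber weighting) inside the usual iterative direct-tracking loop, which is where local illumination changes and occlusions get suppressed without extra state.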
Shadow Resistant Direct Image Registration
"... Abstract. Direct image registration methods usually treat shadows as outliers. We propose a method which registers images in a 1D shadow invariant space. Shadow invariant image formation is possible by projecting color images, expressed in a log-chromaticity space, onto an ‘intrinsic line’. The slop ..."
Abstract - Cited by 5 (0 self)
Direct image registration methods usually treat shadows as outliers. We propose a method that registers images in a 1D shadow-invariant space. Shadow-invariant image formation is possible by projecting color images, expressed in a log-chromaticity space, onto an ‘intrinsic line’. The slope of the line is a camera-dependent parameter, usually obtained in a prior calibration step. In this paper, calibration is avoided by jointly determining the ‘invariant slope’ with the registration parameters. The method deals with images taken by different cameras by using a different slope for each image and compensating for photometric variations. Prior information about the camera is thus not required. The method is assessed on synthetic and real data.
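The projection step itself can be illustrated with a short sketch (per-pixel RGB values in (0, 1], geometric-mean log-chromaticities and a known slope theta are assumed; in the paper the slope is estimated jointly with the registration parameters rather than given):

import numpy as np

def shadow_invariant(rgb, theta):
    """Map an H x W x 3 RGB image to a 1D shadow-invariant image:
       log-chromaticities are projected onto a line of angle theta."""
    eps = 1e-6
    rgb = np.clip(rgb.astype(float), eps, None)
    geo_mean = rgb.prod(axis=2) ** (1.0 / 3.0)
    chi1 = np.log(rgb[..., 0] / geo_mean)        # log-chromaticity axis 1 (R)
    chi2 = np.log(rgb[..., 2] / geo_mean)        # log-chromaticity axis 2 (B)
    return chi1 * np.cos(theta) + chi2 * np.sin(theta)

Registering two such 1D images while searching over theta alongside the warp parameters is the calibration-free idea the abstract describes.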