Results 11–20 of 41
Image-Based Geometrically-Correct Photorealistic Scene/Object Modeling (IBPhM): A Review
 in Proc. 3rd Asian Conf. on Computer Vision
, 1998
Abstract

Cited by 15 (0 self)
There is emerging interest from both the computer vision and computer graphics communities in obtaining photorealistic models of a scene or an object from real images. This paper presents a tentative review of the computer vision techniques used in such modeling that guarantee the generated views to be geometrically correct. The topics covered include mosaicking for building environment maps, CAD-like modeling for building 3D geometric models together with texture maps extracted from real images, image-based rendering for synthesizing new views from uncalibrated images, and techniques for modeling the appearance variation of a scene or an object under different illumination conditions. Major issues and difficulties are addressed. Keywords: photorealistic modeling, image-based rendering, multiple-view geometry, photometric models, CAD, camera calibration, 3D reconstruction, uncalibrated images, domain knowledge, illumination variation.
The design and implementation of a Bayesian CAD modeler for
 Advanced Robotics
, 2001
Abstract

Cited by 14 (12 self)
We present a Bayesian CAD modeler for robotic applications. We address the problem of taking into account the propagation of geometric uncertainties when solving inverse geometric problems. The proposed method may be seen as a generalization of constraint-based approaches in which we explicitly model geometric uncertainties. Using our methodology, a geometric constraint is expressed as a probability distribution on the system parameters and the sensor measurements, instead of a simple equality or inequality. To solve geometric problems in this framework, we propose an original resolution method able to adapt to problem complexity.
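The key move in the abstract — replacing a hard geometric equality with a probability distribution — can be illustrated with a minimal sketch. The Gaussian noise model, the distance constraint, and all numbers below are illustrative assumptions, not the paper's actual formulation:

```python
import math

def soft_constraint_likelihood(measured, target, sigma):
    """Gaussian likelihood replacing the strict equality measured == target.

    A constraint-based solver would accept or reject a configuration;
    here every configuration gets a score that degrades smoothly with
    the violation, so uncertainty can propagate through the solution.
    """
    return math.exp(-0.5 * ((measured - target) / sigma) ** 2)

# Hypothetical distance constraint d = 10.0 with sensor noise sigma = 0.1.
exact = soft_constraint_likelihood(10.0, 10.0, 0.1)  # perfectly satisfied
near = soft_constraint_likelihood(10.2, 10.0, 0.1)   # slight violation
far = soft_constraint_likelihood(12.0, 10.0, 0.1)    # gross violation
```

A strict equality would reject both imperfect measurements outright; the probabilistic version merely ranks them, which is what lets the method adapt to problem complexity.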
Contour Matching Using Epipolar Geometry
, 1998
Abstract

Cited by 9 (0 self)
Contour matching is generally difficult, and the problem becomes harder still when the motion between successive image frames is large. When the image frames are obtained while the camera is in motion, we can exploit constraints on the scene and the camera. In this paper we propose a contour matching algorithm which guarantees an accurate matching result, even for contours undergoing large motion. The key idea is to use epipolar geometry and geometric constraints. This contour matching method can be applied to contours from two or three different views. All of the processes are fully automatic and have been successfully implemented and tested on many images.
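The epipolar constraint this abstract relies on restricts the match of a point x in one view to the line l' = F x in the other, which prunes candidate contour matches even under large motion. A minimal sketch with a hypothetical fundamental matrix for a pure horizontal translation (so matches lie on the same image row), not the paper's algorithm:

```python
import numpy as np

# Hypothetical F for a camera translating along the x-axis:
# the epipolar line of (u, v) in the second image is the row v' = v.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])

def epipolar_line(F, x):
    """Epipolar line l' = F x in the second image for x = (u, v, 1)."""
    return F @ x

def point_line_distance(line, x):
    """Perpendicular distance of pixel x = (u, v, 1) from line (a, b, c)."""
    a, b, c = line
    return abs(a * x[0] + b * x[1] + c) / np.hypot(a, b)

x1 = np.array([3.0, 5.0, 1.0])    # contour point in view 1
l2 = epipolar_line(F, x1)         # its epipolar line in view 2
good = np.array([9.0, 5.0, 1.0])  # candidate on the same row: on the line
bad = np.array([9.0, 8.0, 1.0])   # candidate off the line: rejected
```

Scoring candidates by their distance to the epipolar line is what makes the search one-dimensional instead of two-dimensional.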
Sensitivity analysis of EKF and iterated EKF pose estimation for position-based visual servoing
 in Proc. IEEE Conference on Control Applications CCA 2005
Abstract

Cited by 8 (1 self)
Abstract — Robust and real-time relative pose estimation is an integral part of a position-based visual servoing (PBVS) system. Traditionally, the extended Kalman filter (EKF) has been used to solve the nonlinear relative end-effector-to-object pose equations from a set of 2D–3D point correspondences. However, the performance of the estimation filter and the convergence of the pose estimates are highly sensitive to the tuning of filter parameters, camera calibration, and image processing. In this paper, the application of the iterated EKF (IEKF) to a robust high-speed PBVS system is studied. We also provide a detailed analysis of the stability and sensitivity of EKF and IEKF pose estimation to uncertainties in (1) the tuning of filter parameters, namely, the process and measurement noise covariance matrices, initial state estimate, and sampling time (speed of the PBVS system), (2) feature selection, and (3) calibration of camera intrinsic parameters. Experimental results show that the IEKF outperforms the standard EKF without sacrificing bandwidth and should be used to improve the robustness of the PBVS system to uncertainties.
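The EKF/IEKF distinction the abstract studies is that the IEKF relinearizes the measurement function around the updated estimate instead of only around the prior. A scalar toy sketch (the measurement h(x) = x² and all numbers are assumptions for illustration, not the paper's pose model):

```python
def iekf_update(x0, P, z, R, h, H_jac, n_iter=5):
    """Iterated EKF measurement update for a scalar state.

    Relinearizes h around the current iterate; n_iter=1 reduces to the
    standard EKF update. With a strongly nonlinear h, the iteration
    converges toward the MAP estimate the single EKF step misses.
    """
    x = x0
    for _ in range(n_iter):
        H = H_jac(x)                    # Jacobian at the current iterate
        S = H * P * H + R               # innovation covariance
        K = P * H / S                   # Kalman gain
        x = x0 + K * (z - h(x) - H * (x0 - x))
    P_new = (1 - K * H) * P
    return x, P_new

h = lambda x: x ** 2                    # hypothetical nonlinear measurement
H = lambda x: 2 * x

# Prior x0 = 2 (variance 1), measurement z = 9 (variance 0.01): truth ~ 3.
x_ekf, _ = iekf_update(2.0, 1.0, 9.0, 0.01, h, H, n_iter=1)   # one step = EKF
x_iekf, _ = iekf_update(2.0, 1.0, 9.0, 0.01, h, H, n_iter=10)
```

With these numbers the single EKF step overshoots (linearizing x² at x = 2 underestimates its slope near 3), while the iterated update converges close to 3 — the kind of sensitivity gap the paper quantifies for full 6-DOF pose.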
3D reconstruction from projection matrices in a C-arm based 3D angiography system
 In First International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)
, 1998
Abstract

Cited by 8 (3 self)
Abstract. 3D reconstruction of arterial vessels from planar radiographs obtained at several angles around the object has gained increasing interest. The motivating application has been interventional angiography. In order to obtain a three-dimensional reconstruction from a C-arm mounted X-ray image intensifier (XRII), traditionally the trajectory of the source and the detector system is characterized and the pixel size is estimated. The main use of the imaging geometry characterization is to provide a correct 3D–2D mapping between the 3D voxels to be reconstructed and the 2D pixels on the radiographic images. We propose using projection matrices directly in a voxel-driven backprojection for the reconstruction, as opposed to computing all the geometrical parameters, including the imaging parameters. We discuss the simplicity of the entire calibration–reconstruction process, and the fact that it makes the computation of the pixel size, the source-to-detector distance, and other explicit imaging parameters unnecessary.
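The core of a voxel-driven backprojection is that each voxel is pushed through the 3×4 projection matrix and reads the detector pixel it lands on — no explicit source trajectory or pixel size needed. A toy single-view sketch with a made-up orthographic-style matrix and nearest-neighbour lookup (real C-arm matrices are perspective and multiple views are accumulated):

```python
import numpy as np

def project(P, X):
    """Map a 3D point to 2D pixel coordinates via a 3x4 projection matrix."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def backproject(volume_shape, P, image):
    """Voxel-driven backprojection: each voxel accumulates the detector
    pixel its centre projects to (nearest neighbour, single view)."""
    vol = np.zeros(volume_shape)
    for idx in np.ndindex(volume_shape):
        u, v = project(P, np.array(idx, dtype=float))
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < image.shape[0] and 0 <= ui < image.shape[1]:
            vol[idx] += image[vi, ui]
    return vol

# Toy matrix projecting voxel (x, y, z) onto detector pixel (u, v) = (x, y).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
image = np.arange(9.0).reshape(3, 3)     # hypothetical 3x3 radiograph
vol = backproject((2, 2, 2), P, image)   # values smear along the ray (z) axis
```

Because the matrix here ignores z, every voxel column along the projection direction receives the same value — the smearing that summing over many views resolves into a reconstruction.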
Randomness and Geometric Features in Computer Vision
 In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR'96)
, 1996
Abstract

Cited by 7 (7 self)
It is often necessary to handle randomness and geometry in computer vision, for instance to match and fuse together noisy geometric features such as points, lines or 3D frames, or to estimate a geometric transformation from a set of matched features. However, the proper handling of these geometric features is far more difficult than for points, and a number of paradoxes can arise. We analyse in this article three basic problems: (1) what is a uniform random distribution of features, (2) how to define a distance between features, and (3) what is the "mean feature" of a number of feature measurements, and we propose generic methods to solve them.
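The "mean feature" problem the abstract raises is easy to see already for 2D rotations: averaging angles arithmetically breaks at the ±π wraparound, while averaging on the unit circle (the chordal mean, one of the generic constructions this line of work formalizes) does not. A small sketch with illustrative angles:

```python
import math

def naive_mean(angles):
    """Arithmetic mean of angles -- ignores that rotations live on a circle."""
    return sum(angles) / len(angles)

def rotation_mean(angles):
    """Chordal mean of 2D rotations: average the unit vectors
    (cos a, sin a), then read the angle back off with atan2 --
    immune to the 2*pi wraparound."""
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    return math.atan2(s, c)

# Two rotations just either side of the +/-pi cut: the true mean is pi,
# but the arithmetic mean lands at 0 -- a half-turn away.
angles = [math.pi - 0.1, -math.pi + 0.1]
```

This is exactly the kind of paradox that motivates defining distances and means intrinsically on the feature manifold rather than on raw parameter values.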
Visual planes-based Simultaneous Localization And Model Refinement for augmented reality
Abstract

Cited by 4 (2 self)
This paper presents a method for camera pose tracking that uses partial knowledge about the scene. The method is based on monocular vision Simultaneous Localization And Mapping (SLAM). With respect to classical SLAM implementations, this approach uses previously known information about the environment (a rough map of the walls) and profits from the various available databases and blueprints to constrain the problem. This method considers that the tracked image patches belong to known planes (with some uncertainty in their localization) and that the SLAM map can be represented by associations of cameras and planes. In this paper, we propose an adapted SLAM implementation and detail the considered models. We show that this method gives good results on a real sequence with complex motion for an augmented reality (AR) application.
Ego-motion Estimation of a Multi-camera System through Line Correspondence
 ICIP
Abstract

Cited by 4 (4 self)
In this paper we propose a method for estimating the ego-motion of a calibrated multi-camera system from an analysis of the luminance edges. The method works entirely in 3D space, as all edges of each set of views are first localized, matched and back-projected into the object space. In fact, it searches for the rigid motion that best merges the sets of 3D contours extracted from each of the multi-view sets. The method uses both straight and curved 3D contours.
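Once two sets of 3D contour points are in correspondence, "the rigid motion that best merges" them is a classical least-squares problem with a closed-form SVD solution (the Kabsch/Procrustes method). The sketch below uses that standard solution as an assumption — the abstract does not specify the paper's own solver — on a made-up contour sample:

```python
import numpy as np

def best_rigid_motion(A, B):
    """Least-squares rigid motion (R, t) with R @ a + t ~ b for two
    Nx3 point sets in correspondence (Kabsch algorithm via SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)              # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

# Hypothetical sampled 3D contour, rotated 90 degrees about z and shifted.
A = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
B = A @ Rz.T + np.array([2.0, 0, 1])
R, t = best_rigid_motion(A, B)             # recovers Rz and the shift
```

In the noiseless case the recovered rotation matches the one used to generate the second set exactly; with noisy back-projected contours the same formula gives the least-squares merge.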
Self-Maintaining Camera Calibration Over Time
 In Proc. IEEE Computer Society Conf. CVPR
, 1997
Abstract

Cited by 3 (0 self)
The success of an intelligent robotic system depends on the performance of its vision system, which in turn depends to a great extent upon the quality of its calibration. During the execution of a task the vision system is subject to external influences such as vibrations, thermal expansion, etc., which affect and possibly invalidate the initial calibration. Moreover, it is possible that parameters of the vision system, such as the zoom or the focus, are altered intentionally in order to perform specific vision tasks. This paper describes a technique for automatically maintaining the calibration of stereo vision systems over time without again using any particular calibration apparatus. It uses all available information, i.e. both spatial and temporal data. Uncertainty is systematically manipulated and maintained. Synthetic and real data are used to validate the proposed technique, and the results compare very favourably with those given by classical calibration methods. Keywords: C...
Model Based Pose Estimation from Uncertain Data
 PhD thesis, Hebrew University of Jerusalem
, 1993
Abstract

Cited by 3 (0 self)
This work was carried out under the supervision of Dr. Michael Werman and Professor Shmuel Peleg. Model-based pose estimation is a process which determines the location and orientation of a given object relative to a specific viewer. The input supplied to the process consists of a description of the object, generally denoted the model, and a set of measurements of the object taken by the viewer. The study of pose estimation is important in many areas of computer vision, such as object recognition, object tracking, robot navigation, motion detection, etc. This thesis deals with similar problems of determining, from sensory data, the exact position and orientation of a 3D object represented by a model. The difficulty in solving pose estimation problems is mainly due to the fact that the sensory data from which the pose should be determined is imprecise and noisy. A slight error in measurement may have a large effect on the precision of the solution or may not even allow any solution. The problem of imprecision is especially difficult when the