Results 11–20 of 35
A Projective Framework for Structure and Motion Recovery from Two Views of a Piecewise Planar Scene
, 2000
Abstract

Cited by 9 (1 self)
In this paper, we consider the problem of finding an optimal reconstruction from two views of a piecewise planar scene. We consider the general case of uncalibrated cameras, which places us in a projective framework. In this case, there is no meaningful metric information about the object space that could be used to define optimization criteria. Since the images are then the only spaces where an optimization process makes sense, each step of the reconstruction process, from the detection of planar structures to motion estimation and actual 3D reconstruction, requires a consistent image-level representation of geometric 3D structures. In our case, we need to represent camera motion and 3D points that are subject to coplanarity constraints. It is well known that camera motion between two views can be represented on the image level via the epipolar geometry (fundamental matrix). Coplanarity constraints can be expressed via a collection of 2D homographies. Unfortunately, these algebraic entities are overparameterized, in the sense that the 2D homographies must in addition obey constraints imposed by the epipolar geometry. We are thus looking for a minimal and consistent representation of motion (epipolar geometry) and structure (points + homographies) that should also be easy to use for minimizing reprojection error in a bundle adjustment manner. In this paper, we propose such a representation and use it to devise fast and accurate estimation methods for each step of the reconstruction process, including image point matching, plane detection, and optimal triangulation of planes and points on planes. We make extensive use of the quasi-linear optimization principle. A great number of experimental results show that the new methods give sup...
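The compatibility constraint mentioned in this abstract has a well-known algebraic form: a plane homography H is consistent with a fundamental matrix F only if H^T F is skew-symmetric. A minimal numerical sketch with synthetic geometry (illustrative values, not the paper's parameterization):

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x, so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Synthetic two-view geometry in normalized coordinates: P1 = [I|0], P2 = [R|t].
axis = np.array([0.2, -0.5, 1.0]); axis /= np.linalg.norm(axis)
A = skew(axis)
R = np.eye(3) + np.sin(0.3) * A + (1 - np.cos(0.3)) * A @ A   # Rodrigues formula
t = np.array([0.5, -0.2, 1.0])

F = skew(t) @ R                           # epipolar matrix of this view pair
n, d = np.array([0.1, 0.2, 1.0]), 4.0     # scene plane n . X = d (camera-1 frame)
H = R + np.outer(t, n) / d                # homography induced by the plane

# Consistency with the epipolar geometry: H^T F must be skew-symmetric.
print(np.allclose(H.T @ F + F.T @ H, 0))  # True
```

An arbitrary 3x3 homography would fail this test, which is exactly why points + homographies need the minimal parameterization the paper proposes.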
Online Hand-Eye Calibration
, 1999
Abstract

Cited by 6 (0 self)
In this paper, we address the problem of hand-eye calibration of a robot-mounted video camera. First, we derive a new linear formulation of the problem, which allows an algebraic analysis of cases that the usual approaches do not consider. Second, we extend this new formulation into an online hand-eye calibration method, which dispenses with the calibration object required by standard approaches and uses unknown scenes instead. Finally, experimental results validate both methods.
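The paper's own linear formulation is not reproduced in the abstract. As a generic illustration, the rotation part of the classical hand-eye equation AX = XB can be solved linearly via Kronecker products; the sketch below assumes that classical setup and synthetic data, and is not the authors' method:

```python
import numpy as np

def rot(axis, angle):
    """Rodrigues rotation matrix about a given axis."""
    axis = np.asarray(axis, float); axis /= np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def handeye_rotation(motions):
    """Solve R_A R_X = R_X R_B for R_X from >= 2 motion pairs (R_A, R_B).

    Uses vec(A X B) = (B^T kron A) vec(X): each pair gives the linear
    constraint (I kron R_A - R_B^T kron I) vec(R_X) = 0."""
    M = np.vstack([np.kron(np.eye(3), Ra) - np.kron(Rb.T, np.eye(3))
                   for Ra, Rb in motions])
    _, _, Vt = np.linalg.svd(M)
    X = Vt[-1].reshape(3, 3, order="F")   # column-stacked vec -> matrix
    U, _, Wt = np.linalg.svd(X)           # project onto SO(3), fix sign
    R = U @ Wt
    return -R if np.linalg.det(R) < 0 else R

# Synthetic check: pick a ground-truth R_X and generate consistent pairs
# R_B = R_X^T R_A R_X for two motions with independent rotation axes.
Rx = rot([1, 2, 3], 0.7)
pairs = []
for axis, ang in [([1, 0, 0.2], 0.5), ([0, 1, -0.3], 0.9)]:
    Ra = rot(axis, ang)
    pairs.append((Ra, Rx.T @ Ra @ Rx))
print(np.allclose(handeye_rotation(pairs), Rx))  # True
```

Two motions with independent axes are needed: a single pair leaves the rotation determined only up to a one-parameter family.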
Bayesian Analysis IV: Noise And Computing Time Considerations
 J. Magn. Reson
, 1991
Abstract

Cited by 5 (4 self)
Probability theory, when interpreted as logic, enables one to ask many questions that are not possible with the frequency interpretation of probability theory. Often, answering these questions can be computationally intensive. If these techniques are to find their way into general use in NMR, a way must be found that allows one to calculate the probability for the frequencies, amplitudes, and decay rate constants quickly and easily. In this paper, we describe a procedure that allows one to compute the posterior probability for the frequencies, amplitudes, and decay rate constants from a series of zero-padded discrete Fourier transforms of the complex FID data, when the data have been multiplied by a decaying exponential. Additionally, the calculation is modified to include prior information about the noise, and it is shown that obtaining a sample of the noise is almost as important as obtaining a signal sample, because it allows one to investigate complicated spectra using simple models. Thre...
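A minimal sketch of the data-processing step the abstract describes: multiply a complex FID by a decaying exponential, zero-pad, and take the discrete Fourier transform. The one-resonance signal model and all parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

# Synthetic one-resonance FID: s(t) = exp(2*pi*i*f0*t - t/T2) + noise.
rng = np.random.default_rng(1)
dwell, npts = 0.001, 512                  # 1 ms dwell time, 512 complex points
t = np.arange(npts) * dwell
f0, T2 = 120.0, 0.05                      # resonance (Hz), decay constant (s)
fid = np.exp(2j * np.pi * f0 * t - t / T2)
fid += 0.01 * (rng.normal(size=npts) + 1j * rng.normal(size=npts))

# Exponential multiplication (extra line broadening), then zero-pad and DFT.
lb = 0.05                                  # apodization time constant (s)
apodized = fid * np.exp(-t / lb)
npad = 8 * npts                            # zero-padding refines the freq. grid
spectrum = np.fft.fft(apodized, npad)
freqs = np.fft.fftfreq(npad, d=dwell)

peak = freqs[np.argmax(np.abs(spectrum))]
print(peak)  # close to f0 = 120 Hz
```

Zero-padding interpolates the spectrum onto a finer frequency grid; it is this interpolated DFT that the paper's fast posterior evaluation works from.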
Numerical simulation of periodic nuclear magnetic resonance problems: fast calculation of carousel averages
, 1998
Abstract

Cited by 4 (0 self)
Many nuclear magnetic resonance (NMR) problems involve spin evolution in the presence of a periodic Hamiltonian. In many cases, calculation of the spin response requires a 'carousel average', meaning a summation of NMR signals over all cyclic permutations of the perturbation. Symmetry theorems are derived which in many cases allow this carousel average to be performed with minimal computational effort. This has implications for the computation of NMR spectra under asynchronous decoupling sequences, and for the computation of powder NMR spectra in the presence of sample rotation. Calculated magic-angle spinning powder spectra of L-[13C3]alanine are compared with experimental results.
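The carousel average itself, the quantity the paper's symmetry theorems accelerate, can be sketched by brute force directly from its definition. The spin-1/2 three-step sequence and operators below are illustrative:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def rot(op, theta):
    """Propagator exp(-i*theta*op) for a spin-1/2 operator op = sigma/2."""
    return np.cos(theta / 2) * np.eye(2) - 2j * np.sin(theta / 2) * op

def carousel_average(elems, rho0, detect):
    """Mean signal Tr(D U rho U^+) over all cyclic permutations of elems."""
    n = len(elems)
    total = 0.0j
    for k in range(n):
        U = np.eye(2, dtype=complex)
        for Ui in elems[k:] + elems[:k]:   # cyclic shift of the sequence by k
            U = Ui @ U
        total += np.trace(detect @ U @ rho0 @ U.conj().T)
    return total / n

steps = [rot(sx, 0.4), rot(sz, 1.0), rot(sx, 0.9)]   # one period of the drive
avg = carousel_average(steps, rho0=sz, detect=sz)
shifted = carousel_average(steps[1:] + steps[:1], rho0=sz, detect=sz)
print(np.isclose(avg, shifted))  # True: the average depends only on the cycle
```

The brute-force cost grows with the number of permutations; the paper's symmetry theorems are precisely about avoiding this enumeration.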
Efficient collision detection for curved solid objects
 Proceedings of the Seventh ACM Symposium on Solid Modeling and Applications
, 2002
Abstract

Cited by 3 (1 self)
The design-for-assembly technique requires realistic physically based simulation algorithms and, in particular, efficient geometric collision detection routines. Instead of approximating mechanical parts by large polygonal models, we work directly with the much smaller original CAD data, thus avoiding precision and tolerance problems. We present a generic algorithm that can decide whether or not two solids intersect. We identify classes of objects for which this algorithm can be efficiently specialized, and describe in detail how this specialization is done. These classes are: objects bounded by quadric surface patches and conic arcs; objects bounded by natural quadric patches, torus patches, line segments, and circular arcs; and objects bounded by quadric surface patches, segments of quadric intersection curves, and segments of cubic spline curves. We show that all necessary geometric predicates can be evaluated by finding the roots of univariate polynomials of degree at most 4 for the first two classes, and at most 8 for the third class. In order to speed up the intersection tests, we use bounding volume hierarchies. With the help of numerical optimization techniques, we compute smallest enclosing spheres and bounding boxes for a given set of surface patches with the properties mentioned above.
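The reduction of geometric predicates to univariate root-finding can be illustrated with the simplest case: substituting a parametric segment into a homogeneous quadric x^T Q x = 0 yields a degree-2 polynomial in the segment parameter, and the predicate asks for a real root in [0, 1]. A sketch of that idea only, not the paper's algorithm:

```python
import numpy as np

def segment_hits_quadric(Q, a, b, eps=1e-12):
    """Does the segment from 3D point a to b meet the quadric x^T Q x = 0?

    Homogenize p(t) = (1-t)*a + t*b and expand f(t) = p^T Q p, a quadratic
    in t; the predicate is a real root of f inside [0, 1]."""
    ah, bh = np.append(a, 1.0), np.append(b, 1.0)
    qaa, qab, qbb = ah @ Q @ ah, ah @ Q @ bh, bh @ Q @ bh
    # f(t) = qaa*(1-t)^2 + 2*qab*t*(1-t) + qbb*t^2
    coeffs = [qaa - 2 * qab + qbb, 2 * (qab - qaa), qaa]
    for r in np.roots(coeffs):
        if abs(r.imag) < eps and -eps <= r.real <= 1 + eps:
            return True
    return False

# Unit sphere as a homogeneous quadric: x^2 + y^2 + z^2 - 1 = 0.
sphere = np.diag([1.0, 1.0, 1.0, -1.0])
print(segment_hits_quadric(sphere, np.array([-2.0, 0, 0]),
                           np.array([2.0, 0, 0])))   # True
print(segment_hits_quadric(sphere, np.array([2.0, 2, 0]),
                           np.array([3.0, 2, 0])))   # False
```

The paper's classes push the same pattern to degree 4 (conic arcs against quadrics) and degree 8 (cubic spline segments against quadrics).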
Some Methods for Training Mixtures of Experts
Abstract

Cited by 3 (0 self)
Recently, a modular neural network architecture known as a mixture of experts (ME) [8][9] has attracted considerable attention. MEs are mixture models that attempt to solve problems using a divide-and-conquer strategy; that is, they learn to decompose complex problems into simpler subproblems. In particular, the gating network of a ME learns to partition the input space (in a soft way, so ...
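The soft partition performed by the gating network can be sketched as a softmax over gate scores, with the mixture output a gate-weighted sum of expert outputs. The linear experts and dimensions below are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mixture_forward(x, gate_W, expert_Ws):
    """Forward pass of a mixture of experts with linear experts.

    gate_W: (d, m) gating weights; expert_Ws: list of m (d, k) expert weights.
    The softmax gate is a soft partition: weights are positive and sum to 1."""
    g = softmax(x @ gate_W)                       # (n, m) responsibilities
    outs = np.stack([x @ W for W in expert_Ws])   # (m, n, k) expert outputs
    return np.einsum("nm,mnk->nk", g, outs), g

# Illustrative sizes: 5 inputs in R^3, 4 experts, scalar outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
gate_W = rng.normal(size=(3, 4))
expert_Ws = [rng.normal(size=(3, 1)) for _ in range(4)]
y, g = mixture_forward(x, gate_W, expert_Ws)
print(y.shape, np.allclose(g.sum(axis=1), 1.0))  # (5, 1) True
```

Training methods of the kind the paper surveys (gradient ascent, EM variants) differ in how they fit gate_W and expert_Ws, but all share this forward pass.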
On the Study of Watermarking Application in WWW: Modeling, Performance Analysis, and Applications of Digital Image Watermarking Systems
, 1999
Abstract

Cited by 2 (1 self)
As the Internet becomes more and more popular, people are increasingly concerned about copyright protection for digital data such as images and audio. Digital watermarking techniques can hide data in images or audio to indicate the owner or recipient of the data, and can therefore protect the copyright. Motivated by copyright protection on the Internet, we propose an Internet Image Library (IIL) that uses watermarks to protect copyright. With this watermarking application (IIL) in mind, we analyze and propose new watermark systems to meet the needs of this application. A lot of...
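The abstract does not specify the proposed watermark systems. The simplest concrete illustration of "hiding data in images to indicate the owner or recipient" is least-significant-bit embedding, sketched below; it is not the authors' scheme and is fragile against compression:

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Hide a bit string in the least-significant bits of the first len(bits)
    pixels of an 8-bit grayscale image. Illustrative only (not robust)."""
    out = pixels.copy().ravel()
    out[:len(bits)] = (out[:len(bits)] & 0xFE) | np.asarray(bits, dtype=out.dtype)
    return out.reshape(pixels.shape)

def extract_lsb(pixels, n):
    """Read back the first n embedded bits."""
    return (pixels.ravel()[:n] & 1).tolist()

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy "image"
mark = [1, 0, 1, 1, 0, 0, 1, 0]                     # owner/recipient ID bits
stamped = embed_lsb(img, mark)
print(extract_lsb(stamped, 8) == mark)               # True
print(int(np.abs(stamped.astype(int) - img).max()))  # 1: distortion <= 1 level
```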
Absence of 1/f spectra in Dow Jones daily average
 International Journal of Bifurcation and Chaos
, 1991
Abstract

Cited by 2 (1 self)
The power spectrum of the daily Dow Jones industrial average is calculated. It is shown that the spectrum is P(f) ∝ 1/f^1.8, very close to that of a random walk series (1/f² noise). Contrary to some previous belief, the Dow Jones index, as well as other stock price time series, is not 1/f noise. The distribution of the daily change of the Dow Jones industrial average is also calculated. Several fittings of the distribution are carried out (for both the price change and the logarithm of the price change). It is observed that the occurrence of the big loss on Black Monday (a negative change of 508) does not fit the distribution of the smaller price fluctuations (e.g., those smaller than 100). This lack of scaling in the frequency of occurrence from large stock price losses to small price fluctuations can be compared with the much better scaling law in the frequency of occurrence of global earthquakes.
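The benchmark the abstract compares against can be reproduced on surrogate data: the periodogram of a random walk has a log-log slope near -2 (1/f² noise). A sketch on synthetic data, not the Dow Jones series:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2 ** 14
walk = np.cumsum(rng.normal(size=n))      # random-walk surrogate series

# Periodogram: P(f) = |FFT|^2 at positive frequencies.
spec = np.abs(np.fft.rfft(walk)) ** 2
freqs = np.fft.rfftfreq(n)

# Fit log P = alpha*log f + c over mid frequencies; expect alpha near -2.
band = (freqs > 1e-3) & (freqs < 1e-1)
alpha, _ = np.polyfit(np.log(freqs[band]), np.log(spec[band]), 1)
print(-2.5 < alpha < -1.5)  # True: a random walk shows a ~1/f^2 spectrum
```

The paper's measured exponent of 1.8 for the Dow Jones sits close to, but below, this random-walk value of 2.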
Calibration of Digital Amateur Cameras
, 2001
Abstract

Cited by 1 (0 self)
We introduce a novel outlook on the self-calibration task by considering images taken by a camera in motion, allowing for zooming and focusing. Notwithstanding the complex relationship between the lens control settings and the intrinsic camera parameters, a prior offline calibration allows one to neglect the focus setting and to fix the principal point and aspect ratio across distinct views. Thus, the calibration matrix depends only on the zoom position. Given a fully calibrated reference view, one has only one parameter to estimate for any other view of the same scene in order to calibrate it and to be able to perform metric reconstructions. We provide a closed-form solution and validate the reliability of the algorithm with experiments on real images. An important advantage of our method is that the number of critical camera configurations associated with it is reduced to one. Moreover, we propose a method for computing the epipolar geometry of two views taken from different positions and with different (spatial) resolutions; the idea is to take an appropriate third view that is "easy" to match with the other two.