Results 1–10 of 42
Robot Pose Estimation in Unknown Environments by Matching 2D Range Scans
, 1994
Abstract
Cited by 228 (8 self)
A mobile robot exploring an unknown environment has no absolute frame of reference for its position, other than features it detects through its sensors. Using distinguishable landmarks is one possible approach, but it requires solving the object recognition problem. In particular, when the robot uses two-dimensional laser range scans for localization, it is difficult to accurately detect and localize landmarks in the environment (such as corners and occlusions) from the range scans. In this paper, we develop two new iterative algorithms to register a range scan to a previous scan so as to compute relative robot positions in an unknown environment, that avoid the above problems. The first algorithm is based on matching data points with tangent directions in two scans and minimizing a distance function in order to solve for the displacement between the scans. The second algorithm establishes correspondences between points in the two scans and then solves the point-to-point least-squares probl...
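The point-to-point least-squares step mentioned in this abstract has a well-known closed-form solution for a rigid 2D motion (the Kabsch/Procrustes construction). The sketch below is a generic illustration that assumes correspondences are already established; it is not the paper's algorithm:

```python
import numpy as np

def rigid_align_2d(p, q):
    """Least-squares rigid motion (R, t) mapping point set p onto q.

    Generic closed-form (Kabsch/Procrustes) solution of the
    point-to-point step, assuming correspondences are known;
    p and q are (N, 2) arrays of corresponding points.
    """
    cp, cq = p.mean(axis=0), q.mean(axis=0)
    H = (p - cp).T @ (q - cq)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# toy check: displace a "scan" by a known motion and recover it
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -0.2])
p = np.random.default_rng(0).random((50, 2))
q = p @ R_true.T + t_true
R, t = rigid_align_2d(p, q)
```

In an iterative scheme such as the one described, this solve would be repeated after re-establishing correspondences (e.g., nearest neighbors) at each step.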
Krylov Projection Methods For Model Reduction
, 1997
Abstract
Cited by 119 (3 self)
This dissertation focuses on efficiently forming reduced-order models for large, linear dynamic systems. Projections onto unions of Krylov subspaces lead to a class of reduced-order models known as rational interpolants. The cornerstone of this dissertation is a collection of theory relating Krylov projection to rational interpolation. Based on this theoretical framework, three algorithms for model reduction are proposed. The first algorithm, dual rational Arnoldi, is a numerically reliable approach involving orthogonal projection matrices. The second, rational Lanczos, is an efficient generalization of existing Lanczos-based methods. The third, rational power Krylov, avoids orthogonalization and is suited for parallel or approximate computations. The performance of the three algorithms is compared via a combination of theory and examples. Independent of the precise algorithm, a host of supporting tools are also developed to form a complete model-reduction package. Techniques for choosing the matching frequencies, estimating the modeling error, ensuring the model's stability, treating multiple-input multiple-output systems, implementing parallelism, and avoiding the need for exact factors of large matrix pencils are all examined to various degrees.
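A plain Arnoldi iteration illustrates the Krylov projection underlying these methods; the rational variants described above use shifted and inverted operators to match chosen frequencies. This is a generic sketch, not code from the dissertation:

```python
import numpy as np

def arnoldi(A, b, m):
    """Orthonormal basis V for the Krylov subspace span{b, Ab, ..., A^{m-1} b}.

    Plain (not rational) Arnoldi with modified Gram-Schmidt; returns V
    and the projected matrix H = V^T A V used for the reduced model.
    """
    n = len(b)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # orthogonalize against basis
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# reduced-order projection of a random state matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)
V, H = arnoldi(A, b, 8)
```

The 8x8 matrix H is the orthogonal projection of the 50x50 matrix A onto the Krylov subspace, which is the basic reduction step that the rational algorithms refine.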
Shape Reconstruction of 3D Bilaterally Symmetric Surfaces
, 2000
Abstract
Cited by 32 (0 self)
The paper presents a new approach for shape recovery based on integrating geometric and photometric information. We consider 3D bilaterally symmetric objects, that is, objects which are symmetric with respect to a plane (e.g., faces), and their reconstruction from a single image. Neither the viewpoint nor the illumination is necessarily frontal. Furthermore, no correspondence between symmetric points is required. The basic idea is that an image taken from a general, non-frontal viewpoint, under non-frontal illumination, can be regarded as a pair of images. Each image of the pair is one half of the object, taken from a different viewing position and with a different lighting direction. Thus, one-image variants of geometric stereo and of photometric stereo can be used. Unlike the separate invocation of these approaches, which requires point correspondence between the two images, we show that integrating the photometric and geometric information suffices to yield a dense correspondence bet...
Structured Sampling And Reconstruction Of Illumination For Image Synthesis
, 1994
Abstract
Cited by 16 (3 self)
An important goal of image synthesis is to achieve accurate, efficient and consistent sampling and reconstruction of illumination varying over surfaces in an environment. A new approach is introduced for the treatment of diffuse polyhedral environments lit by area light sources, based on the identification of important properties of illumination structure. The properties of unimodality and curvature of illumination in unoccluded environments are used to develop a high quality sampling algorithm which includes error bounds. An efficient algorithm is presented to partition the scene polygons into a mesh of cells, in which the visible part of the source has the same topology. A fast incremental algorithm is presented to calculate the backprojection, which is an abstract representation of this topology. The behaviour of illumination in the penumbral regions is carefully studied, and is shown to be monotonic and well behaved within most of the mesh cells. An algorithm to reduce the mesh siz...
Location errors in wireless embedded sensor networks: Sources, models, and effects on applications
 ACM SIGMOBILE Mobile Computing and Communications Review
, 2002
Abstract
Cited by 13 (1 self)
Wireless sensor networks monitor the physical world by taking measurements of physical phenomena. Those measurements, and consequently the results computed from them, may be significantly inaccurate. Therefore, in order to properly design and use wireless sensor networks, one must develop methods that take into consideration error sources, error propagation through optimization software, and ultimately the impact of errors on applications. In this paper, we focus on errors induced by location discovery. We have selected location discovery as the object of our case study since essentially all sensor network computation and communication tasks depend on geographical node location data. First, we model the error in the input parameters of the location discovery process. Then, we study the impact of errors on three selected applications: exposure, best- and worst-case coverage, and shortest-path routing. Furthermore, we examine how the choice of a specific objective function optimized during the location discovery process affects the errors in the results of different applications.
Optimal Spline Fitting to Planar Shape
, 1993
Abstract
Cited by 9 (0 self)
Parametric spline models are used extensively in representing and coding planar curves. For many applications, it is desirable to be able to derive the spline representation from a set of sample points of the planar shape. The problem we address in this paper is to find a cubic spline model to optimally approximate a given planar shape. We solve this problem by treating the control points which define the spline as variables and apply an optimization technique to minimize an error norm so as to find the best locations of the control points. The error norm, which is defined as the total squared distance of the curve sample points from the spline model, reflects the discrepancy between the spline and the original curve. The objective function for the optimization process is the error norm plus a term which ensures convergence to the correct solution. The initial locations of the control points are selected heuristically. We also describe an extension of this method, which allow...
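With the curve parameterization held fixed (chord length below), choosing control points to minimize the total squared distance to the sample points reduces to a linear least-squares problem. This is a simplified sketch of the idea for a single cubic Bezier segment; the paper's error norm, convergence term, and optimization details differ:

```python
import numpy as np

def fit_cubic_bezier(samples):
    """Fit one cubic Bezier segment to planar samples by linear least squares.

    Control points are the free variables; with chord-length parameters
    fixed, minimizing total squared distance to the samples is linear.
    samples: (N, 2) array of planar points.
    """
    d = np.linalg.norm(np.diff(samples, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(d)]) / d.sum()   # chord-length params
    # Bernstein basis matrix for the cubic Bezier
    B = np.stack([(1 - t) ** 3,
                  3 * t * (1 - t) ** 2,
                  3 * t ** 2 * (1 - t),
                  t ** 3], axis=1)
    ctrl, *_ = np.linalg.lstsq(B, samples, rcond=None)
    return ctrl, B @ ctrl          # control points and fitted curve points

# samples on a straight segment are reproduced exactly by a cubic
u = np.linspace(0.0, 1.0, 25)
samples = np.stack([u, 2.0 * u], axis=1)
ctrl, fitted = fit_cubic_bezier(samples)
```

Re-optimizing the parameter values t alongside the control points, as a full formulation requires, makes the problem nonlinear; the linear solve above is the inner building block.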
Optimal Sensitivity Analysis of Linear Least Squares
, 2003
Abstract
Cited by 8 (1 self)
Results from many years of work on linear least squares problems are combined with a new approach to perturbation analysis to explain in a definitive way the sensitivity of these problems to perturbation. Simple expressions are found for the asymptotic size of optimal backward errors for least squares problems. It is shown that such formulas can be used to evaluate condition numbers. For full-rank problems, Frobenius-norm condition numbers are determined exactly, and spectral-norm condition numbers are determined within a factor of the square root of two. As a result, necessary and sufficient criteria for well conditioning are established. A source of ill conditioning is found that helps explain the failure of simple iterative refinement. Some textbook discussions of ill conditioning are found to be fallacious, and some error bounds in the literature are found to unnecessarily overestimate the error. Finally, several open questions are described.
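As a small numerical illustration of the sensitivity being analyzed (not the paper's formulas): the spectral-norm condition number sigma_max / sigma_min bounds how a relative perturbation of the right-hand side is amplified in the least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))       # overdetermined, full-rank system
b = rng.standard_normal(20)
x, *_ = np.linalg.lstsq(A, b, rcond=None)

kappa = np.linalg.cond(A)              # spectral norm: sigma_max / sigma_min

db = 1e-8 * rng.standard_normal(20)    # small perturbation of b
x2, *_ = np.linalg.lstsq(A, b + db, rcond=None)

# relative change in x per unit relative change in b; bounded by
# kappa * ||b|| / ||A x|| when only b is perturbed
amplification = (np.linalg.norm(x2 - x) / np.linalg.norm(x)) \
              / (np.linalg.norm(db) / np.linalg.norm(b))
```

The bound follows from x2 - x being the pseudoinverse applied to the perturbation: its norm is at most ||db|| / sigma_min, while ||x|| is at least ||A x|| / sigma_max.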
Navigation and Retro-Traverse on a Remotely Operated Vehicle
 IEEE Singapore International Conference on Intelligent Control and Instrumentation
, 1992
Abstract
Cited by 7 (4 self)
During the retro-traverse function, a computer-controlled vehicle automatically retraces a previously recorded path that is stored as a series of x-y points. Two navigation systems were tested: an onboard inertial navigation system and a radio grid system. Two steering algorithms were used. In the state-space method, the steering command is proportional to the lateral-position error and the heading error. In the pure pursuit method, the vehicle steers toward a goal point on the path a specified distance in front of the vehicle. On a smooth path these methods are identical. On a piecewise linear path, the pure pursuit method is significantly superior in performance. With this method, the vehicle follows paths accurately at speeds of up to 70 kph.
Introduction
The U.S. Army Laboratory Command is studying the control of unmanned land vehicles as part of the Defense Department's Robotics Testbed (RT) Program. In the RT scenario, humans remotely operate several Robotic Combat Vehicles (RCVs) f...
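The pure pursuit rule described above can be sketched on a simple kinematic model: the goal is the first recorded path point at least the lookahead distance away, and the steering command is the curvature of the arc joining the vehicle to that goal. This is a generic sketch, not the vehicle's actual controller:

```python
import math

def pure_pursuit_steer(pose, path, lookahead):
    """Commanded curvature (1/turn-radius) steering toward a goal point.

    pose is (x, y, heading); path is a list of recorded (x, y) points.
    The goal is the first path point at least `lookahead` away.
    """
    x, y, th = pose
    goal = next((p for p in path
                 if math.hypot(p[0] - x, p[1] - y) >= lookahead), path[-1])
    dx, dy = goal[0] - x, goal[1] - y
    lx = math.cos(th) * dx + math.sin(th) * dy     # goal in vehicle frame
    ly = -math.sin(th) * dx + math.cos(th) * dy
    return 2.0 * ly / (lx * lx + ly * ly)          # curvature of joining arc

# straight path dead ahead -> drive straight; goal directly abeam at
# distance 2 -> arc of radius 1, i.e. unit curvature
k_straight = pure_pursuit_steer((0.0, 0.0, 0.0), [(1.0, 0.0), (2.0, 0.0)], 1.5)
k_turn = pure_pursuit_steer((0.0, 0.0, 0.0), [(0.0, 2.0)], 1.5)
```

The lookahead distance trades responsiveness for smoothness, which is consistent with the abstract's observation that the two steering laws coincide on smooth paths but differ on piecewise linear ones.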
A Survey of Reverse Engineering and Program Comprehension
 In ODU CS 551 – Software Engineering Survey
, 1996
Abstract
Cited by 7 (0 self)
Reverse engineering has been a standard practice in the hardware community for some time. It has only been within the last ten years that reverse engineering, or "program comprehension," has grown into the current subdiscipline of software engineering. Traditional software engineering is primarily focused on the development and design of new software. However, most programmers work on software that other people have designed and developed. Up to 50% of a software maintainer's time can be spent determining the intent of source code. The growing demand to reevaluate and reimplement legacy software systems, brought on by the proliferation of client-server and World Wide Web technologies, has underscored the need for reverse engineering tools and techniques. This paper introduces the terminology of reverse engineering and gives some of the obstacles that make reverse engineering difficult. Although reverse engineering remains heavily dependent on the human component, a number of automated t...
A study of a nonlinear optimization problem using a distributed genetic algorithm
 In Proceedings of the International Conference on Parallel Processing
, 1996
Abstract
Cited by 6 (3 self)
Genetic algorithms have been used successfully as a global optimization method when the search space is very large. To characterize and analyze the performance of genetic algorithms on a cluster of workstations, a parallel version of GENESIS 5.0 was developed using PVM 3.3. This version, called VMGENESIS, was used to study a nonlinear least-squares problem. Performance results show that linear speedups can be achieved if the basic distributed genetic algorithm is combined with a simple dynamic load-balancing mechanism. Results also show that the quality of search changes significantly with the number of processors involved in the computation and with the frequency of communication.