Results 1–10 of 480
Semi-Automatic Generation of Transfer Functions for Direct Volume Rendering
In IEEE Symposium on Volume Visualization, 1998
Cited by 289 (7 self)
Abstract:
Although direct volume rendering is a powerful tool for visualizing complex structures within volume data, the size and complexity of the parameter space controlling the rendering process make generating an informative rendering challenging. In particular, the specification of the transfer function, the mapping from data values to renderable optical properties, is frequently a time-consuming and unintuitive task. Ideally, the data being visualized should itself suggest an appropriate transfer function that brings out the features of interest without obscuring them with elements of little importance. We demonstrate that this is possible for a large class of scalar volume data, namely that where the regions of interest are the boundaries between different materials. A transfer function which makes boundaries readily visible can be generated from the relationship between three quantities: the data value and its first and second directional derivatives along the gradient direction. ...
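The three per-voxel quantities the abstract names can be approximated directly from a sampled volume. A minimal NumPy sketch, using finite differences (function name and the gradient-projection approximation of the second derivative are illustrative choices, not from the paper):

```python
import numpy as np

def boundary_quantities(vol):
    """Per voxel: the data value f, the first derivative along the
    gradient direction (the gradient magnitude), and the second
    directional derivative along the gradient, all via np.gradient."""
    g0, g1, g2 = np.gradient(vol.astype(float))
    grad_mag = np.sqrt(g0**2 + g1**2 + g2**2)          # f' along gradient
    # Approximate f'' along the gradient as the directional derivative
    # of the gradient magnitude, projected onto the gradient direction.
    h0, h1, h2 = np.gradient(grad_mag)
    eps = 1e-12
    f2 = (g0 * h0 + g1 * h1 + g2 * h2) / (grad_mag + eps)
    return vol, grad_mag, f2
```

On a linear ramp the gradient magnitude is constant and the second derivative vanishes, as expected; at a material boundary the gradient magnitude peaks while the second derivative crosses zero, which is what makes these quantities useful for placing a transfer function.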
Parameter Estimation Techniques: A Tutorial with Application to Conic Fitting, 1995
Cited by 276 (8 self)
Abstract:
Almost all problems in computer vision are related in one form or another to the problem of estimating parameters from noisy data. In this tutorial, we present what are probably the most commonly used techniques for parameter estimation. These include linear least-squares (pseudo-inverse and eigen analysis); orthogonal least-squares; gradient-weighted least-squares; bias-corrected renormalization; Kalman filtering; and robust techniques (clustering, regression diagnostics, M-estimators, least median of squares). Particular attention has been devoted to discussions about the choice of appropriate minimization criteria and the robustness of the different techniques. Their application to conic fitting is described.
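The linear least-squares (eigen analysis) variant mentioned in the abstract has a compact form for conic fitting: stack each point's monomials into a design matrix and take the right singular vector with the smallest singular value as the parameter vector. A minimal sketch (function name is an illustrative choice):

```python
import numpy as np

def fit_conic(x, y):
    """Fit a x^2 + b xy + c y^2 + d x + e y + f = 0 by minimizing
    |D p| subject to |p| = 1, i.e. the smallest right singular
    vector of the design matrix D."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # p = (a, b, c, d, e, f), unit norm, sign arbitrary
```

Note this minimizes algebraic rather than geometric distance, which is exactly the bias the tutorial's gradient-weighted and renormalization methods are designed to correct.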
Robust Analysis of Feature Spaces: Color Image Segmentation, 1997
Cited by 223 (6 self)
Abstract:
A general technique for the recovery of significant image features is presented. The technique is based on the mean shift algorithm, a simple nonparametric procedure for estimating density gradients. Drawbacks of the current methods (including robust clustering) are avoided. Feature spaces of any nature can be processed, and as an example, color image segmentation is discussed. The segmentation is completely autonomous; only its class is chosen by the user. Thus, the same program can produce a high-quality edge image, or provide, by extracting all the significant colors, a preprocessor for content-based query systems. A 512 x 512 color image is analyzed in less than 10 seconds on a standard workstation. Gray-level images are handled as color images having only the lightness coordinate.
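The mean shift procedure the abstract builds on is short enough to sketch: each point is iteratively moved to the kernel-weighted mean of its neighborhood, climbing the estimated density gradient until it settles at a mode; points sharing a mode form a cluster. A minimal sketch with a Gaussian kernel (the paper uses a flat kernel; names and parameters here are illustrative):

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50, tol=1e-5):
    """Move every point to the Gaussian-weighted mean of all points
    until convergence; the returned array holds each point's mode."""
    points = np.asarray(points, dtype=float)
    modes = points.copy()
    for _ in range(iters):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            d2 = np.sum((points - m) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
        done = np.max(np.abs(shifted - modes)) < tol
        modes = shifted
        if done:
            break
    return modes
```

The bandwidth is the single user choice, which matches the abstract's claim that only the feature-space class is selected by the user.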
Gesture recognition: A survey
IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2007
Cited by 185 (0 self)
Abstract:
Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human–computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we provide a survey on gesture recognition with particular emphasis on hand gestures and facial expressions. Applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail. Existing challenges and future research possibilities are also highlighted.
Robust mapping and localization in indoor environments using sonar data
International Journal of Robotics Research, 2002
Cited by 174 (30 self)
Abstract:
In this paper we describe a new technique for the creation of feature-based stochastic maps using standard Polaroid sonar sensors. The fundamental contributions of our proposal are: (1) a perceptual grouping process that permits the robust identification and localization of environmental features, such as straight segments and corners, from the sparse and noisy sonar data; (2) a map joining technique that allows the system to build a sequence of independent limited-size stochastic maps and join them in a globally consistent way; (3) a robust mechanism to determine which features in a stochastic map correspond to the same environment feature, allowing the system to update the stochastic map accordingly and perform tasks such as revisiting and loop closing. We demonstrate the practicality of this approach by building a geometric map of a medium-sized real indoor environment, with several people moving around the robot. Maps built from laser data for the same experiment are provided for comparison.
Robust parameter estimation in computer vision
SIAM Review, 1999
Cited by 162 (10 self)
Abstract:
Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, these techniques should effectively ignore the outliers and measurements from other populations, treating them as outliers, when estimating the parameters of a single population. Two frequently used techniques are least-median of ...
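Least median of squares, one of the two techniques the abstract's truncated sentence begins to name, replaces the sum of squared residuals with their median, so up to half the data can be arbitrarily bad. A minimal random-sampling sketch for a line fit (names, the trial count, and the 2-point sampling scheme are illustrative choices):

```python
import numpy as np

def lmeds_line(x, y, trials=200, seed=0):
    """Fit y = a x + b by least median of squares: fit exact lines to
    random 2-point samples and keep the one whose median squared
    residual over all points is smallest."""
    rng = np.random.default_rng(seed)
    best, best_med = None, np.inf
    n = len(x)
    for _ in range(trials):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:
            continue  # vertical sample, skip
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best
```

Because the median ignores the largest residuals, a minority of gross outliers cannot pull the fit away from the inlier line, which is precisely the robustness property ordinary least squares lacks.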
Statistical Approaches to Feature-Based Object Recognition, 1997
Cited by 71 (2 self)
Abstract:
This paper examines statistical approaches to model-based object recognition. Evidence is presented indicating that, in some domains, normal (Gaussian) distributions are more accurate than uniform distributions for modeling feature fluctuations. This motivates the development of new maximum-likelihood and MAP recognition formulations which are based on normal feature models. These formulations lead to an expression for the posterior probability of the pose and correspondences given an image. Several avenues are explored for specifying a recognition hypothesis. In the first approach, correspondences are included as a part of the hypotheses. Search for solutions may be ordered as a combinatorial search in correspondence space, or as a search over pose space, where the same criterion can equivalently be viewed as a robust variant of chamfer matching. In the second approach, correspondences are not viewed as being a part of the hypotheses. This leads to a criterion that is a smooth funct...
A Tensor Framework for Multidimensional Signal Processing
Linköping University, Sweden, 1994
Cited by 66 (8 self)
Abstract:
About the cover: the figure on the cover shows a visualization of a symmetric tensor in three dimensions, G = λ1 ê1 ê1^T + λ2 ê2 ê2^T + λ3 ê3 ê3^T. The object in the figure is the sum of a spear, a plate and a sphere. The spear describes the principal direction of the tensor, λ1 ê1 ê1^T, where the length is proportional to the largest eigenvalue, λ1. The plate describes the plane spanned by the eigenvectors corresponding to the two largest eigenvalues, λ2 (ê1 ê1^T + ê2 ê2^T). The sphere, with a radius proportional to the smallest eigenvalue, shows how isotropic the tensor is, λ3 (ê1 ê1^T + ê2 ê2^T + ê3 ê3^T). The visualization is done using AVS [WWW94]. I am very grateful to Johan Wiklund for implementing the tensor viewer module used. This thesis deals with filtering of multidimensional signals. A large part of the thesis is devoted to a novel filtering method termed "Normalized convolution". The method performs local expansion of a signal in a chosen filter basis which ...
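The spectral decomposition the cover text describes can be computed in a few lines: eigendecompose the symmetric tensor, sort eigenvalues descending, and form the rank-one terms λi êi êi^T whose sum reconstructs G and whose eigenvalues size the spear, plate, and sphere glyphs. A minimal sketch (function name is an illustrative choice):

```python
import numpy as np

def spectral_parts(G):
    """Split a symmetric 3x3 tensor G into rank-one terms
    lam_i * e_i e_i^T with lam_1 >= lam_2 >= lam_3."""
    lam, E = np.linalg.eigh(G)           # eigh returns ascending order
    order = np.argsort(lam)[::-1]        # reorder to descending
    lam, E = lam[order], E[:, order]
    parts = [lam[i] * np.outer(E[:, i], E[:, i]) for i in range(3)]
    return lam, parts
```

The relative sizes of lam_1, lam_2, and lam_3 then indicate how line-like, plane-like, or isotropic the local signal structure is.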
Graph-theoretic scagnostics
In Proc. 2005 IEEE Symposium on Information Visualization (InfoVis), 2005
Cited by 54 (1 self)
Abstract:
We introduce Tukey and Tukey scagnostics and develop graph-theoretic methods for implementing their procedure on large datasets.