Results 1 - 10 of 61
Learning motion patterns of people for compliant robot motion, 2004
Abstract - Cited by 105 (3 self)
Whenever people move through their environments they do not move randomly. Instead, they usually follow specific trajectories or motion patterns corresponding to their intentions. Knowledge about such patterns enables a mobile robot to robustly keep track of persons in its environment and to improve its behavior. In this paper we propose a technique for learning collections of trajectories that characterize typical motion patterns of persons. Data recorded with laser-range finders are clustered using the expectation maximization algorithm. Based on the result of the clustering process, we derive a hidden Markov model that is applied to estimate the current and future positions of persons based on sensory input. We also describe how to incorporate the probabilistic belief about the potential trajectories of persons into the path planning process of a mobile robot. We present several experiments carried out in different environments with a mobile robot equipped with a laser-range scanner and a camera system. The results demonstrate that our approach can reliably learn motion patterns of persons, can robustly estimate and predict positions of persons, and can be used to improve the navigation behavior of a mobile robot.
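The abstract's pipeline (cluster trajectories, then predict with a hidden Markov model) can be illustrated with a minimal forward-prediction step. This is a hypothetical sketch, not the paper's learned model: the states, transition matrix, and grid discretization are invented for illustration.

```python
# Toy HMM forward prediction over discrete positions (e.g. grid cells).
# The transition matrix T would, in the paper's setting, be derived
# from EM-clustered trajectories; here it is an assumed example.

def predict(belief, transition, steps=1):
    """Propagate a belief vector over states through the transition matrix."""
    n = len(belief)
    for _ in range(steps):
        belief = [sum(belief[i] * transition[i][j] for i in range(n))
                  for j in range(n)]
    return belief

# Three cells along a corridor; a person in cell 0 tends to head to cell 2.
T = [[0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8],
     [0.0, 0.0, 1.0]]
b = predict([1.0, 0.0, 0.0], T, steps=2)   # belief two steps ahead
```

A planner can then penalize robot paths that cross cells where this predicted belief mass is high.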
Segmenting foreground objects from a dynamic textured background via a robust Kalman filter - IEEE Proceedings of the International Conference on Computer Vision, 2003
Abstract - Cited by 100 (0 self)
The algorithm presented in this paper aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the non-stationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an Autoregressive Moving Average Model (ARMA). A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results.
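The core "robust" idea, estimating the background state while rejecting observations that deviate too strongly, can be sketched for a single pixel. This is a simplified scalar stand-in, not the paper's ARMA-based model: the noise values and gating threshold are assumed for illustration.

```python
# Sketch of a robust Kalman update for one background pixel: if the
# innovation (residual) is implausibly large, the measurement is
# flagged as foreground and does not corrupt the background estimate.
# R, Q, and thresh are illustrative assumptions.

def robust_update(x, P, z, R=1.0, Q=0.01, thresh=3.0):
    P = P + Q                                  # predict (random-walk background)
    residual = z - x
    if abs(residual) > thresh * (P + R) ** 0.5:
        return x, P, True                      # outlier -> foreground pixel
    K = P / (P + R)                            # Kalman gain
    return x + K * residual, (1 - K) * P, False

x, P = 10.0, 1.0
x, P, fg = robust_update(x, P, 10.5)    # small residual: background update
_, _, fg2 = robust_update(x, P, 40.0)   # large jump: flagged as foreground
```

In the full method this gating happens per pixel against the texture's predicted appearance, so waving trees update the model while a passing person is segmented out.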
3D hand pose reconstruction using specialized mappings - In Proc. International Conf. on Computer Vision (ICCV), Vol. 1, 2001
Abstract - Cited by 90 (10 self)
A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the specialized mappings architecture (SMA), to map image features to likely 3D hand poses. The SMA’s fundamental components are a set of specialized forward mapping functions, and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motions of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. The viewpoint is encoded by two real values, therefore 24 real values represent a hand pose. We test our system both on synthetic sequences and on sequences taken with a color camera. The system automatically detects and tracks both hands of the user, calculates the appropriate features, and estimates the 3D hand joint angles and viewpoint from those features. Results are encouraging given the complexity of the task.
Appearance Models for Occlusion Handling - 2nd IEEE Workshop on Performance Evaluation of Tracking and Surveillance, 2001
Abstract - Cited by 78 (11 self)
Objects in the world exhibit complex interactions. When captured in a video sequence, some interactions manifest themselves as occlusions. A visual tracking system must be able to track objects which are partially or even fully occluded. In this paper we present a method of tracking objects through occlusions using appearance models. These models are used to localize objects during partial occlusions, detect complete occlusions and resolve depth ordering of objects during occlusions. This paper presents a tracking system which successfully deals with complex real world interactions, as demonstrated on the PETS 2001 dataset.
Continuous tracking within and across camera streams - IEEE Int’l Conf. on Computer Vision and Pattern Recognition, 2003
Abstract - Cited by 77 (14 self)
This paper presents a new approach for continuous tracking of moving objects observed by multiple, heterogeneous cameras. Our approach simultaneously processes video streams from stationary and Pan-Tilt-Zoom cameras. The detection of moving objects from moving camera streams is performed by defining an adaptive background model that takes into account the camera motion approximated by an affine transformation. We address the tracking problem by separately modeling motion and appearance of the moving objects using two probabilistic models. For the appearance model, multiple color distribution components are proposed to ensure a more detailed description of the object being tracked. The motion model is obtained using a Kalman Filter (KF) process, which predicts the position of the moving object. The tracking is performed by the maximization of a joint probability model. The novelty of our approach consists in modeling the multiple trajectories observed by the moving and stationary cameras in the same KF framework. This allows deriving a more accurate motion measurement for objects simultaneously viewed by the two cameras, and automatic handling of occlusions, detection errors, and camera handoff. We demonstrate the performance of the system on several video surveillance sequences.
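The Kalman-filter motion model the abstract describes can be sketched with the textbook constant-velocity predict/update cycle. This is a generic illustration under assumed noise parameters, not the paper's exact formulation (which fuses measurements from multiple cameras in one filter).

```python
# Constant-velocity Kalman filter for one image coordinate of a
# tracked object; state is (position, velocity). F, Q, H, r are the
# standard textbook choices, assumed here for illustration.
import numpy as np

def kf_predict(x, P, dt=1.0, q=0.1):
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity dynamics
    Q = q * np.eye(2)                         # process noise (assumed)
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, r=1.0):
    H = np.array([[1.0, 0.0]])                # only position is observed
    S = H @ P @ H.T + r                       # innovation covariance
    K = P @ H.T / S                           # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([0.0, 1.0]), np.eye(2)        # at 0, moving at 1 unit/frame
x, P = kf_predict(x, P)                       # predicted position: 1.0
x, P = kf_update(x, P, 1.2)                   # correct with a detection at 1.2
```

With two cameras viewing the same object, each camera's detection would simply be a second `kf_update` against the shared state, which is the gist of the joint-framework claim.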
3D Trajectory Recovery for Tracking Multiple Objects and Trajectory Guided Recognition of Actions - In CVPR, 1999
Abstract - Cited by 73 (4 self)
A mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion histo...
Real-time multiple objects tracking with occlusion handling in dynamic scenes - In CVPR, 2005
Abstract - Cited by 48 (7 self)
This work presents a real-time system for multiple object tracking in dynamic scenes. A unique characteristic of the system is its ability to cope with long-duration and complete occlusion without prior knowledge about the shape or motion of objects. The system produces good segmentation and tracking results at a frame rate of 15-20 fps for images of size 320x240, as demonstrated by extensive experiments performed using video sequences under different indoor and outdoor conditions with long-duration and complete occlusions against changing backgrounds.
People tracking in surveillance applications - In Proceedings of the 2nd IEEE International Workshop on PETS, 2001
Abstract - Cited by 47 (1 self)
This paper presents a real-time algorithm that allows robust tracking of multiple objects in complex environments. Foreground pixels are detected using luminance contrast and grouped into blobs. Blobs from two consecutive frames are matched, creating the matching matrices. Tracking is performed using direct and inverse matching matrices. This method successfully handles blob merging and splitting. Results from indoor and outdoor scenarios are shown.
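The matching-matrix idea can be made concrete with a small sketch: rows index blobs in frame t, columns index blobs in frame t+1, and a cell is 1 where the blobs correspond (here, crudely, where bounding boxes overlap). A column with several 1s then signals a merge; a row with several 1s, a split. The overlap criterion and boxes are illustrative assumptions, not the paper's exact matching rule.

```python
# Toy direct matching matrix between blobs of consecutive frames.
# Boxes are (x0, y0, x1, y1); overlap stands in for whatever
# correspondence measure the real tracker uses.

def overlaps(a, b):
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def matching_matrix(prev, curr):
    return [[int(overlaps(p, c)) for c in curr] for p in prev]

prev = [(0, 0, 4, 4), (6, 0, 10, 4)]   # two separate blobs in frame t
curr = [(2, 0, 8, 4)]                   # a single blob in frame t+1
M = matching_matrix(prev, curr)
merged = sum(row[0] for row in M) > 1   # two rows hit one column: merge
```

The inverse matrix is simply the transpose view (columns matched back to rows), which is what lets the tracker distinguish merges from splits.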
Tracking Multiple People with a Multi-Camera System - IEEE Workshop on Multi-Object Tracking, 2001
Abstract - Cited by 45 (0 self)
We present a multi-camera system based on Bayesian modality fusion to track multiple people in an indoor environment. Bayesian networks are used to combine multiple modalities for matching subjects between consecutive image frames and between multiple camera views. Unlike other occlusion reasoning methods, we use multiple cameras in order to obtain continuous visual information of people in either or both cameras so that they can be tracked through interactions. Results demonstrate that the system can maintain people’s identities by using multiple cameras cooperatively.
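The flavor of combining modalities for subject matching can be shown with a deliberately naive stand-in: multiply per-modality match likelihoods for each candidate and normalize. The real system uses Bayesian networks with learned structure; the scores and the independence assumption here are purely illustrative.

```python
# Naive fusion of per-modality match likelihoods (e.g. color, position,
# shape) for candidate correspondences. A simplification of the
# paper's Bayesian-network fusion; all numbers are assumed.

def fuse(likelihoods):
    """Combine each candidate's modality likelihoods, normalize over candidates."""
    scores = []
    for per_modality in likelihoods:    # one list of likelihoods per candidate
        s = 1.0
        for p in per_modality:
            s *= p
        scores.append(s)
    total = sum(scores)
    return [s / total for s in scores]

# Candidate A matches well in color and position; B matches only in color.
post = fuse([[0.9, 0.8], [0.9, 0.2]])
best = post.index(max(post))            # candidate A wins
```

The multi-camera benefit is that when one view loses a modality (say, position during an occlusion), the fused score can still be driven by the other camera's evidence.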
TJ, Partial Observation vs Blind Tracking through Occlusion - British Machine Vision Conference (BMVC 2002), 2002
Abstract - Cited by 39 (8 self)