Robust Real-Time Periodic Motion Detection, Analysis, and Applications (2000)

by R. Cutler, L. S. Davis
Venue: IEEE Trans. on PAMI

Results 11 - 20 of 301 citing documents

Tracking multiple humans in complex situations

by Tao Zhao, Ram Nevatia - IEEE Transactions on Pattern Analysis and Machine Intelligence , 2004
"... Abstract—Tracking multiple humans in complex situations is challenging. The difficulties are tackled with appropriate knowledge in the form of various models in our approach. Human motion is decomposed into its global motion and limb motion. In the first part, we show how multiple human objects are ..."
Abstract - Cited by 134 (3 self) - Add to MetaCart
Tracking multiple humans in complex situations is challenging. The difficulties are tackled with appropriate knowledge in the form of various models in our approach. Human motion is decomposed into its global motion and limb motion. In the first part, we show how multiple human objects are segmented and their global motions are tracked in 3D using ellipsoid human shape models. Experiments show that it successfully applies to the cases where a small number of people move together, have occlusion, and cast shadow or reflection. In the second part, we estimate the modes (e.g., walking, running, standing) of the locomotion and 3D body postures by making inference in a prior locomotion model. Camera model and ground plane assumptions provide geometric constraints in both parts. Robust results are shown on some difficult sequences. Index Terms—Multiple-human segmentation, multiple-human tracking, visual surveillance, human shape model, human locomotion model.

Citation Context

... viewpoints for which the system was trained. For motion-based human detection, motion periodicity is an important feature since human locomotion is periodic; an overview of these approaches is given in [8]. Some of the techniques are view dependent, and usually require multiple cycles of observation. It should be noted that the motion of human shadow and reflection is also periodic. In Song et al. [39] ...

Probabilistic Methods for Finding People

by S. Ioffe, D. A. Forsyth - INTERNATIONAL JOURNAL OF COMPUTER VISION , 2001
"... Finding people in pictures presents a particularly difficult object recognition problem. We show how to find people by finding candidate body segments, and then constructing assemblies of segments that are consistent with the constraints on the appearance of a person that result from kinematic prope ..."
Abstract - Cited by 126 (2 self) - Add to MetaCart
Finding people in pictures presents a particularly difficult object recognition problem. We show how to find people by finding candidate body segments, and then constructing assemblies of segments that are consistent with the constraints on the appearance of a person that result from kinematic properties. Since a reasonable model of a person requires at least nine segments, it is not possible to inspect every group, due to the huge combinatorial complexity. We propose two

Action recognition by learning mid-level motion features

by Alireza Fathi, Greg Mori - In CVPR , 2008
"... This paper presents a method for human action recognition based on patterns of motion. Previous approaches to action recognition use either local features describing small patches or large-scale features describing the entire human figure. We develop a method constructing mid-level motion features w ..."
Abstract - Cited by 125 (7 self) - Add to MetaCart
This paper presents a method for human action recognition based on patterns of motion. Previous approaches to action recognition use either local features describing small patches or large-scale features describing the entire human figure. We develop a method constructing mid-level motion features which are built from low-level optical flow information. These features are focused on local regions of the image sequence and are created using a variant of AdaBoost. These features are tuned to discriminate between different classes of action, and are efficient to compute at run-time. A battery of classifiers based on these mid-level features is created and used to classify input sequences. State-of-the-art results are presented on a variety of standard datasets.
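The pipeline sketched in this abstract (low-level optical flow pooled into mid-level region features, with a boosting-style learner selecting the discriminative ones) can be illustrated with a rough Python sketch. This is a minimal illustration, not the authors' implementation: the OpenCV Farneback flow, scikit-learn AdaBoost, grid pooling, and all parameter values are assumptions made for the example.

# Sketch: pool dense optical flow into per-region features, then let
# AdaBoost with decision stumps pick the discriminative ones.
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def flow_features(frames, grid=(4, 4)):
    """frames: list of grayscale uint8 images of a tracked figure.
    Returns one feature vector: mean flow magnitude per grid cell,
    averaged over the sequence."""
    feats = []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)
        h, w = mag.shape
        gh, gw = h // grid[0], w // grid[1]
        cells = [mag[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean()
                 for i in range(grid[0]) for j in range(grid[1])]
        feats.append(cells)
    return np.mean(feats, axis=0)

# X: one feature vector per clip, y: action labels (hypothetical data).
# clf = AdaBoostClassifier(n_estimators=100).fit(X, y)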

Citation Context

... which captures both motion and shape, represented as evolving silhouettes extracted using background subtraction. These templates are described using their global Hu moment properties. Cutler and Davis [4] and Liu et al. [14] classify periodic motions. Efros et al. [8] recognize the actions of small scale figures using features derived from blurred optical flow estimates. Another group of methods uses ...

Pedestrian Detection for Driving Assistance Systems: Single-frame Classification and System Level Performance

by Amnon Shashua, Yoram Gdalyahu, Gaby Hayun - IN PROCEEDINGS OF IEEE INTELLIGENT VEHICLES SYMPOSIUM , 2004
"... We describe the functional and architectural breakdown of a monocular pedestrian detection system. We describe in detail our approach for single-frame classification based on a novel scheme of breaking down the class variability by repeatedly training a set of relatively simple classifiers on cluste ..."
Abstract - Cited by 123 (2 self) - Add to MetaCart
We describe the functional and architectural breakdown of a monocular pedestrian detection system. We describe in detail our approach for single-frame classification based on a novel scheme of breaking down the class variability by repeatedly training a set of relatively simple classifiers on clusters of the training set. Single-frame classification performance results and system level performance figures for daytime conditions are presented, with a discussion of the remaining gap toward a production system for daytime, normal-weather conditions.
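The "clusters of the training set" idea reads as: partition the pedestrian examples into appearance clusters, then train one relatively simple classifier per cluster against the negatives. A hedged sketch follows; k-means and a linear SVM are stand-ins chosen for the example, not the clustering or classifiers used in the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def train_clustered_detectors(X_pos, X_neg, n_clusters=8):
    """X_pos: pedestrian windows, X_neg: background windows (feature vectors).
    Train one simple linear detector per appearance cluster of the positives."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_pos)
    detectors = []
    for c in range(n_clusters):
        pos_c = X_pos[clusters.labels_ == c]
        X = np.vstack([pos_c, X_neg])
        y = np.concatenate([np.ones(len(pos_c)), np.zeros(len(X_neg))])
        detectors.append(LinearSVC(C=1.0).fit(X, y))
    return detectors

def is_pedestrian(detectors, window):
    # a window is accepted if its best cluster-specific score is positive
    scores = [d.decision_function(window.reshape(1, -1))[0] for d in detectors]
    return max(scores) > 0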

Multi-cue pedestrian detection and tracking from a moving vehicle

by D. M. Gavrila, S. Munder , 2006
"... This paper presents a multi-cue vision system for the real-time detection and tracking of pedestrians from a moving vehicle. The detection component involves a cascade of modules, each utilizing complementary visual criteria to successively narrow down the image search space, balancing robustness an ..."
Abstract - Cited by 122 (20 self) - Add to MetaCart
This paper presents a multi-cue vision system for the real-time detection and tracking of pedestrians from a moving vehicle. The detection component involves a cascade of modules, each utilizing complementary visual criteria to successively narrow down the image search space, balancing robustness and efficiency considerations. Novel is the tight integration of the consecutive modules: (sparse) stereo-based ROI generation, shape-based detection, texture-based classification and (dense) stereo-based verification. For example, shape-based detection activates a weighted combination of texture-based classifiers, each attuned to a particular body pose. Performance of
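The cascade structure described above, in which cheap tests prune many candidates before expensive tests run on the survivors, can be sketched generically. The stage functions below are toy stand-ins invented for the illustration; the real modules are sparse-stereo ROI generation, shape-based detection, texture classification, and dense-stereo verification.

import numpy as np

def make_cascade(stages):
    """Each stage takes (frame, roi) and returns True if the candidate survives.
    Cheap stages go first so they shrink the search space for costly ones."""
    def run(frame, candidates):
        for stage in stages:
            candidates = [roi for roi in candidates if stage(frame, roi)]
            if not candidates:
                break
        return candidates
    return run

# Toy stand-in stages; ROIs are (x, y, w, h) on a grayscale frame.
def tall_enough(frame, roi):
    x, y, w, h = roi
    return h > 1.5 * w                         # pedestrians are taller than wide

def textured(frame, roi):
    x, y, w, h = roi
    return frame[y:y+h, x:x+w].std() > 10.0    # reject flat, textureless patches

detect = make_cascade([tall_enough, textured])
# hits = detect(frame, candidate_rois)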

A Computational Model for Periodic Pattern Perception Based on Frieze and Wallpaper Groups

by Yanxi Liu, Robert T. Collins, Yanghai Tsin - IEEE Transactions on Pattern Analysis and Machine Intelligence , 2004
"... We present a computational model for periodic pattern perception based on the mathematical theory of crystallographic groups. In each N-dimensional Euclidean space, a finite number of symmetry groups can characterize the structures of an infinite variety of periodic patterns. In 2D space, there ar ..."
Abstract - Cited by 91 (18 self) - Add to MetaCart
We present a computational model for periodic pattern perception based on the mathematical theory of crystallographic groups. In each N-dimensional Euclidean space, a finite number of symmetry groups can characterize the structures of an infinite variety of periodic patterns. In 2D space, there are seven frieze groups describing monochrome patterns that repeat along one direction and 17 wallpaper groups for patterns that repeat along two linearly independent directions to tile the plane. We develop a set of computer algorithms that "understand" a given periodic pattern by automatically finding its underlying lattice, identifying its symmetry group, and extracting its representative motifs. We also extend this computational model for near-periodic patterns using geometric AIC. Applications of such a computational model include pattern indexing, texture synthesis, image compression, and gait analysis.

Citation Context

... Fig. 13. Human walking gait is more symmetrical than a dog's gait pattern (dog sequence courtesy of [3]). Fig. 14. (a) Spatio-temporal gait representations are generated by projecting the body silhouette along its columns and rows, then stacking these 1D projections over time to form frieze-like patterns ...
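The Fig. 14 caption describes a simple construction: project each binary silhouette along its columns and rows, then stack the 1D projections over time. A minimal sketch of that construction; the array layout and the use of a plain sum as the projection are assumptions for the example.

import numpy as np

def frieze_patterns(silhouettes):
    """silhouettes: (T, H, W) array of binary (0/1) body masks, one per frame.
    Returns two frieze-like images: column projections stacked over time (T x W)
    and row projections stacked over time (T x H)."""
    col_proj = silhouettes.sum(axis=1)   # sum over rows    -> (T, W)
    row_proj = silhouettes.sum(axis=2)   # sum over columns -> (T, H)
    return col_proj, row_proj

# For a periodic walking gait, both images show a repeating, frieze-like
# texture along the time axis.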

Gait Sequence Analysis using Frieze Patterns

by Yanxi Liu, Robert Collins, Yanghai Tsin - Proc. of European Conf. on Computer Vision , 2002
"... We analyze walking people using a gait sequence representation that bypasses the need for frame-to-frame tracking of body parts. ..."
Abstract - Cited by 71 (2 self) - Add to MetaCart
We analyze walking people using a gait sequence representation that bypasses the need for frame-to-frame tracking of body parts.

Citation Context

... the duration from the current time instant at which the same pattern reappears. Their representation is effective in studying varying speed cyclic motions and detecting irregularities. Cutler and Davis [5] also measure self-similarity over time to form an evolving 2D pattern. Time-frequency analysis of this pattern summarizes interesting properties of the motion, such as object class and number of objects ...
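The snippet summarizes the self-similarity idea of the cited paper: compare every pair of frames of a tracked object to build a 2D similarity pattern, then analyze that pattern in the frequency domain. A simplified sketch follows; mean absolute difference and a single lag-spectrum peak stand in for the actual correlation measure and time-frequency analysis.

import numpy as np

def self_similarity(frames):
    """frames: (T, H, W) array of aligned, size-normalized object images.
    S[i, j] is the mean absolute difference between frames i and j
    (small values = similar frames)."""
    T = frames.shape[0]
    flat = frames.reshape(T, -1).astype(np.float64)
    S = np.zeros((T, T))
    for i in range(T):
        S[i] = np.abs(flat - flat[i]).mean(axis=1)
    return S

def estimate_period(S, fps):
    """Average S along its diagonals to get similarity as a function of
    temporal lag, then pick the dominant frequency of that 1D signal."""
    T = S.shape[0]
    lag = np.array([np.diagonal(S, offset=k).mean() for k in range(1, T // 2)])
    lag = lag - lag.mean()
    spectrum = np.abs(np.fft.rfft(lag))
    freqs = np.fft.rfftfreq(lag.size, d=1.0 / fps)
    k = spectrum[1:].argmax() + 1        # skip the DC bin
    return 1.0 / freqs[k]                # estimated gait period in seconds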

View-Independent Action Recognition from Temporal Self-Similarities

by Imran N. Junejo, Emilie Dexter, Ivan Laptev, Patrick Pérez - SUBMITTED TO IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
"... This paper addresses recognition of human actions under view changes. We explore self-similarities of action sequences over time and observe the striking stability of such measures across views. Building upon this key observation, we develop an action descriptor that captures the structure of tempo ..."
Abstract - Cited by 69 (6 self) - Add to MetaCart
This paper addresses recognition of human actions under view changes. We explore self-similarities of action sequences over time and observe the striking stability of such measures across views. Building upon this key observation, we develop an action descriptor that captures the structure of temporal similarities and dissimilarities within an action sequence. Despite this temporal self-similarity descriptor not being strictly view-invariant, we provide intuition and experimental validation demonstrating its high stability under view changes. Self-similarity descriptors are also shown stable under performance variations within a class of actions, when individual speed fluctuations are ignored. If required, such fluctuations between two different instances of the same action class can be explicitly recovered with dynamic time warping, as will be demonstrated, to achieve cross-view action synchronization. More central to the present work, temporal ordering of local self-similarity descriptors can simply be ignored within a bag-of-features type of approach. Sufficient action discrimination is still retained this way to build a view-independent action recognition system. Interestingly, self-similarities computed from different image features possess similar properties and can be used in a complementary fashion. Our method is simple and requires neither structure recovery nor multi-view correspondence estimation. Instead, it relies on weak geometric properties and combines them with machine learning for efficient cross-view action recognition. The method is validated on three public datasets. It has similar or superior performance compared to related methods and it performs well even in extreme conditions such as when recognizing actions from top views while using side views only for training.
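The abstract mentions recovering speed fluctuations between two instances of an action with dynamic time warping. Below is a textbook DTW sketch on per-frame feature sequences, not the authors' code; the Euclidean frame distance is an assumption made for the example.

import numpy as np

def dtw_align(x, y):
    """x: (n, d) and y: (m, d) per-frame feature sequences.
    Returns the total alignment cost and the warping path as (i, j) pairs."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    # Backtrack from (n, m) to recover the frame-to-frame correspondence.
    i, j, path = n, m, []
    while i > 1 or j > 1:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    path.append((0, 0))
    return cost[n, m], path[::-1]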

Citation Context

...multiple views. Our method avoids these limitations. We compare our results with [33] on the common benchmark in Section 5.3. The methods most closely related to our approach are those of [36], [37], [38], [39]. For image and video matching, Shechtman and Irani [36] recently explored local self-similarity descriptors. The descriptors are constructed by correlating the image (or video) patch centered at ...

Motion-based Recognition of People in EigenGait Space

by Chiraz BenAbdelkader , Ross Cutler, Larry Davis , 2002
"... A motion-based, correspondence-free technique for human gait recognition in monocular video is presented. We contend that the planar dynamics of a walking person are encoded in a 2D plot consisting of the pairwise image similarities of the sequence of images of the person, and that gait recognition ..."
Abstract - Cited by 65 (1 self) - Add to MetaCart
A motion-based, correspondence-free technique for human gait recognition in monocular video is presented. We contend that the planar dynamics of a walking person are encoded in a 2D plot consisting of the pairwise image similarities of the sequence of images of the person, and that gait recognition can be achieved via standard pattern classification of these plots. We use background modelling to track the person for a number of frames and extract a sequence of segmented images of the person. The self-similarity plot is computed via correlation of each pair of images in this sequence. For recognition, the method applies Principal Component Analysis to reduce the dimensionality of the plots, then uses the k-nearest neighbor rule in this reduced space to classify an unknown person. This method is robust to tracking and segmentation errors, and to variation in clothing and background. It is also invariant to small changes in camera viewpoint and walking speed. The method is tested on outdoor sequences of 44 people, with 4 sequences of each taken on two different days, and achieves a classification rate of 77%. It is also tested on indoor sequences of 7 people walking on a treadmill, taken from 8 different viewpoints and on 7 different days. A classification rate of 78% is obtained for near-fronto-parallel views, and 65% on average over all views.
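The recognition stage described above, PCA on the self-similarity plots followed by a k-nearest-neighbor rule in the reduced space, maps directly onto standard tools. A minimal sketch, assuming each plot has already been size-normalized and flattened to a vector; the component and neighbor counts are illustrative, not the paper's settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def train_gait_recognizer(ssp_train, labels, n_components=20, k=3):
    """ssp_train: (n_sequences, T*T) flattened self-similarity plots,
    labels: subject identity for each training sequence."""
    model = make_pipeline(PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=k))
    return model.fit(ssp_train, labels)

# subject = train_gait_recognizer(ssp_train, labels).predict(ssp_test)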

Cross-View Action Recognition from Temporal Self-similarities

by Imran N. Junejo, Emilie Dexter, Ivan Laptev, Patrick Pérez (INRIA Rennes - Bretagne Atlantique)
"... Abstract. This paper concerns recognition of human actions under view changes. We explore self-similarities of action sequences over time and observe the striking stability of such measures across views. Building upon this key observation we develop an action descriptor that captures the structure o ..."
Abstract - Cited by 62 (5 self) - Add to MetaCart
This paper concerns recognition of human actions under view changes. We explore self-similarities of action sequences over time and observe the striking stability of such measures across views. Building upon this key observation, we develop an action descriptor that captures the structure of temporal similarities and dissimilarities within an action sequence. Despite this descriptor not being strictly view-invariant, we provide intuition and experimental validation demonstrating the high stability of self-similarities under view changes. Self-similarity descriptors are also shown stable under action variations within a class as well as discriminative for action recognition. Interestingly, self-similarities computed from different image features possess similar properties and can be used in a complementary fashion. Our method is simple and requires neither structure recovery nor multi-view correspondence estimation. Instead, it relies on weak geometric properties and combines them with machine learning for efficient cross-view action recognition. The method is validated on three public datasets; it has similar or superior performance compared to related methods and it performs well even in extreme conditions such as when recognizing actions from top views while using side views for training only.

Citation Context

...public datasets and demonstrate the practicality and the potential of the proposed method. Section 5 concludes the paper. 1.1 Related Work The methods most closely related to our approach are those of [10,11,12,13]. Recently, for image and video matching, [10] explored local self-similarity descriptors. The descriptors are constructed by correlating the image (or video) patch centered at a pixel to its surrounding ...
