HumanEva: Synchronized video and motion capture dataset for evaluation of articulated human motion (2006)

by Leonid Sigal, Alexandru O. Balan, Michael J. Black
Citations: 265 (15 self)

BibTeX

@TECHREPORT{Sigal06humaneva:synchronized,
    author = {Leonid Sigal and Alexandru O. Balan and Michael J. Black},
    title = {HumanEva: Synchronized video and motion capture dataset for evaluation of articulated human motion},
    institution = {Brown University},
    year = {2006}
}


Abstract

While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz, with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion, and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, provide a baseline for the evaluation and comparison of new methods, and help establish the current state of the art in human pose estimation and tracking.
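The standard error measure the abstract refers to is, in essence, an average Euclidean distance between estimated and ground-truth 3D joint (marker) positions. Below is a minimal sketch of such a measure; the function name, array layout, and millimetre units are illustrative assumptions, not the authors' released evaluation code.

```python
import numpy as np

def mean_joint_error(pred_joints, gt_joints):
    """Average Euclidean distance between predicted and ground-truth
    3D joint positions (a sketch of this style of error measure).

    pred_joints, gt_joints: (num_joints, 3) arrays in a common frame,
    e.g. millimetres in the motion-capture coordinate system.
    """
    assert pred_joints.shape == gt_joints.shape
    # Per-joint Euclidean distance, then the mean over all joints.
    return np.linalg.norm(pred_joints - gt_joints, axis=1).mean()
```

The baseline tracker uses a Bayesian filtering framework with Sequential Importance Resampling (SIR). A single SIR step, sketched generically below, resamples particles in proportion to their weights, propagates them through a motion prior, and re-weights them by an image likelihood. The `dynamics` and `likelihood` callables stand in for the motion priors and observation models the paper explores and are assumptions of this sketch, not the authors' released software.

```python
def sir_step(particles, weights, dynamics, likelihood, rng):
    """One Sequential Importance Resampling step (generic sketch).

    particles:  (num_particles, pose_dim) array of pose hypotheses.
    weights:    (num_particles,) normalized importance weights.
    dynamics:   callable(pose, rng) -> pose; the stochastic motion prior.
    likelihood: callable(pose) -> float; the image observation model.
    rng:        a numpy.random.Generator.
    """
    n = len(particles)
    # Resample particle indices in proportion to the current weights.
    idx = rng.choice(n, size=n, p=weights)
    # Propagate the survivors through the motion prior.
    particles = np.array([dynamics(p, rng) for p in particles[idx]])
    # Re-weight by the image likelihood and renormalize.
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    return particles, weights
```

Annealed Particle Filtering, the other optimizer named in the abstract, iterates steps of this kind within each frame over a schedule of progressively sharper likelihoods, which helps the particle set escape local optima of the high-dimensional pose posterior.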

