Discriminative Mixture-of-Templates for Viewpoint Classification (2010)

by Chunhui Gu and Xiaofeng Ren
Venue: European Conference on Computer Vision
Citations: 59 (3 self)

BibTeX

@INPROCEEDINGS{Gu10discriminativemixture-of-templates,
    author = {Chunhui Gu and Xiaofeng Ren},
    title = {Discriminative Mixture-of-Templates for Viewpoint Classification},
    booktitle = {European Conference on Computer Vision},
    year = {2010}
}


Abstract

Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al. (2009), we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture, and they are associated with canonical viewpoints of the object through different levels of supervision: fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve viewpoint accuracy on the Savarese et al. 3D Object database from 57% to 74%, and on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case, obtained by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for ground-truth annotation: our mixture model shows great promise compared to a number of baselines, including discrete nearest neighbor and linear regression.
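The abstract describes two estimators. In the discrete case, each object category keeps a bank of linear templates (e.g. over HOG features), each tied to a canonical viewpoint; at test time the best-scoring component yields both the detection score and the viewpoint label. In the continuous case, a linear appearance model learned locally at each discrete view refines the winning canonical angle. The sketch below is a minimal illustration of that idea, not the authors' code; all names (Template, classify_discrete, the feature dimension, the local offset model A) are assumptions made for the example.

    # Illustrative sketch of mixture-of-templates viewpoint prediction.
    # Assumes templates are linear scorers over a fixed-length feature
    # vector (standing in for HOG); not the paper's actual implementation.
    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Template:
        w: np.ndarray              # linear template weights over the feature vector
        b: float                   # bias
        view_deg: float            # canonical viewpoint angle tied to this component
        A: np.ndarray | None = None  # hypothetical local linear model for continuous refinement

    def classify_discrete(phi, mixture):
        """Discrete case: score every component; the argmax component
        provides the detection score and the viewpoint label."""
        scores = [t.w @ phi + t.b for t in mixture]
        k = int(np.argmax(scores))
        return k, mixture[k].view_deg, float(scores[k])

    def estimate_continuous(phi, mixture):
        """Continuous case (sketch): refine the winning canonical view
        with a locally linear model mapping features to an angular offset."""
        k, view, _ = classify_discrete(phi, mixture)
        t = mixture[k]
        offset = float(t.A @ phi) if t.A is not None else 0.0
        return view + offset

    # Toy usage with random templates at 8 canonical views, 45 degrees apart.
    rng = np.random.default_rng(0)
    d = 128
    mixture = [Template(rng.normal(size=d), 0.0, v * 45.0, 0.01 * rng.normal(size=d))
               for v in range(8)]
    phi = rng.normal(size=d)
    k, view, score = classify_discrete(phi, mixture)
    print(f"discrete view: {view} deg (component {k}, score {score:.3f})")
    print(f"continuous estimate: {estimate_continuous(phi, mixture):.1f} deg")

In the paper's setting the templates are trained discriminatively and jointly per category; the toy random weights above stand in only so the sketch runs end to end.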

Keyphrases

discriminative mixture-of-templates, viewpoint classification, discriminative learning, object category, mixture-of-templates approach, state-of-the-art approach, large number, generative model, object database, everyday object, joint viewpoint classification, holistic template, savarese et, category detection, mixture model, robust viewpoint classification, different level, linear appearance model, object part, continuous case, multiple component, viewpoint pose, continuous viewpoint estimation, canonical viewpoint, discrete view, object viewpoint classification, car database, viewpoint accuracy, linear regression, mixture component, natural extension, great promise, groundtruth annotation
