Results 1 - 10 of 165

Learning realistic human actions from movies

by Ivan Laptev, Marcin Marszałek, Cordelia Schmid, Benjamin Rozenfeld - In CVPR, 2008
"... The aim of this paper is to address recognition of natural human actions in diverse and realistic video settings. This challenging but important subject has mostly been ignored in the past due to several problems one of which is the lack of realistic and annotated video datasets. Our first contribut ..."
Abstract - Cited by 738 (48 self) - Add to MetaCart
contribution is to address this limitation and to investigate the use of movie scripts for automatic annotation of human actions in videos. We evaluate alternative methods for action retrieval from scripts and show benefits of a text-based classifier. Using the retrieved action samples for visual learning, we
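The script-based annotation idea lends itself to a small illustration: a bag-of-words text classifier that scores script passages by how likely they are to describe a target action, so high-scoring passages can be aligned to clips as training samples. This is a hedged sketch of the general technique, not the authors' pipeline; the toy snippets and the TfidfVectorizer/LogisticRegression choices are assumptions.

    # Illustrative sketch: retrieve likely action passages from script text
    # with a bag-of-words classifier (not the authors' exact pipeline).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training snippets labeled for the action "stand up".
    snippets = [
        "Rick stands up and walks to the door.",
        "She rises slowly from her chair.",
        "They sit in silence around the table.",
        "He pours a drink and stares out the window.",
    ]
    labels = [1, 1, 0, 0]  # 1 = describes "stand up", 0 = does not

    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    X = vectorizer.fit_transform(snippets)
    clf = LogisticRegression().fit(X, labels)

    # Score unseen script lines; high-scoring ones become candidate
    # video segments for visual learning of the action.
    query = ["Ilsa gets up from the couch."]
    print(clf.predict_proba(vectorizer.transform(query))[0, 1])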

Simultaneous visual recognition of manipulation actions and manipulated objects

by Hedvig Kjellström, Javier Romero, David Martínez, Danica Kragić - In ECCV, 2008
"... Abstract. The visual analysis of human manipulation actions is of interest for e.g. human-robot interaction applications where a robot learns how to perform a task by watching a human. In this paper, a method for classifying manipulation actions in the context of the objects manipulated, and classif ..."
Abstract - Cited by 40 (4 self)
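The notion of classifying an action in the context of the manipulated object can be made concrete with a minimal factor-style score: independent per-label evidence combined with an action-object compatibility term, maximized jointly. All scores and the compatibility table below are made-up placeholders, not values or the formulation from the paper.

    # Minimal sketch of joint action/object inference: pick the pair that
    # maximizes per-label evidence times a compatibility term.
    # All scores below are made-up placeholders.
    import itertools

    action_scores = {"pour": 0.6, "drink": 0.3, "open": 0.1}  # p(action | video)
    object_scores = {"cup": 0.5, "bottle": 0.4, "book": 0.1}  # p(object | video)
    compat = {  # prior plausibility of each (action, object) pair
        ("pour", "bottle"): 1.0, ("pour", "cup"): 0.8,
        ("drink", "cup"): 1.0, ("drink", "bottle"): 0.6,
        ("open", "bottle"): 0.9, ("open", "book"): 1.0,
    }

    best = max(
        itertools.product(action_scores, object_scores),
        key=lambda p: action_scores[p[0]] * object_scores[p[1]] * compat.get(p, 0.05),
    )
    print(best)  # jointly most plausible (action, object) pair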

Dense saliency-based spatiotemporal feature points for action recognition

by Konstantinos Rapantzikos, Yannis Avrithis, Stefanos Kollias
"... Several spatiotemporal feature point detectors have been recently used in video analysis for action recognition. Feature points are detected using a number of measures, namely saliency, cornerness, periodicity, motion activity etc. Each of these measures is usually intensity-based and provides a dif ..."
Abstract - Cited by 40 (5 self)

Action Recognition using Exemplar-based Embedding

by Daniel Weinland, Edmond Boyer
"... In this paper, we address the problem of representing human actions using visual cues for the purpose of learning and recognition. Traditional approaches model actions as space-time representations which explicitly or implicitly encode the dynamics of an action through temporal dependencies. In cont ..."
Abstract - Cited by 42 (1 self)
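The title suggests an embedding defined by comparisons to reference exemplars; a common way to realize this is to represent each sample by its distances to k exemplars and classify in that k-dimensional space. The sketch below uses random placeholder descriptors, Euclidean distance, and a random exemplar choice, all of which are assumptions rather than the paper's design.

    # Sketch of an exemplar-based embedding: describe each action sample
    # by its distances to a fixed set of exemplars, then classify in that
    # k-dimensional space. Features here are random placeholders.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 256))   # per-video descriptors (placeholder)
    y_train = rng.integers(0, 5, size=100)  # 5 hypothetical action classes

    # Pick k training samples as exemplars (random here; exemplar
    # selection is a design decision of the method).
    exemplars = X_train[rng.choice(len(X_train), size=20, replace=False)]

    def embed(X, exemplars):
        """Distance of each sample to every exemplar: shape (n, k)."""
        return np.linalg.norm(X[:, None, :] - exemplars[None, :, :], axis=2)

    clf = LinearSVC().fit(embed(X_train, exemplars), y_train)
    X_test = rng.normal(size=(10, 256))
    print(clf.predict(embed(X_test, exemplars)))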

Learning optimal features for visual pattern recognition

by Kai Labusch, Udo Siewert, Thomas Martinetz, Erhardt Barth
"... The optimal coding hypothesis proposes that the human visual system has adapted to the statistical properties of the environment by the use of relatively simple optimality criteria. We here (i) discuss how the properties of different models of image coding, i.e. sparseness, decorrelation, and statis ..."
Abstract
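Of the coding properties the abstract names, decorrelation is the easiest to make concrete: ZCA whitening transforms image patches so that their covariance is (approximately) the identity. A minimal sketch on random patches, which stand in for real image data:

    # Minimal ZCA-whitening sketch: decorrelate image patches so their
    # covariance becomes (approximately) the identity. Patches are random
    # placeholders for real image data.
    import numpy as np

    rng = np.random.default_rng(0)
    patches = rng.normal(size=(1000, 64))  # 1000 flattened 8x8 patches
    patches -= patches.mean(axis=0)        # center

    cov = patches.T @ patches / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)
    eps = 1e-5                             # avoids amplifying noise directions
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T

    white = patches @ W
    print(np.allclose(white.T @ white / len(white), np.eye(64), atol=1e-2))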

Cross-View Action Recognition via View Knowledge Transfer

by Jingen Liu, Mubarak Shah, Benjamin Kuipers, Silvio Savarese
"... In this paper, we present a novel approach to recognizing human actions from different views by view knowledge transfer. An action is originally modelled as a bag of visual-words (BoVW), which is sensitive to view changes. We argue that, as opposed to visual words, there exist some higher level feat ..."
Abstract - Cited by 48 (4 self) - Add to MetaCart
features which can be shared across views and enable the connection of action models for different views. To discover these features, we use a bipartite graph to model two view-dependent vocabularies, then apply bipartite graph partitioning to co-cluster two vocabularies into visual-word clusters called
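The bipartite co-clustering step can be sketched with spectral co-clustering: rows index one view's visual words, columns the other's, and entries count co-occurrences; partitioning the bipartite graph yields shared word clusters. The co-occurrence matrix below is random placeholder data, and scikit-learn's SpectralCoclustering is a stand-in for the paper's partitioning algorithm.

    # Sketch of co-clustering two view-dependent vocabularies via a
    # bipartite graph: rows = visual words in view A, columns = visual
    # words in view B, entries = co-occurrence counts. Random placeholder
    # data; SpectralCoclustering stands in for the paper's partitioner.
    import numpy as np
    from sklearn.cluster import SpectralCoclustering

    rng = np.random.default_rng(0)
    cooc = rng.poisson(1.0, size=(200, 200)).astype(float)  # A-word x B-word counts
    cooc += 1e-3  # keep the bipartite graph connected for the spectral step

    model = SpectralCoclustering(n_clusters=10, random_state=0).fit(cooc)
    # row_labels_[i] / column_labels_[j] assign each view's words to one of
    # 10 shared clusters usable as a view-invariant vocabulary.
    print(model.row_labels_[:10], model.column_labels_[:10])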

Trajectory Feature Fusion for Human Action Recognition

by Sameh Megrhi, Azeddine Beghdadi
"... This paper addresses the problem of human action detection /recognition by investigating interest points (IP) trajectory cues and by reducing undesirable small camera motion. We first detect speed up robust feature (SURF) to segment video into frame volume (FV) that contains small actions. This segm ..."
Abstract - Add to MetaCart
reduce the impact of camera motion by consid-ering moving IPs beyond a minimum motion angle and by using motion boundary histogram (MBH). Feature-fusion based action recognition is performed to generate robust and discriminative codebook using K-mean clustering. We em-ploy a bag-of-visual-words Support
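The codebook-and-classifier stage described here follows the standard bag-of-visual-words recipe, which is easy to sketch: cluster local descriptors with k-means, histogram each video's descriptors over the clusters, and train an SVM. The random descriptors below stand in for the SURF/MBH trajectory features, and the class labels are placeholders.

    # Standard bag-of-visual-words sketch for the codebook + SVM stage:
    # k-means over local descriptors, per-video histograms, linear SVM.
    # Random descriptors stand in for the SURF/MBH trajectory features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    K = 50  # codebook size

    # 30 hypothetical videos, each with 100 local descriptors of dim 64.
    videos = [rng.normal(size=(100, 64)) for _ in range(30)]
    labels = rng.integers(0, 3, size=30)  # 3 placeholder action classes

    codebook = KMeans(n_clusters=K, n_init=3, random_state=0).fit(np.vstack(videos))

    def bovw(desc):
        """Normalized histogram of a video's descriptors over the codebook."""
        words = codebook.predict(desc)
        hist = np.bincount(words, minlength=K).astype(float)
        return hist / hist.sum()

    X = np.array([bovw(v) for v in videos])
    clf = LinearSVC().fit(X, labels)
    print(clf.predict(X[:5]))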

Submodular Attribute Selection for Action Recognition in Video

by Jingjing Zheng, Zhuolin Jiang, P. Jonathon Phillips - In NIPS, 2014
"... In real-world action recognition problems, low-level features cannot adequately characterize the rich spatial-temporal structures in action videos. In this work, we encode actions based on attributes that describes actions as high-level con-cepts e.g., jump forward or motion in the air. We base our ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
analysis on two types of action attributes. One type of action attributes is generated by humans. The second type is data-driven attributes, which are learned from data using dictio-nary learning methods. Attribute-based representation may exhibit high variance due to noisy and redundant attributes. We
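Submodular selection admits the classic greedy sketch: repeatedly add the attribute with the largest marginal gain of a coverage-style objective, which comes with the usual (1 - 1/e) approximation guarantee. The facility-location objective and the random attribute/sample similarities below are illustrative assumptions, not the paper's exact formulation.

    # Greedy maximization sketch for submodular attribute selection:
    # repeatedly add the attribute with the largest marginal gain of a
    # facility-location objective (an illustrative choice, not necessarily
    # the paper's objective). Similarities are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples, n_attrs, budget = 100, 40, 8

    # sim[i, a]: how well attribute a "represents" sample i (placeholder).
    sim = rng.random((n_samples, n_attrs))

    selected, covered = [], np.zeros(n_samples)
    for _ in range(budget):
        # Marginal gain of each attribute under facility location:
        # F(S) = sum_i max_{a in S} sim[i, a].
        gains = np.maximum(sim, covered[:, None]).sum(axis=0) - covered.sum()
        gains[selected] = -np.inf       # never re-pick an attribute
        best = int(np.argmax(gains))
        selected.append(best)
        covered = np.maximum(covered, sim[:, best])

    print(selected)  # greedily chosen attribute subset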

Visual human+machine learning

by Raphael Fuchs, Jürgen Waser, Meister Eduard Gröller - IEEE Transactions on Visualization and Computer Graphics
"... Abstract — In this paper we describe a novel method to integrate interactive visual analysis and machine learning to support the insight generation of the user. The suggested approach combines the vast search and processing power of the computer with the superior reasoning and pattern recognition ca ..."
Abstract - Cited by 17 (3 self)

Adaptive Unsupervised Multi-View Feature Selection for Visual Concept Recognition

by Yinfu Feng, Jun Xiao, Yueting Zhuang, Xiaoming Liu
"... Abstract. To reveal and leverage the correlated and complemental information between different views, a great amount of multi-view learning algorithms have been proposed in recent years. However, unsupervised feature selection in multi-view learning is still a challenge due to lack of data labels th ..."
Abstract - Add to MetaCart
-view visual similar graph learning into a unified framework. To solve the objective function of AUMFS, a simple yet efficient iterative method is proposed. We apply AUMFS to three visual concept recognition applications (i.e., social image con-cept recognition, object recognition and video-based human action
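AUMFS itself couples several terms into a joint objective with an iterative solver; as a much simpler stand-in, the classical Laplacian score illustrates the underlying unsupervised criterion of preferring features that vary smoothly over a data-similarity graph. The sketch below is this generic criterion, not the AUMFS algorithm, and the data are random placeholders.

    # Generic unsupervised feature-selection sketch (Laplacian score), a
    # far simpler stand-in for AUMFS's joint objective: prefer features
    # that vary smoothly over a k-NN similarity graph of the data.
    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 30))  # placeholder data: 200 samples, 30 features

    # Symmetric k-NN adjacency and graph Laplacian L = D - W.
    W = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    d = W.sum(axis=1)
    L = np.diag(d) - W

    scores = []
    for j in range(X.shape[1]):
        f = X[:, j] - (X[:, j] @ d) / d.sum()  # D-weighted centering
        scores.append((f @ L @ f) / ((f * d) @ f))

    # Lower Laplacian score = smoother on the graph = preferred feature.
    print(np.argsort(scores)[:10])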