CiteSeerX

Results 1 - 10 of 7,069

Automatic annotation of everyday movements

by Deva Ramanan, D. A. Forsyth, 2003
"... This paper describes a system that can annotate a video sequence with: a description of the appearance of each actor; when the actor is in view; and a representation of the actor’s activity while in view. The system does not require a fixed background, and is automatic. The system works by (1) track ..."
Abstract - Cited by 88 (5 self)

On automatic annotation of meeting databases

by Daniel Gatica-Perez, Iain McCowan, Mark Barnard, Samy Bengio, Hervé Bourlard - IN PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2003
"... In this paper, we present meetings as an application domain for multimedia content analysis. Meeting databases are a rich data source suitable for a variety of audio, visual and multi-modal tasks, including speech recognition, people and action recognition, and information retrieval. We specically f ..."
Abstract - Cited by 2 (1 self)
focus on the task of semantic annotation of audio-visual (AV) events, where annotation consists of assigning labels (event names) to the data. In order to develop an automatic annotation system in a principled manner, it is essential to have a well-defined task, a standard corpus and an objective

ON AUTOMATIC ANNOTATION OF MEETING DATABASES

by Daniel Gatica-Perez, Iain McCowan, Mark Barnard, Samy Bengio, Hervé Bourlard
"... In this paper, we discuss meetings as an application domain for multimedia content analysis. Meeting databases are a rich data source suitable for a variety of audio, visual and multi-modal tasks, including speech recognition, people and action recognition, and information retrieval. We specifically ..."
Abstract
specifically focus on the task of semantic annotation of audio-visual (AV) events, where annotation consists of assigning labels (event names) to the data. In order to develop an automatic annotation system in a principled manner, it is essential to have a well-defined task, a standard corpus and an objective

The automatic annotation of bacterial genomes

by Emily J. Richardson, 2011
"... With the development of ultra-high-throughput technologies, the cost of sequencing bacterial genomes has been vastly reduced. As more genomes are sequenced, less time can be spentmanually annotating those genomes, result-ing in an increased reliance on automatic annotation pipelines. However, automa ..."
Abstract - Cited by 7 (0 self)

On automatic annotation of meeting databases

by Daniel Gatica-Perez, Iain McCowan, Mark Barnard, Samy Bengio, Hervé Bourlard, 2003
"... Abstract. In this paper, we present meetings as an application domain for multimedia content analysis. Meeting databases are a rich data source suitable for a variety of audio, visual and multi-modal tasks, including speech recognition, people and action recognition, and information retrieval. We sp ..."
Abstract
specifically focus on the task of semantic annotation of audio-visual (AV) events, where annotation consists of assigning labels (event names) to the data. In order to develop an automatic annotation system in a principled manner, it is essential to have a well-defined task, a standard corpus and an objective

LFG00 — Automatic Annotation

by Louisa Sadler, Josef Genabith, Andy Way
"... We present a method for automatically annotating treebank resources with functional structures. The method defines systematic patterns of correspondence between partial PS configurations and functional structures. These are applied to PS rules extracted from treebanks. The set of techniques which we ..."
Abstract

Automatic Annotation and Retrieval of Images

by Yuqing Song, Wei Wang, Aidong Zhang, 2002
"... We propose a novel approach for semantics-based image annotation and retrieval. Our approach is based on monotonic tree, a derivation of contour tree for discrete data. Monotonic tree provides a way to bridge the gap between the high-level semantics and low-level features. Each branch (subtree) of t ..."
Abstract - Cited by 5 (2 self)
indicating the semantic features are automatically annotated to the images. Based on the semantic features extracted from images, high-level (semantics-based) querying and browsing of images can be achieved. The experimental results demonstrate the effectiveness of our approach.

Towards automatic annotation of communicative gesturing

by Kristiina Jokinen, Graham Wilcock
"... We report on-going work on automatic annotation of head and hand gestures in videos of conversational inter-action. The Anvil annotation tool was extended by two plugins for automatic face and hand tracking. The results of automatic annotation are compared with the human annotations on the same data ..."
Abstract

Automatic Annotation And Classification . . .

by A. Batliner, M. Nutt, V. Warnke, E. Nöth, J. Buckow, R. Huber, H. Niemann - IN PROC. EUROPEAN CONF. ON SPEECH COMMUNICATION AND TECHNOLOGY, 1999
"... During the last years, we have been working on the automatic classification of boundaries and accents in the German VERBMOBIL (VM) project (human-human communication, appointment scheduling dialogues). A sub-corpus was annotated manually with prosodic boundary and accent labels, and neural networks ..."
Abstract - Cited by 2 (2 self)

Automatic Annotation of Everyday Movements

by Deva Ramanan, D. A. Forsyth - in NIPS, 2003
"... This paper describes a system that can annotate a video sequence with: a description of the appearance of each actor; when the actor is in view; and a representation of the actor's activity while in view. The system does not require a fixed background, and is automatic. The system works by ..."
Abstract