Results 1 - 10 of 271

Viewer Reaction to Different Captioned Television Speeds

by Carl Jensema, Ph.D., 1997
"... A series of 24 short, 30-second video segments captioned at different speeds were shown to 578 people. The subjects used a five-point scale (Too Fast, Fast, OK, Slow, Too Slow) to make an assessment of each segment’s caption speed. The “OK ” speed, defined as the speed at which “Caption speed is com ..."
Abstract - Add to MetaCart
A series of 24 short, 30-second video segments captioned at different speeds were shown to 578 people. The subjects used a five-point scale (Too Fast, Fast, OK, Slow, Too Slow) to make an assessment of each segment’s caption speed. The “OK ” speed, defined as the speed at which “Caption speed
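The methodology summarized here (a five-point speed-rating scale and an "OK" speed derived from viewer judgments) can be illustrated with a small aggregation sketch. The Python below is purely hypothetical: the caption speeds, the example ratings, and the mean-rating rule are invented for illustration and are not taken from the study.

```python
# A hypothetical illustration (not the study's actual analysis) of how
# ratings on a five-point scale might be aggregated to locate an "OK"
# caption speed: map the labels to numbers, average per tested speed
# (words per minute), and pick the speed whose mean sits closest to "OK".
SCALE = {"Too Slow": 1, "Slow": 2, "OK": 3, "Fast": 4, "Too Fast": 5}

# Invented example ratings keyed by caption speed in words per minute.
ratings_by_speed = {
    96:  ["Slow", "OK", "Too Slow", "Slow"],
    141: ["OK", "OK", "Fast", "Slow"],
    171: ["Fast", "OK", "Fast", "Too Fast"],
    200: ["Too Fast", "Fast", "Too Fast", "Fast"],
}

def mean_rating(labels):
    return sum(SCALE[x] for x in labels) / len(labels)

means = {wpm: mean_rating(labels) for wpm, labels in ratings_by_speed.items()}
ok_speed = min(means, key=lambda wpm: abs(means[wpm] - SCALE["OK"]))
for wpm, m in sorted(means.items()):
    print(f"{wpm} wpm: mean rating {m:.2f}")
print("speed rated closest to OK:", ok_speed, "wpm")
```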

Inside Jokes: Identifying Humorous Cartoon Captions

by Dafna Shahaf, Eric Horvitz, Robert Mankoff
"... Humor is an integral aspect of the human experience. Mo-tivated by the prospect of creating computational models of humor, we study the influence of the language of cartoon captions on the perceived humorousness of the cartoons. Our studies are based on a large corpus of crowdsourced cartoon caption ..."
Abstract - Add to MetaCart
captions that were submitted to a contest hosted by the New Yorker. Having access to thousands of cap-tions submitted for the same image allows us to analyze the breadth of responses of people to the same visual stimulus. We first describe how we acquire judgments about the humorousness of different

Here are the different basic types of captions implemented:

by Axel Sommerfeldt
"... The caption package was superseeded by the new caption2 package. So please use caption2 instead; for migrating please see the enclosed manual. ..."
Abstract - Add to MetaCart
The caption package was superseeded by the new caption2 package. So please use caption2 instead; for migrating please see the enclosed manual.

From Captions to Visual Concepts and Back

by Hao Fang, Li Deng, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, et al., 2014
"... This paper presents a novel approach for automatically generating image descriptions: visual detectors and language models learn directly from a dataset of image captions. We use Multiple Instance Learning to train visual detectors for words that commonly occur in captions, including many different ..."
Abstract - Cited by 15 (1 self)
This paper presents a novel approach for automatically generating image descriptions: visual detectors and language models learn directly from a dataset of image captions. We use Multiple Instance Learning to train visual detectors for words that commonly occur in captions, including many different
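As a rough illustration of the Multiple Instance Learning step described above, where captions provide only image-level labels so each image is treated as a bag of regions, here is a minimal, hypothetical noisy-OR MIL sketch in Python. The feature vectors, the train_word_detector helper, and the training details are assumptions for illustration, not the paper's implementation.

```python
# A minimal sketch (not the authors' code) of noisy-OR multiple instance
# learning for a single caption-word detector. Each image is a "bag" of
# region feature vectors; the bag label says whether the word appears in
# the image's caption. Region probabilities come from a logistic model,
# and the bag probability is a noisy-OR over the regions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_word_detector(bags, labels, dim, lr=0.1, epochs=50):
    """bags: list of (n_regions, dim) arrays; labels: 0/1 per bag."""
    w = np.zeros(dim)
    b = 0.0
    for _ in range(epochs):
        for regions, y in zip(bags, labels):
            p = sigmoid(regions @ w + b)       # per-region word probability
            p_bag = 1.0 - np.prod(1.0 - p)     # noisy-OR: word fires if any region fires
            coeff = (y - p_bag) / max(p_bag, 1e-8)
            w += lr * coeff * (p @ regions)    # d log-likelihood / d w
            b += lr * coeff * p.sum()
    return w, b

# Toy usage with random "region features" standing in for CNN activations.
rng = np.random.default_rng(0)
dim = 8
positive = [rng.normal(1.0, 1.0, size=(5, dim)) for _ in range(20)]
negative = [rng.normal(-1.0, 1.0, size=(5, dim)) for _ in range(20)]
w, b = train_word_detector(positive + negative, [1] * 20 + [0] * 20, dim)
print("detector score on a positive bag:",
      1.0 - np.prod(1.0 - sigmoid(positive[0] @ w + b)))
```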

Viewer Reaction to Different Television Captioning Speeds

by unknown authors
"... ideo segments captioned at different speeds were shown to a group of 578 people that included deaf, hard of hearing, and hearing viewers. Participants used a five-point scale to assess each segment's caption speed. The "OK " speed, defined as the rate at which "caption speed is c ..."
Abstract - Add to MetaCart
ideo segments captioned at different speeds were shown to a group of 578 people that included deaf, hard of hearing, and hearing viewers. Participants used a five-point scale to assess each segment's caption speed. The "OK " speed, defined as the rate at which "caption speed

Korean to English TV Caption Translator: "CaptionEye/KE"

by Seong-il Yang, Young-kil Kim, Young-ae Seo, Sung-kwon Choi, Sang-kyu Park
"... In this paper, we present CaptionEye/KE, a Korean to English machine translation system that is applied to a practical TV caption translation. And its experimental evaluation is performed on actual TV news caption texts that are extracted by caption extractor. This system adopts a highly robust HMM ..."
Abstract - Add to MetaCart
In this paper, we present CaptionEye/KE, a Korean to English machine translation system that is applied to a practical TV caption translation. And its experimental evaluation is performed on actual TV news caption texts that are extracted by caption extractor. This system adopts a highly robust HMM

From Captions to Visual Concepts and Back

by Rupesh K. Srivastava, Li Deng, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, et al.
"... This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in cap ..."
Abstract
in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture
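To make the "word detector outputs serve as conditional inputs to a maximum-entropy language model" idea concrete, here is a tiny hypothetical log-linear next-word model in Python. The toy captions, the two feature families (a per-word bias plus a "detected and not yet emitted" indicator), and the training loop are invented for illustration and are not the paper's model, which is trained on hundreds of thousands of descriptions.

```python
# A minimal, hypothetical sketch (not the paper's implementation) of a
# maximum-entropy next-word model conditioned on detected visual words:
# each candidate word gets a bias weight, plus a shared weight on the
# feature "this word was detected in the image and not yet emitted".
import math
import random

captions = [
    (["a", "dog", "on", "the", "grass"], {"dog", "grass"}),
    (["a", "cat", "on", "the", "sofa"], {"cat", "sofa"}),
]
vocab = sorted({w for words, _ in captions for w in words})

bias = {w: 0.0 for w in vocab}   # per-word bias feature weights
lam = 0.0                        # weight on the "detected and unused" feature
lr = 0.5

def probs(used, detected):
    scores = {w: bias[w] + lam * (w in detected and w not in used) for w in vocab}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

for _ in range(200):                       # SGD on the caption log-likelihood
    words, detected = random.choice(captions)
    used = set()
    for target in words:
        p = probs(used, detected)
        for w in vocab:                    # gradient: observed minus expected features
            g = (w == target) - p[w]
            bias[w] += lr * g
            lam += lr * g * (w in detected and w not in used)
        used.add(target)

p = probs(set(), {"dog", "grass"})
print("P(first word) given detections {dog, grass}:",
      {w: round(p[w], 2) for w in ("dog", "cat", "a")})
```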

Event Detection in Baseball Video using Superimposed Caption Recognition

by Dongqing Zhang - Proc. ACM Multimedia, Juan-les-Pins, 2002
"... We have developed a novel system for baseball video event detection and summarization using superimposed caption text detection and recognition. The system detects different types of semantic level events in baseball video including scoring and last pitch of each batter. The system has two component ..."
Abstract - Cited by 44 (2 self)
We have developed a novel system for baseball video event detection and summarization using superimposed caption text detection and recognition. The system detects different types of semantic level events in baseball video including scoring and last pitch of each batter. The system has two
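The core idea, recognizing the superimposed score caption and flagging an event when it changes, can be sketched as follows. This is a hypothetical outline, not the authors' system; read_score_caption stands in for a real caption-detection and OCR step, and the frame data is fabricated.

```python
# A hypothetical sketch of the general idea (not the authors' system):
# sample frames, read the superimposed score caption with an OCR helper,
# and flag a scoring event whenever the recognized score text changes.
# `read_score_caption` is a stand-in for caption detection + recognition.
from dataclasses import dataclass
from typing import Callable, Iterable, List, Tuple

@dataclass
class ScoringEvent:
    time_sec: float
    before: str   # e.g. "3-2"
    after: str    # e.g. "4-2"

def detect_scoring_events(
    frames: Iterable[Tuple[float, object]],
    read_score_caption: Callable[[object], str],
) -> List[ScoringEvent]:
    """frames yields (timestamp, frame); the OCR helper returns the score string."""
    events: List[ScoringEvent] = []
    previous = None
    for t, frame in frames:
        score = read_score_caption(frame)
        if not score:
            continue                      # caption not visible in this frame
        if previous is not None and score != previous:
            events.append(ScoringEvent(t, previous, score))
        previous = score
    return events

# Toy usage: fake frames whose "OCR" result is just the stored string.
fake_frames = [(0.0, "3-2"), (5.0, "3-2"), (10.0, "4-2"), (15.0, "4-2")]
for e in detect_scoring_events(fake_frames, read_score_caption=lambda f: f):
    print(f"score changed {e.before} -> {e.after} at {e.time_sec}s")
```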

Crowdsourcing correction of speech recognition captioning errors

by M. Wald - In Proceedings of W4A, 2011
"... ABSTRACT In this paper, we describe a tool that facilitates crowdsourcing correction of speech recognition captioning errors to provide a sustainable method of making videos accessible to people who find it difficult to understand speech through hearing alone. Categories and Subject Descriptors K.4 ..."
Abstract - Cited by 4 (0 self) - Add to MetaCart
, meetings etc.) The provision of synchronized text captions (subtitles) with video enables all their different communication qualities and strengths to be available as appropriate for different contexts, content, tasks, learning styles, learning preferences and learning differences. For example, text can
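As a rough sketch of the general crowdsourcing idea (not the tool described in the paper), the Python below lets viewers submit corrections for a timestamped ASR caption segment and applies a simple most-common-submission consensus. The CaptionSegment class, the voting rule, and the example text are all invented for illustration.

```python
# A minimal, hypothetical sketch of crowdsourced caption correction: each
# ASR caption segment collects free-text corrections from viewers, and a
# simple consensus rule (most common submission, with a minimum number of
# agreeing users) replaces the segment's automatic transcript.
from collections import Counter
from dataclasses import dataclass, field
from typing import List

@dataclass
class CaptionSegment:
    start_sec: float
    end_sec: float
    asr_text: str
    corrections: List[str] = field(default_factory=list)

    def submit_correction(self, text: str) -> None:
        self.corrections.append(text.strip())

    def consensus_text(self, min_votes: int = 2) -> str:
        """Return the most common correction if enough users agree, else the ASR text."""
        if not self.corrections:
            return self.asr_text
        text, votes = Counter(self.corrections).most_common(1)[0]
        return text if votes >= min_votes else self.asr_text

# Toy usage with an invented misrecognition.
seg = CaptionSegment(12.0, 15.5, asr_text="the lecture on speech wreck ignition")
seg.submit_correction("the lecture on speech recognition")
seg.submit_correction("the lecture on speech recognition")
print(seg.consensus_text())   # -> "the lecture on speech recognition"
```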