Automatic Image Annotation and Retrieval using Cross-Media Relevance Models (2003)

by J. Jeon, V. Lavrenko, R. Manmatha
Citations: 431 (14 self)

BibTeX

@MISC{Jeon03automaticimage,
    author = {J. Jeon and V. Lavrenko and R. Manmatha},
    title = {Automatic Image Annotation and Retrieval using Cross-Media Relevance Models},
    year = {2003}
}

Abstract

Libraries have traditionally used manual image annotation for indexing and later retrieving their image collections. However, manual image annotation is an expensive and labor-intensive procedure, and hence there has been great interest in coming up with automatic ways to retrieve images based on content. Here, we propose an automatic approach to annotating and retrieving images based on a training set of images. We assume that regions in an image can be described using a small vocabulary of blobs. Blobs are generated from image features using clustering. Given a training set of images with annotations, we show that probabilistic models allow us to predict the probability of generating a word given the blobs in an image. This may be used to automatically annotate and retrieve images given a word as a query. We show that relevance models allow us to derive these probabilities in a natural way. Experiments show that the annotation performance of this cross-media relevance model is almost six times as good (in terms of mean precision) as a model based on word-blob co-occurrence and twice as good as a state-of-the-art model derived from machine translation. Our approach shows the usefulness of formal information retrieval models for the task of image annotation and retrieval.
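
The abstract describes the model only in prose. As an illustration, below is a minimal, hypothetical Python sketch of how a cross-media relevance model of this kind could score annotation words for an unlabelled image: the joint probability of a word and the image's blobs is estimated by summing, over the annotated training images, the product of a smoothed per-image word probability and smoothed per-image blob probabilities. The function name annotate, the uniform image prior, and the interpolation weights alpha and beta are illustrative assumptions, not the paper's exact estimator.

    from collections import Counter

    def annotate(query_blobs, training_set, alpha=0.1, beta=0.1):
        """Score each vocabulary word for an image described by query_blobs.

        Follows the general cross-media relevance-model form:
            P(w, b1..bm) ~ sum over training images J of
                           P(J) * P(w | J) * prod_i P(b_i | J),
        with per-image word/blob probabilities smoothed against
        collection-wide frequencies (illustrative assumption).

        training_set: list of (words, blobs) pairs, where words is the
        manual annotation and blobs is the list of quantized region labels.
        """
        # Collection-wide counts used for smoothing.
        coll_words, coll_blobs = Counter(), Counter()
        for words, blobs in training_set:
            coll_words.update(words)
            coll_blobs.update(blobs)
        total_words = sum(coll_words.values())
        total_blobs = sum(coll_blobs.values())

        prior = 1.0 / len(training_set)          # uniform P(J), assumed
        scores = Counter()
        for words, blobs in training_set:
            wc, bc = Counter(words), Counter(blobs)
            nw, nb = len(words), len(blobs)
            # P(query blobs | J): product of smoothed per-blob probabilities.
            p_blobs = 1.0
            for b in query_blobs:
                p_blobs *= (1 - beta) * bc[b] / nb + beta * coll_blobs[b] / total_blobs
            # Accumulate P(w, query blobs) for every word seen in the collection.
            for w in coll_words:
                p_w = (1 - alpha) * wc[w] / nw + alpha * coll_words[w] / total_words
                scores[w] += prior * p_w * p_blobs
        return scores.most_common()              # top-scoring words = annotation

To retrieve images for a one-word query, the same quantities can be used in the other direction: rank the collection's images by the probability the model assigns to the query word given each image's blobs.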

Keyphrases

cross-media relevance model, automatic image annotation, training set, manual image annotation, mean precision, machine translation, labor-intensive procedure, automatic approach, great interest, automatic way, word-blob co-occurrence model, relevance model, image annotation, natural way, small vocabulary, image feature, probabilistic model, image collection, annotation performance, formal information retrieval model, state-of-the-art model
