Memory cues for meeting video retrieval
- In CARPE’04: Proceedings of the 1st ACM Workshop on Continuous Archival and Retrieval of Personal Experiences, 2004
"... We advocate a new approach to meeting video retrieval based on the use of memory cues. First we present a new survey involving 519 people in which we investigate the types of items people use to review meeting contents (e.g., minutes, video, etc.). Then we present a novel memory study involving 15 s ..."
Abstract
- Cited by 36 (1 self)
We advocate a new approach to meeting video retrieval based on the use of memory cues. First we present a new survey involving 519 people in which we investigate the types of items people use to review meeting contents (e.g., minutes, video, etc.). Then we present a novel memory study involving 15 subjects in which we investigate what people remember about past meetings (e.g., seating position). Based on these studies and related research we propose a novel framework for meeting video retrieval based on memory cues. Our proposed system graphically represents important memory retrieval cues such as room layout, participants’ faces, and seating positions. Queries are formulated dynamically: as the user graphically manipulates the cues, the query results are shown. Our system (1) helps users easily express the cues they recall about a particular meeting, and (2) helps users remember new cues for meeting video retrieval. Finally, we present our approach to automatic indexing of meeting videos, present experiments, and discuss research issues in automatic indexing for retrieval using memory cues.
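As a rough illustration of this kind of cue-driven querying (a minimal sketch, not the authors' implementation; the cue names, scoring rule, and record fields below are assumptions made for the example), one could rank indexed meetings by how many recalled cues they match and re-run the ranking whenever the user adjusts a cue:

from dataclasses import dataclass, field

@dataclass
class MeetingRecord:
    """One indexed meeting (illustrative fields, not the paper's schema)."""
    video_id: str
    room: str
    participants: set = field(default_factory=set)
    seating: dict = field(default_factory=dict)   # participant name -> seat number

def score(record, cues):
    """Count how many of the user's recalled cues this meeting matches."""
    s = 0
    if cues.get("room") == record.room:
        s += 1
    s += len(cues.get("participants", set()) & record.participants)
    for person, seat in cues.get("seating", {}).items():
        if record.seating.get(person) == seat:
            s += 1
    return s

def query(index, cues):
    """Re-rank the whole index; meant to be re-run each time a cue changes."""
    ranked = sorted(index, key=lambda r: score(r, cues), reverse=True)
    return [r for r in ranked if score(r, cues) > 0]

if __name__ == "__main__":
    index = [
        MeetingRecord("mtg-01", "room-A", {"Ana", "Bo"}, {"Ana": 1, "Bo": 3}),
        MeetingRecord("mtg-02", "room-B", {"Bo", "Cy"}, {"Bo": 2}),
    ]
    # The user recalls the room and where Bo sat; results update as cues change.
    print([r.video_id for r in query(index, {"room": "room-A", "seating": {"Bo": 3}})])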
Recognition of human actions using motion history information extracted from the compressed video, 2004
Recognition and understanding of meetings: overview of the European AMI and AMIDA projects
- IDIAP-RR 27, IDIAP, 2008
"... www.amiproject.org The AMI and AMIDA projects are concerned with the recognition and interpretation of multiparty (face-to-face and remote) meetings. Within these projects we have developed the following: (1) an infrastructure for recording meetings using multiple microphones and cameras; (2) a one ..."
Abstract
- Cited by 1 (0 self)
The AMI and AMIDA projects are concerned with the recognition and interpretation of multiparty (face-to-face and remote) meetings. Within these projects we have developed the following: (1) an infrastructure for recording meetings using multiple microphones and cameras; (2) a one-hundred-hour, manually annotated meeting corpus; (3) a number of techniques for indexing and summarizing meeting videos using automatic speech recognition and computer vision; and (4) an extensible framework for browsing and searching meeting videos. We give an overview of the various techniques developed in AMI (mainly involving face-to-face meetings), their integration into our meeting browser framework, and future plans for AMIDA (Augmented Multiparty Interaction with Distant Access), the follow-up project to AMI. Technical and business information related to these two projects can be found at www.amiproject.org, on the Scientific and Business portals respectively.
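As a loose illustration of item (3), a time-coded index over ASR transcripts is one simple way to let a browser jump to the point in a recording where a keyword was spoken. The sketch below is an assumption-laden toy, not the AMI/AMIDA framework's API; the meeting ID and transcript format are invented for the example:

from collections import defaultdict

def build_index(transcripts):
    """Map each spoken word to (meeting_id, start_time, speaker) entries."""
    index = defaultdict(list)
    for meeting_id, segments in transcripts.items():
        for start, speaker, text in segments:
            for word in text.lower().split():
                index[word].append((meeting_id, start, speaker))
    return index

def search(index, keyword):
    """Return jump-to points a browser could use to seek into the recording."""
    return index.get(keyword.lower(), [])

if __name__ == "__main__":
    # Toy ASR output: meeting id -> list of (start_seconds, speaker, text).
    transcripts = {
        "mtg-01": [(12.4, "A", "Let us review the remote control design"),
                   (98.0, "B", "The budget is fixed at twelve euros")],
    }
    idx = build_index(transcripts)
    print(search(idx, "budget"))   # [('mtg-01', 98.0, 'B')]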
Overview of the European AMI and AMIDA Projects, 2008
"... www.amiproject.org The AMI and AMIDA projects are concerned with the recognition and interpretation of multiparty (face-to-face and remote) meetings. Within these projects we have developed the following: (1) an infrastructure for recording meetings using multiple microphones and cameras; (2) a one ..."
Abstract
- Add to MetaCart
www.amiproject.org The AMI and AMIDA projects are concerned with the recognition and interpretation of multiparty (face-to-face and remote) meetings. Within these projects we have developed the following: (1) an infrastructure for recording meetings using multiple microphones and cameras; (2) a one hundred hour, manually annotated meeting corpus; (3) a number of techniques for indexing, and summarizing of meeting videos using automatic speech recognition and computer vision, and (4) a extensible framework for browsing, and searching of meeting videos. We give an overview of the various techniques developed in AMI (mainly involving face-toface meetings), their integration into our meeting browser framework, and future plans for AMIDA (Augmented Multiparty Interaction with Distant Access), the follow-up project to AMI. Technical and business information related to these two projects can be found at www.amiproject.org, respectively on the Scientific and Business portals. 1.
Motion Activity for Video Indexing, 2004
"... iii Dedicated to my family iv Acknowledgement I am deeply indebted to my research advisor and committee chair Prof. B. S. Manjunath. It is his constant support and encouragement that make it possible for me to finish my dissertation. He is not only my research advisor but also a great mentor in my l ..."
Abstract
Dedicated to my family. Acknowledgement: I am deeply indebted to my research advisor and committee chair Prof. B. S. Manjunath. It is his constant support and encouragement that made it possible for me to finish my dissertation. He is not only my research advisor but also a great mentor in my life. Working and interacting with him was truly an invaluable learning experience. My sincere thanks go to Dr. Jonathan Foote. Part of my dissertation work was inspired by my interaction with Dr. Foote during my summer internship at the FX Palo Alto Lab. I learned many research skills by working with him. I would like to thank Professor Yuan-Fang Wang and Professor Shivkumar Chandrasekaran for serving on my committee and for constructive discussions about my dissertation work. I would like to thank the graduate students in Image Processing and Vision Research
BUILDING A SMART MEETING ROOM: FROM INFRASTRUCTURE TO THE VIDEO GAP (RESEARCH AND OPEN ISSUES)
"... At FXPAL Japan we have built an (experimental) Smart Conference Room (SCR) that contains multiple cameras, microphones, displays, and capture devices. Based on our experience, in this paper we discuss research and open issues in constructing SCRs like the one built at FXPAL for the purpose of automa ..."
Abstract
At FXPAL Japan we have built an experimental Smart Conference Room (SCR) that contains multiple cameras, microphones, displays, and capture devices. Based on our experience, in this paper we discuss research and open issues in constructing SCRs like the one built at FXPAL for the purpose of automatic content analysis. Our discussion is grounded in a novel conceptual meeting model that consists of physical (from layout to cameras), conceptual (meeting types, actors), sensory (audio-visual capture), and content (syntax and semantics) components. We also discuss storage, retrieval, and deployment issues.
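To make the four-part model concrete, the sketch below renders it as plain data classes; the field names and example values are illustrative assumptions, not the authors' schema:

from dataclasses import dataclass, field

@dataclass
class Physical:
    room_layout: str
    cameras: list = field(default_factory=list)
    microphones: list = field(default_factory=list)

@dataclass
class Conceptual:
    meeting_type: str                      # e.g. "presentation", "discussion"
    actors: list = field(default_factory=list)

@dataclass
class Sensory:
    audio_streams: list = field(default_factory=list)
    video_streams: list = field(default_factory=list)

@dataclass
class Content:
    syntax: dict = field(default_factory=dict)     # low-level structure: shots, speaker turns
    semantics: dict = field(default_factory=dict)  # topics, decisions, action items

@dataclass
class MeetingModel:
    physical: Physical
    conceptual: Conceptual
    sensory: Sensory
    content: Content

if __name__ == "__main__":
    m = MeetingModel(
        Physical("U-shaped table", ["cam-front"], ["mic-array-1"]),
        Conceptual("project review", ["chair", "presenter"]),
        Sensory(["mic-array-1.wav"], ["cam-front.mp4"]),
        Content(),
    )
    print(m.conceptual.meeting_type)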