Results 1 - 10 of 19
AVLAUGHTERCYCLE: AN AUDIOVISUAL LAUGHING MACHINE
"... The AVLaughterCycle project aims at developing an audiovisual laughing machine, capable of recording the laughter of a user and to respond to it with a machine-generated laughter linked with the input laughter. During the project, an audiovisual laughter database was recorded, including facial point ..."
Abstract
-
Cited by 4 (1 self)
- Add to MetaCart
(Show Context)
The AVLaughterCycle project aims at developing an audiovisual laughing machine capable of recording a user's laughter and responding to it with a machine-generated laugh related to the input. During the project, an audiovisual laughter database was recorded, including facial point tracking, using the Smart Sensor Integration software developed by the University of Augsburg. This tool is also used to extract audio features, which are sent to a module called MediaCycle that evaluates the similarity between a query input and the files in a given database. MediaCycle outputs a link to the most similar laughter, which is sent to Greta, an Embodied Conversational Agent, who displays the corresponding facial animation while the audio of the laughter is played.
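The flow described in this abstract (record the user's laugh, extract audio features, query a laughter database for the most similar item, then have the agent replay that item's audio and facial animation) can be illustrated with a minimal sketch. The Python code below is only an assumption-laden illustration of that loop; the function names and the crude feature and similarity choices are placeholders and do not reflect the actual SSI, MediaCycle or Greta interfaces.

```python
# Hypothetical sketch of an AVLaughterCycle-style loop: record a laugh,
# extract audio features, find the most similar laugh in a database, and
# hand the match to an animation/playback component. All names are
# placeholders, not the real SSI / MediaCycle / Greta APIs.
import numpy as np

def extract_features(waveform: np.ndarray) -> np.ndarray:
    """Stand-in for SSI's audio feature extraction; here a coarse
    frame-energy profile of the recording."""
    frames = np.array_split(waveform, 20)
    return np.array([np.sqrt(np.mean(f ** 2)) for f in frames])

def most_similar(query: np.ndarray, database: dict) -> str:
    """Stand-in for MediaCycle's similarity query: return the id of the
    laugh whose feature vector is closest to the query."""
    return min(database, key=lambda k: np.linalg.norm(database[k] - query))

def play_on_agent(laugh_id: str) -> None:
    """Stand-in for Greta: play the laugh audio with its facial animation."""
    print(f"Agent plays laugh '{laugh_id}' with its recorded facial animation")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database = {f"laugh_{i:03d}": extract_features(rng.normal(size=16000))
                for i in range(5)}
    user_laugh = rng.normal(size=16000)          # recorded input laughter
    play_on_agent(most_similar(extract_features(user_laugh), database))
```

In the real system, SSI provides the recording and feature extraction, MediaCycle the similarity search, and Greta the audiovisual rendering.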
SSI/ModelUI - A Tool for the Acquisition and Annotation of Human Generated Signals
"... Humans are used to express their needs and goals through various channels, such as speech, mimic, posture, etc. The recognition and understanding of such behaviour is a key requirement towards a more natural ..."
Abstract
-
Cited by 3 (1 self)
- Add to MetaCart
(Show Context)
Humans are used to expressing their needs and goals through various channels, such as speech, facial expression and posture. The recognition and understanding of such behaviour is a key requirement towards a more natural ...
The Smart Sensor Integration Framework and its Application in EU Projects. In: Workshop on Bioinspired Human-Machine Interfaces and Healthcare Applications (B-Interface 2010)
"... Abstract. Affect sensing by machines is an essential part of next-generation human-computer interaction (HCI). However, despite the large effort carried out in this field during the last decades, only few applications exist, which are able to react to a user’s emotion in real-time. This is certainly ..."
Abstract
-
Cited by 2 (2 self)
- Add to MetaCart
(Show Context)
Affect sensing by machines is an essential part of next-generation human-computer interaction (HCI). However, despite the considerable effort devoted to this field over the last decades, only a few applications exist that are able to react to a user's emotion in real time. This is partly because emotion recognition is challenging in itself, and partly because most effort has so far been put into offline analysis rather than online processing. In response to this deficit we have developed a framework called Smart Sensor Integration (SSI), which considerably jump-starts the development of multimodal online emotion recognition (OER) systems. In this paper, we introduce the SSI framework and describe how it is successfully applied in different projects funded by the European Union, namely the CALLAS and METABO projects and the IRIS network.
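As a rough illustration of what an online, multimodal recognition loop of the kind SSI coordinates looks like, here is a self-contained Python sketch: synchronized sensor streams are buffered into windows, per-modality features are fused, and a classifier emits a label per window. The window size, the features and the toy classifier are assumptions for illustration only, not SSI's actual API or fusion scheme.

```python
# Sketch of an online recognition loop: windowed sensor streams, per-modality
# features, naive feature-level fusion, and a toy classifier producing labels
# in (near) real time. Everything here is illustrative, not the SSI API.
from collections import deque
import random

WINDOW = 50  # samples per decision window (assumed)

def audio_features(window):   # e.g. a loudness proxy
    return [sum(abs(x) for x in window) / len(window)]

def physio_features(window):  # e.g. a mean physiological-level proxy
    return [sum(window) / len(window)]

def classify(features):
    """Toy stand-in for a trained model mapping fused features to a label."""
    return "aroused" if features[0] + features[1] > 1.0 else "calm"

audio_buf, physio_buf = deque(maxlen=WINDOW), deque(maxlen=WINDOW)
for t in range(200):                     # simulated synchronized streams
    audio_buf.append(random.uniform(-1, 1))
    physio_buf.append(random.uniform(0, 1))
    if len(audio_buf) == WINDOW and t % WINDOW == 0:
        fused = audio_features(audio_buf) + physio_features(physio_buf)
        print(t, classify(fused))        # emotion estimate for this window
```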
Gaze Behavior during Interaction with a Virtual Character in Interactive Storytelling
"... In this paper, we present an interactive eye gaze model for embodied conversational agents in order to improve the experience of users participating in Interactive Storytelling. The underlying narrative in which the approach was tested is based on a classical XIX th century psychological novel: Mada ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
In this paper, we present an interactive eye gaze model for embodied conversational agents, intended to improve the experience of users participating in Interactive Storytelling. The underlying narrative in which the approach was tested is based on a classic 19th-century psychological novel, Madame Bovary by Flaubert. At various stages of the narrative, the user can address the main character or respond to her using free-style spoken natural language input, impersonating her lover. An eye tracker was connected so that the interactive gaze model could respond to the user's current gaze (i.e. whether or not the user is looking into the virtual character's eyes). We conducted a study with 19 students in which we compared our interactive eye gaze model with a non-interactive model that was informed by studies of human gaze behaviour but had no information on where the user was looking. The interactive model received higher user ratings than the non-interactive model.
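To make the notion of an "interactive" gaze model concrete, the sketch below implements a simple, hypothetical gaze policy: the agent reciprocates eye contact when the eye tracker reports that the user is looking at her eyes, but breaks mutual gaze after a few seconds to avoid staring. The thresholds and state names are invented and this is not the model evaluated in the paper.

```python
# Hypothetical interactive gaze policy (not the paper's actual model): the
# agent reciprocates eye contact reported by the eye tracker, but breaks
# mutual gaze after a few seconds; thresholds are arbitrary assumptions.
from dataclasses import dataclass

@dataclass
class GazeState:
    mutual_gaze_time: float = 0.0   # seconds of sustained eye contact

def agent_gaze_target(user_looks_at_eyes: bool, dt: float,
                      state: GazeState, max_mutual: float = 3.0) -> str:
    if user_looks_at_eyes:
        state.mutual_gaze_time += dt
        if state.mutual_gaze_time < max_mutual:
            return "user_eyes"       # reciprocate the user's gaze
        return "gaze_aversion"       # look away briefly after long contact
    state.mutual_gaze_time = 0.0
    return "idle_scan"               # default gaze behaviour

state = GazeState()
# Simulate 4 seconds of the user looking at the agent, then 1 second away.
for looking in [True] * 80 + [False] * 20:
    target = agent_gaze_target(looking, dt=0.05, state=state)
    # (a renderer such as an ECA would receive `target` here)
print("final gaze target:", target)
```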
CoMediAnnotate: towards more usable multimedia content annotation by adapting the user interface
"... Abstract — This project aims at improving the user experience re-garding multimedia content annotation. We evaluated and compared current timeline-based annotation tools, so as to elicit user requirements. We address two issues: 1) adapting the user interface, by supporting more input modalities thr ..."
Abstract
-
Cited by 2 (1 self)
- Add to MetaCart
(Show Context)
This project aims at improving the user experience of multimedia content annotation. We evaluated and compared current timeline-based annotation tools in order to elicit user requirements. We address two issues: 1) adapting the user interface, by supporting more input modalities through a rapid prototyping tool and by offering alternative visualization techniques for temporal signals; and 2) covering more steps of the annotation workflow besides the annotation task itself, notably the recording of multimodal signals. We developed input device components for the OpenInterface (OI) platform for rapid prototyping of multimodal interfaces: multitouch screens, jog wheels and pen-based solutions. We modified an annotation tool created with the Smart Sensor Integration (SSI) toolkit and componentized it in OI so as to bind its controls to different input devices. We produced mockup sketches towards a new design of an improved user interface for multimedia content annotation, and started developing a rough prototype using the Processing Development Environment. Our solution makes it possible to produce several prototypes by varying the interaction pipeline: changing input modalities and using either the initial GUI of the annotation tool or the newly designed one. We plan usability testing to validate our solution and determine which combination of input modalities best suits given use cases.
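The componentization idea mentioned here, binding the same annotation controls to interchangeable input devices so that the interaction pipeline can be varied, can be sketched as follows. The class and binding names are hypothetical and do not correspond to the OpenInterface or SSI APIs.

```python
# Hypothetical illustration of binding the same annotation controls to
# interchangeable input components; names do not correspond to the
# OpenInterface (OI) or SSI APIs.
from typing import Callable, Dict

class AnnotationControls:
    """The tool's controls, independent of any particular input device."""
    def toggle_playback(self) -> None:
        print("playback toggled")

    def label_segment(self, tag: str) -> None:
        print(f"current segment labelled '{tag}'")

def bind_jog_wheel(c: AnnotationControls) -> Dict[str, Callable[[], None]]:
    # Jog wheel: pressing toggles playback, turning applies a label.
    return {"press": c.toggle_playback,
            "turn_cw": lambda: c.label_segment("laugh")}

def bind_pen(c: AnnotationControls) -> Dict[str, Callable[[], None]]:
    # Pen: tapping toggles playback, a stroke applies a label.
    return {"tap": c.toggle_playback,
            "stroke": lambda: c.label_segment("smile")}

controls = AnnotationControls()
for binding in (bind_jog_wheel(controls), bind_pen(controls)):
    for event_name, action in binding.items():   # simulate one event each
        action()
```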
cs.uta.fi
"... ac.uk Multimodal conversational dialogue systems consisting of numerous software components create challenges for the underlying software architecture and development practices. Typically, such systems are built on separate, often preexisting components developed by different organizations and integ ..."
Abstract
- Add to MetaCart
Multimodal conversational dialogue systems consisting of numerous software components create challenges for the underlying software architecture and development practices. Typically, such systems are built on separate, often pre-existing components developed by different organizations and integrated in a highly iterative way. The traditional dialogue system pipeline is not flexible enough to address the needs of highly interactive systems, which include parallel processing of multimodal input and output. We present an architectural solution for a multimodal conversational social dialogue system.
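One common alternative to a rigid pipeline, in the spirit of the architecture this abstract argues for, is a publish/subscribe event bus in which input, dialogue management and output components run as loosely coupled subscribers. The sketch below is a generic illustration of that pattern, not the architecture presented in the paper; all topic and component names are invented.

```python
# Generic publish/subscribe sketch: components react to typed events instead
# of being chained in a fixed pipeline, so multimodal input and output can be
# handled independently and integrated iteratively. All names are invented.
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subs[topic]:
            handler(message)

bus = EventBus()

def dialogue_manager(msg: dict) -> None:
    # Decide on a dialogue act from recognized speech and publish it.
    act = "greet_back" if "hello" in msg["text"] else "request_clarification"
    bus.publish("dialogue.act", {"act": act})

bus.subscribe("speech.recognized", dialogue_manager)
bus.subscribe("dialogue.act", lambda m: print("agent says:", m["act"]))
bus.subscribe("gesture.recognized", lambda m: print("gesture noted:", m["name"]))

# Two input components publishing independently of each other.
bus.publish("speech.recognized", {"text": "hello there"})
bus.publish("gesture.recognized", {"name": "wave"})
```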
Towards E-Motion Based Music Retrieval: A Study of Affective Gesture Recognition (IEEE Transactions on Affective Computing)
"... Abstract—The widespread availability of digitised music collections and mobile music players have enabled us to listen to music during many of our daily activities, such as exercise, commuting, relaxation, and many people enjoy that opportunity. A practical problem that comes along with the wish to ..."
Abstract
- Add to MetaCart
(Show Context)
The widespread availability of digitised music collections and mobile music players has enabled us to listen to music during many of our daily activities, such as exercise, commuting and relaxation, and many people enjoy that opportunity. A practical problem that comes with the wish to listen to music is that of music retrieval: selecting the desired music from a music collection. In this paper we propose a new approach to facilitate music retrieval. Modern smartphones are commonly used as music players and are already equipped with inertial sensors that are suitable for obtaining motion information. In the proposed approach, emotion is derived automatically from arm gestures and is used to query a music collection. We set up predictive models for valence and arousal from empirical data, gathered in an experimental setup where inertial data recorded from arm movements is coupled to musical emotion. Part of the experiment is a preliminary study confirming that human subjects are generally capable of recognising affect from arm gestures. Model validation in the main study confirmed the predictive capabilities of the models.
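A minimal sketch of the retrieval idea, under the assumption of pre-trained linear models for valence and arousal and a library of tracks annotated with (valence, arousal) tags, might look as follows; the features, weights and track names are purely illustrative and are not taken from the paper.

```python
# Hedged sketch: map features of an arm gesture (from inertial data) to
# predicted valence/arousal, then retrieve the track whose annotated emotion
# is closest in that 2-D space. Models and features are illustrative only.
import numpy as np

def gesture_features(accel: np.ndarray) -> np.ndarray:
    """A few simple descriptors of a 3-axis accelerometer recording."""
    mag = np.linalg.norm(accel, axis=1)
    return np.array([mag.mean(), mag.std(), np.abs(np.diff(mag)).mean()])

# Assumed pre-trained linear models: (weights, bias) per affect dimension.
W_VALENCE, B_VALENCE = np.array([0.2, 0.5, 0.1]), -0.3
W_AROUSAL, B_AROUSAL = np.array([0.1, 0.3, 0.9]), -0.5

def predict_affect(accel: np.ndarray) -> np.ndarray:
    f = gesture_features(accel)
    return np.array([f @ W_VALENCE + B_VALENCE, f @ W_AROUSAL + B_AROUSAL])

def retrieve(query_va: np.ndarray, library: dict) -> str:
    """Return the track whose (valence, arousal) tag is nearest the query."""
    return min(library, key=lambda t: np.linalg.norm(library[t] - query_va))

if __name__ == "__main__":
    library = {"calm_piano": np.array([0.3, -0.6]),
               "upbeat_pop": np.array([0.7, 0.8]),
               "dark_ambient": np.array([-0.5, -0.2])}
    accel = np.random.default_rng(1).normal(size=(400, 3))  # one gesture
    print(retrieve(predict_affect(accel), library))
```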
Preface
"... Belgium (grant N◦716631). Its main goal is to foster the development of new media technologies through digital performances and installations, in connection with local companies and artists. numediart is organized around three major R&D themes: • HyFORGE- Hypermedia Navigation: Information index ..."
Abstract
- Add to MetaCart
... Belgium (grant N° 716631). Its main goal is to foster the development of new media technologies through digital performances and installations, in connection with local companies and artists. numediart is organized around three major R&D themes:
• HyFORGE - Hypermedia Navigation: Information indexing and retrieval classically rely on constrained languages to describe content automatically and to formulate queries, respectively. This approach is hardly applicable to multimedia content such as music or video because of the disparity between computable low-level descriptors and the desired high-level semantics - the so-called semantic gap. Alternatively, HyFORGE investigates human-in-the-loop approaches and innovative tools for structuring and searching multimedia content. Along with audio and image processing, HyFORGE builds on self-organizing models to derive enhanced views of multimedia collections and provide users with efficient browsing interfaces.
• COMEDIA - Body & Media: COMEDIA is named after a French contraction of either body and media or stage director and media, which nicely sums up the main objective of this axis: giving bodies the means to be their own artistic director. Based on position on stage or choreography between multiple artists for the inter-relationship, and gestures or voice for the intra-relationship, CO- ...