Results 11 - 20 of 109
The automaticity of emotion recognition
- Emotion, 2007
Cited by 28 (10 self)
Evolutionary accounts of emotion typically assume that humans evolved to quickly and efficiently recognize emotion expressions because these expressions convey fitness-enhancing messages. The present research tested this assumption in 2 studies. Specifically, the authors examined (a) how quickly perceivers could recognize expressions of anger, contempt, disgust, embarrassment, fear, happiness, pride, sadness, shame, and surprise; (b) whether accuracy is improved when perceivers deliberate about each expression’s meaning (vs. respond as quickly as possible); and (c) whether accurate recognition can occur under cognitive load. Across both studies, perceivers quickly and efficiently (i.e., under cognitive load) recognized most emotion expressions, including the self-conscious emotions of pride, embarrassment, and shame. Deliberation improved accuracy in some cases, but these improvements were relatively small. Discussion focuses on the implications of these findings for the cognitive processes underlying emotion recognition.
Social Signal Processing: State-of-the-art and future perspectives of an emerging domain
- In Proceedings of the ACM International Conference on Multimedia, 2008
Cited by 27 (7 self)
The ability to understand and manage the social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable, and perhaps the most important, for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence – the ability to recognize human social signals and social behaviours like politeness and disagreement – in order to become more effective and more efficient. Although each of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, and laughter, the design and development of automated systems for Social Signal Processing (SSP) are rather difficult. This paper surveys past efforts to solve these problems by computer, summarizes the relevant findings in social psychology, and proposes a set of recommendations for enabling the development of the next generation of socially-aware computing.
Audiovisual Laughter Detection Based on Temporal Features
Cited by 22 (8 self)
Previous research on automatic laughter detection has mainly focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features, and we show that integrating the information from the audio and video channels leads to improved performance over single-modal approaches. Static features are extracted on an audio/video frame basis and then combined with temporal features extracted over a temporal window, describing the evolution of the static features over time. Several different temporal features were investigated, and adding temporal information was shown to improve performance over using static information only. It is common to use a fixed set of temporal features, which implies that all static features exhibit the same behaviour over a temporal window. However, this does not always hold, and we show that when AdaBoost is used as a feature selector, different temporal features are selected for each static feature, i.e., the temporal evolution of each static feature is described by different statistical measures. When tested in a person-independent way on 96 audiovisual sequences depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, the proposed audiovisual approach achieves an F1 rate of over 89%.
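The key idea in this abstract is that each static feature's trajectory over a temporal window is summarized by several statistical measures, and AdaBoost, used as a feature selector, may pick a different measure for each static feature. A minimal sketch of that idea follows, using toy data and plain decision-stump AdaBoost; the descriptors and data are illustrative assumptions, not the paper's actual feature set or implementation:

```python
import math
import statistics

def temporal_descriptors(stream, win):
    """Summarise the evolution of one static feature over consecutive
    temporal windows with several statistical measures."""
    descs = []
    for t in range(0, len(stream) - win + 1, win):
        w = stream[t:t + win]
        descs.append({"mean": statistics.fmean(w),
                      "std": statistics.pstdev(w),
                      "range": max(w) - min(w)})
    return descs

def best_stump(X, y, wts):
    # Weighted decision stump: pick the (column, threshold, polarity)
    # with the lowest weighted error. Here each column stands for one
    # (static feature, temporal descriptor) pair.
    best = None
    for j in range(len(X[0])):
        for thr in sorted({x[j] for x in X}):
            for pol in (1, -1):
                preds = [pol if x[j] >= thr else -pol for x in X]
                err = sum(w for w, p, t in zip(wts, preds, y) if p != t)
                if best is None or err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost_select(X, y, rounds=3):
    """Run AdaBoost with stumps; the sequence of chosen columns acts as
    feature selection, so each static feature can end up described by a
    different temporal statistic."""
    n = len(X)
    wts = [1.0 / n] * n
    chosen = []
    for _ in range(rounds):
        err, j, thr, pol = best_stump(X, y, wts)
        chosen.append(j)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        preds = [pol if x[j] >= thr else -pol for x in X]
        # Reweight: misclassified samples gain weight for the next round.
        wts = [w * math.exp(-alpha * p * t) for w, p, t in zip(wts, preds, y)]
        total = sum(wts)
        wts = [w / total for w in wts]
    return chosen
```

On toy data where only the second column separates the classes, the first boosting round selects that column, which is the selection behaviour the abstract exploits.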
The thrill of victory and the agony of defeat: spontaneous expressions of medal winners of the 2004 Athens Olympic Games
- Journal of Personality & Social Psychology, 2006
Cited by 22 (4 self)
Facial behaviors of medal winners of the judo competition at the 2004 Athens Olympic Games were coded with P. Ekman and W. V. Friesen's (1978) Facial Action Coding System (FACS) and interpreted using their Emotion FACS dictionary. Winners' spontaneous expressions were captured immediately when they completed medal matches, when they received their medal from a dignitary, and when they posed on the podium. The 84 athletes who contributed expressions came from 35 countries. The findings strongly supported the notion that expressions occur in relation to emotionally evocative contexts in people of all cultures, that these expressions correspond to the facial expressions of emotion considered to be universal, that expressions provide information that can reliably differentiate the antecedent situations that produced them, and that expressions that occur without inhibition are different from those that occur in social and interactive settings.
The Facial Expression Coding System (FACES): Development, validation, and utility
- Psychological Assessment, 2007
Cited by 20 (1 self)
This article presents information on the development and validation of the Facial Expression Coding System (FACES).
Can Duchenne smiles be feigned? New evidence on felt and false smiles
- Emotion, 2009
Cited by 18 (2 self)
We investigated the value of the Duchenne (D) smile as a spontaneous sign of felt enjoyment. Participants either smiled spontaneously in response to amusing material (spontaneous condition) or were instructed to pose a smile (deliberate condition). Similar amounts of D and non-Duchenne (ND) smiles were observed in these 2 conditions (Experiment 1). When subsets of these smiles were presented to other participants, they generally rated spontaneous and deliberate D and ND smiles differently. Moreover, they distinguished between D smiles of varying intensity within the spontaneous condition (Experiment 2). Such a differentiation was also made when seeing the upper or lower face only (Experiment 3), but was impaired for static compared with dynamic displays (Experiment 4). The predictive value of the D smile in these judgment studies was limited compared with other features such as asymmetry, apex duration, and nonpositive facial actions, and was only significant for ratings of the upper face and static displays. These findings raise doubts about the reliability and validity of the D smile and question the usefulness of facial descriptions in identifying true feelings of enjoyment.
Regulating positive and negative emotions in daily life
- Journal of Personality, 2008
Cited by 18 (0 self)
The present study examined how people regulate their emotions in daily life and how such regulation is related to their daily affective experience and psychological adjustment. Each day for an average of 3 weeks, participants described how they had regulated their emotions in terms of the reappraisal and suppression (inhibiting the expression) of positive and negative emotions, and they described their emotional experience, self-esteem, and psychological adjustment in terms of Beck's triadic model of depression. Reappraisal was used more often than suppression, and suppressing positive emotions was used less than the other three strategies. In general, regulation through reappraisal was found to be beneficial, whereas regulation by suppression was not. Reappraisal of positive emotions was associated with increases in positive affect, self-esteem, and psychological adjustment, whereas suppressing positive emotions was associated with decreased positive emotion, self-esteem, and psychological adjustment, and with increased negative emotions. Moreover, the relationships between reappraisal and psychological adjustment and self-esteem were mediated by experienced positive affect, whereas the relationships between suppression of positive emotions and self-esteem and adjustment were mediated by negative affect. Emotions are central components of people's lives, both interpersonally and intrapersonally, and emotional experiences can have powerful impacts on people's functioning, both positive and negative.
Social signals, their function, and automatic analysis: a survey
- In Proceedings of the International Conference on Multimodal Interfaces, 2008
Cited by 14 (2 self)
Social Signal Processing (SSP) aims at the analysis of social behaviour in both human-human and human-computer interactions. SSP revolves around the automatic sensing and interpretation of social signals: complex aggregates of nonverbal behaviours through which individuals express their attitudes towards other human (and virtual) participants in the current social context. As such, SSP integrates both engineering (speech analysis, computer vision, etc.) and the human sciences (social psychology, anthropology, etc.), as it requires multimodal and multidisciplinary approaches. As of today, SSP is still in its infancy, but the domain is developing quickly, and a growing body of work is appearing in the literature. This paper provides an introduction to the nonverbal behaviour involved in social signals and a survey of the main results obtained so far in SSP. It also outlines the possibilities and challenges that SSP is expected to face in the coming years if it is to reach full maturity.
Fusion of audio and visual cues for laughter detection
- In ACM International Conference on Image and Video Retrieval, 2008
Cited by 13 (6 self)
Past research on automatic laughter detection has focused mainly on audio-based detection. Here we present an audiovisual approach to distinguishing laughter from speech, and we show that integrating the information from the audio and video channels leads to improved performance over single-modal approaches. Each channel consists of two streams (cues): facial expressions and head movements for video, and spectral and prosodic features for audio. We used decision-level fusion to integrate the information from the two channels and experimented with the SUM rule and a neural network as the integration functions. The results indicate that even a simple linear function such as the SUM rule achieves very good performance in audiovisual fusion. We also experimented with different combinations of cues, the most informative being the facial expressions and the spectral features. The best combination of cues is the integration of facial expressions, spectral, and prosodic features when a neural network is used as the fusion method. When tested in a person-independent way on 96 audiovisual sequences depicting spontaneously displayed (as opposed to posed) laughter and speech episodes, the proposed audiovisual approach achieves over 90% recall and over 80% precision.
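Decision-level fusion with the SUM rule amounts to averaging the per-cue posteriors for the laughter class and thresholding the result; recall and precision are then measured on the fused decisions. A minimal sketch of that scheme (the 0.5 threshold and the toy posteriors are illustrative assumptions, not the paper's setup):

```python
def sum_rule_fusion(posteriors, threshold=0.5):
    """Fuse per-cue laughter posteriors (e.g. facial expressions, head
    movements, spectral, prosodic) with the SUM rule: average them and
    threshold the fused score."""
    fused = sum(posteriors) / len(posteriors)
    return "laughter" if fused > threshold else "speech"

def precision_recall(preds, truth, pos="laughter"):
    # Precision = TP / (TP + FP), recall = TP / (TP + FN),
    # with "laughter" as the positive class.
    tp = sum(1 for p, t in zip(preds, truth) if p == pos and t == pos)
    fp = sum(1 for p, t in zip(preds, truth) if p == pos and t != pos)
    fn = sum(1 for p, t in zip(preds, truth) if p != pos and t == pos)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Because the SUM rule is just a fixed linear combination, it needs no training, which helps explain the abstract's observation that even this simple function performs well; the neural-network fusion the paper prefers learns a nonlinear combination instead.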
Context-sensitive learning for enhanced audiovisual emotion classification
- IEEE Transactions on Affective Computing, 2011
Cited by 11 (3 self)
Human emotional expression tends to evolve in a structured manner, in the sense that certain emotional evolution patterns, e.g., anger to anger, are more probable than others, e.g., anger to happiness. Furthermore, the perception of an emotional display can be affected by recent emotional displays. Therefore, the emotional content of past and future observations could offer relevant temporal context when classifying the emotional content of an observation. In this work, we focus on audio-visual recognition of the emotional content of improvised emotional interactions at the utterance level. We examine context-sensitive schemes for emotion recognition within a multimodal, hierarchical approach: bidirectional Long Short-Term Memory (BLSTM) neural networks, hierarchical Hidden Markov Model (HMM) classifiers, and hybrid HMM/BLSTM classifiers are considered for modeling emotion evolution within an utterance and between utterances over the course of a dialog. Overall, our experimental results indicate that incorporating long-term temporal context is beneficial for emotion recognition systems that encounter a variety of emotional manifestations. Context-sensitive approaches outperform those without context on classification tasks such as discrimination between valence levels or between clusters in the valence-activation space. The analysis of emotional transitions in our database sheds light on the flow of affective expressions, revealing potentially useful patterns.
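The intuition that anger-to-anger transitions are more probable than anger-to-happiness can be captured by decoding per-utterance classifier scores against a transition prior, in the spirit of the paper's HMM-based schemes. The sketch below is illustrative only: the three-emotion set, the 0.8/0.1 transition probabilities, and plain Viterbi decoding stand in for the paper's BLSTM/HMM classifiers:

```python
import math

EMOTIONS = ["anger", "happiness", "sadness"]

# Transition prior: staying in the same emotion (e.g. anger -> anger) is
# more probable than switching (e.g. anger -> happiness). These values
# are assumptions, not estimates from the paper's database.
TRANS = {a: {b: math.log(0.8 if a == b else 0.1) for b in EMOTIONS}
         for a in EMOTIONS}

def viterbi(utterance_scores):
    """Decode the most likely emotion sequence over a dialog, combining
    per-utterance classifier log-scores with the transition prior."""
    V = [dict(utterance_scores[0])]   # best log-score ending in each emotion
    back = []                         # backpointers per step
    for scores in utterance_scores[1:]:
        col, ptr = {}, {}
        for e in EMOTIONS:
            prev = max(EMOTIONS, key=lambda p: V[-1][p] + TRANS[p][e])
            col[e] = V[-1][prev] + TRANS[prev][e] + scores[e]
            ptr[e] = prev
        V.append(col)
        back.append(ptr)
    # Backtrack from the best final emotion.
    last = max(EMOTIONS, key=lambda e: V[-1][e])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With this prior, an utterance whose local classifier narrowly favours happiness between two confident anger utterances is smoothed to anger, which is the kind of benefit the abstract reports for context-sensitive approaches.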