Results 1 - 10 of 57
Social Signal Processing: Survey of an Emerging Domain
, 2008
"... The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next- ..."
Abstract - Cited by 153 (32 self)
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence – the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement – in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and the like, the design and development of automated systems for Social Signal Processing (SSP) are rather difficult. This paper surveys past efforts to solve these problems by computer, summarizes the relevant findings in social psychology, and proposes a set of recommendations for enabling the development of the next generation of socially-aware computing.
The impacts of emoticons on message interpretation in computer-mediated communication
- Social Science Computer Review
, 2001
"... Emoticons are graphic representations of facial expressions that many e-mail users embed in their messages. These symbols are widely known and commonly recognized among computer-mediated communication (CMC) users, and they are described by most observers as substituting for the nonverbal cues that a ..."
Abstract - Cited by 83 (0 self)
Emoticons are graphic representations of facial expressions that many e-mail users embed in their messages. These symbols are widely known and commonly recognized among computer-mediated communication (CMC) users, and they are described by most observers as substituting for the nonverbal cues that are missing from CMC in comparison to face-to-face communication. Their empirical impacts, however, are undocumented. An experiment sought to determine the effects of three common emoticons on message interpretations. Hypotheses drawn from the literature on nonverbal communication reflect several plausible relationships between emoticons and verbal messages. The results indicate that emoticons' contributions were outweighed by verbal content, but a negativity effect appeared such that any negative message aspect, verbal or graphic, shifted message interpretation in the direction of the negative element.
The perception of emotion by ear and by eye
- Cognition and Emotion
, 2000
"... Emotions are expressed in the voice as well as on the face. As a ® rst step to explore the question of their integration, we used a bimodal perception situation modelled after the McGurk paradigm, in which varying degrees of discordance can be created between the affects expressed in a face and in a ..."
Abstract - Cited by 78 (6 self)
Emotions are expressed in the voice as well as on the face. As a first step to explore the question of their integration, we used a bimodal perception situation modelled after the McGurk paradigm, in which varying degrees of discordance can be created between the affects expressed in a face and in a tone of voice. Experiment 1 showed that subjects can effectively combine information from the two sources, in that identification of the emotion in the face is biased in the direction of the simultaneously presented tone of voice. Experiment 2 showed that this effect occurs also under instructions to base the judgement exclusively on the face. Experiment 3 showed the reverse effect, a bias from the emotion in the face on judgement of the emotion in the voice. These results strongly suggest the existence of mandatory bidirectional links between affect detection structures in vision and audition. A now classical volume entitled Language by Ear and by Eye (Kavanagh & Mattingly, 1972) presents a number of studies drawing attention to the fact that language is presented to two different modalities, to the eyes in reading ...
Social Signal Processing: State-of-the-art and future perspectives of an emerging domain
- In Proceedings of the ACM International Conference on Multimedia
, 2008
"... The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next- ..."
Abstract - Cited by 27 (7 self)
The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence – the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement – in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and the like, the design and development of automated systems for Social Signal Processing (SSP) are rather difficult. This paper surveys past efforts to solve these problems by computer, summarizes the relevant findings in social psychology, and proposes a set of recommendations for enabling the development of the next generation of socially-aware computing.
Building autonomous sensitive artificial listeners
- IEEE Transactions on Affective Computing
"... © 2011 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to ..."
Cited by 19 (7 self)
Sources of accuracy in the empathic accuracy paradigm
- Emotion
, 2007
"... In the empathic accuracy paradigm, perceivers make inferences about the naturalistically occurring thoughts and feelings of stimulus persons, and these inferences are scored for accuracy against the stimulus persons ’ self-reported thoughts and feelings. The present study investigated sources of acc ..."
Abstract - Cited by 18 (1 self)
In the empathic accuracy paradigm, perceivers make inferences about the naturalistically occurring thoughts and feelings of stimulus persons, and these inferences are scored for accuracy against the stimulus persons' self-reported thoughts and feelings. The present study investigated sources of accuracy in this paradigm by presenting the stimulus tape in several cue modalities (full video, audio, transcript, or silent video) and with differing instructions (infer thoughts and feelings, infer thoughts, or infer feelings). Verbal information contributed the most to accuracy, followed by vocal nonverbal cues. Visual nonverbal cues contributed the least, though still at levels above zero. When asked to infer feelings, perceivers appeared to shift attention toward visual nonverbal cues and away from verbal cues, and the reverse occurred when they were asked to infer thoughts. The study contributes to an understanding of the factors that drive accuracy in the empathic accuracy paradigm.
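The scoring step in this paradigm also admits a simple illustration. The sketch below is not taken from the paper; it is a minimal, hypothetical example of how a per-perceiver accuracy score could be aggregated, assuming each written inference has already been rated by independent judges for similarity to the target's self-reported thought or feeling on a bounded 0-2 scale (the scale and the averaging are assumptions, not the study's exact protocol).

```python
from statistics import mean

def empathic_accuracy(similarity_ratings, max_rating=2.0):
    """Aggregate judge-assigned similarity ratings into one accuracy score.

    similarity_ratings: one value per inference, each already averaged across
    judges (hypothetical protocol). Returns obtained / possible points in [0, 1].
    """
    if not similarity_ratings:
        return 0.0
    return mean(similarity_ratings) / max_rating

# Hypothetical ratings for one perceiver's five inferred thoughts/feelings.
print(round(empathic_accuracy([2.0, 1.5, 0.5, 1.0, 2.0]), 2))  # 0.7
```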
Attributions of personality based on physical appearance, speech, and handwriting
- Journal of Personality and Social Psychology
, 1986
"... The effect of facial appearance, speech style, and handwriting on personality attributions was examined. The source consistency hypothesis predicted that an actor will receive consistent attributions across all three types of information. The differential information hypothesis predicted that differ ..."
Abstract - Cited by 14 (0 self)
The effect of facial appearance, speech style, and handwriting on personality attributions was examined. The source consistency hypothesis predicted that an actor would receive consistent attributions across all three types of information. The differential information hypothesis predicted that different personality dimensions are used to differentiate the actors within each type of information. In a 3 × 6 multivariate analysis of variance (MANOVA) design, each judge rated a single actor/information combination on scales of social evaluation, intellectual evaluation, activity, potency, emotionality, and sociability. Photographs of actors were differentiated primarily in terms of positive social and intellectual evaluation; the speech of actors was differentiated primarily along an activity dimension; and the writing of the actors was differentiated primarily along a potency dimension. This study supported the differential information hypothesis and suggested that these three types of information about an actor may lead judges to use different personality dimensions.

Person perception studies have shown that observers readily make attributions about the personality traits, abilities, and emotions of other persons based on limited information. Three types of information have been extensively studied: facial appearance, expressive noncontent characteristics of speech (such as pitch, tone, and tempo), and handwriting. Numerous studies have reported that facial features and expression influence attributions about the attractiveness, pleasantness, intellectual and social skills, and mental health of the target person (Adams, 1977;
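The 3 × 6 MANOVA named in this abstract maps directly onto standard statistical tooling. The sketch below is not the authors' analysis and uses synthetic placeholder ratings; it only illustrates, under assumed column names, how such a design (information type × actor, six dependent rating scales) could be tested with statsmodels.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
scales = ["social_eval", "intellect_eval", "activity",
          "potency", "emotionality", "sociability"]

# Synthetic placeholder data (not the study's data): 10 judges per cell of the
# 3 information types x 6 actors design, each judge rating all six scales.
rows = []
for info in ["photo", "speech", "handwriting"]:
    for actor in "ABCDEF":
        for _ in range(10):
            rows.append({"info": info, "actor": actor,
                         **{s: rng.normal() for s in scales}})
df = pd.DataFrame(rows)

# Multivariate tests (Wilks' lambda, etc.) for the crossed design on all six scales.
fit = MANOVA.from_formula(
    "social_eval + intellect_eval + activity + potency + emotionality + sociability"
    " ~ info * actor", data=df)
print(fit.mv_test())
```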
Manipulating Uncertainty: The Contribution of Different Audiovisual Prosodic Cues to the Perception of Confidence
- Proceedings of the Speech Prosody Conference
, 2006
"... When answering factual questions, speakers can signal whether they are uncertain about the correctness of their answer using prosodic cues such as fillers (“uh”), a rising intonation contour or a marked facial expression. It has been shown that on the basis of such cues, observers can make adequate ..."
Abstract - Cited by 6 (0 self)
When answering factual questions, speakers can signal whether they are uncertain about the correctness of their answer using prosodic cues such as fillers (“uh”), a rising intonation contour or a marked facial expression. It has been shown that on the basis of such cues, observers can make adequate estimates about the speaker’s level of confidence, but it is unclear which of these cues have the largest impact on perception. To find the relative strength of the three aforementioned cues, a novel perception experiment was performed in which answers were artificially manipulated in such a way that all possible combinations of the cues of interest could be judged by participants. Results showed that while all three factors had a significant influence on the perception results, this effect was by far the largest for facial expressions.
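The manipulation described here amounts to a full factorial crossing of the three cues. As a minimal sketch (not the authors' stimulus-preparation code), assuming each cue is reduced to a binary certain/uncertain variant for illustration, the resulting stimulus conditions can be enumerated as follows:

```python
from itertools import product

# Binary variants assumed purely for illustration; the actual stimuli were
# edited audiovisual recordings, not labels like these.
CUES = {
    "filler": ["absent", "uh"],
    "intonation": ["falling", "rising"],
    "facial_expression": ["neutral", "marked"],
}

conditions = [dict(zip(CUES, combo)) for combo in product(*CUES.values())]
for i, cond in enumerate(conditions, 1):
    print(i, cond)  # 2 x 2 x 2 = 8 conditions, each judged by participants
```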
Invited article: Why professionals fail to catch liars and how they can improve
- Legal and Criminological Psychology
, 2004
"... In the first part of this article, I will briefly review research findings that show that professional lie catchers, such as police officers, are generally rather poor at distinguishing between truths and lies. I believe that there are many reasons contributing towards this poor ability, and will gi ..."
Abstract - Cited by 5 (2 self)
In the first part of this article, I will briefly review research findings that show that professional lie catchers, such as police officers, are generally rather poor at distinguishing between truths and lies. I believe that there are many reasons contributing towards this poor ability, and will give an overview of these reasons in the second part of this article. I also believe that professionals could become better lie detectors and will explain how in the final part of this article.
Cardiff conversation database (CCDb): A database of natural dyadic conversations
- In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR) Workshops
, 2013
"... ..."
(Show Context)