

Dynamic cascades with bidirectional bootstrapping for action unit detection in spontaneous facial behavior (2011)

by Yunfeng Zhu, F. De la Torre, J. F. Cohn, Yu-Jin Zhang
Venue: IEEE Transactions on Affective Computing

Results 11 - 17 of 17

Meta-Analysis of the First Facial . . .

by Michel Valstar, Marc Mehu, Bihan Jiang, Maja Pantic, Klaus Scherer, 2002
"... an active topic in computer science for over two decades, in particular Facial Action Coding System (FACS) Action Unit ..."

Emotion inference from

by Daniel Bernhardt, 2010
"... Number 787 ..."

Vision and Attention Theory Based Sampling for Continuous Facial Emotion Recognition

by Albert C. Cruz, Bir Bhanu, Ninad S. Thakoor
Abstract—Affective computing—the emergent field in which computers detect emotions and project appropriate expressions of their own—has reached a bottleneck: algorithms are not yet able to infer a person's emotions from natural and spontaneous facial expressions captured in video. While the field of emotion recognition has seen many advances in the past decade, no facial emotion recognition approach has yet been shown to perform well in unconstrained settings. In this paper, we propose a principled method that addresses the temporal dynamics of facial emotions and expressions in video with a sampling approach inspired by human perceptual psychology. We test the efficacy of the method on the Audio/Visual Emotion Challenge 2011 and 2012, Cohn-Kanade, and the MMI Facial Expression Database. The method shows an average improvement of 9.8 percent over the baseline for weighted accuracy on the Audio/Visual Emotion Challenge 2011 video-based frame-level subchallenge testing set.
Index Terms—Facial expressions, audio/visual emotion challenge, sampling and interpolation

Learning Temporal Alignment Uncertainty for Efficient Event Detection

by unknown authors
"... Abstract—In this paper we tackle the problem of efficient video event detection. We argue that linear detection functions should be preferred in this regard due to their scalability and ef-ficiency during estimation and evaluation. A popular approach in this regard is to represent a sequence using a ..."
Abstract - Add to MetaCart
Abstract—In this paper we tackle the problem of efficient video event detection. We argue that linear detection functions should be preferred in this regard due to their scalability and ef-ficiency during estimation and evaluation. A popular approach in this regard is to represent a sequence using a bag of words (BOW) representation due to its: (i) fixed dimensionality irrespective of the sequence length, and (ii) its ability to compactly model the statistics in the sequence. A drawback to the BOW representation, however, is the intrinsic destruction of the temporal ordering information. In this paper we propose a new representation that leverages the uncertainty in relative temporal alignments between pairs of sequences while not destroying temporal ordering. Our representation, like BOW, is of a fixed dimensionality making it easily integrated with a linear detection function. Extensive experiments on CK+, 6DMG, and UvA-NEMO databases show significant performance improvements across both isolated and continuous event detection tasks. I.

Citation Context

...ry codebook encoding to a new frame. IV. FEATURE EXTRACTION FROM VIDEO A. Feature Extraction There are two general approaches for video feature extraction, shape-based [14], [15] and appearance-based [16], [17] methods. Common to all appearance-based methods, they have some limitations due to changes in camera view, illumination Algorithm 1 Our Approach (Continuous Event Detection) Input : Input examp...

in the Facial Expression Recognition and Analysis (FERA2015)

by Hua Gao
"... Abstract — This article describes a system for participation ..."
Abstract - Add to MetaCart
Abstract — This article describes a system for participation

Citation Context

...d dynamic 2D or 3D action unit detection because of their efficiency, e.g. [7], [8]. SIFT (Scale Invariant Feature Transform) descriptors have also been used efficiently within various frameworks ([9], [10]). We participate in the challenge with a system that also uses SIFT as features on an enhanced set of facial landmarks that includes points around transient facial features with support vector machin...

tional challenge on Facial Emotion Recognition and Analysis. We

by Thibaud Senechal, Hanan Salam, Renaud Seguier, Kevin Bailly, Lionel Prevost
"... Abstract—This paper presents our response to the first interna- ..."
Abstract - Add to MetaCart
Abstract—This paper presents our response to the first interna-

Citation Context

...ods remove in-plane rotation and scaling according to the eyes localization, while other approaches try to remove small 3D rigid head motion using an affine transformation or a piece-wise affine warp [16]. Whatever the methods are, image registration relies on preliminary face detection and facial landmarks localization. Face detection is usually based on the public OpenCV face detector designed by Vi...
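The eye-based registration step this context mentions (removing in-plane rotation and scale according to the eye locations) can be sketched as a 2D similarity transform; the function name and the canonical target eye positions below are illustrative assumptions, not the cited paper's actual procedure:

```python
import numpy as np

def eye_alignment_transform(left_eye, right_eye, target_left, target_right):
    """Similarity transform (uniform scale + rotation + translation) that
    maps detected eye centers onto canonical target positions."""
    src = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst = np.asarray(target_right, float) - np.asarray(target_left, float)
    # Scale from the ratio of inter-ocular distances, rotation from the
    # angle between the two eye-to-eye vectors.
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = np.cos(angle) * scale, np.sin(angle) * scale
    A = np.array([[c, -s], [s, c]])
    t = np.asarray(target_left, float) - A @ np.asarray(left_eye, float)
    return A, t

# Hypothetical detected eyes, mapped to made-up canonical positions.
A, t = eye_alignment_transform((30, 40), (70, 40), (32, 32), (96, 32))

# Both eye centers land exactly on their targets.
assert np.allclose(A @ np.array([30., 40.]) + t, [32., 32.])
assert np.allclose(A @ np.array([70., 40.]) + t, [96., 32.])
```

Every pixel of the face image would then be warped with the same `A`, `t`, which removes in-plane rotation and scale but, as the context notes, not out-of-plane 3D head motion.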

Contemporary Challenges for a Social Signal Processing

by Dr. T. Kishorekumar, K. Sunilkumar
"... This paper provides a short overview of Social Signal Processing. The exploration of how we react to the world and interact with it and each other remains one of the greatest scientific challenges. Latest research trends in cognitive sciences argue that our common view of intelligence is too narrow, ..."
Abstract - Add to MetaCart
This paper provides a short overview of Social Signal Processing. The exploration of how we react to the world and interact with it and each other remains one of the greatest scientific challenges. Latest research trends in cognitive sciences argue that our common view of intelligence is too narrow, ignoring a crucial range of abilities that matter immensely for how people do in life. This range of abilities is called social intelligence and includes the ability to express and recognize social signals produced during social interactions like agreement, politeness, empathy, friendliness, conflict, etc., coupled with the ability to manage them in order to get along well with others while winning their cooperation. Social Signal Processing (SSP) is the new research domain that aims at understanding and modeling social interactions (human-science goals), and at providing computers with similar abilities in human-computer interaction scenarios (technological goals). SSP is in its infancy and the journey towards artificial social intelligence and socially-aware computing is still long, the paper outlines its future perspectives and some of its most promising applications.

Citation Context

... smarter training set selection for subsequent learning through a dynamic cascade bidirectional bootstrapping scheme and report some of the best results so far in AU detection on the RU-FACS database [17]. Face analysis is also used for performing mutual gaze following and joint attention actions. Joint attention is the ability of coordination of a common point of reference with the communicating party...
