Results 1–5 of 5
Appearance-Based Gaze Estimation in the Wild
Abstract (Cited by 3, 1 self)
Appearance-based gaze estimation is believed to work well in real-world settings, but existing datasets have been collected under controlled laboratory conditions, and methods have not been evaluated across multiple datasets. In this work we study appearance-based gaze estimation in the wild. We present the MPIIGaze dataset, which contains 213,659 images collected from 15 participants during natural everyday laptop use over more than three months. Our dataset is significantly more variable than existing ones with respect to appearance and illumination. We also present a method for in-the-wild appearance-based gaze estimation using multimodal convolutional neural networks that significantly outperforms state-of-the-art methods in the most challenging cross-dataset evaluation. We present an extensive evaluation of several state-of-the-art image-based gaze estimation algorithms on three current datasets, including our own. This evaluation provides clear insights and allows us to identify key research challenges of gaze estimation in the wild.
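The cross-dataset comparisons above are typically scored by the angular error between predicted and ground-truth gaze directions. A minimal sketch of that metric, assuming gaze is represented as 3D direction vectors (the exact evaluation protocol varies per paper):

```python
import math

def angular_error_deg(g_pred, g_true):
    """Angle in degrees between two 3D gaze direction vectors."""
    dot = sum(a * b for a, b in zip(g_pred, g_true))
    norm = math.sqrt(sum(a * a for a in g_pred)) * math.sqrt(sum(b * b for b in g_true))
    # Clamp to guard against floating-point drift outside [-1, 1].
    cosang = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cosang))

def mean_angular_error_deg(preds, trues):
    """Mean angular error over a set of predictions (the number usually reported)."""
    return sum(angular_error_deg(p, t) for p, t in zip(preds, trues)) / len(preds)
```

In a cross-dataset evaluation, the model is trained entirely on one dataset and this error is computed on another, which is why the abstract calls it the most challenging setting.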
Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
Abstract (Cited by 1, 0 self)
Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model’s controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by outperforming state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.
TabletGaze: Unconstrained Appearance-based Gaze Estimation in Mobile Tablets (2015)
Abstract
We study gaze estimation on tablets; our key design goal is uncalibrated gaze estimation using the front-facing camera during natural use of tablets, where the posture and the method of holding the tablet are not constrained. We collected the first large unconstrained gaze dataset of tablet users, the Rice TabletGaze dataset. The dataset consists of 51 subjects, each with 4 different postures and 35 gaze locations. Subjects vary in race, gender, and need for prescription glasses, all of which might impact gaze estimation accuracy. Driven by our observations on the collected data, we present the TabletGaze algorithm for automatic gaze estimation using multi-level HoG features and a Random Forests regressor. The TabletGaze algorithm achieves a mean error of 3.17 cm. We perform an extensive evaluation of the impact of various factors, such as dataset size, race, wearing glasses, and user posture, on gaze estimation accuracy and make important observations about the impact of these factors.
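The multi-level HoG features mentioned above build on gradient-orientation histograms computed over image cells. A simplified pure-Python sketch of that core step (the cell size, bin count, and the multi-scale and block-normalization details here are assumptions, not the paper's exact parameters; the paper feeds such features to a Random Forests regressor):

```python
import math

def hog_cells(image, cell=4, bins=9):
    """Gradient-orientation histograms over non-overlapping cells.

    `image` is a 2D list of grayscale values. This is only the core of a
    HoG descriptor; a full pipeline would add block normalization and, per
    the paper, compute it at several scales ("multi-level").
    """
    h, w = len(image), len(image[0])
    feats = []
    for cy in range(0, h - cell + 1, cell):
        for cx in range(0, w - cell + 1, cell):
            hist = [0.0] * bins
            for y in range(cy, cy + cell):
                for x in range(cx, cx + cell):
                    # Central differences with clamped borders.
                    gx = image[y][min(x + 1, w - 1)] - image[y][max(x - 1, 0)]
                    gy = image[min(y + 1, h - 1)][x] - image[max(y - 1, 0)][x]
                    mag = math.hypot(gx, gy)
                    ang = math.atan2(gy, gx) % math.pi  # unsigned orientation
                    hist[min(int(ang / math.pi * bins), bins - 1)] += mag
            feats.extend(hist)
    return feats
```

For an 8x8 eye patch this yields 4 cells of 9 bins each, i.e. a 36-dimensional vector; a strong vertical edge concentrates its energy in the horizontal-gradient bin.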
INTEL REALSENSE = REAL LOW COST GAZE
Abstract
Intel’s newly announced low-cost RealSense 3D camera claims significantly better precision than other currently available low-cost platforms and is expected to become ubiquitous in laptops and mobile devices starting this year. In this paper, we demonstrate for the first time that the RealSense camera can be easily converted into a real low-cost gaze tracker. Gaze has become increasingly relevant as an input for human-computer interaction due to its association with attention, and it is also critical in clinical mental health diagnosis. We present a novel 3D gaze and fixation tracker based on the eye surface geometry captured with the RealSense 3D camera. First, eye surface 3D point clouds are segmented to extract the pupil center and iris using registered infrared images. With non-ellipsoid eye surface and single fixation point assumptions, pupil centers and iris normal vectors are used to first estimate gaze (for each eye), and then a single fixation point for both eyes simultaneously using a RANSAC-based approach. With a simple learned bias field correction model, the fixation tracker demonstrates a mean error of approximately 1 cm at 20–30 cm, which is adequate for gaze and fixation tracking in human-computer interaction and mental health diagnosis applications.
Index Terms — gaze tracker, fixation tracker, depth camera, mental health, human-computer interaction
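The RANSAC-based fixation step described above can be illustrated as triangulating a single 3D point from per-eye gaze rays: hypothesize a point from a pair of rays, then score it by how many rays pass close to it. A sketch under stated assumptions (the inlier threshold and exhaustive pair enumeration are illustrative choices, not the paper's exact procedure):

```python
import itertools
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, k): return tuple(x * k for x in a)
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))

def line_point_dist(origin, direction, p):
    """Distance from point p to the infinite line origin + t*direction."""
    u = scale(direction, 1.0 / norm(direction))
    w = sub(p, origin)
    return norm(sub(w, scale(u, dot(w, u))))

def closest_midpoint(o1, d1, o2, d2):
    """Midpoint of the common perpendicular of two non-parallel 3D lines."""
    w0 = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d_, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    t = (b * e - c * d_) / denom   # parameter along line 1
    s = (a * e - b * d_) / denom   # parameter along line 2
    return scale(add(add(o1, scale(d1, t)), add(o2, scale(d2, s))), 0.5)

def ransac_fixation(rays, inlier_thresh=0.5):
    """Hypothesize a fixation point from each pair of gaze rays and keep the
    one supported by the most inliers (rays passing within inlier_thresh).
    A real RANSAC samples pairs randomly; with few rays we try them all."""
    best, best_inliers = None, -1
    for (o1, d1), (o2, d2) in itertools.combinations(rays, 2):
        p = closest_midpoint(o1, d1, o2, d2)
        inliers = sum(1 for o, d in rays if line_point_dist(o, d, p) < inlier_thresh)
        if inliers > best_inliers:
            best, best_inliers = p, inliers
    return best
```

Three rays converging on a target plus one outlier ray still recover the target, since hypotheses involving the outlier collect fewer inliers.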
Detecting Bids for Eye Contact Using a Wearable Camera
Abstract
We propose a system for detecting bids for eye contact directed from a child to an adult who is wearing a point-of-view camera. The camera captures an egocentric view of the child-adult interaction from the adult’s perspective. We detect and analyze the child’s face in the egocentric video in order to automatically identify moments in which the child is trying to make eye contact with the adult. We present a learning-based method that couples a pose-dependent appearance model with a temporal Conditional Random Field (CRF). We present encouraging findings from an experimental evaluation using a newly collected dataset of 12 children. Our method outperforms state-of-the-art approaches and enables measuring gaze behavior in naturalistic social interactions.
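The temporal CRF above enforces label consistency across frames. A generic illustration of that idea, decoding a binary eye-contact sequence with a switch-penalized chain (log-space Viterbi; the fixed switch cost and per-frame probabilities are illustrative stand-ins, not the authors' learned model):

```python
import math

def viterbi_smooth(probs, switch_cost=2.0):
    """Decode a binary eye-contact label sequence from per-frame probabilities
    with a linear chain that penalizes label switches (log-space Viterbi)."""
    eps = 1e-9
    # Emission log-scores for states 0 (no contact) and 1 (contact).
    emit = [(math.log(1 - p + eps), math.log(p + eps)) for p in probs]
    score = list(emit[0])
    back = []
    for e in emit[1:]:
        ptrs, new = [], []
        for s in (0, 1):
            # Stay in the same state, or pay switch_cost to change state.
            stay = score[s]
            move = score[1 - s] - switch_cost
            ptrs.append(s if stay >= move else 1 - s)
            new.append(max(stay, move) + e[s])
        score = new
        back.append(ptrs)
    # Backtrace the best path from the last frame.
    state = 0 if score[0] >= score[1] else 1
    path = [state]
    for ptrs in reversed(back):
        state = ptrs[state]
        path.append(state)
    path.reverse()
    return path
```

A single noisy dip in otherwise high per-frame scores is smoothed over rather than producing a spurious one-frame label change, which is the behavior a temporal model buys over frame-by-frame classification.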