Results 1 - 10 of 54
Eye Movement Analysis for Activity Recognition
- Proc. 11th Int'l Conf. Ubiquitous Computing, 2009
Abstract - Cited by 52 (12 self)
In this work, we investigate eye movement analysis as a new sensing modality for activity recognition. Eye movement data were recorded using an electrooculography (EOG) system. We first describe and evaluate algorithms for detecting three eye movement characteristics from EOG signals (saccades, fixations, and blinks) and propose a method for assessing repetitive patterns of eye movements. We then devise 90 different features based on these characteristics and select a subset of them using minimum redundancy maximum relevance (mRMR) feature selection. We validate the method using an eight-participant study in an office environment with an example set of five activity classes: copying a text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web. We also include periods with no specific activity (the NULL class). Using a support vector machine (SVM) classifier and person-independent (leave-one-person-out) training, we obtain an average precision of 76.1 percent and recall of 70.5 percent over all classes and participants. The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities. Index Terms: Ubiquitous computing, feature evaluation and selection, pattern analysis, signal processing.
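As an illustration of the evaluation scheme this abstract describes, the sketch below runs person-independent (leave-one-person-out) SVM classification. The data, feature count, and SVM settings are synthetic placeholders, not the authors' setup; only the cross-validation structure matches the paper's description.

```python
# Illustrative sketch (not the authors' code): leave-one-person-out SVM
# classification of eye-movement features. All data here are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_people, n_windows, n_feats = 8, 30, 10   # 8 participants, as in the study
X = rng.normal(size=(n_people * n_windows, n_feats))
y = rng.integers(0, 6, size=n_people * n_windows)   # 5 activities + NULL
groups = np.repeat(np.arange(n_people), n_windows)  # one group per person

# Each fold trains on 7 participants and tests on the held-out one,
# so the reported score is person-independent by construction.
scores = cross_val_score(SVC(kernel="rbf"), X, y,
                         groups=groups, cv=LeaveOneGroupOut())
print(scores)   # one accuracy value per held-out participant
```

With real features one would average these per-participant scores, as the paper does when reporting precision and recall over all participants.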
Cognitive strategies and eye movements for searching hierarchical computer displays
- Proceedings of the Conference on Human Factors in Computing Systems, Ft. Lauderdale, FL, 2003
Abstract - Cited by 40 (9 self)
This research investigates the cognitive strategies and eye movements that people use to search for a known item in a hierarchical computer display. Computational cognitive models were built to simulate the visual-perceptual and oculomotor processing required to search hierarchical and nonhierarchical displays. Eye movement data were collected and compared on over a dozen measures with the a priori predictions of the models. Though it is well accepted that hierarchical layouts are easier to search than nonhierarchical layouts, the underlying cognitive basis for this design heuristic has not yet been established. This work combines cognitive modeling and eye tracking to explain this and numerous other visual design guidelines. This research also demonstrates the power of cognitive modeling for predicting, explaining, and interpreting eye movement data, and how to use eye tracking data to confirm and disconfirm modeling details. Categories and Subject Descriptors: H.5.2 [Information Interfaces and Presentation]: User Interfaces -- Evaluation/methodology; eye tracking.
Combining eye movements and collaborative filtering for proactive information retrieval
- In: Proc. SIGIR, 2005
Abstract - Cited by 34 (15 self)
We study a new task, proactive information retrieval, by combining implicit relevance feedback and collaborative filtering. We have constructed a controlled experimental setting, a prototype application, in which the users try to find interesting scientific articles by browsing their titles. Implicit feedback is inferred from eye movement signals, with discriminative hidden Markov models estimated from existing data in which explicit relevance feedback is available. Collaborative filtering is carried out using the User Rating Profile model, a state-of-the-art probabilistic latent variable model, computed using Markov Chain Monte Carlo techniques. For new document titles the prediction accuracy with eye movements, collaborative filtering, and their combination was significantly better than by chance. The best ...
Integrating Perceptual and Cognitive Modeling for Adaptive and Intelligent Human-Computer Interaction
- Proc. of the IEEE, 2002
Abstract - Cited by 27 (0 self)
This paper describes technology and tools for intelligent human-computer interaction (IHCI), where human cognitive, perceptual, motor, and affective factors are modeled and used to adapt the H-C interface. IHCI emphasizes that human behavior encompasses both apparent human behavior and the hidden mental state behind behavioral performance. IHCI expands on the interpretation of human activities known as W4 (what, where, when, who). While W4 addresses only the apparent perceptual aspect of human behavior, the W5+ technology for IHCI described in this paper also addresses the why and how questions, whose solution requires recognizing specific cognitive states. IHCI integrates parsing and interpretation of nonverbal information with a computational cognitive model of the user, which, in turn, feeds into processes that adapt the interface to enhance operator performance and provide for rational decision-making. The proposed technology is based on a general four-stage interactive framework, which moves from parsing the raw sensory-motor input, to interpreting the user's motions and emotions, to building an understanding of the user's current cognitive state. It then diagnoses various problems in the situation and adapts the interface appropriately. The interactive component of the system improves processing at each stage. Examples of perceptual, behavioral, and cognitive tools are described throughout the paper. Adaptive and intelligent HCI are important for novel applications of computing, including ubiquitous and human-centered computing.
Cleaning up systematic error in eye-tracking data by using required fixation locations
- Behavior Research Methods, Instruments, & Computers, 2002
Abstract - Cited by 23 (6 self)
In the course of running an eye-tracking experiment, one computer system or subsystem typically presents the stimuli to the participant and records manual responses, and another collects the eye movement data, with little interaction between the two during the course of the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye-tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs: screen locations that the participant must look at within a certain window of time or at a certain moment in order to successfully and correctly accomplish a task, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor the deterioration in the accuracy of the eye tracker calibration and to automatically invoke a recalibration procedure when necessary. This article also demonstrates how the disparity will vary across ...
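The core of the required-fixation-location idea can be sketched in a few lines: measure the disparity between recorded fixations and known RFLs, treat its mean as an estimate of systematic error, and subtract it. This is only a minimal illustration assuming a constant offset, with invented coordinates; it is not the article's algorithm.

```python
# Sketch of RFL-based cleanup (assumed names and data): estimate a
# constant systematic offset from known required fixation locations.
import numpy as np

rfl = np.array([[100.0, 200.0], [400.0, 200.0], [250.0, 350.0]])  # screen px
fix = rfl + np.array([12.0, -8.0])      # recorded fixations, with drift

disparity = fix - rfl                   # per-RFL error vectors
offset = disparity.mean(axis=0)         # estimated systematic error
corrected = fix - offset                # cleaned-up fixation coordinates

print(offset)                           # recovers the simulated drift
```

In practice the offset estimate could also be thresholded to decide when the calibration has deteriorated enough to trigger a recalibration, which is one of the uses the article describes.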
Inferring relevance from eye movements: Feature extraction
- Helsinki University of Technology, 2005
Abstract - Cited by 16 (3 self)
We organize a PASCAL EU Network of Excellence challenge for inferring relevance from eye movements, beginning 1 March 2005. The aim of this paper is to provide background material for the competitors: give references to related articles on eye movement modelling, describe the methods used for extracting the features used in the challenge, provide results of basic reference methods, and discuss open questions in the field.
Just Blink Your Eyes: A Head-Free Gaze Tracking System
- In CHI Extended Abstracts, 2003
Abstract - Cited by 16 (1 self)
We propose a head-free, easy-setup gaze tracking system designed for gaze-based human-computer interaction. Our system lets the user interact with the computer as soon as it detects the user's eye blinks, and the user can move his/her head freely because the system keeps tracking the user's eye. In addition, our system needs only a 10-second calibration procedure on first use. These advantages are realized by an eye-tracking method based on our eye blink detection and a gaze estimation method that uses a geometrical eyeball model.
Recognition of visual memory recall processes using eye movement analysis
- In UbiComp, 2011
Abstract - Cited by 12 (2 self)
Physical activity, location, as well as a person's psychophysiological and affective state are common dimensions for developing context-aware systems in ubiquitous computing. An important yet missing contextual dimension is the cognitive context that comprises all aspects related to mental information processing, such as perception, memory, knowledge, or learning. In this work we investigate the feasibility of recognising visual memory recall. We use a recognition methodology that combines minimum redundancy maximum relevance feature selection (mRMR) with a support vector machine (SVM) classifier. We validate the methodology in a dual user study with a total of fourteen participants looking at familiar and unfamiliar pictures from four picture categories: abstract, landscapes, faces, and buildings. Using person-independent training, we are able to discriminate between familiar and unfamiliar abstract pictures with a top recognition rate of 84.3% (89.3% recall, 21.0% false positive rate) over all participants. We show that eye movement analysis is a promising approach to infer the cognitive context of a person and discuss the key challenges for the real-world implementation of eye-based cognition-aware systems.
Relevance feedback from eye movements for proactive information retrieval
- Helsinki University of Technology, 2003
Abstract - Cited by 12 (7 self)
We study whether it is possible to infer from eye movements measured during reading what is relevant for the user in an information retrieval task. Inference is made using hidden Markov and discriminative hidden Markov models. The result of this feasibility study is that prediction of relevance is possible to a certain extent, and that models benefit from taking into account the time-series nature of the data.
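The generative side of this approach can be illustrated with a toy classifier: score a fixation-duration sequence under two hand-set Gaussian-emission HMMs, one for "relevant" and one for "irrelevant" reading, and pick the higher log-likelihood. All parameters and duration values below are invented for illustration; the paper's models were estimated from data, not set by hand.

```python
# Toy sketch (not the paper's models): HMM log-likelihood comparison
# for relevance inference from fixation durations (values in ms, invented).
import numpy as np

def hmm_loglik(x, pi, A, means, var):
    """Log-likelihood of sequence x under a 2-state Gaussian-emission HMM,
    computed with the forward algorithm in log space."""
    def emit(t):  # log N(x[t]; mean_s, var) for every state s at once
        return -0.5 * (np.log(2 * np.pi * var) + (x[t] - means) ** 2 / var)
    alpha = np.log(pi) + emit(0)
    for t in range(1, len(x)):
        m = alpha.max()                        # log-sum-exp over prev states
        alpha = np.log(np.exp(alpha - m) @ A) + m + emit(t)
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

A = np.array([[0.8, 0.2], [0.2, 0.8]])         # shared transition matrix
pi = np.array([0.5, 0.5])
# Assumption for the sketch: relevant reading has longer fixations.
rel = dict(pi=pi, A=A, means=np.array([250.0, 400.0]), var=50.0 ** 2)
irr = dict(pi=pi, A=A, means=np.array([150.0, 200.0]), var=50.0 ** 2)

seq = np.array([260.0, 390.0, 410.0, 240.0])   # looks like careful reading
label = ("relevant" if hmm_loglik(seq, **rel) > hmm_loglik(seq, **irr)
         else "irrelevant")
print(label)
```

A discriminative HMM, as used in the paper, would instead be trained to separate the two classes directly; the likelihood-ratio decision above is only the simpler generative baseline.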
Logic meets cognition: Empirical reasoning in games
- CEUR Workshop Proceedings, Vol. 627, 2010
Abstract - Cited by 5 (3 self)
This paper presents a first attempt to bridge the gap between logical and cognitive treatments of strategic reasoning in games. The focus of the paper is backward induction, a principle which is purported to follow from common knowledge of rationality by Zermelo's theorem. There have been extensive formal debates about the merits of the principle of backward induction among game theorists and logicians. Experimental economists and psychologists have shown that human subjects, perhaps due to their bounded resources, do not always follow the backward induction strategy, leading to unexpected outcomes. Recently, based on an eye-tracker study, it has turned out that even human subjects who produce the outwardly correct 'backward induction answer' use a different internal reasoning strategy to achieve it. This paper presents a formal language to represent different strategies on a finer-grained level than was possible before. The language and its semantics may lead to precisely distinguishing different cognitive reasoning strategies that can then be tested on the basis of computational cognitive models and experiments with human subjects.