Results 1 - 10 of 39
Graspable User Interfaces
, 1996
"... This dissertation defines and explores Graspable User Interfaces, an evolution of the input mechanisms used in graphical user interfaces (GUIs). A Graspable UI design provides users concurrent access to multiple, specialized input devices which can serve as dedicated physical interface widgets, affo ..."
Abstract
-
Cited by 103 (3 self)
- Add to MetaCart
This dissertation defines and explores Graspable User Interfaces, an evolution of the input mechanisms used in graphical user interfaces (GUIs). A Graspable UI design provides users concurrent access to multiple, specialized input devices which can serve as dedicated physical interface widgets, affording physical manipulation and spatial arrangements. Like conventional GUIs, physical devices function as “handles” or manual controllers for logical functions on widgets in the interface. However, the notion of the Graspable UI builds on current practice in a number of ways. With conventional GUIs, there is typically only one graphical input device, such as a mouse. Hence, the physical handle is necessarily “time-multiplexed,” being repeatedly attached and unattached to the various logical functions of the GUI. A significant aspect of the Graspable UI is that there can be more than one input device. Hence, input control can then be “space-multiplexed.” That is, different devices can be attached to different functions, each independently (but possibly simultaneously) accessible. This, then, affords the capability to take advantage of the
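To make the time- vs. space-multiplexing distinction concrete, here is a minimal Python sketch; the device and function names (left_brick, scale, and so on) are hypothetical illustrations, not from the dissertation.

```python
class TimeMultiplexed:
    """One physical device (e.g. a mouse) repeatedly re-attached to functions."""
    def __init__(self):
        self.attached = None
    def attach(self, function):
        self.attached = function        # detaches whatever came before
    def actuate(self, value):
        self.attached(value)

class SpaceMultiplexed:
    """Several devices, each a dedicated physical handle for one function."""
    def __init__(self, bindings):
        self.bindings = dict(bindings)  # device -> function, fixed mapping
    def actuate(self, device, value):
        self.bindings[device](value)    # independently, possibly in parallel

scale = lambda v: print("scale", v)
rotate = lambda v: print("rotate", v)

# Space-multiplexed: two physical handles, no re-attachment needed.
ui = SpaceMultiplexed({"left_brick": scale, "right_brick": rotate})
ui.actuate("left_brick", 1.2)
ui.actuate("right_brick", 45)
```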
Planning Reaches by Evaluating Stored Postures
- Psychological Review
, 1995
"... This article describes a theory of the computations underlying the selection of coordinated motion patterns, especially in reaching tasks. The central idea is that when a spatial target is selected as an object to be reached, stored postures are evaluated for the contributions they can make to the t ..."
Abstract
-
Cited by 50 (1 self)
- Add to MetaCart
This article describes a theory of the computations underlying the selection of coordinated motion patterns, especially in reaching tasks. The central idea is that when a spatial target is selected as an object to be reached, stored postures are evaluated for the contributions they can make to the task. Weights are assigned to the stored postures, and a single target posture is found by taking a weighted sum of the stored postures. Movement is achieved by reducing the distance between the starting angle and target angle of each joint. The model explains compensation for reduced joint mobility, tool use, practice effects, performance errors, and aspects of movement kinematics. Extensions of the model can account for anticipation and coarticulation effects, movement through via points, and hierarchical control of series of movements. The goal of this research is a unified theory of the planning and control of physical action. Such a theory, as several authors have noted (Jeannerod, in press; Rosenbaum, 1991; Wing, 1993), has been lacking. Instead, specialized models have been designed to account for data from different tasks. The sentiment
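A minimal sketch of the core computation described above, assuming a planar two-joint arm: stored postures are weighted by their suitability for the task, summed into a single target posture, and the movement shrinks each joint's start-to-target angular distance. The kinematics and weighting constants are illustrative assumptions, not the model's fitted parameters.

```python
import numpy as np

# Hypothetical planar two-joint arm; link lengths are assumptions.
L1, L2 = 0.3, 0.25

def hand_position(posture):
    """Forward kinematics: joint angles (shoulder, elbow) -> hand (x, y)."""
    s, e = posture
    elbow = np.array([L1 * np.cos(s), L1 * np.sin(s)])
    return elbow + np.array([L2 * np.cos(s + e), L2 * np.sin(s + e)])

def plan_target_posture(stored_postures, target):
    """Weight stored postures by task suitability, then take a weighted sum."""
    costs = np.array([np.linalg.norm(hand_position(p) - target)
                      for p in stored_postures])
    weights = np.exp(-costs / costs.mean())   # weighting scheme is an assumption
    weights /= weights.sum()
    return weights @ stored_postures          # single target posture

def reach(start, target_posture, steps=50):
    """Move by reducing each joint's start-to-target angular distance."""
    return [start + t * (target_posture - start)
            for t in np.linspace(0.0, 1.0, steps)]

stored = np.radians([[20, 30], [45, 60], [70, 90], [100, 45]])
trajectory = reach(np.radians([10, 10]),
                   plan_target_posture(stored, np.array([0.35, 0.25])))
```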
The Dynamics of Perception and Action
- Psychological Review
, 2006
"... How might one account for the organization in behavior without attributing it to an internal control structure? The present article develops a theoretical framework called behavioral dynamics that inte-grates an information-based approach to perception with a dynamical systems approach to action. Fo ..."
Abstract
-
Cited by 39 (1 self)
- Add to MetaCart
(Show Context)
How might one account for the organization in behavior without attributing it to an internal control structure? The present article develops a theoretical framework called behavioral dynamics that integrates an information-based approach to perception with a dynamical systems approach to action. For a given task, the agent and its environment are treated as a pair of dynamical systems that are coupled mechanically and informationally. Their interactions give rise to the behavioral dynamics, a vector field with attractors that correspond to stable task solutions, repellers that correspond to avoided states, and bifurcations that correspond to behavioral transitions. The framework is used to develop theories of several tasks in which a human agent interacts with the physical environment, including bouncing a ball on a racquet, balancing an object, braking a vehicle, and guiding locomotion. Stable, adaptive behavior emerges from the dynamics of the interaction between a structured environment and an agent with simple control laws, under physical and informational constraints.
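A minimal sketch of a behavioral-dynamics vector field in this spirit: heading is governed by an attractor at the goal direction and a repeller at an obstacle direction. The functional form and all constants are simplified assumptions, not the article's fitted model.

```python
import numpy as np

# Damping, goal gain, obstacle gain, distance decay: all assumed values.
b, k_g, k_o, c = 3.0, 8.0, 6.0, 1.5

def heading_accel(phi, phi_dot, psi_goal, psi_obst, d_obst):
    """Angular acceleration of heading phi: goal attractor + obstacle repeller."""
    attract = -k_g * (phi - psi_goal)                    # pulls toward the goal
    repel = (k_o * (phi - psi_obst)
             * np.exp(-c * d_obst)                       # weaker for far obstacles
             * np.exp(-abs(phi - psi_obst)))             # local in heading space
    return -b * phi_dot + attract + repel

# Forward-Euler integration; heading settles on the goal direction,
# deflected transiently by the obstacle.
phi, phi_dot, dt = 0.0, 0.0, 0.01
for _ in range(2000):
    phi_dot += dt * heading_accel(phi, phi_dot,
                                  np.radians(40), np.radians(15), 2.0)
    phi += dt * phi_dot
```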
Robot Instruction by Human Demonstration
, 1994
"... Conventional methods for programming a robot either are inflexible or demand significant expertise. While the notion of automatic programming by high-level goal specification addresses these issues, the overwhelming complexity of planning manipulator grasps and paths remains a formidable obstacle to ..."
Abstract
-
Cited by 20 (0 self)
- Add to MetaCart
Conventional methods for programming a robot either are inflexible or demand significant expertise. While the notion of automatic programming by high-level goal specification addresses these issues, the overwhelming complexity of planning manipulator grasps and paths remains a formidable obstacle to practical implementation. This thesis describes the approach of programming a robot by human demonstration. Our system observes a human performing the task, recognizes the human grasp, and maps it onto the manipulator. Using human actions to guide robot execution greatly reduces the planning complexity. In analyzing the task sequence, the system first divides the observed sensory data into meaningful temporal segments, namely the pregrasp, grasp, and manipulation phases. This is achieved by analyzing the human hand motion profiles. The features used are the fingertip polygon area (the fingertip polygon being the polygon whose vertices are the fingertips), hand speed, and the volume sweep r...
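A hedged sketch of the temporal segmentation step, using two of the motion-profile features the abstract names (fingertip polygon area and hand speed). The thresholds are placeholder assumptions, not the thesis's calibrated values.

```python
import numpy as np

def fingertip_polygon_area(fingertips):
    """Shoelace area of the polygon whose vertices are the 2-D fingertips."""
    x, y = fingertips[:, 0], fingertips[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def segment_task(areas, speeds, area_drop=0.6, slow=0.05):
    """Label each frame pregrasp / grasp / manipulation from motion profiles.

    Heuristic: the fingertip polygon shrinks as the hand closes around the
    object, and hand speed is low while the grasp forms.
    """
    open_area = max(areas[:10])          # assume the hand starts open
    labels, grasped = [], False
    for a, v in zip(areas, speeds):
        if not grasped and a < area_drop * open_area and v < slow:
            grasped = True
        if not grasped:
            labels.append("pregrasp")
        elif v < slow:
            labels.append("grasp")
        else:
            labels.append("manipulation")
    return labels
```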
An Inertial Measurement Unit for User Interfaces
, 2000
"... Inertial measurement components, which sense either acceleration or angular rate, are being embedded into common user interface devices more frequently as their cost continues to drop dramatically. These devices hold a number of advantages over other sensing technologies: they measure relevant param ..."
Abstract
-
Cited by 17 (4 self)
- Add to MetaCart
(Show Context)
Inertial measurement components, which sense either acceleration or angular rate, are being embedded into common user interface devices more frequently as their cost continues to drop dramatically. These devices hold a number of advantages over other sensing technologies: they measure relevant parameters for human interfaces and can easily be embedded into wireless, mobile platforms. The work in this dissertation demonstrates that inertial measurement can be used to acquire rich data about human gestures, that we can derive efficient algorithms for using this data in gesture recognition, and that the concept of parameterized atomic gesture recognition has merit. Further, we show that a framework combining these three levels of description can be easily used by designers to create robust applications.
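A toy illustration of a parameterized atomic gesture on accelerometer data: a "shake" atom whose intensity is the parameter. The detector and its thresholds are assumptions for illustration, not the dissertation's algorithms.

```python
import numpy as np

def detect_shake(accel, g=9.81, min_crossings=4):
    """Sketch of a parameterized atomic gesture: shake(intensity).

    accel: (n, 3) array of accelerometer samples. We remove the mean
    (crude gravity/bias removal), find the dominant axis, count
    zero-crossings, and report mean magnitude as the atom's parameter.
    """
    a = accel - accel.mean(axis=0)          # crude gravity/bias removal
    axis = int(np.argmax(a.std(axis=0)))    # dominant shake axis
    s = a[:, axis]
    crossings = int(np.sum(np.diff(np.sign(s)) != 0))
    intensity = float(np.abs(s).mean() / g)
    if crossings >= min_crossings and intensity > 0.3:
        return {"gesture": "shake", "intensity": intensity}
    return None
```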
Reaching movements to augmented and graphic objects in virtual environments
- In Proceedings of CHI ’01
, 2001
"... This work explores how the availability of visual and haptic feedback affects the kinematics of reaching performance in a tabletop virtual environment. Eight subjects performed reach-to-grasp movements toward target objects of various sizes in conditions where visual and haptic feedback were either ..."
Abstract
-
Cited by 13 (1 self)
- Add to MetaCart
(Show Context)
This work explores how the availability of visual and haptic feedback affects the kinematics of reaching performance in a tabletop virtual environment. Eight subjects performed reach-to-grasp movements toward target objects of various sizes in conditions where visual and haptic feedback were either present or absent. It was found that movement time (MT) was slower when visual feedback of the moving limb was not available. Further, MT varied systematically with target size when haptic feedback was available (i.e., augmented targets), and thus followed Fitts’ law. However, movement times were constant regardless of target size when haptic feedback was removed. In-depth analysis of the reaching kinematics revealed that subjects spent longer decelerating toward smaller targets in conditions where haptic feedback was available. In contrast, deceleration time was constant when haptic feedback was absent. These results suggest that visual feedback about the moving limb and veridical haptic feedback about object contact are extremely important for humans to effectively work in virtual environments.
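The Fitts' law relation the study invokes, MT = a + b log2(2D/W), worked as a small example. The constants a and b are device- and task-specific regression coefficients; the values below are placeholders, not the experiment's fitted parameters.

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Fitts' law: MT = a + b * log2(2D / W); a, b are placeholder constants."""
    return a + b * math.log2(2.0 * distance / width)

# At fixed distance, smaller targets predict longer movement times.
for w in (0.08, 0.04, 0.02):                # target widths in metres
    print(f"W={w:.2f} m -> MT={fitts_mt(0.30, w):.3f} s")
```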
Ballistic Hand Movements
"... Abstract. Common movements like reaching, striking, etc. observed during surveillance have highly variable target locations. This puts appearance-based techniques at a disadvantage for modelling and recognizing them. Psychological studies indicate that these actions are ballistic in nature. Their tr ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
Common movements like reaching, striking, etc. observed during surveillance have highly variable target locations. This puts appearance-based techniques at a disadvantage for modelling and recognizing them. Psychological studies indicate that these actions are ballistic in nature. Their trajectories have simple structures and are determined to a great degree by the starting and ending positions. We present an approach for movement recognition that explicitly considers their ballistic nature. This enables the decoupling of recognition from the movement’s trajectory, allowing generalization over a range of target positions. A given movement is first analyzed to determine if it is ballistic. Ballistic movements are further classified into reaching, striking, etc. The proposed approach was tested with motion capture data obtained from the CMU MoCap database.
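A rough sketch of the ballistic check described above: ballistic movements show a single, roughly bell-shaped speed peak with no large secondary (corrective) peaks. The thresholds are assumptions, not the paper's classifier.

```python
import numpy as np

def is_ballistic(speed, peak_frac=0.9, sym_tol=0.35):
    """Test whether a 1-D speed profile looks ballistic (toy heuristic)."""
    speed = np.asarray(speed, dtype=float)
    peak = int(speed.argmax())
    # A second peak near the maximum suggests corrective, non-ballistic motion.
    others = np.r_[speed[:max(peak - 5, 0)], speed[peak + 5:]]
    single_peak = others.size == 0 or others.max() < peak_frac * speed[peak]
    # Bell-shaped profiles peak roughly mid-movement.
    symmetric = abs(peak / len(speed) - 0.5) < sym_tol
    return single_peak and symmetric
```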
Cerebral organization of motor imagery: Contralateral control of grip selection in mentally represented prehension
- Psychological Science
, 1998
"... Abstract—The principle of contralateral organization of the visual and motor systems was exploited to investigate contributions of the cerebral hemispheres to the mental representation of prehension in healthy, right-handed human subjects. Graphically rendered dowels were presented to either the lef ..."
Abstract
-
Cited by 4 (1 self)
- Add to MetaCart
(Show Context)
The principle of contralateral organization of the visual and motor systems was exploited to investigate contributions of the cerebral hemispheres to the mental representation of prehension in healthy, right-handed human subjects. Graphically rendered dowels were presented to either the left or right visual field in a variety of different orientations, and times to determine whether an underhand or overhand grip would be preferred for engaging these stimuli were measured. Although no actual reaching movements were performed, a significant advantage in grip-selection time was found when information was presented to the cerebral hemisphere contralateral to the designated response hand. Results are consistent with the position that motor imagery recruits neurocognitive mechanisms involved in movement planning. More precisely, these findings indicate that processes within each cerebral hemisphere participate in mentally representing object-oriented actions of the contralateral hand. An important contribution to resolving the long-standing debate over the relationship between imagery and perception has been made by numerous studies indicating that the two categories of behavior involve common neural substrates (for comprehensive reviews, see
BEYOND NOUNS AND VERBS
, 2009
"... During the past decade, computer vision research has focused on constructing image based appearance models of objects and action classes using large databases of examples (positive and negative) and machine learning to construct models. Visual inference however involves not only detecting and recogn ..."
Abstract
-
Cited by 3 (0 self)
- Add to MetaCart
During the past decade, computer vision research has focused on constructing image-based appearance models of objects and action classes using large databases of examples (positive and negative) and machine learning to construct models. Visual inference, however, involves not only detecting and recognizing objects and actions but also extracting rich relationships between objects and actions to form storylines or plots. These relationships also improve recognition performance of appearance-based models. Instead of identifying individual objects and actions in isolation, such systems improve recognition rates by augmenting appearance-based models with contextual models based on object-object, action-action and object-action relationships. In this thesis, we look at the problem of using contextual information for recognition from three different perspectives: (a) representation of contextual models; (b) the role of language in learning semantic/contextual models; (c) learning of contextual models from weakly labeled data. Our work departs from the traditional view of visual and contextual learning where individual detectors and relationships are learned separately. Our work focuses on simultaneous learning of visual appearance and contextual models from
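A toy sketch of the general idea of augmenting appearance-based scores with contextual object-object relationships; the detector scores and co-occurrence table below are invented for illustration, not the thesis's learned models.

```python
import numpy as np

# Toy detector confidences and a toy object-object co-occurrence table.
appearance = {"bat": 0.55, "stick": 0.50}
context = {("bat", "ball"): 0.9, ("stick", "ball"): 0.2}

def rescore(label, scene_objects, alpha=0.5):
    """Blend appearance with context from co-detected objects (alpha assumed)."""
    ctx = np.mean([context.get((label, o), 0.5) for o in scene_objects])
    return alpha * appearance[label] + (1 - alpha) * ctx

# With a "ball" also detected in the scene, "bat" overtakes "stick".
print(rescore("bat", ["ball"]), rescore("stick", ["ball"]))
```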