Results 1 - 10 of 17
Human Brain Dynamics Accompanying Use of Egocentric and Allocentric Reference Frames during Navigation
"... ■ Maintaining spatial orientation while travelling requires integrating spatial information encountered from an egocentric viewpoint with accumulated information represented within egocentric and/or allocentric reference frames. Here, we report changes in high-density electroencephalographic (EEG) a ..."
Abstract
-
Cited by 8 (3 self)
- Add to MetaCart
(Show Context)
■ Maintaining spatial orientation while travelling requires integrating spatial information encountered from an egocentric viewpoint with accumulated information represented within egocentric and/or allocentric reference frames. Here, we report changes in high-density electroencephalographic (EEG) activity during a virtual tunnel passage task in which subjects respond to a postnavigation homing challenge in distinctly different ways— either compatible with a continued experience of the virtual environment from a solely egocentric perspective or as if also maintaining their original entrance orientation, indicating use of a parallel allocentric reference frame. By spatially filtering the EEG data using independent component analysis, we found that these two equal subject subgroups exhibited differences in EEG
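The independent component analysis step described in this abstract can be illustrated with generic open-source tooling. The sketch below uses MNE-Python's ICA on a placeholder recording; the file name, high-pass cutoff, component count and excluded indices are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch (assumptions, not the study's pipeline): spatially filtering
# continuous EEG with ICA using MNE-Python. File name, high-pass cutoff,
# component count and excluded indices are placeholders.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("tunnel_task_raw.fif", preload=True)  # hypothetical recording
raw.filter(l_freq=1.0, h_freq=None)        # high-pass filtering stabilizes ICA
ica = ICA(n_components=30, method="infomax", random_state=42)
ica.fit(raw)

# Component topographies help separate brain from artifact sources
# (requires channel locations in the recording).
ica.plot_components()

# Reconstruct the data with selected components removed; the remaining
# independent components act as spatial filters on the scalp channels.
raw_clean = ica.apply(raw.copy(), exclude=[0, 1])  # indices chosen by inspection
```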
Punishment in organizations: A review, propositions and research suggestions
- Academy of Management Review, 1980
"... Dissociation, as the editor of this important volume reminds us, "challenges many comfortable assumptions." From a theoretical vantage, it demands great conceptual clar-ity and a knowledge ofmany areas of import in psychology; from a clinical vantage, it has brought about one ofthe most co ..."
Abstract
-
Cited by 6 (0 self)
- Add to MetaCart
(Show Context)
Dissociation, as the editor of this important volume reminds us, "challenges many comfortable assumptions." From a theoretical vantage, it demands great conceptual clarity and a knowledge of many areas of import in psychology; from a clinical vantage, it has brought about one of the most contentious debates in recent history, whether or not important memories can be "forgotten" only to appear as habits, behaviors, and dreams, or even later as full-fledged remembrances. But above all, dissociative phenomena challenge the cherished notion that our conscious self is an all-knowing, integrated entity. This volume is the result of a 1991 conference at the Center for Advanced Study in the Behavioral Sciences, sponsored by the MacArthur Foundation. I was fortunate to be present at this truly multidisciplinary meeting.
Depth perception within virtual environments: A comparative study between wide screen stereoscopic displays and head mounted devices
- ComputationWorld ’09, IEEE Computer Society
"... Abstract—Depth perception is one of the key issues in virtual reality. Many questions within this area are still under investigation including the egocentric distance misestimation. In this paper we describe an experiment confirming distance underestimation from another point of view. The approach w ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
(Show Context)
Depth perception is one of the key issues in virtual reality. Many questions in this area are still under investigation, including egocentric distance misestimation. In this paper we describe an experiment confirming distance underestimation from another point of view. The approach we developed is based on a very simple task: subjects had to compare the relative depths of two virtual objects. The experiment compared performance using a head-mounted display and a stereoscopic widescreen display to evaluate which visual cues subjects use to estimate the depth of virtual objects. To minimize motor effects, subjects were seated and their estimations were only verbal. Likewise, to avoid the well-known effects of apparent size, namely the size-distance invariance, the experiment was also performed with conflict sequences: the presented objects had the same apparent sizes at different depths, or the same depth but different physical sizes. The obtained results show significant differences between the two devices and confirm the distance misestimation phenomenon for the head-mounted display. Moreover, changing the background color or the shape of the presented objects also influenced subjects’ performance.
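The "same apparent size, different depth" conflict sequences mentioned above rest on simple projective geometry: to keep an object's visual angle constant while its depth changes, its physical size must scale linearly with distance. The snippet below is a small illustrative calculation with made-up values; the function name and numbers are assumptions of mine, a sketch of the stimulus logic rather than the study's code.

```python
# Illustrative calculation (not the authors' code): to hold apparent size
# constant across depths, physical size must grow linearly with distance.
import math

def size_for_visual_angle(distance_m: float, angle_deg: float) -> float:
    """Physical size (m) that subtends `angle_deg` at viewing distance `distance_m`."""
    return 2.0 * distance_m * math.tan(math.radians(angle_deg) / 2.0)

for depth in (1.0, 2.0, 4.0):                    # example virtual depths in metres
    size = size_for_visual_angle(depth, 5.0)     # keep a 5 degree visual angle
    print(f"depth {depth:.1f} m -> required size {size:.3f} m")
```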
Straight After the Turn: The Role of the Parietal Lobes for Egocentric Space Processing
"... Processing spatial information with respect to an egocentric reference frame has been shown to recruit a cortical network along the dorsal stream. However, how brain lesions affect this ability remains controversial. The present study investigated spatial navigation in parietal and frontal patients ..."
Abstract
-
Cited by 4 (1 self)
- Add to MetaCart
(Show Context)
Processing spatial information with respect to an egocentric reference frame has been shown to recruit a cortical network along the dorsal stream. However, how brain lesions affect this ability remains controversial. The present study investigated spatial navigation in parietal and frontal patients as well as healthy controls in computer-simulated tunnels. Two different answer formats were employed: the setting of a virtual three-dimensional arrow to indicate the location of the endpoint relative to the starting point, and map drawing. Also, mental rotation skills and visuospatial working memory were assessed. The results suggest that parietal, but not frontal, patients are impaired in building up an adequate representation of virtual space relative to controls, and that this impairment covaries with spatial working memory performance. Patients with damage to the parietal lobes confused the direction of turn more frequently and made less accurate angular judgements of tunnel turns than frontal-lobe patients. Analysis of the map drawings suggests that the impairment may be linked to deficits in the updating of cognitive heading in the absence of corresponding perceptual information from the virtual environment.
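The angular judgements in this tunnel paradigm come down to plane geometry: after two straight segments joined by one turn, the egocentric bearing back to the entrance follows from the segment lengths and the turn angle. The following sketch assumes that simplified two-segment geometry and uses arbitrary example values; it is not the task software.

```python
# Hedged geometric sketch of the homing judgement in a two-segment tunnel
# with a single turn. Sign convention: positive angles are to the right of
# the current heading. Values are illustrative, not the experiment's.
import math

def homing_bearing(seg1: float, turn_deg: float, seg2: float) -> float:
    """Egocentric bearing (deg) from the final heading back to the tunnel entrance."""
    h = math.radians(turn_deg)                 # heading after the turn (0 = straight on)
    x = seg2 * math.sin(h)                     # end position; entrance at the origin,
    y = seg1 + seg2 * math.cos(h)              # initial heading along +y
    back = math.degrees(math.atan2(-x, -y))    # world bearing of the vector to the entrance
    rel = back - turn_deg                      # express it relative to the current heading
    return (rel + 180.0) % 360.0 - 180.0       # wrap into [-180, 180)

# Example: 10 m, a 60 degree right turn, then 10 m; the entrance lies about
# 150 degrees to the right of the final heading.
print(round(homing_bearing(10.0, 60.0, 10.0), 1))
```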
Eye-centered encoding of visual space in scene-selective regions
- Journal of Vision, 2010
"... We used functional magnetic resonance imaging (fMRI) to investigate the reference frames used to encode visual information in scene-responsive cortical regions. At early levels of the cortical visual hierarchy, neurons possess spatially selective receptive fields (RFs) that are yoked to specific lo ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
We used functional magnetic resonance imaging (fMRI) to investigate the reference frames used to encode visual information in scene-responsive cortical regions. At early levels of the cortical visual hierarchy, neurons possess spatially selective receptive fields (RFs) that are yoked to specific locations on the retina. In lieu of this eye-centered organization, we speculated that visual areas implicated in scene processing, such as the parahippocampal place area (PPA), the retrosplenial complex (RSC), and transverse occipital sulcus (TOS) might instead possess RFs defined in head-, body-, or world-centered reference frames. To test this, we scanned subjects while they viewed objects and scenes presented at four screen locations while they maintained fixation at one of three possible gaze positions. We then examined response profiles as a function of either fixation-referenced or screen-referenced position. Contrary to our prediction, the PPA and TOS exhibited position-response curves that moved with the fixation point rather than being anchored to the screen, a pattern indicative of eye-centered encoding. RSC, on the other hand, did not exhibit a position-response curve in either reference frame. By showing an important commonality between the PPA/TOS and other visually responsive regions, the results emphasize the critical involvement of these regions in the visual analysis of scenes.
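The core analytic move in this abstract, re-expressing stimulus position in a screen-based versus a fixation-based (eye-centered) frame and comparing the resulting position-response curves, can be sketched with synthetic data. Everything below (the simulated responses, the `position_response_curve` helper, the position values) is an illustrative assumption, not the study's analysis code.

```python
# Sketch of the reference-frame comparison with synthetic data: responses are
# generated from an eye-centred tuning curve, then averaged either by screen
# position or by position relative to fixation. Not the study's data or code.
import numpy as np

rng = np.random.default_rng(0)
n = 600
screen_pos = rng.choice([-15.0, -5.0, 5.0, 15.0], size=n)   # deg; four screen locations
fixation = rng.choice([-10.0, 0.0, 10.0], size=n)           # deg; three gaze positions
retinal_pos = screen_pos - fixation                         # eye-centred stimulus position

# Simulated "eye-centred" region: response depends on retinal, not screen, position.
response = np.exp(-(retinal_pos / 10.0) ** 2) + 0.1 * rng.standard_normal(n)

def position_response_curve(position, response):
    """Mean response at each distinct position value."""
    levels = np.unique(position)
    return levels, np.array([response[position == p].mean() for p in levels])

for name, pos in [("screen-referenced", screen_pos), ("eye-referenced", retinal_pos)]:
    levels, curve = position_response_curve(pos, response)
    print(name, dict(zip(levels, curve.round(2))))
```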
Alpha modulation in parietal and retrosplenial cortex correlates with navigation performance
- Psychophysiology, 2012
"... correlates with navigation performance ..."
(Show Context)
Neglect “Around the Clock”: Dissociating Number and Spatial Neglect in Right Brain Damage
"... Since the seminal observation of the SNARC effect by Dehaene, Bossini, and Giraux [(1993) Journal of Experimental Psychology: General, 122(3), 371–396] several studies have indicated the existence of an intrinsic-automatic spatial coding of number magnitudes. In the first part of this chapter we sum ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Since the seminal observation of the SNARC effect by Dehaene, Bossini, and Giraux [(1993) Journal of Experimental Psychology: General, 122(3), 371–396], several studies have indicated the existence of an intrinsic, automatic spatial coding of number magnitudes. In the first part of this chapter we summarize recent work with healthy participants and expand on this original claim. Some of our evidence can be used to support a theory of spatial mapping of mental numbers onto mental space where smaller numbers are associated with the left and larger numbers with the right side of space. In the second part of the chapter we review investigations of spatial neglect and relate them to “small number neglect”, which initially seemed to provide crucial support for a tight link between mechanisms of spatial attentional orienting and ... In their seminal study, Dehaene, Bossini, and Giraux [1] described behavioral evidence for spatially oriented number lines in normal subjects. When asked to classify a single digit as odd or even by pressing one of two buttons, subjects reacted faster to smaller numbers with the left hand and to larger numbers with the right hand. Interestingly, this effect could be reversed by crossing the hands, showing that it is spatially ...
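The SNARC effect summarized above is conventionally quantified per participant as the slope of a regression of dRT (right-hand minus left-hand reaction time) on digit magnitude, with a negative slope indicating the small-left / large-right association. A minimal sketch with synthetic reaction times (not the chapter's data or analysis code) follows.

```python
# Minimal sketch of the standard SNARC analysis: regress dRT
# (right-hand RT minus left-hand RT) on digit magnitude; a negative slope
# means small numbers are answered faster on the left, large on the right.
# Reaction times below are synthetic illustrations.
import numpy as np

digits = np.array([1, 2, 3, 4, 6, 7, 8, 9])
rt_left = np.array([520, 525, 530, 535, 545, 550, 555, 560], dtype=float)   # ms
rt_right = np.array([560, 552, 546, 540, 528, 522, 516, 510], dtype=float)  # ms

d_rt = rt_right - rt_left                        # positive values: left hand faster
slope, intercept = np.polyfit(digits, d_rt, deg=1)
print(f"SNARC slope: {slope:.2f} ms per digit")  # negative slope = SNARC effect
```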
Human parietal cortex in action
"... Experiments using functional neuroimaging and transcranial magnetic stimulation in humans have revealed regions of the parietal lobes that are specialized for particular visuomotor actions, such as reaching, grasping and eye movements. In addition, the human parietal cortex is recruited by processi ..."
Abstract
- Add to MetaCart
(Show Context)
Experiments using functional neuroimaging and transcranial magnetic stimulation in humans have revealed regions of the parietal lobes that are specialized for particular visuomotor actions, such as reaching, grasping and eye movements. In addition, the human parietal cortex is recruited by the processing and perception of action-related information, even when no overt action occurs. Such information can include object shape and orientation, knowledge about how tools are employed and the understanding of actions made by other individuals. We review the known subregions of the human posterior parietal cortex and the principles behind their organization.

Introduction

Sensory control of actions depends crucially on the posterior parietal cortex, that is, all of the parietal cortex behind primary (SI) and secondary (SII) somatosensory cortex, including both the superior and inferior parietal lobules, which are divided by the intraparietal sulcus. Initially, posterior parietal cortex was considered part of 'association cortex', which integrates information from multiple senses. During the past decade, the role of the posterior parietal cortex in space perception and guiding actions was emphasized [1, ...]. In one popular view of the visual system [1], visual information is segregated along two pathways: the ventral stream (occipito-temporal cortex) computes vision for perception, whereas the dorsal stream (occipito-parietal cortex) computes vision for action. Here, we review recent advances that address the organization of the posterior parietal cortex and the action-related subregions within it. We begin by focusing on the role of the dorsal stream in visually guided real actions. However, we then discuss a topic that does not fit so easily into the dichotomy: action-related perceptual tasks that invoke the dorsal stream. Growing evidence from studies in both macaque and human brains suggests that areas within the posterior parietal cortex might be active not only when the individual is preparing to act, but also during observation of others' actions and the perceptual processing of attributes and affordances that are relevant to the actions, even when no actions are executed. We focus largely on the human brain, but include a brief summary of comparable areas in the macaque monkey brain and potential homologies between the two species (see ...).

Posterior parietal cortex in action

Reaching and pointing

The role of the posterior parietal cortex in reaching is evident from the deficits in patients with optic ataxia ... In contrast to reaching, in which subjects extend the arm to touch a target, many recent neuroimaging studies have employed pointing, in which the index finger is directed towards the target without extending the arm. These studies also reported activation in the mOPJ [12,13], but only when targets were presented in peripheral vision [11], as well as within the mIPS, regardless of whether the targets were in foveal or peripheral vision [18]. The relationships between the various reaching- and pointing-related parietal regions in humans and the more well-established parietal reach region in macaque monkeys await clarification. Although one group has suggested that the mOPJ is a homologue of the macaque parietal reach region (which includes areas V6A and MIP) [13], another group has proposed that the mIPS in the human is a functional equivalent of the macaque area also in the medial intraparietal sulcus (area MIP), on the basis of similarities in the responses to a visuomotor joystick task [19].

A growing body of literature is further characterizing the role of the mOPJ in reaching. One study examined reaching movements directed toward body parts (the chin or the thumb of the other hand) when subjects had their eyes closed [20]. They found that the mOPJ was more active the first time the movements were planned than it was for subsequent movements, suggesting that, in addition to activation in response to visual targets, this region is also activated by movements to bodily targets. An ambitious fMRI study of various types of reaching errors suggested that the mOPJ encodes the current target of a reaching movement [21].

Grasping

Converging evidence suggests that a region in the human anterior intraparietal sulcus (aIPS) is involved in visually guided grasping [22][23][24][25][26] and cross-modal (visual-tactile) integration [27]. Not only do humans with aIPS lesions demonstrate grasping deficits [22], but TMS applied to aIPS [28] and the superior parietal lobule [29] also disrupts on-line hand-preshaping adjustments to sudden changes in object orientation. fMRI experiments in the well-studied patient, D.F., have shown that her aIPS is activated during object grasping but not during reaching, despite damage to an object-selective area in the ventral stream (the lateral occipital cortex) [30].

Eye movements and topographic maps

There is extensive literature on the areas involved in eye movements in humans (reviewed in [31]). Studies using fMRI reliably demonstrated saccade-related activation midway up the intraparietal sulcus [32] and somewhat medial to it, in the superior parietal lobule [33][34][35][36][37]. One saccade-related focus in the superior parietal lobe contains a topographic map that represents memory-driven saccade direction [33], the focus of attention [38] or the direction of a pointing movement [34,36]. Moreover, activation in this area demonstrated spatial updating when the gaze changed [34,35,37]. The map in each hemisphere represents the contralateral visual field, which led to the suggestion that the region is functionally similar to the parietal eye fields (in the lateral intraparietal sulcus) of the macaque [33]. This suggestion is bolstered by an fMRI study that directly compared saccade-related activation in humans and macaques [39]. Note that whereas macaque LIP is on the lateral bank of the intraparietal sulcus, the human area is medial to the intraparietal sulcus. Thus, we have called the human area 'the parietal eye fields' (PEF) to avoid any confusion regarding its laterality.

Other human parietal areas also contain spatiotopic maps. One saccade-related focus at the junction of the intraparietal sulcus and transverse occipital sulcus (IPTO) demonstrates stronger activation for saccades into the contralateral visual field, as do the PEF. The human IPTO region is likely to correspond to macaque V3A, which also contains a retinotopic map [40,41]. Two additional human parietal areas with topographic representations were reported posterior to the PEF [42,43]. Other preliminary evidence suggests that putative human equivalents of V6 and the ventral intraparietal area, VIP [44], might also contain topographic maps [45,46]. Indeed, it now seems that the parietal cortex is tiled with spatiotopic maps that were not previously reported by simple visual mapping (typically using flickering checkerboard stimuli), but that can be revealed with appropriate action-related tasks.

Although the vast majority of human studies on object selectivity have focused on areas within the ventral stream [47], neuroimaging has also revealed shape-selective activation for objects within the dorsal stream of both monkeys and humans [48]. These regions tend to be ignored because of concerns regarding attentional confounds, which could be more problematic for parietal areas than for occipitotemporal areas. Given the importance of actions in the dorsal stream, we hypothesize that these regions probably encode the action-related attributes of objects, such as orientation, depth and motion. For example, in fMRI adaptation studies, one region at the lateral occipitoparietal junction (lOPJ) shows sensitivity to object orientation [49,50] but not object identity [49], consistent with the fact that orientation is crucial to action planning, whereas identity might not always be essential. fMRI adaptation was also used to investigate the selectivity of aIPS, finding that aIPS is sensitive to the grasp posture, whereas object-selective ventral-stream regions are not [51]. Furthermore, aIPS, or a nearby region, demonstrated a preference for shapes in which 3D information was defined by motion or pictorial cues [52]. Taken together, these results suggest that object selectivity in the dorsal stream warrants further investigation, particularly with a view to its possible relevance to action planning.

Unlike category-selective regions in the ventral stream, which require awareness to become activated (e.g. [53]), regions in the dorsal stream remain activated by objects, even when those objects are not consciously perceived [54]. Moreover, the activation to unperceived stimuli in the dorsal stream occurred for manipulable objects but not faces. This result strongly suggests that the 'invisible' stimuli that are relevant to actions were, indeed, processed in the dorsal stream. These results could account for the ability of patients (e.g. D.F. or patients with blindsight) and normal subjects (e.g. [55]) to accurately act on objects without explicit awareness [54].

Tools

For the dorsal stream, tools, because of their obvious ties to action, represent a particularly significant category of objects. Indeed, neuroimaging investigations reliably report a left-lateralized network of areas, including areas within the posterior parietal cortex, as underlying the representation(s) of knowledge about familiar tools (for a review, see [56]). Tool-selective areas in the dorsal stream are thought to be related to the motor representations associated with familiar tools and their usage, in contrast to the role of tool-selective areas within the ventral stream, which are thought to be involved with the semantic associations of tools [57]. However, the nature of the tool-selective activation within the dorsal stream is not yet known. Because tools are graspable, and typical control stimuli (e.g., animals [57]) are not, tool-related parietal activations near aIPS might simply be driven by the graspable properties of tools, perhaps reflecting a covert plan to manipulate the object. This hypothesis does not appear likely, however, given the results of two recent fMRI studies. One study showed that an area in the vicinity of aIPS was active during the passive viewing of familiar tools but did not respond to unfamiliar shapes that were potentially graspable [58]. A study from our lab has also found that this tool-selective parietal region does not generalize to other objects that are graspable (e.g., an apple) [59].

Moreover, we found that the tool-selective parietal region is typically posterior to aIPS, as defined by grasping (versus reaching) movements. In addition, two recent imaging studies found that left parietal areas involved in the planning of tool use gestures are posterior to those involved in the execution of those gestures (see ...). It is likely that some of these posterior parietal activations directly correspond to those representations that are impaired in patients suffering from ideomotor apraxia, a disorder of skilled object-related movements. Consistent with this hypothesis, lesion analyses implicate the left inferior parietal lobule and intraparietal sulcus as the most crucial sites of damage associated with ideomotor apraxia [62, ...].

Action observation

Within the grasping circuit of the macaque, including aIPS and the adjacent inferior parietal lobule ... The mirror system might be crucial in imitating and learning new actions ...

Conclusions

Mapping of the human dorsal stream has progressed at a slower pace than mapping of the ventral stream, largely because of the technical challenges of using action paradigms for neuroimaging, perhaps accompanied by a general neglect of the study of actions in cognitive science ... Within both streams, it remains unclear whether regions of activation are truly distinct for particular stimuli or tasks. Within the ventral stream, there are dissenting views on whether visual processing occurs within specialized modules dedicated to processing specific stimulus categories ... The confusing plethora of regions in both streams could be greatly simplified by the determination of general organizational principles. For example, areas within the ventral stream seem to follow a quasi-retinotopic organization, with adjacent representations for stimuli that are processed in the fovea (faces), midperiphery (objects) and far periphery (scenes) ...
Men and Women Exhibit a Differential Bias for Processing Movement versus Objects
2012
"... Abstract Sex differences in many spatial and verbal tasks appear to reflect an inherent low-level processing bias for movement in males and objects in females. We explored this potential movement/object bias in men and women using a computer task that measured targeting performance and/or color rec ..."
Abstract
- Add to MetaCart
Sex differences in many spatial and verbal tasks appear to reflect an inherent low-level processing bias for movement in males and objects in females. We explored this potential movement/object bias in men and women using a computer task that measured targeting performance and/or color recognition. The targeting task showed a ball moving vertically towards a horizontal line. Before reaching the line, the ball disappeared behind a masking screen, requiring the participant to imagine the movement vector and identify the intersection point. For the color recognition task, the ball briefly changed color before disappearing beneath the mask, and participants were required only to identify the color shade. Results showed that targeting accuracy for slow- and fast-moving balls was significantly better in males than in females. No sex difference was observed for color shade recognition. We also studied a third, dual attention task comprising the first two, in which the moving ball briefly changed color at random just before passing beneath the masking screen. When the ball changed color, participants were required only to identify the color shade; if the ball did not change color, participants estimated the intersection point. Participants in this dual attention condition were first tested with the targeting and color tasks alone and showed results similar to those of the previous groups tested on a single task. However, under the dual attention condition, male accuracy in targeting, as well as in color shade recognition, declined significantly compared to their performance when the tasks were tested alone. No significant changes were found in female performance. Finally, reaction times for targeting and color choices in both sexes correlated highly with ball speed, but not with accuracy. Overall, these results provide evidence of a sex-related bias in processing objects versus movement, which may reflect sex differences in bottom-up versus top-down analytical strategies.
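The targeting judgement described above is essentially a linear extrapolation of the occluded trajectory. The sketch below shows one way such a judgement could be scored against the true intersection point; the function, coordinate convention, and numbers are illustrative assumptions (the study's ball may have moved purely vertically), not the actual task code.

```python
# Hedged sketch of scoring the occluded-targeting judgement: extrapolate the
# ball's constant-velocity path behind the mask and compare the true crossing
# point with a (hypothetical) participant response. All values are made up.

def true_intersection_x(x0: float, y0: float, vx: float, vy: float, y_line: float) -> float:
    """x-coordinate where a ball starting at (x0, y0) with velocity (vx, vy)
    crosses the horizontal target line y = y_line."""
    t = (y_line - y0) / vy                 # time to reach the line
    return x0 + vx * t

truth = true_intersection_x(x0=0.0, y0=30.0, vx=1.5, vy=-3.0, y_line=0.0)
response_x = 14.0                          # hypothetical participant estimate
print(f"true crossing: {truth:.1f}, targeting error: {abs(response_x - truth):.1f}")
```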