Results 1 - 10 of 1,052

Subsymmetries predict auditory and visual ...

by Godfried T. Toussaint, Juan F. Beltran - Perception, 2013
"... ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Abstract not found

Predictive reward signal of dopamine neurons

by Wolfram Schultz - Journal of Neurophysiology, 1998
"... Schultz, Wolfram. Predictive reward signal of dopamine neurons. is called rewards, which elicit and reinforce approach behav-J. Neurophysiol. 80: 1–27, 1998. The effects of lesions, receptor ior. The functions of rewards were developed further during blocking, electrical self-stimulation, and drugs ..."
Abstract - Cited by 747 (12 self)
... of rewards, and the availability of rewards determines some of the basic parameters of the subject’s life ... neurons show phasic activations after primary liquid and food rewards and conditioned, reward-predicting visual and auditory stimuli. They show biphasic, activation-depression responses after stimuli ...
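
The phasic dopamine responses this entry describes are now commonly formalized as a temporal-difference (TD) reward-prediction error. The sketch below is that standard formalization, not code from Schultz (1998); the discount factor, learning rate, and state names are illustrative assumptions.

    gamma = 0.9   # discount factor (assumed)
    alpha = 0.1   # learning rate (assumed)
    V = {"cue": 0.0, "outcome": 0.0}   # state values, hypothetical states

    def td_error(r, s, s_next):
        # delta = r + gamma*V(s') - V(s): positive for an unpredicted reward,
        # near zero once the cue predicts it, and negative when a predicted
        # reward is omitted -- the signature of the dopamine response.
        return r + gamma * V[s_next] - V[s]

    for trial in range(100):                 # repeated cue -> reward pairings
        delta = td_error(1.0, "cue", "outcome")
        V["cue"] += alpha * delta            # value transfers to the predictive cue
    print(V["cue"])                          # approaches r + gamma*V(outcome) = 1.0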

Modulation of LIP activity by predictive auditory and visual cues

by Yale E. Cohen, Ian S. Cohen, Gordon W. Gifford III - Cerebral Cortex, 2004
"... The lateral intraparietal area (area LIP) contains a multimodal repre-sentation of extra-personal space. To further examine this represen-tation, we trained rhesus monkeys on the predictive-cueing task. During this task, monkeys shifted their gaze to a visual target whose location was predicted by t ..."
Abstract - Cited by 6 (0 self)

Computational Models of Sensorimotor Integration

by Zoubin Ghahramani, Daniel M. Wolpert, Michael I. Jordan - Science, 1997
"... The sensorimotor integration system can be viewed as an observer attempting to estimate its own state and the state of the environment by integrating multiple sources of information. We describe a computational framework capturing this notion, and some specific models of integration and adaptati ..."
Abstract - Cited by 424 (12 self)
information from visual and auditory systems is integrated so as to reduce the variance in localization. (2) The effects of a remapping in the relation between visual and auditory space can be predicted from a simple learning rule. (3) The temporal propagation of errors in estimating the hand
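
The variance-reduction claim in this snippet corresponds to minimum-variance (inverse-variance-weighted) cue combination. A minimal sketch of that rule, with made-up visual and auditory estimates rather than the paper's data:

    import numpy as np

    def integrate(estimates, variances):
        prec = 1.0 / np.asarray(variances, dtype=float)   # precisions
        w = prec / prec.sum()                             # inverse-variance weights
        fused = float(np.dot(w, estimates))               # combined estimate
        fused_var = 1.0 / prec.sum()                      # <= min(variances)
        return fused, fused_var

    # Hypothetical visual and auditory location estimates (degrees).
    loc, var = integrate(estimates=[10.0, 14.0], variances=[1.0, 4.0])
    print(loc, var)   # 10.8 deg, 0.8 -- less variable than either cue alone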

Selection of Valid and Reliable EEG Features for Predicting Auditory and Visual Alertness Levels

by unknown authors, 1999
"... A selection procedure with three rules, high efficiency, low individual variability, and low redundancy, was devel-oped to screen electroencephalogram (EEG) features for predicting behavioral alertness levels. A total of 24 EEG features were derived from temporal, frequency spectral, and statistical ..."
Abstract
, and the mean frequency of the EEG spectrum (MF), was found to be the best combination for predicting the auditory alertness level. In the visual task study, the mean frequency of the beta band (Fβ, 13 – 32 Hz) was the only EEG feature selected. The application of an averaging subwindow procedure within a
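
The two features named here, MF and Fβ, are power-weighted mean frequencies of the EEG spectrum, over the full spectrum and the 13–32 Hz beta band respectively. A minimal sketch using a Welch PSD; the sampling rate and window length are assumptions, not values from the paper:

    import numpy as np
    from scipy.signal import welch

    def mean_frequency(eeg, fs=250.0, band=None):
        # Power-weighted mean (spectral centroid) of a Welch PSD.
        f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2 s windows (assumed)
        if band is not None:
            keep = (f >= band[0]) & (f <= band[1])
            f, pxx = f[keep], pxx[keep]
        return np.sum(f * pxx) / np.sum(pxx)

    eeg = np.random.randn(10 * 250)                # 10 s of fake one-channel EEG
    mf = mean_frequency(eeg)                       # MF, whole spectrum
    f_beta = mean_frequency(eeg, band=(13, 32))    # F-beta, 13-32 Hz band only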

PREDICTING AUDITORY-VISUAL SPEECH RECOGNITION IN HEARING-IMPAIRED LISTENERS

by Ken W. Grant, Brian E. Walden
"... ABSTRACT Individuals typically derive substantial benefit to speech recognition from combining auditory (A) and visual (V) cues. However, there is considerable variability in AV speech recognition, even when individual differences in A and V performance are taken into account. In this paper, severa ..."
Abstract
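
A common reference point for the audiovisual (AV) benefit described above is independent probability summation, which predicts AV performance from the unimodal scores alone. It is shown here only as a baseline, not necessarily the predictor Grant and Walden evaluate:

    def prob_sum(p_a, p_v):
        # P(AV) = P(A) + P(V) - P(A)*P(V): correct if either modality
        # alone would have succeeded, assuming independence.
        return p_a + p_v - p_a * p_v

    print(prob_sum(0.40, 0.25))   # 0.55; observed AV scores above this
                                  # suggest benefit beyond independent use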

A New Test of Attention in Listening (TAIL) Predicts Auditory Performance

by Yu-xuan Zhang, Johanna G. Barry, David R. Moore, Sygal Amitay
"... Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequenti ..."
Abstract
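
The abstract states only that attention effects are quantified from reaction time (RT) to the two tones. One plausible scoring, shown purely as an assumption about how such an RT-based index might work, is the mean RT cost when a task-irrelevant tone feature changes:

    import numpy as np

    def rt_cost(rt_change, rt_same):
        # Mean RT cost (ms) when a task-irrelevant tone feature changes;
        # larger values = stronger involuntary orienting (assumed index).
        return np.mean(rt_change) - np.mean(rt_same)

    cost = rt_cost([612, 640, 598], [540, 565, 552])   # illustrative RTs, ms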

Prediction-Driven Computational Auditory Scene Analysis for Dense Sound Mixtures

by Daniel P. W. Ellis, 1996
"... We interpret the sound reaching our ears as the combined effect of independent, sound-producing entities in the external world; hearing would have limited usefulness if were defeated by overlapping sounds. Computer systems that are to interpret real-world sounds for speech recognition or for multime ..."
Abstract - Cited by 189 (10 self)
We interpret the sound reaching our ears as the combined effect of independent, sound-producing entities in the external world; hearing would have limited usefulness if it were defeated by overlapping sounds. Computer systems that are to interpret real-world sounds for speech recognition or for multimedia indexing must similarly interpret complex mixtures. However, existing functional models of audition employ only data-driven processing incapable of making context-dependent inferences in the face of interference. We propose a prediction-driven approach to this problem, raising numerous issues including the need to represent any kind of sound, and to handle multiple competing hypotheses. Results from an implementation of this approach illustrate its ability to analyze complex, ambient sound scenes that would confound previous systems.
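
The prediction-driven approach described here can be caricatured as a predict-and-reconcile loop: element hypotheses predict the next observation, and only the unexplained residual spawns new elements. The skeleton below is a loose sketch under that reading; the element model, names, and threshold are placeholders, not Ellis's actual representation.

    import numpy as np

    THRESHOLD = 0.5   # assumed energy criterion for spawning a new element

    class Element:
        # Trivial element model: predicts a fixed spectral template and
        # adapts it slowly toward the part of the input it can explain.
        def __init__(self, template):
            self.template = np.asarray(template, dtype=float)
        def predict(self):
            return self.template
        def update(self, frame):
            self.template += 0.1 * (np.minimum(frame, self.template) - self.template)

    def analyze(frames):
        hypotheses = []
        for frame in frames:
            predicted = sum((h.predict() for h in hypotheses), np.zeros_like(frame))
            residual = np.clip(frame - predicted, 0.0, None)   # unexplained energy
            if residual.max() > THRESHOLD:
                hypotheses.append(Element(residual))           # new sound element
            for h in hypotheses:
                h.update(frame)                                # reconcile with input
        return hypotheses

    elements = analyze(np.abs(np.random.randn(20, 64)))        # fake spectrogram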

AUDITORY

by Michael R. Wirtzfeld, Ian C. Bruce
"... Speech intelligibility predictors based on the contributions of envelope (ENV) and time fine structure (TFS) neural cues have many potential applications in communications and hearing research. However, establishing robust correlates between subjective speech perception scores and neural cues has no ..."
Abstract
not explain speech intelligibility for “auditory chimaeras” [8], where speech information is primarily in the TFS [4], motivating the inclusion of TFS neural cues in the predictive models. A speech corpus of 1,750 sentences divided into five chimaera types, subjectively scored by 5 normal-hearing listeners
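
The ENV/TFS split behind “auditory chimaeras” (after Smith, Delgutte and Oxenham, 2002) imposes the Hilbert envelope of one sound on the temporal fine structure of another within each frequency band. A minimal sketch; the band edges and filter order are illustrative assumptions:

    import numpy as np
    from scipy.signal import hilbert, butter, sosfiltfilt

    def chimaera(x_env, x_tfs, fs, edges=(80, 300, 1100, 4000)):
        out = np.zeros(len(x_env))
        for lo, hi in zip(edges[:-1], edges[1:]):              # analysis bands
            sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
            a = hilbert(sosfiltfilt(sos, x_env))               # envelope source
            b = hilbert(sosfiltfilt(sos, x_tfs))               # TFS source
            out += np.abs(a) * np.cos(np.angle(b))             # ENV of a on TFS of b
        return out

    fs = 16000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 200 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
    y = chimaera(tone, np.random.randn(fs), fs)   # tone-ENV, noise-TFS chimaera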

Predicting auditory space calibration from recent multisensory experience

by Catarina Mendonça - Experimental Brain Research
"... Predicting auditory space calibration from recent multisensory experience ..."
Abstract