Results 1–10 of 47
SEEMORE: Combining Color, Shape, and Texture Histogramming in a Neurally Inspired Approach to Visual Object Recognition
, 1997
"... this article. ..."
Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex
 Neural Computation
, 1995
"... this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines inputdriven bottomup signals with expec ..."
Abstract

Cited by 86 (21 self)
In this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict the current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction/learning scheme can be viewed as implementing a form of the Expectation-Maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field. …
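The prediction/correction scheme described in this abstract can be illustrated with a minimal Kalman-style update step (a generic textbook sketch, not the paper's MDL-derived model): a top-down prediction is corrected by the bottom-up input in proportion to the Kalman gain.

```python
import numpy as np

def kalman_update(x_pred, P_pred, y, C, R):
    """One correction step: combine a top-down prediction x_pred with a
    bottom-up measurement y, weighting the prediction error by the gain K.
    C maps state to measurement; P_pred and R are prediction and noise covariances."""
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # gain: how much to trust the input
    x_new = x_pred + K @ (y - C @ x_pred)    # prediction corrected by the error signal
    P_new = (np.eye(len(x_pred)) - K @ C) @ P_pred
    return x_new, P_new

# With equal prediction and measurement uncertainty, the corrected estimate
# lands halfway between the top-down prediction and the bottom-up input.
x, P = kalman_update(np.zeros(2), np.eye(2), np.array([2.0, 2.0]), np.eye(2), np.eye(2))
print(x)  # [1. 1.]
```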
Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population
 Journal of Neuroscience
, 1988
"... We describe a code by which a population of motor cortical neurons could determine uniquely the direction of reaching movements in threedimensional space. The population consisted of 475 directionally tuned cells whose functional properties are described in the preceding paper (Schwartz et al., 1 ..."
Abstract

Cited by 51 (3 self)
We describe a code by which a population of motor cortical neurons could determine uniquely the direction of reaching movements in three-dimensional space. The population consisted of 475 directionally tuned cells whose functional properties are described in the preceding paper (Schwartz et al., 1988). Each cell discharged at the highest rate with movements in its “preferred direction” and at progressively lower rates with movements in directions away from the preferred one. The neuronal population code assumes that for a particular movement direction each cell makes a vectorial contribution (“votes”) with direction in the cell’s preferred direction and magnitude proportional to the change in the cell’s discharge rate associated with the particular direction of movement. The vector sum of these contributions is the outcome of the population code (the “neuronal population vector”) …
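The population code described above can be sketched numerically (a hypothetical simulation with cosine-tuned cells, not the recorded data): each cell votes with its preferred direction, weighted by its rate change, and the vector sum approximates the movement direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# 475 hypothetical cells with random unit preferred directions in 3-D,
# standing in for the measured tuning of the recorded population.
n_cells = 475
prefs = rng.normal(size=(n_cells, 3))
prefs /= np.linalg.norm(prefs, axis=1, keepdims=True)

def population_vector(movement_dir, prefs, gain=20.0):
    """Each cell's rate change is assumed cosine-tuned: gain * cos(angle to
    its preferred direction). Its vote is its preferred direction scaled by
    that change; the population vector is the sum of the votes."""
    d = movement_dir / np.linalg.norm(movement_dir)
    rate_change = gain * prefs @ d               # deviation from baseline rate
    pv = (rate_change[:, None] * prefs).sum(axis=0)
    return pv / np.linalg.norm(pv)

true_dir = np.array([1.0, 2.0, -0.5])
est = population_vector(true_dir, prefs)
print(est @ (true_dir / np.linalg.norm(true_dir)))  # cosine similarity near 1
```

With a large, roughly uniformly tuned population the vector sum aligns closely with the true movement direction, which is the point of the code.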
Minimizing Binding Errors Using Learned Conjunctive Features
, 2000
"... this article, we describe our work to test a simple analytical model that captures several tradeoffs governing the performance of visual recognition systems based on spatially invariant conjunctive features. In addition, we introduce a supervised greedy algorithm for feature learning that grows a v ..."
Abstract

Cited by 36 (2 self)
In this article, we describe our work to test a simple analytical model that captures several tradeoffs governing the performance of visual recognition systems based on spatially invariant conjunctive features. In addition, we introduce a supervised greedy algorithm for feature learning that grows a visual representation in such a way as to minimize false-positive recognition errors. Finally, we consider some of the surprising properties of "good" representations and the implications of our results for more realistic visual recognition problems.
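A greedy loop of this general shape can be sketched as follows (a toy illustration under assumed binary feature detectors, not the paper's algorithm): at each step, add the feature active on the target that eliminates the most remaining false-positive distractors.

```python
import numpy as np

def greedy_select(target_feats, distractor_feats, k):
    """target_feats: binary vector saying which features fire on the target.
    distractor_feats: (m, n) binary matrix of feature responses on m distractors.
    A distractor is a false positive while it matches the target on every
    feature chosen so far; greedily pick up to k features that eliminate
    the most false positives."""
    chosen = []
    matching = np.ones(len(distractor_feats), dtype=bool)
    for _ in range(k):
        candidates = [j for j in np.flatnonzero(target_feats) if j not in chosen]
        if not candidates:
            break
        # false positives that would remain if feature j were added
        fp = {j: int(np.sum(matching & (distractor_feats[:, j] == 1)))
              for j in candidates}
        best = min(fp, key=fp.get)   # ties broken by lowest feature index
        chosen.append(best)
        matching &= distractor_feats[:, best] == 1
    return chosen, int(matching.sum())

target = np.array([1, 1, 1, 0])
distractors = np.array([[1, 1, 0, 1],
                        [1, 0, 1, 0],
                        [0, 1, 1, 1]])
print(greedy_select(target, distractors, 2))  # ([0, 1], 1)
```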
The quantitative study of shape and pattern perception
 Psychol. Bull
, 1957
"... The preeminent importance of formal or relational factors in perception has been abundantly demonstrated during some forty years of gestalt psychology. It seems extraordinary, therefore, that so little progress has been made (and, indeed, that so little effort has been expended) toward the systemat ..."
Abstract

Cited by 30 (1 self)
The preeminent importance of formal or relational factors in perception has been abundantly demonstrated during some forty years of gestalt psychology. It seems extraordinary, therefore, that so little progress has been made (and, indeed, that so little effort has been expended) toward the systematizing and quantifying of such factors. Our most precise knowledge of perception is in those areas which have yielded to psychophysical analysis (e.g., the perception of size, color, and pitch), but there is virtually no psychophysics of shape or pattern. Several difficulties may be pointed out at once: (a) Shape is a multidimensional variable, though it is often carelessly referred to as a "dimension," along with brightness, hue, area, and the like. (b) The number of dimensions necessary to describe a shape is not fixed or constant, but increases with the complexity of the shape. (c) Even if we know how many dimensions are necessary in a given case, the choice of particular descriptive terms (i.e., of reference axes in the multidimensional space with which we are dealing) remains a problem; presumably some such terms have more psychological meaningfulness than others.
Visual transformation of size
 Journal of Experimental Psychology: Human Perception & Performance
, 1975
"... To investigate human visual identification of differentsized objects as identically shaped, matching reaction times were measured for pairs of simultaneously presented random figures. In three experiments, reaction time for correct reactions to test pairs of figures of the same shape and orientatio ..."
Abstract

Cited by 22 (1 self)
To investigate human visual identification of different-sized objects as identically shaped, matching reaction times were measured for pairs of simultaneously presented random figures. In three experiments, reaction time for correct reactions to test pairs of figures of the same shape and orientation consistently increased approximately linearly as a function of the linear size ratio of the figures. In the second experiment, where this ratio was defined for control pairs as well as for test pairs, reaction time for correct reactions to control pairs showed a similar increase as a function of size ratio. The results suggest that the task was performed by a gradual process of mental size transformation of one of the members of each pair of figures to the format of the other one. The way we visually identify form independently of size has long been considered …
Learning the Lie Groups of Visual Invariance
, 2007
"... A fundamental problem in biological and machine vision is visual invariance: How are objects perceived to be the same despite transformations such as translations, rotations, and scaling? In this letter, we describe a new, unsupervised approach to learning invariances based on Lie group theory. Unli ..."
Abstract

Cited by 9 (0 self)
A fundamental problem in biological and machine vision is visual invariance: How are objects perceived to be the same despite transformations such as translations, rotations, and scaling? In this letter, we describe a new, unsupervised approach to learning invariances based on Lie group theory. Unlike traditional approaches that sacrifice information about transformations to achieve invariance, the Lie group approach explicitly models the effects of transformations in images. As a result, estimates of transformations are available for other purposes, such as pose estimation and visuomotor control. Previous approaches based on first-order Taylor series expansions of images can be regarded as special cases of the Lie group approach, which utilizes a matrix-exponential-based generative model of images and can handle arbitrarily large transformations. We present an unsupervised expectation-maximization algorithm for learning Lie transformation operators directly from image data containing examples of transformations. Our experimental results show that the Lie operators learned by the algorithm from an artificial data set containing six types of affine transformations closely match the analytically predicted affine operators. We then demonstrate that the algorithm can also recover novel transformation operators from natural image sequences. We conclude by showing that the learned operators can be used to both generate and estimate transformations in images, thereby providing a basis for achieving visual invariance.
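The matrix-exponential generative model can be illustrated with the simplest case (a standard textbook example, not the operators learned in the paper): the Lie generator of in-plane rotation, exponentiated to produce a rotation by any angle, however large.

```python
import numpy as np

def mat_exp(A, terms=40):
    """Matrix exponential by truncated Taylor series: exp(A) = sum_k A^k / k!."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Infinitesimal generator of 2-D rotation; exp(theta * G) is the rotation
# matrix for angle theta, so large transformations pose no problem.
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])

theta = np.pi / 3
R = mat_exp(theta * G)
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R, expected))  # True
```

First-order Taylor models correspond to keeping only the I + theta*G term of this series, which is why they break down for large transformations.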
Periodic Symmetric Functions, Serial Addition and Multiplication with Neural Networks
, 1998
"... This paper investigates threshold based neural networks for periodic symmetric Boolean functions and some related operations. It is shown that any ninput variable periodic symmetric Boolean function can be implemented with a feedforward linear threshold based neural network with size of O(log n) a ..."
Abstract

Cited by 8 (4 self)
This paper investigates threshold-based neural networks for periodic symmetric Boolean functions and some related operations. It is shown that any n-input-variable periodic symmetric Boolean function can be implemented with a feedforward linear-threshold-based neural network with size O(log n) and depth also O(log n), both measured in terms of neurons. The maximum weight and fan-in values are in the order of O(n). Under the same assumptions on weight and fan-in values, an asymptotic bound of O(log n) for both size and depth of the network is also derived for symmetric Boolean functions that can be decomposed into a constant number of periodic symmetric Boolean subfunctions. Based on these results, neural networks for serial binary addition and multiplication of n-bit operands are also proposed. It is shown that serial addition can be computed with polynomially bounded weights and a maximum fan-in in the order of O(log n) in O(n / log n) serial cycles, where a serial cycle …
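As a concrete, much simpler illustration of threshold gates computing a periodic symmetric Boolean function, here is the classic depth-2 construction for n-input parity (the period-2 symmetric function). It uses n + 1 neurons with weights in {+1, -1}, not the O(log n)-size networks the paper constructs.

```python
def threshold(z):
    """Linear threshold neuron: fires iff its weighted input is >= 0."""
    return int(z >= 0)

def parity_network(x):
    """Depth-2 threshold circuit for parity, a symmetric Boolean function
    that is periodic with period 2 in the number of active inputs."""
    n = len(x)
    s = sum(x)
    # Hidden layer: unit k (all input weights 1, bias -k) fires iff
    # at least k of the n inputs are on.
    h = [threshold(s - k) for k in range(1, n + 1)]
    # Output: alternating +1/-1 weights sum to 1 iff the count is odd.
    acc = sum(((-1) ** k) * h[k] for k in range(n))
    return threshold(acc - 1)

for bits in ([0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1], [1, 1, 1, 1, 1]):
    assert parity_network(bits) == sum(bits) % 2
print("parity OK")
```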
Periodic Symmetric Functions with FeedForward Neural Networks
 in NEURAP '95/96 Neural Networks and their Applications
, 1996
"... This technical report presents a new theoretical approach to the problem of switching networks synthesis with McCullochPitts feedforward neural networks. It is shown that any ninputs periodical symmetric Boolean function F p with the period T and the first positive transition at x = a can be impl ..."
Abstract

Cited by 6 (6 self)
This technical report presents a new theoretical approach to the problem of switching-network synthesis with McCulloch-Pitts feedforward neural networks. It is shown that any n-input periodic symmetric Boolean function F_p with period T and first positive transition at x = a can be implemented with a network of depth and size 1 + ⌈log((n − a)/T)⌉, both measured in terms of neurons, when a period contains two transitions. It can be implemented with a network of depth and size t + ⌈log((n − a)/T)⌉ when a period contains more than two transitions, where t is the number of neural elements necessary to implement the restriction of F_p to the first period, i.e., the input interval [0, T]. An asymptotic bound of O(log n) for the network (for both size and depth) is also derived for symmetric Boolean functions that can be decomposed into l periodic symmetric Boolean subfunctions.
Receptive fields for vision: from hyperacuity to object recognition
, 1996
"... Many of the lowerlevel areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is a part of such a retinotopic map normally responds selectively to stimulation in a welldelimited part of th ..."
Abstract

Cited by 6 (5 self)
Many of the lower-level areas in the mammalian visual system are organized retinotopically, that is, as maps which preserve to a certain degree the topography of the retina. A unit that is part of such a retinotopic map normally responds selectively to stimulation in a well-delimited part of the visual field, referred to as its receptive field (RF). Receptive fields are probably the most prominent and ubiquitous computational mechanism employed by biological information processing systems. This paper surveys some of the possible computational reasons behind the ubiquity of RFs, by discussing examples of RF-based solutions to problems in vision, from spatial acuity, through sensory coding, to object recognition.