Results 1 - 10 of 14,411

Evolving Artificial Neural Networks

by Xin Yao , 1999
"... This paper: 1) reviews different combinations between ANN's and evolutionary algorithms (EA's), including using EA's to evolve ANN connection weights, architectures, learning rules, and input features; 2) discusses different search operators which have been used in various EA's; ..."
Abstract - Cited by 574 (6 self)
and 3) points out possible future research directions. It is shown, through a considerably large literature review, that combinations between ANN's and EA's can lead to significantly better intelligent systems than relying on ANN's or EA's alone
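
A minimal sketch of the first combination the review lists (using an EA to evolve ANN connection weights); the architecture, fitness function, and all hyperparameters below are illustrative choices, not taken from the paper:

    # Evolve the connection weights of a fixed 3-4-1 feedforward ANN with a
    # simple elitist evolutionary loop (illustrative hyperparameters).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 3))                  # toy inputs
    y = (X.sum(axis=1) > 0).astype(float)         # toy targets

    def forward(w, X):
        W1 = w[:12].reshape(3, 4)                 # input -> hidden weights
        W2 = w[12:].reshape(4, 1)                 # hidden -> output weights
        h = np.tanh(X @ W1)
        return (1.0 / (1.0 + np.exp(-(h @ W2)))).ravel()

    def fitness(w):
        return -np.mean((forward(w, X) - y) ** 2)  # higher is better

    pop = rng.normal(size=(30, 16))               # population of weight vectors
    for gen in range(200):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]   # keep the 10 fittest
        children = (parents[rng.integers(0, 10, size=20)]
                    + 0.1 * rng.normal(size=(20, 16)))
        pop = np.vstack([parents, children])      # elitism + Gaussian mutation

Evolving architectures or learning rules, the other combinations surveyed, would replace this fixed weight-vector encoding with a representation of the network structure itself.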

A new learning algorithm for blind signal separation

by S. Amari, A. Cichocki, H. H. Yang - , 1996
"... A new on-line learning algorithm which minimizes a statistical de-pendency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual in-formation (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
Abstract - Cited by 622 (80 self)
of the sources. The Gram-Charlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the on-line learning algorithm which has an equivariant property and is easily implemented on a neural
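
A compact sketch of an update rule of the general form the abstract describes (natural-gradient minimization of output dependency); the tanh nonlinearity stands in for the paper's proposed activation function, and the data, learning rate, and iteration counts are made up:

    # Natural-gradient style blind separation: W <- W + lr * (I - phi(y) y^T) W
    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 2, 5000
    S = rng.laplace(size=(n, T))              # two independent (unknown) sources
    A = rng.normal(size=(n, n))               # unknown mixing matrix
    X = A @ S                                 # observed mixtures
    X = X / X.std()                           # crude pre-scaling for stability

    W = np.eye(n)                             # unmixing matrix estimate
    lr = 0.001
    for _ in range(20):                       # passes over the data
        for t in range(T):
            y = W @ X[:, t]
            W += lr * (np.eye(n) - np.outer(np.tanh(y), y)) @ W

    Y = W @ X                                 # recovered sources, up to scale and permutation

The trailing multiplication by W in the update is what gives rules of this form the equivariant behaviour the abstract mentions.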

Survey on Independent Component Analysis

by Aapo Hyvärinen - NEURAL COMPUTING SURVEYS , 1999
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Abstract - Cited by 2309 (104 self)
A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation
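
For readers who just want the linear representation the survey is about, a library-level sketch (scikit-learn's FastICA on synthetic data; not code from the survey):

    # Estimate a linear representation x = A s of multivariate data with ICA.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]   # two independent sources
    A = np.array([[1.0, 0.5], [0.7, 1.2]])             # mixing matrix
    X = S @ A.T                                        # observed multivariate data

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)                       # estimated sources
    A_hat = ica.mixing_                                # estimated mixing matrix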

Statistical pattern recognition: A review

by Anil K. Jain, Robert P. W. Duin, Jianchang Mao - IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 2000
"... The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques ..."
Abstract - Cited by 1035 (30 self)
The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network

Brain magnetic resonance imaging with contrast dependent on blood oxygenation.

by S Ogawa , T M Lee , A R Kay , D W Tank - Proc. Natl. Acad. Sci. USA , 1990
"... ABSTRACT Paramagnetic deoxyhemoglobin in venous blood is a naturally occurring contrast agent for magnetic resonance imaging (MRI). By accentuating the effects of this agent through the use of gradient-echo techniques in high fields, we demonstrate in vivo images of brain microvasculature with imag ..."
Abstract - Cited by 648 (1 self)
to regional neural activity. Magnetic resonance imaging (MRI) is a widely accepted modality for providing anatomical information. Current research (1) involves extending MRI methods to provide information about biological function, in addition to the concomitant anatomical information. In addition

Policy gradient methods for reinforcement learning with function approximation.

by Richard S Sutton , David McAllester , Satinder Singh , Yishay Mansour - In NIPS , 1999
"... Abstract Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly repres ..."
Abstract - Cited by 439 (20 self)
approximating a value function and using that to compute a deterministic policy, we approximate a stochastic policy directly using an independent function approximator with its own parameters. For example, the policy might be represented by a neural network whose input is a representation of the state, whose
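
A minimal REINFORCE-style sketch of the idea in the excerpt (a parameterized stochastic policy updated along its performance gradient); a linear softmax policy on a toy one-step task is used here instead of a neural network, and all numbers are illustrative:

    # REINFORCE update for a softmax policy:
    # theta <- theta + alpha * R * grad log pi(a|s)
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 4, 2
    theta = np.zeros((n_states, n_actions))        # policy parameters
    true_reward = rng.uniform(size=(n_states, n_actions))

    def policy(s):
        prefs = theta[s]
        p = np.exp(prefs - prefs.max())            # softmax over action preferences
        return p / p.sum()

    alpha = 0.1
    for episode in range(5000):
        s = rng.integers(n_states)                 # observe a state
        p = policy(s)
        a = rng.choice(n_actions, p=p)             # sample an action stochastically
        r = true_reward[s, a] + 0.1 * rng.normal() # sampled return
        grad_log = -p
        grad_log[a] += 1.0                         # grad of log pi(a|s) wrt theta[s]
        theta[s] += alpha * r * grad_log           # policy-gradient step

Swapping the table theta for a neural network, as the excerpt suggests, changes only how log pi and its gradient are computed.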

Making Gnutella-like P2P Systems Scalable

by Yatin Chawathe, Sylvia Ratnasamy, Lee Breslau, Nick Lanham, Scott Shenker , 2003
"... Napster pioneered the idea of peer-to-peer file sharing, and supported it with a centralized file search facility. Subsequent P2P systems like Gnutella adopted decentralized search algorithms. However, Gnutella's notoriously poor scaling led some to propose distributed hash table solutions to t ..."
Abstract - Cited by 429 (1 self)
implementation and its deployment on a testbed. Categories and Subject Descriptors: C.2 [Computer Communication Networks]: Distributed Systems. General Terms: Algorithms, Design, Performance, Experimentation. Keywords: Peer-to-peer, distributed hash tables, Gnutella

When Networks Disagree: Ensemble Methods for Hybrid Neural Networks

by Michael P. Perrone, Leon N. Cooper , 1993
"... This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argu ..."
Abstract - Cited by 349 (3 self)
in functional space which helps to avoid over-fitting. 4) It utilizes local minima to construct improved estimates whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct
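
A toy sketch of the simplest case in this framework (a plain average over a population of regression estimators); the estimators, data, and comparison below are illustrative, and the paper's weighted construction is more general:

    # Average a population of bootstrap-trained regression estimators and
    # compare ensemble MSE with the population's mean MSE (toy data).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 200)
    truth = np.sin(3 * x)
    y = truth + 0.3 * rng.normal(size=x.size)          # noisy targets

    preds = []
    for _ in range(10):                                # population of estimators
        idx = rng.integers(0, x.size, size=x.size)     # bootstrap resample
        coeffs = np.polyfit(x[idx], y[idx], deg=7)
        preds.append(np.polyval(coeffs, x))
    preds = np.array(preds)

    mse_members = ((preds - truth) ** 2).mean(axis=1)
    mse_ensemble = ((preds.mean(axis=0) - truth) ** 2).mean()
    print(mse_ensemble, mse_members.mean())            # ensemble <= population average (Jensen)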

How to improve Bayesian reasoning without instruction: Frequency formats

by Gerd Gigerenzer, Ulrich Hoffrage - Psychological Review , 1995
"... Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one s ..."
Abstract - Cited by 396 (29 self)
specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural
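
A toy numerical illustration of the contrast the excerpt draws; the disease/test numbers are invented for the example and are not taken from the paper:

    # The same Bayesian update in a probability format and a natural-frequency format.
    p_disease = 0.01                    # base rate
    p_pos_given_disease = 0.8           # hit rate
    p_pos_given_healthy = 0.1           # false-alarm rate
    posterior = (p_pos_given_disease * p_disease) / (
        p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease))

    # Frequency format: the same information as counts out of 1,000 people.
    sick_and_pos = 8                    # of 10 sick people, 8 test positive
    healthy_and_pos = 99                # of 990 healthy people, 99 test positive
    posterior_freq = sick_and_pos / (sick_and_pos + healthy_and_pos)

    print(round(posterior, 3), round(posterior_freq, 3))   # both ~0.075

The frequency version reduces the computation to one division over two counts, which is the simplification the paper attributes to natural frequency formats.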

A unified architecture for natural language processing: Deep neural networks with multitask learning

by Ronan Collobert, Jason Weston , 2008
"... We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and sem ..."
Abstract - Cited by 340 (13 self)
We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically
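
A structural sketch of the idea in the excerpt (one shared sentence encoder feeding several task-specific heads), written in PyTorch; the dimensions, task set, and layer choices are illustrative rather than the paper's configuration:

    # Shared convolutional sentence encoder with two per-token task heads.
    import torch
    import torch.nn as nn

    class SharedEncoderMultitask(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=50, hidden=100,
                     n_pos_tags=45, n_ner_tags=9):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
            self.pos_head = nn.Linear(hidden, n_pos_tags)   # task 1: POS tagging
            self.ner_head = nn.Linear(hidden, n_ner_tags)   # task 2: named entities

        def forward(self, tokens):                          # tokens: (batch, seq)
            x = self.embed(tokens).transpose(1, 2)          # (batch, emb, seq)
            h = torch.relu(self.conv(x)).transpose(1, 2)    # (batch, seq, hidden)
            return self.pos_head(h), self.ner_head(h)       # per-token predictions

    model = SharedEncoderMultitask()
    tokens = torch.randint(0, 10000, (4, 12))               # a toy batch of token ids
    pos_logits, ner_logits = model(tokens)                  # (4, 12, 45) and (4, 12, 9)

Multitask training then alternates (or sums) the per-task losses so the shared layers are shaped by all tasks at once.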