Results 1 - 10 of 5,922
Improved Digital Image Compression using Modified Single Layer Linear Neural Networks, by B. Arunapriya
"... Image compression is solved using the Wavelet-Modified Single Layer Linear Forward Only Counter Propagation Network (MSLLFOCPN) technique. From wavelets it inherits the property of localizing global spatial and frequency correlation. Function approximation and prediction are o ..."
are obtained from neural networks. As a result, the counter propagation network was considered for its superior performance, and the research enabled us to propose a new neural network architecture named the single layer linear counter propagation network (SLLC). The combination of wavelet and SLLC network was tested
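As a rough illustration of the general idea behind pairing a wavelet transform with a single-layer linear coder (a generic sketch, not the paper's MSLLFOCPN), the snippet below takes the low-frequency Haar band of an image and fits the best linear code for its patches via SVD, which is the subspace a single-layer linear network trained on reconstruction converges to. All sizes and names are illustrative.

```python
import numpy as np

def haar_ll(img):
    """One-level 2-D Haar transform, keeping only the low-low band."""
    rows = (img[0::2, :] + img[1::2, :]) / 2.0
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0

def linear_code(patches, k):
    """Best k-dimensional linear code for the patches (PCA via SVD).

    A single-layer linear network trained to reconstruct its input
    converges to this subspace, so SVD is a convenient stand-in here.
    """
    mean = patches.mean(axis=0)
    _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
    return vt[:k], mean          # k x d encoder; decoder is its transpose

# toy usage: compress 4x4 patches of the Haar band of a random "image"
img = np.random.default_rng(0).random((64, 64))
ll = haar_ll(img)                                     # 32x32 approximation band
patches = ll.reshape(8, 4, 8, 4).transpose(0, 2, 1, 3).reshape(64, 16)
W, mu = linear_code(patches, k=4)
codes = (patches - mu) @ W.T                          # 4 numbers per 16-pixel patch
recon = codes @ W + mu
print("mean reconstruction error:", np.abs(recon - patches).mean())
```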
Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network, 1989
"... A new approach to unsupervised learning in a single-layer linear feedforward neural network is discussed. An optimality principle is proposed which is based upon preserving maximal information in the output units. An algorithm for unsupervised learning based upon a Hebbian learning rule, which achie ..."
Cited by 293 (2 self)
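The Hebbian algorithm proposed in this paper is commonly known as the Generalized Hebbian Algorithm (Sanger's rule). A minimal numpy sketch of the update, assuming zero-mean inputs and a small fixed learning rate:

```python
import numpy as np

def gha_step(W, x, lr=0.01):
    """One update of the Generalized Hebbian Algorithm (Sanger's rule).

    W : (m, d) weight matrix whose rows converge to the top-m principal
        components of the input distribution, in descending eigenvalue order.
    x : (d,) zero-mean input sample.
    """
    y = W @ x                                  # m output units
    # Hebbian term minus a Gram-Schmidt-like decorrelation term
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# toy usage: recover the dominant direction of correlated 2-D data
rng = np.random.default_rng(1)
data = rng.normal(size=(5000, 2)) @ np.diag([3.0, 0.5])
W = rng.normal(scale=0.1, size=(1, 2))
for x in data:
    W = gha_step(W, x)
print("learned direction:", W / np.linalg.norm(W))   # approx. [+-1, 0]
```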
Approximation by Superpositions of a Sigmoidal Function, 1989
"... In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set ofaffine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate fun ..."
Cited by 1248 (2 self)
function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any
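The superpositions in question take the following form; this restates the theorem from the snippet, with sigma the fixed sigmoidal function:

```latex
% For any continuous f on [0,1]^n and any \varepsilon > 0, there exist
% N, \alpha_j, \theta_j \in \mathbb{R} and w_j \in \mathbb{R}^n such that
G(x) = \sum_{j=1}^{N} \alpha_j \,\sigma\!\left(w_j^{\top} x + \theta_j\right)
\quad\text{satisfies}\quad
\sup_{x \in [0,1]^n} \lvert f(x) - G(x) \rvert < \varepsilon .
```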
Growing radial basis neural networks: Merging supervised and unsupervised learning with network growth techniques, IEEE Transactions on Neural Networks, 1997
"... Abstract—This paper proposes a framework for constructing and training radial basis function (RBF) neural networks. The proposed growing radial basis function (GRBF) network begins with a small number of prototypes, which determine the locations of radial basis functions. In the process of training, ..."
Cited by 58 (3 self)
These include unsupervised algorithms for clustering and learning vector quantization, as well as learning algorithms for training single-layer linear neural networks. A supervised learning scheme based on the minimization of the localized class-conditional variance is also proposed and tested. GRBF neural
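For context on what is being grown and trained, here is a minimal sketch of a plain RBF network (not the paper's GRBF growth procedure): Gaussian hidden units on fixed prototypes, with the linear output layer fit by least squares, which is where single-layer linear training rules enter. The data and prototypes are illustrative.

```python
import numpy as np

def rbf_forward(X, centers, widths, W):
    """Forward pass of a radial basis function network.

    Hidden unit j responds with exp(-||x - c_j||^2 / (2 s_j^2)); the
    output layer is single-layer linear, so linear training rules
    (plain least squares here) apply to the final weights.
    """
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # n x m distances
    H = np.exp(-d2 / (2.0 * widths ** 2))                      # n x m activations
    return H @ W, H

# toy usage: fit output weights by least squares on fixed prototypes
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)          # XOR-like labels
centers = np.array([[0.5, 0.5], [-0.5, -0.5], [0.5, -0.5], [-0.5, 0.5]])
widths = np.full(4, 0.5)
_, H = rbf_forward(X, centers, widths, np.zeros((4, 1)))
W = np.linalg.lstsq(H, y[:, None], rcond=None)[0]  # linear output training
pred, _ = rbf_forward(X, centers, widths, W)
print("training accuracy:", ((pred[:, 0] > 0.5) == (y > 0.5)).mean())
```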
Survey on Independent Component Analysis, Neural Computing Surveys, 1999
"... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research, is nding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
Cited by 2309 (104 self)
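The linear representation the survey describes is x = A s with independent, non-Gaussian sources s. As a hedged sketch of one standard estimator from the ICA literature, here is a one-unit FastICA-style fixed-point iteration with g = tanh; the data, seeds, and names are illustrative.

```python
import numpy as np

def whiten(X):
    """Center and whiten: output has identity covariance (ICA preprocessing)."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(Xc.T @ Xc / len(Xc))
    return Xc @ vecs @ np.diag(vals ** -0.5) @ vecs.T

def fastica_one_unit(Z, iters=100, seed=0):
    """One-unit FastICA with g(u) = tanh(u): finds a single maximally
    non-Gaussian projection w^T z, i.e. one row of the unmixing matrix."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        u = Z @ w
        w_new = (Z * np.tanh(u)[:, None]).mean(0) - (1 - np.tanh(u) ** 2).mean() * w
        w = w_new / np.linalg.norm(w_new)
    return w

# toy usage: unmix one source from a 2x2 linear mixture x = A s
rng = np.random.default_rng(2)
S = np.c_[np.sign(rng.normal(size=8000)), rng.uniform(-1, 1, 8000)]  # non-Gaussian sources
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]])
Z = whiten(X)
y = Z @ fastica_one_unit(Z)
print("max |corr| with a true source:",
      max(abs(np.corrcoef(y, S[:, 0])[0, 1]), abs(np.corrcoef(y, S[:, 1])[0, 1])))
```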
Training Support Vector Machines: an Application to Face Detection, 1997
"... We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. The decision sur ..."
Cited by 727 (1 self)
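The decision surface an SVM learns is a kernel expansion over support vectors. A minimal sketch of evaluating such a surface with a polynomial kernel (the support vectors and multipliers below are hand-picked for illustration, not the output of the paper's training method):

```python
import numpy as np

def svm_decision(x, sv, alpha, labels, b, degree=2):
    """Kernel SVM decision value: f(x) = sum_i alpha_i y_i K(sv_i, x) + b.

    A polynomial kernel is used because the abstract notes that SVMs
    subsume polynomial classifiers; only support vectors (alpha_i > 0)
    contribute, which keeps the decision surface sparse.
    """
    K = (sv @ x + 1.0) ** degree          # polynomial kernel against each SV
    return float(alpha * labels @ K + b)

# toy usage with hand-picked support vectors for an XOR-style problem
sv = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
labels = np.array([1.0, 1.0, -1.0, -1.0])
alpha = np.full(4, 0.125)                 # multipliers satisfying sum a_i y_i = 0
for point in ([0.8, 0.9], [0.7, -0.6]):
    print(point, "->", np.sign(svm_decision(np.array(point), sv, alpha, labels, b=0.0)))
```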
A Re-Examination of Text Categorization Methods, 1999
"... This paper reports a controlled study with statistical significance tests on five text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classifier, a neural network (NNet) approach, the Linear Leastsquares Fit (LLSF) mapping and a NaiveBayes (NB) classifier. We f ..."
Cited by 853 (24 self)
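Of the five methods compared, Naive Bayes is the simplest to sketch. A minimal multinomial NB classifier with Laplace smoothing (the toy corpus is illustrative):

```python
import numpy as np

def train_multinomial_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace smoothing, one of the five
    methods the paper compares (SVM, kNN, NNet, LLSF, NB)."""
    classes = sorted(set(labels))
    vocab = sorted({w for d in docs for w in d.split()})
    idx = {w: i for i, w in enumerate(vocab)}
    log_prior, log_lik = {}, {}
    for c in classes:
        class_docs = [d for d, l in zip(docs, labels) if l == c]
        counts = np.ones(len(vocab))                 # Laplace smoothing
        for d in class_docs:
            for w in d.split():
                counts[idx[w]] += 1
        log_prior[c] = np.log(len(class_docs) / len(docs))
        log_lik[c] = np.log(counts / counts.sum())
    return classes, idx, log_prior, log_lik

def classify(doc, classes, idx, log_prior, log_lik):
    """Pick the class maximizing log P(c) + sum_w log P(w | c)."""
    scores = {c: log_prior[c] + sum(log_lik[c][idx[w]] for w in doc.split() if w in idx)
              for c in classes}
    return max(scores, key=scores.get)

docs = ["stock market rises", "market trading up", "team wins match", "coach praises team"]
labels = ["finance", "finance", "sports", "sports"]
model = train_multinomial_nb(docs, labels)
print(classify("market up", *model))   # -> finance
```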
A Growing Neural Gas Network Learns Topologies, Advances in Neural Information Processing Systems 7, 1995
"... An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this m ..."
Cited by 401 (5 self)
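The Hebb-like rule in question connects the two units nearest to each input. A simplified sketch of one GNG input presentation (node insertion and error accumulation from the full algorithm are omitted for brevity; parameter values are illustrative):

```python
import numpy as np

def gng_step(nodes, edges, x, eps_b=0.05, eps_n=0.006, max_age=50):
    """One input presentation of a simplified Growing Neural Gas step:
    a Hebb-like rule connects the two nearest units, the winner and its
    topological neighbors move toward the input, and stale edges expire."""
    d = np.linalg.norm(nodes - x, axis=1)
    s1, s2 = np.argsort(d)[:2]                 # two nearest units
    edges[(min(s1, s2), max(s1, s2))] = 0      # create/refresh edge, age 0
    nodes[s1] += eps_b * (x - nodes[s1])       # move winner toward input
    for (a, b) in list(edges):
        if a == s1 or b == s1:
            edges[(a, b)] += 1                 # age edges incident to winner
            other = b if a == s1 else a
            nodes[other] += eps_n * (x - nodes[other])
        if edges[(a, b)] > max_age:
            del edges[(a, b)]                  # drop stale connections
    return nodes, edges

# toy usage: learn the topology of points on a ring
rng = np.random.default_rng(3)
nodes = rng.uniform(-1, 1, size=(20, 2))
edges = {}
for _ in range(5000):
    theta = rng.uniform(0, 2 * np.pi)
    nodes, edges = gng_step(nodes, edges, np.array([np.cos(theta), np.sin(theta)]))
print(len(edges), "edges learned among", len(nodes), "units")
```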
Handwritten Digit Recognition by Neural Networks with Single-Layer Training, 1992
"... We show that neural network classifiers with single-layer training can be applied efficiently to complex real-world classification problems such as the recognition of handwritten digits. We introduce the STEPNET procedure, which decomposes the problem into simpler subproblems which can be solved by ..."
Cited by 52 (2 self)
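STEPNET's subproblems are each solved by single-layer training. As a sketch of that building block (a perceptron-style unit on one hypothetical pairwise subproblem, not the STEPNET decomposition itself):

```python
import numpy as np

def train_single_layer(X, y, lr=0.1, epochs=100):
    """Perceptron training of a single linear threshold unit.

    This is the kind of single-layer building block the paper composes:
    each pairwise subproblem (e.g. "is this a 3 or an 8?") gets its own
    unit, trained without backpropagation through hidden layers."""
    Xb = np.c_[X, np.ones(len(X))]            # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1.0 if w @ xi > 0 else 0.0
            w += lr * (yi - pred) * xi        # error-driven update
    return w

# toy usage: separate two synthetic "digit" clusters
rng = np.random.default_rng(4)
A = rng.normal([2.0, 2.0], 0.5, size=(50, 2))     # stand-in for one digit class
B = rng.normal([-2.0, -2.0], 0.5, size=(50, 2))   # stand-in for the other
X = np.vstack([A, B]); y = np.r_[np.ones(50), np.zeros(50)]
w = train_single_layer(X, y)
acc = (((np.c_[X, np.ones(100)] @ w) > 0) == (y > 0.5)).mean()
print("pairwise subproblem accuracy:", acc)
```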
Greedy layer-wise training of deep networks, 2006
"... Complexity theory of circuits strongly suggests that deep architectures can be much more efficient (sometimes exponentially) than shallow architectures, in terms of computational elements required to represent some functions. Deep multi-layer neural networks have many levels of non-linearities allow ..."
Cited by 394 (48 self)
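The greedy strategy trains one layer at a time, each on the previous layer's output. A minimal sketch using tied-weight sigmoid autoencoders (the paper studies RBMs and autoencoder variants; sizes and rates here are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pretrain_layer(X, hidden, lr=0.5, epochs=200, seed=0):
    """Greedily train one tied-weight sigmoid autoencoder layer.

    Each layer is optimized in isolation to reconstruct its own input;
    its hidden code then becomes the next layer's training data."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    for _ in range(epochs):
        H = sigmoid(X @ W)                        # encode
        R = sigmoid(H @ W.T)                      # decode (tied weights)
        dR = (R - X) * R * (1 - R)                # delta at reconstruction
        dH = (dR @ W) * H * (1 - H)               # delta at hidden layer
        W -= lr * (X.T @ dH + dR.T @ H) / len(X)  # both uses of W contribute
    return W, sigmoid(X @ W)

# greedy stacking: train layer 2 on layer 1's codes, never jointly
X = np.random.default_rng(5).random((256, 32))
W1, H1 = pretrain_layer(X, hidden=16)
W2, H2 = pretrain_layer(H1, hidden=8)
print("pretrained shapes:", W1.shape, W2.shape, "final code:", H2.shape)
```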