Results 1 – 7 of 7
Combining and comparing clustering and layout algorithms, 2002
Abstract

Cited by 5 (3 self)
Many clustering and layout techniques have been used for structuring and visualising complex data. This paper explores a number of combinations and variants of sampling, K-means clustering and spring models in making such layouts, using Chalmers' 1996 linear iteration time spring model as a benchmark. This algorithm runs in O(N^2) time overall, but the run times for the new algorithms we describe reach O(N√N). We compare their layout quality and run times in laying out two collections of synthetic data, drawing samples from each collection of sizes ranging from 1000 to 20000. Based on these comparisons, we outline a number of avenues for future work that may further reduce time complexity and improve layout quality.
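As an illustration of the clustering step this abstract combines with sampling and spring-model layout, here is a minimal K-means sketch in Python. It is not the authors' implementation; the function name, the 2-D list-of-points representation, and the fixed iteration count are assumptions for illustration only.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means on 2-D points given as [x, y] lists."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # pick k distinct points as initial centroids
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = [sum(q[0] for q in cl) / len(cl),
                                sum(q[1] for q in cl) / len(cl)]
    return centroids, clusters
```

In the paper's setting one would first draw a sample from the full data set, cluster the sample, and then lay out clusters with a spring model; only the plain clustering step is sketched here.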
How Dependencies between Successive Examples Affect On-Line Learning, 1996
Abstract

Cited by 5 (3 self)
We study the dynamics of online learning for a large class of neural networks and learning rules, including backpropagation for multilayer perceptrons. In this paper, we focus on the case where successive examples are dependent, and we analyze how these dependencies affect the learning process. We define the representation error and the prediction error. The representation error measures how well the environment is represented by the network after learning. The prediction error is the average error which a continually learning network makes on the next example. In the neighborhood of a local minimum of the error surface, we calculate these errors. We find that the more predictable the example presentation, the higher the representation error, i.e. the less accurate the asymptotic representation of the whole environment. Furthermore, we study the learning process in the presence of a plateau. Plateaus are flat spots on the error surface, which can severely slow down the learning process...
Online Learning with Time-Correlated Examples, 1998
Abstract

Cited by 3 (0 self)
We study the dynamics of online learning with timecorrelated patterns. In this, we make a distinction between "small" networks and "large" networks. "Small" networks have a finite number of input units and are usually studied using tools from stochastic approximation theory in the limit of small learning parameters. "Large" networks have an extensive number of input units. A description in terms of individual weights is no longer useful and tools from statistical mechanics can be applied to compute the evolution of macroscopic order parameters. We give general derivations for both cases, but in the end focus on the effect of correlations on plateaus. Plateaus are long time spans in which the performance of the networks hardly changes. Learning in both "small" and "large" multilayered perceptrons is often hampered by the presence of plateaus. The effect of correlations, however, appears to be quite different: they can have a huge beneficial effect in small networks, but seem to have ...
Oblique Support Vector Machines, 2005
Abstract

Cited by 1 (0 self)
Abstract. In this paper we propose a modified framework of support vector machines, called Oblique Support Vector Machines (OSVMs), to improve classification capability. The principle of OSVMs is to join an orthogonal vector into the weight vector in order to rotate the support hyperplanes. In this way, not only is the regularized risk function revised, but the constraint functions are also modified. Under this modification, the separating hyperplane and the margin of separation are constructed more precisely. Moreover, in order to handle large-scale data problems, an iterative learning algorithm is proposed, with three different training schemes: pattern-mode learning, semi-batch-mode learning and batch-mode learning. Besides, a smoothing technique is adopted to convert the constrained nonlinear programming problem into an unconstrained optimization problem. Consequently, experimental results and comparisons are given to demonstrate that the performance of OSVMs is better than that of SVMs and SSVMs.
Feature selection with equalized salience measures and its application to segmentation
Online Learning From Finite Training Sets, Neural Computation, 1998
Abstract
In this paper, we give an exact analysis of online learning in a simple model system. Our aim is twofold: (1) to assess how the combination of non-infinitesimal learning rates η and finite training sets (containing α examples per weight) affects online learning, and (2) to compare the generalization performance of online and offline learning. A priori, one ... Online learning can also be used to learn teacher rules that vary in time. The assumption of an infinite set (or 'stream') of training examples is then much more plausible, and in fact necessary for continued adaptation of the student. We do not consider this case in the following.
Presentation Order and On-Line Learning
Abstract
We study the effect of the presentation order of training patterns on the performance of online learning neural networks. In the context of time series, we discuss the difference between randomized and natural learning. With regard to learning in cycles, we quantify and compare the performance of almost cyclic and purely cyclic learning.

1 Introduction

Learning plays a crucial role in most neural-network applications. Through learning the network weights are adapted to meet the requirements of the environment. Usually, the designer has access to a finite number of examples from this environment: the training set. A popular learning strategy is online learning: at each learning step one of the patterns is drawn from the training set and presented to the network, leading to a learning step of the form

Δw(n) = w(n + 1) − w(n) = η f(w(n), x(n)) ,   (1)

with w(n) the weight vector at iteration step n, η the learning parameter, x(n) the presented training pattern, and f(·, ·)...
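The update rule in Eq. (1) can be sketched in a toy form that contrasts purely cyclic with randomized presentation order. This is not the paper's experiment: the scalar rule f(w, x) = x − w (which drives w toward the pattern mean), the learning parameter value, and the epoch count are all hypothetical choices for illustration.

```python
import random

def online_learn(patterns, eta=0.1, epochs=50, order="cyclic", seed=0):
    """Online learning w(n+1) = w(n) + eta * f(w(n), x(n)),
    with the toy rule f(w, x) = x - w (w drifts toward the pattern mean)."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        xs = list(patterns)
        if order == "random":
            rng.shuffle(xs)     # randomized presentation order
        for x in xs:            # purely cyclic when order == "cyclic"
            w += eta * (x - w)  # Delta w(n) = eta * f(w(n), x(n))
    return w

w_cyc = online_learn([1.0, 2.0, 3.0], order="cyclic")
w_rnd = online_learn([1.0, 2.0, 3.0], order="random")
```

Both schedules drive w toward the pattern mean of 2.0, but the cyclic schedule converges to a small periodic orbit around it whose offset depends on the fixed within-cycle order, which is the kind of order-dependent effect the abstract quantifies.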