Results 1–10 of 17,118
Optimal Brain Damage, 1990
Cited by 510 (5 self)
"... We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved sp ..."
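The snippet describes removing unimportant weights from a trained network. A minimal sketch of the idea, assuming a diagonal-Hessian saliency of the kind Optimal Brain Damage uses (the input values below are illustrative, not from the paper):

```python
# Sketch of saliency-based weight pruning in the spirit of Optimal Brain
# Damage. `weights` and `hessian_diag` are illustrative stand-ins; in the
# paper the diagonal Hessian terms are accumulated during backpropagation.

def obd_saliencies(weights, hessian_diag):
    """Saliency s_k = h_kk * w_k**2 / 2 estimates the increase in training
    error caused by removing weight w_k (diagonal Hessian approximation)."""
    return [h * w * w / 2.0 for w, h in zip(weights, hessian_diag)]

def prune_smallest(weights, hessian_diag, n_remove):
    """Zero out the n_remove weights with the lowest saliency."""
    s = obd_saliencies(weights, hessian_diag)
    order = sorted(range(len(weights)), key=lambda k: s[k])
    pruned = list(weights)
    for k in order[:n_remove]:
        pruned[k] = 0.0
    return pruned

weights = [0.9, -0.05, 0.4, 0.02]
hessian_diag = [1.0, 2.0, 0.5, 3.0]
print(prune_smallest(weights, hessian_diag, 2))  # -> [0.9, 0.0, 0.4, 0.0]
```

Note that saliency, not raw magnitude, decides which weights go: a small weight with large curvature can matter more than a larger one with flat curvature.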
The Performance Comparison of Optimally Weighted ...
"... The performance comparison of the optimally weighted LS estimate and the linear minimum variance estimate for a linear model with random input is presented. In this case the optimally weighted LS estimate is no longer a linear estimate of the parameter given input and observation, while the linear minimum va ..."
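For reference, a minimal weighted least-squares sketch for a scalar linear model y_i = θ·x_i + e_i with known noise variances; weighting residuals by the inverse variances gives the optimally weighted LS estimate. The data and variances below are illustrative:

```python
# Minimal weighted least-squares sketch for a scalar linear model
# y_i = theta * x_i + e_i with known noise variances var_i. Weighting each
# squared residual by 1/var_i yields the optimally weighted LS estimate;
# with equal variances it reduces to ordinary LS. Data are illustrative.

def weighted_ls(xs, ys, variances):
    w = [1.0 / v for v in variances]
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    den = sum(wi * xi * xi for wi, xi in zip(w, xs))
    return num / den

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.0]          # roughly theta = 2
variances = [0.1, 0.1, 10.0]  # the last observation is much noisier
print(weighted_ls(xs, ys, variances))
```

The noisy third observation barely moves the estimate because its weight 1/10 is tiny next to the others' 1/0.1.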
Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms
Evolutionary Computation, 1994
Cited by 539 (5 self)
"... In trying to solve multiobjective optimization problems, many traditional methods scalarize the objective vector into a single objective. In those cases, the obtained solution is highly sensitive to the weight vector used in the scalarization process and demands that the user have knowledge about t ..."
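The alternative to scalarization that this paper popularized is sorting solutions by Pareto dominance. A small sketch of the dominance test and the first nondominated front, for minimization (the objective vectors are illustrative):

```python
# Sketch of Pareto dominance and the first nondominated front for a
# minimization problem, the core operation behind nondominated sorting.
# The example objective vectors are illustrative.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(points):
    """Points not dominated by any other point form the first front."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
print(first_front(pts))  # -> [(1, 5), (2, 3), (4, 1)]
```

Repeatedly removing the current front and recomputing it on the remainder yields the full nondominated sorting, with no weight vector involved.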
Optimal weighted recombination
Foundations of Genetic Algorithms 8, 2005
Cited by 9 (1 self)
"... Weighted recombination is a means for improving the local search performance of evolution strategies. It aims to make effective use of the information available, without significantly increasing computational costs per time step. In this paper, the potential speedup resulting from using rank-based weighted recombination is investigated. Optimal weights are computed for the sphere model, and comparisons with the performance of strategies that do not make use of weighted recombination are presented. It is seen that, unlike strategies that rely on unweighted recombination and truncation ..."
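Rank-based weighted recombination forms the next search point as a weighted mean of the best-ranked parents. A sketch under assumed weights (the paper derives optimal weights for the sphere model; the fitness function and weight values here are only illustrative):

```python
# Sketch of rank-based weighted recombination: the new search point is a
# weighted mean of the mu best parents, with larger weights for better
# ranks. The weights here are illustrative, not the optimal weights
# derived in the paper.

def weighted_recombination(parents, fitness, weights):
    ranked = sorted(parents, key=fitness)   # best (lowest fitness) first
    mu = len(weights)
    dim = len(parents[0])
    total = sum(weights)
    return [sum(w * p[d] for w, p in zip(weights, ranked[:mu])) / total
            for d in range(dim)]

def sphere(x):
    """Sphere model fitness: sum of squared coordinates."""
    return sum(xi * xi for xi in x)

parents = [[2.0, 0.0], [0.0, 1.0], [4.0, 4.0]]
print(weighted_recombination(parents, sphere, [2.0, 1.0]))
```

Unweighted recombination is the special case of equal weights; rank-based weights pull the recombined point more strongly toward the better parents.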
Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms, 1987
Cited by 739 (18 self)
"... In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges ..."
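The classic beneficiary of these bounds is Dijkstra's algorithm. A sketch with Python's built-in binary heap, which gives O(m log n); with a Fibonacci heap the O(1) amortized decrease-key improves this to O(m + n log n). The graph below is illustrative:

```python
import heapq

# Dijkstra's algorithm with a binary heap (Python's heapq). Stale queue
# entries are skipped instead of performing a true decrease-key, which a
# Fibonacci heap would support in O(1) amortized time.

def dijkstra(graph, src):
    """graph maps each vertex to a list of (neighbor, edge_weight) pairs."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale entry, stand-in for decrease-key
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

graph = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(graph, "a"))  # -> {'a': 0, 'b': 2, 'c': 3}
```

The lazy-deletion trick keeps the code short at the cost of duplicate queue entries; the paper's point is that a heap with cheap decrease-key removes that overhead asymptotically.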
Response Surfaces for Optimal Weight of ..., 2000
"... Two levels of fidelity are used for minimum weight design of a composite blade-stiffened panel subject to crack propagation constraints. The low fidelity approach makes use of an equivalent strain constraint calculated by a closed form solution for the stress intensity factor. The high fidelity appro ..."
Internet traffic engineering by optimizing OSPF weights
In Proc. IEEE INFOCOM, 2000
Cited by 403 (13 self)
"... Open Shortest Path First (OSPF) is the most commonly used intra-domain Internet routing protocol. Traffic flow is routed along shortest paths, splitting flow at nodes where several outgoing links are on shortest paths to the destination. The weights of the links, and thereby the shortest pa ..."
"... to its capacity. Our starting point was a proposed AT&T WorldNet backbone with demands projected from previous measurements. The desire was to optimize the weight setting based on the projected demands. We showed that optimizing the weight settings for a given set of demands is NP-hard, so we ..."
Neural network ensembles, cross validation, and active learning
Neural Information Processing Systems 7, 1995
Cited by 479 (6 self)
"... Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it qua ..."
"... the optimal weights of the ensemble members using unlabeled data. By a generalization of query by committee, it is finally shown how the ambiguity can be used to select new training data to be labeled in an active learning scheme."
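The ambiguity defined in the snippet can be computed without labels: it is the weighted variance of member outputs around the ensemble output, averaged over unlabeled inputs. A sketch with illustrative outputs and weights:

```python
# Sketch of the ensemble ambiguity from the snippet: the weighted variance
# of member outputs around the ensemble output, averaged over unlabeled
# inputs. It needs no target labels. Outputs and weights are illustrative.

def ambiguity(member_outputs, weights):
    """member_outputs[i][x] is member i's output on unlabeled input x;
    weights are the ensemble combination weights (assumed to sum to 1)."""
    n_inputs = len(member_outputs[0])
    total = 0.0
    for x in range(n_inputs):
        mean = sum(w * m[x] for w, m in zip(weights, member_outputs))
        total += sum(w * (m[x] - mean) ** 2
                     for w, m in zip(weights, member_outputs))
    return total / n_inputs

outputs = [[1.0, 2.0], [3.0, 2.0]]  # two members, two unlabeled inputs
print(ambiguity(outputs, [0.5, 0.5]))  # -> 0.5
```

Inputs where the members disagree contribute most, which is exactly why the paper can use the same quantity to pick points worth labeling in active learning.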
Active Learning with Statistical Models, 1995
Cited by 679 (10 self)
"... For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statist ..."
Markov Logic Networks
Machine Learning, 2006
Cited by 816 (39 self)
"... We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the ..."
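In an MLN, a world's probability is proportional to exp of the weighted count of satisfied formula groundings, P(x) ∝ exp(Σᵢ wᵢ nᵢ(x)). A toy sketch with two ground atoms and a single weighted formula (the formula and weight value are illustrative, not from the paper):

```python
import itertools
import math

# Toy Markov logic network sketch: worlds are truth assignments to two
# ground atoms (a, b), and a single formula "a implies b" carries weight w.
# A world's probability is proportional to exp(w * n(world)), where n
# counts true groundings of the formula. The weight value is illustrative.

def world_probs(w):
    worlds = list(itertools.product([False, True], repeat=2))

    def n(world):
        a, b = world
        return 1 if (not a) or b else 0  # truth value of a -> b

    scores = [math.exp(w * n(world)) for world in worlds]
    z = sum(scores)                      # partition function
    return {world: s / z for world, s in zip(worlds, scores)}

probs = world_probs(1.5)
# worlds satisfying a -> b share the higher probability mass
print(probs)
```

Note that the world violating the formula, (a=True, b=False), keeps nonzero probability: weights soften first-order constraints rather than making them hard, which is the representational point of MLNs.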