Results 1–10 of 172
Probabilistic Reasoning in Terminological Logics
, 1994
"... In this paper a probabilistic extensions for terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts and statements expressing uncertain knowledge about a specific object. The usua ..."
Abstract

Cited by 75 (5 self)
 Add to MetaCart
In this paper a probabilistic extension for terminological knowledge representation languages is defined. Two kinds of probabilistic statements are introduced: statements about conditional probabilities between concepts and statements expressing uncertain knowledge about a specific object. The usual model-theoretic semantics for terminological logics are extended to define interpretations for the resulting probabilistic language. It is our main objective to find an adequate modelling of the way the two kinds of probabilistic knowledge are combined in commonsense inferences of probabilistic statements. Cross-entropy minimization is a technique that turns out to be very well suited for achieving this end. 1 INTRODUCTION Terminological knowledge representation languages (concept languages, terminological logics) are used to describe hierarchies of concepts. While the expressive power of the various languages that have been defined (e.g. KL-ONE [BS85], ALC [SSS91]) varies greatly in that ...
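A minimal sketch of the cross-entropy minimization idea the abstract mentions: given a prior distribution (generic conditional-probability knowledge) and an asserted probability for an event about a specific object, the closest distribution in KL divergence simply rescales the prior inside and outside the event. The concept names and numbers below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def min_cross_entropy(prior, in_event, target):
    """Distribution q closest to `prior` in KL divergence subject to
    q(event) == target. For a single event constraint, the minimizer
    rescales the prior's mass inside and outside the event."""
    prior = np.asarray(prior, dtype=float)
    mass = prior[in_event].sum()
    q = prior.copy()
    q[in_event] *= target / mass
    q[~in_event] *= (1.0 - target) / (1.0 - mass)
    return q

# Prior belief over four exhaustive "concepts"; the event covers the first two.
prior = np.array([0.4, 0.2, 0.3, 0.1])
event = np.array([True, True, False, False])

# Asserted probability of the event for a specific object.
q = min_cross_entropy(prior, event, target=0.9)
print(q, q.sum(), q[event].sum())
```

With multiple simultaneous constraints the minimizer is no longer a simple rescaling and is typically found by iterative scaling; this single-constraint case just shows the shape of the computation.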
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
"... An algorithm for solving largescale nonlinear ' programs with linear constraints is presented. The method combines efficient sparsematrix techniques as in the revised simplex method with stable quasiNewton methods for handling the nonlinearities. A generalpurpose production code (MINOS) is descr ..."
Abstract

Cited by 75 (11 self)
 Add to MetaCart
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
GLOBAL CONVERGENCE PROPERTIES OF CONJUGATE GRADIENT METHODS FOR OPTIMIZATION
, 1992
"... This paper explores the convergence ofnonlinear conjugate gradient methods without restarts, and with practical line searches. The analysis covers two classes ofmethods that are globally convergent on smooth, nonconvex functions. Some properties of the FletcherReeves method play an important role ..."
Abstract

Cited by 69 (2 self)
 Add to MetaCart
This paper explores the convergence of nonlinear conjugate gradient methods without restarts, and with practical line searches. The analysis covers two classes of methods that are globally convergent on smooth, nonconvex functions. Some properties of the Fletcher-Reeves method play an important role in the first family, whereas the second family shares an important property with the Polak-Ribière method. Numerical experiments are presented.
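A hedged sketch of nonlinear conjugate gradient with the two β formulas the paper contrasts, β_FR = ‖g_{k+1}‖²/‖g_k‖² (Fletcher-Reeves) and β_PR = g_{k+1}ᵀ(g_{k+1} − g_k)/‖g_k‖² (Polak-Ribière). The backtracking line search and restart safeguard below stand in for the paper's "practical line searches" and are assumptions of this sketch, not the authors' exact procedure.

```python
import numpy as np

def cg_minimize(f, grad, x0, beta_rule="FR", iters=200):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if g @ d >= 0:          # safeguard: restart with steepest descent
            d = -g
        t = 1.0                 # backtracking (Armijo) line search along d
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < 1e-10:
            return x_new
        if beta_rule == "FR":   # Fletcher-Reeves
            beta = (g_new @ g_new) / (g @ g)
        else:                   # Polak-Ribiere
            beta = g_new @ (g_new - g) / (g @ g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Convex quadratic test problem: f(x) = 1/2 x^T A x - b^T x, minimizer A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

for rule in ("FR", "PR"):
    print(rule, cg_minimize(f, grad, np.zeros(2), beta_rule=rule))
```

On this 2×2 quadratic both rules converge to A⁻¹b = (0.2, 0.4); the behavioral differences the paper analyzes only show up on nonconvex functions with inexact line searches.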
The NEWUOA software for unconstrained optimization with derivatives
, 2004
"... Abstract: The NEWUOA software seeks the least value of a function F(x), x∈R n, when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model Q ≈ F being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the ..."
Abstract

Cited by 44 (0 self)
 Add to MetaCart
The NEWUOA software seeks the least value of a function F(x), x ∈ Rⁿ, when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model Q ≈ F being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the variables. When Q is revised, the new Q interpolates F at m points, the value m = 2n+1 being recommended. The remaining freedom in the new Q is taken up by minimizing the Frobenius norm of the change to ∇²Q. Only one interpolation point is altered on each iteration. Thus, except for occasional origin shifts, the amount of work per iteration is only of order (m+n)², which allows n to be quite large. Many questions were addressed during the development of NEWUOA, for the achievement of good accuracy and robustness. They include the choice of the initial quadratic model, the need to maintain enough linear independence in the interpolation conditions in the presence of computer rounding errors, and the stability of the updating of certain matrices that allow the fast revision of Q. Details are given of the techniques that answer all the questions that occurred. The software was tried on several test problems. Numerical results for nine of them are reported and discussed, in order to demonstrate the performance of the software for up to 160 variables.
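A hedged sketch of the model-based trust-region structure the abstract describes: fit a quadratic model to sampled function values, then take a step restricted to a trust radius. NEWUOA itself interpolates at only m = 2n+1 points and fills the remaining freedom by a minimum-Frobenius-norm condition on the change to ∇²Q; for simplicity this sketch fully determines the model with enough samples and takes only a Cauchy (steepest-descent) step, so it illustrates the shape of the method, not Powell's actual algorithm. The toy objective is made up.

```python
import numpy as np

def fit_quadratic(X, F):
    """Least-squares fit of Q(x) = c + g.x + 0.5 x^T H x with H symmetric."""
    n = X.shape[1]
    rows = []
    for x in X:
        feats = [1.0, *x]                       # constant and linear terms
        for i in range(n):                      # upper-triangle Hessian terms
            for j in range(i, n):
                feats.append((0.5 if i == j else 1.0) * x[i] * x[j])
        rows.append(feats)
    coef, *_ = np.linalg.lstsq(np.array(rows), F, rcond=None)
    c, g = coef[0], coef[1:1 + n]
    H = np.zeros((n, n))
    k = 1 + n
    for i in range(n):
        for j in range(i, n):
            H[i, j] = H[j, i] = coef[k]
            k += 1
    return c, g, H

def cauchy_step(g, H, radius):
    """Minimize the model along -g, clipped to the trust radius."""
    gn = np.linalg.norm(g)
    if gn == 0.0:
        return np.zeros_like(g)
    d = -g / gn
    curv = d @ H @ d
    t = radius if curv <= 0 else min(radius, gn / curv)
    return t * d

# Toy objective, sampled around the origin on an interpolation-friendly set.
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 3.0
X = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0],
              [0.0, 1.0], [0.0, -1.0], [1.0, 1.0]])
F = np.array([f(x) for x in X])
c, g, H = fit_quadratic(X, F)
step = cauchy_step(g, H, radius=0.5)
print(f(np.zeros(2)), f(step))   # the trust-region step reduces f
```

Because the toy objective is itself quadratic, the fitted model recovers its gradient and Hessian exactly, and the clipped step is guaranteed to decrease f.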
Comparative study of stock trend prediction using time delay, recurrent and probabilistic neural networks
 IEEE TRANSACTIONS ON NEURAL NETWORKS
, 1998
"... Three networks are compared for low false alarm stock trend predictions. Shortterm trends, particularly attractive for neural network analysis, can be used profitably in scenarios such as option trading, but only with significant risk. Therefore, we focus on limiting false alarms, which improves ..."
Abstract

Cited by 36 (0 self)
 Add to MetaCart
Three networks are compared for low-false-alarm stock trend predictions. Short-term trends, particularly attractive for neural network analysis, can be used profitably in scenarios such as option trading, but only with significant risk. Therefore, we focus on limiting false alarms, which improves the risk/reward ratio by preventing losses. To predict stock trends, we exploit time delay, recurrent, and probabilistic neural networks (TDNN, RNN, and PNN, respectively), utilizing conjugate gradient and multi-stream extended Kalman filter training for TDNN and RNN. We also discuss different predictability analysis techniques and perform an analysis of predictability based on a history of daily closing price. Our results indicate that all the networks are feasible, the primary preference being one of convenience.
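A minimal sketch of the time-delay idea behind a TDNN: each training input is a sliding window of the d most recent daily closing prices, and the target is the direction of the next move. The prices and window length below are made up for illustration and are not from the paper.

```python
import numpy as np

def make_windows(prices, delay):
    """Turn a price series into (window, next-move-direction) pairs."""
    X = np.array([prices[i:i + delay] for i in range(len(prices) - delay)])
    y = (prices[delay:] > prices[delay - 1:-1]).astype(int)  # 1 = up-move
    return X, y

prices = np.array([10.0, 10.2, 10.1, 10.4, 10.3, 10.5, 10.6])
X, y = make_windows(prices, delay=3)
print(X.shape, y)   # 4 windows of length 3, with up/down labels
```

Any of the three network types could then be trained on (X, y); the paper's false-alarm focus would correspond to penalizing predicted up-moves that turn out wrong.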
Comparison of support vector machine and artificial neural network systems for drug/nondrug classification
 J. Chem. Inf. Comput. Sci. 2003
"... Support vector machine (SVM) and artificial neural network (ANN) systems were applied to a drug/nondrug classification problem as an example of binary decision problems in earlyphase virtual compound filtering and screening. The results indicate that solutions obtained by SVM training seem to be mo ..."
Abstract

Cited by 32 (1 self)
 Add to MetaCart
Support vector machine (SVM) and artificial neural network (ANN) systems were applied to a drug/nondrug classification problem as an example of binary decision problems in early-phase virtual compound filtering and screening. The results indicate that solutions obtained by SVM training seem to be more robust, with a smaller standard error, compared to ANN training. Generally, the SVM classifier yielded slightly higher prediction accuracy than ANN, irrespective of the type of descriptors used for molecule encoding, the size of the training data sets, and the algorithm employed for neural network training. The performance was compared using various different descriptor sets and descriptor combinations based on the 120 standard Ghose-Crippen fragment descriptors, a wide range of 180 different properties and physicochemical descriptors from the Molecular Operating Environment (MOE) package, and 225 topological pharmacophore (CATS) descriptors. For the complete set of 525 descriptors, cross-validated classification by SVM yielded 82% correct predictions (Matthews cc = 0.63), whereas ANN reached 80% correct predictions (Matthews cc = 0.58). Although SVM outperformed the ANN classifiers with regard to overall prediction accuracy, both methods were shown to complement each other, as the sets of true positives, false positives (overprediction), true negatives, and false negatives (underprediction) produced by the two classifiers were not identical. The theory of SVM and ANN training is briefly reviewed.
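The Matthews correlation coefficient (cc) reported alongside percent accuracy above is computed from the four confusion-matrix counts the abstract names. A quick sketch, with hypothetical counts (not the paper's data):

```python
import math

def matthews_cc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical balanced test set of 1000 compounds.
print(round(matthews_cc(tp=410, tn=410, fp=90, fn=90), 2))
```

Unlike raw accuracy, the coefficient stays meaningful on imbalanced drug/nondrug sets, which is presumably why the study reports both.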
Evaluation of Pattern Classifiers for Fingerprint and OCR Applications
 Pattern Recognition
, 1993
"... In this paper we evaluate the classification accuracy of four statistical and three neural network classifiers for two image based pattern classification problems. These are fingerprint classification and optical character recognition (OCR) for isolated handprinted digits. The evaluation results rep ..."
Abstract

Cited by 31 (2 self)
 Add to MetaCart
In this paper we evaluate the classification accuracy of four statistical and three neural network classifiers for two image-based pattern classification problems. These are fingerprint classification and optical character recognition (OCR) for isolated handprinted digits. The evaluation results reported here should be useful for designers of practical systems for these two important commercial applications. For the OCR problem, the Karhunen-Loève (KL) transform of the images is used to generate the input feature set. Similarly, for the fingerprint problem, the KL transform of the ridge directions is used to generate the input feature set. The statistical classifiers used were Euclidean minimum distance, quadratic minimum distance, normal, and k-nearest neighbor. The neural network classifiers used were multilayer perceptron, radial basis function, and probabilistic. The OCR data consisted of 7,480 digit images for training and 23,140 digit images for testing. The fingerprint data co...
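A sketch of the Karhunen-Loève (KL) transform used above for feature extraction: project the data onto the leading eigenvectors of its covariance matrix. The random toy data stands in for digit images or ridge-direction fields; the dimensions are arbitrary.

```python
import numpy as np

def kl_transform(X, k):
    """Project centered data onto the top-k eigenvectors of its covariance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    top = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # top-k by variance
    return Xc @ top

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))     # 100 samples, 8 raw features
features = kl_transform(X, k=3)   # compressed 3-dimensional feature set
print(features.shape)
```

The resulting low-dimensional features would then feed any of the seven classifiers the paper compares.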
Limited-Memory Matrix Methods with Applications
, 1997
"... Abstract. The focus of this dissertation is on matrix decompositions that use a limited amount of computer memory � thereby allowing problems with a very large number of variables to be solved. Speci�cally � we will focus on two applications areas � optimization and information retrieval. We introdu ..."
Abstract

Cited by 30 (6 self)
 Add to MetaCart
The focus of this dissertation is on matrix decompositions that use a limited amount of computer memory, thereby allowing problems with a very large number of variables to be solved. Specifically, we will focus on two application areas: optimization and information retrieval. We introduce a general algebraic form for the matrix update in limited-memory quasi-Newton methods. Many well-known methods such as limited-memory Broyden Family methods satisfy the general form. We are able to prove several results about methods which satisfy the general form. In particular, we show that the only limited-memory Broyden Family method (using exact line searches) that is guaranteed to terminate within n iterations on an n-dimensional strictly convex quadratic is the limited-memory BFGS method. Furthermore, we are able to introduce several new variations on the limited-memory BFGS method that retain the quadratic termination property. We also have a new result that shows that full-memory Broyden Family methods (using exact line searches) that skip p updates to the quasi-Newton matrix will terminate in no more than n+p steps on an n-dimensional strictly convex quadratic. We propose several new variations on the limited-memory BFGS method
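A sketch of the limited-memory BFGS mechanism the dissertation studies: the standard two-loop recursion applies the inverse-Hessian approximation implicitly from the last few (s, y) correction pairs, never forming an n×n matrix. This is the textbook recursion, not the dissertation's general algebraic form or its variations.

```python
import numpy as np

def two_loop(grad, s_list, y_list):
    """Return H @ grad, where H is the L-BFGS inverse-Hessian approximation
    built from correction pairs s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                                   # standard H0 scaling
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return q
```

With a single stored pair the result already satisfies the secant condition H y = s, which is easy to check by passing y itself as the gradient; the memory and per-call cost grow only linearly with n and the number of stored pairs.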
Fast Training Algorithms For Multi-Layer Neural Nets
, 1993
"... Training a multilayer neural net by backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. This paper describes an algorithm which is much faster than backpropagation and for which it is not necessary to specify the number of hidden units in advance ..."
Abstract

Cited by 29 (0 self)
 Add to MetaCart
Training a multi-layer neural net by backpropagation is slow and requires arbitrary choices regarding the number of hidden units and layers. This paper describes an algorithm which is much faster than backpropagation and for which it is not necessary to specify the number of hidden units in advance. The relationship with other fast pattern recognition algorithms, such as algorithms based on k-d trees, is mentioned. The algorithm has been implemented and tested on artificial problems such as the parity problem and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using backpropagation, and the training process is much faster than backpropagation. Accuracy is comparable to that for the "nearest neighbour" algorithm, which is slower and requires more storage space. Comments: Only the Abstract is given here. The full paper ap...
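The "nearest neighbour" baseline the abstract compares against can be sketched as a brute-force 1-NN classifier; a k-d tree, as the paper mentions, would answer the same query in roughly logarithmic rather than linear time in low dimensions. The points and labels below are made up.

```python
import numpy as np

def nn_classify(train_X, train_y, x):
    """Label of the training point closest (Euclidean) to query x."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]

train_X = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
train_y = np.array([0, 0, 1])
print(nn_classify(train_X, train_y, np.array([4.5, 4.8])))
```

The storage cost the abstract notes is visible here: the classifier must retain every training point, whereas a trained net stores only its weights.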