Results 1–10 of 36
On the limited memory BFGS method for large scale optimization
 Mathematical Programming
, 1989
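The limited-memory method this entry refers to is usually realized via the two-loop recursion, which applies an implicit inverse-Hessian approximation built from only the last m curvature pairs. A minimal pure-Python sketch follows (illustrative only, not the paper's implementation; the quadratic test problem and the Armijo line-search constants are assumptions):

```python
# Minimal pure-Python sketch of the limited-memory BFGS two-loop recursion
# (illustrative, not the paper's code).  Only the last m curvature pairs
# (s, y) are stored instead of a dense inverse-Hessian approximation.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lbfgs(f, grad, x, m=5, iters=100, tol=1e-18):
    history = []                          # (s, y, rho) triples, oldest first
    g = grad(x)
    for _ in range(iters):
        if dot(g, g) < tol:
            break
        # Two-loop recursion: multiply g by the implicit inverse Hessian.
        q = list(g)
        alphas = []
        for s, y, rho in reversed(history):
            a = rho * dot(s, q)
            alphas.append(a)
            q = [qi - a * yi for qi, yi in zip(q, y)]
        if history:
            s_last, y_last, _ = history[-1]
            gamma = dot(s_last, y_last) / dot(y_last, y_last)
        else:
            gamma = 1.0                   # initial Hessian scaling
        r = [gamma * qi for qi in q]
        for (s, y, rho), a in zip(history, reversed(alphas)):
            b_ = rho * dot(y, r)
            r = [ri + si * (a - b_) for ri, si in zip(r, s)]
        d = [-ri for ri in r]             # quasi-Newton search direction
        # Backtracking (Armijo) line search.
        t, fx, gd = 1.0, f(x), dot(g, d)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gd:
            t *= 0.5
        x_new = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        if dot(s, y) > 1e-12:             # keep only safely curved pairs
            history.append((s, y, 1.0 / dot(s, y)))
            history = history[-m:]        # limited memory: last m pairs
        x, g = x_new, g_new
    return x

# Hypothetical test problem: minimize 0.5 x^T A x - b^T x.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
f = lambda x: 0.5 * dot(x, [dot(row, x) for row in A]) - dot(b, x)
grad = lambda x: [dot(row, x) - bi for row, bi in zip(A, b)]
x_star = lbfgs(f, grad, [0.0, 0.0])       # exact minimizer is (1/11, 7/11)
```

The point of the limited history is that storage and per-iteration cost stay linear in the problem dimension, which is what makes the method suitable for large-scale optimization.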
Supervised versus multiple instance learning: An empirical comparison
 Proceedings of the 22nd International Conference on Machine Learning (ICML 2005)
, 2005
Abstract

Cited by 64 (3 self)
We empirically study the relationship between supervised and multiple instance (MI) learning. Algorithms to learn various concepts have been adapted to the MI representation. However, it is also known that concepts that are PAC-learnable with one-sided noise can be learned from MI data. A relevant question then is: how well do supervised learners do on MI data? We attempt to answer this question by looking at a cross-section of MI data sets from various domains coupled with a number of learning algorithms including Diverse Density, Logistic Regression, nonlinear Support Vector Machines and FOIL. We consider a supervised and an MI version of each learner. Several interesting conclusions emerge from our work: (1) no MI algorithm is superior across all tested domains, (2) some MI algorithms are consistently superior to their supervised counterparts, (3) using high false-positive costs can improve a supervised learner’s performance in MI domains, and (4) in several domains, a supervised algorithm is superior to any MI algorithm we tested.
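The supervised baseline the abstract compares against can be sketched concretely: give every instance its bag's label, train any ordinary single-instance learner, then call a bag positive when any of its instances is predicted positive. The toy threshold "learner" and the data below are hypothetical stand-ins (the paper uses real learners such as Logistic Regression and SVMs):

```python
# Illustrative sketch (not the paper's code): applying an ordinary supervised
# learner to multiple-instance data by giving every instance its bag's label,
# then calling a bag positive when any of its instances is predicted positive.

def instance_dataset(bags, labels):
    """Flatten MI data: each instance inherits the label of its bag."""
    return [(x, y) for bag, y in zip(bags, labels) for x in bag]

def fit_threshold(pairs):
    """Toy 1-D 'supervised learner': pick the threshold with best accuracy."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(x for x, _ in pairs):
        acc = sum((x > t) == (y == 1) for x, y in pairs) / len(pairs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict_bag(bag, t):
    """MI prediction rule: a bag is positive iff some instance exceeds t."""
    return 1 if any(x > t for x in bag) else 0

# Toy data: a bag is positive exactly when it contains an instance > 2.5.
bags = [[0.1, 0.4], [0.2, 3.0], [2.6, 0.3], [1.0, 2.0]]
labels = [0, 1, 1, 0]
t = fit_threshold(instance_dataset(bags, labels))
preds = [predict_bag(b, t) for b in bags]
```

Note that the flattened instance labels are noisy (negative instances inside positive bags get label 1), which is exactly the one-sided noise the abstract alludes to.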
A survey of nonlinear conjugate gradient methods
 Pacific Journal of Optimization
, 2006
Nonlinear optimal control via occupation measures and LMI relaxations
 SIAM Journal on Control and Optimization
, 2008
Abstract

Cited by 47 (24 self)
Abstract. We consider the class of nonlinear optimal control problems (OCP) with polynomial data, i.e., the differential equation, state and control constraints and cost are all described by polynomials, and more generally OCPs with smooth data. In addition, state constraints as well as state and/or action constraints are allowed. We provide a simple hierarchy of LMI (linear matrix inequality) relaxations whose optimal values form a nondecreasing sequence of lower bounds on the optimal value. Under some convexity assumptions, the sequence converges to the optimal value of the OCP. Preliminary results show that good approximations are obtained with few moments.
Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent
 ACM Trans. Math. Softw
, 2006
Abstract

Cited by 25 (3 self)
Recently, a new nonlinear conjugate gradient scheme was developed which satisfies the descent condition g_k^T d_k ≤ −(7/8)‖g_k‖^2 and which is globally convergent whenever the line search fulfills the Wolfe conditions. This article studies the convergence behavior of the algorithm; extensive numerical tests and comparisons with other methods for large-scale unconstrained optimization are given.
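The quoted descent condition is easy to check numerically. In classical linear CG, the identity r_k^T p_k = ‖r_k‖^2 gives g_k^T d_k = −‖g_k‖^2, which meets the −(7/8)‖g_k‖^2 bound with room to spare; a small sketch (illustrative linear CG on a hypothetical SPD system, not the CG_DESCENT algorithm itself):

```python
# Illustrative linear conjugate gradient on a small SPD system (not the
# CG_DESCENT code): at every step r_k^T p_k = ||r_k||^2, so the descent
# quantity g_k^T d_k = -||g_k||^2 meets the -(7/8)||g_k||^2 bound easily.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def cg(A, b, x0, tol=1e-20, maxit=50):
    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]   # r = -gradient
    p = list(r)
    for _ in range(maxit):
        rr = dot(r, r)
        if rr < tol:
            break
        # Descent condition from the abstract, with g = -r and d = p:
        assert -dot(r, p) <= -0.875 * rr + 1e-12
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = dot(r, r) / rr
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]                # hypothetical SPD test matrix
b = [1.0, 2.0]
sol = cg(A, b, [0.0, 0.0])                  # exact solution is (1/11, 7/11)
```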
Fast Secant Methods for the Iterative Solution of Large Nonsymmetric Linear Systems
 Impact of Computing in Science and Engineering
, 1990
Abstract

Cited by 23 (4 self)
A family of secant methods based on general rank-1 updates has been revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's "good" and "bad" update techniques play a special role, but should be associated with two different line search principles. For Broyden's "bad" update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well-known algorithms like GMRES. Broyden's "good" update technique, however, is shown to be naturally linked with a minimum "next correction" principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized PDEs of convection-diffusion type in 2D with internal layers give a first impression of the possible power of the derived "good" Broyden variant.
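In one dimension the rank-1 secant "Jacobian" is the same for the "good" and "bad" Broyden updates, and the iteration collapses to the classical secant method. A toy scalar illustration (not the paper's large-system solvers):

```python
# Toy illustration: in one dimension, Broyden's "good" and "bad" rank-1
# updates both reduce to the classical secant method, with the Jacobian
# approximated by (F(x_k) - F(x_{k-1})) / (x_k - x_{k-1}).

def secant(F, x0, x1, tol=1e-12, maxit=50):
    f0, f1 = F(x0), F(x1)
    for _ in range(maxit):
        if abs(f1) < tol:
            break
        J = (f1 - f0) / (x1 - x0)       # rank-1 secant "Jacobian"
        x0, f0 = x1, f1
        x1 = x1 - f1 / J                # quasi-Newton step
        f1 = F(x1)
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)   # converges to sqrt(2)
```

In higher dimensions the two updates genuinely differ, which is where the distinct minimum-residual and minimum-"next correction" line search principles described above come into play.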
A wellposed shooting algorithm for optimal control problems with singular arcs
, 2011
Abstract

Cited by 20 (7 self)
In this article we establish for the first time the well-posedness of the shooting algorithm applied to optimal control problems for which all control variables enter linearly in the Hamiltonian. We start by investigating the case having only initial-final state constraints and a free control variable, and afterwards we deal with control bounds. The shooting algorithm is well-posed if the derivative of its associated shooting function is injective at the optimal solution. The main result of this paper is a sufficient condition for this injectivity that is very close to the second-order necessary condition. We prove that this sufficient condition guarantees the stability of the optimal solution under small perturbations and the well-posedness of the shooting algorithm for the perturbed problem. We present numerical tests that validate our method.
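The shooting idea itself can be sketched on a toy boundary value problem: guess the missing initial condition, integrate forward, and solve for the guess that hits the terminal condition. This is an illustrative single-shooting sketch under assumed data (the paper's algorithm addresses optimal control with singular arcs, a much harder setting):

```python
# Toy single-shooting sketch (illustrative only; the paper's algorithm
# handles optimal control problems with singular arcs, not this BVP).
# BVP: y'' = 6x, y(0) = 0, y(1) = 1; exact solution y = x^3.

def integrate(s, n=100):
    """RK4 from x=0 to x=1 for the system (y, y') with y(0)=0, y'(0)=s."""
    h, x, u, v = 1.0 / n, 0.0, 0.0, s
    def f(x, u, v):
        return v, 6.0 * x                  # (y', y'')
    for _ in range(n):
        k1u, k1v = f(x, u, v)
        k2u, k2v = f(x + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = f(x + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = f(x + h, u + h*k3u, v + h*k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return u                               # y(1) for this shooting guess

def shoot(target=1.0, s0=-1.0, s1=1.0, tol=1e-10):
    """Secant iteration on the shooting function s -> y_s(1) - target."""
    m0, m1 = integrate(s0) - target, integrate(s1) - target
    while abs(m1) > tol:
        s0, s1, m0 = s1, s1 - m1 * (s1 - s0) / (m1 - m0), m1
        m1 = integrate(s1) - target
    return s1

s = shoot()   # the exact solution y = x^3 has y'(0) = 0
```

Well-posedness in the paper's sense corresponds to this root-finding step being locally solvable: the derivative of the shooting function must be injective at the solution.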
An efficient hybrid conjugate gradient method for unconstrained optimization
 Math. Comp
, 2001
Abstract

Cited by 18 (1 self)
Abstract. Conjugate gradient methods are an important class of methods for unconstrained optimization, especially for large-scale problems. Recently, they have been much studied. This paper proposes a three-parameter family of hybrid conjugate gradient methods. Two important features of the family are that (i) it can avoid the propensity of small steps, namely, if a small step is generated away from the solution point, the next search direction will be close to the negative gradient direction; and (ii) its descent property and global convergence are likely to be achieved provided that the line search satisfies the Wolfe conditions. Some numerical results with the family are also presented.
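The Wolfe conditions invoked here are a checkable predicate on a candidate step length: sufficient decrease (Armijo) plus a curvature condition ruling out overly small steps. A 1-D illustration (the constants c1 = 1e-4 and c2 = 0.9 are common defaults, not values prescribed by the paper):

```python
# Illustrative check of the strong Wolfe conditions for a 1-D problem
# (c1, c2 are common default constants, assumptions of this sketch).

def strong_wolfe(f, g, x, d, a, c1=1e-4, c2=0.9):
    armijo = f(x + a * d) <= f(x) + c1 * a * g(x) * d     # sufficient decrease
    curvature = abs(g(x + a * d) * d) <= c2 * abs(g(x) * d)
    return armijo and curvature

f = lambda x: x * x          # simple quadratic
g = lambda x: 2.0 * x        # its derivative
# From x = -1 along the descent direction d = +1, the exact-minimizing step
# a = 1 satisfies both conditions; a tiny step passes Armijo but fails the
# curvature condition, which is what rules out vanishingly small steps.
ok_full = strong_wolfe(f, g, -1.0, 1.0, 1.0)
ok_tiny = strong_wolfe(f, g, -1.0, 1.0, 0.01)
```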
On the molecular pathology of neurodegeneration in IMPDH1-based retinitis pigmentosa. Hum Mol Genet
Abstract

Cited by 9 (2 self)
Retinitis pigmentosa (RP), the hereditary degenerative disease of the photoreceptor neurons of the retina, probably represents the most prevalent cause of registered blindness amongst those of working age in developed countries. Mutations within the gene encoding inosine monophosphate dehydrogenase 1 (IMPDH1), the widely expressed rate-limiting enzyme of the de novo pathway of guanine nucleotide biosynthesis, have recently been shown to cause the RP10 form of autosomal dominant RP. We examined the expression of IMPDH1, IMPDH2 and HPRT transcripts, encoding enzymes of the de novo and salvage pathways of guanine nucleotide biosynthesis, respectively, in retinal sections of mice; the data indicate that the bulk of GTP within photoreceptors is generated by IMPDH1. Impdh1-null mice are shown here to display a slowly progressive form of retinal degeneration in which visual transduction, analysed by electroretinographic wave functions, becomes gradually compromised, although at 12 months of age most photoreceptors remain structurally intact. In contrast, the human form of RP caused by mutations within the IMPDH1 gene is a severe autosomal dominant degenerative retinopathy in those families that have been examined to date. Expression of mutant IMPDH1 proteins in bacterial and mammalian cells, together with ...
A three-parameter family of nonlinear conjugate gradient methods
 Mathematics of Computation
, 2001
Abstract

Cited by 7 (0 self)
Abstract. In this paper, we propose a three-parameter family of conjugate gradient methods for unconstrained optimization. The three-parameter family of methods not only includes the already existing six practical nonlinear conjugate gradient methods, but subsumes some other families of nonlinear conjugate gradient methods as its subfamilies. With Powell’s restart criterion, the three-parameter family of methods with the strong Wolfe line search is shown to ensure the descent property of each search direction. Some general convergence results are also established for the three-parameter family of methods. This paper can also be regarded as a brief review on nonlinear conjugate gradient methods.
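Two of the classical methods subsumed by such parametrized families are Fletcher–Reeves and Polak–Ribière, which differ only in how the update parameter β is computed from successive gradients. An illustrative sketch of the two standard formulas (not code from the paper):

```python
# Two classical conjugate-gradient update parameters subsumed by such
# parametrized families: Fletcher-Reeves and Polak-Ribiere (illustrative).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def beta_fr(g_new, g_old):
    """Fletcher-Reeves: ||g_{k+1}||^2 / ||g_k||^2."""
    return dot(g_new, g_new) / dot(g_old, g_old)

def beta_pr(g_new, g_old):
    """Polak-Ribiere: g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2."""
    diff = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, diff) / dot(g_old, g_old)

g_old, g_new = [1.0, 0.0], [1.0, 1.0]   # hypothetical successive gradients
b_fr = beta_fr(g_new, g_old)
b_pr = beta_pr(g_new, g_old)
```

Each β defines the new search direction d_{k+1} = −g_{k+1} + β d_k; a parametrized family interpolates between such choices with free parameters.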