Results 1 - 3 of 3
Learning Monotonic Transformations for Classification, 2007
Cited by 5 (2 self)

Abstract:
A discriminative method is proposed for learning monotonic transformations of the training data while jointly estimating a large-margin classifier. In many domains such as document classification, image histogram classification and gene microarray experiments, fixed monotonic transformations can be useful as a preprocessing step. However, most classifiers only explore these transformations through manual trial and error or via prior domain knowledge. The proposed method learns monotonic transformations automatically while training a large-margin classifier, without any prior knowledge of the domain. A monotonic piecewise linear function is learned that transforms data for subsequent processing by a linear hyperplane classifier. Two algorithmic implementations of the method are formalized. The first solves a convergent alternating sequence of quadratic and linear programs until it obtains a locally optimal solution. An improved algorithm is then derived using a convex semidefinite relaxation that overcomes initialization issues in the greedy optimization problem. The effectiveness of these learned transformations is demonstrated on synthetic problems, text data and image data.
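The joint-learning idea in this abstract can be sketched in a few lines: parameterize the monotonic piecewise linear transform as a nonnegative combination of ramp functions and alternate between updating the classifier and the transform. This is a minimal numpy sketch assuming hinge loss and simple projected gradient steps; the knot positions, learning rate, and update rule are illustrative assumptions, not the paper's actual QP/LP alternation or SDP relaxation.

```python
import numpy as np

def ramp_features(X, knots):
    # phi_k(x) = max(0, x - t_k); a nonnegative combination of these ramps
    # is a monotonically nondecreasing piecewise linear transform.
    return np.maximum(0.0, X[:, :, None] - knots)  # shape (n, d, K)

def alternating_fit(X, y, knots, n_iter=50, lr=0.1):
    # Alternating sketch: step on (w, b) with the transform fixed, then on
    # the transform weights a with (w, b) fixed, projecting onto a >= 0.
    n, d = X.shape
    K = len(knots)
    a = np.ones(K) / K          # transform weights, kept >= 0 for monotonicity
    w = np.zeros(d)             # hyperplane normal
    b = 0.0
    Phi = ramp_features(X, knots)
    for _ in range(n_iter):
        Z = Phi @ a                         # transformed data, (n, d)
        active = y * (Z @ w + b) < 1        # points violating the margin
        # Step 1: hinge-loss gradient step on (w, b), transform fixed.
        w += lr * (y[active, None] * Z[active]).sum(0) / n - lr * 1e-3 * w
        b += lr * y[active].sum() / n
        # Step 2: gradient step on a, classifier fixed; project onto a >= 0
        # so the learned transform stays monotonic.
        G = np.einsum('ndk,d->nk', Phi, w)  # d(margin)/d(a) per point
        a = np.maximum(a + lr * (y[active, None] * G[active]).sum(0) / n, 0.0)
    return a, w, b
```

Unlike the paper's formulation, this greedy variant only reaches a local optimum; the abstract's semidefinite relaxation is precisely what addresses that initialization sensitivity.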
Application of the moment-sos approach to global optimization of the OPF problem, IEEE Trans. on Power Syst., 2014
Cited by 5 (0 self)

Abstract:
Finding a global solution to the optimal power flow (OPF) problem is difficult due to its nonconvexity. A convex relaxation in the form of semidefinite programming (SDP) has attracted much attention lately, as it yields a global solution in several practical cases. However, it does not in all cases, and such cases have been documented in recent publications. This paper presents another SDP method, known as the moment-sos (sum of squares) approach, which generates a sequence that converges towards a global solution to the OPF problem at the cost of higher runtime. Our finding is that in the small examples where the previously studied SDP method fails, this approach finds the global solution. The higher cost in runtime is due to an increase in the matrix size of the SDP problem, which can vary from one instance to another. Numerical experiments show that the size is very often a quadratic function of the number of buses in the network, whereas it is a linear function of the number of buses for the previously studied SDP method.
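The moment idea behind this abstract can be illustrated on a scalar polynomial rather than the full OPF problem. In a minimal numpy sketch, assuming the toy objective x^4 - x^2 (global minimum -1/4 at x = ±1/√2), the objective becomes linear in the moments y_k = E[x^k] and a valid set of moments must form a positive semidefinite moment matrix. A real implementation would solve an SDP over the moment matrix with a solver; here the moments of the known optimal measure are plugged in only to illustrate the certificate.

```python
import numpy as np

# Toy illustration of the moment approach on min_x x^4 - x^2, whose global
# minimum is -1/4 at x = +/- 1/sqrt(2). (The OPF objective is a polynomial
# in bus voltages; this scalar objective is an assumption made for brevity.)
xstar = 1.0 / np.sqrt(2.0)

# Moments y_k = E[x^k] of an optimizing measure: equal mass at +/- xstar.
y = np.array([(xstar**k + (-xstar) ** k) / 2.0 for k in range(5)])

# Order-2 moment matrix M[i, j] = y_{i+j}; positive semidefiniteness is the
# constraint the SDP enforces so the y_k can be moments of some measure.
M = np.array([[y[i + j] for j in range(3)] for i in range(3)])
assert np.linalg.eigvalsh(M).min() >= -1e-12

# The polynomial objective is linear in the moments: E[x^4 - x^2] = y4 - y2,
# and at the optimal measure it equals the global minimum (about -0.25).
value = y[4] - y[2]
```

Raising the relaxation order enlarges the moment matrix, which matches the abstract's observation that the gain in global optimality is paid for by larger SDP matrices and higher runtime.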