Results 1–10 of 28
Selection and Fusion of Color Models for Image Feature Detection
Abstract

Cited by 16 (1 self)
Abstract—The choice of a color model is of great importance for many computer vision algorithms (e.g., feature detection, object recognition, and tracking), as the chosen color model induces the equivalence classes available to the actual algorithms. As there are many color models available, the inherent difficulty is how to automatically select a single color model or, alternatively, a weighted subset of color models producing the best result for a particular task. The subsequent hurdle is how to obtain a proper fusion scheme for the algorithms so that the results are combined in an optimal setting. To achieve proper color model selection and fusion of feature detection algorithms, in this paper we propose a method that exploits the non-perfect correlation between color models or feature detection algorithms, derived from the principles of diversification. As a consequence, a proper balance is obtained between repeatability and distinctiveness. The result is a weighting scheme which yields maximal feature discrimination. The method is verified experimentally for three different image feature detectors. The experimental results show that the fusion method provides feature detection results having a higher discriminative power than the standard weighting scheme. Further, it is experimentally shown that the color model selection scheme provides a proper balance between color invariance (repeatability) and discriminative power (distinctiveness).
Index Terms—Color, learning, feature detection, scene analysis.
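The diversification principle the abstract invokes can be illustrated with a minimum-variance weighting over correlated detector outputs. The correlation values and the closed-form weight formula below are an illustrative sketch of that principle, not the authors' actual selection scheme:

```python
# Minimum-variance "diversification" weights over three correlated
# color-model detectors.  The correlation values are made up for
# illustration; this sketches the principle, not the paper's method.

corr = [
    [1.0, 0.6, 0.2],
    [0.6, 1.0, 0.3],
    [0.2, 0.3, 1.0],
]

def solve(a, b):
    """Solve a small linear system a x = b by Gaussian elimination."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))  # partial pivoting
        a[i], a[p] = a[p], a[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
            b[r] -= f * b[i]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(a[i][c] * x[c] for c in range(i + 1, n))) / a[i][i]
    return x

# Weights minimizing w^T C w subject to sum(w) = 1:
#   w = C^{-1} 1 / (1^T C^{-1} 1)
y = solve(corr, [1.0, 1.0, 1.0])
weights = [v / sum(y) for v in y]
# The least-correlated model (index 2) receives the largest weight.
```

As in portfolio theory, the model least correlated with the others contributes the most to the fused result, which is the sense in which non-perfect correlation is "exploited".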
Some Generalizations Of The Criss-Cross Method For Quadratic Programming
 MATH. OPER. UND STAT. SER. OPTIMIZATION
, 1992
Abstract

Cited by 13 (8 self)
Three generalizations of the criss-cross method for quadratic programming are presented here. Tucker's, Cottle's and Dantzig's principal pivoting methods are specialized as diagonal and exchange pivots for the linear complementarity problem obtained from a convex quadratic program. A finite criss-cross method, based on least-index resolution, is constructed for solving the LCP. In proving finiteness, orthogonality properties of pivot tableaus and positive semidefiniteness of quadratic matrices are used. In the last section some special cases and two further variants of the quadratic criss-cross method are discussed. If the matrix of the LCP has full rank, then a surprisingly simple algorithm follows, which coincides with Murty's 'Bard-type schema' in the P-matrix case.
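The reduction the abstract relies on, obtaining an LCP from a convex quadratic program, is standard and easy to show concretely. The QP data below is a toy example; the block-matrix formulation itself is the usual KKT construction:

```python
# The KKT conditions of the convex QP
#   min 0.5 x^T Q x + c^T x   s.t.  Ax >= b, x >= 0
# form the LCP   w = M z + q,  w >= 0,  z >= 0,  w^T z = 0,
# with M = [[Q, -A^T], [A, 0]] and q = (c, -b).  Toy data:
Q = [[2.0, 0.0],
     [0.0, 2.0]]
c = [-2.0, -4.0]
A = [[1.0, 1.0]]
b = [1.0]

n, m = len(c), len(b)
M = [[0.0] * (n + m) for _ in range(n + m)]
for i in range(n):
    for j in range(n):
        M[i][j] = Q[i][j]
    for j in range(m):
        M[i][n + j] = -A[j][i]       # -A^T block
for i in range(m):
    for j in range(n):
        M[n + i][j] = A[i][j]        # A block; lower-right block stays 0
q = c + [-bi for bi in b]

# For convex QP (Q positive semidefinite) M is positive semidefinite,
# since z^T M z = x^T Q x for z = (x, y): the -A^T and A blocks cancel.
# This is the property the finiteness proofs exploit.
z = [1.0, -2.0, 3.0]
quad = sum(z[i] * M[i][j] * z[j]
           for i in range(n + m) for j in range(n + m))
```

The bisymmetric structure of M (skew-symmetric off-diagonal blocks over a PSD diagonal block) is what allows diagonal and exchange pivots to be distinguished.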
Simulation-based optimization of virtual nesting controls for network revenue management
, 2004
Abstract

Cited by 12 (1 self)
Virtual nesting is a popular capacity control strategy in network revenue management. (See Smith et al. [36].) In virtual nesting, products (itinerary-fare-class combinations) are mapped ("indexed") into a relatively small number of "virtual classes" on each resource (flight leg) of the network. Nested protection levels are then used to control the availability of these virtual classes; specifically, a product request is accepted if and only if its corresponding virtual class is available on each resource required. (See Talluri and van Ryzin [38] for a detailed discussion of virtual nesting and protection level controls.) Bertsimas and de Boer [8] recently proposed an innovative simulation-based optimization method for computing protection levels in a virtual nesting control scheme. In contrast to traditional heuristic methods, their approach more accurately approximates the true network revenues generated by the virtual nesting controls. However, because it is based on a discrete model of capacity and demand, the method has both computational and theoretical limitations. In particular, it uses first-difference estimates, which are computationally complex to calculate exactly. These gradient estimates are then used in a steepest-ascent-type algorithm, which, for discrete problems, has no guarantee of convergence.
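The acceptance rule described above is simple to state in code. The leg names, capacities, and protection levels below are hypothetical; the sketch only illustrates the "accept iff the virtual class is open on every required leg" logic:

```python
# Hypothetical virtual-nesting control.  protection[leg][v] is the
# capacity protected for virtual classes ranked above v (class 0 is
# highest, so protection[leg][0] == 0); class v is open on a leg when
# remaining capacity exceeds that protected amount.  All data is
# illustrative, not from the paper.
protection = {
    "A-B": [0, 30, 50],
    "B-C": [0, 20, 60],
}
capacity = {"A-B": 100, "B-C": 100}
sold = {"A-B": 55, "B-C": 45}

def accept(request_legs, virtual_class):
    """Accept iff the virtual class is available on each required leg."""
    for leg in request_legs:
        remaining = capacity[leg] - sold[leg]
        if remaining <= protection[leg][virtual_class]:
            return False
    return True
```

With the numbers above, a class-1 request for the A-B plus B-C itinerary is accepted, while a class-2 request is rejected because the A-B leg has only 45 seats left against 50 protected for higher classes.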
Exact Arithmetic at Low Cost: A Case Study in Linear Programming
 Computational Geometry: Theory and Applications
, 1999
Abstract

Cited by 10 (4 self)
We describe a new exact-arithmetic approach to linear programming when the number of variables n is much larger than the number of constraints m (or vice versa). The algorithm is an implementation of the simplex method which combines exact (multiple precision) arithmetic with inexact (floating point) arithmetic, where the number of exact arithmetic operations is small and usually bounded by a function of min(n, m). Combining this with a "partial pricing" scheme (based on a result by Clarkson [8]) which is particularly tuned for the problems under consideration, we obtain a correct and practically efficient algorithm that even competes with the inexact state-of-the-art solver CPLEX for small values of min(n, m) and is far superior to methods that use exact arithmetic in every operation.
1 Introduction. Linear Programming (LP), the problem of maximizing a linear objective function in n variables subject to m linear (in)equality constraints, is the most prominent optimization ...
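The float/exact split described above can be shown in miniature: floating point does the bulk of the work cheaply, and exact rational arithmetic is reserved for a final certification step. The 2x2 system below is a toy stand-in, not the paper's algorithm:

```python
from fractions import Fraction

# Toy 2x2 linear system standing in for a simplex basis system.
A = [[2, 1], [1, 3]]
b = [4, 7]

def cramer(A, b, num):
    """Solve the 2x2 system by Cramer's rule over a chosen number type."""
    a11, a12 = num(A[0][0]), num(A[0][1])
    a21, a22 = num(A[1][0]), num(A[1][1])
    b1, b2 = num(b[0]), num(b[1])
    det = a11 * a22 - a12 * a21
    return [(b1 * a22 - a12 * b2) / det,
            (a11 * b2 - b1 * a21) / det]

x_float = cramer(A, b, float)       # inexact phase: cheap floating point
x_exact = cramer(A, b, Fraction)    # exact phase: certifies the answer

# The multi-precision cost is paid only for the verification pass.
assert all(abs(float(xe) - xf) < 1e-9
           for xe, xf in zip(x_exact, x_float))
```

The point of the design is that the expensive rational arithmetic touches only a number of operations bounded by the small problem dimension, while pricing and ratio tests run in hardware floats.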
Sensitivity Analysis in (Degenerate) Quadratic Programming
 DELFT UNIVERSITY OF TECHNOLOGY
, 1996
Abstract

Cited by 7 (2 self)
In this paper we deal with sensitivity analysis in convex quadratic programming, without making assumptions on nondegeneracy, strict convexity of the objective function, or the existence of a strictly complementary solution. We show that the optimal value as a function of a right-hand side element (or an element of the linear part of the objective) is piecewise quadratic, where the pieces can be characterized by maximal complementary solutions and tripartitions. Further, we investigate the differentiability of this function. A new algorithm to compute the optimal value function is proposed. Finally, we discuss the advantages of this approach when applied to mean-variance portfolio models.
SENSITIVITY ANALYSIS IN CONVEX QUADRATIC OPTIMIZATION: INVARIANT SUPPORT SET INTERVAL
, 2004
Abstract

Cited by 4 (2 self)
In sensitivity analysis one wants to know how the problem and the optimal solutions change under variation of the input data. We consider the case where variation happens in the right-hand side of the constraints and/or in the linear term of the objective function. We are interested in finding the range of parameter variation in Convex Quadratic Optimization (CQO) problems over which the support set of a given primal optimal solution remains invariant. This question was first raised in Linear Optimization (LO) and is known as Type II (so-called support set invariancy) sensitivity analysis. We present computable auxiliary problems to identify the range of parameter variation in support set invariancy sensitivity analysis for CQO. It should be mentioned that all given auxiliary problems are LO problems and can be solved by an interior point method in polynomial time. We also highlight the differences between the characteristics of support set invariancy sensitivity analysis for LO and CQO.
Basis and Tripartition Identification for Quadratic Programming and Linear Complementarity Problems: From an interior solution to an optimal basis and vice versa
, 1996
Abstract

Cited by 3 (2 self)
Optimal solutions of interior point algorithms for linear and quadratic programming and linear complementarity problems provide maximal complementary solutions. Maximal complementary solutions can be characterized by optimal (tri)partitions. On the other hand, the solutions provided by simplex-based pivot algorithms are given in terms of complementary bases. A basis identification algorithm is an algorithm which generates a complementary basis, starting from any complementary solution. A tripartition identification algorithm is an algorithm which generates a maximal complementary solution (and its corresponding tripartition), starting from any complementary solution. In linear programming such algorithms were proposed by Megiddo in 1991 and by Balinski and Tucker in 1969, respectively. In this paper we present identification algorithms for quadratic programming and linear complementarity problems with sufficient matrices. The presented algorithms are based on the principal...
Decomposition of Mixed Pixels in Remote Sensing Images to Improve the Area Estimation of Agricultural Fields
, 1998
Abstract

Cited by 2 (0 self)
tle University of Reading, United Kingdom prof. dr. G. Wilkinson Kingston University, United Kingdom prof. dr. ir. M. Molenaar Landbouw Universiteit Wageningen Preface This thesis is the result of nearly 10 years of study, work, and fun at the University of Nijmegen. In 1987 I started my study of Informatics at the Faculty of Mathematics and Natural Sciences, not knowing exactly what Informatics was but feeling it had a great future. Four years later, however, I had found out that I was fascinated by the ability of the computer to perform certain tasks that are (nearly) impossible for human beings to execute. A good example of such a task is the classification of multidimensional feature vectors, which I studied extensively during my Master's research period at the Biophysics Laboratory of the University Hospital of Nijmegen. Therefore it is no wonder that, after a brief intermezzo in the military service, I took up a related subject, i.e. the decomposition of mixed
Learning Boundaries on Military Operational Plans from Simulation Data
Abstract

Cited by 1 (1 self)
Abstract—In this paper we learn indicators from simulated data that serve as boundaries on military operational plans of an expeditionary operation. These are boundaries that an operation must not move beyond without risk of drastic failure. We receive simulated and evaluated partial patterns of plan instances, represented as integer strings, from a simulation-based decision support system. These partial patterns are clustered by an unsupervised neural Potts spin clustering method into clusters where the instances in each cluster have similar characteristics and outcomes. This gives all partial patterns a classification. We use a Dempster-Shafer theory based factor screening method on each pair of clusters, where all activities of the plan are evaluated as to their capacity to differentiate between the two sets of partial plan instances. All plan instances are projected from their full integer string representation onto a subset of factors with high differentiating capacity. We then apply supervised learning with a Support Vector Machine, using the previous classification, to learn support vectors for each pair of clusters given the projected plan instances of these clusters. From these support vectors we derive a lower-dimensional hyperplane that will serve as one of the indicators; one indicator from each pair of clusters makes up the full set of indicators for the operational plan. This set of indicators can be provided to the intelligence service and used during execution of the plan to assess its progress, serving as a warning bell if the plan approaches an indicator beyond which it should not proceed.
Keywords—military operational planning; effects-based planning; indicators; partial patterns; clustering; neural network; Potts spin; Dempster-Shafer theory; factor screening; support vector machine; hyperplane.
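The final step in the pipeline, deriving a hyperplane indicator from support vectors, can be sketched as follows. The support vectors, labels, and dual coefficients are made-up illustrative values chosen to satisfy the SVM optimality conditions for this toy geometry, not data from the paper:

```python
# Recovering the separating hyperplane w.x + b = 0 from the support
# vectors of a trained linear SVM:  w = sum_i alpha_i y_i x_i, with b
# fixed so each support vector lies on its margin, y_i (w.x_i + b) = 1.
# All values below are a consistent toy example.
support_vectors = [(1.0, 1.0), (-1.0, -1.0)]
labels = [+1, -1]
alphas = [0.25, 0.25]     # hypothetical dual coefficients

w = [sum(a * y * sv[d]
         for a, y, sv in zip(alphas, labels, support_vectors))
     for d in range(2)]
b = labels[0] - sum(wi * xi for wi, xi in zip(w, support_vectors[0]))

def side(x):
    """Which side of the indicator hyperplane a plan instance falls on."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```

During execution, evaluating `side` on the current plan state plays the role of the "warning bell": a sign change means the plan has crossed the indicator it should not proceed beyond.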