Sparse Permutation Invariant Covariance Estimation
 Electronic Journal of Statistics
, 2008
"... The paper proposes a method for constructing a sparse estimator for the inverse covariance (concentration) matrix in highdimensional settings. The estimator uses a penalized normal likelihood approach and forces sparsity by using a lassotype penalty. We establish a rate of convergence in the Fro ..."
Abstract

Cited by 75 (5 self)
The paper proposes a method for constructing a sparse estimator for the inverse covariance (concentration) matrix in high-dimensional settings. The estimator uses a penalized normal likelihood approach and forces sparsity by using a lasso-type penalty. We establish a rate of convergence in the Frobenius norm as both data dimension p and sample size n are allowed to grow, and show that the rate depends explicitly on how sparse the true concentration matrix is. We also show that a correlation-based version of the method exhibits better rates in the operator norm. The estimator is required to be positive definite, but we avoid having to use semidefinite programming by reparameterizing the objective function.
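The penalized-likelihood estimator sketched in the abstract has the standard lasso-penalized Gaussian log-likelihood form (a generic statement of this family of objectives, with λ the penalty level and the penalty on the off-diagonal entries; details of the paper's own reparameterization differ):

```latex
\hat{\Theta} \;=\; \arg\min_{\Theta \succ 0}\;
  \operatorname{tr}(\hat{\Sigma}\,\Theta) \;-\; \log\det\Theta
  \;+\; \lambda \sum_{j \neq k} |\theta_{jk}|,
```

where \(\hat{\Sigma}\) is the sample covariance (or correlation) matrix and \(\Theta\) ranges over positive definite matrices, so that sparsity is imposed on the estimated concentration matrix \(\hat{\Theta}\) directly.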
Parametric Analysis of Polyhedral Iteration Spaces
 JOURNAL OF VLSI SIGNAL PROCESSING
, 1998
"... In the area of automatic parallelization of programs, analyzing and transforming loop nests with parametric affine loop bounds requires fundamental mathematical results. The most common geometrical model of iteration spaces, called the polytope model, is based on mathematics dealing with convex and ..."
Abstract

Cited by 68 (13 self)
In the area of automatic parallelization of programs, analyzing and transforming loop nests with parametric affine loop bounds requires fundamental mathematical results. The most common geometrical model of iteration spaces, called the polytope model, is based on mathematics dealing with convex and discrete geometry, linear programming, combinatorics, and the geometry of numbers. In this paper, we present automatic methods for computing the parametric vertices and the Ehrhart polynomial, i.e., a parametric expression of the number of integer points, of a polytope defined by a set of parametric linear constraints. These methods have many applications in the analysis and transformation of nested loop programs. The paper is illustrated with exact symbolic array dataflow analysis, estimation of execution time, and computation of the maximum available parallelism of given loop nests.
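The Ehrhart polynomial can be checked by brute force on a small example. The sketch below (a textbook instance, not taken from the paper) counts the lattice points of the n-fold dilation of the standard triangle and compares against its known Ehrhart polynomial:

```python
# Integer points of n*T, where T = {(x, y) : x >= 0, y >= 0, x + y <= 1}.
# The Ehrhart polynomial of T is L(n) = (n + 1)(n + 2) / 2.
def lattice_points(n: int) -> int:
    """Brute-force count of integer points in the dilated triangle n*T."""
    return sum(1 for x in range(n + 1) for y in range(n + 1) if x + y <= n)

def ehrhart(n: int) -> int:
    """Closed-form count: the Ehrhart polynomial evaluated at n."""
    return (n + 1) * (n + 2) // 2

print([lattice_points(n) for n in range(5)])  # → [1, 3, 6, 10, 15]
```

For loop nests, the parameter n plays the role of a symbolic loop bound, and the polynomial gives the trip count without enumerating iterations.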
Hellytype theorems and generalized linear programming
 DISCRETE COMPUT. GEOM
, 1994
"... This thesis establishes a connection between the Helly theorems, a collection of results from combinatorial geometry, and the class of problems which we call Generalized Linear Programming, or GLP, which can be solved by combinatorial linear programming algorithms like the simplex method. We use the ..."
Abstract

Cited by 59 (0 self)
This thesis establishes a connection between the Helly theorems, a collection of results from combinatorial geometry, and the class of problems which we call Generalized Linear Programming, or GLP, which can be solved by combinatorial linear programming algorithms like the simplex method. We use these results to explore the class GLP and show new applications to geometric optimization, and also to prove Helly theorems. In general, a GLP is a set...
Boosting as Entropy Projection
, 1999
"... We consider the AdaBoost procedure for boosting weak learners. In AdaBoost, a key step is choosing a new distribution on the training examples based on the old distribution and the mistakes made by the present weak hypothesis. We show how AdaBoost 's choice of the new distribution can be seen ..."
Abstract

Cited by 58 (8 self)
We consider the AdaBoost procedure for boosting weak learners. In AdaBoost, a key step is choosing a new distribution on the training examples based on the old distribution and the mistakes made by the present weak hypothesis. We show how AdaBoost's choice of the new distribution can be seen as an approximate solution to the following problem: find a new distribution that is closest to the old distribution subject to the constraint that the new distribution is orthogonal to the vector of mistakes of the current weak hypothesis. The distance (or divergence) between distributions is measured by the relative entropy. Alternatively, we could say that AdaBoost approximately projects the distribution vector onto a hyperplane defined by the mistake vector. We show that this new view of AdaBoost as an entropy projection is dual to the usual view of AdaBoost as minimizing the normalization factors of the updated distributions.
Leader-Follower Strategies for Robotic Patrolling in Environments with Arbitrary Topologies
"... Game theoretic approaches to patrolling have become a topic of increasing interest in the very last years. They mainly refer to a patrolling mobile robot that preserves an environment from intrusions. These approaches allow for the development of patrolling strategies that consider the possible acti ..."
Abstract

Cited by 51 (1 self)
Game-theoretic approaches to patrolling have become a topic of increasing interest in recent years. They mainly refer to a patrolling mobile robot that protects an environment from intrusions. These approaches allow for the development of patrolling strategies that consider the possible actions of the intruder when deciding where the robot should move. Usually, it is assumed that the intruder can hide and observe the actions of the patroller before intervening. This leads to the adoption of a leader-follower solution concept. In this paper, mostly theoretical in nature, we propose an approach to determine optimal leader-follower strategies for a mobile robot patrolling an environment. Unlike previous works in the literature, our approach can be applied to environments with arbitrary topologies.
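The leader-follower (Stackelberg) solution concept can be illustrated on a toy two-action game: the patroller commits to a mixed strategy, the intruder observes it and best-responds, and the patroller optimizes over commitments anticipating that response. The payoff matrices below are purely illustrative, not from the paper, and the grid search stands in for the exact optimization:

```python
# Rows: leader (patroller) actions. Columns: follower (intruder) actions.
leader_payoff = [[2.0, 4.0],
                 [1.0, 3.0]]
follower_payoff = [[1.0, 0.0],
                   [0.0, 2.0]]

def best_response(p: float) -> int:
    """Follower's best action against the leader's mixed strategy (p, 1 - p)."""
    vals = [p * follower_payoff[0][j] + (1 - p) * follower_payoff[1][j] for j in (0, 1)]
    return max((0, 1), key=lambda j: vals[j])

def leader_value(p: float) -> float:
    """Leader's expected payoff when committing to (p, 1 - p)."""
    j = best_response(p)
    return p * leader_payoff[0][j] + (1 - p) * leader_payoff[1][j]

# Grid search over commitments, anticipating the follower's best response.
best = max((p / 1000 for p in range(1001)), key=leader_value)
print(best, leader_value(best))
```

The key feature, mirrored in the patrolling setting, is that the optimal commitment exploits the intruder's observation: the leader's value can exceed what any simultaneous-move equilibrium would give.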
Getting the Best Response for Your Erg
"... We consider the speed scaling problem of minimizing the average response time of a collection of dynamically released jobs subject to a constraint A on energy used. We propose an algorithmic approach in which an energy optimal schedule is computed for a huge A, and then the energy optimal schedule ..."
Abstract

Cited by 49 (9 self)
We consider the speed scaling problem of minimizing the average response time of a collection of dynamically released jobs subject to a constraint A on the energy used. We propose an algorithmic approach in which an energy-optimal schedule is computed for a huge A, and the energy-optimal schedule is then maintained as A decreases. We show that this approach yields an efficient algorithm for equi-work jobs. We note that the energy-optimal schedule has the surprising feature that the job speeds are not monotone functions of the available energy. We then explain why this algorithmic approach is problematic for arbitrary-work jobs. Finally, we explain how to use the algorithm for equi-work jobs to obtain an algorithm for arbitrary-work jobs that is O(1)-approximate with respect to average response time, given an additional factor of (1 + ε) energy.
Optimal Decoupling Capacitor Sizing and Placement for Standard Cell Layout Designs
 IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems
, 1995
"... With technology scaling, the trend for high performance integrated circuits is towards ever higher operating frequency, lower power supply voltages and higher power dissipation. ..."
Abstract

Cited by 48 (4 self)
With technology scaling, the trend for high-performance integrated circuits is towards ever higher operating frequencies, lower power supply voltages, and higher power dissipation.
Some Characterizations And Properties Of The "Distance To Ill-Posedness" And The Condition Measure Of A Conic Linear System
, 1998
"... A conic linear system is a system of the form P (d) : find x that solves b \Gamma Ax 2 C Y ; x 2 CX ; where CX and C Y are closed convex cones, and the data for the system is d = (A; b). This system is"wellposed" to the extent that (small) changes in the data (A; b) do not alter the status of the ..."
Abstract

Cited by 45 (21 self)
A conic linear system is a system of the form P(d): find x that solves b − Ax ∈ C_Y, x ∈ C_X, where C_X and C_Y are closed convex cones, and the data for the system is d = (A, b). This system is "well-posed" to the extent that (small) changes in the data (A, b) do not alter the status of the system (the system remains solvable or not). Renegar defined the "distance to ill-posedness," ρ(d), to be the smallest change in the data Δd = (ΔA, Δb) for which the system P(d + Δd) is "ill-posed," i.e., d + Δd is in the intersection of the closure of the feasible and infeasible instances d′ = (A′, b′) of P(·). Renegar also defined the "condition measure" of the data instance d as C(d) := ‖d‖/ρ(d), and showed that this measure is a natural extension of the familiar condition measure associated with systems of linear equations. This study presents two categories of results related to ρ(d), the distance to ill-posedness, and C(d), the condition me...
Fast image recovery using variable splitting and constrained optimization
 IEEE Trans. Image Process
, 2010
"... Abstract—We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction which consists of an unconstrained optimization problem where the objective includes an `2 datafidelity term and a nonsmooth regularizer. This formulation allows both wavele ..."
Abstract

Cited by 45 (9 self)
Abstract—We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction, which consists of an unconstrained optimization problem where the objective includes an ℓ2 data-fidelity term and a non-smooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) regularization and total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods. Index Terms—Augmented Lagrangian, compressive sensing, convex optimization, image reconstruction, image restoration,
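The variable-splitting idea can be sketched on a generic ℓ2 + ℓ1 problem rather than the paper's image-restoration setting: split x = z, then alternate a quadratic step in x, a shrinkage (proximal) step in z, and a dual update. This is the standard ADMM iteration, assuming a synthetic problem instance:

```python
import numpy as np

rng = np.random.default_rng(1)

# min_x 0.5 * ||A x - b||^2 + lam * ||x||_1, via the splitting x = z
# and an augmented-Lagrangian (ADMM) iteration.
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)

lam, rho = 0.1, 1.0
z = np.zeros(10)
u = np.zeros(10)
Atb = A.T @ b
L = np.linalg.cholesky(A.T @ A + rho * np.eye(10))  # factor once, reuse each iteration

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(200):
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))  # quadratic step
    z = soft(x + u, lam / rho)                                         # shrinkage step
    u = u + x - z                                                      # dual update

print(np.round(z, 2))  # sparse estimate, close to x_true
```

In the imaging setting, A is the observation operator and the ℓ1 term acts on wavelet coefficients (or is replaced by total variation), but the alternation has the same three-step shape.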
Quadratic Optimization
, 1995
"... . Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real world applications, including problems in planning and scheduling, economies of scale, and engineering design, and control are naturally expressed as quadratic problems. Moreover, t ..."
Abstract

Cited by 45 (3 self)
Quadratic optimization comprises one of the most important areas of nonlinear programming. Numerous problems in real-world applications, including problems in planning and scheduling, economies of scale, engineering design, and control, are naturally expressed as quadratic problems. Moreover, the quadratic problem is known to be NP-hard, which makes this one of the most interesting and challenging classes of optimization problems. In this chapter, we review various properties of the quadratic problem and discuss different techniques for solving various classes of quadratic problems. Some of the more successful algorithms for solving the special cases of bound-constrained and large-scale quadratic problems are considered. Examples of various applications of quadratic programming are presented. A summary of the available computational results for the algorithms to solve the various classes of problems is presented. Key words: Quadratic optimization, bilinear programming, concave pro...
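For the bound-constrained special case mentioned above, one of the simplest methods is projected gradient: take a gradient step on the quadratic and clip back into the box. A minimal sketch on a tiny convex instance (the matrices are illustrative, not from the survey):

```python
import numpy as np

# min 0.5 * x^T Q x + c^T x   subject to   0 <= x <= 1
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # symmetric positive definite -> convex QP
c = np.array([-1.0, -1.0])

step = 1.0 / np.linalg.eigvalsh(Q).max()  # 1/L step size for the smooth part
x = np.zeros(2)
for _ in range(500):
    grad = Q @ x + c
    x = np.clip(x - step * grad, 0.0, 1.0)  # gradient step, then project onto the box

print(np.round(x, 3))
```

Here the unconstrained minimizer already lies inside the box, so the iteration converges to it; when a bound is active, the clip step keeps the iterate feasible and the method converges to the constrained optimum instead. For indefinite Q the same iteration only finds a stationary point, which reflects the NP-hardness of the general problem.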