Results 11 – 20 of 28
Theory and Algorithm of Local-Refinement-Based Optimization with Application to Device and Interconnect Sizing
, 1999
Abstract

Cited by 7 (7 self)
In this paper we formulate three classes of optimization problems: the simple, monotonically-constrained, and bounded CH-programs. We reveal the dominance property under the local refinement (LR) operation for the simple CH-program, as well as the general dominance property under the pseudo-LR operation for the monotonically-constrained CH-program and the extended-LR operation for the bounded CH-program. These properties enable a very efficient polynomial-time algorithm, using different types of LR operations, to compute tight lower and upper bounds of the exact solution to any CH-program. We show that the algorithm is capable of solving many layout optimization problems in deep submicron IC and/or high-performance MCM/PCB designs. In particular, we apply...
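The LR idea described in the abstract can be illustrated on a toy problem. The objective, the discrete width set, and all names below are our own illustrative assumptions, not the paper's CH-program formulation: each variable is re-optimized with the others held fixed, and the refinement is run from the all-minimum and all-maximum starting points to obtain lower and upper bounding solutions.

```python
# Toy illustration (not the paper's algorithm): iterative local refinement (LR)
# on a separable, posynomial-like objective f(w) = sum_i (a_i / w_i + b_i * w_i),
# with each width restricted to a discrete set of allowed values.

WIDTHS = [1.0, 2.0, 3.0, 4.0]          # allowed discrete wire widths (assumed)

def cost(w, a, b):
    """Delay-like objective: resistive term a_i/w_i plus capacitive load b_i*w_i."""
    return sum(ai / wi + bi * wi for ai, bi, wi in zip(a, b, w))

def local_refine(w, a, b):
    """One LR pass: re-optimize each width with the others held fixed."""
    w = list(w)
    for i in range(len(w)):
        w[i] = min(WIDTHS, key=lambda x: a[i] / x + b[i] * x)
    return w

def lr_bounds(a, b, max_iters=50):
    """Run LR from the all-min and all-max starting points; under a dominance
    property the two fixed points bracket the optimum component-wise."""
    lo = [min(WIDTHS)] * len(a)
    hi = [max(WIDTHS)] * len(a)
    for _ in range(max_iters):
        nlo, nhi = local_refine(lo, a, b), local_refine(hi, a, b)
        if nlo == lo and nhi == hi:
            break
        lo, hi = nlo, nhi
    return lo, hi
```

Because this toy objective is separable, the two runs meet at the same point immediately; in the actual CH-programs the variables are coupled, and the paper's dominance properties are what justify using the two fixed points as bounds.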
Applications of Semidefinite Programming
, 1998
Abstract

Cited by 6 (0 self)
A wide variety of nonlinear convex optimization problems can be cast as problems involving linear matrix inequalities (LMIs), and hence efficiently solved using recently developed interior-point methods. In this paper, we will consider two classes of optimization problems with LMI constraints: the semidefinite programming problem, i.e., the problem of minimizing a linear function subject to a linear matrix inequality. Semidefinite programming is an important numerical tool for analysis and synthesis in systems and control theory. It has also been recognized in combinatorial optimization as a valuable technique for obtaining bounds on the solution of NP-hard problems.
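As a minimal sketch of the LMI constraint the abstract refers to (our own toy, not from the paper): the constraint F(x) = F0 + x1*F1 + x2*F2 ⪰ 0 is convex in x, and for 2×2 symmetric matrices positive semidefiniteness can be checked directly from the diagonal entries and the determinant.

```python
# Illustrative feasibility test for a 2x2 LMI F(x) = F0 + x1*F1 + x2*F2 >= 0.
# The matrices I, A, B below are arbitrary assumed data.

def mat_comb(F0, F1, F2, x):
    """Affine matrix function F(x) = F0 + x[0]*F1 + x[1]*F2 (2x2 nested lists)."""
    return [[F0[i][j] + x[0] * F1[i][j] + x[1] * F2[i][j] for j in range(2)]
            for i in range(2)]

def is_psd_2x2(M, tol=1e-12):
    """A 2x2 symmetric matrix is PSD iff its diagonal and determinant are >= 0."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return M[0][0] >= -tol and M[1][1] >= -tol and det >= -tol

I = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.0, 1.0], [1.0, 0.0]]
B = [[1.0, 0.0], [0.0, -1.0]]
```

The set of feasible x is convex, which is what makes interior-point methods applicable; a real solver (rather than this membership check) would minimize a linear function c·x over that set.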
Mixed state estimation for a linear Gaussian Markov model
 in: Proceedings of the IEEE Conference on Decision and Control
Abstract

Cited by 5 (5 self)
We consider a discrete-time dynamical system with Boolean and continuous states, with the continuous state propagating linearly in the continuous and Boolean state variables, and an additive Gaussian process noise, and where each Boolean state component follows a simple Markov chain. This model, which can be considered a hybrid or jump-linear system with very special form, or a standard linear Gauss-Markov dynamical system driven by a Boolean Markov process, arises in dynamic fault detection, in which each Boolean state component represents a fault that can occur. We address the problem of estimating the state, given Gaussian noise corrupted linear measurements. Computing the exact maximum a posteriori (MAP) estimate entails solving a mixed integer quadratic program, which is computationally difficult in general, so we propose an approximate MAP scheme, based on a convex relaxation, followed by rounding and (possibly) further local optimization. Our method has a complexity that grows linearly in the time horizon and cubically with the state dimension, the same as a standard Kalman filter. Numerical experiments suggest that it performs very well in practice.
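The relax-and-round scheme the abstract describes can be shown in miniature. The single-measurement model below (y = x + 2·s + noise, Gaussian prior on x, Boolean s) is our own toy, not the paper's dynamical system: the Boolean variable is relaxed to [0, 1], the convex relaxation is minimized exactly, and the relaxed value is rounded before re-solving for the continuous state.

```python
# Toy relax-and-round MAP sketch (illustrative model, assumed by us):
# objective for fixed s in {0, 1}:  J(x, s) = (y - x - 2*s)**2 + x**2.

def map_fixed_s(y, s):
    """Exact continuous MAP for a fixed Boolean value: minimize J over x."""
    x = (y - 2.0 * s) / 2.0            # stationarity: dJ/dx = 0
    return x, (y - x - 2.0 * s) ** 2 + x ** 2

def relax_and_round(y):
    """Relax s to [0, 1], minimize the convex relaxation, round s,
    then re-solve for x -- the scheme from the abstract, in miniature."""
    # Eliminating x gives J(s) = (y - 2*s)**2 / 2, minimized at s = y/2.
    s_relaxed = min(1.0, max(0.0, y / 2.0))
    s = 1.0 if s_relaxed >= 0.5 else 0.0
    x, obj = map_fixed_s(y, s)
    return s, x, obj
```

In this one-dimensional toy, rounding happens to recover the exact mixed-integer optimum; in the general problem it is only an approximation, which is why the paper follows rounding with optional further local optimization.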
Modeling and Optimization of VLSI Interconnects
, 1999
Abstract

Cited by 5 (0 self)
As very large scale integrated (VLSI) circuits move into the era of deep-submicron (DSM) technology and gigahertz frequency, the system performance has increasingly become dominated by the interconnect delay. This dissertation presents five related research topics on interconnect layout optimization, and interconnect extraction and modeling: the multi-source wire sizing (MSWS) problem, the simultaneous transistor and interconnect sizing (STIS) problem, the global interconnect sizing and spacing (GISS) problem, the interconnect capacitance extraction problem, and the interconnect inductance extraction problem. Given a routing tree with multiple sources, the MSWS problem determines the optimal widths of the wire segments such that the delay is minimized. We reveal several interesting properties for the optimal MSWS solution, of which the most important is the bundled refinement property. Based on this property, we propose a polynomial time algorithm, which uses iterative bundled refinement operations to compute lower and upper bounds of an optimal solution. Since the algorithm often achieves identical lower and upper bounds in experiments, the optimal solution is obtained simply by the bound computation. Furthermore, this algorithm can be used for the single-source wire sizing problem and runs 100x faster than previous methods. It has replaced previous single-source wire sizing methods in practice.
Mixed Semidefinite-Quadratic-Linear Programs, in Recent Advances in LMI Methods for Control
, 2000
A fast hybrid algorithm for large-scale ℓ1-regularized logistic regression
 Journal of Machine Learning Research
Abstract

Cited by 1 (0 self)
ℓ1-regularized logistic regression, also known as sparse logistic regression, is widely used in machine learning, computer vision, data mining, bioinformatics and neural signal processing. The use of ℓ1-regularization confers attractive properties on the classifier, such as feature selection, robustness to noise, and, as a result, classifier generality in the context of supervised learning. When a sparse logistic regression problem has large-scale data in high dimensions, it is computationally expensive to minimize the non-differentiable ℓ1-norm in the objective function. Motivated by recent work (Hale et al., 2008; Koh et al., 2007), we propose a novel hybrid algorithm based on combining two types of optimization iterations: one being very fast and memory friendly while the other being slower but more accurate. Called hybrid iterative shrinkage (HIS), the resulting algorithm is comprised of a fixed point continuation phase and an interior point phase. The first phase is based completely on memory efficient operations such as matrix-vector multiplications, while the second phase is based on a truncated Newton's method. Furthermore, we show that various...
A Fast Hybrid Algorithm for Large-Scale ℓ1-Regularized Logistic Regression
Abstract
ℓ1-regularized logistic regression, also known as sparse logistic regression, is widely used in machine learning, computer vision, data mining, bioinformatics and neural signal processing. The use of ℓ1 regularization confers attractive properties on the classifier, such as feature selection, robustness to noise, and, as a result, classifier generality in the context of supervised learning. When a sparse logistic regression problem has large-scale data in high dimensions, it is computationally expensive to minimize the non-differentiable ℓ1-norm in the objective function. Motivated by recent work (Koh et al., 2007; Hale et al., 2008), we propose a novel hybrid algorithm based on combining two types of optimization iterations: one being very fast and memory friendly while the other being slower but more accurate. Called hybrid iterative shrinkage (HIS), the resulting algorithm is comprised of a fixed point continuation phase and an interior point phase. The first phase is based completely on memory efficient operations such as matrix-vector multiplications, while the second phase is based on a truncated Newton's method. Furthermore, we show that various optimization techniques, including line search and continuation, can significantly accelerate convergence. The algorithm has global convergence at a geometric rate (a Q-linear rate in optimization terminology).
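The memory-friendly first phase the abstract describes is built on iterative shrinkage: gradient steps on the logistic loss followed by soft-thresholding, using nothing heavier than matrix-vector products. The sketch below is a plain iterative-shrinkage loop, not the paper's HIS algorithm; the step size, iteration count, and data are our own illustrative choices.

```python
import math

# Sketch of an iterative shrinkage (fixed point) loop for l1-regularized
# logistic regression: w <- shrink(w - step * grad(w), step * lam).

def shrink(v, t):
    """Soft-thresholding operator, the proximal map of t * ||.||_1."""
    return [math.copysign(max(abs(vi) - t, 0.0), vi) for vi in v]

def grad_logistic(w, X, y):
    """Gradient of the average logistic loss log(1 + exp(-y * w.x));
    costs only matrix-vector style work."""
    n = len(X)
    g = [0.0] * len(w)
    for xi, yi in zip(X, y):
        z = yi * sum(wj * xj for wj, xj in zip(w, xi))
        c = -yi / (1.0 + math.exp(z))
        for j, xj in enumerate(xi):
            g[j] += c * xj / n
    return g

def iterative_shrinkage(X, y, lam, step=0.5, iters=500):
    w = [0.0] * len(X[0])
    for _ in range(iters):
        g = grad_logistic(w, X, y)
        w = shrink([wj - step * gj for wj, gj in zip(w, g)], step * lam)
    return w
```

On a tiny dataset where only the first feature is informative, the ℓ1 penalty zeroes the noise feature exactly, illustrating the feature-selection property mentioned in the abstract. HIS switches from this phase to an interior point phase once the active set settles, to get high accuracy faster.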
Wire shaping of RLC interconnects, INTEGRATION, the VLSI Journal 40 (2007) 461–472
, 2006
Abstract
The optimum wire shape to produce the minimum signal propagation delay across an RLC line is shown to exhibit a general exponential form. The line inductance makes exponential tapering more attractive for RLC lines than for RC lines. For RLC lines, optimum wire tapering achieves a greater reduction in the signal propagation delay as compared to uniform wire sizing. For RLC lines, exponential tapering outperforms uniform repeater insertion. As technology advances, wire tapering becomes more effective than repeater insertion, since a greater reduction in the propagation delay is achieved. Optimum wire tapering achieves a reduction of 36% in the propagation delay in long RLC interconnect as compared to uniform repeater insertion. Wire tapering can reduce both the propagation delay and power dissipation. Optimum tapering for minimum propagation delay reduces the propagation delay by 15% and power dissipation by 16% for an example circuit. The optimum tapering factor to minimize the transient power dissipation of a circuit is described in this paper. An analytic solution to determine the optimum tapering factor that exhibits an error of less than 2% is provided. Wire tapering is also shown to reduce the power dissipation of a circuit by up to 65%. Wire tapering can also improve signal integrity by reducing the inductive noise of the interconnect lines. Wire tapering reduces the effect of impedance mismatch in digital circuits. The difference between the overshoots and undershoots in the signal waveform of an example clock distribution network is decreased by 34% as compared to a uniformly sized network producing the same signal characteristics. © 2006 Elsevier B.V. All rights reserved.
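The intuition behind tapering can be checked numerically with a simple RC Elmore model (a deliberate simplification: the paper analyzes RLC lines, and all parameter values below are assumed). A wire that is wide at the driver and narrow at the load puts its low-resistance segments where the downstream capacitance is largest, so for the same total wire capacitance an exponentially tapered profile yields a smaller Elmore delay than a uniform one.

```python
import math

# Toy numeric check (RC Elmore model, not the paper's RLC analysis):
# exponential taper vs. uniform wire of equal total capacitance.
# r0, c0 are per-unit-length resistance/capacitance at unit width (assumed).

def elmore_delay(widths, dx, r0=1.0, c0=1.0, r_drv=1.0, c_load=0.0):
    """Elmore delay of a discretized RC line (L-model: each segment's
    resistance sees its own and all downstream capacitance)."""
    caps = [c0 * dx * w for w in widths]
    downstream = sum(caps) + c_load
    delay = r_drv * downstream           # driver resistance charges everything
    for w, c in zip(widths, caps):
        delay += (r0 * dx / w) * downstream
        downstream -= c
    return delay

def exp_taper(w_drv, rate, n, length):
    """Exponentially tapered width profile w(x) = w_drv * exp(-rate * x),
    sampled at segment midpoints."""
    dx = length / n
    return [w_drv * math.exp(-rate * (i + 0.5) * dx) for i in range(n)], dx
```

The comparison only demonstrates the qualitative advantage in an RC setting; the paper's contribution is the RLC analysis, where inductance makes the exponential taper still more attractive, and the analytic optimum tapering factor.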