Results 1–10 of 19
Second-order cone programming methods for total variation-based image restoration
 SIAM Journal on Scientific Computing
, 2004
Abstract

Cited by 46 (12 self)
Abstract. In this paper we present optimization algorithms for image restoration based on the total variation (TV) minimization framework of L. Rudin, S. Osher and E. Fatemi (ROF). Our approach formulates TV minimization as a second-order cone program which is then solved by interior-point algorithms that are efficient both in practice (using nested dissection and domain decomposition) and in theory (i.e., they obtain solutions in polynomial time). In addition to the original ROF minimization model, we show how to apply our approach to other TV models including ones that are not solvable by PDE-based methods. Numerical results on a varied set of images are presented to illustrate the effectiveness of our approach.
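The discrete TV functional at the heart of the ROF model can be sketched in a few lines. This is a plain NumPy illustration of the functional itself, not the paper's second-order cone formulation; the function names are ours:

```python
import numpy as np

def total_variation(u):
    """Isotropic discrete total variation of a 2-D image u:
    the sum over pixels of the Euclidean norm of the forward-difference gradient."""
    dx = np.pad(np.diff(u, axis=1), ((0, 0), (0, 1)))  # horizontal differences
    dy = np.pad(np.diff(u, axis=0), ((0, 1), (0, 0)))  # vertical differences
    return np.sqrt(dx**2 + dy**2).sum()

def rof_objective(u, f, lam):
    """ROF energy: TV(u) + (lam/2) * ||u - f||_2^2 for noisy data f."""
    return total_variation(u) + 0.5 * lam * np.sum((u - f)**2)

# A constant image has zero TV; a unit step edge has TV equal to the edge length.
flat = np.ones((4, 4))
step = np.zeros((4, 4)); step[:, 2:] = 1.0
print(total_variation(flat))   # 0.0
print(total_variation(step))   # 4.0 (one unit jump along a 4-pixel column)
```

The nondifferentiability of the square root at zero gradient is exactly what makes this functional awkward for classical smooth solvers and natural for the cone formulation.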
Structure learning in random fields for heart motion abnormality detection
 In CVPR
, 2008
Abstract

Cited by 40 (5 self)
Coronary Heart Disease can be diagnosed by assessing the regional motion of the heart walls in ultrasound images of the left ventricle. Even for experts, ultrasound images are difficult to interpret, leading to high intra-observer variability. Previous work indicates that in order to approach this problem, the interactions between the different heart regions and their overall influence on the clinical condition of the heart need to be considered. To do this, we propose a method for jointly learning the structure and parameters of conditional random fields, formulating these tasks as a convex optimization problem. We consider block-L1 regularization for each set of features associated with an edge, and formalize an efficient projection method to find the globally optimal penalized maximum likelihood solution. We perform extensive numerical experiments comparing the presented method with related methods that approach the structure learning problem differently. We verify the robustness of our method on echocardiograms collected in routine clinical practice at one hospital.
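The block-L1 (group-lasso) penalty described above sums the Euclidean norm of each edge's feature group, so driving a whole group to zero prunes that edge from the learned structure. A minimal sketch of the penalty itself, with our own variable names:

```python
import math

def block_l1(weights, groups):
    """Block-L1 (group-lasso) penalty: the sum over groups of the
    Euclidean norm of that group's weights. Zeroing an entire group
    removes the corresponding edge from the graph."""
    return sum(math.sqrt(sum(weights[i] ** 2 for i in g)) for g in groups)

w = [3.0, 4.0, 0.0, 0.0]       # weights for two edges, two features each
groups = [[0, 1], [2, 3]]      # one feature group per candidate CRF edge
print(block_l1(w, groups))     # 5.0: first edge active, second edge pruned
```

Unlike a plain L1 penalty on individual weights, this norm-of-norms couples the features within a group, which is what makes whole-edge selection possible.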
Active Sets, Nonsmoothness And Sensitivity
, 2001
Abstract

Cited by 31 (14 self)
Nonsmoothness abounds in optimization, but the way it typically arises is highly structured. Nonsmooth behaviour of an objective function is usually associated, locally, with an active manifold: on this manifold the function is smooth, whereas in normal directions it is "vee-shaped". Active set ideas in optimization depend heavily on this structure. Important examples of such functions include the pointwise maximum of some smooth functions, and the maximum eigenvalue of a parametrized symmetric matrix. Among possible foundations for practical nonsmooth optimization, this broad class of "partly smooth" functions seems a promising candidate, enjoying a powerful calculus and sensitivity theory. In particular, we show under a natural regularity condition that critical points of partly smooth functions are stable: small perturbations to the function cause small movements of the critical point on the active manifold. Department of Combinatorics & Optimization, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. Email: aslewis@math.uwaterloo.ca. Research supported by NSERC.
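The pointwise-maximum example can be seen numerically in two variables: the max of two smooth functions is smooth along the manifold where one function dominates, and "vee-shaped" across the manifold where they tie. A toy sketch (the particular functions are our own choice):

```python
def f(x, y):
    """Pointwise max of two smooth functions; nonsmooth exactly on the
    active manifold x = 0 where the two branches tie."""
    return max(x + 0.1 * y ** 2, -x + 0.1 * y ** 2)

# Along the active manifold (x = 0) the function is the smooth map 0.1*y^2;
# across it (varying x at fixed y) it is V-shaped:
vals = [f(x, 0.0) for x in (-0.1, 0.0, 0.1)]
print(vals)   # [0.1, 0.0, 0.1] -- a "vee" in the normal direction
```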
Using LOQO To Solve Second-Order Cone Programming Problems
 PRINCETON UNIVERSITY
, 1998
Abstract

Cited by 11 (0 self)
Many nonlinear optimization problems can be cast as second-order cone programming problems. In this paper, we discuss a broad spectrum of such applications. For each application, we consider various formulations, some convex, some not, and study which ones are amenable to solution using the general-purpose interior-point solver LOQO. We also compare with other commonly available nonlinear programming solvers and special-purpose codes for second-order cone programming.
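The standard cast is textbook material rather than anything LOQO-specific: an SOCP minimizes a linear objective over second-order cone constraints, and a norm-minimization problem enters that form through an epigraph variable.

```latex
% SOCP standard form
\min_{x} \; f^{T} x
\quad \text{s.t.} \quad
\|A_i x + b_i\|_2 \le c_i^{T} x + d_i, \qquad i = 1, \dots, m.

% Epigraph cast: \min_x \|Ax - b\|_2 becomes an SOCP in (x, t)
\min_{x,\,t} \; t
\quad \text{s.t.} \quad
\|Ax - b\|_2 \le t.
```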
Automatic Mesh Refinement in Limit Analysis
, 1999
Abstract

Cited by 3 (0 self)
A strategy for automatic mesh refinement in limit analysis is combined with a recently developed computational method. In the absence of estimates of the local error the strategy can be based on the deformations and on slack in the yield condition. The approach is tested on standard problems in plane strain, including the classical punch problem. Very accurate results are obtained with the use of moderate computational power.
Key words: Limit analysis, plasticity, finite element method, automatic mesh refinement.
AMS(MOS) subject classifications: 65N30, 65N50, 73E20, 90C90.
Abbreviated title: Mesh refinement in limit analysis.
Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark. (edc@imada.sdu.dk)
KMD Odense, Odense, Denmark. (ols@kmd.dk)
1 Introduction
During the past decade the development of convex nonlinear optimization methods has made it possible to solve the collapse problem of limit analysis on a large scale. In [ACO98] and...
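A refinement criterion of the kind described, based on deformations and on slack in the yield condition, might be sketched as a simple per-element marker. This is a hypothetical illustration under our own assumptions; the thresholds and the exact slack and deformation measures are not the paper's definitions:

```python
def mark_for_refinement(slack, deform, slack_tol=0.05, deform_tol=1.0):
    """Flag elements for refinement when the yield-condition slack is small
    (the material is near plastic flow there) or the deformation measure is
    large. Both thresholds are illustrative assumptions."""
    return [i for i, (s, d) in enumerate(zip(slack, deform))
            if s < slack_tol or d > deform_tol]

slack  = [0.50, 0.01, 0.30, 0.02]   # per-element slack in the yield condition
deform = [0.10, 0.20, 2.50, 0.10]   # per-element deformation measure
print(mark_for_refinement(slack, deform))   # [1, 2, 3]
```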
The Reconstruction Problem
 in, Electrical Impedance Tomography, Methods, History and Applications, Ed. HOLDER, D.S, IOP Series in Medical Physics and Biomedical Engineering
, 2005
Abstract

Cited by 3 (3 self)
1.1 Why is EIT so hard? In conventional medical imaging modalities, such as X-ray computerized tomography, a collimated beam of radiation passes through the object in a straight line, and the attenuation of this beam is affected only by the matter which lies along its path. In this sense X-ray CT is local, and it means that the pixels or voxels of our image affect only some (in fact a very small proportion) of the measurements. If the radiation were at lower frequency (softer X-rays) the effect of scattering would have to be taken into account and the effect of a change of material in a voxel would no longer be local. As the frequency decreases this nonlocal effect becomes more pronounced until we reach the case of direct current, in which a change in conductivity would have some effect on any measurement of surface voltage when any current pattern is applied. This nonlocal property of conductivity imaging, which still applies at the moderate frequencies used in EIT, is one of the principal reasons that EIT is difficult. It means that to find the conductivity image one must solve a system of simultaneous...
New probabilistic inference algorithms that harness the strengths of variational and Monte Carlo methods
, 2009
In vivo Impedance Imaging with Total Variation Regularization
, 2009
Abstract

Cited by 2 (1 self)
We show that electrical impedance tomography (EIT) image reconstruction algorithms with regularization based on the Total Variation (TV) functional are suitable for in vivo imaging of physiological data. This reconstruction approach helps to preserve discontinuities in reconstructed profiles, such as step changes in electrical properties at inter-organ boundaries, which are typically smoothed by traditional reconstruction algorithms. The use of the TV functional for regularization leads to the minimization of a nondifferentiable objective function in the inverse formulation. This cannot be efficiently solved with traditional optimization techniques such as Newton's method. We explore two implementation methods for regularization with the TV functional: the Lagged Diffusivity method and the Primal-Dual Interior-Point Method (PD-IPM). First we clarify the implementation details of these algorithms for EIT reconstruction. Next, we analyze the performance of these algorithms on noisy simulated data. Finally, we show reconstructed EIT images of in vivo data for ventilation and gastric emptying studies. In comparison to traditional quadratic regularization, TV regularization shows improved ability to reconstruct sharp contrasts.
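The nondifferentiability at issue sits in the TV term itself, and lagged-diffusivity schemes commonly sidestep it by replacing |∇u| with a β-smoothed surrogate that is differentiable everywhere. A plain NumPy sketch of that surrogate under our own naming, not the paper's EIT implementation:

```python
import numpy as np

def tv_smoothed(u, beta=1e-3):
    """beta-smoothed total variation, sum of sqrt(|grad u|^2 + beta^2).
    Differentiable for beta > 0; approaches the true TV as beta -> 0,
    which is what lagged-diffusivity iterations exploit."""
    dx = np.pad(np.diff(u, axis=1), ((0, 0), (0, 1)))
    dy = np.pad(np.diff(u, axis=0), ((0, 1), (0, 0)))
    return np.sum(np.sqrt(dx**2 + dy**2 + beta**2))

step = np.zeros((4, 4)); step[:, 2:] = 1.0
exact = 4.0   # true TV of this step image (a 4-pixel unit edge)
print(tv_smoothed(step, beta=1e-6) - exact)   # tiny: the smoothing bias vanishes with beta
```

The price of the smoothing is a small bias at flat regions, where every pixel now contributes β to the sum; the PD-IPM approach avoids that trade-off at the cost of a more involved formulation.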
An improved algorithm for computing Steiner minimal trees in Euclidean d-space, working paper
, 2006
Abstract

Cited by 1 (0 self)
We describe improvements to Smith's branch-and-bound (B&B) algorithm for the Euclidean Steiner problem in R^d. Nodes in the B&B tree correspond to full Steiner topologies associated with a subset of the terminal nodes, and branching is accomplished by "merging" a new terminal node with each edge in the current Steiner tree. For a given topology we use a conic formulation for the problem of locating the Steiner points to obtain a rigorous lower bound on the minimal tree length. We also show how to obtain lower bounds on the child problems at a given node without actually computing the minimal Steiner trees associated with the child topologies. These lower bounds reduce the number of children created and also permit the implementation of a "strong branching" strategy that varies the order in which terminal nodes are added. Computational results demonstrate substantial gains compared to Smith's original algorithm.
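For the smallest full topology, three terminals joined to one Steiner point, locating the Steiner point reduces to the classical Fermat point problem: minimize the total Euclidean distance to the terminals. A simple fixed-point sketch of that subproblem (Weiszfeld's iteration, not the conic formulation the paper actually uses):

```python
import math

def weiszfeld(points, iters=200):
    """Locate the point minimizing total Euclidean distance to the given
    terminals (the single Steiner point of a 3-terminal full topology)
    via Weiszfeld's fixed-point iteration, started at the centroid."""
    x = [sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points)]
    for _ in range(iters):
        num = [0.0, 0.0]
        den = 0.0
        for p in points:
            d = math.hypot(x[0] - p[0], x[1] - p[1])
            if d < 1e-12:          # iterate landed on a terminal; stop there
                return list(p)
            num[0] += p[0] / d
            num[1] += p[1] / d
            den += 1.0 / d
        x = [num[0] / den, num[1] / den]
    return x

# For an equilateral triangle the Steiner point is the centroid.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
s = weiszfeld(tri)
print(s)   # approximately (0.5, 0.2887)
```

The appeal of the conic formulation over such an iteration is that it yields a certified lower bound on the tree length, which is what the B&B pruning needs.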
SELECTING RELIABLE SENSORS VIA CONVEX OPTIMIZATION
Abstract

Cited by 1 (0 self)
One of the key challenges in sensor networks is the extraction of trusted and relevant information by fusing data from a multitude of heterogeneous, distinct, but possibly unreliable or irrelevant sensors. Recovering the desirable view of the environment from the maximum number of dependable sensors while identifying the unreliable ones is an issue of paramount importance for active sensing and robust operation of the entire network. This problem of robust sensing is formulated here, and proved to be NP-hard. In the quest for suboptimal but practically feasible solutions with quantifiable performance guarantees, two algorithms are developed for selecting reliable sensors via convex programming. The first relies on a convex relaxation of the original problem, while the second one is based on approximating the initial objective function by a concave one. Their performance is tested analytically, and through simulations.
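The relax-and-round idea behind the first algorithm can be illustrated with a toy version: score each sensor continuously in [0, 1] by its agreement with a robust consensus of the readings, then round to a binary reliable/unreliable decision. This is our own illustration of the relaxation pattern, not the paper's algorithm or its performance guarantees:

```python
def select_reliable(readings, tol=1.0):
    """Toy relax-and-round selection: each sensor gets a continuous
    reliability score in [0, 1] measuring agreement with the median
    reading (the relaxed indicator), then the score is rounded to a
    binary keep/discard decision. Illustration only."""
    srt = sorted(readings)
    n = len(srt)
    med = srt[n // 2] if n % 2 else 0.5 * (srt[n // 2 - 1] + srt[n // 2])
    scores = [max(0.0, 1.0 - abs(r - med) / tol) for r in readings]  # relaxed
    return [1 if s >= 0.5 else 0 for s in scores]                    # rounded

readings = [10.1, 9.9, 10.0, 17.3, 10.2]   # fourth sensor is an outlier
print(select_reliable(readings))            # [1, 1, 1, 0, 1]
```

The actual problem is NP-hard precisely because the binary indicators interact combinatorially; the convex relaxation makes the continuous scoring step a tractable program rather than the heuristic used here.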