Results 1–10 of 81
Applications of parametric maxflow in computer vision
Cited by 40 (7 self)
The maximum flow algorithm for minimizing energy functions of binary variables has become a standard tool in computer vision. In many cases, unary costs of the energy depend linearly on a parameter λ. In this paper we study vision applications for which it is important to solve the maxflow problem for different λ's. An example is the weighting between data and regularization terms in image segmentation or stereo: it is desirable to vary it both during training (to learn λ from ground truth data) and testing (to select the best λ using high-level knowledge constraints, e.g. user input). We review algorithmic aspects of this parametric maximum flow problem previously unknown in vision, such as the ability to compute all breakpoints of λ and the corresponding optimal configurations in finite time. These results allow us, in particular, to minimize the ratio of certain geometric functionals, such as flux of a vector field over length (or area). Previously, such functionals were tackled with shortest path techniques applicable only in 2D. We give theoretical improvements for “PDE cuts” [5]. We present experimental results for image segmentation, 3D reconstruction, and the cosegmentation problem.
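To make the parametric structure concrete, here is a small, purely illustrative sketch (brute force on made-up toy coefficients, not the paper's maxflow machinery): the unary costs are linear in λ, so the energy of each fixed labeling is a line in λ, the optimum is a piecewise-linear lower envelope, and a sweep over λ exposes the finitely many breakpoints where the optimal labeling changes.

```python
from itertools import product

# Toy binary energy with unary costs linear in lam (all coefficients are
# invented): E(x; lam) = sum_i (lam*a[i] + b[i]) * x_i
#                      + sum_{(i,j,w)} w * [x_i != x_j].
# For fixed x, E(x; lam) is linear in lam, so min over the 2^n labelings is
# a piecewise-linear envelope with finitely many breakpoints.
a = [1.0, -2.0, 0.5, -1.0]
b = [-0.5, 1.0, 0.2, 0.8]
edges = [(0, 1, 0.7), (1, 2, 0.7), (2, 3, 0.7)]

def energy(x, lam):
    e = sum((lam * a[i] + b[i]) * x[i] for i in range(len(a)))
    return e + sum(w for i, j, w in edges if x[i] != x[j])

def argmin_labeling(lam):
    return min(product((0, 1), repeat=len(a)), key=lambda x: energy(x, lam))

# Sweep lam and record where the optimal labeling changes.
breakpoints, prev = [], None
for i in range(-300, 301):
    lam = i / 100
    x = argmin_labeling(lam)
    if prev is not None and x != prev:
        breakpoints.append(lam)
    prev = x
print("approximate breakpoints:", breakpoints)
```

For image-sized problems one would compute the exact breakpoints with a parametric maxflow solver rather than enumeration; this sketch only shows what "all breakpoints in finite time" means.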
A Push-Relabel Framework for Submodular Function Minimization and Applications to Parametric Optimization
Discrete Applied Mathematics, 2001
Cited by 28 (5 self)
Recently, the first combinatorial strongly polynomial algorithms for submodular function minimization have been devised independently by Iwata, Fleischer, and Fujishige and by Schrijver. In this paper, we improve the running time of Schrijver's algorithm by designing a push-relabel framework for submodular function minimization (SFM). We also extend this algorithm to carry out parametric minimization for a strong map sequence of submodular functions in the same asymptotic running time as a single SFM. Applications include an efficient algorithm for finding a lexicographically optimal base.
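As background, the object being minimized can be shown at toy scale. The sketch below (my illustration, not the push-relabel method of the paper) brute-forces the minimum of a small submodular function, here a directed cut function plus a modular node-weight term, over all subsets of a four-element ground set:

```python
from itertools import combinations

# Ground set V; f(S) = directed cut capacity out of S plus a modular
# node-weight term. Cut functions are submodular and adding a modular
# term preserves submodularity, so f is submodular. All values invented.
V = [0, 1, 2, 3]
cap = {(0, 1): 3, (1, 2): 1, (2, 3): 4, (0, 3): 2, (3, 1): 2}
weight = [-2, 1, -3, 1]

def f(S):
    S = set(S)
    cut = sum(c for (u, v), c in cap.items() if u in S and v not in S)
    return cut + sum(weight[i] for i in S)

# Exhaustive minimization over all 2^4 subsets -- feasible only at toy size;
# the point of SFM algorithms is to avoid exactly this enumeration.
subsets = [S for r in range(len(V) + 1) for S in combinations(V, r)]
best = min(subsets, key=f)
print("minimizer:", set(best), "value:", f(best))
```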
An interior point algorithm for minimum sum of squares clustering
SIAM J. Sci. Comput., 1997
Cited by 23 (9 self)
Abstract. An exact algorithm is proposed for minimum sum-of-squares nonhierarchical clustering, i.e., for partitioning a given set of points from a Euclidean m-space into a given number of clusters in order to minimize the sum of squared distances from all points to the centroid of the cluster to which they belong. This problem is expressed as a constrained hyperbolic program in 0-1 variables. The resolution method combines an interior point algorithm, i.e., a weighted analytic center column generation method, with branch-and-bound. The auxiliary problem of determining the entering column (i.e., the oracle) is an unconstrained hyperbolic program in 0-1 variables with a quadratic numerator and linear denominator. It is solved through a sequence of unconstrained quadratic programs in 0-1 variables. To accelerate resolution, variable neighborhood search heuristics are used both to get a good initial solution and to solve the auxiliary problem quickly as long as global optimality is not reached. Estimated bounds for the dual variables are deduced from the heuristic solution and used in the resolution process as a trust region. Proved minimum sum-of-squares partitions are determined for the first time for several fairly large data sets from the literature, including Fisher's 150 iris. Key words. classification and discrimination, cluster analysis, interior-point methods, combinatorial optimization
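The objective being certified can be checked exactly on a tiny instance. The sketch below (illustrative only, with invented points; not the interior point method of the paper) enumerates all assignments of five points to two clusters and returns a provably minimum sum-of-squares partition:

```python
from itertools import product

# Invented 2-D points; k = 2 clusters. Brute force is exact but exponential,
# which is why the paper needs column generation plus branch-and-bound.
points = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (5.0, 6.0), (9.0, 0.0)]
k = 2

def sse(assign):
    # Sum of squared distances from each point to its cluster centroid.
    total = 0.0
    for c in range(k):
        members = [p for p, a in zip(points, assign) if a == c]
        if not members:
            continue
        cx = sum(p[0] for p in members) / len(members)
        cy = sum(p[1] for p in members) / len(members)
        total += sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in members)
    return total

best = min(product(range(k), repeat=len(points)), key=sse)
print("optimal partition:", best, "SSE:", round(sse(best), 3))
```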
Fast algorithms for parametric scheduling come from extensions to parametric maximum flow
1999
Cited by 15 (2 self)
Chen [6] develops an attractive variant of the classical problem of preemptively scheduling independent jobs with release dates and due dates. Chen suggests that in practice one can often pay to reduce the processing requirement of a job. This leads to two parametric max flow problems. Serafini [26] considers scheduling independent jobs with due dates on multiple machines, where jobs can be split among machines so that pieces of a single job can execute in parallel. Minimizing the maximum tardiness again gives a parametric max flow problem. A third problem of this type is deciding how many more games a baseball team can lose partway through a season without being eliminated from finishing first (assuming a best possible distribution of wins and losses by other teams). A fourth such problem is an extended selection problem of Brumelle, Granot, and Liu [4], where we want to discount the costs of “tree-structured” tools as little as possible to be able to process all jobs at a profit. It is tempting to try to solve these problems with the parametric push-relabel max flow methods of Gallo, Grigoriadis and Tarjan (GGT) [10]. However, all of these applications appear to violate the conditions necessary to apply GGT. We extend GGT in three ways which allow it to be applied to all four of the above applications. We also consider some other applications where these ideas apply. Our extensions to GGT yield faster algorithms for all these applications.
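The baseball elimination question reduces, in its basic non-parametric form, to a max flow feasibility check: a source feeds each remaining game between the other teams, each game node feeds its two teams, and each team node feeds the sink with capacity equal to how many more wins it can absorb without overtaking team x. The sketch below runs Edmonds-Karp on a hypothetical four-team league (all standings invented for illustration):

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    # Edmonds-Karp: augment along shortest residual paths until none remain.
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= aug
            cap[v][u] += aug  # residual (reverse) capacity
        flow += aug

def eliminated(wins, games, x):
    # W = most wins x can finish with; x is eliminated iff the remaining
    # games among the other teams cannot all be played with every team
    # staying at or below W.
    W = wins[x] + sum(g for pair, g in games.items() if x in pair)
    cap = defaultdict(lambda: defaultdict(int))
    total = 0
    for (i, j), g in games.items():
        if x in (i, j) or g == 0:
            continue
        total += g
        cap["source"][(i, j)] += g
        cap[(i, j)][i] += g
        cap[(i, j)][j] += g
    for team, w in wins.items():
        if team != x:
            cap[team]["sink"] = max(W - w, 0)
    return max_flow(cap, "source", "sink") < total

# Hypothetical standings, invented for illustration.
wins = {"A": 80, "B": 78, "C": 77, "D": 76}
games = {("A", "B"): 1, ("B", "C"): 6, ("C", "D"): 1, ("A", "D"): 3}
print("D eliminated:", eliminated(wins, games, "D"))
print("A eliminated:", eliminated(wins, games, "A"))
```

The parametric question the paper asks ("how many more losses before elimination") amounts to re-solving this flow problem as W decreases, which is where the GGT-style machinery enters.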
Homology Search with Fragmented Nucleic Acid Sequence Patterns
Cited by 12 (5 self)
Abstract. The comprehensive annotation of noncoding RNAs in newly sequenced genomes is still a largely unsolved problem because many functional RNAs exhibit not only poorly conserved sequences but also large variability in structure. In many cases, such as Y RNAs, vault RNAs, or telomerase RNAs, sequences differ by large insertions or deletions and have only a few small sequence patterns in common. Here we present fragrep2, a purely sequence-based approach to detect such patterns in complete genomes. A fragrep2 pattern consists of an ordered list of position-specific weight matrices (PWMs) describing short, approximately conserved sequence elements that are separated by intervals of non-conserved regions of bounded length. The program uses a fractional programming approach to align the PWMs to genomic DNA in order to allow for a bounded number of insertions and deletions in the patterns. These patterns are then combined into significant combinations of PWMs. At this step, a subset of PWMs may be deleted, i.e., have no …
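The PWM building block can be illustrated without the indel handling. The sketch below (a hypothetical three-column motif, not a pattern shipped with fragrep2) slides a log-odds PWM along a DNA string and reports positions scoring above a threshold; fragrep2 additionally chains several matrices with bounded gaps and tolerates indels via fractional programming:

```python
# Log-odds position weight matrix over {A,C,G,T}; a hypothetical 3-column
# motif used only for illustration (scores invented).
PWM = [
    {"A": 1.2, "C": -1.0, "G": -1.0, "T": 0.1},
    {"A": -1.0, "C": 1.3, "G": -0.5, "T": -1.0},
    {"A": -1.0, "C": -1.0, "G": 1.4, "T": -1.0},
]

def scan(seq, pwm, threshold):
    # Slide the PWM along seq; report (position, score) pairs at or above
    # the threshold. Gapless matching only.
    hits = []
    for i in range(len(seq) - len(pwm) + 1):
        score = sum(col[seq[i + j]] for j, col in enumerate(pwm))
        if score >= threshold:
            hits.append((i, round(score, 2)))
    return hits

print(scan("TTACGGACGTT", PWM, 2.5))
```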
On the solution of the Tikhonov regularization of the total least squares problem
SIAM J. Optim.
Cited by 12 (6 self)
Abstract. Total least squares (TLS) is a method for treating an overdetermined system of linear equations Ax ≈ b, where both the matrix A and the vector b are contaminated by noise. Tikhonov regularization of the TLS (TRTLS) leads to an optimization problem of minimizing the sum of fractional quadratic and quadratic functions. As such, the problem is nonconvex. We show how to reduce the problem to a single variable minimization of a function G over a closed interval. Computing a value and a derivative of G consists of solving a single trust region subproblem. For the special case of regularization with a squared Euclidean norm we show that G is unimodal and provide an alternative algorithm, which requires only one spectral decomposition. A numerical example is given to illustrate the effectiveness of our method.
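For the unimodal case the authors identify, a derivative-free one-dimensional method suffices in principle. The sketch below applies golden-section search, which requires only unimodality, over a closed interval; the paper's actual G (each evaluation of which solves a trust region subproblem) is replaced here by a toy stand-in function:

```python
import math

def golden_section_min(G, a, b, tol=1e-8):
    # Golden-section search: shrinks [a, b] by the golden ratio each step,
    # relying only on G being unimodal on the interval.
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while b - a > tol:
        if G(c) < G(d):
            b, d = d, c                 # minimum lies in [a, d]
            c = b - inv_phi * (b - a)
        else:
            a, c = c, d                 # minimum lies in [c, b]
            d = a + inv_phi * (b - a)
    return (a + b) / 2

# Stand-in unimodal G (NOT the paper's G): minimum at x = 2.
x_star = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(round(x_star, 6))
```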
Finding a global optimal solution for a quadratically constrained fractional quadratic problem with applications to the regularized total least squares
SIAM J. Matrix Anal. Appl.
Cited by 12 (5 self)
Abstract. We consider the problem of minimizing a fractional quadratic problem involving the ratio of two indefinite quadratic functions, subject to a two-sided quadratic form constraint. This formulation is motivated by the so-called regularized total least squares (RTLS) problem. A key difficulty with this problem is its nonconvexity, and all currently known methods to solve it are guaranteed only to converge to a point satisfying first-order necessary optimality conditions. We prove that a global optimal solution to this problem can be found by solving a sequence of very simple convex minimization problems parameterized by a single parameter. As a result, we derive an efficient algorithm that produces an ε-global optimal solution in a computational effort of O(n^3 log(1/ε)). The algorithm is tested on problems arising from the inverse Laplace transform and image deblurring. Comparison to other well-known RTLS solvers illustrates the attractiveness of our new method. Key words. regularized total least squares, fractional programming, nonconvex quadratic optimization, convex programming
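The idea of reducing a ratio minimization to a parameterized sequence of simpler problems can be sketched with Dinkelbach's classical scheme (a related but simpler method, not the paper's algorithm): minimize num(x) − λ·den(x), update λ to the achieved ratio, and stop when the parametric optimum reaches zero. Here the subproblem is solved by brute force on a 1-D grid with invented quadratics:

```python
def dinkelbach(num, den, xs, tol=1e-10):
    # Dinkelbach's scheme for min num(x)/den(x) with den > 0 on the grid xs:
    # each subproblem min_x num(x) - lam*den(x) is solved exhaustively here;
    # lam decreases monotonically to the optimal ratio.
    lam = num(xs[0]) / den(xs[0])
    while True:
        x = min(xs, key=lambda x: num(x) - lam * den(x))
        if abs(num(x) - lam * den(x)) < tol:
            return x, lam
        lam = num(x) / den(x)

# Toy instance: minimize (x^2 - 2x + 2) / (x^2 + 1); the true minimizer is
# x = (1 + sqrt(5))/2 with ratio (3 - sqrt(5))/2.
xs = [i / 1000 for i in range(-3000, 3001)]
x, lam = dinkelbach(lambda x: x * x - 2 * x + 2, lambda x: x * x + 1, xs)
print(round(x, 3), round(lam, 6))
```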
Fractional programming: the sum-of-ratios case
Optimization Methods and Software, 2003
Cited by 11 (1 self)
One of the most difficult fractional programs encountered so far is the sum-of-ratios problem. Contrary to earlier expectations, it is much further removed from convex programming than other multi-ratio problems analyzed before. It really should be viewed in the context of global optimization. It proves to be essentially NP-complete in spite of its special structure under the usual assumptions on numerators and denominators. The paper provides a recent survey of applications, theoretical results, and various algorithmic approaches for this challenging problem.
Optimal Time Domain Equalization Design for Maximizing Data Rate of Discrete Multi-Tone Systems
2003
Cited by 11 (4 self)
The traditional discrete multi-tone equalizer is a cascade of a time domain equalizer (TEQ) as a single finite impulse response filter, a multicarrier demodulator as a fast Fourier transform (FFT), and a frequency domain equalizer (FEQ) as a one-tap filter bank. The TEQ shortens the transmission channel impulse response (CIR) to mitigate intersymbol interference (ISI). Maximum Bit Rate (MBR) and Minimum ISI (Min-ISI) methods achieve higher data rates at the TEQ output than previously published methods. As an alternative to the traditional equalizer, the per-tone equalizer (PTE) moves the TEQ into the FEQ and customizes a multi-tap FEQ for each tone. In this paper, we propose a time domain TEQ filter bank (TEQFB) and single TEQ that achieve better data rates at the FEQ output than the MBR, Min-ISI, and least-squares PTE methods with standard CIRs, transmit filters, and receive filters. The contributions of this paper are: (1) a model for the signal-to-noise ratio (SNR) at the FFT output that includes ISI, near-end crosstalk, white Gaussian noise, analog-to-digital converter quantization noise, and the digital noise floor; (2) a data rate optimal time domain per-tone TEQ filter bank and the upper bound on bit rate performance it achieves; and (3) a data rate maximizing single TEQ design algorithm. SP EDICS: SP 3-TDSL: Telephone Networks and Digital Subscriber Loops. Milos Milosevic, B. L. Evans, and R. Baldick are with the Dept. of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712-1084; L. F. C. Pessoa is with Motorola, Inc., 7700 West Parmer Lane, MD: TX32PL30, Austin, TX 78729. Email: {milos,bevans,baldick}@ece.utexas.edu and Lucio.Pessoa@motorola.com. B. L. Evans was supported by a gift from the Motorola Semiconductor Products S...
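The data rate objective underlying these designs is the sum over tones of log2(1 + SNR/Γ), with Γ the SNR gap accounting for coding and margin. A minimal sketch (the per-tone SNRs are invented for illustration, and 9.8 dB is a typical gap value, not one taken from the paper):

```python
import math

def dmt_bits_per_symbol(snr_per_tone, gap_db=9.8):
    # Achievable bits per DMT symbol: sum over tones of log2(1 + SNR/Gamma),
    # where Gamma is the SNR gap converted from dB to linear units.
    gamma = 10 ** (gap_db / 10)
    return sum(math.log2(1 + snr / gamma) for snr in snr_per_tone)

# Hypothetical per-tone SNRs (given here in dB, converted to linear scale),
# as would be measured at the FFT output after equalization.
snrs = [10 ** (db / 10) for db in [30, 28, 25, 20, 12, 5]]
print(round(dmt_bits_per_symbol(snrs), 2))
```

The TEQ designs in the paper differ in how they shape the per-tone SNRs; once those are fixed, the bit rate follows from exactly this sum.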
Global Optimization of Nonconvex Nonlinear Programs Using Parallel Branch and Bound
1995
Cited by 10 (0 self)
A branch and bound algorithm for computing globally optimal solutions to nonconvex nonlinear programs in continuous variables is presented. The algorithm is directly suitable for a wide class of problems arising in chemical engineering design. It can solve problems defined using algebraic functions and twice differentiable transcendental functions, in which finite upper and lower bounds can be placed on each variable. The algorithm uses rectangular partitions of the variable domain and a new bounding program based on convex/concave envelopes and positive definite combinations of quadratic terms. The algorithm is deterministic and obtains convergence with final regions of finite size. The partitioning strategy uses a sensitivity analysis of the bounding program to predict the best variable to split and the split location. Two versions of the algorithm are considered, the first using a local NLP algorithm (MINOS) and the second using a sequence of lower bounding programs in the search fo...
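The rectangular-partition idea can be illustrated in one dimension with a simple bound. The sketch below (illustrative only; it uses a Lipschitz lower bound rather than the paper's convex/concave envelopes) keeps a heap of intervals, prunes any whose bound cannot beat the incumbent, and splits the rest:

```python
import heapq

def bb_minimize(f, a, b, L, tol=1e-6):
    # Branch and bound on [a, b] for a Lipschitz-continuous f with constant L.
    # Lower bound on [lo, hi]: f(mid) - L * (hi - lo) / 2.
    best = [a, f(a)]  # incumbent [x, f(x)]
    heap = []

    def push(lo, hi):
        mid = (lo + hi) / 2.0
        val = f(mid)
        if val < best[1]:
            best[0], best[1] = mid, val
        heapq.heappush(heap, (val - L * (hi - lo) / 2.0, lo, hi))

    push(a, b)
    while heap:
        bound, lo, hi = heapq.heappop(heap)
        if bound > best[1] - tol:
            continue  # this box cannot improve on the incumbent: prune
        mid = (lo + hi) / 2.0
        push(lo, mid)  # split the box and bound both halves
        push(mid, hi)
    return best[0], best[1]

# Nonconvex test function with two local minima on [-2, 2]; |f'| <= 40 there.
x_star, val = bb_minimize(lambda x: x**4 - 3 * x**2 + x, -2.0, 2.0, L=40.0)
print(round(x_star, 4), round(val, 4))
```

The paper's algorithm follows the same outer loop but in many variables, with much tighter envelope-based bounding programs and a sensitivity-driven choice of which variable to split.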