Results 1–5 of 5
Fast Convergence of the Simplified Largest Step Path Following Algorithm
, 1994
"... Each master iteration of a simplified Newton algorithm for solving a system of equations starts by computing the Jacobian matrix and then uses this matrix in the computation of p Newton steps: the first of these steps is exact, and the other are called "simplified". In this paper we apply this appr ..."
Abstract

Cited by 4 (1 self)
Each master iteration of a simplified Newton algorithm for solving a system of equations starts by computing the Jacobian matrix and then uses this matrix in the computation of p Newton steps: the first of these steps is exact, and the others are called "simplified". In this paper we apply this approach to a large-step path-following algorithm for monotone linear complementarity problems. The resulting method generates sequences of objective values (duality gaps) that converge to zero with Q-order p + 1 in the number of master iterations, and with a complexity of O(√n L) iterations.
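The master-iteration scheme described in this abstract (evaluate the Jacobian once, then take p Newton steps with that same frozen matrix, the first exact and the rest "simplified") can be sketched for a generic root-finding problem. All names below are ours; the paper applies the idea inside a path-following method for monotone LCPs, which this sketch does not implement.

```python
import numpy as np

def simplified_newton(F, J, x, p=3, max_master=20, tol=1e-10):
    """Root-finding sketch: each master iteration computes the Jacobian
    once and reuses it for p Newton steps (first exact, rest simplified)."""
    for _ in range(max_master):
        A = J(x)  # Jacobian evaluated once per master iteration
        for _ in range(p):
            x = x - np.linalg.solve(A, F(x))  # frozen matrix, fresh residual
            if np.linalg.norm(F(x)) < tol:
                return x
    return x

# Tiny example (ours): F(x) = x^2 - 2, root sqrt(2)
F = lambda x: np.array([x[0] ** 2 - 2.0])
J = lambda x: np.array([[2.0 * x[0]]])
root = simplified_newton(F, J, np.array([1.0]))
```

Reusing the factorization trades some per-step progress for a much cheaper inner step, which is what yields the Q-order p + 1 per master iteration claimed in the abstract.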
Improving Complexity of Structured Convex Optimization Problems Using Self-Concordant Barriers
, 2001
"... The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using to the theory of selfconcordant functions developed in [11]. We describe the classical shortstep interiorpoint method and optimize its parameters in order to pr ..."
Abstract

Cited by 2 (0 self)
The purpose of this paper is to provide improved complexity results for several classes of structured convex optimization problems using the theory of self-concordant functions developed in [11]. We describe the classical short-step interior-point method and optimize its parameters in order to provide the best possible iteration bound. We also discuss the necessity of introducing two parameters in the definition of self-concordance and which one is best to fix. A lemma from [3] is improved, which allows us to review several classes of structured convex optimization problems and improve the corresponding complexity results.
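The short-step machinery this abstract refers to rests on damped Newton steps whose length is set by the Newton decrement of a self-concordant function. Below is a minimal sketch of that textbook step, not the paper's parameter-optimized variant; all names and the example function are ours.

```python
import numpy as np

def damped_newton_step(grad, hess, x):
    """One damped Newton step for a self-concordant function.
    The step length 1/(1 + lam), with lam the Newton decrement,
    is the classical choice that guarantees decrease."""
    g, H = grad(x), hess(x)
    d = np.linalg.solve(H, g)  # Newton direction
    lam = np.sqrt(g @ d)       # Newton decrement sqrt(g' H^{-1} g)
    return x - d / (1.0 + lam), lam

# Example (ours): minimize the self-concordant f(x) = x - log(x); minimizer x = 1
grad = lambda x: np.array([1.0 - 1.0 / x[0]])
hess = lambda x: np.array([[1.0 / x[0] ** 2]])
x = np.array([0.5])
for _ in range(50):
    x, lam = damped_newton_step(grad, hess, x)
```

The iteration bound the paper optimizes comes from counting how many such damped steps are needed to drive the decrement below a threshold.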
Global Linear And Local Quadratic Convergence Of A Long-Step Adaptive-Mode Interior Point Method For Some Monotone Variational Inequality Problems
, 1996
"... . An interior point method is proposed to solve variational inequality problems for monotone functions and polyhedral sets. The method has the following advantages. 1. Given an initial interior feasible solution with duality gap ¯ 0 , the algorithm requires at most O[n log(¯ 0 =ffl)] iterations to ..."
Abstract

Cited by 2 (0 self)
An interior point method is proposed to solve variational inequality problems for monotone functions and polyhedral sets. The method has the following advantages. 1. Given an initial interior feasible solution with duality gap μ0, the algorithm requires at most O[n log(μ0/ε)] iterations to obtain an ε-optimal solution. 2. The rate of convergence of the duality gap is q-quadratic. 3. At each iteration, a long-step improvement based on a line search is allowed. 4. The algorithm can automatically switch from a linear mode to a quadratic mode to accelerate the local convergence. Keywords: Polynomial Complexity of Algorithms, Interior Point Methods, Monotone Variational Inequality Problems, Rate of Convergence. 1 The research is partially supported by Grant RP930033 of National University of Singapore. 2 Department of Decision Sciences. Email: fbasunj@nus.sg. 3 Department of Mathematics. Email: matzgy@nus.sg. 1 Introduction Given a function F : ℝⁿ → ℝⁿ and a nonem...
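The O[n log(μ0/ε)] bound in item 1 translates directly into a worst-case iteration count. A small helper (ours, with the O-constant taken as 1) makes the scaling concrete:

```python
import math

def iteration_bound(n, mu0, eps):
    """Worst-case iterations from the O[n log(mu0/eps)] bound,
    with the hidden constant taken as 1. Note that halving eps
    adds only about n * log(2) extra iterations."""
    return math.ceil(n * math.log(mu0 / eps))

# Example: n = 100 variables, initial duality gap 1.0, target accuracy 1e-6
bound = iteration_bound(100, 1.0, 1e-6)
```

The logarithmic dependence on the target accuracy is what makes the complexity polynomial in the problem data.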
Topics In Convex Optimization: Interior-Point Methods, Conic Duality and Approximations
, 2001
"... ..."
ℓ1-Penalized Likelihood Smoothing of Volatility Processes allowing for Abrupt Changes
, 2009
"... We consider the problem of estimating the volatility of a financial asset from a time series record of length T. We believe the underlying volatility process is smooth, possibly stationary, and with potential abrupt changes due to market news. By drawing parallels between time series and regression ..."
Abstract
We consider the problem of estimating the volatility of a financial asset from a time series record of length T. We believe the underlying volatility process is smooth, possibly stationary, and with potential abrupt changes due to market news. By drawing parallels between time series and regression models, in particular between stochastic volatility models and Markov random field smoothers, we propose a semiparametric estimator of volatility. Our Bayesian posterior mode estimate is the solution to an ℓ1-penalized likelihood optimization that we solve with an interior point algorithm that is efficient since its complexity is bounded by O(T^{3/2}). We apply our volatility estimator to real financial data, diagnose the model and perform backtesting to investigate the forecasting power of the method by comparison to (I)GARCH.
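The objective behind a posterior-mode estimate of this kind can be written down compactly. Below is a sketch under standard stochastic-volatility assumptions (Gaussian returns r_t ~ N(0, exp(h_t)), ℓ1 penalty on the increments of the log-variance path h); the names and the exact parameterization are ours, not necessarily the paper's.

```python
import numpy as np

def penalized_nll(h, r, lam):
    """l1-penalized negative log-likelihood (up to constants).
    h: log-variance path, r: returns, lam: penalty weight.
    The l1 term on increments of h favors piecewise-smooth
    volatility with occasional abrupt jumps."""
    nll = 0.5 * np.sum(h + r ** 2 * np.exp(-h))  # Gaussian likelihood term
    tv = lam * np.sum(np.abs(np.diff(h)))        # l1 (total-variation) penalty
    return nll + tv

# Tiny example (ours): flat log-variance path, three returns
value = penalized_nll(np.zeros(3), np.array([1.0, 0.0, 1.0]), 1.0)
```

An interior point solver handles the nondifferentiable ℓ1 term via a smoothed barrier reformulation, which is where the O(T^{3/2}) complexity claimed in the abstract comes from.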