Results 11–20 of 135
Convergent SDP-Relaxations in Polynomial Optimization with Sparsity
 SIAM Journal on Optimization
Cited by 26 (9 self)
Abstract. We consider a polynomial programming problem P on a compact semialgebraic set K ⊂ R^n, described by m polynomial inequalities gj(X) ≥ 0, and with criterion f ∈ R[X]. We propose a hierarchy of semidefinite relaxations in the spirit of those of Waki et al. [9]. In particular, the SDP-relaxation of order r has the following two features: (a) the number of variables is O(κ^{2r}), where κ = max[κ1, κ2], with κ1 (resp. κ2) being the maximum number of variables appearing in the monomials of f (resp. in a single constraint gj(X) ≥ 0); (b) the largest size of the LMIs (Linear Matrix Inequalities) is O(κ^r). This should be compared with the respective number of variables O(n^{2r}) and LMI size O(n^r) in the original SDP-relaxations defined in [11]. Therefore, great computational savings are expected in the case of sparsity in the data {gj, f}, i.e. when κ is small, a frequent case in practical applications of interest. The novelty with respect to [9] is that we prove convergence to the global optimum of P when the sparsity pattern satisfies a condition often encountered in large-size problems of practical applications, known in graph theory as the running intersection property. In such cases, and as a byproduct, we also obtain a new representation result for polynomials positive on a basic closed semialgebraic set: a sparse version of Putinar's Positivstellensatz [16].
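The savings claimed in the abstract come from counting moment variables over small cliques rather than over all n variables. A minimal sketch of that count, using the standard formula C(n + 2r, 2r) for the number of monomials of degree at most 2r; the specific values n = 20, κ = 4, r = 2 are illustrative and not taken from the paper:

```python
from math import comb

def num_moments(n_vars: int, order: int) -> int:
    """Number of monomials of degree <= 2r in n variables,
    i.e. the moment-variable count of the order-r relaxation."""
    return comb(n_vars + 2 * order, 2 * order)

# Dense relaxation over all n variables vs. a sparse relaxation whose
# largest clique involves only kappa variables (illustrative numbers).
n, kappa, r = 20, 4, 2
dense = num_moments(n, r)       # grows like O(n^{2r})
sparse = num_moments(kappa, r)  # grows like O(kappa^{2r}) per clique
print(dense, sparse)  # the per-clique count is orders of magnitude smaller
```

Even with a modest clique size, the per-clique moment count is tiny compared to the dense one, which is the computational point the abstract makes.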
Image restoration subject to a total variation constraint
 IEEE Transactions on Image Processing
, 2004
Cited by 26 (2 self)
Abstract—Total variation has proven to be a valuable concept in connection with the recovery of images featuring piecewise smooth components. So far, however, it has been used exclusively as an objective to be minimized under constraints. In this paper, we propose an alternative formulation in which total variation is used as a constraint in a general convex programming framework. This approach places no limitation on the incorporation of additional constraints in the restoration process, and the resulting optimization problem can be solved efficiently via block-iterative methods. Image denoising and deconvolution applications are demonstrated.
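As a rough illustration of the quantity being constrained, a minimal sketch of one common discrete (anisotropic) total variation; the paper's exact discretization is not quoted here:

```python
import numpy as np

def total_variation(img: np.ndarray) -> float:
    """Anisotropic discrete total variation: sum of absolute
    horizontal and vertical pixel differences (one common choice;
    other discretizations, e.g. isotropic TV, also exist)."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return float(dx + dy)

flat = np.full((8, 8), 3.0)              # constant image: TV = 0
step = np.zeros((8, 8)); step[:, 4:] = 1.0  # one vertical edge
print(total_variation(flat), total_variation(step))
```

A piecewise-constant image has small TV concentrated on its edges, which is why bounding TV favors piecewise smooth reconstructions.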
Estimating covariation: Epps effect and microstructure noise
 Journal of Econometrics, forthcoming
, 2009
Cited by 26 (3 self)
This paper is about how to estimate the integrated covariance 〈X, Y〉_T of two assets over a fixed time horizon [0, T], when the observations of X and Y are “contaminated” and when such noisy observations are at discrete, but not synchronized, times. We show that the usual previous-tick covariance estimator is biased, and that the size of the bias is more pronounced for less liquid assets. This is an analytic characterization of the Epps effect. We also provide the optimal sampling frequency, which balances the trade-off between the bias and various sources of stochastic error, including non-synchronous trading, microstructure noise, and time discretization. Finally, a two-scales covariance estimator is provided which simultaneously cancels (to first order) the Epps effect and the effect of microstructure noise. The gain is demonstrated in data.
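The previous-tick estimator analyzed in the abstract can be sketched as follows: sample each series at the last observation at or before each grid point, then sum the products of the synchronized increments (a minimal illustration, not the paper's code):

```python
import numpy as np

def previous_tick(times, values, grid):
    """Value of the last observation at or before each grid time
    (grid times before the first observation fall back to it)."""
    idx = np.searchsorted(times, grid, side="right") - 1
    return np.asarray(values)[np.clip(idx, 0, None)]

def previous_tick_cov(tX, X, tY, Y, grid):
    """Previous-tick realized covariance: sample both series onto a
    common grid, then sum products of synchronized increments."""
    x = previous_tick(np.asarray(tX), X, grid)
    y = previous_tick(np.asarray(tY), Y, grid)
    return float(np.sum(np.diff(x) * np.diff(y)))

# Illustrative asynchronous observation times and values.
tX = [0.0, 0.5, 1.7, 2.4]; X = [0.0, 0.2, 0.1, 0.3]
tY = [0.1, 1.2, 2.9];      Y = [0.0, 0.1, 0.2]
grid = np.linspace(0.0, 3.0, 4)
print(previous_tick_cov(tX, X, tY, Y, grid))
```

The bias the paper characterizes arises precisely because the previous-tick values lag the true process differently for the two assets when their observation times do not line up.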
Metric-Based Methods for Adaptive Model Selection and Regularization
 Machine Learning
, 2001
Cited by 20 (0 self)
We present a general approach to model selection and regularization that exploits unlabeled data to adaptively control hypothesis complexity in supervised learning tasks. The idea is to impose a metric structure on hypotheses by determining the discrepancy between their predictions across the distribution of unlabeled data. We show how this metric can be used to detect untrustworthy training error estimates, and devise novel model selection strategies that exhibit theoretical guarantees against overfitting (while still avoiding underfitting). We then extend the approach to derive a general training criterion for supervised learning, yielding an adaptive regularization method that uses unlabeled data to automatically set regularization parameters. This new criterion adjusts its regularization level to the specific set of training data received, and performs well on a variety of regression and conditional density estimation tasks. The only proviso for these methods is that s...
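The hypothesis metric described above can be sketched with one simple choice of discrepancy: root-mean-square disagreement of predictions over unlabeled inputs (a plausible instantiation for illustration; the paper's exact metric may differ in detail):

```python
import numpy as np

def discrepancy(f, g, unlabeled):
    """Distance between two hypotheses: root-mean-square disagreement
    of their predictions over a pool of unlabeled inputs."""
    diff = f(unlabeled) - g(unlabeled)
    return float(np.sqrt(np.mean(diff ** 2)))

xs = np.linspace(0.0, 1.0, 101)   # stand-in for unlabeled data
h1 = lambda x: 2.0 * x
h2 = lambda x: 2.0 * x + 1.0      # differs by a constant offset of 1
print(discrepancy(h1, h2, xs))
```

Two hypotheses with a constant prediction gap of 1 are at distance 1 under this metric, regardless of how their training errors compare; comparing such distances to training-error gaps is what flags untrustworthy error estimates.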
Approximation by Fully Complex Multilayer Perceptrons
, 2003
Cited by 20 (5 self)
We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as implied by Liouville’s theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs, including ones that use two real-valued MLPs, one processing the real part and the other the imaginary part, have traditionally been employed. However, since non-analytic functions do not satisfy the Cauchy-Riemann conditions, they lead to degenerate backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown
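A fully complex activation in the abstract's sense can be illustrated with tanh, an ETF that is analytic but unbounded on the complex plane (it has poles on the imaginary axis). A minimal forward pass through one complex-weighted layer, with illustrative shapes and random weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def fully_complex_layer(z, W, b):
    """One MLP layer with a fully complex activation applied to a
    complex affine map; tanh acts elementwise on complex inputs."""
    return np.tanh(W @ z + b)

z = np.array([0.3 + 0.2j, -0.1 + 0.5j])             # complex input
W = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
b = np.zeros(3, dtype=complex)
out = fully_complex_layer(z, W, b)
print(out.shape, out.dtype)
```

Because the whole map is analytic in z, a single complex-valued backpropagation applies, instead of the split real/imaginary treatment required by non-analytic activations.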
Extracting Oscillations: Neuronal Coincidence Detection with Noisy Periodic Spike Input
, 1998
Cited by 19 (6 self)
How does a neuron vary its mean output firing rate if the input changes from random to coherent oscillatory, but noisy, activity? What are the critical parameters of the neuronal dynamics and input statistics? To answer these questions, we investigate the coincidence-detection properties of an integrate-and-fire neuron. We derive an expression indicating how coincidence detection depends on neuronal parameters. Specifically, we show how coincidence detection depends on the shape of the postsynaptic response function, the number of synapses, and the input statistics, and we demonstrate that there is an optimal threshold. Our considerations can be used to predict, from neuronal parameters, whether and to what extent a neuron can act as a coincidence detector and thus convert a temporal code into a rate code.
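The qualitative point, that temporally bunched (oscillatory) input drives a thresholded unit while rate-matched random input does not, can be illustrated with a toy per-bin coincidence counter; membrane dynamics are deliberately omitted, and all numbers are illustrative:

```python
import numpy as np

def count_threshold_crossings(spike_counts, threshold):
    """Toy coincidence detector: within each time bin, the unit
    'fires' when the number of coincident input spikes reaches
    the threshold (no leak or reset modeled)."""
    return int(np.sum(np.asarray(spike_counts) >= threshold))

# 100 time bins at the same mean input rate (8 spikes/bin average):
coherent = [0, 0, 0, 0, 40] * 20   # oscillatory: bursts of 40 spikes
random_in = [8] * 100              # random: spikes spread evenly
print(count_threshold_crossings(coherent, 20),
      count_threshold_crossings(random_in, 20))
```

With a threshold between the background level and the burst size, the unit fires on every burst and never otherwise, converting the input's temporal structure into an output rate.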
On Optimal Entropy-Constrained Scalar Quantization
, 2000
Cited by 19 (6 self)
Optimal scalar quantization subject to an entropy constraint is studied. First, the problem of finding analytically an optimal entropy-constrained scalar quantizer (ECSQ) is considered. For a wide class of difference distortion measures, including rth-power distortions with r > 0, it is proved that if the source is uniformly distributed over an interval, then for any entropy constraint R (in bits), an optimal quantizer has N = ⌈2^R⌉ interval cells such that N − 1 cells have equal length d and one cell has length c ≤ d. Based on this result, a parametric representation of the minimum achievable distortion D_h(R) as a function of the entropy constraint R is obtained for a uniform source. Contrary to earlier expectations, the D_h(R) curve turns out to be nonconvex in general. In particular, for the squared error distortion it is shown that D_h(R) is a piecewise concave function. The structural properties of optimal ECSQs for more general source distributions are also investigated. In...
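For the uniform-source case the abstract discusses, the baseline of N equal cells gives squared-error distortion 1/(12N^2) and output entropy exactly log2 N bits. A minimal sketch of both quantities (the equal-cell baseline only, not the paper's optimal unequal-cell construction):

```python
import numpy as np

def uniform_quantizer_mse(n_cells: int) -> float:
    """MSE of an N-cell uniform quantizer for a source uniform on
    [0, 1]: each cell of length 1/N contributes (1/N)^2 / 12."""
    return 1.0 / (12.0 * n_cells ** 2)

def entropy_bits(probs) -> float:
    """Shannon entropy in bits of a discrete output distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

N = 4  # equal cells -> equiprobable outputs -> entropy log2 N = 2 bits
print(uniform_quantizer_mse(N), entropy_bits([1.0 / N] * N))
```

Between integer values of log2 N, shortening one cell (as in the paper's N − 1 equal cells plus one shorter cell) trades entropy against distortion continuously, which is where the non-integer entropy constraints R enter.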
A Lagrangian formulation of Zador's entropy-constrained quantization theorem
 IEEE Trans. Inform. Theory
, 2002
Cited by 19 (8 self)
Zador's classic result for the asymptotic high-rate behavior of entropy-constrained vector quantization is recast in a Lagrangian form which better matches the Lloyd algorithm used to optimize such quantizers. The equivalence of the two formulations is shown, and the result is proved for source distributions that are absolutely continuous with respect to the Lebesgue measure and satisfy an entropy condition, thereby generalizing the conditions stated by Zador under which the result holds.
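The Lagrangian form pairs distortion with λ times codeword length in a single cost. A minimal sketch for a scalar codebook, with codeword lengths taken as −log2 of empirical cell probabilities (an illustrative instantiation, not the paper's formulation):

```python
import numpy as np

def lagrangian_cost(samples, codebook, lam):
    """Empirical Lagrangian cost D + lam * H: nearest-neighbor
    assignment, then mean distortion plus lam times the empirical
    entropy of the cell occupancies (a minimal sketch)."""
    x = np.asarray(samples, dtype=float)[:, None]
    c = np.asarray(codebook, dtype=float)[None, :]
    d2 = (x - c) ** 2
    assign = np.argmin(d2, axis=1)                 # nearest codeword
    p = np.bincount(assign, minlength=c.size) / x.size
    lengths = np.where(p > 0, -np.log2(np.where(p > 0, p, 1.0)), 0.0)
    return float(d2[np.arange(x.size), assign].mean()
                 + lam * (p * lengths).sum())

samples = [0.0, 0.0, 1.0, 1.0]   # illustrative data
codebook = [0.0, 1.0]            # codewords coincide with the data
print(lagrangian_cost(samples, codebook, 1.0))
```

Minimizing this single unconstrained cost over partitions and codewords is what a Lagrangian (entropy-constrained) Lloyd iteration alternates on, which is the algorithmic motivation the abstract cites.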
A sum of squares approximation of nonnegative polynomials
 SIAM J. Optim
, 2006
Cited by 17 (5 self)
Abstract. We show that every real nonnegative polynomial f can be approximated as closely as desired (in the l1-norm of its coefficient vector) by a sequence of polynomials {fɛ} that are sums of squares. The novelty is that each fɛ has a simple and explicit form in terms of f and ɛ. Key words: real algebraic geometry; positive polynomials; sums of squares; semidefinite programming. AMS subject classifications: 12E05, 12Y05, 90C22.
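The explicit form in question is commonly cited as fɛ = f + ɛ Σ_{k=0}^{r} Σ_i x_i^{2k}/k!; that expression is taken here as an assumption rather than quoted from the paper. Under it, a univariate sketch of the perturbation's coefficient l1-norm, which stays below ɛ·e for every r:

```python
from math import factorial

def perturbation_coeffs(r: int, eps: float):
    """Coefficients (by degree) of eps * sum_{k=0}^{r} x^{2k} / k!,
    the assumed SOS-enforcing perturbation, univariate case."""
    return {2 * k: eps / factorial(k) for k in range(r + 1)}

c = perturbation_coeffs(3, 1e-3)
l1 = sum(abs(v) for v in c.values())   # eps * (1 + 1 + 1/2 + 1/6)
print(l1)
```

Since Σ 1/k! converges to e, taking ɛ → 0 drives the coefficient-vector l1 distance between fɛ and f to zero, matching the approximation claim in the abstract.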
A Random Set Description of a Possibility Measure and Its Natural Extension
 IEEE Transactions on Systems, Man and Cybernetics
, 1997
Cited by 17 (7 self)
The relationship is studied between possibility and necessity measures defined on arbitrary spaces, the theory of imprecise probabilities, and elementary random set theory. It is shown how special random sets can be used to generate normal possibility and necessity measures, as well as their natural extensions. This leads to interesting alternative formulas for the calculation of these natural extensions. Keywords—Upper probability, upper prevision, coherence, natural extension, possibility measure, random sets. I. Introduction. Possibility measures were introduced by Zadeh [1] in 1978. In his view, these supremum-preserving set functions are a mathematical representation of the information conveyed by typical affirmative statements in natural language. For recent discussions of this interpretation within the behavioural framework of the theory of imprecise probabilities, we refer to [2], [3], [4]. Supremum-preserving set functions can also be found in the literature under a number o...
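The supremum-preserving property mentioned in the introduction can be illustrated directly: on a finite space, the possibility of an event is the maximum of the possibility distribution over it, so Π(A ∪ B) = max(Π(A), Π(B)). A minimal finite-space sketch:

```python
def possibility(event, pi):
    """Possibility of an event under distribution pi: the supremum
    (here a max over a finite set) of pi over the event's elements."""
    return max(pi[w] for w in event) if event else 0.0

pi = {"a": 1.0, "b": 0.7, "c": 0.2}  # normal: some element has value 1
A, B = {"b"}, {"c"}
print(possibility(A | B, pi),
      max(possibility(A, pi), possibility(B, pi)))  # equal by design
```

This max-decomposability is exactly what distinguishes possibility measures from (additive) probability measures, and it is the structure the paper recovers from special random sets.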