Results 1–10 of 480
PROBABILITY INEQUALITIES FOR SUMS OF BOUNDED RANDOM VARIABLES
, 1962
Abstract

Cited by 1498 (2 self)
Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(S − ES ≥ nt) depend only on the endpoints of the ranges of the summands and the mean, or the mean and the variance of S. These results are then used to obtain analogous inequalities for certain sums of dependent random variables such as U-statistics and the sum of a random sample without replacement from a finite population.
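The one-sided bound described in this abstract is simple to evaluate. As an illustration (not taken from the paper), the sketch below computes exp(−2nt²/(b − a)²) for n independent summands sharing a common range [a, b]; the function name `hoeffding_bound` is our own.

```python
import math

def hoeffding_bound(n, t, a=0.0, b=1.0):
    """Hoeffding's bound Pr(S - ES >= n*t) <= exp(-2*n*t^2 / (b - a)^2)
    for n independent summands, each with range [a, b]."""
    return math.exp(-2.0 * n * t ** 2 / (b - a) ** 2)

# For 100 summands in [0, 1] and t = 0.1, the bound is exp(-2) ~ 0.135.
print(hoeffding_bound(n=100, t=0.1))
```

The bound tightens exponentially as n grows with t fixed, which is the practical point of the inequality.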
Least squares quantization in PCM
 IEEE Transactions on Information Theory
, 1982
Abstract

Cited by 840 (0 self)
It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta becomes large. The optimum quantization schemes for 2^b quanta, b = 1, 2, ..., 7, are given numerically for Gaussian and for Laplacian distributions of signal amplitudes.
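The necessary conditions in this abstract underlie what is now called the Lloyd algorithm. The sketch below is a generic one-dimensional version, not the paper's numerics: it alternates nearest-level assignment with centroid updates; `lloyd_1d` and its parameters are illustrative choices.

```python
import random

def lloyd_1d(samples, levels, iters=50):
    """One-dimensional Lloyd iteration: alternately assign each sample to
    its nearest quantum value, then move each quantum value to the mean
    of its quantization cell."""
    q = sorted(random.sample(samples, levels))
    for _ in range(iters):
        cells = [[] for _ in q]
        for x in samples:
            j = min(range(len(q)), key=lambda i: (x - q[i]) ** 2)
            cells[j].append(x)
        # Empty cells keep their previous quantum value.
        q = [sum(c) / len(c) if c else q[j] for j, c in enumerate(cells)]
    return sorted(q)

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]
print(lloyd_1d(data, 4))  # 4-level quantizer for a Gaussian source
</imports>```

Each iteration can only decrease the average quantization noise power, so the scheme converges to a (possibly local) optimum satisfying the paper's necessary conditions.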
New results in linear filtering and prediction theory
 Trans. ASME, Ser. D, J. Basic Eng
, 1961
Abstract

Cited by 322 (0 self)
A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side by side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are considered briefly.
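The paper treats the continuous-time variance equation; a minimal discrete-time scalar analogue (our illustration, not the paper's formulation) shows how iterating the Riccati recursion settles to the steady-state error variance and Kalman gain that specify the optimal filter.

```python
def riccati_steady_state(a, h, q, r, iters=200):
    """Iterate the discrete-time Riccati recursion for the scalar filtering
    error variance P; its fixed point gives the steady-state Kalman gain.
    a: state transition, h: observation, q: process noise, r: measurement noise."""
    p = 1.0
    for _ in range(iters):
        p_pred = a * p * a + q                  # time update
        k = p_pred * h / (h * p_pred * h + r)   # Kalman gain
        p = (1.0 - k * h) * p_pred              # measurement update
    return p, k

p_inf, k_inf = riccati_steady_state(a=0.9, h=1.0, q=0.1, r=1.0)
print(p_inf, k_inf)
```

The rapid convergence of this recursion is one face of the stability properties the paper attributes to the variance equation.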
The minimum description length principle in coding and modeling
 IEEE Trans. Inform. Theory
, 1998
Abstract

Cited by 305 (12 self)
We review the principles of Minimum Description Length and Stochastic Complexity as used in data compression and statistical modeling. Stochastic complexity is formulated as the solution to optimum universal coding problems extending Shannon's basic source coding theorem. The normalized maximized likelihood, mixture, and predictive codings are each shown to achieve the stochastic complexity to within asymptotically vanishing terms. We assess the performance of the minimum description length criterion both from the vantage point of quality of data compression and accuracy of statistical inference. Context-tree modeling, density estimation, and model selection in Gaussian linear regression serve as examples. Index Terms—Complexity, compression, estimation, inference, universal modeling.
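A crude two-part code (our illustration; the paper's normalized maximum-likelihood and mixture codes are more refined) conveys the MDL idea: total description length is data bits under the fitted model plus bits to describe the model's parameter.

```python
import math

def two_part_codelength(ones, n):
    """Rough two-part MDL code length (bits) for a binary string of length n
    with `ones` ones: n*H(p_hat) data bits plus (1/2)*log2(n) parameter bits."""
    p = ones / n
    h = 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return n * h + 0.5 * math.log2(n)

# A biased string compresses below the n bits of the uniform code;
# a balanced string does not.
print(two_part_codelength(ones=100, n=1000))
print(two_part_codelength(ones=500, n=1000))
```

The (1/2) log n parameter cost is the per-parameter term that also appears in the stochastic complexity expansions the paper reviews.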
Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation
 IEEE Transactions on Automatic Control
, 1992
Abstract

Cited by 213 (14 self)
Consider the problem of finding a root of the multivariate gradient equation that arises in function minimization. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. This paper presents an SA algorithm that is based on a "simultaneous perturbation" gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz-type procedures. Theory and numerical experience indicate that the algorithm presented here can be significantly more efficient than the standard finite-difference-based algorithms in large-dimensional problems.
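The key trick can be sketched in a few lines (our generic illustration, not the paper's code): all coordinates are perturbed simultaneously by a random ±1 vector, so the gradient estimate costs only two function evaluations per step, regardless of dimension. The gain-decay exponents below are commonly used SPSA defaults; the function name `spsa_minimize` is our own.

```python
import random

def spsa_minimize(f, theta, a=0.1, c=0.1, iters=2000, seed=0):
    """Minimal SPSA sketch: estimate the gradient from two evaluations of f
    at simultaneous random +/-1 perturbations of all coordinates."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602              # step-size gain decay
        ck = c / k ** 0.101              # perturbation-size decay
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = f([t + ck * d for t, d in zip(theta, delta)])
        minus = f([t - ck * d for t, d in zip(theta, delta)])
        ghat = [(plus - minus) / (2.0 * ck * d) for d in delta]
        theta = [t - ak * g for t, g in zip(theta, ghat)]
    return theta

# Minimize a simple quadratic; the true minimizer is (1, -2).
sol = spsa_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
print(sol)
```

A finite-difference Kiefer-Wolfowitz scheme would need 2p evaluations per step in p dimensions; here it is always two, which is the efficiency gain the abstract claims.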
The Jump-Risk Premia Implicit in Options: Evidence from an Integrated Time-Series Study
 Journal of Financial Economics
Abstract

Cited by 210 (1 self)
This paper examines the joint time series of the S&P 500 index and near-the-money short-dated option prices with an arbitrage-free model, capturing both stochastic volatility and jumps. Jump-risk premia uncovered from the joint data respond quickly to market volatility, becoming more prominent during volatile markets. This form of jump-risk premia is important not only in reconciling the dynamics implied by the joint data, but also in explaining the volatility "smirks" of cross-sectional options data.
Proper complex random processes with applications to information theory
 IEEE Trans. Inform. Theory
, 1993
Abstract

Cited by 115 (0 self)
The "covariance" of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudocovariance. A characterization of uncorrelatedness and wide-sense stationarity in terms of covariance and pseudocovariance is given. Complex random variables and processes with a vanishing pseudocovariance are called proper. It is shown that properness is preserved under affine transformations and that the complex multivariate Gaussian density assumes a natural form only for proper random variables. The maximum-entropy theorem is generalized to the complex multivariate case. The differential entropy of a complex random vector with a fixed correlation matrix is shown to be maximum if and only if the random vector is proper, Gaussian and zero mean. The notion of circular stationarity is introduced. For the class of proper complex random processes, a discrete Fourier transform correspondence is derived relating circular stationarity in the time domain to uncorrelatedness in the frequency domain. As an application of the theory, the capacity of a discrete-time channel with complex inputs, proper complex additive white Gaussian noise, and a finite complex unit-sample response is determined. This derivation is considerably simpler than an earlier derivation for the real discrete-time Gaussian channel with intersymbol interference, whose capacity is obtained as a by-product of the results for the complex channel. Index Terms—Proper complex random processes, circular stationarity, intersymbol interference, capacity.
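The covariance/pseudocovariance distinction is easy to see empirically. The sketch below (our illustration, not from the paper) estimates both from samples: a complex Gaussian with i.i.d. real and imaginary parts is proper (pseudocovariance near zero), while a purely real sample is maximally improper (pseudocovariance equal to the covariance).

```python
import random

def cov_and_pseudocov(zs):
    """Sample covariance E[(z-m)(z-m)*] and pseudocovariance E[(z-m)^2]
    of a complex sample; a vanishing pseudocovariance indicates properness."""
    m = sum(zs) / len(zs)
    cov = sum((z - m) * (z - m).conjugate() for z in zs) / len(zs)
    pcov = sum((z - m) ** 2 for z in zs) / len(zs)
    return cov, pcov

random.seed(0)
# Proper: i.i.d. real and imaginary parts with equal variance.
proper = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20000)]
# Improper: purely real, so the pseudocovariance equals the covariance.
improper = [complex(random.gauss(0, 1), 0.0) for _ in range(20000)]
print(cov_and_pseudocov(proper))    # pseudocovariance near 0
print(cov_and_pseudocov(improper))  # pseudocovariance near covariance
```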
General state space Markov chains and MCMC algorithms
 PROBABILITY SURVEYS
, 2004
Abstract

Cited by 114 (27 self)
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson Equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some open problems.
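The MCMC algorithms that motivate the survey can be illustrated with a random-walk Metropolis chain on the real line (a minimal sketch of ours, not the paper's): propose a Gaussian step and accept with probability min(1, π(x′)/π(x)), worked in log space for numerical safety.

```python
import math
import random

def metropolis_hastings(logpi, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis on a continuous state space: propose
    x' = x + N(0, scale^2), accept with probability min(1, pi(x')/pi(x))."""
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)
        # Accept/reject in log space; tiny offset guards against log(0).
        if math.log(rng.random() + 1e-300) < logpi(prop) - logpi(x):
            x = prop
        chain.append(x)
    return chain

# Target: standard normal. Sample mean and variance approach 0 and 1.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 50000, scale=2.4)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
print(mean, var)
```

The proposal scale 2.4 echoes the optimal-scaling results for Metropolis-Hastings algorithms that the survey discusses.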
A note on the stochastic realization problem
 Hemisphere Publishing Corporation
, 1976
Abstract

Cited by 98 (23 self)
Given a mean square continuous stochastic vector process y with stationary increments and a rational spectral density Φ such that Φ(∞) is finite and nonsingular, consider the problem of finding all minimal (wide sense) Markov representations (stochastic realizations) of y. All such realizations are characterized and classified with respect to deterministic as well as probabilistic properties. It is shown that only certain realizations (internal stochastic realizations) can be determined from the given output process y. All others (external stochastic realizations) require that the probability space be extended with an exogenous random component. A complete characterization of the sets of internal and external stochastic realizations is provided. It is shown that the state process of any internal stochastic realization can be expressed in terms of two steady-state Kalman-Bucy filters, one evolving forward in time over the infinite past and one backward over the infinite future. An algorithm is presented which generates families of external realizations defined on the same probability space and totally ordered with respect to state covariances.
A Chernoff Bound For Random Walks On Expander Graphs
 SIAM J. Comput
, 1998
Abstract

Cited by 80 (0 self)
We consider a finite random walk on a weighted graph G; we show that the fraction of time spent in a set of vertices A converges to the stationary probability π(A) with error probability exponentially small in the length of the random walk and the square of the size of the deviation from π(A). The exponential bound is in terms of the expansion of G and improves previous results of [D. Aldous, Probab. Engrg. Inform. Sci., 1 (1987), pp. 33–46], [L. Lovász and M. Simonovits, Random Structures Algorithms, 4 (1993), pp. 359–412], [M. Ajtai, J. Komlós, and E. Szemerédi, Deterministic simulation of logspace, in Proc. 19th ACM Symp. on Theory of Computing, 1987]. We show that taking the sample average from one trajectory gives a more efficien...
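The quantity being bounded, the occupation fraction of a vertex set A, is easy to observe by simulation (our illustration, not the paper's analysis): on a small graph with a uniform stationary distribution, the fraction of time a simple random walk spends in A concentrates around π(A).

```python
import random

def occupation_fraction(adj, A, steps, seed=0):
    """Simulate a simple random walk on an undirected graph (adjacency
    lists) and return the fraction of time spent in the vertex set A."""
    rng = random.Random(seed)
    v = 0
    hits = 0
    for _ in range(steps):
        v = rng.choice(adj[v])
        hits += v in A
    return hits / steps

# 4-cycle: the stationary distribution is uniform, so pi({0, 1}) = 1/2.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(occupation_fraction(cycle, {0, 1}, 100000))
```

The paper's contribution is the Chernoff-type rate at which such deviations vanish, expressed through the expansion of G; the simulation only exhibits the concentration itself.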