PROBABILITY INEQUALITIES FOR SUMS OF BOUNDED RANDOM VARIABLES
, 1962
"... Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(SES> nt) depend only on the endpoints of the ranges of the s ..."
Abstract

Cited by 1573 (2 self)
 Add to MetaCart
Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(S − ES ≥ nt) depend only on the endpoints of the ranges of the summands and the mean, or the mean and the variance of S. These results are then used to obtain analogous inequalities for certain sums of dependent random variables such as U-statistics and the sum of a random sample without replacement from a finite population.
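The one-sided bound for summands in [0, 1], Pr(S − ES ≥ nt) ≤ exp(−2nt²), is easy to check numerically. The sketch below (standard-library Python; the function names and the choice of fair Bernoulli summands are ours, not from the paper) compares a Monte Carlo estimate of the tail probability with the bound:

```python
import math
import random

def hoeffding_bound(n, t):
    # One-sided Hoeffding bound for n independent summands in [0, 1]:
    # Pr(S - ES >= n*t) <= exp(-2*n*t**2).
    return math.exp(-2 * n * t * t)

def empirical_tail(n, t, trials=20000, seed=0):
    # Monte Carlo estimate of Pr(S - ES >= n*t) for S a sum of n
    # fair Bernoulli variables (ES = n/2, each summand in [0, 1]).
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < 0.5 for _ in range(n))
        if s - n / 2 >= n * t:
            hits += 1
    return hits / trials

n, t = 100, 0.1
print(empirical_tail(n, t), "<=", hoeffding_bound(n, t))
```

For n = 100 and t = 0.1 the bound is exp(−2) ≈ 0.135; the true binomial tail is considerably smaller, which illustrates that the bound is valid but not tight.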
Least squares quantization in PCM
 IEEE Transactions on Information Theory
, 1982
"... AbstractIt has long been realized that in pulsecode modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as th ..."
Abstract

Cited by 922 (0 self)
 Add to MetaCart
It has long been realized that in pulse-code modulation (PCM), with a given ensemble of signals to handle, the quantum values should be spaced more closely in the voltage regions where the signal amplitude is more likely to fall. It has been shown by Panter and Dite that, in the limit as the number of quanta becomes infinite, the asymptotic fractional density of quanta per unit voltage should vary as the one-third power of the probability density per unit voltage of signal amplitudes. In this paper the corresponding result for any finite number of quanta is derived; that is, necessary conditions are found that the quanta and associated quantization intervals of an optimum finite quantization scheme must satisfy. The optimization criterion used is that the average quantization noise power be a minimum. It is shown that the result obtained here goes over into the Panter and Dite result as the number of quanta becomes large. The optimum quantization schemes for 2^b quanta, b = 1, 2, ..., 7, are given numerically for Gaussian and for Laplacian distributions of signal amplitudes.
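The necessary conditions in the paper lead to the familiar alternating iteration now known as the Lloyd (or Lloyd-Max) algorithm: assign each sample to its nearest quantum value, then move each quantum to the centroid of its cell. A minimal one-dimensional sketch in standard-library Python (function names and the random initialization are our own):

```python
import random

def lloyd_quantizer(samples, levels, iters=50):
    # One-dimensional Lloyd iteration: alternately assign each sample to the
    # nearest quantum value, then move each quantum to the mean of its cell.
    q = sorted(random.Random(1).sample(samples, levels))
    for _ in range(iters):
        cells = [[] for _ in q]
        for x in samples:
            i = min(range(len(q)), key=lambda j: (x - q[j]) ** 2)
            cells[i].append(x)
        q = [sum(c) / len(c) if c else q[i] for i, c in enumerate(cells)]
    return sorted(q)

def distortion(samples, q):
    # Mean squared quantization error over the sample set.
    return sum(min((x - v) ** 2 for v in q) for x in samples) / len(samples)

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]
q4 = lloyd_quantizer(data, 4)
print(q4, distortion(data, q4))
```

Each iteration can only lower (or keep) the average quantization noise power, which is the optimization criterion the abstract names; the fixed points satisfy the paper's necessary conditions on quanta and interval endpoints.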
New results in linear filtering and prediction theory
 Trans. ASME, Ser. D, J. Basic Eng
, 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation " completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary sta ..."
Abstract

Cited by 368 (0 self)
 Add to MetaCart
A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side by side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are considered briefly.
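In the scalar case the variance equation is easy to integrate directly. The sketch below (our own illustration, not code from the paper) assumes the standard model dx = f·x dt + dw with observation dz = h·x dt + dv, Euler-integrates the Riccati equation for the error variance p(t), and checks that it settles at the algebraic steady state:

```python
import math

def riccati_variance(f, h, q, r, p0, t_end=10.0, dt=1e-3):
    # Scalar "variance equation" for the model
    #   dx = f*x dt + dw (Var dw = q dt),  dz = h*x dt + dv (Var dv = r dt):
    #   dp/dt = 2*f*p + q - p**2 * h**2 / r,
    # integrated here with a simple forward-Euler scheme.
    p = p0
    for _ in range(int(t_end / dt)):
        p += (2 * f * p + q - p * p * h * h / r) * dt
    return p

# With f = -1 and h = q = r = 1, the steady state solves p^2 + 2p - 1 = 0,
# i.e. p = sqrt(2) - 1, regardless of the initial variance p0.
p_inf = riccati_variance(-1.0, 1.0, 1.0, 1.0, p0=1.0)
print(p_inf, math.sqrt(2) - 1)
```

The steady-state solution is what the abstract means by the stationary case: for constant coefficients, p(t) converges to the unique positive root of the algebraic Riccati equation, which in turn fixes the optimal filter gain.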
The minimum description length principle in coding and modeling
 IEEE TRANS. INFORM. THEORY
, 1998
"... We review the principles of Minimum Description Length and Stochastic Complexity as used in data compression and statistical modeling. Stochastic complexity is formulated as the solution to optimum universal coding problems extending Shannon’s basic source coding theorem. The normalized maximized ..."
Abstract

Cited by 315 (12 self)
 Add to MetaCart
(Show Context)
We review the principles of Minimum Description Length and Stochastic Complexity as used in data compression and statistical modeling. Stochastic complexity is formulated as the solution to optimum universal coding problems extending Shannon’s basic source coding theorem. The normalized maximized likelihood, mixture, and predictive codings are each shown to achieve the stochastic complexity to within asymptotically vanishing terms. We assess the performance of the minimum description length criterion both from the vantage point of quality of data compression and accuracy of statistical inference. Context tree modeling, density estimation, and model selection in Gaussian linear regression serve as examples.
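For the Bernoulli family the normalized maximized likelihood code length can be computed exactly, which makes the notion of stochastic complexity concrete. The sketch below (standard-library Python; our own illustration of the standard NML formula, with the convention 0^0 = 1) computes the NML normalizing constant and the resulting code length of a binary string:

```python
import math

def bernoulli_nml_complexity(n):
    # Normalizing constant of the NML distribution for the Bernoulli family:
    # C_n = sum over k of C(n, k) * (k/n)^k * ((n-k)/n)^(n-k).
    # Python evaluates 0**0 as 1, matching the usual convention.
    total = 0.0
    for k in range(n + 1):
        p = k / n
        total += math.comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
    return total

def stochastic_complexity(bits):
    # NML code length (in nats) of a binary string under the Bernoulli model:
    # -log(maximized likelihood) + log(C_n).
    n, k = len(bits), sum(bits)
    p = k / n
    max_lik = (p ** k) * ((1 - p) ** (n - k))
    return -math.log(max_lik) + math.log(bernoulli_nml_complexity(n))

print(stochastic_complexity([1, 0, 1, 1, 0, 1, 0, 1]))
```

The log C_n term is the "parametric complexity" that the maximized likelihood alone omits; it is what makes the NML code a genuine (sub-normalized) code and what penalizes model classes that can fit too many strings well.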
The Jump-Risk Premia Implicit in Options: Evidence from an Integrated Time-Series Study
 Journal of Financial Economics
"... Abstract: This paper examines the joint time series of the S&P 500 index and nearthemoney shortdated option prices with an arbitragefree model, capturing both stochastic volatility and jumps. Jumprisk premia uncovered from the joint data respond quickly to market volatility, becoming more p ..."
Abstract

Cited by 285 (2 self)
 Add to MetaCart
This paper examines the joint time series of the S&P 500 index and near-the-money short-dated option prices with an arbitrage-free model, capturing both stochastic volatility and jumps. Jump-risk premia uncovered from the joint data respond quickly to market volatility, becoming more prominent during volatile markets. This form of jump-risk premia is important not only in reconciling the dynamics implied by the joint data, but also in explaining the volatility “smirks” of cross-sectional options data.
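To fix ideas, jumps enter such models as a Poisson-driven component on top of the diffusive part of returns. The sketch below (standard-library Python) simulates a Merton-style jump-diffusion as an illustrative stand-in only; the paper's actual model additionally has stochastic volatility, and all parameter values here are arbitrary:

```python
import math
import random

def simulate_jump_diffusion(s0, mu, sigma, lam, jump_mu, jump_sigma,
                            t=1.0, steps=252, seed=0):
    # Euler scheme for a Merton-style jump-diffusion:
    #   d log S = (mu - sigma^2/2) dt + sigma dW + J dN,
    # with N a Poisson process of intensity lam and Gaussian log-jump sizes J.
    # For small dt the jump count per step is approximated as Bernoulli(lam*dt).
    rng = random.Random(seed)
    dt = t / steps
    log_s = math.log(s0)
    path = [s0]
    for _ in range(steps):
        log_s += (mu - 0.5 * sigma ** 2) * dt \
                 + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < lam * dt:                  # a jump arrives this step
            log_s += rng.gauss(jump_mu, jump_sigma)  # log-jump in S
        path.append(math.exp(log_s))
    return path

path = simulate_jump_diffusion(100.0, 0.05, 0.2, lam=3.0,
                               jump_mu=-0.05, jump_sigma=0.1)
print(len(path), path[-1] > 0)
```

The jump-risk premium in the paper is, loosely, the gap between the jump intensity and jump-size distribution under the real-world measure and those implicit in option prices under the pricing measure.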
Simulated Moments Estimator of Markov Models of Asset Prices
 Econometrica, July
, 1993
"... This paper provides a simulated moments estimator (SME) of the parameters of dynamic models in which the state vector follows a timehomogeneous Markov process. Conditions are provided for both weak and strong consistency as well as asymptotic normality. Various tradeoffs among the regularity condit ..."
Abstract

Cited by 246 (5 self)
 Add to MetaCart
This paper provides a simulated moments estimator (SME) of the parameters of dynamic models in which the state vector follows a time-homogeneous Markov process. Conditions are provided for both weak and strong consistency as well as asymptotic normality. Various tradeoffs among the regularity conditions underlying the large sample properties of the SME are discussed in the context of an asset-pricing model.
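The core mechanic of an SME is to choose the parameter whose simulated moments best match the sample moments. The toy sketch below (our own, much simpler than the paper's setting) estimates the AR(1) coefficient by grid search, matching the second moment and first-order autocovariance, with a common random seed across candidate parameters so the criterion is smooth:

```python
import random

def simulate_ar1(phi, n, seed=0):
    # Simulate x_t = phi * x_{t-1} + e_t with e_t ~ N(0, 1).
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def moments(xs):
    # The two matched moments: second moment and first-order autocovariance.
    n = len(xs)
    m2 = sum(v * v for v in xs) / n
    ac = sum(xs[i] * xs[i - 1] for i in range(1, n)) / (n - 1)
    return m2, ac

def sme_grid(data, n_sim=10000):
    # Simulated moments estimator by grid search: pick the phi whose
    # simulated moments are closest (squared distance) to the data's.
    grid = [i / 50 for i in range(-45, 46)]
    target = moments(data)
    def loss(phi):
        sim = moments(simulate_ar1(phi, n_sim, seed=1))
        return sum((a - b) ** 2 for a, b in zip(sim, target))
    return min(grid, key=loss)

data = simulate_ar1(0.6, 5000, seed=42)
print(sme_grid(data))
```

Real applications replace the grid search with gradient-based minimization and weight the moment discrepancies by an efficient weighting matrix; the consistency and asymptotic normality conditions in the paper govern exactly this kind of construction.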
Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation
 IEEE Transactions on Automatic Control
, 1992
"... Consider the problem of finding a root of the multivariate gradient equation that arises in function minimization. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general KieferWolfowitz type is appropriate for estimating the root. This p ..."
Abstract

Cited by 233 (14 self)
 Add to MetaCart
Consider the problem of finding a root of the multivariate gradient equation that arises in function minimization. When only noisy measurements of the function are available, a stochastic approximation (SA) algorithm of the general Kiefer-Wolfowitz type is appropriate for estimating the root. This paper presents an SA algorithm that is based on a "simultaneous perturbation" gradient approximation instead of the standard finite-difference approximation of Kiefer-Wolfowitz-type procedures. Theory and numerical experience indicate that the algorithm presented here can be significantly more efficient than the standard finite-difference-based algorithms in large-dimensional problems.
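The key point is that the simultaneous-perturbation estimate needs only two noisy function evaluations per iteration, regardless of dimension, versus 2p for two-sided finite differences. A minimal sketch in standard-library Python (the gain sequences use the commonly cited decay exponents 0.602 and 0.101; the quadratic test function and all constants are our own choices):

```python
import random

def spsa_minimize(f, theta, iters=500, a=0.1, c=0.1, seed=0):
    # Simultaneous perturbation stochastic approximation: estimate the whole
    # gradient from two function evaluations per iteration, using a random
    # simultaneous perturbation Delta with independent +/-1 entries.
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602          # step-size gain sequence
        ck = c / k ** 0.101          # perturbation-size gain sequence
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        diff = f(plus) - f(minus)
        theta = [t - ak * diff / (2 * ck * d) for t, d in zip(theta, delta)]
    return theta

# Noisy quadratic with minimum at (1, -2); measurements carry Gaussian noise.
noise = random.Random(1)
def loss(x):
    return (x[0] - 1) ** 2 + (x[1] + 2) ** 2 + noise.gauss(0.0, 0.01)

est = spsa_minimize(loss, [0.0, 0.0])
print(est)
```

The per-coordinate gradient estimates are individually noisy (each carries a cross-term from the other coordinates), but the cross-terms have zero mean, so the averaged iteration still converges to the root of the gradient equation.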
Proper complex random processes with applications to information theory
 IEEE Transactions on Information Theory
, 1993
"... Abstract The “covariance ” of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudocovariance. A characterization of uncorrela ..."
Abstract

Cited by 123 (0 self)
 Add to MetaCart
The “covariance” of complex random variables and processes, when defined consistently with the corresponding notion for real random variables, is shown to be determined by the usual (complex) covariance together with a quantity called the pseudocovariance. A characterization of uncorrelatedness and wide-sense stationarity in terms of covariance and pseudocovariance is given. Complex random variables and processes with a vanishing pseudocovariance are called proper. It is shown that properness is preserved under affine transformations and that the complex multivariate Gaussian density assumes a natural form only for proper random variables. The maximum-entropy theorem is generalized to the complex multivariate case. The differential entropy of a complex random vector with a fixed correlation matrix is shown to be maximum if and only if the random vector is proper, Gaussian, and zero-mean. The notion of circular stationarity is introduced. For the class of proper complex random processes, a discrete Fourier transform correspondence is derived relating circular stationarity in the time domain to uncorrelatedness in the frequency domain. As an application of the theory, the capacity of a discrete-time channel with complex inputs, proper complex additive white Gaussian noise, and a finite complex unit-sample response is determined. This derivation is considerably simpler than an earlier derivation for the real discrete-time Gaussian channel with intersymbol interference, whose capacity is obtained as a byproduct of the results for the complex channel. Index Terms: Proper complex random processes, circular stationarity, intersymbol interference, capacity.
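For a zero-mean complex variable z, the covariance is E[z·conj(z)] and the pseudocovariance is E[z·z]; properness means the latter vanishes. The sketch below (standard-library Python, our own function names) draws a circularly symmetric complex Gaussian with independent real and imaginary parts of equal variance, and verifies numerically that the pseudocovariance is near zero while the covariance is near one:

```python
import random

def sample_proper_gaussian(n, seed=0):
    # A proper (circularly symmetric) complex Gaussian: independent real and
    # imaginary parts with equal variance 1/2 each, so E|z|^2 = 1.
    rng = random.Random(seed)
    s = 0.5 ** 0.5
    return [complex(rng.gauss(0.0, s), rng.gauss(0.0, s)) for _ in range(n)]

def covariance_and_pseudocovariance(zs):
    # Sample covariance E[z conj(z)] and pseudocovariance E[z z],
    # assuming the samples are zero-mean.
    n = len(zs)
    cov = sum(z * z.conjugate() for z in zs) / n
    pcov = sum(z * z for z in zs) / n
    return cov, pcov

zs = sample_proper_gaussian(200000)
cov, pcov = covariance_and_pseudocovariance(zs)
print(cov, abs(pcov))
```

Making the real and imaginary variances unequal, or correlating the two parts, produces a non-zero pseudocovariance, i.e. an improper variable; this is exactly the extra quantity the paper shows is needed alongside the usual covariance.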
General state space Markov chains and MCMC algorithms
 PROBABILITY SURVEYS
, 2004
"... This paper surveys various results about Markov chains on general (noncountable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform e ..."
Abstract

Cited by 112 (28 self)
 Add to MetaCart
This paper surveys various results about Markov chains on general (non-countable) state spaces. It begins with an introduction to Markov chain Monte Carlo (MCMC) algorithms, which provide the motivation and context for the theory which follows. Then, sufficient conditions for geometric and uniform ergodicity are presented, along with quantitative bounds on the rate of convergence to stationarity. Many of these results are proved using direct coupling constructions based on minorisation and drift conditions. Necessary and sufficient conditions for Central Limit Theorems (CLTs) are also presented, in some cases proved via the Poisson Equation or direct regeneration constructions. Finally, optimal scaling and weak convergence results for Metropolis-Hastings algorithms are discussed. None of the results presented is new, though many of the proofs are. We also describe some Open Problems.
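The prototypical algorithm behind all of this theory is random-walk Metropolis-Hastings, which the survey's ergodicity and optimal-scaling results apply to directly. A minimal sketch in standard-library Python (our own illustration, targeting a standard normal; for a symmetric random-walk proposal the Hastings correction cancels and the acceptance ratio reduces to pi(x')/pi(x)):

```python
import math
import random

def metropolis_hastings(log_target, x0, n, step=1.0, seed=0):
    # Random-walk Metropolis: propose x' = x + step * N(0, 1) and accept
    # with probability min(1, pi(x')/pi(x)). The target pi need only be
    # known up to a normalizing constant; pi is the stationary law.
    rng = random.Random(seed)
    x, chain = x0, []
    for _ in range(n):
        prop = x + step * rng.gauss(0.0, 1.0)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return chain

# Target: standard normal, log pi(x) = -x^2/2 up to an additive constant.
chain = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 100000, step=2.0)
mean = sum(chain) / len(chain)
var = sum((v - mean) ** 2 for v in chain) / len(chain)
print(mean, var)
```

The geometric ergodicity conditions in the survey guarantee that such chain averages converge, and the CLT results quantify their Monte Carlo error; the optimal-scaling results say the proposal step should scale like 2.38 times the target's standard deviation in one dimension.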