Results 1–10 of 259
The Generalized Dynamic Factor Model: Identification and Estimation
 Review of Economics and Statistics
, 2000
Abstract

Cited by 186 (31 self)
This paper proposes a factor model with infinite dynamics and nonorthogonal idiosyncratic components. The model, which we call the generalized dynamic factor model, is novel to the literature, and generalizes the static approximate factor model of Chamberlain and Rothschild (1983), as well as the exact factor model à la Sargent and Sims (1977). We provide identification conditions, propose an estimator of the common components, prove convergence as both time and cross-sectional size go to infinity at appropriate rates, and present simulation results. We use our model to construct a coincident index for the European Union. This index is defined as the common component of real GDP within a model including several macroeconomic variables for each European country.
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
Abstract

Cited by 67 (9 self)
The ℓ1-regularized Gaussian maximum likelihood estimator has been shown to have strong statistical guarantees in recovering a sparse inverse covariance matrix, or alternatively the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem, which is a regularized log-determinant program. In contrast to other state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a quadratic approximation, but with some modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and also present experimental results using synthetic and real application data that demonstrate the considerable improvements in performance of our method when compared to other state-of-the-art methods.
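To make the regularized log-determinant program concrete, the sketch below states the objective and minimizes it with plain proximal gradient (soft-thresholding) steps. This is a first-order stand-in for illustration only, not the paper's Newton/quadratic-approximation method; the problem sizes and safeguard are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, lam = 5, 400, 0.1

# Simulate data from a known sparse (tridiagonal) precision matrix.
Theta_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(Theta_true), size=n)
S = np.cov(X, rowvar=False)

def objective(Theta):
    # Regularized negative Gaussian log-likelihood: the log-determinant program.
    _, logdet = np.linalg.slogdet(Theta)
    return -logdet + np.trace(S @ Theta) + lam * np.abs(Theta).sum()

Theta, step = np.eye(p), 0.02
for _ in range(1500):
    grad = S - np.linalg.inv(Theta)                   # gradient of the smooth part
    Z = Theta - step * grad
    Theta = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # soft-threshold
    w, U = np.linalg.eigh((Theta + Theta.T) / 2)      # safeguard: keep iterate
    Theta = (U * np.maximum(w, 1e-3)) @ U.T           # positive definite

print(objective(Theta) < objective(np.eye(p)))
```

The first-order iteration above is exactly the kind of method the paper improves on: a Newton step on the smooth part converges superlinearly near the solution, while soft-thresholded gradient steps converge only linearly.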
Domain Theory and Integration
 Theoretical Computer Science
, 1995
Abstract

Cited by 62 (14 self)
We present a domain-theoretic framework for measure theory and integration of bounded real-valued functions with respect to bounded Borel measures on compact metric spaces. The set of normalised Borel measures of the metric space can be embedded into the maximal elements of the normalised probabilistic power domain of its upper space. Any bounded Borel measure on the compact metric space can then be obtained as the least upper bound of an ω-chain of linear combinations of point valuations (simple valuations) on the upper space, thus providing a constructive setup for these measures. We use this setting to define a new notion of integral of a bounded real-valued function with respect to a bounded Borel measure on a compact metric space. By using an ω-chain of simple valuations, whose lub is the given Borel measure, we can then obtain increasingly better approximations to the value of the integral, similar to the way the Riemann integral is obtained in calculus by using step functions. ...
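The analogy with step functions can be made concrete: a chain of increasingly fine lower step approximations converges upward to the integral, much like the ω-chain of simple valuations in the abstract. The sketch below is an elementary Riemann-sum illustration of that monotone convergence, not the domain-theoretic construction itself; the test function is an arbitrary choice.

```python
def lower_sum(f, a, b, n):
    # Lower Darboux sum for an increasing f: the infimum on each
    # subinterval is attained at its left endpoint.
    h = (b - a) / n
    return sum(f(a + i * h) * h for i in range(n))

f = lambda x: x * x
# Dyadic refinement gives a chain of step approximations increasing
# toward the integral of x^2 over [0, 1], i.e. 1/3.
approx = [lower_sum(f, 0.0, 1.0, 2 ** k) for k in range(2, 8)]
print(all(a < b for a, b in zip(approx, approx[1:])), round(approx[-1], 4))
```

Each refinement of the partition can only raise the lower sum, so the sequence is a chain whose least upper bound is the integral, mirroring the lub construction in the abstract.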
Galerkin Approximation of the Generalized Hamilton-Jacobi Equation
 Automatica
, 1996
Abstract

Cited by 57 (6 self)
If u is a stabilizing control for a nonlinear system that is affine in the control variable, then the solution to the Generalized Hamilton-Jacobi-Bellman (GHJB) equation associated with u is a Lyapunov function for the system and equals the cost associated with u. If an explicit solution to the GHJB equation can be found, then it can be used to construct a feedback control law that improves the performance of u. Repeating this process leads to a successive approximation algorithm that uniformly approximates the Hamilton-Jacobi-Bellman equation. The difficulty is that it is hard to construct solutions to the GHJB equation such that the control derived from the solution is in feedback form. This paper shows that Galerkin's approximation method can be used to construct arbitrarily close approximations to the GHJB equation while generating stable feedback control laws. We state sufficient conditions for the convergence of Galerkin approximations to the GHJB equation. The suffic...
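For a scalar linear-quadratic system, the successive approximation described above reduces to the classical Kleinman policy iteration: solve the GHJB (here a scalar Lyapunov equation) for the current gain, then improve the gain from the resulting value function. The sketch below shows this special case; the system parameters and initial gain are illustrative, and the general nonlinear case is what the paper's Galerkin method addresses.

```python
import math

# Scalar LQR special case of the GHJB successive approximation:
#   dx/dt = a x + b u,   cost = integral of (q x^2 + r u^2) dt.
a, b, q, r = 1.0, 1.0, 1.0, 1.0
k = 2.0                          # initial stabilizing gain (a - b*k < 0)
for _ in range(20):
    # GHJB for u = -k x with V(x) = p x^2:  2(a - b k) p + q + r k^2 = 0
    p = (q + r * k * k) / (2.0 * (b * k - a))
    k = (b / r) * p              # improved feedback gain
# Closed-form solution of the algebraic Riccati equation for comparison.
p_star = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
print(abs(p - p_star) < 1e-9)
```

Each iteration keeps the closed loop stable and lowers the cost, converging to the Riccati solution; the paper's contribution is making the analogous iteration computable in feedback form for nonlinear systems via Galerkin projection.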
Optimal dynamic auctions for revenue management
 Management Science
, 2002
Abstract

Cited by 41 (4 self)
We analyze a dynamic auction in which a seller with C units to sell faces a sequence of buyers separated into T time periods. Each group of buyers has independent, private values for a single unit. Buyers compete directly against each other within a period, as in a traditional auction, and indirectly with buyers in other periods through the opportunity cost of capacity assessed by the seller. The number of buyers in each period, as well as the individual buyers' valuations, are random. The model is a variation of the traditional single-leg, multi-period revenue management problem, in which consumers act strategically and bid for units of a fixed capacity over time. For this setting, we prove that dynamic variants of the first-price and second-price auction mechanisms maximize the seller's expected revenue. We also show explicitly how to compute and implement these optimal auctions. The optimal auctions are then compared to a traditional revenue management mechanism, in which list prices are used in each period together with capacity controls, and to a simple auction heuristic that consists of allocating units to each period and running a sequence of standard, multi-unit auctions with fixed reserve prices in each period. The traditional revenue management mechanism is proven to be optimal in the limiting cases when there is at most one buyer per period, when capacity is not constraining, and asymptotically as the number of buyers and the capacity increase. The optimal auction significantly outperforms both suboptimal mechanisms when there are a moderate number of periods, capacity is constrained, and the total volume of sales is not too large. The benefit also increases when the dispersion in buyers' valuations or the variability in the number of buyers per period increases.
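The limiting case where list pricing is optimal (at most one buyer per period) can be sketched as a small dynamic program. The version below assumes one buyer per period with Uniform(0,1) valuations, so the opportunity cost of capacity pins down the optimal posted price in closed form; the horizon and capacity are illustrative, and the general multi-buyer auction analysis in the paper is not reproduced here.

```python
# Single-leg revenue-management DP: V[t][c] is the seller's expected
# revenue with t periods remaining and c units left.
T, C = 10, 3
V = [[0.0] * (C + 1) for _ in range(T + 1)]
for t in range(1, T + 1):
    for c in range(1, C + 1):
        delta = V[t - 1][c] - V[t - 1][c - 1]   # opportunity cost of a unit
        price = (1.0 + delta) / 2.0             # optimal posted price, v ~ U(0,1)
        # Sell with probability (1 - price); otherwise carry capacity forward.
        V[t][c] = (1 - price) * (price + V[t - 1][c - 1]) + price * V[t - 1][c]
print(round(V[T][C], 3))
```

The marginal value delta rises as capacity tightens, pushing the posted price up, which is exactly the indirect competition across periods described in the abstract.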
Generalization Error Bounds for Bayesian Mixture Algorithms
 Journal of Machine Learning Research
, 2003
Abstract

Cited by 39 (6 self)
Bayesian approaches to learning and estimation have played a significant role in the Statistics literature over many years. While they are often provably optimal in a frequentist setting, and lead to excellent performance in practical applications, there have not been many precise characterizations of their performance for finite sample sizes under general conditions. In this paper we consider the class of Bayesian mixture algorithms, where an estimator is formed by constructing a data-dependent mixture over some hypothesis space. Similarly to what is observed in practice, our results demonstrate that mixture approaches are particularly robust, and allow for the construction of highly complex estimators, while avoiding undesirable overfitting effects. Our results, while being data-dependent in nature, are insensitive to the underlying model assumptions, and apply whether or not these hold. At a technical level, the approach applies to unbounded functions, constrained only by certain moment conditions. Finally, the bounds derived can be directly applied to non-Bayesian mixture approaches such as Boosting and Bagging.
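The non-Bayesian mixtures the bounds cover include bagging; a minimal sketch of that idea, assuming a toy estimation task, averages many bootstrap-resampled estimates into one data-dependent mixture. Everything here (the data, the base estimator) is illustrative and carries none of the paper's generalization-bound machinery.

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]

def bootstrap_mean(sample):
    # One mixture component: the mean of a bootstrap resample.
    boot = [random.choice(sample) for _ in sample]
    return statistics.fmean(boot)

# The bagged estimator is a uniform mixture over bootstrap components.
mixture = statistics.fmean(bootstrap_mean(data) for _ in range(100))
# Averaging smooths resampling noise, keeping the mixture close to
# the plain sample mean while stabilizing the estimator.
print(abs(mixture - statistics.fmean(data)) < 0.1)
```

The robustness claim in the abstract is the formal counterpart of this smoothing effect: mixing many components tempers the complexity of any single one.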
Second-Order Runge-Kutta Approximations in Control Constrained Optimal Control
 SIAM J. Numer. Anal.,
, 2000