Results 21-30 of 51
A nonstationary model for functional mapping of complex traits
Bioinformatics, 2005
"... doi:10.1093/bioinformatics/bti382 ..."
Maximum Likelihood Estimation of Sum-Difference Time Series Models Using the EM Algorithm
, 2002
Abstract

Cited by 2 (2 self)
Integrated moving average processes (IMA), especially the first-order moving average processes IMA(1,1), are useful for modeling time series data occurring in economic situations and industrial control problems. It is noticed that any IMA(1,1) can be thought of as a smoothest possible IMA(1,1) process buried in white noise. This motivated our orthogonal decomposition of IMA(1,1) processes. More specifically, the first-order differences of the observed series can be expressed as the sum of two independent processes. One is the first-order differences of a white noise process and the other is the first-order sum of another white noise process. The corresponding spectrum decomposition is then simple and useful for model building. Moreover, this decomposition allows a simple implementation of the EM algorithm for maximum likelihood estimation for a Gaussian IMA(1,1) process. Based on this orthogonal decomposition, from a modeling perspective in the frequency domain we consider a general...
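The decomposition described in this abstract, in which the first differences of the observed series split into the first differences of one white noise process plus the first-order sum of another, can be checked numerically. The sketch below assumes the first-order sum operator means (1 + B); all variable names and variances are illustrative choices, not the paper's. It simulates the two components and compares the lag-1 autocorrelation of their sum against the value implied by the decomposition; autocorrelations beyond lag 1 should vanish, as for any MA(1).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
sig_u, sig_v = 1.0, 0.5          # innovation standard deviations (arbitrary)

u = rng.normal(0, sig_u, n + 1)  # white noise for the difference component
v = rng.normal(0, sig_v, n + 1)  # white noise for the sum component

# First differences of the IMA(1,1) series, built directly from the
# decomposition: (1 - B) u_t + (1 + B) v_t
x = (u[1:] - u[:-1]) + (v[1:] + v[:-1])

def acf(series, lag):
    """Sample autocorrelation at the given positive lag."""
    s = series - series.mean()
    return np.dot(s[lag:], s[:-lag]) / np.dot(s, s)

# Implied MA(1) lag-1 autocorrelation for this decomposition:
# gamma_0 = 2(sig_u^2 + sig_v^2), gamma_1 = sig_v^2 - sig_u^2
rho1_theory = (sig_v**2 - sig_u**2) / (2 * (sig_u**2 + sig_v**2))
print(round(acf(x, 1), 3), round(rho1_theory, 3), round(acf(x, 2), 3))
```

With these variances the implied lag-1 autocorrelation is -0.3, and the lag-2 sample autocorrelation should be near zero, consistent with an MA(1) structure for the differenced series.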
Quantifying the Fraction of Missing Information for Hypothesis Testing in Statistical and Genetic Studies
Abstract

Cited by 2 (0 self)
Abstract. Many practical studies rely on hypothesis testing procedures applied to data sets with missing information. An important part of the analysis is to determine the impact of the missing data on the performance of the test, and this can be done by properly quantifying the relative (to complete data) amount of available information. The problem is directly motivated by applications to studies, such as linkage analyses and haplotype-based association projects, designed to identify genetic contributions to complex diseases. In the genetic studies the relative information measures are needed for the experimental design, technology comparison, interpretation of the data, and for understanding the behavior of some of the inference tools. The central difficulties in constructing such information measures arise from the multiple, and sometimes conflicting, aims in practice. For large samples, we show that a satisfactory, likelihood-based general solution exists by using appropriate forms of the relative Kullback–Leibler information, and that the proposed measures are computationally inexpensive given the maximized likelihoods.
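In the simplest Fisher-information special case, the idea of quantifying relative information reduces to comparing observed-data with complete-data information. The toy sketch below is our own illustration of that special case, much simpler than the likelihood-based Kullback–Leibler measures the paper proposes; the function name is ours.

```python
# Toy illustration: estimating a normal mean when some observations are
# missing completely at random. The complete-data Fisher information for
# the mean is n / sigma^2; with m values missing it drops to (n - m) / sigma^2,
# so the fraction of missing information is simply m / n.
def fraction_missing_information(n_total, n_missing, sigma2=1.0):
    info_complete = n_total / sigma2
    info_observed = (n_total - n_missing) / sigma2
    return 1.0 - info_observed / info_complete

print(fraction_missing_information(100, 25))  # 0.25
```

Real designs (linkage, haplotype-based association) rarely admit such a closed form, which is why the paper works with maximized likelihoods instead.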
Instrumental Variable Estimation Based on Mean Absolute Deviation
, 2001
Abstract

Cited by 2 (2 self)
this paper. An estimator based on this approach can be defined as the maximizer of the sample analogue of Q. We can form various estimators by taking various dispersion measures in Q. A leading example of the dispersion measure is the standard deviation. The function Q with the standard deviation is: Q2(α) ...
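To make the contrast between candidate dispersion measures concrete, the sketch below compares the standard deviation with a mean absolute deviation on data containing one outlier. This is a generic illustration, not the paper's objective (which is truncated above), and the function names are ours.

```python
import numpy as np

def std_dev(x):
    """Population standard deviation: root mean squared deviation from the mean."""
    return np.sqrt(np.mean((x - np.mean(x)) ** 2))

def mean_abs_dev(x):
    """Mean absolute deviation about the median, a more robust dispersion measure."""
    return np.mean(np.abs(x - np.median(x)))

x = np.array([1.0, 2.0, 2.0, 3.0, 12.0])   # one outlier at 12
print(std_dev(x), mean_abs_dev(x))
```

The squared deviations let the single outlier dominate the standard deviation, while the absolute-deviation measure is far less affected; this robustness is the usual motivation for MAD-based estimation.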
Printed in Great Britain
Abstract
Information matrix computation from conditional information via normal approximation
∂θ∂θ′ ◮ Provided by Fisher Scoring algorithm
, 2008
"... ◮ Why do we need these methods? ◮ Brief review of the following methods: ..."
FEDERAL HOUSING ADMINISTRATION PREPARED FOR: SOCIETY OF ACTUARIES ANNUAL MEETING
, 2004
Abstract
There are a number of reasons why data quality is important to business and government: 1. High-quality data can be a major business asset, a unique source of competitive advantage. 2. Poor-quality data can lower customer satisfaction. 3. Poor-quality data can lower employee job satisfaction. 4. Poor-quality data can breed organizational mistrust. The August 2003 issue of The Newsmonthly of the American Academy of Actuaries reports that the National Association of Insurance Commissioners (NAIC) suggests that actuaries audit “controls related to the completeness, accuracy, and classification of loss data”. There is little published work on data quality in the actuarial literature. There are, however, several texts and a large number of published papers on data quality in related disciplines, especially statistics and computer science. In Section 2 of this work, we discuss some data quality issues as they relate directly to practical
Analysis of Chess Game Outcomes
Abstract
the tournaments, and 4 players only competed in 1 tournament. The World Cup participants who competed in 3 or more tournaments were contenders for monetary prizes.

Tournament               Dates                                 Number of Competitors
Brussels, Belgium        April 1, 1988 - April 22, 1988        18
Belfort, France          June 14, 1988 - July 3, 1988          16
Reykjavik, Iceland       October 3, 1988 - October 24, 1988    18
Barcelona, Spain         March 30, 1989 - April 20, 1989       17
Rotterdam, Netherlands   June 3, 1989 - June 24, 1989          16
Skelleftea, Sweden       August 12, 1989 - September 3, 1989   16

Table 5.1: World Cup Chess Tournaments, 1988-1989

Table 5.2 lists the players and indicates the tournaments in which each player competed. For each game in the World Cup, the data consists of the players involved in the game, the outcome of the game (win, loss or draw), an indication of which player played the white pieces (the player with the white pieces moves first), and the tournament in which the game occurred.
Inference and Monitoring Convergence (chapter for Gilks, Richardson, and Spiegelhalter book)
Abstract
this article we present yet another example, from our current applied research. Figure 0.1 displays an example of slow convergence from a Markov chain simulation for a hierarchical Bayesian model for a pharmacokinetics problem (see Bois et al., 1994, for details). The simulations were done using a Metropolis-approximate Gibbs sampler (as in Section 4.4 of Gelman, 1992); due to the complexity of the model, each iteration was expensive in computer time, and it was desirable to keep the simulation runs as short as possible. Figures 1a and 1b display time series plots for a single parameter in the posterior distribution in two independent simulations, each of length 1000. The simulations were run simultaneously on two workstations in a network. It is clear from the separation of the two sequences that, after 1000 iterations, the simulations are still far from convergence. However, either sequence alone looks perfectly well behaved.
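The between-sequence versus within-sequence comparison that makes this diagnosis possible can be sketched with a simplified potential scale reduction factor (R-hat). This is a generic textbook form of the statistic, not the chapter's exact estimator, and the chains below are synthetic stand-ins for the pharmacokinetics simulations.

```python
import numpy as np

def rhat(chains):
    """Simplified potential scale reduction factor for an (m, n) array
    of m parallel chains with n draws each."""
    chains = np.asarray(chains)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_plus = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(1)
mixed = rng.normal(0, 1, (2, 1000))              # two well-mixed chains
stuck = np.stack([rng.normal(0, 1, 1000),        # two chains far apart,
                  rng.normal(5, 1, 1000)])       # each well behaved alone
print(rhat(mixed))   # close to 1
print(rhat(stuck))   # much larger than 1
```

The second pair reproduces the pathology described above: each sequence alone looks perfectly well behaved, but the between-sequence spread reveals that the simulations are far from convergence.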
Standard Errors for EM Estimates in Generalized Linear Models with Random Effects
, 2000
Abstract
A procedure is derived for computing standard errors of EM estimates in generalized linear models with random effects. Quadrature formulae are used to approximate the integrals in the EM algorithm, where two different approaches are pursued: Gauss-Hermite quadrature, in the case of Gaussian random effects, and nonparametric maximum likelihood estimation for an unspecified random effect distribution. An approximation of the expected Fisher information matrix is derived from an expansion of the EM estimating equations. This allows for inferential arguments based on EM estimates, as demonstrated by an example and simulations. Keywords: EM algorithm, Estimating equations, Gauss-Hermite quadrature, Mixture model, Nonparametric maximum likelihood estimation, Random effect model.
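For the Gaussian-random-effects branch, the quadrature step can be illustrated with NumPy's Gauss-Hermite nodes and weights. This is a generic sketch of the rule (function name and test integrand are ours), not the authors' implementation: an expectation over b ~ N(0, sigma^2) is rewritten against the weight exp(-x^2) via the substitution b = sqrt(2) * sigma * x.

```python
import numpy as np

def gauss_expectation(g, sigma, n_nodes=20):
    """Approximate E[g(b)] for b ~ N(0, sigma^2) by Gauss-Hermite quadrature:
    (1/sqrt(pi)) * sum_i w_i * g(sqrt(2) * sigma * x_i)."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    return np.sum(w * g(np.sqrt(2) * sigma * x)) / np.sqrt(np.pi)

# Check against a closed form: E[exp(b)] = exp(sigma^2 / 2).
sigma = 0.7
approx = gauss_expectation(np.exp, sigma)
exact = np.exp(sigma**2 / 2)
print(approx, exact)
```

In the EM setting described above, g would be the conditional likelihood contribution evaluated at each quadrature node rather than a closed-form integrand.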