Results 1–10 of 19
Binary models for marginal independence
 JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B
, 2005
Abstract

Cited by 24 (2 self)
A number of authors have considered multivariate Gaussian models for marginal independence. In this paper we develop models for binary data with the same independence structure. The models can be parameterized based on Möbius inversion and maximum likelihood estimation can be performed using a version of the Iterated Conditional Fitting algorithm. The approach is illustrated on a simple example. Relations to multivariate logistic and dependence ratio models are discussed.
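The Möbius-inversion parameterization mentioned in the abstract can be sketched in a few lines. This is an illustrative inclusion–exclusion example, not the authors' fitting code: the dimension, the random joint distribution, and all names (`joint`, `q`, `reconstruct`) are invented for the sketch. The parameters are the probabilities that a given subset of variables is all zero, and Möbius inversion recovers every joint probability from them.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = 3
# A generic joint distribution over {0,1}^d, indexed by binary tuples.
p = rng.dirichlet(np.ones(2 ** d))
joint = {x: p[i] for i, x in enumerate(itertools.product([0, 1], repeat=d))}

# Moebius parameters: q(A) = P(X_i = 0 for all i in A), one per subset A.
def q(A):
    return sum(pr for x, pr in joint.items() if all(x[i] == 0 for i in A))

# Inclusion-exclusion (Moebius inversion) recovers each joint probability:
# P(X = x) = sum over subsets C of the ones of x of (-1)^|C| q(zeros(x) + C).
def reconstruct(x):
    zeros = tuple(i for i in range(d) if x[i] == 0)
    ones = [i for i in range(d) if x[i] == 1]
    total = 0.0
    for r in range(len(ones) + 1):
        for C in itertools.combinations(ones, r):
            total += (-1) ** r * q(zeros + C)
    return total

for x in joint:
    assert abs(reconstruct(x) - joint[x]) < 1e-12
print("Moebius inversion recovers the joint distribution")
```

Marginal independence constraints then become simple factorizations of the `q` parameters for disconnected subsets, which is what makes this parameterization convenient.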
CORRELATED DEFAULT PROCESSES: A CRITERION-BASED COPULA APPROACH
, 2004
Abstract

Cited by 14 (3 self)
In this paper, we develop a methodology to model, simulate and assess the joint default process of hundreds of issuers. Our study is based on a data set of default probabilities supplied by Moody’s Risk Management Services. We undertake an empirical examination of the joint stochastic process of default risk over the period 1987 to 2000 using copula functions. To determine the appropriate choice of the joint default process, we propose a new metric. This metric accounts for different aspects of default correlation, namely (i) level, (ii) asymmetry and (iii) tail-dependence and extreme behavior. Our model, based on estimating a joint system of over 600 issuers, is designed to replicate the empirical joint distribution of defaults. A comparison of a jump model and a regime-switching model shows that the latter provides a better representation of the properties of correlated default. We also find that the skewed double-exponential distribution is the best choice for the marginal distribution of each issuer’s hazard rate process, and combines well with the normal, Gumbel, Clayton and Student’s t copulas in the joint dependence relationship amongst issuers. As a complement to the methodological innovation, we show that (a) appropriate choices of marginal distributions and copulas are essential in modeling correlated default,
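The basic copula construction behind such models can be sketched as follows. This is a minimal Gaussian-copula illustration, not the paper's calibrated model: the equicorrelation value, the constant hazard rate, and the issuer count are all invented for the example. Correlated normals are mapped to uniforms and then to exponential default times, so the marginals stay exponential while the copula supplies the dependence.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_issuers, n_sims = 5, 100_000

# Illustrative equicorrelation matrix, a stand-in for a calibrated
# dependence structure among issuers.
rho = 0.4
corr = np.full((n_issuers, n_issuers), rho) + (1 - rho) * np.eye(n_issuers)

# Gaussian copula: correlated normals -> uniforms -> exponential default times.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((n_sims, n_issuers)) @ L.T
u = norm.cdf(z)                      # marginally Uniform(0,1), correlated
hazard = 0.05                        # illustrative constant hazard rate
default_times = -np.log(1 - u) / hazard

# Joint 1-year default probability for two issuers vs. the independent case.
p_single = 1 - np.exp(-hazard)
p_joint = np.mean((default_times[:, 0] < 1) & (default_times[:, 1] < 1))
print(p_joint, p_single ** 2)        # dependence raises the joint probability
```

Swapping `norm.cdf` and the Gaussian latent draw for a Student's t, Gumbel or Clayton construction changes the tail-dependence behavior, which is exactly the dimension the paper's selection metric is designed to measure.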
Sequential Monte Carlo on large binary sampling spaces
 Statist. Comput
, 2011
Abstract

Cited by 9 (0 self)
A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for good performance. In this paper, we present such a parametric family for adaptive sampling on high-dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high-dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm is a binary parametric family which takes correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binary data and make one of them work in the context of Sequential Monte Carlo sampling. Computational studies on real-life data with about a hundred covariates suggest that, on difficult instances, our Sequential Monte Carlo approach clearly outperforms standard techniques based on Markov chain exploration.
Modelling the Dependence Structure between Australian Equity and Real Estate Markets – a Conditional Copula Approach
Abstract

Cited by 1 (0 self)
We apply conditional copula models to investigate the dependence structure between returns of Australian equity markets and Real Estate Investment Trusts (REITs). The dependence between these assets has a significant impact on the diversification potential and risk for a portfolio of multiple assets and is therefore of great interest to portfolio managers and investors. We observe significant correlations and tail dependence between the considered series, indicating a limited diversification potential of investments in REITs in Australia. Conducting a backtesting Value-at-Risk analysis, we also find that ignoring the complex dependence structure could lead to a significant underestimation of the actual risk.
MARGINAL ANALYSIS FOR CLUSTER-BASED CASE-CONTROL STUDIES
Abstract

Cited by 1 (0 self)
SUMMARY. Cluster-based case-control design refers to a design where the sampling unit is a cluster and the sampling probability depends on the responses from individuals within the cluster. Data from a cluster-based case-control design arise in many practical applications. For example, in some epidemiologic genetic studies, due to the low prevalence of the disease of interest, families with more members having the disease are sampled with a higher probability. Current approaches for analyzing this type of data rely mainly on parametrically modeling the joint distribution of the responses within a cluster. In this paper, we develop a marginal approach to analyze data from cluster-based case-control studies when the main interest is the mean structure of the association between exposures and outcomes and the correlation within the cluster is considered a nuisance. We specify a marginal regression model for an individual response given covariates and leave the correlation within the cluster unspecified. We establish the statistical properties of the proposed estimator and investigate its finite sample performance through simulation studies. We apply the proposed method to a data set from the Baltimore Eye Survey.
Generating Spike Trains with Specified Correlation Coefficients
 Letter, communicated by Ernst Niebur
Abstract
Spike trains recorded from populations of neurons can exhibit substantial pairwise correlations between neurons and rich temporal structure. Thus, for the realistic simulation and analysis of neural systems, it is essential to have efficient methods for generating artificial spike trains with specified correlation structure. Here we show how correlated binary spike trains can be simulated by means of a latent multivariate Gaussian model. Sampling from the model is computationally very efficient and, in particular, feasible even for large populations of neurons. The entropy of the model is close to the theoretical maximum for a wide range of parameters. In addition, this framework naturally extends to correlations over time and offers an elegant way to model correlated neural spike counts with arbitrary marginal distributions.
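The thresholding idea described here can be sketched directly. The firing probabilities and latent correlation matrix below are invented for the example; in the full method the latent correlations would be fitted so that the binary outputs match target spike correlations. Each neuron fires when its latent Gaussian exceeds a threshold chosen to hit the target firing rate.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Illustrative target firing probabilities and latent correlations.
p_fire = np.array([0.2, 0.3, 0.1])
latent_corr = np.array([[1.0, 0.5, 0.2],
                        [0.5, 1.0, 0.3],
                        [0.2, 0.3, 1.0]])

# Thresholds chosen so that P(z_i > gamma_i) = p_fire[i].
gamma = norm.ppf(1 - p_fire)

# Sample latent Gaussians and threshold them to get binary spike vectors.
L = np.linalg.cholesky(latent_corr)
z = rng.standard_normal((200_000, 3)) @ L.T
spikes = (z > gamma).astype(int)

print(spikes.mean(axis=0))   # close to p_fire; pairwise correlations positive
```

Because sampling reduces to one multivariate normal draw plus a comparison, the cost per sample is low even for large populations, which is the efficiency point the abstract emphasizes.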
Parametric families on large binary spaces
, 2011
Abstract
In the context of adaptive Monte Carlo algorithms, we cannot directly generate independent samples from the distribution of interest but use a proxy which we need to be close to the target. Generally, such a proxy distribution is a parametric family on the sampling space of the target distribution. For continuous sampling problems in high dimensions, we often use the multivariate normal distribution as a proxy, since we can easily parametrise it by its moments and quickly sample from it. Our objective is to construct similarly flexible parametric families on binary sampling spaces too large for exhaustive enumeration. The binary sampling problem seems more difficult than its continuous counterpart since the choice of a suitable proxy distribution is not obvious.
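One way to build such a family, sketched here under invented parameters, is a chain of logistic regressions: each component is Bernoulli with a success probability depending on the previously sampled components, so the family both induces correlations and admits exact density evaluation. The dimension and the weight values are illustrative, not fitted.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

# Illustrative parameters: intercepts plus strictly lower-triangular
# weights, so component i depends only on components j < i (chain rule).
a = rng.normal(size=d)
B = np.tril(rng.normal(size=(d, d)), -1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def sample(n):
    """Draw n binary vectors and their exact log-densities."""
    x = np.zeros((n, d))
    logp = np.zeros(n)
    for i in range(d):
        p = sigmoid(a[i] + x @ B[i])          # depends on x[:, :i] only
        x[:, i] = rng.random(n) < p
        logp += np.where(x[:, i] == 1, np.log(p), np.log1p(-p))
    return x, logp

x, logp = sample(100_000)
print(x.mean(axis=0))            # induced marginals
print(np.corrcoef(x.T)[0, 1])    # the chain induces cross-correlations
```

Having exact log-densities is what makes such a family usable as a proposal inside importance-sampling or Sequential Monte Carlo schemes, in loose analogy with the moment-parametrised multivariate normal on continuous spaces.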
Computational Vision and Neuroscience Group
Abstract
Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium imaging and voltage sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial and temporal scales. Here, we present Bayesian methods based on Gaussian processes for extracting topographic maps from functional imaging data. In particular, we focus on the estimation of orientation preference maps (OPMs) from intrinsic signal imaging data. We model the underlying map as a bivariate Gaussian process, with a prior covariance function that reflects known properties of OPMs, and a noise covariance adjusted to the data. The posterior mean can be interpreted as an optimally smoothed estimate of the map, and can be used for model-based interpolations of the map from sparse measurements. By sampling from the posterior distribution, we can get error bars on statistical properties such as preferred orientations, pinwheel locations or pinwheel counts. Finally, the use of an explicit probabilistic model facilitates interpretation of parameters and quantitative model comparisons. We demonstrate our model both on simulated data and on intrinsic signal data from ferret visual cortex.
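The "posterior mean as optimally smoothed estimate" idea can be illustrated with a one-dimensional toy version of GP regression. The signal, kernel length-scale, and noise level below are invented stand-ins, not the paper's OPM prior, but the computation is the same: the posterior mean is a kernel-weighted smoother of the noisy observations.

```python
import numpy as np

rng = np.random.default_rng(4)

# Squared-exponential covariance between two sets of inputs.
def k(x1, x2, ell=0.5, var=1.0):
    return var * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

# Noisy observations of a smooth 1-D "map" (a sine wave stand-in).
x_obs = np.linspace(0, 2 * np.pi, 30)
y_obs = np.sin(x_obs) + 0.3 * rng.standard_normal(x_obs.size)
x_new = np.linspace(0, 2 * np.pi, 100)

# GP posterior mean: k(new, obs) @ (K + noise I)^{-1} y.
noise = 0.3 ** 2
K = k(x_obs, x_obs) + noise * np.eye(x_obs.size)
alpha = np.linalg.solve(K, y_obs)
post_mean = k(x_new, x_obs) @ alpha

print(np.max(np.abs(post_mean - np.sin(x_new))))  # smoothed estimate error
```

Posterior samples (and hence error bars on derived quantities such as pinwheel counts in the OPM setting) follow by additionally computing the posterior covariance and drawing from the resulting multivariate normal.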
Population Health Metrics (BioMed Central)
, 2004
Abstract
A note on the use of sensitivity analysis to explore the potential impact of declining institutional care utilisation on disability prevalence