Results 11–20 of 549
Statool: A Tool for Distribution Envelope Determination (DEnv), an Interval-Based Algorithm for Arithmetic on Random Variables
Reliable Computing, 2003
Abstract

Cited by 43 (11 self)
We present Statool, a software tool for obtaining bounds on the distributions of sums, products, and various other functions of random variables where the dependency relationship of the random variables need not be specified. Statool implements the DEnv algorithm, which we have described previously [4] but not implemented. Our earlier tool addressed only the much more elementary case of independent random variables [3]. An existing tool, RiskCalc [13], also addresses the case of unknown dependency using a different algorithm [33] based on copulas [23], while descriptions and implementations of still other algorithms for similar problems will be reported soon [17] as the area proceeds through a phase of rapid development.
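The kind of dependency-free envelope this abstract describes can be illustrated numerically: for Z = X + Y with unknown dependence, the classical Makarov/Fréchet-style pointwise bounds on P(Z ≤ z) are sup_x max(F_X(x) + F_Y(z−x) − 1, 0) and inf_x min(F_X(x) + F_Y(z−x), 1). The sketch below discretizes these bounds for two standard normal marginals; it illustrates the underlying idea only, not Statool's DEnv implementation, and the grid and marginals are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

def sum_cdf_bounds(F_X, F_Y, z, grid):
    """Pointwise bounds on P(X + Y <= z) when the dependence between
    X and Y is completely unknown, discretized over a grid of x values
    (Makarov-style best-possible bounds, up to discretization error)."""
    fx = F_X(grid)
    fy = F_Y(z - grid)
    lower = np.max(np.clip(fx + fy - 1.0, 0.0, None))
    upper = np.min(np.clip(fx + fy, None, 1.0))
    return lower, upper

grid = np.linspace(-10.0, 10.0, 2001)
# Two standard normal marginals, dependence unspecified:
lo, hi = sum_cdf_bounds(norm.cdf, norm.cdf, 0.0, grid)
```

At z = 0 the bounds span essentially the whole interval [0, 1]: without dependence information, almost nothing can be said about P(X + Y ≤ 0), which is why envelope-style output is the honest answer here.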
Euclidean embedding of co-occurrence data
Advances in Neural Information Processing Systems 17, 2005
Abstract

Cited by 36 (2 self)
Abstract Embedding algorithms search for low dimensional structure in complex data, but most algorithms only handle objects of a single type for which pairwise distances are specified. This paper describes a method for embedding objects of different types, such as images and text, into a single common Euclidean space based on their co-occurrence statistics. The joint distributions are modeled as exponentials of Euclidean distances in the low-dimensional embedding space, which links the problem to convex optimization over positive semidefinite matrices. The local structure of our embedding corresponds to the statistical correlations via random walks in the Euclidean space. We quantify the performance of our method on two text datasets, and show that it consistently and significantly outperforms standard methods of statistical correspondence modeling, such as multidimensional scaling and correspondence analysis.

1 Introduction
Embeddings of objects in a low-dimensional space are an important tool in unsupervised learning and in preprocessing data for supervised learning algorithms. They are especially valuable for exploratory data analysis and visualization by providing easily interpretable representations of the relationships among objects. Most current embedding techniques build low dimensional mappings that preserve certain relationships among objects and differ in the relationships they choose to preserve, which range from pairwise distances in multidimensional scaling (MDS) [4] to neighborhood structure in locally linear embedding [12]. All these methods operate on objects of a single type endowed with a measure of similarity or dissimilarity. However, real-world data often involve objects of several very different types without a natural measure of similarity. For example, typical web pages or scientific papers contain
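The model the abstract outlines (joint probabilities proportional to exponentials of squared Euclidean distances between the two embeddings) can be sketched with a small gradient-descent fit. This is a plain numpy illustration, not the paper's convex semidefinite formulation; the toy count matrix, learning rate, and embedding dimension are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy co-occurrence counts between 4 "documents" (rows) and 5 "words" (cols).
C = np.array([[8., 2., 0., 0., 1.],
              [7., 3., 1., 0., 0.],
              [0., 1., 6., 5., 2.],
              [0., 0., 5., 7., 3.]])
P_emp = C / C.sum()                      # empirical joint distribution

def model_probs(U, V):
    """p(i, j) proportional to exp(-||U_i - V_j||^2)."""
    d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2)
    return W / W.sum()

def nll(U, V):
    """Cross-entropy between empirical and model joint distributions."""
    return -(P_emp * np.log(model_probs(U, V))).sum()

U = 0.1 * rng.standard_normal((4, 2))    # row embeddings in the plane
V = 0.1 * rng.standard_normal((5, 2))    # column embeddings in the plane
init_nll = nll(U, V)
lr = 0.2
for _ in range(500):
    R = P_emp - model_probs(U, V)        # residual drives both gradients
    gU = 2 * (R.sum(1)[:, None] * U - R @ V)
    gV = 2 * (R.sum(0)[:, None] * V - R.T @ U)
    U -= lr * gU
    V -= lr * gV
```

After fitting, rows and columns that co-occur often end up close together in the shared plane, which is the property such an embedding is meant to expose.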
Estimation of copula-based semiparametric time series models
J. Econometrics, 2006
Abstract

Cited by 35 (9 self)
This paper studies the estimation of a class of copula-based semiparametric stationary Markov models. These models are characterized by nonparametric invariant (or marginal) distributions and parametric copula functions that capture the temporal dependence of the processes; the implied transition distributions are all semiparametric. Models in this class are easy to simulate, and can be expressed as semiparametric regression transformation models. One advantage of this copula approach is to separate out the temporal dependence (such as tail dependence) from the marginal behavior (such as fat-tailedness) of a time series. We present conditions under which processes generated by models in this class are β-mixing; naturally, these conditions depend only on the copula specification. Simple estimators of the marginal distribution and the copula parameter are provided, and their asymptotic properties are established under easily verifiable conditions. Estimators of important features of the transition distribution such as the (nonlinear) conditional moments and conditional quantiles are easily obtained from estimators of the marginal distribution and the copula parameter; their √n-consistency and asymptotic normality can be obtained using the Delta method. In addition, the semiparametric
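The "easy to simulate" claim can be made concrete for the simplest member of this class: a stationary chain whose (X_t, X_{t+1}) copula is Gaussian with parameter ρ and whose marginal is arbitrary. The sketch below (ρ and the exponential marginal are arbitrary illustrative choices) simulates on the normal scale and then transforms through the quantile function.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(1)

def gaussian_copula_markov(n, rho, marginal_ppf):
    """Stationary Markov chain: (X_t, X_{t+1}) has a Gaussian copula with
    parameter rho; the marginal distribution is set by marginal_ppf."""
    z = np.empty(n)
    z[0] = rng.standard_normal()
    shocks = rng.standard_normal(n - 1)
    for t in range(1, n):
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * shocks[t - 1]
    u = norm.cdf(z)              # copula (uniform) scale
    return marginal_ppf(u)       # arbitrary marginal, e.g. exponential

x = gaussian_copula_markov(20000, 0.8, expon.ppf)
```

The temporal dependence (ρ) and the marginal distribution are chosen independently of each other, which is exactly the separation the abstract emphasizes.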
Beyond Correlation: Extreme Comovements Between Financial Assets
2002
Abstract

Cited by 34 (5 self)
This paper investigates the potential for extreme comovements between financial assets by directly testing the underlying dependence structure. In particular, a t-dependence structure, derived from the Student-t distribution, is used as a proxy to test for this extremal behavior. Tests in three different markets (equities, currencies, and commodities) indicate that extreme comovements are statistically significant. Moreover, the "correlation-based" Gaussian dependence structure, underlying the multivariate Normal distribution, is rejected with negligible error probability when tested against the t-dependence alternative. The economic significance of these results is illustrated via three examples: comovements across the G5 equity markets; portfolio value-at-risk calculations; and pricing credit derivatives. JEL Classification: C12, C15, C52, G11. Keywords: asset returns, extreme comovements, copulas, dependence modeling, hypothesis testing, pseudo-likelihood, portfolio models, risk management. The authors would like to thank Andrew Ang, Mark Broadie, Loran Chollete, and Paul Glasserman for their helpful comments on an earlier version of this manuscript. Both authors are with the Columbia Graduate School of Business, email: {rm586,assaf.zeevi}@columbia.edu, current version available at www.columbia.edu/~rm586

1 Introduction
Specification and identification of dependencies between financial assets is a key ingredient in almost all financial applications: portfolio management, risk assessment, pricing, and hedging, to name but a few. The seminal work of Markowitz (1959) and the early introduction of the Gaussian modeling paradigm, in particular dynamic Brownian-based models, have both contributed greatly to making the concept of correlation almost synony...
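The contrast the abstract tests, Gaussian versus t dependence at the same correlation, is easy to reproduce by simulation: with identical correlation matrices, the t structure produces joint tail events far more often. All parameters below are illustrative (ρ = 0.5, ν = 3, 1% tails); this is not the paper's test statistic, only the phenomenon it exploits.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(2)
n, rho, nu = 200000, 0.5, 3
cov = np.array([[1.0, rho], [rho, 1.0]])

# Gaussian dependence: correlated normals mapped to the uniform scale.
g = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u_gauss = norm.cdf(g)

# t dependence: the same kind of correlated normals divided by a common
# chi-square mixing variable, then mapped through the t CDF.
w = rng.chisquare(nu, size=n) / nu
tt = rng.multivariate_normal([0.0, 0.0], cov, size=n) / np.sqrt(w)[:, None]
u_t = t.cdf(tt, df=nu)

def joint_tail_freq(u, q=0.01):
    """How often both components fall in their lower q-tail together."""
    return np.mean((u[:, 0] < q) & (u[:, 1] < q))
```

Both samples have the same marginals and the same correlation parameter; only the joint-tail frequency separates them, which is why testing the dependence structure directly matters.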
Portfolio Value-at-Risk with Heavy-Tailed Risk Factors
Mathematical Finance 12, 2002
Abstract

Cited by 34 (2 self)
This paper develops efficient methods for computing portfolio value-at-risk (VaR) when the underlying risk factors have a heavy-tailed distribution. In modeling heavy tails, we focus on multivariate t distributions and some extensions thereof. We develop two methods for VaR calculation that exploit a quadratic approximation to the portfolio loss, such as the delta-gamma approximation. In the first method, we derive the characteristic function of the quadratic approximation and then use numerical transform inversion to approximate the portfolio loss distribution. Because the quadratic approximation may not always yield accurate VaR estimates, we also develop a low-variance Monte Carlo method. This method uses the quadratic approximation to guide the selection of an effective importance sampling distribution that samples risk factors so that large losses occur more often. Variance is further reduced by combining the importance sampling with stratified sampling. Numerical results on a variety of test portfolios indicate that large variance reductions are typically obtained. Both methods developed in this paper overcome difficulties associated with VaR calculation with heavy-tailed risk factors. The Monte Carlo method also extends to the problem of estimating the conditional excess, sometimes known as the conditional VaR.
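The delta-gamma approximation the abstract refers to replaces the portfolio loss with L ≈ −(δ′ΔS + ½ΔS′ΓΔS). Below is a minimal plain Monte Carlo sketch under multivariate-t risk factors; δ, Γ, and the degrees of freedom are made-up illustrative values, and the paper's transform-inversion and importance-sampling estimators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

d, nu = 3, 5                            # risk factors; t degrees of freedom
delta = np.array([1.0, -0.5, 0.8])      # first-order sensitivities (made up)
Gamma = np.diag([0.2, 0.1, 0.3])        # second-order sensitivities (made up)

def simulate_losses(n):
    z = rng.standard_normal((n, d))
    w = rng.chisquare(nu, size=n) / nu
    dS = z / np.sqrt(w)[:, None]        # multivariate-t factor moves
    # Delta-gamma approximation: L = -(delta' dS + 0.5 dS' Gamma dS)
    return -(dS @ delta + 0.5 * np.einsum('ni,ij,nj->n', dS, Gamma, dS))

losses = simulate_losses(200000)
var_99 = np.quantile(losses, 0.99)          # 99% value-at-risk
ces_99 = losses[losses >= var_99].mean()    # conditional excess / CVaR
```

Replacing this plain estimator with an importance-sampling scheme guided by the quadratic form would concentrate samples in the loss tail; here only the quadratic setup itself is shown.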
Modelling Dependent Defaults
RISK, 2000
Abstract

Cited by 31 (6 self)
We consider the modelling of dependent defaults using latent variable models (the approach that underlies KMV and CreditMetrics) and mixture models (the approach underlying CreditRisk+). We explore the role of copulas in the latent variable framework and present results from a simulation study showing that, even for fixed asset correlation, assumptions concerning the dependence of the latent variables can have a large effect on the distribution of credit losses. We explore the effect of the tail of the mixing distribution on the tail of the credit-loss distribution. Finally, we discuss the relation between latent variable models and mixture models and provide general conditions under which these models can be mapped into each other. Our contribution can be viewed as an analysis of the model risk associated with the modelling of dependence between credit losses.
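The simulation finding can be reproduced in miniature: fix the asset correlation and the marginal default probability, and swap only the latent dependence (a Gaussian one-factor model versus its t counterpart, which adds tail dependence at the same correlation). The portfolio size and all parameters below are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm, t

rng = np.random.default_rng(4)
m, n, rho, nu, p = 100, 20000, 0.3, 4, 0.02   # obligors, scenarios, params

# Gaussian latent variables with a one-factor structure; obligor i
# defaults when its latent variable falls below the p-quantile.
f = rng.standard_normal((n, 1))               # systematic factor
e = rng.standard_normal((n, m))               # idiosyncratic noise
z = np.sqrt(rho) * f + np.sqrt(1.0 - rho) * e
defaults_gauss = (z < norm.ppf(p)).sum(axis=1)

# t latent variables: the same factor structure divided by a common
# chi-square mixing variable, so the correlation is unchanged but the
# latent vector gains tail dependence.
w = rng.chisquare(nu, size=(n, 1)) / nu
z_t = (np.sqrt(rho) * rng.standard_normal((n, 1))
       + np.sqrt(1.0 - rho) * rng.standard_normal((n, m))) / np.sqrt(w)
defaults_t = (z_t < t.ppf(p, df=nu)).sum(axis=1)
```

Both versions default 2% of the time on average; the t version concentrates those defaults in fewer, far worse scenarios, which is the model-risk point the abstract makes.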
A General Approach to Integrated Risk Management with Skewed, Fat-Tailed Risks
2005
Abstract

Cited by 31 (2 self)
Integrated risk management in a financial institution requires an approach for aggregating risk types (market, credit, and operational) whose distributional shapes vary considerably. In this paper, we construct the joint risk distribution for a typical large, internationally active bank using the method of copulas. This technique allows us to incorporate realistic marginal distributions, both conditional and unconditional, that capture some of the essential empirical features of these risks, such as skewness and fat tails, while allowing for a rich dependence structure. We explore the impact of business mix and inter-risk correlations on total risk, whether measured by value-at-risk or expected shortfall. We find that, for a given risk type, total risk is more sensitive to differences in business mix or risk weights than to differences in inter-risk correlations. There is a complex relationship between volatility and fat tails in determining the total risk: depending on the setting, they either offset or reinforce each other. The choice of copula (normal versus Student-t), which determines the level of tail dependence, has a more modest effect on risk. We then compare the copula-based method with several conventional approaches to computing risk.
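A toy version of this aggregation: sample the inter-risk dependence from a normal copula, push the uniforms through skewed and fat-tailed marginals, and read off value-at-risk and expected shortfall of the total. The marginals and correlations below are illustrative stand-ins, not the paper's calibrated bank model.

```python
import numpy as np
from scipy.stats import norm, lognorm, pareto

rng = np.random.default_rng(5)
n = 100000

# Inter-risk dependence: a normal copula with an assumed correlation matrix.
R = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.2],
              [0.2, 0.2, 1.0]])
z = rng.multivariate_normal(np.zeros(3), R, size=n)
u = norm.cdf(z)

# Marginals with very different shapes (all parameters illustrative).
market = norm.ppf(u[:, 0], scale=2.0)       # symmetric
credit = lognorm.ppf(u[:, 1], s=0.8)        # skewed
oper = pareto.ppf(u[:, 2], b=3.0)           # fat-tailed
total = market + credit + oper

var_999 = np.quantile(total, 0.999)
es_999 = total[total >= var_999].mean()     # expected shortfall
```

Because the copula is not comonotonic, the total-risk quantile sits below the sum of the stand-alone quantiles, which is the diversification effect the business-mix experiments in the paper probe.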
A systematic approach to the assessment of fuzzy association rules
Data Mining and Knowledge Discovery, 2006
Abstract

Cited by 30 (6 self)
In order to allow for the analysis of data sets including numerical attributes, several generalizations of association rule mining based on fuzzy sets have been proposed in the literature. While the formal specification of fuzzy associations is more or less straightforward, the assessment of such rules by means of appropriate quality measures is less obvious. Particularly, it assumes an understanding of the semantic meaning of a fuzzy rule. This aspect has been ignored by most existing proposals, which must therefore be considered as ad hoc to some extent. In this paper, we develop a systematic approach to the assessment of fuzzy association rules. To this end, we proceed from the idea of partitioning the data stored in a database into examples of a given rule, counterexamples, and irrelevant data. Evaluation measures are then derived from the cardinalities of the corresponding subsets. The problem of finding a proper partition has a rather obvious solution for standard association rules but becomes less trivial in the fuzzy case. Our results not only provide a sound justification for commonly used measures but also suggest a means for constructing meaningful alternatives.
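The partition idea can be sketched directly: each record contributes a degree of being an example, a counterexample, or irrelevant to a fuzzy rule A ⇒ B. The sketch below uses the minimum t-norm and made-up membership degrees; note that under this choice the three degrees need not sum to one per record, which is exactly the kind of non-triviality the abstract flags.

```python
import numpy as np

# Degrees to which five records satisfy the antecedent A and consequent B
# of a fuzzy rule A => B (membership values are made up for illustration).
mu_A = np.array([1.0, 0.8, 0.6, 0.0, 0.3])
mu_B = np.array([0.9, 0.7, 0.2, 1.0, 0.3])

# Partition each record by degree, using the minimum t-norm.
examples = np.minimum(mu_A, mu_B)             # record supports the rule
counterexamples = np.minimum(mu_A, 1 - mu_B)  # record violates the rule
irrelevant = 1 - mu_A                         # antecedent hardly applies

# Quality measures from the (fuzzy) cardinalities of the three subsets.
support = examples.sum() / len(mu_A)
confidence = examples.sum() / (examples.sum() + counterexamples.sum())
```

Swapping in a different t-norm changes both the partition and the resulting measures, which is why the choice has to be justified rather than made ad hoc.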
Estimation and model selection of semiparametric copula-based multivariate dynamic models under copula misspecification
Journal of Econometrics, 2006
Abstract

Cited by 28 (4 self)
Recently Chen and Fan (2003a) introduced a new class of semiparametric copula-based multivariate dynamic (SCOMDY) models. A SCOMDY model specifies the conditional mean and the conditional variance of a multivariate time series parametrically (such as VAR, GARCH), but specifies the multivariate distribution of the standardized innovation semiparametrically as a parametric copula evaluated at nonparametric marginal distributions. In this paper, we first study large sample properties of the estimators of SCOMDY model parameters under a misspecified parametric copula, and then establish pseudo likelihood ratio (PLR) tests for model selection between two SCOMDY models with possibly misspecified copulas. Finally we develop PLR tests for model selection between more than two SCOMDY models along the lines of the reality check of White (2000). The limiting distributions of the estimators of copula parameters and the PLR tests do not depend on the estimation of conditional mean and conditional variance parameters. Although the tests are affected by the estimation of unknown marginal distributions of standardized innovations, they have standard parametric rates and the limiting null distributions are very easy to simulate. Empirical applications to multiple
Semiparametric Pricing of Multivariate Contingent Claims
1999
Abstract

Cited by 27 (2 self)
This paper derives and implements a nonparametric, arbitrage-free technique for multivariate contingent claims (MVCC) pricing. This technique is based on nonparametric estimation of a multivariate risk-neutral density function using data from traded options markets and historical asset returns. “New” multivariate claims are priced using expectations under this measure. An appealing feature of nonparametric arbitrage-free derivative pricing is that fitted prices are obtained that are consistent with traded option prices and are not based on specific restrictions on the underlying asset price process or the functional form of the risk-neutral density. Nonparametric MVCC pricing utilizes the method of copulas to combine nonparametrically estimated marginal risk-neutral densities (based on options data) into a joint density using a separately estimated nonparametric dependence function (based on historical returns data). This paper provides theory linking objective and risk-neutral dependence functions, and empirically testable conditions that justify the use of historical data for estimation of the risk-neutral dependence function. The nonparametric MVCC pricing technique is implemented for the valuation of bivariate underperformance and outperformance options on the S&P 500 and DAX index. Price deviations are
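The copula step can be illustrated with parametric stand-ins: combine two marginal risk-neutral terminal-price distributions (here lognormal, spot 100 each) through a Gaussian dependence function and price an outperformance-style claim by Monte Carlo. Everything below (marginals, ρ, rates) is an arbitrary illustration of the mechanics, not the paper's nonparametric estimates.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 100000
r, T, rho = 0.02, 1.0, 0.6

# Dependence function: a Gaussian copula with correlation rho.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = norm.cdf(z)

def rn_quantile(u, sigma):
    """Quantile of a lognormal risk-neutral terminal price with spot 100."""
    mu = np.log(100.0) + (r - 0.5 * sigma**2) * T
    return np.exp(mu + sigma * np.sqrt(T) * norm.ppf(u))

s1 = rn_quantile(u[:, 0], 0.2)          # marginal for asset 1
s2 = rn_quantile(u[:, 1], 0.3)          # marginal for asset 2

# Outperformance claim: pays the amount by which asset 1 beats asset 2.
price = np.exp(-r * T) * np.maximum(s1 - s2, 0.0).mean()
```

The marginals and the dependence function enter separately, so either could be replaced by a nonparametric estimate (from options data and historical returns respectively) without touching the other, which is the structure the paper exploits.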