Results 1–7 of 7
Bayesian Model Averaging for Linear Regression Models
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 224 (13 self)
We consider the problem of accounting for model uncertainty in linear regression models. Conditioning on a single selected model ignores model uncertainty, and thus leads to the underestimation of uncertainty when making inferences about quantities of interest. A Bayesian solution to this problem involves averaging over all possible models (i.e., combinations of predictors) when making inferences about quantities of interest.
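The averaging over predictor subsets described in this abstract can be sketched with BIC-based weights, a common approximation to posterior model probabilities; the data, predictor count, and weighting scheme below are illustrative, not the paper's own setup:

```python
from itertools import combinations

import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): y depends on the first two of four candidate
# predictors; the other two are pure noise.
n, p = 200, 4
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

def bic(Xs, y):
    """BIC (up to an additive constant) of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return len(y) * np.log(resid @ resid / len(y)) + A.shape[1] * np.log(len(y))

# Enumerate all 2^p predictor subsets and weight each model by
# exp(-BIC/2), a standard approximation to its posterior probability.
models = [s for r in range(p + 1) for s in combinations(range(p), r)]
bics = np.array([bic(X[:, list(m)], y) for m in models])
weights = np.exp(-(bics - bics.min()) / 2)
weights /= weights.sum()

# Model-averaged posterior inclusion probability of each predictor:
# the total weight of the models that contain it.
incl = np.array([sum(w for m, w in zip(models, weights) if j in m)
                 for j in range(p)])
```

Inference about a quantity of interest then averages its model-specific estimates with these weights, rather than conditioning on the single best subset.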
Non-Redundant Data Clustering
, 2004
Abstract

Cited by 72 (3 self)
Data clustering is a popular approach for automatically finding classes, concepts, or groups of patterns. In practice this discovery process should avoid redundancies with existing knowledge about class structures or groupings, and reveal novel, previously unknown aspects of the data. In order to deal with this problem, we present an extension of the information bottleneck framework, called coordinated conditional information bottleneck, which takes negative relevance information into account by maximizing a conditional mutual information score subject to constraints. Algorithmically, one can apply an alternating optimization scheme that can be used in conjunction with different types of numeric and nonnumeric attributes. We present experimental results for applications in text mining and computer vision.
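The conditional mutual information score that this framework maximizes can be computed directly for discrete distributions; a minimal sketch (the function name and toy distribution are illustrative, not from the paper):

```python
import numpy as np

def conditional_mutual_information(p_xyz):
    """I(X; Y | Z) in nats for a discrete joint distribution p(x, y, z),
    given as a 3-D array summing to 1."""
    p_z = p_xyz.sum(axis=(0, 1))   # p(z)
    p_xz = p_xyz.sum(axis=1)       # p(x, z)
    p_yz = p_xyz.sum(axis=0)       # p(y, z)
    cmi = 0.0
    for x in range(p_xyz.shape[0]):
        for y in range(p_xyz.shape[1]):
            for z in range(p_xyz.shape[2]):
                pxyz = p_xyz[x, y, z]
                if pxyz > 0.0:
                    cmi += pxyz * np.log(
                        pxyz * p_z[z] / (p_xz[x, z] * p_yz[y, z]))
    return cmi

# X and Y perfectly dependent given the single value of Z: I = log 2.
p = np.zeros((2, 2, 1))
p[0, 0, 0] = p[1, 1, 0] = 0.5
```

Conditioning on Z (the known, "negative relevance" structure) makes the score reward cluster assignments that are informative about the data beyond what the existing grouping already explains.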
Default estimation for low-default portfolios
 Journal of Empirical Finance
, 2009
Abstract

Cited by 7 (4 self)
The problem in default probability estimation for lowdefault portfolios is that there is little relevant historical data information. No amount of data processing can fix this problem. More information is required. Incorporating expert opinion formally is an attractive option.
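One standard way to formalize "incorporating expert opinion" (not necessarily the authors' own construction) is a conjugate Beta prior on the default probability; every number below is illustrative:

```python
import numpy as np

# Expert opinion (illustrative): long-run default rate around 0.5%,
# encoded as a Beta(1, 199) prior, whose mean is 1/200 = 0.005.
a_prior, b_prior = 1.0, 199.0

# Sparse observed data, typical of a low-default portfolio:
# 500 obligor-years with zero observed defaults.
n_obs, defaults = 500, 0

# Conjugate Beta-binomial update.
a_post = a_prior + defaults
b_post = b_prior + (n_obs - defaults)
posterior_mean = a_post / (a_post + b_post)

# Conservative upper bound: 95% posterior quantile, via Monte Carlo.
rng = np.random.default_rng(1)
upper_95 = np.quantile(rng.beta(a_post, b_post, size=200_000), 0.95)
```

The prior supplies the information the data cannot: even with zero observed defaults, the posterior mean stays strictly positive instead of collapsing to an unusable estimate of zero.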
Utility-Based Categorization
, 1993
Abstract

Cited by 3 (1 self)
The ability to categorize and use concepts effectively is a basic requirement of any intelligent actor. The utility-based approach to categorization is founded on the thesis that categorization is fundamentally in service of action, i.e., the choice of concepts made by an actor is critical to its choice of appropriate actions. This is in contrast to classical and similarity-based approaches, which seek logical completeness in concept description with respect to sensory data rather than action-oriented effectiveness. Utility-based categorization is normative and not descriptive. It prescribes how an intelligent agent ought to conceptualize to act effectively. It provides ideals for categorization, specifies criteria for the design of effective computational agents, and provides a model of ideal competence. A decision-theoretic framework for utility-based categorization which involves reasoning about alternative categorization models of varying levels of abstraction is proposed. Categorization mode...
Monopoly Pricing in the Presence of Social Learning
, 2011
Abstract

Cited by 2 (0 self)
To be submitted on November 2011
A monopolist offers a product to a market of consumers with heterogeneous quality preferences. Although initially uninformed about the product quality, they learn by observing past purchase decisions and reviews of other consumers. Our goal is to analyze the social learning mechanism and its effect on the seller's pricing decision. This analysis borrows from the literature on social learning and on pricing and revenue management. Consumers follow a naive decision rule and, under some conditions, eventually learn the product's quality. Using a mean-field approximation, the dynamics of this learning process are characterized for markets with high demand intensity. The relationship between the price and the speed of learning depends on the heterogeneity of quality preferences. Two pricing strategies are studied: a static price and a single price change. Properties of the optimal prices are derived. Numerical experiments suggest that pricing strategies that account for social learning may increase revenues considerably relative to strategies that do not.
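A stylized simulation of the naive-decision-rule idea under a static price; the utility form, parameters, and review model below are assumptions made for illustration, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stylized market -- every ingredient here is an assumption.
q_true, price = 1.0, 0.8      # true quality and a fixed (static) price
n_consumers = 5000

belief = 0.5                  # shared initial point estimate of quality
reviews = []                  # reviews left by past buyers

for _ in range(n_consumers):
    theta = rng.uniform(0.0, 2.0)          # heterogeneous quality taste
    # Naive rule: buy iff expected utility under the current belief
    # (the running average of observed reviews) is non-negative.
    if theta * belief - price >= 0.0:
        # A buyer experiences quality with idiosyncratic noise and
        # leaves a review, which updates the shared belief.
        reviews.append(q_true + rng.normal(scale=0.2))
        belief = float(np.mean(reviews))

final_belief = belief
```

Even this crude version shows the mechanism the abstract describes: the price sets the taste threshold for buying, which controls how fast reviews accumulate and hence how fast the shared belief converges to the true quality.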
A Behavioural Bayes Method for Determining the Size of a Clinical Trial
, 1999
Abstract

Cited by 1 (1 self)
In this paper we introduce a fully Bayesian approach to sample size determination in clinical trials. In contrast to the usual Bayesian decision theoretic methodology, which assumes a single decision maker, our approach recognises the existence of three decision makers, namely: the pharmaceutical company conducting the trial, which decides on its size; the regulator, whose approval is necessary for the drug to be licensed for sale; and the public at large, who determine ultimate usage. Moreover, we model the subsequent usage by plausible assumptions for actual behaviour, rather than assuming that it represents decisions which are in some sense optimal. The results, not surprisingly, show that the optimal sample size depends strongly on the expected benefit from a conclusively favourable outcome, and on the strength of the evidence required by the regulator.
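A minimal sketch of choosing a sample size by maximizing expected net benefit under a prior on the treatment effect; the approval rule, benefit, and cost figures are hypothetical, and the public-usage stage the paper models is omitted here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical ingredients: a prior on the true treatment effect delta
# (in outcome standard deviations), a regulator who approves when a
# one-sided z-test clears z_crit, a fixed benefit on approval, and a
# per-patient cost.
prior_mean, prior_sd = 0.3, 0.2
benefit, cost_per_patient = 1e6, 500.0
z_crit = 1.96
n_sims = 4000

def expected_net_benefit(n):
    """Monte Carlo expected net benefit of a two-arm trial, n per arm."""
    delta = rng.normal(prior_mean, prior_sd, size=n_sims)
    # z-statistic for a difference of means with unit outcome variance:
    # mean delta, variance 2/n per simulated trial.
    z = delta * np.sqrt(n / 2.0) + rng.normal(size=n_sims)
    p_approve = np.mean(z > z_crit)
    return p_approve * benefit - cost_per_patient * 2 * n

candidates = list(range(50, 1001, 50))
best_n = max(candidates, key=expected_net_benefit)
```

The trade-off the abstract points to is visible directly: larger trials raise the probability of clearing the regulator's evidence threshold but cost more, so the optimum shifts with the expected benefit of a favourable outcome.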
Assessment of two approximation methods for computing posterior model probabilities