Results 1–10 of 24
DETERMINANT MAXIMIZATION WITH LINEAR MATRIX INEQUALITY CONSTRAINTS
Abstract

Cited by 183 (18 self)
The problem of maximizing the determinant of a matrix subject to linear matrix inequalities arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory. It can also be considered as a generalization of the semidefinite programming problem. We give an overview of the applications of the determinant maximization problem, pointing out simple cases where specialized algorithms or analytical solutions are known. We then describe an interior-point method, with a simplified analysis of the worst-case complexity and numerical results that indicate that the method is very efficient, both in theory and in practice. Compared to existing specialized algorithms (where they are available), the interior-point method will generally be slower; the advantage is that it handles a much wider variety of problems.
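One of the simple cases with a known analytical solution is maximizing det(X) over positive definite X with a fixed trace: by the arithmetic-geometric mean inequality applied to the eigenvalues, the maximizer is a multiple of the identity. A minimal numerical check of this special case (assuming NumPy; this is not the interior-point method described above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_psd_with_trace(n, trace):
    """Draw a random symmetric positive definite matrix, rescaled to a fixed trace."""
    A = rng.standard_normal((n, n))
    X = A @ A.T + n * np.eye(n)  # well-conditioned SPD
    return X * (trace / np.trace(X))

# det(X) subject to tr(X) = n, X positive definite, is maximized at X = I,
# where det(X) = 1: the eigenvalues are positive, sum to n, and by AM-GM
# their product is at most 1, with equality iff they are all equal.
best = np.linalg.det(np.eye(n))
candidates = [np.linalg.det(random_psd_with_trace(n, n)) for _ in range(200)]
print(best, max(candidates))  # every random feasible point has det <= 1
```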
Skin-color modeling and adaptation
 In Proceedings of ACCV'98 (Technical Report CMU-CS-97-146, CS Department, CMU)
, 1997
Abstract

Cited by 140 (6 self)
Abstract. This paper studies a statistical skin-color model and its adaptation. It is revealed that (1) human skin colors cluster in a small region in a color space; (2) the variance of a skin-color cluster can be reduced by intensity normalization, and (3) under a certain lighting condition, a skin-color distribution can be characterized by a multivariate normal distribution in the normalized color space. We then propose an adaptive model to characterize human skin-color distributions for tracking human faces under different lighting conditions. The parameters of the model are adapted based on the maximum likelihood criterion. The model has been successfully applied to a real-time face tracker and other applications.
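The normalization-then-fit idea can be sketched in a few lines (assuming NumPy; the synthetic pixel values and noise levels below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "skin" pixels: RGB values around a skin-like tone, with a
# multiplicative brightness factor standing in for lighting variation.
base = np.array([180.0, 120.0, 90.0])
pixels = base * rng.uniform(0.5, 1.5, size=(1000, 1))  # lighting changes
pixels += rng.normal(0, 5, size=(1000, 3))             # sensor noise

# Intensity normalization: chromatic coordinates (r, g) = (R, G) / (R + G + B).
# Brightness scales all three channels equally, so it largely cancels here.
s = pixels.sum(axis=1, keepdims=True)
rg = pixels[:, :2] / s

# Maximum likelihood fit of a bivariate normal in the normalized space.
mean = rg.mean(axis=0)
cov = np.cov(rg, rowvar=False, bias=True)  # ML uses the 1/N estimator
```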
Binary models for marginal independence
 JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B
, 2005
Abstract

Cited by 16 (2 self)
A number of authors have considered multivariate Gaussian models for marginal independence. In this paper we develop models for binary data with the same independence structure. The models can be parameterized based on Möbius inversion, and maximum likelihood estimation can be performed using a version of the Iterated Conditional Fitting algorithm. The approach is illustrated on a simple example. Relations to multivariate logistic and dependence ratio models are discussed.
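The Möbius parameterization for binary data can be illustrated directly: with q_A = P(X_i = 0 for all i in A), joint cell probabilities follow by inclusion-exclusion. A small sketch (the function name and the two-variable example are illustrative, not the paper's notation):

```python
from itertools import combinations

def joint_from_mobius(q, n):
    """Recover joint cell probabilities from Mobius parameters.

    q maps frozenset A -> P(X_i = 0 for all i in A), with q[frozenset()] = 1.
    By inclusion-exclusion, for a cell whose zero-set is Z:
      P(X_Z = 0, X_{V\\Z} = 1) = sum over Z <= B <= V of (-1)^(|B|-|Z|) q[B].
    """
    V = frozenset(range(n))
    cells = {}
    for k in range(n + 1):
        for Z in combinations(range(n), k):
            Z = frozenset(Z)
            p = 0.0
            extra = sorted(V - Z)
            for m in range(len(extra) + 1):
                for add in combinations(extra, m):
                    B = Z | frozenset(add)
                    p += (-1) ** (len(B) - len(Z)) * q[B]
            cells[Z] = p
    return cells

# Two marginally independent Bernoulli variables with P(X_i = 0) = 0.6,
# so q_{12} = 0.6 * 0.6 = 0.36 (q factorizes under marginal independence).
q = {frozenset(): 1.0, frozenset([0]): 0.6, frozenset([1]): 0.6,
     frozenset([0, 1]): 0.36}
cells = joint_from_mobius(q, 2)
```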
A new algorithm for maximum likelihood estimation in Gaussian graphical models for marginal independence
 In U. Kjærulff and C. Meek (Eds.), Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence
, 2003
Abstract

Cited by 15 (7 self)
Graphical models with bidirected edges (↔) represent marginal independence: the absence of an edge between two vertices indicates that the corresponding variables are marginally independent. In this paper, we consider maximum likelihood estimation in the case of continuous variables with a Gaussian joint distribution, sometimes termed a covariance graph model. We present a new fitting algorithm which exploits standard regression techniques and establish its convergence properties. Moreover, we contrast our procedure with existing estimation algorithms.
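For a tiny covariance graph model, the maximum likelihood problem can also be attacked by direct numerical optimization; the sketch below (assuming NumPy and SciPy) fixes sigma_13 = 0 and maximizes the Gaussian likelihood over the free entries. This is a generic substitute for illustration, not the regression-based algorithm of the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# True covariance with sigma_13 = 0 (graph 1 <-> 2 <-> 3, no 1 <-> 3 edge).
Sigma_true = np.array([[1.0, 0.4, 0.0],
                       [0.4, 1.0, 0.3],
                       [0.0, 0.3, 1.0]])
X = rng.multivariate_normal(np.zeros(3), Sigma_true, size=2000)
S = np.cov(X, rowvar=False, bias=True)  # sample covariance

def build(theta):
    """Free parameters: three variances plus the two allowed covariances."""
    v1, v2, v3, s12, s23 = theta
    return np.array([[v1, s12, 0.0], [s12, v2, s23], [0.0, s23, v3]])

def negloglik(theta):
    """Gaussian negative log-likelihood, up to constants: log det(S) + tr(S^-1 * sample cov)."""
    Sigma = build(theta)
    try:
        L = np.linalg.cholesky(Sigma)  # fails if Sigma is not positive definite
    except np.linalg.LinAlgError:
        return 1e10
    logdet = 2 * np.log(np.diag(L)).sum()
    return logdet + np.trace(np.linalg.solve(Sigma, S))

res = minimize(negloglik, x0=[1.0, 1.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
Sigma_hat = build(res.x)  # fitted covariance with the structural zero intact
```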
Partial inversion for linear systems and partial closure of independence graphs
 BIT, Numer. Math
Abstract

Cited by 14 (11 self)
We introduce and study a calculus for real-valued square matrices, called partial inversion, and an associated calculus for binary square matrices. The first, applied to systems of recursive linear equations, generates new sets of parameters for different types of statistical joint response models. The corresponding generating graphs are directed and acyclic. The second calculus, applied to matrix representations of independence graphs, gives chain graphs induced by such a generating graph. Chain graphs are more complex independence graphs associated with recursive joint response models. Missing edges in independence graphs coincide with structurally zero parameters in linear systems. A wide range of consequences of an assumed independence structure can be derived by partial closure, but computationally efficient algorithms still need to be developed for applications to very large graphs.
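For real-valued matrices, partial inversion on a block has a simple closed form, and applying it twice on the same block recovers the original matrix. A sketch of the leading-block case (assuming NumPy; the block formula follows the standard partitioned form):

```python
import numpy as np

def partial_invert(M, k):
    """Partially invert M on its leading k x k block A, for M = [[A, B], [C, D]].

    Returns [[A^-1, A^-1 B], [-C A^-1, D - C A^-1 B]].
    Applying the operation twice on the same block recovers M, and partially
    inverting on all indices yields the ordinary inverse.
    """
    A, B = M[:k, :k], M[:k, k:]
    C, D = M[k:, :k], M[k:, k:]
    Ainv = np.linalg.inv(A)
    top = np.hstack([Ainv, Ainv @ B])
    bot = np.hstack([-C @ Ainv, D - C @ Ainv @ B])
    return np.vstack([top, bot])

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5)) + 5 * np.eye(5)  # keep the leading block invertible
P = partial_invert(M, 2)
```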
Covariance Chains
 Bernoulli
, 2006
Abstract

Cited by 13 (9 self)
Covariance matrices which can be arranged in tridiagonal form are called covariance chains. They are used to clarify some issues of parameter equivalence and of independence equivalence for linear models in which a set of latent variables influences a set of observed variables. For this purpose, orthogonal decompositions for covariance chains are derived first in explicit form. Covariance chains are also contrasted to concentration chains, for which estimation is explicit and simple. For this purpose, maximum-likelihood equations are derived first for exponential families when some parameters satisfy zero value constraints. From these equations explicit estimates are obtained, which are asymptotically efficient, and they are applied to covariance chains. Simulation results confirm the satisfactory behaviour of the explicit covariance chain estimates also in moderate-size samples.
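A covariance chain can be written down directly: nonadjacent variables have zero covariance, hence are marginally independent under Gaussianity, while the concentration matrix (the inverse covariance) is generally dense, so a covariance chain is typically not a concentration chain. A small check (assuming NumPy; the numerical entries are illustrative):

```python
import numpy as np

# A covariance chain: tridiagonal covariance matrix for four variables.
Sigma = np.array([[1.0, 0.5, 0.0, 0.0],
                  [0.5, 1.0, 0.4, 0.0],
                  [0.0, 0.4, 1.0, 0.3],
                  [0.0, 0.0, 0.3, 1.0]])
assert np.all(np.linalg.eigvalsh(Sigma) > 0)  # a valid covariance matrix

# Nonadjacent pairs are marginally independent (zero covariance), yet the
# concentration matrix has no zeros off the band: the inverse of a
# tridiagonal matrix is generically full.
K = np.linalg.inv(Sigma)
```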
Principal fitted components for dimension reduction in regression
 Statistical Science
Abstract

Cited by 7 (1 self)
Abstract. We provide a remedy for two concerns that have dogged the use of principal components in regression: (i) principal components are computed from the predictors alone and do not make apparent use of the response, and (ii) principal components are not invariant or equivariant under full rank linear transformation of the predictors. The development begins with principal fitted components [Cook, R. D. (2007). Fisher lecture: Dimension reduction in regression (with discussion). Statist. Sci. 22 1–26] and uses normal models for the inverse regression of the predictors on the response to gain reductive information for the forward regression of interest. This approach includes methodology for testing hypotheses about the number of components and about conditional independencies among the predictors. Key words and phrases: Central subspace, dimension reduction, inverse regression, principal components.
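Under a simple isotropic-error model, principal fitted components reduce to an eigendecomposition of the covariance of the fitted values from the inverse regression of the predictors on a basis of the response. A toy sketch of this special case with one reduction direction (assuming NumPy; the simulated data and polynomial basis are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward regression y = x @ beta + noise; the reduction should
# recover the direction of beta.
n, p = 500, 6
beta = np.zeros(p)
beta[0] = 1.0
X = rng.standard_normal((n, p))
y = X @ beta + 0.2 * rng.standard_normal(n)

# Inverse regression: fit centered predictors on a polynomial basis of y,
# then take the top eigenvector of the fitted values' covariance
# (principal fitted components with isotropic errors, d = 1).
F = np.column_stack([y, y**2, y**3])
F -= F.mean(axis=0)
Xc = X - X.mean(axis=0)
B, *_ = np.linalg.lstsq(F, Xc, rcond=None)
fitted = F @ B
evals, evecs = np.linalg.eigh(fitted.T @ fitted / n)
direction = evecs[:, -1]  # estimated reduction direction (sign is arbitrary)
```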
The Wishart Distributions on Homogeneous Cones
, 2001
Abstract

Cited by 7 (1 self)
The classical family of Wishart distributions on a cone of positive definite matrices and its fundamental features are extended to a family of generalized Wishart distributions on a homogeneous cone using the theory of exponential families. The generalized Wishart distributions include all known families of Wishart distributions as special cases. The relations to graphical models and Bayesian statistics are indicated.
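The classical case can be checked by direct simulation: a Wishart matrix with df degrees of freedom and scale S is a sum of df outer products of N(0, S) vectors, so each sample lies in the cone of positive semidefinite matrices and the mean is df * S. A sketch (assuming NumPy; the parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Classical Wishart on the cone of positive definite matrices:
# W = sum of df outer products g g^T with g ~ N(0, Scale), so E[W] = df * Scale.
df = 10
Scale = np.array([[2.0, 0.5], [0.5, 1.0]])
L = np.linalg.cholesky(Scale)

def wishart_sample():
    G = L @ rng.standard_normal((2, df))  # df columns drawn from N(0, Scale)
    return G @ G.T

# Monte Carlo estimate of the mean.
W_mean = sum(wishart_sample() for _ in range(4000)) / 4000
```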
CHANGING PARAMETERS BY PARTIAL MAPPINGS
, 2008
Abstract

Cited by 1 (1 self)
Changes between different sets of parameters are often needed in multivariate statistical modeling such as transformations within linear regression or in exponential models. There may, for instance, be specific inference questions based on subject matter interpretations, alternative well-fitting constrained models, compatibility judgements of seemingly distinct constrained models, or different reference priors under alternative parameterizations. We introduce and discuss a partial mapping, called partial replication, and relate it to a more complex mapping, called partial inversion. Both operations are used to decompose matrix operations, to explain recursion relations among sets of linear parameters, to change between different types of linear models, to approximate maximum-likelihood estimates in exponential family models under independence constraints, and to switch partially between sets of canonical and moment parameters in exponential family distributions or between sets of corresponding maximum-likelihood estimates.
Full Bayesian significance test applied to multivariate normal Structure Models
 BRAZILIAN JOURNAL OF PROBABILITY AND STATISTICS (2003), 17, PP. 147–168
, 2003
Abstract

Cited by 1 (1 self)
The Full Bayesian Significance Test (FBST) for precise hypotheses is applied to a Multivariate Normal Structure (MNS) model. In the FBST we compute the evidence against the precise hypothesis. This evidence is the probability of the Highest Relative Surprise Set (HRSS) tangent to the submanifold (of the parameter space) that defines the null hypothesis. The MNS model we present appears when testing equivalence conditions for genetic expression measurements, using microarray technology.