Results 11–20 of 536
Use of Kriging Models to Approximate Deterministic Computer Models
 AIAA Journal
, 2005
Abstract

Cited by 81 (0 self)
The use of kriging models for approximation and global optimization has been steadily on the rise in the past decade. The standard approach used in the Design and Analysis of Computer Experiments (DACE) is to use an Ordinary kriging model to approximate a deterministic computer model. Universal and Detrended kriging are two alternative types of kriging models. In this paper, a description of the basics of kriging is given, highlighting the similarities and differences between these three types of kriging models and the underlying assumptions behind each. A comparative study of the three types of kriging models is then presented using six test problems. The methods of Maximum Likelihood Estimation (MLE) and Cross-Validation (CV) for model parameter estimation are compared for the three kriging model types. A one-dimensional problem is first used to visualize the differences between the models. To show applications in higher dimensions, four two-dimensional problems and a five-dimensional problem are also given.
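The Ordinary kriging predictor discussed in this abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's code: the Gaussian correlation function, the fixed correlation parameter `theta`, and the test function are all assumptions for the example (in the paper's setting, `theta` would be estimated by MLE or CV).

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(xs, ys, theta):
    """Return a predictor f(x*) for an Ordinary kriging model with
    Gaussian correlation R_ij = exp(-theta * (x_i - x_j)^2)."""
    n = len(xs)
    R = [[math.exp(-theta * (xs[i] - xs[j]) ** 2) for j in range(n)]
         for i in range(n)]
    Rinv_y = solve(R, ys)
    Rinv_1 = solve(R, [1.0] * n)
    mu = sum(Rinv_y) / sum(Rinv_1)          # GLS estimate of the constant trend
    w = solve(R, [y - mu for y in ys])      # R^{-1} (y - mu * 1)
    def predict(x):
        r = [math.exp(-theta * (x - xi) ** 2) for xi in xs]
        return mu + sum(ri * wi for ri, wi in zip(r, w))
    return predict

# Interpolate a (hypothetical) deterministic test function from 5 samples
f = lambda x: math.sin(2 * x) + 0.5 * x
xs = [0.0, 0.7, 1.5, 2.3, 3.0]
pred = ordinary_kriging(xs, [f(x) for x in xs], theta=2.0)
```

Note the defining property of kriging for deterministic models: the predictor reproduces the training data exactly at the sample sites, since the correlation vector at a sample point equals a row of the correlation matrix.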
A p* primer: logit models for social networks
 SOCIAL NETWORKS
, 1999
Abstract

Cited by 78 (1 self)
A major criticism of the statistical models for analyzing social networks developed by Holland, Leinhardt, and others (Holland, P.W., Leinhardt, S., 1977. Notes on the statistical analysis of social network data; Holland, P.W., Leinhardt, S., 1981. An exponential family of probability distributions for directed graphs. Journal of the American Statistical Association 76, pp. 33–65, with discussion; Fienberg, S.E., Wasserman,
Bayesian and Regularization Methods for Hyperparameter Estimation in Image Restoration
 IEEE Trans. Image Processing
, 1999
Abstract

Cited by 77 (28 self)
In this paper, we propose the application of the hierarchical Bayesian paradigm to the image restoration problem. We derive expressions for the iterative evaluation of the two hyperparameters by applying the evidence and maximum a posteriori (MAP) analyses within the hierarchical Bayesian paradigm. We show analytically that the analysis provided by the evidence approach is more realistic and appropriate than the MAP approach for the image restoration problem. We furthermore study the relationship between the evidence approach and an iterative approach resulting from the set-theoretic regularization approach for estimating the two hyperparameters, or their ratio, defined as the regularization parameter. Finally, the proposed algorithms are tested experimentally.
Chain Graph Models and their Causal Interpretations
 B
, 2001
Abstract

Cited by 68 (7 self)
Chain graphs are a natural generalization of directed acyclic graphs (DAGs) and undirected graphs. However, the apparent simplicity of chain graphs belies the subtlety of the conditional independence hypotheses that they represent. There are a number of simple and apparently plausible, but ultimately fallacious interpretations of chain graphs that are often invoked, implicitly or explicitly. These interpretations also lead to flawed methods for applying background knowledge to model selection. We present a valid interpretation by showing how the distribution corresponding to a chain graph may be generated as the equilibrium distribution of dynamic models with feedback. These dynamic interpretations lead to a simple theory of intervention, extending the theory developed for DAGs. Finally, we contrast chain graph models under this interpretation with simultaneous equation models which have traditionally been used to model feedback in econometrics. Keywords: Causal model; cha...
HIGH-DIMENSIONAL ISING MODEL SELECTION USING ℓ1-REGULARIZED LOGISTIC REGRESSION
 SUBMITTED TO THE ANNALS OF STATISTICS
Abstract

Cited by 65 (17 self)
We consider the problem of estimating the graph associated with a binary Ising Markov random field. We describe a method based on ℓ1-regularized logistic regression, in which the neighborhood of any given node is estimated by performing logistic regression subject to an ℓ1-constraint. The method is analyzed under high-dimensional scaling, in which both the number of nodes p and the maximum neighborhood size d are allowed to grow as a function of the number of observations n. Our main results provide sufficient conditions on the triple (n, p, d) and the model parameters for the method to succeed in consistently estimating the neighborhood of every node in the graph simultaneously. With coherence conditions imposed on the population Fisher information matrix, we prove that consistent neighborhood selection can be obtained for sample sizes n = Ω(d³ log p), with exponentially decaying error. When these same conditions are imposed directly on the sample matrices, we show that a reduced sample size of n = Ω(d² log p) suffices for the method to estimate neighborhoods consistently. Although this paper focuses on binary graphical models, we indicate how a generalization of the method would apply to general discrete Markov random fields.
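The per-node regression at the heart of this method can be demonstrated on a toy instance. In the sketch below, the 4-node Ising chain (coupling β = 0.8), the regularization weight, the step size, the sample size, and the plain ISTA solver are all illustrative assumptions, not values or code from the paper; the true neighborhood of node 0 in the chain 0–1–2–3 is {1}, so the ℓ1-penalized fit should put its weight on the first feature.

```python
import math
import random
import itertools

random.seed(0)
BETA = 0.8  # chain couplings: 0-1, 1-2, 2-3 (assumed for the example)

# Exact sampling from a 4-spin Ising chain by enumerating all 16 states
states = [tuple(s) for s in itertools.product((-1, 1), repeat=4)]
weights = [math.exp(BETA * (s[0]*s[1] + s[1]*s[2] + s[2]*s[3])) for s in states]
data = random.choices(states, weights=weights, k=2000)

# Neighborhood regression for node 0: features are the other spins
X = [s[1:] for s in data]
y = [s[0] for s in data]

def ista_l1_logistic(X, y, lam, step=0.5, iters=300):
    """Proximal gradient (ISTA) for l1-regularized logistic regression
    with +/-1 labels: minimize (1/n) sum log(1 + exp(-y x.w)) + lam*|w|_1."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        g = [0.0] * d
        for xi, yi in zip(X, y):
            m = yi * sum(wj * xj for wj, xj in zip(w, xi))
            c = -yi / (1.0 + math.exp(m))   # = -yi * sigma(-m), logistic-loss slope
            for j in range(d):
                g[j] += c * xi[j] / n
        # gradient step followed by soft-thresholding (the l1 proximal operator)
        w = [math.copysign(max(abs(wj - step * gj) - step * lam, 0.0),
                           wj - step * gj)
             for wj, gj in zip(w, g)]
    return w

w = ista_l1_logistic(X, y, lam=0.05)
# w[0] (true neighbor s1) should dominate; w[1], w[2] should shrink toward 0
```

The soft-thresholding step is what drives the coefficients of non-neighbors to (near) zero, which is how the estimated neighborhood is read off the fit.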
Density biased sampling: an improved method for data mining and clustering
 Proceedings of the 2000 ACM SIGMOD international conference on Management of data, pp.82–92, 2000
Abstract

Cited by 62 (4 self)
Data mining in large data sets often requires a sampling or summarization step to form an in-core representation of the data that can be processed more efficiently. Uniform random sampling is frequently used in practice and also frequently criticized because it will miss small clusters. Many natural phenomena are known to follow Zipf's distribution, and the inability of uniform sampling to find small clusters is of practical concern. Density Biased Sampling is proposed to probabilistically undersample dense regions and oversample light regions. A weighted sample is used to preserve the densities of the original data. Density biased sampling naturally includes uniform sampling as a special case. A memory-efficient algorithm is proposed that approximates density biased sampling using only a single scan of the data. We empirically evaluate density biased sampling using synthetic data sets that exhibit varying cluster size distributions, finding up to a factor of six improvement over uniform sampling.
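The idea can be sketched with a simple grid-based variant. This is an illustration under assumed parameters, not the paper's one-scan hash-based algorithm: the cell size, sampling rate, density exponent `e`, and inverse-inclusion-probability weighting are all choices made for the example. Setting `e = 0` recovers plain uniform sampling as the special case the abstract mentions.

```python
import random
from collections import defaultdict

def density_biased_sample(points, cell, rate, e=1.0, seed=0):
    """Grid-based density biased sampling sketch: a point in a cell holding
    n_g points is kept with probability proportional to n_g**(-e), and kept
    points carry weight 1/prob so the original densities are preserved."""
    rng = random.Random(seed)
    key = lambda p: tuple(int(c // cell) for c in p)
    # Pass 1: per-cell counts (the paper approximates this in one scan)
    counts = defaultdict(int)
    for p in points:
        counts[key(p)] += 1
    # Normalizer so the expected sample size is rate * len(points)
    z = sum(n ** (1.0 - e) for n in counts.values())
    target = rate * len(points)
    sample = []
    for p in points:
        prob = min(1.0, target * counts[key(p)] ** (-e) / z)
        if rng.random() < prob:
            sample.append((p, 1.0 / prob))  # weight = inverse inclusion probability
    return sample

# One dense cluster (900 points) and one small cluster (30 points, far away)
points = ([(i % 30 * 0.1, i // 30 * 0.1) for i in range(900)]
          + [(100.0 + i * 0.1, 100.0) for i in range(30)])
sample = density_biased_sample(points, cell=10.0, rate=0.1)
```

With `e = 1.0` every occupied cell contributes roughly the same expected number of sample points, so the small far-away cluster is fully represented while the dense cluster is heavily thinned, which is exactly the failure mode of uniform sampling that the paper targets.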
Parameter estimation in TV image restoration using variational distribution approximation
 IEEE TRANS. IMAGE PROCESSING
, 2008
Abstract

Cited by 57 (31 self)
In this paper, we propose novel algorithms for total variation (TV) based image restoration and parameter estimation utilizing variational distribution approximations. Within the hierarchical Bayesian formulation, the reconstructed image and the unknown hyperparameters for the image prior and the noise are simultaneously estimated. The proposed algorithms provide approximations to the posterior distributions of the latent variables using variational methods. We show that some of the current approaches to TV-based image restoration are special cases of our framework. Experimental results show that the proposed approaches provide competitive performance without any assumptions about unknown hyperparameters and clearly outperform existing methods when additional information is included.
Likelihood-Based Inference for Max-Stable Processes
 Journal of the American Statistical Association
Abstract

Cited by 56 (5 self)
The last decade has seen max-stable processes emerge as a common tool for the statistical modelling of spatial extremes. However, their application is complicated due to the unavailability of the multivariate density function, and so likelihood-based methods remain far from providing a complete and flexible framework for inference. In this article we develop inferentially practical, likelihood-based methods for fitting max-stable processes derived from a composite-likelihood approach. The procedure is sufficiently reliable and versatile to permit the simultaneous modelling of joint and marginal parameters in the spatial context at a moderate computational cost. The utility of this methodology is examined via simulation, and illustrated by the analysis of U.S. precipitation extremes. Keywords: Composite likelihood; Extreme value theory; Max-stable processes; Pseudo-likelihood; Rainfall; Spatial extremes.
Estimating the "Wrong" Graphical Model: Benefits in the ComputationLimited Setting
 Journal of Machine Learning Research
, 2006
Abstract

Cited by 52 (2 self)
Consider the problem of joint parameter estimation and prediction in a Markov random field: that is, the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation.