Results 21 – 27 of 27
Artificial Neural Networks
, 2006
Abstract
Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANNs using familiar econometric terminology and provide an overview of the ANN modeling approach and its implementation methods.
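The view of an ANN as a flexible nonlinear regression can be sketched in a few lines. The single-hidden-layer form, logistic activation, and parameter names below are illustrative choices, not taken from the entry:

```python
import numpy as np

def ann_forecast(x, W, b, beta, beta0):
    """Single-hidden-layer ANN read as a nonlinear regression:
    y = beta0 + beta' g(W x + b), with logistic activation g.
    (Illustrative parameterisation, not the entry's notation.)"""
    g = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # hidden-unit activations
    return beta0 + beta @ g

# Example: 3 inputs, 2 hidden units, random illustrative weights
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
b = np.zeros(2)
beta = rng.normal(size=2)
x = np.array([1.0, -0.5, 2.0])
print(ann_forecast(x, W, b, beta, 0.1))
```

In econometric terms, the hidden units play the role of data-driven basis functions, so the model nests linear regression (one linear hidden unit) as a special case.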
A Statistical Perspective on Data Mining
Abstract
Technological advances have led to new and automated data collection methods. Datasets once at a premium are often plentiful nowadays, and sometimes indeed massive. A new breed of challenges is thus presented: primary among them is the need for methodology to analyze such masses of data with a view to understanding complex phenomena and relationships. Such capability is provided by data mining, which combines core statistical techniques with those from machine intelligence. This article reviews the current state of the discipline from a statistician's perspective, illustrates issues with real-life examples, and discusses the connections with statistics, the differences, the failings, and the challenges ahead.
heckerma@microsoft.com
Abstract
We extend the Bayesian Information Criterion (BIC), an asymptotic approximation for the marginal likelihood, to Bayesian networks with hidden variables. This approximation can be used to select models given large samples of data. The standard BIC, as well as our extension, penalizes the complexity of a model according to the dimension of its parameters. We argue that the dimension of a Bayesian network with hidden variables is the rank of the Jacobian matrix of the transformation between the parameters of the network and the parameters of the observable variables. We compute the dimensions of several networks, including the naive Bayes model with a hidden root node.
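The rank-of-the-Jacobian idea can be checked numerically for the smallest interesting case: a naive Bayes model with a hidden binary root and two binary observables. The parameter values, sample size, and log-likelihood below are made up for illustration:

```python
import numpy as np

def joint(theta):
    """Map network parameters to the observable joint distribution.
    theta = (t, a1, a2, b1, b2): naive Bayes with hidden binary root H,
    p(x1, x2) = sum_h p(h) p(x1|h) p(x2|h).  (Illustrative parameterisation.)"""
    t, a1, a2, b1, b2 = theta
    p = np.empty(4)
    for i, (x1, x2) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        p0 = (a1 if x1 else 1 - a1) * (a2 if x2 else 1 - a2)  # h = 0 branch
        p1 = (b1 if x1 else 1 - b1) * (b2 if x2 else 1 - b2)  # h = 1 branch
        p[i] = (1 - t) * p0 + t * p1
    return p

def jacobian_rank(theta, eps=1e-6):
    """Numerical rank of the 4x5 Jacobian via central differences."""
    J = np.empty((4, 5))
    for j in range(5):
        d = np.zeros(5); d[j] = eps
        J[:, j] = (joint(theta + d) - joint(theta - d)) / (2 * eps)
    return np.linalg.matrix_rank(J, tol=1e-8)

theta = np.array([0.3, 0.2, 0.6, 0.7, 0.4])  # a generic parameter point
d = jacobian_rank(theta)        # effective dimension: 3, not the naive count 5
n, loglik = 1000, -1234.5       # hypothetical sample size and fitted log-likelihood
bic = loglik - 0.5 * d * np.log(n)
print(d, round(bic, 2))
```

At a generic point the rank is 3 (the two-observable model is in fact full-dimensional: the four joint probabilities have three degrees of freedom), so the adjusted BIC penalizes 3 parameters rather than 5.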
Aspects of the Interface between Statistics and . . .
, 1999
Abstract
In recent years the cross-fertilisation of ideas between the statistics and machine learning communities has become increasingly important. This exchange arose from a recognition that the two communities often have to tackle similar problems, and it has enriched both disciplines. There is much to be gained in considering the two literatures in tandem, and the aim of this thesis is to build on some of the research currently taking place at the interface between these two disciplines. Specifically, we will be considering a class of models called Bayesian belief networks. These models are closely related to neural networks, a type of model often used in machine learning but largely eschewed by statisticians due to their `black box' approach. Neural networks, while useful tools, lack transparency; by their nature it is difficult to interpret the method by which a neural network ...
unknown title
Abstract
A Bayesian network is a graphical model that encodes probabilistic relationships among variables of interest. When used in conjunction with statistical techniques, the graphical model has several advantages for data analysis. One, because the model encodes dependencies among all variables, it readily handles situations where some data entries are missing. Two, a Bayesian network can be used to learn causal relationships, and hence can be used to gain understanding about a problem domain and to predict the consequences of intervention. Three, because the model has both a causal and probabilistic semantics, it is an ideal representation for combining prior knowledge (which often comes in causal form) and data. Four, Bayesian statistical methods in conjunction with Bayesian networks offer an efficient and principled approach for avoiding the overfitting of data. In this paper, we discuss methods for constructing Bayesian networks from prior knowledge and summarize Bayesian statistical methods for using data to improve these models. With regard to the latter task, we describe methods for learning both the parameters and structure of a Bayesian network, including techniques for learning with incomplete data. In addition, we relate ...
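The probabilistic semantics described in the abstract can be made concrete with a toy network; the three-node chain and its conditional probability tables below are invented for illustration:

```python
from itertools import product

# A made-up three-node chain Cloudy -> Rain -> WetGrass.
p_c = [0.5, 0.5]                   # p(C)
p_r = [[0.9, 0.1], [0.2, 0.8]]     # p(R | C), rows indexed by C
p_w = [[0.95, 0.05], [0.1, 0.9]]   # p(W | R), rows indexed by R

def joint(c, r, w):
    # The network's factorised semantics: p(c, r, w) = p(c) p(r|c) p(w|r)
    return p_c[c] * p_r[c][r] * p_w[r][w]

# Unobserved variables are simply summed out, which is how the
# factorisation accommodates missing entries: query p(W=1) with
# neither C nor R observed.
p_w1 = sum(joint(c, r, 1) for c, r in product([0, 1], repeat=2))
print(round(p_w1, 4))  # 0.4325
```

The same pattern scales to any directed acyclic graph: each node contributes one local factor, and any query reduces to sums over the unobserved nodes.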
Interpretable Projection Pursuit
, 1989
Abstract
The goal of this thesis is to modify projection pursuit by trading accuracy for interpretability. The modification produces a more parsimonious and understandable model without sacrificing the structure which projection pursuit seeks. The method retains the nonlinear versatility of projection pursuit while clarifying the results. Following an introduction which outlines the dissertation, the first and second chapters contain the technique as applied to exploratory projection pursuit and projection pursuit regression, respectively. The interpretability of a description is measured as the simplicity of the coefficients which define its linear projections. Several interpretability indices for a set of vectors are defined, based on the ideas of rotation in factor analysis and entropy. The two methods require slightly different indices due to their contrary goals. A roughness penalty weighting approach is used to search for a more parsimonious ...
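One way such an entropy-based interpretability index might look, as a sketch rather than the thesis's actual definition: treat the squared, normalised coefficients of a projection vector as a distribution, so that low entropy means a few dominant coefficients and hence an easier-to-read projection.

```python
import numpy as np

def entropy_interpretability(v):
    """Illustrative entropy-style index (not the thesis's exact formula):
    entropy of the squared, normalised coefficients of projection vector v.
    0 for a projection loading on one variable; log(p) for an even spread."""
    w = v**2 / np.sum(v**2)
    w = w[w > 0]                      # drop zeros so log is defined
    return -np.sum(w * np.log(w))

sparse = np.array([1.0, 0.0, 0.0, 0.0])    # loads on a single variable
diffuse = np.array([0.5, 0.5, 0.5, 0.5])   # spread evenly over all four
print(entropy_interpretability(sparse), entropy_interpretability(diffuse))
```

A penalised projection pursuit criterion could then subtract a multiple of this index from the usual projection index, steering the search toward projections whose coefficients a practitioner can read off.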