Results 11–20 of 93
The Equivalence of Constrained and Weighted Designs in Multiple Objective Design Problems
Journal of the American Statistical Association, 1996
"... Several competing objectives may be relevant in the design of an experiment. The competing objectives may not be easy to characterize in a single optimality criterion. One approach to these design problems has been to weight each criterion and find the design that optimizes the weighted average of t ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
Several competing objectives may be relevant in the design of an experiment. The competing objectives may not be easy to characterize in a single optimality criterion. One approach to these design problems has been to weight each criterion and find the design that optimizes the weighted average of the criteria. An alternative approach has been to optimize one criterion subject to constraints on the other criteria. An equivalence theorem is presented for the Bayesian constrained design problem. Equivalence theorems are essential in verifying optimality of proposed designs, especially when, as in most nonlinear design problems, numerical optimization is required. This theorem is used to show that the results of Cook and Wong on the equivalence of the weighted and constrained problems also apply much more generally. The results are applied to Bayesian nonlinear design problems with several objectives.
KEY WORDS: Bayesian design, regression, nonlinear design
1. INTRODUCTION An experimen...
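The two formulations contrasted in this abstract can be sketched as follows, for a design ξ and two criteria Φ1 and Φ2 (our notation, not the paper's):

```latex
% Weighted formulation: optimize a convex combination of the criteria
\xi^{*}_{\lambda} = \arg\max_{\xi}\; \lambda\,\Phi_1(\xi) + (1-\lambda)\,\Phi_2(\xi),
  \qquad 0 \le \lambda \le 1.
% Constrained formulation: optimize one criterion with a floor on the other
\xi^{*}_{c} = \arg\max_{\xi}\; \Phi_1(\xi)
  \quad \text{subject to} \quad \Phi_2(\xi) \ge c.
```

Under convexity of the design space and concavity of the criteria, each constrained optimum is a weighted optimum for some λ (a Lagrangian argument); this is the equivalence that the paper extends to the Bayesian setting.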
Intentions in the Coordinated Generation of Graphics and Text From Tabular Data
Knowledge and Information Systems, 1998
"... To use graphics efficiently in an automatic report generation system, one has to model messages and how they pass from the writer (intention) to the reader (interpretation). This paper describes PostGraphe a system which generates a report integrating graphics and text from a set of writer's i ..."
Abstract

Cited by 16 (4 self)
 Add to MetaCart
To use graphics efficiently in an automatic report generation system, one has to model messages and how they pass from the writer (intention) to the reader (interpretation). This paper describes PostGraphe, a system which generates a report integrating graphics and text from a set of writer's intentions. The system is given the data in tabular form, as might be found in a spreadsheet; also input is a declaration of the types of values in the columns of the table. The user then indicates the intentions to be conveyed in the graphics (e.g. compare two variables or show the evolution of a set of variables) and the system generates a report in LaTeX with the appropriate PostScript graphic files. PostGraphe uses the same information to generate the accompanying text that helps the reader to focus on the important points of the graphics. We also describe how these ideas have been embedded to create a new Chart Wizard for Microsoft Excel.
1 Introduction: important factors in the generati...
High dimensional data analysis via the SIR/PHD approach
2000
"... Dimensionality is an issue that can arise in every scientific field. Generally speaking, the difficulty lies on how to visualize a high dimensional function or data set. This is an area which has become increasingly more important due to the advent of computer and graphics technology. People often a ..."
Abstract

Cited by 13 (0 self)
 Add to MetaCart
Dimensionality is an issue that can arise in every scientific field. Generally speaking, the difficulty lies in how to visualize a high dimensional function or data set. This is an area which has become increasingly more important due to the advent of computer and graphics technology. People often ask: “How do they look?”, “What structures are there?”, “What model should be used?” Aside from the differences that underlie the various scientific contexts, such questions have a common root in Statistics. This should be the driving force for the study of high dimensional data analysis. Sliced inverse regression (SIR) and principal Hessian directions (PHD) are two basic dimension reduction methods. They are useful for the extraction of geometric information underlying noisy data of several dimensions, a crucial step in empirical model building which has been overlooked in the literature. In these Lecture Notes, I will review the theory of SIR/PHD and describe some ongoing research in various application areas. There are two parts. The first part is based on materials that have already appeared in the literature. The second part is just a collection of some manuscripts which are not yet published. They are included here for completeness.
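A rough sketch of the basic SIR step, as commonly described (the function name `sir_directions` and all details here are our illustration, not code from the lecture notes): slice the sorted response, average the standardized predictors within each slice, and take leading eigenvectors of the between-slice covariance.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Sliced inverse regression: estimate effective dimension-reduction
    directions from the inverse regression curve E[X | y]."""
    n, p = X.shape
    # Standardize X to zero mean and (approximately) identity covariance
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # whitening matrix
    Z = Xc @ W
    # Slice the response and average the whitened predictors per slice
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original coordinates
    vals, vecs = np.linalg.eigh(M)
    dirs = W @ vecs[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)
```

On data generated from a single-index model, the first estimated direction should align closely with the true index direction.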
ViSta: A Visual Statistics System
1992
"... this paper we discuss visual statistical analysis using ViSta. ViSta is designed for an audience of users having a very wide range of data analysis sophistication, ranging from novice to expert. ViSta provides seamlessly integrated data analysis environments specifically tailored to the user's ..."
Abstract

Cited by 13 (4 self)
 Add to MetaCart
In this paper we discuss visual statistical analysis using ViSta. ViSta is designed for an audience of users having a very wide range of data analysis sophistication, ranging from novice to expert. ViSta provides seamlessly integrated data analysis environments specifically tailored to the user's level of expertise. Visual guidance is available for novices (such as students), and visual authoring tools are available for experts (such as teachers) to create guidance for these novices. A structured graphical user interface is available for competent users, and a command line interface is available for sophisticated users. The complete LispStat (Tierney, 1990) programming environment is available to researchers and graduate students who wish to extend ViSta's capabilities.
NicheWorks—interactive visualization of very large graphs
Proceedings of Graph Drawing ’97, 1997
"... The difference between displaying networks with 100–1,000 nodes and displaying ones with 10,000–100,000 nodes is not merely quantitative, it is qualitative. Layout algorithms suitable for the former are too slow for the latter, requiring new algorithms or modified (often relaxed) versions of existin ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
The difference between displaying networks with 100–1,000 nodes and displaying ones with 10,000–100,000 nodes is not merely quantitative; it is qualitative. Layout algorithms suitable for the former are too slow for the latter, so new algorithms, or modified (often relaxed) versions of existing ones, must be invented. The density of nodes and edges displayed per inch of screen real estate requires special visual techniques to filter the graphs and focus attention. Compounding the problem is that large real-life networks are often weighted graphs and usually have additional data associated with the nodes and edges. A system for investigating and exploring such large, complex datasets needs to be able to display both graph structure and node and edge attributes so that patterns and information hidden in the data can be seen. In this article we describe a tool that addresses these needs, the NicheWorks tool. We describe and comment on the available layout algorithms and the linked views interaction system, and detail two examples of the use of NicheWorks for analyzing Web sites and detecting international telephone fraud.
Implementing functions for spatial statistical analysis using the R language
1998
"... is a language similar to for statistical data analysis, based on modern programming concepts and released under the GNU General Public License. It permits the integration of program scripts with compiled dynamically loaded libraries of functions when computing speed is important. Following a broa ..."
Abstract

Cited by 11 (5 self)
 Add to MetaCart
R is a language similar to S for statistical data analysis, based on modern programming concepts and released under the GNU General Public License. It permits the integration of program scripts with compiled dynamically loaded libraries of functions when computing speed is important. Following a broad outline of existing collections of functions for spatial statistics written for S, we show how they may be ported to R, and compare their characteristics. We further demonstrate how existing work may be extended to topics not yet covered, and how libraries of functions may be constructed.
Estimating And Depicting The Structure Of A Distribution Of Random Functions
2000
"... . We suggest a nonparametric approach to making inference about the structure of distributions in a potentially infinitedimensional space, for example a function space, and displaying information about that structure. Our methodology is based on nonparametric density estimation, and draws inference ..."
Abstract

Cited by 9 (0 self)
 Add to MetaCart
We suggest a nonparametric approach to making inference about the structure of distributions in a potentially infinite-dimensional space, for example a function space, and displaying information about that structure. Our methodology is based on nonparametric density estimation, and draws inference about the slope of the density. The latter step is implemented in a purely iterative way, using only elementary operations of addition and multiplication, and does not require any differentiation or dimension reduction. Nevertheless it leads in a very simple and reliable manner to "curves" of steepest ascent up the "surface" defined by an estimate of the density of a potentially infinite-dimensional distribution. The projections of these curves into the sample space are always one-dimensional, or more properly one-parameter, structures, and so can be displayed visually even when the sample space is a class of functions. Also, the modes to which the sample space projections lead are themselv...
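A loose finite-dimensional analogue of climbing a density estimate by iteration is the mean-shift update on a Gaussian kernel density estimate. This is our sketch only, not the authors' algorithm (which works in function space, and we use an exponential here rather than only additions and multiplications); the function name `ascent_path` and the parameters are hypothetical.

```python
import numpy as np

def ascent_path(data, start, bandwidth=0.5, steps=50):
    """Follow a steepest-ascent curve of a Gaussian kernel density
    estimate via repeated mean-shift updates: each step moves the
    current point to a kernel-weighted mean of the sample."""
    data = np.asarray(data, dtype=float)
    x = np.asarray(start, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        d2 = ((data - x) ** 2).sum(axis=1)          # squared distances
        w = np.exp(-0.5 * d2 / bandwidth ** 2)      # Gaussian kernel weights
        x = (w[:, None] * data).sum(axis=0) / w.sum()  # weighted mean
        path.append(x.copy())
    return np.array(path)
```

Starting away from the data, the path moves toward a mode of the density estimate, tracing the kind of one-parameter ascent curve the abstract describes.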
A Toolbox for Analyzing Programs
"... The paper describes two separate but synergistic tools for running experiments on large Lisp programs. The first tool, ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
The paper describes two separate but synergistic tools for running experiments on large Lisp programs. The first tool,
Principles and procedures of exploratory data analysis
Psychological Methods, 1997
"... Exploratory data analysis (EDA) is a wellestablished statistical tradition that provides conceptual and computational tools for discovering patterns to foster hypothesis development and refinement. These tools and attitudes complement the use of significance and hypothesis tests used in confirmator ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
Exploratory data analysis (EDA) is a well-established statistical tradition that provides conceptual and computational tools for discovering patterns to foster hypothesis development and refinement. These tools and attitudes complement the use of significance and hypothesis tests used in confirmatory data analysis (CDA). Although EDA complements rather than replaces CDA, use of CDA without EDA is seldom warranted. Even when well-specified theories are held, EDA helps one interpret the results of CDA and may reveal unexpected or misleading patterns in the data. This article introduces the central heuristics and computational tools of EDA and contrasts it with CDA and exploratory statistics in general. EDA techniques are illustrated using previously published psychological data. Changes in statistical training and practice are recommended to incorporate these tools. The widespread availability of software for graphical data analysis and calls for increased use of exploratory data analysis (EDA) on epistemic grounds (e.g. Cohen, 1994) have increased the visibility of EDA. Nevertheless, few psychologists receive explicit training in the beliefs or procedures of this tradition. Huberty (1991) remarked that statistical texts are likely to give cursory references to common EDA techniques such as stem-and-leaf plots, box plots, or residual analysis and yet seldom integrate these techniques throughout a book. A survey of graduate training programs in psychology corroborates such an impression.
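One of the EDA displays mentioned, the stem-and-leaf plot, is simple enough to sketch in a few lines (a hypothetical helper of our own, assuming nonnegative numeric data; stems are tens and leaves are units by default):

```python
from collections import defaultdict

def stem_and_leaf(values, leaf_unit=1):
    """Build a text stem-and-leaf display for nonnegative values:
    each value is split into a stem (all but the last digit) and a
    leaf (the last digit), and leaves are listed beside their stem."""
    groups = defaultdict(list)
    for v in sorted(values):
        q = int(v / leaf_unit)
        groups[q // 10].append(q % 10)
    lines = []
    for stem in range(min(groups), max(groups) + 1):
        leaves = "".join(str(d) for d in groups.get(stem, []))
        lines.append(f"{stem:>3} | {leaves}")
    return "\n".join(lines)
```

For example, `stem_and_leaf([12, 15, 15, 21, 34, 38])` lists leaves 2, 5, 5 on stem 1, leaf 1 on stem 2, and leaves 4, 8 on stem 3.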
Design issues for generalized linear models: A review
Statistical Science, 2006
"... Abstract. Generalized linear models (GLMs) have been used quite effectively in the modeling of a mean response under nonstandard conditions, where discrete as well as continuous data distributions can be accommodated. The choice of design for a GLM is a very important task in the development and bui ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
Generalized linear models (GLMs) have been used quite effectively in the modeling of a mean response under nonstandard conditions, where discrete as well as continuous data distributions can be accommodated. The choice of design for a GLM is a very important task in the development and building of an adequate model. However, one major problem that handicaps the construction of a GLM design is its dependence on the unknown parameters of the fitted model. Several approaches have been proposed in the past 25 years to solve this problem. These approaches, however, have provided only partial solutions that apply in only some special cases, and the problem, in general, remains largely unresolved. The purpose of this article is to focus attention on the aforementioned dependence problem. We provide a survey of various existing techniques dealing with the dependence problem. This survey includes discussions concerning locally optimal designs, sequential designs, Bayesian designs and the quantile dispersion graph approach for comparing designs for GLMs.
Key words and phrases: Bayesian design, dependence on unknown parameters, locally optimal design, logistic regression, response surface methodology, quantile dispersion graphs, sequential design.
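The dependence problem the survey describes is easy to see in the logistic case: the Fisher information of a design, and hence any criterion computed from it, involves the very parameter vector one wants to estimate. A minimal sketch of our own (the function names and the example designs are illustrative, not from the article):

```python
import numpy as np

def logistic_information(design_points, weights, beta):
    """Fisher information of a logistic-regression design. The variance
    weights pi*(1 - pi) depend on the unknown beta, which is why GLM
    designs can only be 'locally' optimal at a guessed parameter value."""
    M = np.zeros((len(beta), len(beta)))
    for x, w in zip(design_points, weights):
        x = np.asarray(x, dtype=float)
        eta = beta @ x                      # linear predictor
        pi = 1.0 / (1.0 + np.exp(-eta))    # success probability
        M += w * pi * (1.0 - pi) * np.outer(x, x)
    return M

def d_criterion(design_points, weights, beta):
    """log-determinant of the information matrix (D-optimality)."""
    sign, logdet = np.linalg.slogdet(
        logistic_information(design_points, weights, beta))
    return logdet
```

For the two-parameter model with beta = (0, 1), a design placing equal weight near logits ±1.54 scores better on the D-criterion than one at logits ±3, reflecting the known form of the locally D-optimal design.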