Results 1 – 8 of 8
Canonical community ordination. Part I: Basic theory and linear methods.
 Ecoscience
, 1994
Abstract

Cited by 23 (1 self)
Canonical community ordination comprises a collection of methods that relate species assemblages to their environment, in both observational studies and designed experiments. Canonical ordination differs from ordination sensu stricto in that species and environment data are analyzed simultaneously. Part I reviews the theory in a nonmathematical way with emphasis on new insights for the interpretation of ordination diagrams. The interpretation depends on the ordination method used to create the diagram. After the basic theory, Part I is focused on the ordination diagrams in linear methods of canonical community ordination, in particular principal components analysis, redundancy analysis and canonical correlation analysis. Special attention is devoted to the display of qualitative environmental variables. Key words: principal components analysis, redundancy analysis, canonical correlation analysis, biplot, ordination diagram, species-environment relations.
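The linear canonical ordination mentioned in this abstract, redundancy analysis, amounts to a PCA of the species table after constraining it by the environmental variables. A minimal numpy sketch, not the paper's implementation; the function name `rda` and the random example data are assumptions:

```python
import numpy as np

def rda(Y, X):
    """Redundancy analysis sketch: regress the (centred) species table Y
    on the (centred) environmental table X, then take a PCA (via SVD)
    of the fitted values. The canonical axes are constrained to lie in
    the space spanned by the environmental variables."""
    Yc = Y - Y.mean(axis=0)
    Xc = X - X.mean(axis=0)
    B, *_ = np.linalg.lstsq(Xc, Yc, rcond=None)  # multivariate regression
    Y_fit = Xc @ B                               # fitted species values
    U, s, Vt = np.linalg.svd(Y_fit, full_matrices=False)
    return U * s, s                              # site scores, singular values
```

With q environmental variables the fitted table has rank at most q, so at most q canonical axes carry variance.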
Relationships among several methods of linearly constrained correspondence analysis. Psychometrika 56
, 1991
Abstract

Cited by 7 (0 self)
This paper shows essential equivalences among several methods of linearly constrained correspondence analysis. They include Fisher’s method of additive scoring, Hayashi’s second type of quantification method, ter Braak’s canonical correspondence analysis, Nishisato’s ANOVA of categorical data, correspondence analysis of manipulated contingency tables, Böckenholt and Böckenholt’s least squares canonical analysis with linear constraints, and van der Heijden and Meijerink’s zero average restrictions. These methods fall into one of two classes of methods corresponding to two alternative ways of imposing linear constraints, the reparametrization method and the null space method. A connection between the two is established through Khatri’s lemma. Key words: canonical correlation analysis, generalized singular value decomposition (GSVD), the method of additive scoring, the second type of quantification method (Q2), canonical correspondence analysis (CCA), ANOVA of categorical data, canonical analysis with linear constraints (CALC), zero average restrictions, Khatri’s lemma.
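The null space method of imposing linear constraints can be illustrated in the simplest setting, ordinary least squares: a homogeneous constraint C·β = 0 is handled by reparametrizing β through an orthonormal basis of the null space of C. A minimal numpy sketch with hypothetical helper names (the paper works in the correspondence analysis setting; this only shows the constraint mechanics):

```python
import numpy as np

def nullspace(C, tol=1e-10):
    """Orthonormal basis for the null space of C, via SVD."""
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

def constrained_fit(X, y, C):
    """Least squares fit of y on X subject to C @ beta = 0:
    reparametrize beta = N @ a, where N spans null(C), and solve for a."""
    N = nullspace(C)
    a, *_ = np.linalg.lstsq(X @ N, y, rcond=None)
    return N @ a
```

Choosing any matrix Z whose columns span null(C) and fitting on X·Z (the reparametrization method) gives the same fitted values, which is the sense in which the two classes of methods coincide.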
Analysis of contingency tables by ideal point discriminant analysis
 Psychometrika
, 1987
Abstract

Cited by 3 (1 self)
Cross-classified data are frequently encountered in behavioral and social science research. The log-linear model and dual scaling (correspondence analysis) are two representative methods for analyzing such data. An alternative method, based on ideal point discriminant analysis (DA), is proposed for the analysis of contingency tables, which in a certain sense encompasses the two existing methods. A variety of interesting structures can be imposed on rows and columns of the tables through manipulations of predictor variables and/or direct constraints on model parameters. This, along with maximum likelihood estimation of the model parameters, allows interesting model comparisons. This is illustrated by the analysis of several data sets.
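As background for the comparison with dual scaling: plain correspondence analysis of a two-way contingency table reduces to an SVD of the standardized residuals. A minimal sketch (the function name and the example table in the test are illustrative, not from the paper):

```python
import numpy as np

def correspondence_analysis(N):
    """Correspondence analysis sketch: SVD of the standardized residuals
    of a two-way contingency table N."""
    P = N / N.sum()                          # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * s) / np.sqrt(r)[:, None]     # principal row coordinates
    cols = (Vt.T * s) / np.sqrt(c)[:, None]  # principal column coordinates
    return rows, cols, s
```

The squared singular values sum to the total inertia, i.e. the Pearson chi-square statistic divided by the table total.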
Multidimensional scaling and regression
 Statistica Applicata
, 1992
Abstract

Cited by 1 (0 self)
Constrained multidimensional scaling was put on a firm theoretical basis by Jan De Leeuw and Willem Heiser in the 1980s. There is a simple method of fitting, based on distance via inner products, and a numerically more complicated one that is truly based on least squares on distances. The unconstrained forms are known as principal coordinate analysis and nonmetric multidimensional scaling, respectively. Constraining the solution by external variables brings the power of classical regression analysis back into multidimensional data analysis. This idea is developed and illustrated, with emphasis on constrained principal coordinate analysis.
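Principal coordinate analysis, the unconstrained inner-product form mentioned in the abstract, fits in a few lines: double-centre the squared distance matrix (Gower's transformation) and keep the leading eigenvectors. A minimal sketch, with the function name `pcoa` as an assumption:

```python
import numpy as np

def pcoa(D, k=2):
    """Principal coordinate analysis (classical MDS) sketch:
    double-centre the squared distance matrix and eigendecompose."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (D ** 2) @ J              # Gower's double centring
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # largest eigenvalues first
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.clip(w, 0, None))  # k-dimensional coordinates
```

For distances that are exactly Euclidean distances among points in k dimensions, the k-dimensional solution reproduces them exactly (up to rotation and translation).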
Triadic distance models for the analysis of asymmetric three-way proximity data
 British Journal of Mathematical and Statistical Psychology
, 2000
Abstract

Cited by 1 (1 self)
Triadic distance models can be used to analyse proximity data defined on triples of objects. Three-way symmetry is a common assumption for triadic distance models. In the present study three-way symmetry is not assumed. Triadic distance models are presented for the analysis of asymmetric three-way proximity data that result in a simultaneous representation of symmetry and asymmetry in a low-dimensional configuration. An iterative majorization algorithm is developed for obtaining the coordinates and the representation of the asymmetry. The models are illustrated by an example using longitudinal categorical data.
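One common way to define a symmetric triadic distance is the "perimeter" model, which sums the three dyadic distances within each triple; the asymmetric models of the paper generalize beyond this case. A hypothetical sketch of the symmetric perimeter distances only (function name and configuration are illustrative assumptions):

```python
import numpy as np
from itertools import combinations

def triadic_distances(Z):
    """Perimeter-model sketch: the triadic distance of a triple (i, j, k)
    is the sum of the three pairwise Euclidean distances between the
    corresponding rows of the configuration matrix Z."""
    D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)  # dyadic distances
    return {(i, j, k): D[i, j] + D[i, k] + D[j, k]
            for i, j, k in combinations(range(len(Z)), 3)}
```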
© 1995 Birkhäuser Verlag, Basel. Canonical correspondence analysis and related multivariate methods in aquatic ecology
Abstract
Canonical correspondence analysis (CCA) is a multivariate method to elucidate the relationships between biological assemblages of species and their environment. The method is designed to extract synthetic environmental gradients from ecological datasets. The gradients are the basis for succinctly describing and visualizing the differential habitat preferences (niches) of taxa via an ordination diagram. Linear multivariate methods for relating two sets of variables, such as two-block Partial Least Squares (PLS2), canonical correlation analysis and redundancy analysis, are less suited for this purpose because habitat preferences are often unimodal functions of habitat variables. After pointing out the key assumptions underlying CCA, the paper focuses on the interpretation of CCA ordination diagrams. Subsequently, some advanced uses, such as ranking environmental variables in importance and the statistical testing of effects, are illustrated on a typical macroinvertebrate dataset. The paper closes with comparisons with correspondence analysis, discriminant analysis, PLS2 and co-inertia analysis. In an appendix a new method, named CCA-PLS, is proposed that combines the strong features of CCA and PLS2. Key words: community ecology, partial least squares.
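The CCA computation can be sketched as a weighted form of redundancy analysis: chi-square-standardize the species table, project it onto the environmental variables using the site totals as weights, and decompose the fitted values. This is an illustrative sketch under that textbook formulation, not the paper's code; all names are assumptions:

```python
import numpy as np

def cca(Y, X):
    """Canonical correspondence analysis sketch. Y is a sites-by-species
    abundance table, X a sites-by-variables environmental table. Returns
    the canonical singular values."""
    P = Y / Y.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)          # site and species masses
    Qbar = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    Xc = X - r @ X                               # weighted centring of env
    W = np.sqrt(r)[:, None] * Xc                 # row-weighted predictors
    H, *_ = np.linalg.lstsq(W, Qbar, rcond=None)
    Yfit = W @ H                                 # projection onto span(W)
    return np.linalg.svd(Yfit, compute_uv=False)
```

A useful sanity check: with n sites and n − 1 independent environmental variables the constraint is saturated, so the canonical singular values coincide with those of plain correspondence analysis.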
Ranked Data Analysis of a Gamut-Mapping Experiment
, 2001
Abstract
We analyze data from a gamut-mapping experiment using several statistical procedures for ranked data. In this experiment six gamut-mapping algorithms were applied to six different images and the results were ranked by 31 judges according to how well the images matched an original. We fitted two distance-based statistical models to the data: both analyses showed that aggregate preference among the six algorithms depended on the image viewed. Based on the first model we classified the images into four classes or clusters. We applied unidimensional unfolding, a technique from mathematical psychology, to extract latent reference frames upon which judges plausibly ordered the algorithms. Four color experts gave interpretations of the derived reference frames. We used the second model to generate confidence sets for the consensus rankings, and another cluster analysis. © 2001 SPIE and IS&T.
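Distance-based models for rankings typically start from a disagreement metric such as the Kendall tau distance, and a consensus ordering can be roughed out from mean ranks. A minimal sketch in plain Python (the function names and the Borda-style consensus are illustrative assumptions, not the models fitted in the paper):

```python
from itertools import combinations

def kendall_tau_distance(a, b):
    """Number of object pairs on which two rankings disagree.
    Rankings are given as lists of ranks, one rank per object."""
    return sum((a[i] - a[j]) * (b[i] - b[j]) < 0
               for i, j in combinations(range(len(a)), 2))

def borda_consensus(rankings):
    """Simple consensus ordering: sort objects by their mean rank."""
    n = len(rankings[0])
    mean = [sum(r[i] for r in rankings) / len(rankings) for i in range(n)]
    return sorted(range(n), key=mean.__getitem__)
```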