## Modeling individual differences in cognition (2005)

Venue: Psychonomic Bulletin & Review

Citations: 8 (3 self)

### BibTeX

```bibtex
@ARTICLE{Lee05modelingindividual,
  author  = {Michael D. Lee and Michael R. Webb},
  title   = {Modeling individual differences in cognition},
  journal = {Psychonomic Bulletin \& Review},
  year    = {2005},
  volume  = {12},
  pages   = {605--621}
}
```

### Abstract

Many evaluations of cognitive models rely on data that have been averaged or aggregated across all experimental subjects, and so fail to consider the possibility of important individual differences between subjects. Other evaluations are done at the single-subject level, and so fail to benefit from the reduction of noise that data averaging or aggregation potentially provides. To overcome these weaknesses, we have developed a general approach to modeling individual differences using families of cognitive models in which different groups of subjects are identified as having different psychological behavior. Separate models with separate parameterizations are applied to each group of subjects, and Bayesian model selection is used to determine the appropriate number of groups. We evaluate this individual differences approach in a simulation study and show that it is superior in terms of the key modeling goals of prediction and understanding. We also provide two practical demonstrations of the approach, one using the ALCOVE model of category learning with data from four previously analyzed category learning experiments, the other using multidimensional scaling representational models with previously analyzed similarity data for colors. In both demonstrations, meaningful individual differences are found and the psychological models are able to account for this variation through interpretable differences in parameterization. The results highlight the potential of extending cognitive models to consider individual differences.

### Citations

2335 | Estimating the dimension of a model - Schwarz - 1978

Citation context: ...periments with more than two possible category responses, using a multinomial distribution, is straightforward. Having defined the likelihood function, we use the Bayesian information criterion (BIC; Schwarz, 1978) as an approximate, easy-to-calculate means of Bayesian model selection. The BIC is given by ... where P is the number of parameters in the model family (i.e., the sum of all of the parameters used by th...
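The BIC equation itself was lost from the snippet above; the standard Schwarz (1978) form is BIC = -2 ln(L-hat) + P ln(n). A minimal sketch in Python, with hypothetical likelihood values that are not from the paper:

```python
import math

def bic(max_log_likelihood, num_params, num_obs):
    """Bayesian information criterion (Schwarz, 1978).

    BIC = -2 ln(L-hat) + P ln(n); lower values indicate a better
    trade-off between goodness of fit and parametric complexity.
    """
    return -2.0 * max_log_likelihood + num_params * math.log(num_obs)

# Hypothetical comparison: a 2-parameter model fitting slightly worse
# than a 5-parameter model can still win on BIC once the penalty bites.
simple = bic(max_log_likelihood=-120.0, num_params=2, num_obs=100)
complex_ = bic(max_log_likelihood=-118.0, num_params=5, num_obs=100)
```

Here the simpler model is preferred despite its lower maximum likelihood, which is exactly the fit-versus-complexity trade-off the context describes.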

1387 | A simplex method for function minimization - Nelder, Mead - 1965

Citation context: ...), height-relevant filtration (FH), condensation A (CA), and condensation B (CB) category learning data. ...ing of subjects is provided by the original heuristic method. A Nelder–Mead simplex algorithm (Nelder & Mead, 1965) is used to search for optimal parameterization of this initial partition, allowing its BIC to be evaluated. A combinatorial optimization process is then applied, based on subjects that are nearest (...
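The Nelder–Mead simplex search mentioned above can be sketched in a few dozen lines. This is a minimal textbook version (reflection, expansion, contraction, shrink) applied to an illustrative quadratic, not the paper's actual optimizer:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=1000,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimal Nelder-Mead simplex minimization of an n-dimensional f."""
    n = len(x0)
    # Initial simplex: x0 plus one step along each coordinate axis.
    simplex = [list(x0)]
    for i in range(n):
        v = list(x0)
        v[i] += step
        simplex.append(v)
    for _ in range(max_iter):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        if abs(f(worst) - f(best)) < tol:
            break
        # Centroid of all vertices except the worst one.
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        # Reflect the worst vertex through the centroid.
        xr = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(best) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(best):
            # Expansion: push further in the promising direction.
            xe = [centroid[i] + gamma * (xr[i] - centroid[i]) for i in range(n)]
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:
            # Contraction toward the worst vertex.
            xc = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:
                # Shrink the whole simplex toward the best vertex.
                simplex = [best] + [
                    [best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                    for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Minimize a simple quadratic with its minimum at (3, -2).
opt = nelder_mead(lambda x: (x[0] - 3) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

The method is derivative-free, which is why it suits model families whose likelihood surfaces are awkward to differentiate.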

990 | Bayes Factors - Kass, Raftery - 1995

955 | Features of similarity - Tversky - 1977

Citation context: ...Models provide formal accounts of the explanations proposed by theories and have been developed to address diverse cognitive phenomena, ranging from stimulus representation (see, e.g., Shepard, 1980; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; Tenenbaum, 1999)...

555 | Markov Chain Monte Carlo in Practice - Gilks, Richardson, et al. - 1996

Citation context: ...lihood (Rissanen, 2001). For cognitive models that resist the formal analysis needed to derive these measures, an alternative is to use numerical methods, such as Markov chain Monte Carlo (see, e.g., Gilks, Richardson, & Spiegelhalter, 1996) to approximate the Bayesian quantities that compare model families. A final intriguing possibility for future research, and a natural extension of the approach presented here, involves using fundame...
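As a hedged illustration of the numerical route the context describes, a random-walk Metropolis sampler can approximate a posterior that resists formal analysis. Here it targets a single binomial success rate under a uniform prior, a toy stand-in for the paper's models rather than anything from the paper itself:

```python
import math
import random

def log_posterior(theta, k, n):
    """Log posterior of a success rate under a uniform prior:
    proportional to the binomial log likelihood."""
    if not 0.0 < theta < 1.0:
        return float("-inf")
    return k * math.log(theta) + (n - k) * math.log(1.0 - theta)

def metropolis(k, n, steps=20000, width=0.1, seed=0):
    """Random-walk Metropolis sampler for the success rate theta."""
    rng = random.Random(seed)
    theta = 0.5
    samples = []
    for _ in range(steps):
        proposal = theta + rng.gauss(0.0, width)
        log_ratio = log_posterior(proposal, k, n) - log_posterior(theta, k, n)
        # Accept with probability min(1, posterior ratio).
        if rng.random() < math.exp(min(0.0, log_ratio)):
            theta = proposal
        samples.append(theta)
    return samples[steps // 2:]  # discard the first half as burn-in

draws = metropolis(k=7, n=10)
mean = sum(draws) / len(draws)
# Analytic posterior mean under a uniform prior is (k + 1) / (n + 2) = 8/12.
```

Comparing the sampled mean to the analytic Beta-posterior mean is a quick sanity check that the chain is mixing.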

528 | Probability theory: The logic of science - Jaynes - 2003

Citation context: ...t both “zero” and “one” outcomes are possible in the experiment, this is the correct (and unique) choice of prior within the “objective Bayesian” framework for statistical inference (for details, see Jaynes, 2003, pp. 382–386). Individual approach. The full individual differences approach uses the model M_ind, which assumes that every subject has a potentially different underlying rate of success, given by t...

429 | Attention, similarity, and the identification-categorization relationship - Nosofsky - 1986

Citation context: ...ubject behaves in accordance with a different parameterization of the same basic model, so the model is evaluated against the data from each subject separately (see, e.g., Ashby, Maddox, & Lee, 1994; Nosofsky, 1986; Wixted & Ebbesen, 1997). Although this avoids the problem of corrupting the underlying pattern of the data, it also forgoes the potential benefits of averaging and guarantees that models are fit to ...

408 | Multidimensional scaling - Cox, Cox - 1994

Citation context: ...r-normal subjects. The presence of these fundamental individual differences makes Helm’s (1959) data interesting. Group Multidimensional Scaling Representation In multidimensional scaling (see, e.g., Cox & Cox, 1994; Shepard, 1987), stimuli are represented as points in a coordinate space, and their empirical dissimilarities are modeled by the distances between the points, usually according to one of the family o...
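The multidimensional scaling representation described above models empirical dissimilarities by distances between points in a coordinate space, usually one of the Minkowski family. A short sketch of that distance family (the points are illustrative):

```python
def minkowski(p, q, r):
    """Minkowski distance between points p and q; r = 1 gives the
    city-block metric, r = 2 the Euclidean metric."""
    return sum(abs(a - b) ** r for a, b in zip(p, q)) ** (1.0 / r)

x, y = (0.0, 0.0), (3.0, 4.0)
d1 = minkowski(x, y, 1)   # city-block distance
d2 = minkowski(x, y, 2)   # Euclidean distance
```

For the point pair above, the city-block distance is 7.0 and the Euclidean distance is 5.0, showing how the choice of r changes the modeled dissimilarity.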

390 | Modern Multidimensional Scaling: Theory and Applications - Borg, Groenen - 1997

Citation context: ...c dissimilarity matrices, D_1, . . . , D_m, where D_k = [d_ij^k], with d_ij^k denoting the similarity between the ith and jth stimuli for the kth subject (or repeated subject). Previous analyses (e.g., Borg & Groenen, 1997, pp. 359–370) of these data have considered multidimensional scaling representations, with a particular focus on the differences between the 5 matrices for known color-deficient subjects and the remai...

384 | ALCOVE: An exemplar-based connectionist model of category learning - Kruschke - 1992

Citation context: ...g., Shepard, 1980; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; Tenenbaum, 1999). One recurrent shortcoming of these models, however, is that (whether intentionally or as an unintended consequence of methodology) humans are usually modeled as invariants, not as ...

326 | Toward a universal law of generalization for psychological science - Shepard - 1987

Citation context: .... The presence of these fundamental individual differences makes Helm’s (1959) data interesting. Group Multidimensional Scaling Representation In multidimensional scaling (see, e.g., Cox & Cox, 1994; Shepard, 1987), stimuli are represented as points in a coordinate space, and their empirical dissimilarities are modeled by the distances between the points, usually according to one of the family of Minkowskian d...

311 | Analysis of individual differences in multidimensional scaling via an N-way generalization of “Eckart-Young” decomposition - Carroll, Chang - 1970

Citation context: ...ns for each group. It is worth emphasizing that these capabilities extend well beyond those of alternative “individual differences” extensions of multidimensional scaling, such as INDSCAL (see, e.g., Carroll & Chang, 1970) and INDCLUS (e.g., Carroll & Arabie, 1983). These models accommodate individual variation by allowing each subject to weight the axes of an underlying representational space in different ways. They ...

275 | Fisher information and stochastic complexity - Rissanen - 1996

Citation context: ...this problem is to use more sophisticated model selection criteria that are sensitive to all of the components of model complexity. These include measures such as the stochastic complexity criterion (Rissanen, 1996; see also Myung, Balasubramanian, & Pitt, 2000) and normalized maximum likelihood (Rissanen, 2001). For cognitive models that resist the formal analysis needed to derive these measures, an alternativ...

251 | The Levenberg-Marquardt algorithm: Implementation and theory - Moré - 1977

Citation context: ...the additive constant, (p_1^g*, …, p_n^g*, c^g*) = argmax over (p_1^g, …, p_n^g, c^g) of −Σ_{k ∈ gth group} Σ_{i<j} (d_ij^k − d̂_ij)^2, using a Levenberg–Marquardt approach to continuous optimization (see, e.g., Moré, 1977). This was done separately for dimensionalities S_g = 1, 2, . . . , up to a maximum chosen to be sufficiently large to ensure that the best dimensionality according to BIC ...
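The Levenberg–Marquardt step used in the context above can be sketched for a toy two-parameter least-squares problem, with a finite-difference Jacobian and an explicit 2x2 solve. This is an illustrative implementation under assumed names, not the paper's code:

```python
import math

def levenberg_marquardt(residual_fn, params, n_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt loop for a 2-parameter least-squares
    problem: damped Gauss-Newton steps with an adaptive damping factor."""
    eps = 1e-7
    p = list(params)
    for _ in range(n_iter):
        r = residual_fn(p)
        m = len(r)
        sse = sum(v * v for v in r)
        # Finite-difference Jacobian J[i][j] = d r_i / d p_j.
        J = [[0.0, 0.0] for _ in range(m)]
        for j in range(2):
            q = list(p)
            q[j] += eps
            rq = residual_fn(q)
            for i in range(m):
                J[i][j] = (rq[i] - r[i]) / eps
        # Damped normal equations (J^T J + lam * I) delta = -J^T r.
        a00 = sum(J[i][0] * J[i][0] for i in range(m)) + lam
        a01 = sum(J[i][0] * J[i][1] for i in range(m))
        a11 = sum(J[i][1] * J[i][1] for i in range(m)) + lam
        g0 = -sum(J[i][0] * r[i] for i in range(m))
        g1 = -sum(J[i][1] * r[i] for i in range(m))
        det = a00 * a11 - a01 * a01
        trial = [p[0] + (a11 * g0 - a01 * g1) / det,
                 p[1] + (a00 * g1 - a01 * g0) / det]
        if sum(v * v for v in residual_fn(trial)) < sse:
            p, lam = trial, lam * 0.5   # accept step, trust the model more
        else:
            lam *= 10.0                 # reject step, damp harder
    return p

# Recover a and b of y = a * exp(b * x) from noiseless synthetic data.
xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]
fit = levenberg_marquardt(
    lambda p: [p[0] * math.exp(p[1] * x) - y for x, y in zip(xs, ys)],
    [1.0, 0.0],
)
```

The adaptive damping factor is what distinguishes Levenberg–Marquardt from plain Gauss-Newton: it interpolates toward gradient descent when a step fails.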

245 | The processing of information and structure - Garner - 1974

Citation context: ...tegory alternatives are mapped onto response probabilities (ϕ). Kruschke (1993) considered the ability of ALCOVE to model human category learning for filtration and condensation categorization tasks (Garner, 1974). The results of four separate experiments were reported, covering two filtration tasks (called position relevant and height relevant because of the nature of the stimuli) and two condensation tasks ...

194 | Rule-plus-exception model of classification learning - Nosofsky, Palmeri, et al. - 1994

Citation context: .... Although these points have been demonstrated previously for category learning models, including ALCOVE (see, e.g., Erickson, 1999; Lewandowsky, Kalish, & Griffiths, 2000; Nosofsky & Johansen, 2000; Nosofsky, Palmeri, & McKinley, 1994; Treat, McFall, Viken, & Kruschke, 2001; Yang & Lewandowsky, 2003), these other studies have not inferred the groupings by applying rigorous model selection criteria. What the results presented here ...

189 | Reflections of the environment in memory - Anderson, Schooler - 1991

Citation context: ...anations proposed by theories and have been developed to address diverse cognitive phenomena, ranging from stimulus representation (see, e.g., Shepard, 1980; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; Tenenbaum, 1999). One recurrent shortcoming of these models, however, ...

182 | Bayesian model choice: Asymptotics and exact calculations - Gelfand, Dey - 1994

Citation context: ...ssed against the data from the second experiment, D′ = (k′_1, . . . , k′_m). To make this assessment, we used the standard Bayesian approach of measuring the (quasi) posterior predictive densities (Gelfand & Dey, 1994). These measures basically assess how likely the second set of data are, given the model that has been learned from the first set, and are given by the conditional probabilities p(D′ | θ*, M_ave),...
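The (quasi) posterior predictive assessment described above can be illustrated with a simple plug-in version for binomial data: estimate a rate per subject from a first experiment, then score the second experiment's counts under those rates. All counts below are hypothetical, not from the paper:

```python
import math

def log_binom_pmf(k, n, theta):
    """Log binomial probability of k successes in n trials."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(theta) + (n - k) * math.log(1.0 - theta))

def log_predictive(first, second, n):
    """Plug-in predictive log density of a second data set given the
    per-subject rates estimated from the first data set."""
    total = 0.0
    for k, k_new in zip(first, second):
        theta = max(min(k / n, 1 - 1e-9), 1e-9)  # MLE, clipped away from 0/1
        total += log_binom_pmf(k_new, n, theta)
    return total

# Hypothetical counts for three subjects over n = 10 trials each.
first = [7, 5, 9]
replication = [6, 5, 8]      # close to the first experiment
mismatch = [1, 9, 2]         # systematically different
good = log_predictive(first, replication, n=10)
bad = log_predictive(first, mismatch, n=10)
```

A model learned from the first experiment assigns a much higher predictive density to the replication than to the mismatched data, which is the comparison the context uses to rank model families.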

80 | Towards a Unified Theory of Similarity and Recognition - Ashby, Perrin - 1988

Citation context: ... phenomena, ranging from stimulus representation (see, e.g., Shepard, 1980; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; Tenenbaum, 1999). One recurrent shortcoming of these models, however, is that (whether intentionally or as an unintended consequence of methodology...

80 | Multidimensional scaling, tree-fitting, and clustering - Shepard - 1980

Citation context: ...ion of models. Models provide formal accounts of the explanations proposed by theories and have been developed to address diverse cognitive phenomena, ranging from stimulus representation (see, e.g., Shepard, 1980; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; T...

76 | A model of probabilistic category learning - Kruschke, Johansen - 1999

76 | Toward a method of selecting among computational models of cognition - Pitt, Myung, et al. - 2002

Citation context: ...eters, and so leads to progressively more complicated accounts of the data as a whole. As has been pointed out repeatedly in the psychological literature recently (e.g., by Myung & Pitt, 1997, and Pitt, Myung, & Zhang, 2002), it is important both to maximize goodness of fit and to minimize model complexity in order to achieve the basic goals of modeling. Unnecessarily complicated models that “over-fit” data often provid...

72 | The problem of inference from curves based on grouped data - Estes - 1956

Citation context: ...the noise, and the resultant data will more accurately reflect the underlying psychological phenomenon. When the performance of subjects has genuine differences, however, it is well known (see, e.g., Estes, 1956; Myung, Kim, & Pitt, 2000) that averaging produces data that do not accurately represent the behavior of individuals and provides a misleading basis for modeling. Even more fundamentally, the practic...

72 | The minimum description length principle and reasoning under uncertainty - Grünwald - 1998

Citation context: ...e it is helpful to assess the ability of the group approach to recover the true number of groups. A basic argument from advocates of the minimum description length approach to model evaluation (e.g., Grünwald, 1998; Rissanen, 2001) is that real data are generated by statistical processes that are not known and, indeed, are perhaps not knowable. Philosophically, this ...

71 | Applying Occam's razor in modeling cognition: A Bayesian approach - Myung, Pitt - 1997

Citation context: ...eters, and so leads to progressively more complicated accounts of the data as a whole. As has been pointed out repeatedly in the psychological literature recently (e.g., by Myung & Pitt, 1997, and Pitt, Myung, & Zhang, 2002), it is important both to maximize goodness of fit and to minimize model complexity in order to achieve the basic goals of modeling. Unnecessarily complicated models t...

69 | One hundred years of forgetting: A quantitative description of retention - Rubin, Wenzel - 1996

Citation context: ...rtant shortcoming when fundamentally different models are used to explain performance for different subject groups. There are, for example, many competing models of retention that use two parameters (Rubin & Wenzel, 1996), and these models have different complexities that the BIC is unable to distinguish. The obvious remedy for this problem is to use more sophisticated model selection criteria that are sensitive to a...

67 | Toward a unified model of attention in associative learning - Kruschke - 2001

Citation context: ... category learning models, including ALCOVE (see, e.g., Erickson, 1999; Lewandowsky, Kalish, & Griffiths, 2000; Nosofsky & Johansen, 2000; Nosofsky, Palmeri, & McKinley, 1994; Treat, McFall, Viken, & Kruschke, 2001; Yang & Lewandowsky, 2003), these other studies have not inferred the groupings by applying rigorous model selection criteria. What the results presented here demonstrate is that accounting for indiv...

64 | On the dangers of averaging across subjects when using multidimensional scaling or the similarity-choice model - Ashby, Maddox, et al. - 1994

Citation context: ... usually assumes that each subject behaves in accordance with a different parameterization of the same basic model, so the model is evaluated against the data from each subject separately (see, e.g., Ashby, Maddox, & Lee, 1994; Nosofsky, 1986; Wixted & Ebbesen, 1997). Although this avoids the problem of corrupting the underlying pattern of the data, it also forgoes the potential benefits of averaging and guarantees that mo...

61 | Human category learning: Implications for backpropagation models - Kruschke - 1993

59 | Strong optimality of the normalized ML models as universal codes and information in data - Rissanen - 2001

Citation context: ...to assess the ability of the group approach to recover the true number of groups. A basic argument from advocates of the minimum description length approach to model evaluation (e.g., Grünwald, 1998; Rissanen, 2001) is that real data are generated by statistical processes that are not known and, indeed, are perhaps not knowable. Philosophically, this ...

58 | Bayesian modeling of human concept learning - Tenenbaum - 1999

Citation context: ...0; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; Tenenbaum, 1999). One recurrent shortcoming of these models, however, is that (whether intentionally or as an unintended consequence of methodology) humans are usually modeled as invariants, not as individuals. This...

54 | Exemplar-based accounts of “multiple-system” phenomena in perceptual categorization - Nosofsky, Johansen - 2000

Citation context: ...ations in learning behavior. Although these points have been demonstrated previously for category learning models, including ALCOVE (see, e.g., Erickson, 1999; Lewandowsky, Kalish, & Griffiths, 2000; Nosofsky & Johansen, 2000; Nosofsky, Palmeri, & McKinley, 1994; Treat, McFall, Viken, & Kruschke, 2001; Yang & Lewandowsky, 2003), these other studies have not inferred the groupings by applying rigorous model selection crite...

50 | Base rates in category learning - Kruschke - 1996

42 | Counting probability distributions: Differential geometry and model selection - Myung, Balasubramanian, et al. - 2000

Citation context: ...re sophisticated model selection criteria that are sensitive to all of the components of model complexity. These include measures such as the stochastic complexity criterion (Rissanen, 1996; see also Myung, Balasubramanian, & Pitt, 2000) and normalized maximum likelihood (Rissanen, 2001). For cognitive models that resist the formal analysis needed to derive these measures, an alternative is to use numerical methods, such as Markov c...

27 | Genuine power curves in forgetting: A quantitative analysis of individual subject forgetting functions - Wixted, Ebbesen - 1997

Citation context: ...n accordance with a different parameterization of the same basic model, so the model is evaluated against the data from each subject separately (see, e.g., Ashby, Maddox, & Lee, 1994; Nosofsky, 1986; Wixted & Ebbesen, 1997). Although this avoids the problem of corrupting the underlying pattern of the data, it also forgoes the potential benefits of averaging and guarantees that models are fit to all of the noise in the ...

25 | Toward an explanation of the power law artifact: Insights from response surface analysis - Myung, Kim, et al. - 2000

Citation context: ...d the resultant data will more accurately reflect the underlying psychological phenomenon. When the performance of subjects has genuine differences, however, it is well known (see, e.g., Estes, 1956; Myung, Kim, & Pitt, 2000) that averaging produces data that do not accurately represent the behavior of individuals and provides a misleading basis for modeling. Even more fundamentally, the practice of averaging data restri...

25 | Learning the structure of similarity - Tenenbaum - 1996

20 | An introduction to Bayesian hierarchical models with an application in theory of signal detection - Rouder, Lu - 2005

Citation context: ...te individual differences by specifying distributions of basic model parameters and then learning the “hyperparameters” of these distributions from data (see, e.g., Peruggia, Van Zandt, & Chen, 2002; Rouder & Lu, 2005; Rouder, Sun, Speckman, Lu, & Zhou, 2003). These models are often developed within a hierarchical Bayesian modeling framework. This has proved to be an informative and useful approach and is likely t...

18 | Processes of memory loss, recovery and distortion - Estes - 1997

Citation context: ...es and have been developed to address diverse cognitive phenomena, ranging from stimulus representation (see, e.g., Shepard, 1980; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; Tenenbaum, 1999). One recurrent shortcoming of these models, however, is that (whet...

18 | Integrality versus separability of stimulus dimensions: From an early convergence of evidence to a proposed theoretical basis - Shepard - 1991

Citation context: ...ve constant. The value of r determines the metric, with r = 1 (city block) and r = 2 (Euclidean) being common choices, corresponding to separable and integral stimuli, respectively (Garner, 1974; Shepard, 1991). We follow Tenenbaum (1996; see also Lee, 2001; Lee & Pope, 2003) in assuming that the empirical dissimilarities follow Gaussian distributions with common variance σ^2. As has been argued by Lee (2...

17 | Determining the dimensionality of multidimensional scaling models for cognitive modeling - Lee - 2001

Citation context: ...c, with r = 1 (city block) and r = 2 (Euclidean) being common choices, corresponding to separable and integral stimuli, respectively (Garner, 1974; Shepard, 1991). We follow Tenenbaum (1996; see also Lee, 2001; Lee & Pope, 2003) in assuming that the empirical dissimilarities follow Gaussian distributions with common variance σ^2. As has been argued by Lee (2001), the variance quantifies the precision of t...

11 | A hierarchical Bayesian statistical framework for response time distributions - Rouder, Sun, et al. - 2003

Citation context: ...rences by specifying distributions of basic model parameters and then learning the “hyperparameters” of these distributions from data (see, e.g., Peruggia, Van Zandt, & Chen, 2002; Rouder & Lu, 2005; Rouder, Sun, Speckman, Lu, & Zhou, 2003). These models are often developed within a hierarchical Bayesian modeling framework. This has proved to be an informative and useful approach and is likely to be strengthened and extended by current...

8 | Context-gated knowledge partitioning in categorization - Yang, Lewandowsky - 2003

Citation context: ...ng models, including ALCOVE (see, e.g., Erickson, 1999; Lewandowsky, Kalish, & Griffiths, 2000; Nosofsky & Johansen, 2000; Nosofsky, Palmeri, & McKinley, 1994; Treat, McFall, Viken, & Kruschke, 2001; Yang & Lewandowsky, 2003), these other studies have not inferred the groupings by applying rigorous model selection criteria. What the results presented here demonstrate is that accounting for individual differences using mo...

7 | The analysis of short-term retention: Models for Brown-Peterson experiments - Laming - 1992

Citation context: ...een developed to address diverse cognitive phenomena, ranging from stimulus representation (see, e.g., Shepard, 1980; Tversky, 1977) to memory retention (e.g., Anderson & Schooler, 1991; Estes, 1997; Laming, 1992) to category learning (e.g., Ashby & Perrin, 1988; Berretty, Todd, & Martignon, 1999; Kruschke, 1992; Tenenbaum, 1999). One recurrent shortcoming of these models, however, is that (whether intentiona...

7 | Avoiding the dangers of averaging across subjects when using multidimensional scaling - Lee, Pope - 2003

Citation context: ...1 (city block) and r = 2 (Euclidean) being common choices, corresponding to separable and integral stimuli, respectively (Garner, 1974; Shepard, 1991). We follow Tenenbaum (1996; see also Lee, 2001; Lee & Pope, 2003) in assuming that the empirical dissimilarities follow Gaussian distributions with common variance σ^2. As has been argued by Lee (2001), the variance quantifies the precision of the data and plays ...

7 | Using cognitive science methods to assess the role of social information processing in sexually coercive behavior - Treat, McFall, et al. - 2001

Citation context: ...onstrated previously for category learning models, including ALCOVE (see, e.g., Erickson, 1999; Lewandowsky, Kalish, & Griffiths, 2000; Nosofsky & Johansen, 2000; Nosofsky, Palmeri, & McKinley, 1994; Treat, McFall, Viken, & Kruschke, 2001; Yang & Lewandowsky, 2003), these other studies have not inferred the groupings by applying rigorous model selection criteria. What the results presented here demonstrate is that accounting for indiv...

6 | Was it a car or a cat I saw? An analysis of response times for word recognition - Peruggia, Van Zandt, et al. - 2002

5 | Competing strategies in categorization: Expediency and resistance to knowledge restructuring - Lewandowsky, Kalish, et al. - 2000

Citation context: ...ul parameterizations to accommodate variations in learning behavior. Although these points have been demonstrated previously for category learning models, including ALCOVE (see, e.g., Erickson, 1999; Lewandowsky, Kalish, & Griffiths, 2000; Nosofsky & Johansen, 2000; Nosofsky, Palmeri, & McKinley, 1994; Treat, McFall, Viken, & Kruschke, 2001; Yang & Lewandowsky, 2003), these other studies have not inferred the groupings by applying rig...

5 | Modeling individual differences in category learning - Webb, Lee - 2004

3 | INDCLUS: An individual differences generalization of the ADCLUS model and the MAPCLUS algorithm - Carroll, Arabie - 1983

Citation context: ... that these capabilities extend well beyond those of alternative “individual differences” extensions of multidimensional scaling, such as INDSCAL (see, e.g., Carroll & Chang, 1970) and INDCLUS (e.g., Carroll & Arabie, 1983). These models accommodate individual variation by allowing each subject to weight the axes of an underlying representational space in different ways. They do not model groups of subjects, although i...