Results 1-10 of 12
Toward a method of selecting among computational models of cognition
Psychological Review, 2002
Abstract

Cited by 152 (16 self)
The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Computational models of cognition are increasingly being advanced as explanations of behavior. The success of this line of inquiry depends on the development of robust methods to guide the evaluation and selection of these models. This article introduces a method of selecting among mathematical models of cognition known as minimum description length, which provides an intuitive and theoretically well-grounded understanding of why one model should be chosen. A central but elusive concept in model selection, complexity, can also be derived with the method. The adequacy of the method is demonstrated in 3 areas of cognitive modeling: psychophysics, information integration, and categorization.

How should one choose among competing theoretical explanations of data? This question is at the heart of the scientific enterprise, regardless of whether verbal models are being tested in an experimental setting or computational models are being evaluated in simulations. A number of criteria have been proposed to assist in this endeavor, summarized nicely by Jacobs and Grainger
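As a rough illustration of the trade-off that MDL formalizes, the two-part code length can be approximated by a BIC-style criterion: a data-fit cost plus a parameter cost. The sketch below is a hypothetical toy comparison of a simple and a more flexible nested model, not the article's full MDL machinery (which also accounts for functional-form complexity via a Fisher-information term):

```python
import math
import random

def description_length(rss, n, k):
    """Two-part code length, BIC-style approximation:
    (n/2) log(RSS/n) for the data plus (k/2) log n for the parameters.
    A crude stand-in for the full MDL criterion."""
    return (n / 2) * math.log(rss / n) + (k / 2) * math.log(n)

random.seed(0)
n = 50
x = [i / n for i in range(n)]
# Synthetic data generated by the simple law y = 2x plus noise.
y = [2 * xi + random.gauss(0, 0.1) for xi in x]

# Model A: y = a*x (1 free parameter), fitted by least squares.
a = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
rss_a = sum((yi - a * xi) ** 2 for xi, yi in zip(x, y))

# Model B: y = b1*x + b2*x^2 (2 free parameters); being nested above
# Model A, it fits at least as well. Solve the 2x2 normal equations.
s11 = sum(xi**2 for xi in x); s12 = sum(xi**3 for xi in x); s22 = sum(xi**4 for xi in x)
t1 = sum(xi * yi for xi, yi in zip(x, y)); t2 = sum(xi**2 * yi for xi, yi in zip(x, y))
det = s11 * s22 - s12 * s12
b1 = (s22 * t1 - s12 * t2) / det
b2 = (s11 * t2 - s12 * t1) / det
rss_b = sum((yi - b1 * xi - b2 * xi**2) ** 2 for xi, yi in zip(x, y))

dl_a = description_length(rss_a, n, 1)
dl_b = description_length(rss_b, n, 2)
# With the simple generating law, the parameter penalty typically
# outweighs Model B's small improvement in RSS.
print(f"DL(simple)={dl_a:.2f}  DL(complex)={dl_b:.2f}")
```

The point of the exercise: the more complex model always achieves the smaller residual sum of squares, yet the description length can still favor the simpler one.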
Stratified exponential families: Graphical models and model selection
Annals of Statistics, 2001
Methods for Optimal Text Selection
1997
Abstract

Cited by 35 (3 self)
Construction of both text-to-speech synthesis (TTS) and automatic speech recognition (ASR) systems involves the use of speech databases. These databases usually consist of read text, which means that one has significant control over the content of the database. Here we address how one can take advantage of this control by discussing a number of variants of "greedy" text selection methods and showing their application in a variety of examples.

1. INTRODUCTION

Both automatic speech recognition (ASR) systems and text-to-speech (TTS) systems have components that are trained on text, typically read text. Surprisingly often, training text is selected without giving much thought to the optimality of the selected text. For limited-domain situations, it may very well suffice to select a random subset from the domain for training purposes. In many ASR applications, and certainly in most TTS applications, however, the domain is open. And, as discussed at length in [7], in open-domain situation...
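The greedy selection idea described above can be sketched as a set-cover heuristic: repeatedly pick the sentence that adds the most not-yet-covered units. The sketch below is a hypothetical illustration using character bigrams as the units; a real TTS/ASR pipeline would instead count phones, diphones, or triphones from a pronunciation lexicon:

```python
def greedy_select(sentences, n_select):
    """Greedily pick sentences that cover the most new units.
    A 'unit' here is a character bigram, a stand-in for the
    phonetic units a real system would cover."""
    def units(s):
        s = s.lower()
        return {s[i:i + 2] for i in range(len(s) - 1)}

    covered, chosen = set(), []
    remaining = list(sentences)
    for _ in range(min(n_select, len(remaining))):
        # Score each candidate by how many new units it would add.
        best = max(remaining, key=lambda s: len(units(s) - covered))
        if not units(best) - covered:
            break  # nothing new left to cover
        chosen.append(best)
        covered |= units(best)
        remaining.remove(best)
    return chosen, covered

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the dog sleeps",
    "pack my box with five dozen liquor jugs",
    "sphinx of black quartz judge my vow",
]
chosen, covered = greedy_select(corpus, 2)
print(chosen)
```

Variants of the method (weighting rare units more heavily, constraining sentence length, stopping at a target coverage) slot naturally into the scoring function.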
The algebraic combinatorial approach for low-rank matrix completion
 CoRR
Abstract

Cited by 10 (2 self)
We present a novel algebraic combinatorial view on low-rank matrix completion based on studying relations between a few entries with tools from algebraic geometry and matroid theory. The intrinsic locality of the approach allows for the treatment of single entries in a closed theoretical and practical framework. More specifically, apart from introducing an algebraic combinatorial theory of low-rank matrix completion, we present probability-one algorithms to decide whether a particular entry of the matrix can be completed. We also describe methods to complete that entry from a few others, and to estimate the error which is incurred by any method completing that entry. Furthermore, we show how known results on matrix completion and their sampling assumptions can be related to our new perspective and interpreted in terms of a completability phase transition.

On this revision: This revision, version 4, is both abridged and extended in terms of exposition and results, as compared to version 3 (Király et al., 2013). The theoretical foundations are developed in a more ad hoc way, which allows the reader to reach the main statements and algorithmic implications more quickly. Version 3 contains a more principled derivation of the theory and more related results (e.g., estimation of missing entries and its consistency, representations for the determinantal matroid, detailed examples), but a focus which is further away from applications. A reader who is interested in both is invited to read the main parts of version 4 first, then go through version 3 for a more detailed view on the theory.
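The "single entry" viewpoint has a very concrete special case: in a rank-1 matrix, every 2x2 minor vanishes, so a missing entry is determined as soon as the other three entries of some 2x2 minor are observed. The sketch below illustrates only this rank-1 case as a hypothetical toy, not the paper's general matroid-theoretic machinery:

```python
def complete_rank1_entry(M, i, j):
    """Try to complete entry (i, j) of a rank-1 matrix with missing
    entries marked None. For rank 1, every 2x2 minor vanishes:
        M[i][j] * M[k][l] == M[i][l] * M[k][j],
    so the missing entry is determined whenever some row k and column l
    have the other three entries observed (with M[k][l] != 0)."""
    for k in range(len(M)):
        if k == i:
            continue
        for l in range(len(M[0])):
            if l == j:
                continue
            a, b, c = M[i][l], M[k][j], M[k][l]
            if a is not None and b is not None and c not in (None, 0):
                return a * b / c
    return None  # not completable from any single 2x2 minor

# A rank-1 matrix (outer product of two vectors) with one entry hidden.
M = [
    [2.0, 4.0, 6.0],
    [3.0, None, 9.0],   # the hidden true value is 6.0
    [5.0, 10.0, 15.0],
]
print(complete_rank1_entry(M, 1, 1))  # → 6.0
```

Deciding completability here amounts to finding a suitable minor; for higher rank, the paper replaces this search with algebraic circuits of the determinantal matroid.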
On Failing to Capture Some (or Even All) of What is Communicated
Abstract
This paper examines a methodological argument launched against Cappelen and Lepore’s “minimalist” semantics. The charge is that this semantic theory – and by implication a great many others – cannot be correct, because it fails to capture some of the “intuitive truth conditions” of the relevant sentences. In response, I argue that this charge rests on the claim that an acceptable scientific theory must (at least sometimes) capture all of the overt phenomena under study. But this claim, I contend, is false. In actual practice, scientific models often fail to capture all of the behavior of the relevant phenomenon, and this does not undermine them as such. I maintain that semantic theorizing is just an instance of this more general aspect of scientific methodology.
unknown title
Abstract
of the question of whether it is possible to distinguish serial from parallel processing (Townsend, 1969). That same year, a popular song lyric provided a prescient summary of what is now a wide-ranging literature on this issue: “You can’t always get what you want...” (Richards & Jagger, 1969). Indeed, in the years that have followed, many challenging problems associated with the theoretical and empirical distinction between serial and parallel processing have been documented, yet many significant advances have been made. The goal of the present review is to collect and showcase many of these advances, with particular attention being paid to approaches that, by intent, provide a tight coupling of theory with experimental and statistical methods. Our treatment of these issues will begin with some context: In particular, we will provide a brief description of the intellectual perspectives (or pretheoretical commitments, as in Lachman, Lachman, & Butterfield, 1979; Murdock, 1974) that we bring to the discussion. We then will move on to a general consideration of the problem of discriminating between serial and parallel processing, an issue that is sometimes referred to as a problem of model mimicry (e.g., Townsend, 1971a, 1972; Van Zandt & Ratcliff, 1995). After this, we will move on to the core of our review, structuring that review around three global assertions: (1) The question will not, and should not, go away; (2) the question cannot be understood or answered out of context; and (3) the question can be answered using approaches that provide a clear and principled connection between theory and method.
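Model mimicry can be made concrete with a small simulation in the spirit of Townsend's classic demonstrations (this is a hypothetical toy, not any specific model from the review): a serial exhaustive model and a limited-capacity parallel model can produce identical response-time distributions:

```python
import random

random.seed(1)

def serial_exhaustive(n, v):
    """Serial model: process n items one at a time,
    each stage exponentially distributed with rate v."""
    return sum(random.expovariate(v) for _ in range(n))

def parallel_limited_capacity(n, v):
    """Parallel model with total capacity v shared equally: with k items
    left, each runs at rate v/k, so the minimum of k Exp(v/k) finishing
    times is itself Exp(v). Stage for stage, this matches the serial
    model's inter-completion times: the architectures mimic each other."""
    t = 0.0
    for k in range(n, 0, -1):
        t += min(random.expovariate(v / k) for _ in range(k))
    return t

trials = 20000
ser = [serial_exhaustive(4, 1.0) for _ in range(trials)]
par = [parallel_limited_capacity(4, 1.0) for _ in range(trials)]
mean_ser = sum(ser) / trials
mean_par = sum(par) / trials
# Both means estimate n / v = 4.0; the full RT distributions coincide too.
print(f"mean serial RT:   {mean_ser:.3f}")
print(f"mean parallel RT: {mean_par:.3f}")
```

Because total-RT data cannot separate these two architectures, the literature turns to richer signatures (factorial manipulations, capacity coefficients) of the kind the review surveys.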
unknown title
Abstract
Model complexity is conceptualised as the capacity of a model to fit any conceivable data set. A model can be represented geometrically as a low dimensional "response surface" embedded in a higher dimensional outcome space in which data are represented as a point or set of points. The fit of the model to a data point is given by the minimum distance between the point and the response surface. Model complexity can thus be thought of as the extent to which the response surface is "close to" arbitrary points in outcome space. If the extension of outcome space can be assumed to be bounded, complexity can be operationalised as the mean minimum distance, defined as the average minimum squared distance between an arbitrary data point in outcome space and the model's response surface. It may also be expressed as a dimensionless quantity called the scaled mean minimum distance. For linear models, theoretical values for the scaled mean minimum distance and the variance of the scaled minimum distance can be readily obtained and compared against empirical estimates obtained from fits to random data. The approach is applied to resolving the question of the relative complexity of the linear integration model (LIM) and the fuzzy logic of perception model (FLMP), both of which have been the subject of controversy in the field of depth perception. It is concluded that the two models are equally complex.

The aim of this paper is to explore a heuristic approach to assessing model complexity in terms of the capacity of a model to fit arbitrary patterns of data. Consider two models, A and B, that purport to account for the same set of data, D. If the models contain free parameters, then these are usually adjusted in order to minimise the discrepancy between the predictions of each model an...
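The mean minimum distance lends itself to a simple Monte Carlo estimate: sample data points uniformly from a bounded outcome space and average their squared distances to the model's response surface. The sketch below is a hypothetical operationalisation for two toy one-parameter models in a two-condition outcome space [0, 1]^2 (the paper derives theoretical values for linear models, which this brute-force estimate does not attempt):

```python
import random

def mean_min_sq_distance(predict, n_points=2000, n_grid=500, seed=0):
    """Monte Carlo estimate of the mean minimum squared distance between
    uniform random data points in the outcome space [0, 1]^2 and a
    one-parameter model's response surface (here a curve).
    `predict(theta)` maps a parameter in [0, 1] to a predicted point."""
    rng = random.Random(seed)
    # Discretize the response surface on a parameter grid.
    grid = [predict(i / (n_grid - 1)) for i in range(n_grid)]
    total = 0.0
    for _ in range(n_points):
        p = (rng.random(), rng.random())
        total += min((p[0] - q[0])**2 + (p[1] - q[1])**2 for q in grid)
    return total / n_points

# Toy model A predicts equal outcomes in both conditions: the diagonal.
line = lambda t: (t, t)
# Toy model B predicts a curved relation between the two outcomes.
curve = lambda t: (t, t * t)

d_line = mean_min_sq_distance(line)
d_curve = mean_min_sq_distance(curve)
# For the diagonal, the exact value is E[(x - y)^2] / 2 = 1/12.
print(f"line: {d_line:.4f}  curve: {d_curve:.4f}")
```

A smaller mean minimum distance means the response surface sits closer to arbitrary data, i.e. the model is more complex in this sense; the diagonal's exact value of 1/12 provides a check on the estimator.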
Survey of Model Selection and Model Combination