Results 1–10 of 350
Toward an instance theory of automatization
Psychological Review, 1988
Cited by 647 (38 self)
This article presents a theory in which automatization is construed as the acquisition of a domain-specific knowledge base, formed of separate representations, instances, of each exposure to the task. Processing is considered automatic if it relies on retrieval of stored instances, which will occur only after practice in a consistent environment. Practice is important because it increases the amount retrieved and the speed of retrieval; consistency is important because it ensures that the retrieved instances will be useful. The theory accounts quantitatively for the power-function speedup and predicts a power-function reduction in the standard deviation that is constrained to have the same exponent as the power function for the speedup. The theory accounts for qualitative properties as well, explaining how some may disappear and others appear with practice. More generally, it provides an alternative to the modal view of automaticity, arguing that novice performance is limited by a lack of knowledge rather than a scarcity of resources. The focus on learning avoids many problems with the modal view that stem from its focus on resource limitations. Automaticity is an important phenomenon in everyday mental life. Most of us recognize that we perform routine activities quickly and effortlessly, with little thought and conscious awareness; in short, automatically (James, 1890). As a result, we often perform those activities on "automatic pilot" and turn our minds to other things. For example, we can drive to dinner while conversing in depth with a visiting scholar, or we can make coffee while planning dessert. However, these benefits may be offset by costs. The automatic pilot can lead us astray, causing errors and sometimes catastrophes (Reason & Mycielska, 1982). If the conversation is deep enough, we may find ourselves and the scholar arriving at the office rather than the restaurant, or we may discover that we aren't sure whether we put two or three scoops of coffee into the pot. Automaticity is also an important phenomenon in skill acquisition (e.g., Bryan & Harter, 1899). Skills are thought to consist largely of collections of automatic processes and procedures
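The matched power-function exponents for the mean and the standard deviation follow from treating each response time as the minimum over n instance-retrieval times. A minimal simulation sketch of that prediction (the Weibull retrieval distribution and its shape parameter are illustrative assumptions, not taken from the paper):

```python
import math
import random

rng = random.Random(0)
c = 2.0  # Weibull shape for retrieval times (an illustrative choice)

def min_of_n(n, samples=50_000):
    """Mean and SD of the minimum of n i.i.d. Weibull(shape=c) retrieval times."""
    mins = [min(rng.weibullvariate(1.0, c) for _ in range(n))
            for _ in range(samples)]
    mean = sum(mins) / samples
    sd = math.sqrt(sum((m - mean) ** 2 for m in mins) / samples)
    return mean, sd

def log_log_slope(xs, ys):
    """Least-squares slope of log(y) against log(x), i.e. the power exponent."""
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

ns = [1, 2, 4, 8, 16]  # number of stored instances, i.e. amount of practice
stats = [min_of_n(n) for n in ns]
slope_mean = log_log_slope(ns, [s[0] for s in stats])
slope_sd = log_log_slope(ns, [s[1] for s in stats])
print(round(slope_mean, 2), round(slope_sd, 2))  # both close to -1/c = -0.5
```

Both mean and SD of the minimum shrink as n to the power -1/c, so the two fitted exponents agree, which is the constraint the abstract describes.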
Novelty Detection: A Review  Part 1: Statistical Approaches
Signal Processing, 2003
Cited by 204 (0 self)
Novelty detection is the identification of new or unknown data or signals that a machine learning system was not aware of during training. It is one of the fundamental requirements of a good classification or identification system, since the test data sometimes contain information about objects that were not known at the time the model was trained. In this paper we provide a state-of-the-art review of novelty detection based on statistical approaches. The second part of the paper details novelty detection using neural networks. As discussed, there is a multitude of applications where novelty detection is extremely important, including signal processing, computer vision, pattern recognition, data mining, and robotics.
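As a toy illustration of the statistical approach surveyed here, one can model only the "normal" training data and flag points the model assigns low probability. The Gaussian model and 3-sigma threshold below are assumptions for the sketch, not methods attributed to the paper:

```python
import math
import random

rng = random.Random(1)

# Train only on "normal" observations: the detector never sees novelties,
# which is the defining constraint of novelty detection.
train = [rng.gauss(0.0, 1.0) for _ in range(1000)]
mu = sum(train) / len(train)
sigma = math.sqrt(sum((t - mu) ** 2 for t in train) / len(train))

def is_novel(x, k=3.0):
    """Minimal statistical detector: flag points more than k SDs from the training mean."""
    return abs(x - mu) > k * sigma

print(is_novel(0.5), is_novel(8.0))  # False True
```

The same structure carries over to richer density models; only the fitted model and the threshold rule change.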
The choice axiom after twenty years
Journal of Mathematical Psychology, 1977
Cited by 80 (0 self)
This survey is divided into three major sections. The first concerns mathematical results about the choice axiom and the choice models that devolve from it. For example, its relationship to Thurstonian theory is satisfyingly understood; much is known about how choice and ranking probabilities may relate, although little of this knowledge seems empirically useful; and there are certain interesting statistical facts. The second section describes attempts that have been made to test and apply these models. The testing has been done mostly, though not exclusively, by psychologists; the applications have been mostly in economics and sociology. Although it is clear from many experiments that the conditions under which the choice axiom holds are surely delicate, the need for simple, rational underpinnings in complex theories, as in economics and sociology, leads one to accept assumptions that are at best approximate. The third section concerns alternative, more general theories which, in spirit, are much like the choice axiom. Perhaps I had best admit at the outset that, as a commentator on this scene, I am qualified no better than many others and rather less well than some who have been working in this area recently, which I have not been. My pursuits have led me along other,
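The models the choice axiom characterizes take a simple ratio form: each alternative carries a positive scale value, and it is chosen with probability proportional to that value. A sketch with made-up alternatives and scale values:

```python
# Luce's choice rule: P(a chosen from A) = v(a) / sum of v(b) over b in A.
# The alternatives and scale values below are invented for illustration.

def choice_prob(a, alternatives, v):
    return v[a] / sum(v[b] for b in alternatives)

v = {"coffee": 4.0, "tea": 2.0, "water": 2.0}

p_full = choice_prob("coffee", ["coffee", "tea", "water"], v)  # 4/8
p_pair = choice_prob("coffee", ["coffee", "tea"], v)           # 4/6

# The odds of coffee over tea are 2:1 in both choice sets: the
# "independence from irrelevant alternatives" signature of the axiom.
print(p_full, p_pair)
```

Dropping or adding alternatives rescales all probabilities but never changes their ratios, which is exactly the delicate empirical condition the survey discusses.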
Disclosure risk vs. data utility: The R-U confidentiality map
Chance, 2001
Cited by 69 (3 self)
Information organizations (IOs) must provide data products that are both useful and carry low risk of confidentiality disclosure. Recognizing that de-identification of data is generally inadequate to protect confidentiality against attack by a data snooper, concerned IOs can apply disclosure limitation techniques to the original data. Desirably, the resulting restricted data have both high data utility U to users (analytically valid data) and low disclosure risk R (safe data). This article shows the promise of the R-U confidentiality map, a chart that traces the impact on R and U of changes in the parameters of a disclosure limitation procedure. Theory for the R-U confidentiality map is developed for additive noise applied to univariate data under various scenarios of data snooper attack. These scenarios are predicated on different knowledge states of the data snooper. A demonstration is provided of how to implement the theory for a real database. Through simulation methods, this leads to an empirical R-U confidentiality map. Application is made to data from a National Center for Education Statistics (NCES) survey, the
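To make the idea concrete, here is a toy trace of such a map for additive Gaussian noise on a univariate attribute. The attribute, the noise levels, and both the R and U stand-ins are assumptions for illustration, not the paper's definitions:

```python
import random

rng = random.Random(2)
x = [rng.gauss(50.0, 10.0) for _ in range(5000)]  # hypothetical confidential values

def release(noise_sd):
    """Additive-noise disclosure limitation: release x_i + e_i, e_i ~ N(0, sd^2)."""
    return [xi + rng.gauss(0.0, noise_sd) for xi in x]

def disclosure_risk(z, tol=1.0):
    """Crude stand-in for R: share of records a snooper recovers within +/- tol."""
    return sum(abs(zi - xi) < tol for zi, xi in zip(z, x)) / len(x)

def data_utility(z):
    """Crude stand-in for U: closeness of the released mean to the true mean."""
    return -abs(sum(z) / len(z) - sum(x) / len(x))

# Sweeping the noise parameter and plotting (R, U) pairs traces one
# empirical R-U curve; here we just tabulate R at three noise levels.
noise_levels = (0.0, 2.0, 8.0)
risks = {sd: disclosure_risk(release(sd)) for sd in noise_levels}
print(risks)
```

More noise pushes R down but drags U down with it; the map makes that trade-off visible as a curve rather than a single number.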
Size effect
2000
Cited by 64 (8 self)
This paper surveys the available results on the size effect on the nominal strength of structures, a fundamental problem of considerable importance to concrete structures, geotechnical structures, geomechanics, arctic ice engineering, composite materials, etc., with applications ranging from structural engineering to the design of ships and aircraft. The history of the ideas on the size effect is briefly outlined and recent research directions are emphasized. First, the classical statistical theory of size effect due to randomness of strength, completed by Weibull, is reviewed and its limitations pointed out. Subsequently, the energetic size effect, caused by stress redistributions due to large fractures, is discussed. Attention is then focused on the bridging between the theory of plasticity, which implies no size effect and is applicable to quasibrittle materials only on a sufficiently small scale, and the theory of linear elastic fracture mechanics, which exhibits the strongest possible deterministic size effect and is applicable to these materials on sufficiently large scales. The main ideas of the recently developed theory for the size effect in the bridging range are sketched. Only selected references to the vast amount of work that has recently been appearing
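The bridging described here, a plastic limit with no size effect at small sizes and the LEFM D^(-1/2) trend at large sizes, is commonly summarized by Bazant's classical size effect law. A sketch of its two asymptotes; the constants B, ft, and D0 are illustrative, not values from the paper:

```python
import math

# Bazant's size effect law for quasibrittle failure:
#   sigma_N = B * ft / sqrt(1 + D / D0)
# B, ft (tensile strength), and D0 (transitional size) are illustrative.
B, ft, D0 = 1.0, 3.0, 100.0  # ft in MPa, D0 in mm

def nominal_strength(D):
    return B * ft / math.sqrt(1.0 + D / D0)

# Small D: strength approaches B*ft (plasticity, no size effect).
# Large D: strength falls like D**-0.5, so quadrupling D halves strength
# (the LEFM asymptote).
small = nominal_strength(0.0)
ratio = nominal_strength(1e4) / nominal_strength(4e4)
print(small, round(ratio, 2))  # 3.0 1.99
```

The single parameter D0 locates the transition between the two regimes, which is the "bridging range" the survey concentrates on.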
Likelihood-Based Inference for Max-Stable Processes
 Journal of the American Statistical Association
Cited by 55 (5 self)
The last decade has seen max-stable processes emerge as a common tool for the statistical modelling of spatial extremes. However, their application is complicated by the unavailability of the multivariate density function, and so likelihood-based methods remain far from providing a complete and flexible framework for inference. In this article we develop inferentially practical, likelihood-based methods for fitting max-stable processes, derived from a composite-likelihood approach. The procedure is sufficiently reliable and versatile to permit the simultaneous modelling of joint and marginal parameters in the spatial context at a moderate computational cost. The utility of this methodology is examined via simulation, and illustrated by the analysis of U.S. precipitation extremes. Keywords: Composite likelihood; Extreme value theory; Max-stable processes; Pseudo-likelihood; Rainfall; Spatial extremes.
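The composite-likelihood idea can be sketched generically: sum bivariate log-densities over all pairs of sites and maximize that sum as if it were an ordinary likelihood. A bivariate normal stands in below for the bivariate max-stable density (which is what is actually tractable in that setting); the sites, data, and grid search are all invented for illustration:

```python
import math
from itertools import combinations

def bivar_normal_logpdf(x, y, rho):
    """Standard bivariate normal log-density with correlation rho (the stand-in)."""
    q = (x * x - 2 * rho * x * y + y * y) / (1 - rho * rho)
    return -math.log(2 * math.pi) - 0.5 * math.log(1 - rho * rho) - 0.5 * q

def pairwise_loglik(obs, rho):
    """Composite (pairwise) log-likelihood over all pairs of sites."""
    return sum(bivar_normal_logpdf(obs[i], obs[j], rho)
               for i, j in combinations(range(len(obs)), 2))

obs = [0.2, -0.5, 1.1, 0.3]  # made-up observations at four sites

# Maximize over the dependence parameter by a coarse grid search.
grid = [r / 100 for r in range(-95, 96)]
best = max(grid, key=lambda r: pairwise_loglik(obs, r))
print(best)
```

Because only bivariate densities are needed, the same mechanics apply when the full joint density is unavailable; the cost is a loss of efficiency that the composite-likelihood literature quantifies.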
Experimental evaluation of heuristic optimization algorithms: A tutorial
Journal of Heuristics, 2001
Cited by 48 (0 self)
Heuristic optimization algorithms seek good feasible solutions to optimization problems in circumstances where the complexity of the problem or the limited time available for solution does not allow an exact solution. Although worst-case and probabilistic analyses of algorithms have produced insight on some classic models, most of the heuristics developed for large optimization problems must be evaluated empirically: by applying procedures to a collection of specific instances and comparing the observed solution quality and computational burden. This paper focuses on the methodological issues that must be confronted by researchers undertaking such experimental evaluations of heuristics, including experimental design, sources of test instances, measures of algorithmic performance, analysis of results, and presentation in papers and talks. The questions are difficult, and there are no clear right answers. We seek only to highlight the main issues, present alternative ways of addressing them under different circumstances, and caution about pitfalls to avoid. Key Words: Heuristic optimization; computational experiments.
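The empirical protocol described above, run every procedure on a common instance set and record both solution quality and time, can be sketched in a few lines. The toy problem, the two "heuristics", and the quality measure are all invented for illustration:

```python
import random
import time

def greedy_max(instance):
    return max(instance)   # trivially optimal on this toy "pick the best item" task

def first_item(instance):
    return instance[0]     # a deliberately weak baseline

def evaluate(heuristic, instances):
    """Mean solution quality and wall-clock time over a common test bed."""
    start = time.perf_counter()
    qualities = [heuristic(inst) for inst in instances]
    elapsed = time.perf_counter() - start
    return sum(qualities) / len(qualities), elapsed

rng = random.Random(0)  # fixed seed so the experiment is reproducible
instances = [[rng.random() for _ in range(50)] for _ in range(100)]

results = {h.__name__: evaluate(h, instances) for h in (greedy_max, first_item)}
for name, (mean_q, secs) in results.items():
    print(name, round(mean_q, 3))
```

Even this toy harness embodies two of the paper's points: all procedures face identical instances, and quality and computational burden are reported separately rather than folded into one number.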
Calculating quantile risk measures for financial time series using extreme value theory
1998
Cited by 45 (1 self)
We consider the estimation of quantiles in the tail of the marginal distribution of financial return series, using extreme value statistical methods based on the limiting distribution for block maxima of stationary time series. A simple methodology for the quantification of worst-case scenarios, such as ten- or twenty-year losses, is proposed. We validate the methods on a simulated series from an ARCH(1) process showing some of the features of real financial data, such as fat tails and clustered extreme values; we then analyse daily log returns on a share price.
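A minimal block-maxima sketch of this style of analysis, using a moment-based Gumbel fit (the zero-shape special case of the limiting GEV distribution) on simulated i.i.d. "returns"; the simulated series, the block length, and the fitting shortcut are assumptions for illustration and deliberately simpler than the paper's ARCH-based study:

```python
import math
import random

rng = random.Random(3)

# Simulated daily "returns" (i.i.d. normal purely for illustration).
returns = [rng.gauss(0.0, 0.01) for _ in range(252 * 20)]  # ~20 trading years

# Block maxima: the largest loss (negated return) in each 252-day block.
block = 252
maxima = [max(-r for r in returns[i:i + block])
          for i in range(0, len(returns), block)]

# Moment-based Gumbel fit: mean = mu + 0.5772*beta, SD = beta*pi/sqrt(6).
m = sum(maxima) / len(maxima)
s = math.sqrt(sum((x - m) ** 2 for x in maxima) / (len(maxima) - 1))
beta = s * math.sqrt(6) / math.pi
mu = m - 0.5772 * beta

def return_level(T):
    """Loss level exceeded on average once every T blocks (here: years)."""
    p = 1.0 - 1.0 / T
    return mu - beta * math.log(-math.log(p))

print(round(return_level(10), 4), round(return_level(20), 4))
```

The ten- and twenty-year losses of the abstract are exactly such return levels; a full analysis would fit the three-parameter GEV and account for the serial dependence that the ARCH(1) validation is designed to probe.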