Results 11–20 of 732
Coverage Criteria for GUI Testing
 In Proceedings of the 8th European Software Engineering Conference (ESEC) and 9th ACM SIGSOFT International Symposium on the Foundations of Software Engineering (FSE-9), 2001
Cited by 60 (15 self)
The widespread recognition of the usefulness of graphical user interfaces (GUIs) has established their importance as critical components of today's software. GUIs have characteristics different from traditional software, and conventional testing techniques do not apply directly to GUI software. This paper's focus is on coverage criteria for GUIs, an important tool in testing. We present new coverage criteria that may be employed to help determine whether a GUI has been adequately tested. These coverage criteria use events and event sequences to specify a measure of test adequacy. Since the total number of permutations of event sequences in any nontrivial GUI is extremely large, the GUI's hierarchical structure is exploited to identify the important event sequences to be tested. The GUI's hierarchy is decomposed into GUI components, each of which is used as a basic unit of testing. A new representation of a GUI component, called an event-flow graph, identifies the interaction of events within a component, and intra-component criteria are used to evaluate the adequacy of tests on these events. The hierarchical relationship among components is represented by an integration tree, and inter-component coverage criteria are used to evaluate the adequacy of test sequences that cross components. Algorithms are described to construct event-flow graphs and an integration tree for a given GUI, and to evaluate the coverage of a given test suite with respect to the new coverage criteria. A case study illustrates an important correlation between event-based coverage of a GUI and statement coverage of the software's underlying code.
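The coverage measures the abstract describes can be made concrete. The sketch below is illustrative only (a toy component with invented event names, not the paper's algorithms): an event-flow graph represented as an adjacency map, with event coverage and event-pair (edge) coverage computed for a suite of event sequences.

```python
# Illustrative sketch of event-based coverage criteria. The event-flow
# graph maps each event to the set of events that may follow it within
# one GUI component.

def event_coverage(efg, suite):
    """Fraction of events exercised by at least one test case."""
    covered = {e for case in suite for e in case if e in efg}
    return len(covered) / len(efg)

def event_pair_coverage(efg, suite):
    """Fraction of follows-relations (edges) exercised by consecutive
    event pairs occurring in the test suite."""
    edges = {(a, b) for a, succs in efg.items() for b in succs}
    covered = {(a, b) for case in suite
               for a, b in zip(case, case[1:]) if (a, b) in edges}
    return len(covered) / len(edges)

# A hypothetical "File" component: Open may be followed by Ok or Cancel, etc.
efg = {
    "Open":   {"Ok", "Cancel"},
    "Ok":     {"Open", "Save"},
    "Cancel": {"Open"},
    "Save":   {"Open"},
}
suite = [["Open", "Ok", "Save", "Open", "Cancel"]]
print(event_coverage(efg, suite))       # all 4 events exercised -> 1.0
print(event_pair_coverage(efg, suite))  # 4 of the 6 edges exercised
```

The same counting idea extends to longer event sequences, which is where the combinatorial blow-up the abstract mentions comes from.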
Real Theorem Provers Deserve Real User Interfaces
, 1992
Cited by 56 (5 self)
This paper explains how to add a modern user interface to existing theorem provers, using principles and tools designed for programming environments.
Automated Modeling of Complex Systems to Answer Prediction Questions
 ARTIFICIAL INTELLIGENCE
, 1995
Shape Ambiguities in Structure from Motion
 PAMI
, 1996
Cited by 54 (4 self)
This technical report examines the fundamental ambiguities and uncertainties inherent in recovering structure from motion. By examining the eigenvectors associated with null or small eigenvalues of the Hessian matrix, we can quantify the exact nature of these ambiguities and predict how they affect the accuracy of the reconstructed shape. Our results for orthographic cameras show that the bas-relief ambiguity is significant even with many images, unless a large amount of rotation is present. Similar results for perspective cameras suggest that three or more frames and a large amount of rotation are required for metrically accurate reconstruction.
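The diagnostic at the heart of the report can be illustrated generically (the matrix below is synthetic, not a real structure-from-motion Hessian): a near-zero eigenvalue of the Hessian signals an ambiguity, and the corresponding eigenvector identifies which parameter combination the data fail to constrain.

```python
import numpy as np

# Synthetic example: two of the four parameters are nearly confounded,
# so the Gauss-Newton Hessian J^T J has one near-zero eigenvalue whose
# eigenvector points along the ambiguous direction.
rng = np.random.default_rng(0)
J = rng.standard_normal((20, 4))                      # synthetic Jacobian
J[:, 3] = J[:, 0] + 1e-4 * rng.standard_normal(20)    # nearly dependent columns
H = J.T @ J                                           # Gauss-Newton Hessian

w, V = np.linalg.eigh(H)        # eigenvalues in ascending order
print(w[0] / w[-1])             # tiny ratio flags a near-singular Hessian
ambiguous_dir = V[:, 0]         # eigenvector of the smallest eigenvalue
print(ambiguous_dir)            # concentrated on the two confounded parameters
```

Here the recovered direction is essentially (1, 0, 0, -1)/sqrt(2): increasing parameter 0 while decreasing parameter 3 leaves the data almost unchanged, which is exactly the kind of ambiguity the eigen-analysis exposes.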
Global illumination using local linear density estimation
 Proceedings of SIGGRAPH 97
, 1997
Cited by 54 (6 self)
This article presents the density estimation framework for generating view-independent global illumination solutions. It works by probabilistically simulating the light flow in an environment with light particles that trace random walks originating at luminaires and then using statistical density estimation techniques to reconstruct the lighting on each surface. By splitting the computation into separate transport and reconstruction stages, we gain many advantages including reduced memory usage, the ability to simulate non-diffuse transport, and natural parallelism. Solutions to several theoretical and practical difficulties in implementing this framework are also described. Light sources that vary spectrally and directionally are integrated into a spectral particle tracer using nonuniform rejection. A new local linear density estimation technique eliminates boundary bias and extends to arbitrary polygons. A mesh decimation algorithm with perceptual calibration is introduced to simplify the Gouraud-shaded ...
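The two-stage split the abstract describes can be sketched in a minimal 2-D setting (an assumed toy geometry, not the article's implementation): stage one traces light particles from a point luminaire to a floor; stage two reconstructs the particle density on the floor with a kernel density estimator.

```python
import numpy as np

# Stage 1 (transport): a point light at (0, 1) emits particles with
# uniformly random downward angles; each particle hits the floor y = 0
# at x = tan(theta). We store only the hit positions.
rng = np.random.default_rng(1)
N = 200_000
theta = rng.uniform(-np.pi / 2, np.pi / 2, N)
hits = np.tan(theta)

# Stage 2 (reconstruction): a simple Gaussian-kernel density estimate
# (the article uses a more refined local *linear* estimator to remove
# boundary bias; this plain kernel version just shows the idea).
def density(x, samples, h=0.05):
    u = (x - samples) / h
    return np.exp(-0.5 * u * u).sum() / (len(samples) * h * np.sqrt(2 * np.pi))

# For this setup the hit density is analytically Cauchy: 1 / (pi (1 + x^2)).
print(density(0.0, hits), 1 / np.pi)
```

Because the hit points are all that pass between the stages, the transport pass can run in parallel and handle non-diffuse effects without the reconstruction pass ever knowing, which is the memory and parallelism advantage the abstract claims.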
MPFUN: A Portable High Performance Multiprecision Package
, 1990
Cited by 51 (4 self)
The author has written a package of Fortran routines that perform a variety of arithmetic operations and transcendental functions on floating point numbers of arbitrarily high precision, including large integers. This package features (1) virtually universal portability, (2) high performance, especially on vector supercomputers, (3) advanced algorithms, including FFT-based multiplication and quadratically convergent algorithms for π and transcendental functions, and (4) extensive self-checking and debug facilities that permit the package to be used as a rigorous system integrity test. Converting application programs to run with these routines is facilitated by an automatic translator program. This paper describes the routines in the package and includes discussion of the algorithms employed, the implementation techniques, performance results and some applications. Notable among the performance results is that this package runs up to 40 times faster than another widely used package on a RISC workstation, and it runs up to 400 times faster than the other package on a Cray supercomputer.
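The package itself is Fortran, but the kind of quadratically convergent π algorithm the abstract mentions is easy to demonstrate; the Python sketch below shows the Gauss–Legendre (AGM) iteration, in which the number of correct digits roughly doubles per step (whether MPFUN uses exactly this iteration is not stated in the abstract).

```python
from math import pi, sqrt

def gauss_legendre_pi(iterations=4):
    """Gauss-Legendre/AGM iteration for pi: quadratic convergence."""
    a, b, t, p = 1.0, 1.0 / sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a, b, t, p = ((a + b) / 2,          # arithmetic mean
                      sqrt(a * b),          # geometric mean
                      t - p * ((a - b) / 2) ** 2,
                      2 * p)
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi())   # agrees with math.pi to double precision
```

In an arbitrary-precision setting each iteration doubles the working digit count, which is why such algorithms pair naturally with FFT-based multiplication.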
Orthogonal Eigenvectors and Relative Gaps
, 2002
Cited by 50 (16 self)
Let LDLᵀ be the triangular factorization of a real symmetric n × n tridiagonal matrix, so that L is a unit lower bidiagonal matrix and D is diagonal. Let (λ, v) be an eigenpair, λ ≠ 0, with the property that both λ and v are determined to high relative accuracy by the parameters in L and D. Suppose also that the relative gap between λ and its nearest neighbor μ in the spectrum exceeds 1/n: n|λ − μ| > |λ|. This paper presents a new O(n) algorithm and a proof that, in the presence of roundoff error, the algorithm computes an approximate eigenvector v̂ that is accurate to working precision: |sin ∠(v, v̂)| = O(nε), where ε is the roundoff unit. It follows that v̂ is numerically orthogonal to all the other eigenvectors. This result forms part of a program to compute numerically orthogonal eigenvectors without resorting to the Gram–Schmidt process. The contents of this paper provide a high-level description and theoretical justification for LAPACK (version 3.0) subroutine DLAR1V.
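The yardstick the paper targets, orthogonality at the level of nε without Gram–Schmidt, is easy to check numerically. The snippet below is not the paper's O(n) algorithm; it is only a generic verification, using a standard symmetric eigensolver on a random tridiagonal matrix, of what "numerically orthogonal to working precision" means.

```python
import numpy as np

# Random symmetric tridiagonal matrix T = diag(d) + offdiag(e).
n = 200
rng = np.random.default_rng(2)
d = rng.standard_normal(n)
e = rng.standard_normal(n - 1)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

w, V = np.linalg.eigh(T)                       # eigenpairs of T
ortho_err = np.max(np.abs(V.T @ V - np.eye(n)))  # departure from orthogonality
print(ortho_err, n * np.finfo(float).eps)        # ortho_err is of order n*eps
```

An eigenvector accurate to |sin ∠(v, v̂)| = O(nε) automatically yields this level of mutual orthogonality, which is the point of the paper's accuracy guarantee.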
A computational model of how the basal ganglia produce sequences
 Journal of Cognitive Neuroscience
, 1998
Cited by 50 (1 self)
We propose a systems-level computational model of the basal ganglia based closely on known anatomy and physiology. First, we assume that the thalamic targets, which relay ascending information to cortical action and planning areas, are tonically inhibited by the basal ganglia. Second, we assume that the output stage of the basal ganglia, the internal segment of the globus pallidus (GPi), selects a single action from several competing actions via lateral interactions. Third, we propose that a form of local working memory exists in the form of reciprocal connections between the external globus pallidus (GPe) and the subthalamic nucleus (STN). As a test of the model, the system was trained to learn a sequence of states that required the context of previous actions. The striatum, which was assumed to represent a conjunction of cortical states, directly selected the action in the GP during training. The STN-to-GP connection strengths were modified by an associative learning ...
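The model's second assumption, that GPi selects a single action via lateral interactions, amounts to a winner-take-all circuit. The toy sketch below illustrates that computational motif only; the parameters are invented for illustration, not fitted to pallidal physiology.

```python
import numpy as np

def select_action(drive, steps=200, excite=1.0, inhibit=0.2):
    """Rectified units with lateral inhibition, iterated to a winner.

    Each unit is excited by its own activity and inhibited by the summed
    activity of its competitors; weaker units are driven to zero.
    """
    x = np.array(drive, dtype=float)
    for _ in range(steps):
        total = x.sum()
        x = np.maximum(0.0, excite * x - inhibit * (total - x))
    return int(np.argmax(x)), x

winner, activity = select_action([0.50, 0.52, 0.48])
print(winner)                           # the unit with the largest drive wins -> 1
print(int(np.count_nonzero(activity)))  # lateral inhibition silences the rest -> 1
```

Because differences between units grow on every iteration while mutual inhibition suppresses the total, even a small advantage in input drive is enough to pick exactly one action, which is the selection behavior the model attributes to GPi.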
Recursive implementation of the Gaussian filter
, 1995
Cited by 48 (2 self)
In this paper we propose a recursive implementation of the Gaussian filter. This implementation yields an infinite impulse response filter that has six MADDs per dimension independent of the value of σ in the Gaussian kernel. In contrast to the Deriche implementation (1987), the coefficients of our recursive filter have a simple, closed-form solution for a desired value of the Gaussian sigma. Our implementation is, in general, faster than (1) an implementation based upon direct convolution with samples of a Gaussian, (2) repeated convolutions with a kernel such as the uniform filter, and (3) an FFT implementation of a Gaussian filter.
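A sketch of the technique the abstract describes: a third-order IIR filter run forward and then backward, whose per-sample cost is independent of σ. The closed-form coefficients below are the published Young/van Vliet values; the zero-initial-condition boundary treatment here is a simplification of what a production implementation would do.

```python
import numpy as np

def _iir_pass(x, B, b0, b1, b2, b3):
    """One causal third-order IIR pass with zero initial conditions."""
    y = np.zeros(len(x) + 3)
    for n in range(len(x)):
        y[n + 3] = B * x[n] + (b1 * y[n + 2] + b2 * y[n + 1] + b3 * y[n]) / b0
    return y[3:]

def recursive_gaussian(x, sigma):
    """Approximate Gaussian smoothing via forward + backward IIR passes."""
    if sigma >= 2.5:
        q = 0.98711 * sigma - 0.96330
    else:                                  # closed form valid for sigma >= 0.5
        q = 3.97156 - 4.14554 * np.sqrt(1.0 - 0.26891 * sigma)
    b0 = 1.57825 + 2.44413 * q + 1.42810 * q**2 + 0.422205 * q**3
    b1 = 2.44413 * q + 2.85619 * q**2 + 1.26661 * q**3
    b2 = -(1.42810 * q**2 + 1.26661 * q**3)
    b3 = 0.422205 * q**3
    B = 1.0 - (b1 + b2 + b3) / b0          # normalizes DC gain to 1
    forward = _iir_pass(x, B, b0, b1, b2, b3)
    return _iir_pass(forward[::-1], B, b0, b1, b2, b3)[::-1]

# Smoothing an impulse recovers (approximately) a unit-area Gaussian.
impulse = np.zeros(301)
impulse[150] = 1.0
out = recursive_gaussian(impulse, sigma=3.0)
print(out.sum(), out.argmax())
```

The inner loop performs a fixed handful of multiply-adds per sample regardless of σ, whereas direct convolution with a sampled Gaussian costs O(σ) per sample — the source of the speedups the abstract reports.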
Surface perception in pictures
 Perception & Psychophysics
, 1992
Cited by 48 (4 self)
Subjects adjusted a local gauge figure such as to perceptually “fit” the apparent surfaces of objects depicted in photographs. We obtained a few hundred data points per session, covering the picture according to a uniform lattice. Settings were repeated 3 times for each of 3 subjects. Almost all of the variability resided in the slant; the relative spread in the slant was about 25% (Weber fraction). The tilt was reproduced with a typical spread of about 10°. The rank correlation of the slant settings of different observers was high; thus the slant settings of different subjects were monotonically related. The variability could be predicted from the scatter in repeated settings by the individual observers. Although repeated settings by a single observer agreed within 5%, observers did not agree on the value of the slant, even on the average. Scaling factors of a doubling in the depth dimension were encountered between different subjects. The data conformed quite well to some hypothetical fiducial global surface, the orientation of which was “probed” by the subject’s local settings. The variability was completely accounted for by single-observer scatter. These conclusions are based upon an analysis of the internal structure of the local settings. We did not address the problem of veridicality, that is, conformity to some “real object.”