Results 1–10 of 104
A chronology of interpolation: From ancient astronomy to modern signal and image processing
 Proceedings of the IEEE
, 2002
"... This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into histo ..."
Abstract

Cited by 61 (0 self)
 Add to MetaCart
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. “It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it.”
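As a minimal illustration of the polynomial interpolation this survey traces (a generic sketch, not code from the paper), the Lagrange form builds the unique degree-(n-1) polynomial through n samples:

```python
import numpy as np

def lagrange_interp(xs, ys, t):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at t."""
    t = np.asarray(t, dtype=float)
    result = np.zeros_like(t)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        # i-th Lagrange basis polynomial: 1 at xi, 0 at the other nodes
        li = np.ones_like(t)
        for j, xj in enumerate(xs):
            if j != i:
                li *= (t - xj) / (xi - xj)
        result += yi * li
    return result

xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0, 4.0])          # samples of f(x) = x^2
print(lagrange_interp(xs, ys, 1.5))     # → 2.25 (exact for a quadratic)
```

Since three nodes determine a quadratic uniquely, interpolating samples of x² reproduces x² exactly; splines and convolution-based kernels, discussed later in the survey, trade this global polynomial for piecewise constructions.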
Computational Differential Equations
, 1996
"... Introduction This first part has two main purposes. The first is to review some mathematical prerequisites needed for the numerical solution of differential equations, including material from calculus, linear algebra, numerical linear algebra, and approximation of functions by (piecewise) polynomial ..."
Abstract

Cited by 41 (4 self)
 Add to MetaCart
Introduction This first part has two main purposes. The first is to review some mathematical prerequisites needed for the numerical solution of differential equations, including material from calculus, linear algebra, numerical linear algebra, and approximation of functions by (piecewise) polynomials. The second purpose is to introduce the basic issues in the numerical solution of differential equations by discussing some concrete examples. We start by proving the Fundamental Theorem of Calculus by proving the convergence of a numerical method for computing an integral. We then introduce Galerkin's method for the numerical solution of differential equations in the context of two basic model problems from population dynamics and stationary heat conduction.
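The Galerkin approach mentioned above can be sketched on the stationary model problem -u'' = f on (0,1) with u(0) = u(1) = 0, using piecewise-linear elements; this is an illustrative sketch under those assumptions, not code from the book:

```python
import numpy as np

def galerkin_poisson_1d(f, n):
    """Piecewise-linear Galerkin (finite element) method for -u'' = f on (0,1)
    with u(0) = u(1) = 0. Returns the interior nodes and nodal values."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]           # interior nodes
    # stiffness matrix: integrals of products of hat-function derivatives
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    b = h * f(x)                                      # load vector for hat functions
    return x, np.linalg.solve(A, b)

x, u = galerkin_poisson_1d(lambda t: np.ones_like(t), 16)
# exact solution for f = 1 is u(x) = x(1-x)/2
```

For constant f the load integrals are exact, and a classical 1D result then makes the nodal values of the Galerkin solution agree with the exact solution to rounding error.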
On Selecting Models for Nonlinear Time Series
 Physica D
, 1995
"... Constructing models from time series with nontrivial dynamics involves the problem of how to choose the best model from within a class of models, or to choose between competing classes. This paper discusses a method of building nonlinear models of possibly chaotic systems from data, while maintainin ..."
Abstract

Cited by 39 (11 self)
 Add to MetaCart
Constructing models from time series with nontrivial dynamics involves the problem of how to choose the best model from within a class of models, or to choose between competing classes. This paper discusses a method of building nonlinear models of possibly chaotic systems from data, while maintaining good robustness against noise. The models that are built are close to the simplest possible according to a description length criterion. The method will deliver a linear model if that has shorter description length than a nonlinear model. We show how our models can be used for prediction, smoothing and interpolation in the usual way. We also show how to apply the results to identification of chaos by detecting the presence of homoclinic orbits directly from time series.

1. The Model Selection Problem

As our understanding of chaotic and other nonlinear phenomena has grown, it has become apparent that linear models are inadequate to model most dynamical processes. Nevertheless, linear models ...
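The description-length idea can be illustrated with a BIC-style two-part code applied to polynomial models; this is a generic stand-in for the criterion, not the paper's exact formulation:

```python
import numpy as np

def description_length(y, yhat, k):
    """BIC-style two-part code length in nats: cost of encoding the residuals
    plus cost of encoding k real-valued parameters."""
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return 0.5 * n * np.log(rss / n) + 0.5 * k * np.log(n)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 - 2.0 * x**2 + 0.05 * rng.standard_normal(x.size)  # quadratic truth + noise

# pick the polynomial degree whose fit has the shortest description length
best = min(range(8), key=lambda d: description_length(
    y, np.polyval(np.polyfit(x, y, d), x), d + 1))
print(best)  # typically the true degree, 2
```

Underfitting pays heavily in the residual term and overfitting pays in the parameter term, so the minimizer tends to land at the simplest adequate model, which is the behavior the abstract describes.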
Beyond amplification: Using the computer to reorganize mental functioning
 Educational Psychologist
, 1985
"... Computers are classically viewed as amplifiers of cognition. An alternative conceptualization is offered of computer as reorganizer of mental functioning. Software analyses illuminate the advantages of the latter approach for new visions of the potential cognitive benefits of computers. A new result ..."
Abstract

Cited by 33 (0 self)
 Add to MetaCart
Computers are classically viewed as amplifiers of cognition. An alternative conceptualization is offered of the computer as a reorganizer of mental functioning. Software analyses illuminate the advantages of the latter approach for new visions of the potential cognitive benefits of computers. A new result emerges: Because the cognitive technologies we invent serve as instruments of cultural redefinition (shaping who we are by changing, not just amplifying, what we do), defining educational values becomes a foreground issue. The demands of an information society make an explicit emphasis on general cognitive skills a priority. The urgency of updating education's goals and methods recommends an activist research paradigm: to simultaneously create and study changes in processes and outcomes of human learning with new cognitive and educational tools. The computer, that symbolic workhorse and supreme number-cruncher, has lately become a central topic of thought and discussion for educators and psychologists. Brought about by the advent of inexpensive and relatively powerful software for personal computers, now within the budgets of most if not all school systems, the uses of computers have raised many seminal questions about the future of education and for the research community in psychology and education. ...
Computability and recursion
 BULL. SYMBOLIC LOGIC
, 1996
"... We consider the informal concept of “computability” or “effective calculability” and two of the formalisms commonly used to define it, “(Turing) computability” and “(general) recursiveness.” We consider their origin, exact technical definition, concepts, history, general English meanings, how they b ..."
Abstract

Cited by 32 (0 self)
 Add to MetaCart
We consider the informal concept of “computability” or “effective calculability” and two of the formalisms commonly used to define it, “(Turing) computability” and “(general) recursiveness.” We consider their origin, exact technical definition, concepts, history, general English meanings, how they became fixed in their present roles, how they were first and are now used, their impact on nonspecialists, how their use will affect the future content of the subject of computability theory, and its connection to other related areas. After a careful historical and conceptual analysis of computability and recursion we make several recommendations in §7 about preserving the intensional differences between the concepts of “computability” and “recursion.” Specifically we recommend that: the term “recursive” should no longer carry the additional meaning of “computable” or “decidable;” functions defined using Turing machines, register machines, or their variants should be called “computable” rather than “recursive;” we should distinguish the intensional difference between Church’s Thesis and Turing’s Thesis, and use the latter particularly in dealing with mechanistic questions; the name of the subject should be “Computability Theory” or simply Computability rather than ...
Beyond core knowledge: Natural geometry
 Journal of Experimental Psychology: Animal Behavior Processes
, 2008
"... For many centuries, philosophers and scientists have pondered the origins and nature of human intuitions about the properties of points, lines, and figures on the Euclidean plane, with most hypothesizing that a system of Euclidean concepts either is innate or is assembled by general learning process ..."
Abstract

Cited by 12 (5 self)
 Add to MetaCart
For many centuries, philosophers and scientists have pondered the origins and nature of human intuitions about the properties of points, lines, and figures on the Euclidean plane, with most hypothesizing that a system of Euclidean concepts either is innate or is assembled by general learning processes. Recent research from cognitive and developmental psychology, cognitive anthropology, animal cognition, and cognitive neuroscience suggests a different view. Knowledge of geometry may be founded on at least two distinct, evolutionarily ancient, core cognitive systems for representing the shapes of large-scale, navigable surface layouts and of small-scale, movable forms and objects. Each of these systems applies to some but not all perceptible arrays and captures some but not all of the three fundamental Euclidean relationships of distance (or length), angle, and direction (or sense). Like natural number (Carey, 2009), Euclidean geometry may be constructed through the productive combination of representations from these core systems, through the use of uniquely human symbolic systems.
Hyperbolic geometry
 In Flavors of geometry
, 1997
"... 3. Why Call it Hyperbolic Geometry? 63 4. Understanding the OneDimensional Case 65 ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
3. Why Call it Hyperbolic Geometry? 63
4. Understanding the One-Dimensional Case 65
ON THE NUMERICAL EVALUATION OF FREDHOLM DETERMINANTS
, 2008
"... Abstract. Some significant quantities in mathematics and physics are most naturally expressed as the Fredholm determinant of an integral operator, most notably many of the distribution functions in random matrix theory. Though their numerical values are of interest, there is no systematic numerical ..."
Abstract

Cited by 10 (5 self)
 Add to MetaCart
Some significant quantities in mathematics and physics are most naturally expressed as the Fredholm determinant of an integral operator, most notably many of the distribution functions in random matrix theory. Though their numerical values are of interest, there is no systematic numerical treatment of Fredholm determinants to be found in the literature. Instead, the few numerical evaluations that are available rely on eigenfunction expansions of the operator, if expressible in terms of special functions, or on alternative, numerically more straightforwardly accessible analytic expressions, e.g., in terms of Painlevé transcendents, that have masterfully been derived in some cases. In this paper we close the gap in the literature by studying projection methods and, above all, a simple, easily implementable, general method for the numerical evaluation of Fredholm determinants that is derived from the classical Nyström method for the solution of Fredholm equations of the second kind. Using Gauss–Legendre or Clenshaw–Curtis as the underlying quadrature rule, we prove that the approximation error essentially behaves like the quadrature error for the sections of the kernel. In particular, we get exponential convergence for analytic kernels, which are typical in random matrix theory. The application of the method to the distribution functions of the Gaussian unitary ensemble (GUE), in the bulk and the edge scaling limit, is discussed in detail. After extending the method to systems of integral operators, we evaluate the two-point correlation functions of the more recently studied Airy and Airy₁ processes. Key words. Fredholm determinant, Nyström’s method, projection method, trace class operators, random ...
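The Nyström-based idea can be sketched in a few lines: discretize the operator at Gauss–Legendre nodes and take an ordinary matrix determinant. This is a minimal illustration under those assumptions; the symmetrized weighting and the rank-one test kernel are choices of this sketch, not taken from the paper:

```python
import numpy as np

def fredholm_det(kernel, a, b, n=40, z=1.0):
    """Approximate det(I - z*K) for the integral operator on [a, b] with the
    given kernel, via Gauss-Legendre quadrature at n nodes."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)    # map nodes from [-1, 1] to [a, b]
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])        # kernel sampled on the node grid
    A = np.eye(n) - z * sw[:, None] * K * sw[None, :]
    return np.linalg.det(A)

# rank-one kernel K(x, y) = x*y on [0, 1]: det(I - K) = 1 - ∫ x^2 dx = 2/3
print(fredholm_det(lambda x, y: x * y, 0.0, 1.0))   # ≈ 0.6667
```

For this rank-one kernel the determinant reduces to 1 minus a single integral, which Gauss–Legendre evaluates exactly, so the sketch can be checked against a closed form; the paper's analysis covers the general case, including analytic kernels with exponential convergence.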
Reformulation and Convex Relaxation Techniques for Global Optimization
 4OR
, 2004
"... Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested i ..."
Abstract

Cited by 9 (7 self)
 Add to MetaCart
Many engineering optimization problems can be formulated as nonconvex nonlinear programming problems (NLPs) involving a nonlinear objective function subject to nonlinear constraints. Such problems may exhibit more than one locally optimal point. However, one is often solely or primarily interested in determining the globally optimal point. This thesis is concerned with techniques for establishing such global optima using spatial Branch-and-Bound (sBB) algorithms.
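A standard convex relaxation used in sBB methods replaces a bilinear term x·y by its McCormick envelope over a box; the toy sketch below shows only the convex underestimator (an illustration of the general technique, not code from this work):

```python
def mccormick_under(x, y, x_lo, x_hi, y_lo, y_hi):
    """Convex underestimator of the bilinear term x*y over the box
    [x_lo, x_hi] x [y_lo, y_hi]: the max of McCormick's two supporting planes."""
    plane1 = x_lo * y + y_lo * x - x_lo * y_lo
    plane2 = x_hi * y + y_hi * x - x_hi * y_hi
    return max(plane1, plane2)

# never exceeds x*y inside the box, and is tight at the box corners
print(mccormick_under(0.5, 0.5, 0.0, 1.0, 0.0, 1.0))  # → 0.0 (true product: 0.25)
```

An sBB algorithm solves such convex relaxations to obtain lower bounds, and branches on the box whenever the gap to the best feasible point (the upper bound) is too large; shrinking the box tightens the planes.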
On Hamilton-Jacobi theory as a classical root of quantum theory
, 2005
"... This paper gives a technically elementary treatment of some aspects of HamiltonJacobi theory, especially in relation to the calculus of variations. The second half of the paper describes the application to geometric optics, the opticomechanical analogy and the transition to quantum mechanics. Fina ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
This paper gives a technically elementary treatment of some aspects of Hamilton-Jacobi theory, especially in relation to the calculus of variations. The second half of the paper describes the application to geometric optics, the optico-mechanical analogy and the transition to quantum mechanics. Finally, I report recent work of Holland providing a Hamiltonian formulation of the pilot-wave theory.
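For orientation, the Hamilton-Jacobi equation for Hamilton's principal function S(q, t) reads, in its standard textbook form (not necessarily the paper's notation):

```latex
\frac{\partial S}{\partial t} + H\!\left(q, \frac{\partial S}{\partial q}, t\right) = 0,
\qquad p = \frac{\partial S}{\partial q}.
```

For $H = p^2/2m + V(q)$ this becomes $\partial_t S + |\nabla S|^2/2m + V = 0$; substituting $\psi = R\,e^{iS/\hbar}$ into the Schrödinger equation recovers this equation together with an $O(\hbar^2)$ quantum-potential correction, which is one precise sense in which Hamilton-Jacobi theory is a classical root of quantum theory.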