Results 1–10 of 23,728
Domain Theory
Handbook of Logic in Computer Science, 1994
Cited by 546 (25 self)
Abstract:
Least fixpoints as meanings of recursive definitions.
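The idea that a recursive definition denotes the least fixpoint of its associated functional can be made concrete by Kleene iteration. The sketch below is an illustration of that idea only, not anything from the handbook chapter itself: the factorial functional is lifted to partial functions, with `None` standing in for "undefined" (bottom).

```python
def functional(f):
    """One unfolding of the factorial recursion, lifted to partial functions."""
    def g(n):
        if n == 0:
            return 1
        sub = f(n - 1)               # None plays the role of "undefined" (bottom)
        return None if sub is None else n * sub
    return g

bottom = lambda n: None              # the totally undefined partial function

# Kleene iteration: F^k(bottom) is defined exactly on 0..k-1, and the
# least fixpoint (true factorial) is the limit of these approximations.
approx = bottom
for _ in range(6):
    approx = functional(approx)

values = [approx(i) for i in range(6)]   # [1, 1, 2, 6, 24, 120]
```

After six iterations the approximation agrees with factorial on 0..5 but is still undefined at 6, which is exactly the finite-approximation picture behind the least-fixpoint semantics.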
The irreducibility of the space of curves of given genus
Publ. Math. IHES, 1969
Cited by 512 (2 self)
Abstract:
Fix an algebraically closed field k. Let Mg be the moduli space of curves of genus g over k. The main result of this note is that Mg is irreducible for every k. Of course, whether or not Mg is irreducible depends only on the characteristic of k. When the characteristic is 0, we can assume that k = C, and then the result is classical. A simple proof appears in Enriques–Chisini [E, vol. 3, chap. 3], based on analyzing the totality of coverings of P1 of degree n, with a fixed number d of ordinary branch points. This method has been extended to char. p by William Fulton [F], using specializations from char. 0 to char. p, provided that p > 2g + 1. Unfortunately, attempts to extend this method to all p seem to get stuck on difficult questions of wild ramification. Nowadays, the Teichmüller theory gives a thoroughly analytic but very profound insight into this irreducibility when k = C. Our approach however is closest to Severi's incomplete proof ([Se], Anhang F; the error is on pp. 344–345 and seems to be quite basic) and follows a suggestion of Grothendieck for using the result in char. 0 to deduce the result in char. p. The basis of both Severi's and Grothendieck's ideas is to construct families of curves X, some singular, with pa(X) = g, over nonsingular parameter spaces, which in some sense contain enough singular curves to link together any two components that Mg might have. The essential thing that makes this method work now is a recent "stable reduction theorem" for abelian varieties. This result was first proved independently in char. 0 by Grothendieck, using methods of étale cohomology (private correspondence with J. Tate), and by Mumford, applying the easy half of Theorem (2.5) to go from curves to abelian varieties (cf. [M2]). Grothendieck has recently strengthened his method so that it applies in all characteristics (SGA 7, 1968). Mumford has also given a proof using theta functions in char. ≠ 2. The result is this: Stable Reduction Theorem.
Let R be a discrete valuation ring with quotient field K. Let A be an abelian variety over K. Then there exists a finite algebraic extension L of K such
Planning Algorithms
2004
Cited by 1108 (51 self)
Abstract:
This book presents a unified treatment of many different kinds of planning algorithms. The subject lies at the crossroads between robotics, control theory, artificial intelligence, algorithms, and computer graphics. The particular subjects covered include motion planning, discrete planning, planning under uncertainty, sensor-based planning, visibility, decision-theoretic planning, game theory, information spaces, reinforcement learning, nonlinear systems, trajectory planning, nonholonomic planning, and kinodynamic planning.
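Discrete planning, the simplest setting the abstract mentions, can be sketched in a few lines as graph search. The grid, start, and goal below are made up for illustration; this is a generic breadth-first planner, not code from the book.

```python
# Discrete planning as breadth-first search on a 2-D grid.
from collections import deque

def bfs_plan(grid, start, goal):
    """Return a shortest list of cells from start to goal, or None.
    grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct the plan
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = bfs_plan(grid, (0, 0), (2, 0))
```

The planner must route around the blocked middle row through the single open cell, which is the kind of combinatorial structure the book's discrete-planning chapters treat systematically.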
Analogical Mapping by Constraint Satisfaction
COGNITIVE SCIENCE 13, 295 (1989)
Cited by 389 (28 self)
Abstract:
A theory of analogical mapping between source and target analogs based upon interacting structural, semantic, and pragmatic constraints is proposed here. The structural constraint of isomorphism encourages mappings that maximize the consistency of relational correspondences between the elements of the two analogs. The constraint of semantic similarity supports mapping hypotheses to the degree that mapped predicates have similar meanings. The constraint of pragmatic centrality favors mappings involving elements the analogist believes to be important in order to achieve the purpose for which the analogy is being used. The theory is implemented in a computer program called ACME (Analogical Constraint Mapping Engine), which represents constraints by means of a network of supporting and competing hypotheses regarding what elements to map. A cooperative algorithm for parallel constraint satisfaction identifies mapping hypotheses that collectively represent the overall mapping that best fits the interacting constraints. ACME has been applied to a wide range of examples that include problem analogies, analogical arguments, explanatory analogies, story analogies, formal analogies, and metaphors. ACME is sensitive to semantic and pragmatic information if it is available, and yet able to compute mappings between formally isomorphic analogs without any similar or identical elements. The theory is able to account for empirical findings regarding the impact of consistency and similarity on human processing of analogies.
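The cooperative constraint-satisfaction idea can be sketched generically: mapping hypotheses are units, consistent hypotheses excite each other, rival hypotheses inhibit each other, and synchronous relaxation settles on the best-fitting map. This is a toy illustration in the spirit of ACME, not the paper's actual update rule; the two-element analogy, weights, and bias are all made up.

```python
# Units are mapping hypotheses "source->target".
units = ["a->x", "a->y", "b->x", "b->y"]

# Symmetric weights: structurally consistent pairs excite (+),
# hypotheses sharing an element compete (-): one map per element.
weights = {
    ("a->x", "b->y"): 0.4,  ("a->y", "b->x"): 0.1,   # structural support
    ("a->x", "a->y"): -0.6, ("b->x", "b->y"): -0.6,
    ("a->x", "b->x"): -0.6, ("a->y", "b->y"): -0.6,
}

def w(u, v):
    return weights.get((u, v), weights.get((v, u), 0.0))

act = {u: 0.01 for u in units}
act["a->x"] += 0.05               # a semantic/pragmatic nudge toward a->x

for _ in range(50):               # synchronous relaxation to a settled state
    net = {u: sum(w(u, v) * act[v] for v in units if v != u) for u in units}
    act = {u: min(1.0, max(-1.0, act[u] + 0.1 * net[u])) for u in units}
```

After relaxation the mutually supporting pair a->x, b->y dominates its rivals, which is the qualitative behavior the abstract describes: the winning hypotheses are those that best satisfy the interacting constraints collectively.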
A unified approach to global program optimization
In Conference Record of the ACM Symposium on Principles of Programming Languages, 1973
Cited by 376 (0 self)
Abstract:
A technique is presented for global analysis of program structure in order to perform compile-time optimization of object code generated for expressions. The global expression optimization presented includes constant propagation, common subexpression elimination, elimination of redundant register load operations, and live expression analysis. A general-purpose program flow analysis algorithm is developed which depends upon the existence of an "optimizing function." The algorithm is defined formally using a directed graph model of program flow structure, and is shown to be correct. Several optimizing functions are defined which, when used in conjunction with the flow analysis algorithm, provide the various forms of code optimization. The flow analysis algorithm is sufficiently general that additional functions can easily be defined for other forms of global code optimization.
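The scheme the abstract describes — propagate lattice values through a flow graph, meeting them at join points, with each node contributing an "optimizing function" — can be sketched as a worklist algorithm. The flow graph, variable names, and lattice encoding below are made up for illustration; this is the general shape of the technique, not the paper's formulation.

```python
BOT = object()   # "not a constant" (lattice bottom)

def meet(a, b):
    """Pointwise meet of two constant pools (dicts var -> constant or BOT)."""
    out = {}
    for v in set(a) | set(b):
        x, y = a.get(v, BOT), b.get(v, BOT)
        out[v] = x if x == y else BOT
    return out

# A made-up diamond-shaped flow graph:
#   n0: a = 1   n1: b = 2   n2: b = 2; a = 3   n3 (join): c = b + 1
def f0(p): return {**p, "a": 1}
def f1(p): return {**p, "b": 2}
def f2(p): return {**p, "b": 2, "a": 3}
def f3(p):
    b = p.get("b", BOT)
    return {**p, "c": (b + 1) if b is not BOT else BOT}

graph = {"n0": (f0, ["n1", "n2"]),
         "n1": (f1, ["n3"]),
         "n2": (f2, ["n3"]),
         "n3": (f3, [])}

pools = {"n0": {}}                   # constant pool on entry to each node
work = ["n0"]
while work:
    node = work.pop()
    fn, succs = graph[node]
    out = fn(pools[node])            # apply the node's optimizing function
    for s in succs:
        new = meet(pools[s], out) if s in pools else out
        if s not in pools or new != pools[s]:
            pools[s] = new           # pool moved down the lattice: repropagate
            work.append(s)
```

At the join node the analysis keeps b = 2 (both branches agree) but demotes a to "not a constant" (the branches disagree), so c = b + 1 still folds to 3 — the essential behavior of constant propagation over a meet semilattice.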
Self-Testing/Correcting with Applications to Numerical Problems
1990
Cited by 374 (31 self)
Abstract:
Suppose someone gives us an extremely fast program P that we can call as a black box to compute a function f. Should we trust that P works correctly? A self-testing/correcting pair allows us to: (1) estimate the probability that P(x) ≠ f(x) when x is randomly chosen; (2) on any input x, compute f(x) correctly as long as P is not too faulty on average. Furthermore, both (1) and (2) take time only slightly more than the original running time of P. We present general techniques for constructing simple-to-program self-testing/correcting pairs for a variety of numerical problems, including integer multiplication, modular multiplication, matrix multiplicatio...
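The self-correcting idea can be illustrated on a linear function f(x) = c·x mod n: since f(x) = f(x + r) − f(r) (mod n) for every r, a program that is wrong on a few inputs can still be used correctly on *any* input by querying it at random shifts and taking a majority vote. The fault pattern and parameters below are made up, and this random-shift sketch is an illustration of the technique, not the paper's exact construction.

```python
import random
from collections import Counter

random.seed(0)                           # deterministic for the example

N, C = 97, 13
FAULTY = {5, 6, 7}                       # inputs where P lies (made up)

def P(x):
    """A fast but slightly buggy program for f(x) = C*x mod N."""
    return (C * x + 1) % N if x in FAULTY else (C * x) % N

def self_correct(x, trials=25):
    """Compute f(x) via random self-reducibility: f(x) = f(x+r) - f(r)."""
    votes = Counter()
    for _ in range(trials):
        r = random.randrange(N)          # r and (x+r) mod N are each uniform
        votes[(P((x + r) % N) - P(r)) % N] += 1
    return votes.most_common(1)[0][0]

wrong_direct = P(5) != (C * 5) % N       # P is faulty at x = 5...
corrected = self_correct(5)              # ...but random shifts recover f(5)
```

Each trial is wrong only when one of the two random query points lands in the small faulty set, so the majority vote is correct with high probability even though the program is queried on an adversarially chosen x.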
Synchronization and linearity: an algebra for discrete event systems
2001
Cited by 369 (11 self)
Abstract:
The first edition of this book was published in 1992 by Wiley (ISBN 0 471 93609 X). Since this book is now out of print, and to answer the request of several colleagues, the authors have decided to make it available freely on the Web, while retaining the copyright, for the benefit of the scientific community. Copyright Statement: This electronic document is in PDF format. One needs Acrobat Reader (available freely for most platforms from the Adobe web site) to benefit from the full interactive machinery: using the package hyperref by Sebastian Rahtz, the table of contents and all LaTeX cross-references are automatically converted into clickable hyperlinks, bookmarks are generated automatically, etc. So, do not hesitate to click on references to equation or section numbers, on items of the table of contents and of the index, etc. One may freely use and print this document for one's own purpose or even distribute it freely, but not commercially, provided it is distributed in its entirety and without modifications, including this preface and copyright statement. Any use of the contents should be acknowledged according to the standard scientific practice.
Control-Flow Analysis of Higher-Order Languages
1991
Cited by 362 (10 self)
Abstract:
representing the official policies, either expressed or implied, of ONR or the U.S. Government. Keywords: data-flow analysis, Scheme, LISP, ML, CPS, type recovery, higher-order functions, functional programming, optimising compilers, denotational semantics, non-standard ...

Programs written in powerful, higher-order languages like Scheme, ML, and Common Lisp should run as fast as their FORTRAN and C counterparts. They should, but they don't. A major reason is the level of optimisation applied to these two classes of languages. Many FORTRAN and C compilers employ an arsenal of sophisticated global optimisations that depend upon data-flow analysis: common-subexpression elimination, loop-invariant detection, induction-variable elimination, and many, many more. Compilers for higher-order languages do not provide these optimisations. Without them, Scheme, LISP and ML compilers are doomed to produce code that runs slower than their FORTRAN and C counterparts. The problem is the lack of an explicit control-flow graph at compile time, something which traditional data-flow analysis techniques require. In this dissertation, I present a technique for recovering the control-flow graph of a Scheme program at compile time. I give examples of how this information can be used to perform several data-flow analysis optimisations, including copy propagation, induction-variable elimination, useless-variable elimination, and type recovery. The analysis is defined in terms of a non-standard semantic interpretation. The denotational semantics is carefully developed, and several theorems establishing the correctness of the semantics and the implementing algorithms are proven.
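The core problem — with higher-order functions the call targets are not syntactically evident, so the control-flow graph must be *computed* — can be sketched with a minimal monovariant (0CFA-style) analysis over a tiny lambda language. This is a simplified constraint-propagation sketch, not the dissertation's CPS-based formulation; the term representation is made up.

```python
class Var:
    def __init__(self, name): self.name = name
class Lam:
    def __init__(self, param, body): self.param, self.body = param, body
class App:
    def __init__(self, fn, arg): self.fn, self.arg = fn, arg

def cfa(root):
    """For each expression (by id) and variable (by name), compute the
    set of Lam nodes that may flow there. Iterate to a fixpoint."""
    flows = {}
    def get(k): return flows.setdefault(k, set())

    changed = True
    def add(k, lams):
        nonlocal changed
        before = len(get(k))
        get(k).update(lams)
        if len(get(k)) != before: changed = True

    exprs = []
    def collect(e):
        exprs.append(e)
        if isinstance(e, Lam): collect(e.body)
        elif isinstance(e, App): collect(e.fn); collect(e.arg)
    collect(root)

    while changed:
        changed = False
        for e in exprs:
            if isinstance(e, Lam):
                add(id(e), {e})                   # a lambda flows to itself
            elif isinstance(e, Var):
                add(id(e), get(e.name))           # a var yields its bindings
            else:                                 # App: bind arg, take result
                for lam in set(get(id(e.fn))):
                    add(lam.param, get(id(e.arg)))
                    add(id(e), get(id(lam.body)))
    return flows

# ((lambda f. f (lambda y. y)) (lambda x. x)): what may be called as f?
id_x = Lam("x", Var("x"))
id_y = Lam("y", Var("y"))
prog = App(Lam("f", App(Var("f"), id_y)), id_x)
flows = cfa(prog)
```

The analysis discovers that only `lambda x. x` ever flows to `f`, turning the higher-order call `f (...)` into a known call site — exactly the information traditional data-flow optimisations need.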