Results 1–10 of 180
Capturing the Uncertainty of Moving-Object Representations
, 1999
Abstract

Cited by 118 (26 self)
Spatiotemporal applications, such as fleet management and air traffic control, involving continuously moving objects are increasingly at the focus of research efforts. The representation of the continuously changing positions of the objects is fundamentally important in these applications.
OMDoc: an open markup format for mathematical documents (version 1.2)
 Number 4180 in LNAI
, 2006
Abstract

Cited by 114 (18 self)
This Document is an online version of the OMDoc 1.2 Specification published by
On the sphere-decoding algorithm I. Expected complexity
 IEEE Trans. Sig. Proc
, 2005
Abstract

Cited by 76 (5 self)
Abstract—The problem of finding the least-squares solution to a system of linear equations, where the unknown vector is comprised of integers but the coefficient matrix and given vector are comprised of real numbers, arises in many applications: communications, cryptography, GPS, to name a few. The problem is equivalent to finding the closest lattice point to a given point and is known to be NP-hard. In communications applications, however, the given vector is not arbitrary but rather is an unknown lattice point that has been perturbed by an additive noise vector whose statistical properties are known. Therefore, in this paper, rather than dwell on the worst-case complexity of the integer least-squares problem, we study its expected complexity, averaged over the noise and over the lattice. For the “sphere decoding” algorithm of Fincke and Pohst, we find a closed-form expression for the expected complexity, both for the infinite and the finite lattice.
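The depth-first, radius-pruned search at the heart of the Fincke–Pohst sphere decoder described in this abstract can be sketched as follows. This is an illustration, not the paper's exact algorithm: the function name, the per-coordinate integer search bounds, and the fixed initial radius are all assumptions of this sketch.

```python
import numpy as np

def sphere_decode(H, y, radius, lo=-3, hi=3):
    """Find the integer vector x minimizing ||y - H x|| by depth-first
    search, pruning any branch whose partial squared distance already
    exceeds the best found so far (initially radius**2).
    The bounds [lo, hi] per coordinate are an assumption of this sketch."""
    Q, R = np.linalg.qr(H)           # H = QR with R upper-triangular
    z = Q.T @ y                      # search in the triangular system
    n = H.shape[1]
    best = {"x": None, "d2": radius ** 2}

    def search(k, x, partial_d2):
        if partial_d2 > best["d2"]:
            return                   # prune: point lies outside the sphere
        if k < 0:
            best["x"], best["d2"] = x.copy(), partial_d2
            return
        for cand in range(lo, hi + 1):
            x[k] = cand
            r = z[k] - R[k, k:] @ x[k:]   # residual of row k (triangular)
            search(k - 1, x, partial_d2 + r * r)

    search(n - 1, np.zeros(n, dtype=int), 0.0)
    return best["x"], np.sqrt(best["d2"])
```

The pruning is why the expected complexity studied in the paper matters: the number of lattice points visited depends on the noise and the lattice, not on the worst case.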
KHOVANOV’S HOMOLOGY FOR TANGLES AND COBORDISMS
, 2005
Abstract

Cited by 71 (3 self)
We give a fresh introduction to the Khovanov Homology theory for knots and links, with special emphasis on its extension to tangles, cobordisms and 2-knots. By staying within a world of topological pictures a little longer than in other articles on the subject, the required extension becomes essentially tautological. And then a simple application of an appropriate functor (a “TQFT”) to our pictures takes them to the familiar realm of complexes of (graded) vector spaces and ordinary homological invariants.
Telescoping languages: A strategy for automatic generation of scientific problem-solving systems from annotated libraries. www.netlib.org/utk/people/JackDongarra/PAPERS/Telescope.pdf
, 2000
Abstract

Cited by 46 (7 self)
As machines and programs have become more complex, the process of programming applications that can exploit the power of high-performance systems has become more difficult and correspondingly more labor-intensive. This has substantially widened the software gap: the discrepancy between the need for new software and the aggregate capacity of the workforce to produce it. This problem has been compounded by the slow growth of programming productivity, especially for high-performance programs, over the past two decades. One way to bridge this gap is to make it possible for end users to develop programs in high-level domain-specific programming systems. In the past, a major impediment to the acceptance of such systems has been the poor performance of the resulting applications. To address this problem, we are developing a new compiler-based infrastructure, called
A Comparison Of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
, 1998
Abstract

Cited by 44 (10 self)
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase...
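The two model classes this abstract compares can be sketched side by side: a least-squares quadratic polynomial, and a Gaussian-correlation interpolator standing in for the kriging predictor. The function names, the fixed correlation parameter `theta`, and the omission of the kriging trend term are assumptions of this illustration, not the paper's formulation.

```python
import numpy as np

def quadratic_fit(x, y):
    """Least-squares quadratic y ~ c2*x^2 + c1*x + c0 (the first model class)."""
    A = np.vander(x, 3)                          # columns: x^2, x, 1
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda t: np.vander(np.atleast_1d(t), 3) @ coeffs

def rbf_interpolate(x, y, theta=1.0):
    """Gaussian-correlation interpolator: a simplified stand-in for the
    kriging predictor (fixed theta, no trend term -- both assumptions).
    Unlike the quadratic, it passes through every data point exactly."""
    Phi = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
    w = np.linalg.solve(Phi, y)                  # weights of the basis functions
    return lambda t: np.exp(
        -theta * (np.atleast_1d(t)[:, None] - x[None, :]) ** 2
    ) @ w
```

On data with multiple local extrema (e.g. samples of a sine wave), the quadratic leaves a large residual at the sample points while the interpolator reproduces them, which is the trade-off the abstract describes.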
A plausible approach to computeraided cryptographic proofs. Cryptology ePrint Archive, Report 2005/181
, 2005
Abstract

Cited by 31 (0 self)
This paper tries to sell a potential approach to making the process of writing and verifying our cryptographic proofs less prone to errors. Specifically, I advocate creating an automated tool to help us with the mundane parts of writing and checking common arguments in our proofs. On a high level, this tool should help us verify that two pieces of code induce the same probability distribution on some of their common variables. In this paper I explain why I think that such a tool would be useful, by considering two very different proofs of security from the literature and showing the places in those proofs where having this tool would have been useful. I also explain how I believe that this tool can be built. Perhaps surprisingly, it seems to me that the functionality of such a tool can be implemented using only “static code analysis” (i.e., things that compilers do). I plan to keep updated versions of this document along with other update reports on the web at
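The core check the abstract asks for, that two pieces of code induce the same distribution on a common variable, can be illustrated for tiny games by exhaustive enumeration of the random tapes; the proposed tool would establish this by static analysis instead. The game names and the one-time-pad example below are assumptions of this sketch, not taken from the paper.

```python
from collections import Counter
from itertools import product

def distribution(program, coin_space):
    """Run `program` on every possible sequence of coin flips and tally
    the induced distribution on its output variable."""
    counts = Counter(program(coins) for coins in product(*coin_space))
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

# Two game fragments that should induce the same distribution on c:
# "encrypt a fixed one-bit message with a fresh key" vs. "sample c uniformly".
def game_real(coins, m=1):
    (k,) = coins
    return m ^ k            # one-time-pad ciphertext bit

def game_ideal(coins):
    (c,) = coins
    return c                # uniformly random bit

d_real = distribution(game_real, [(0, 1)])
d_ideal = distribution(game_ideal, [(0, 1)])
```

Enumeration only works for toy coin spaces; the point of the paper is that a compiler-style analysis could certify such equivalences symbolically.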
Proving By Simplification
 Computer Algebra
, 1997
Abstract

Cited by 28 (0 self)
This notebook describes a prover in natural deduction style. The prover is mainly a conditional term rewriting system used to generate abstract proof objects, i.e. structured data that contain enough information to describe all the individual steps in a proof. These proof objects can be regarded as standalone proofs, or can be further integrated into more complex proofs that use simplification for intermediary proofs. The prover is written in Mathematica 3.0, and provides a viewer that converts proof objects into nested Mathematica cells, which are natural-language, easy-to-read versions of proofs.
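The idea of a rewriting-based simplifier whose trace doubles as an abstract proof object can be sketched in a few lines. The term encoding, the rule names, and the bottom-up strategy below are assumptions of this illustration (the actual prover is a Mathematica notebook, not Python).

```python
def simplify(term, rules, trace=None):
    """Bottom-up rewriting that records every applied rule, so the trace
    serves as an abstract proof object: one entry per individual step."""
    if trace is None:
        trace = []
    if isinstance(term, tuple):          # simplify subterms first
        term = (term[0],) + tuple(simplify(t, rules, trace)[0] for t in term[1:])
    changed = True
    while changed:
        changed = False
        for name, rule in rules:
            new = rule(term)
            if new is not None and new != term:
                trace.append((name, term, new))   # record the proof step
                term, changed = new, True
    return term, trace

# Illustrative rules for addition with zero (rule names are assumptions):
RULES = [
    ("add-zero",
     lambda t: t[1] if isinstance(t, tuple) and t[0] == "+" and t[2] == 0 else None),
    ("zero-add",
     lambda t: t[2] if isinstance(t, tuple) and t[0] == "+" and t[1] == 0 else None),
]
```

Simplifying `("+", ("+", "x", 0), 0)` yields `"x"` together with a two-step trace, each step naming the rule applied and the terms before and after, which is the kind of structured record the notebook's viewer renders as readable proof text.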
Telescoping Languages: A Compiler Strategy for Implementation of High-Level Domain-Specific Programming Systems
, 2000
Abstract

Cited by 25 (2 self)
As both machines and programs have become more complex, the programming process has become correspondingly more labor-intensive. This has created a software gap between the need for new software and the aggregate capacity of the current workforce to produce it. This problem has been compounded by the slow growth of programming productivity over the past two decades. One way to bridge this gap is to make it possible for end users to develop programs in high-level domain-specific programming systems. The principal impediment to the success of these systems in the past has been the poor performance of the resulting applications. To address this problem, we are developing a new compiler technology that supports script-based telescoping languages, which can be built from base languages and domain-specific libraries. By exhaustively compiling the libraries in advance, we can ensure that the performance and portability of the applications produced by such systems are high, while the compile t...