Results 1 – 10 of 11
Hypercomputation and the Physical Church–Turing Thesis
, 2003
Abstract

Cited by 21 (0 self)
A version of the Church–Turing Thesis states that every effectively realizable physical system can be defined by Turing Machines (`Thesis P'); in this formulation the Thesis appears to be an empirical, rather than a logico-mathematical, proposition. We review the main approaches to computation beyond Turing definability (`hypercomputation'): supertask, non-well-founded, analog, quantum, and retrocausal computation. These models depend on infinite computation, explicitly or implicitly, and appear physically implausible; moreover, even if infinite computation were realizable, the Halting Problem would not be affected. Therefore, Thesis P is not essentially different from the standard Church–Turing Thesis.
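The abstract's point that even infinite computation would not affect the Halting Problem rests on the classical diagonalization argument, which can be sketched directly. The sketch below is illustrative and not taken from the paper; the helper `make_diagonal` and the toy deciders are inventions for this example.

```python
def make_diagonal(halts):
    """Given any claimed halting decider halts(f) for zero-argument
    callables, build a function the decider must misclassify."""
    def diag():
        if halts(diag):      # decider says diag halts -> loop forever
            while True:
                pass
        # decider says diag loops -> halt immediately
    return diag

# A toy decider that always answers "loops" is refuted by its diagonal:
d = make_diagonal(lambda f: False)
d()  # halts immediately, contradicting the decider's verdict
# Symmetrically, a decider answering "halts" yields a diag that loops
# forever, so no total decider is correct on its own diagonal.
```

The construction only uses the decider as a black box, which is why adding more computational power (even infinite resources) does not dissolve the paradox, as the abstract observes.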
Online calibrated forecasts: Memory efficiency versus universality for learning in games
 MACH LEARN
, 2006
Abstract

Cited by 8 (8 self)
We provide a simple learning process that enables an agent to forecast a sequence of outcomes. Our forecasting scheme, termed tracking forecast, is based on tracking the past observations while emphasizing recent outcomes. As opposed to other forecasting schemes, we sacrifice universality in favor of significantly reduced memory requirements. We show that if the sequence of outcomes has certain properties, namely some internal (hidden) state that does not change too rapidly, then the tracking forecast is weakly calibrated, so that the forecast appears to be correct most of the time. For binary outcomes, this result holds without any internal state assumptions. We consider learning in a repeated strategic game where each player attempts to compute some forecast of the opponent's actions and play a best response to it. We show that if one of the players uses a tracking forecast, while the other player uses a standard learning algorithm (such as exponential regret matching or smooth fictitious play), then the player using the tracking forecast obtains the best response to the actual play of the other players. We further show that if both players use a tracking forecast, then under certain conditions on the game matrix, convergence to a Nash equilibrium follows.
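As a rough illustration of the idea (not the paper's actual scheme), a forecaster that emphasizes recent outcomes can be written in a few lines. The constant step size and the 0.5 prior below are assumptions made for this sketch only.

```python
def tracking_forecast(outcomes, step=0.1):
    """Exponentially weighted tracker: each forecast leans toward
    recent outcomes.  A simplified stand-in for the paper's scheme;
    it stores a single number, hence the tiny memory footprint."""
    forecast = 0.5          # uninformed prior for a binary outcome
    forecasts = []
    for y in outcomes:
        forecasts.append(forecast)          # predict before observing
        forecast += step * (y - forecast)   # then move toward the outcome
    return forecasts

# On a sequence whose hidden state drifts slowly (a long run of 1s,
# then a long run of 0s), the tracker settles near the local frequency:
fs = tracking_forecast([1] * 50 + [0] * 50)
```

The single-scalar state is what the abstract trades universality for: a fully universal calibrated forecaster would need memory growing with the history.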
Theory of real computation according to EGC
 In Proceedings of the Dagstuhl Seminar on Reliable Implementation of Real Number Algorithms: Theory and Practice, Lecture Notes in Computer Science
, 2006
Abstract computability and algebraic specification
 ACM Transactions on Computational Logic
, 2002
Abstract

Cited by 5 (3 self)
Abstract computable functions are defined by abstract finite deterministic algorithms on many-sorted algebras. We show that there exist finite universal algebraic specifications that specify uniquely (up to isomorphism) (i) all abstract computable functions on any many-sorted algebra; (ii) all functions effectively approximable by abstract computable functions on any metric algebra. We show that there exist universal algebraic specifications for all the classically computable functions on the set R of real numbers. The algebraic specifications used are mainly bounded universal equations and conditional equations. We investigate the initial algebra semantics of these specifications, and derive situations where algebraic specifications precisely define the computable functions.
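To make the setting concrete, here is a hypothetical miniature: an algebra (one-sorted, for brevity; the paper treats many-sorted algebras) given by its operations, and a function computed by a finite deterministic while-program that invokes only those operations, in the spirit of an abstract computable function. The names `algebra` and `add` are inventions for this sketch.

```python
# A toy algebra of the naturals: constant zero, successor, and equality.
# An abstract algorithm may only call these operations, never inspect
# the internal representation of elements.
algebra = {
    "zero": lambda: 0,
    "succ": lambda n: n + 1,
    "eq":   lambda a, b: a == b,
}

def add(x, y):
    """Addition via a while-program over the algebra's operations --
    a finite deterministic 'abstract computable' function."""
    acc, i = x, algebra["zero"]()
    while not algebra["eq"](i, y):
        acc = algebra["succ"](acc)
        i = algebra["succ"](i)
    return acc
```

The point of the abstraction is that the same program text defines a function on *any* structure supplying these operations, which is what makes finite universal specifications possible.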
Toward accurate polynomial evaluation in rounded arithmetic
 In Foundations of computational mathematics, Santander 2005, volume 331 of London Math. Soc. Lecture Note Ser
, 2006
Abstract

Cited by 4 (1 self)
Given a multivariate real (or complex) polynomial p and a domain D, we would like to decide whether an algorithm exists to evaluate p(x) accurately for all x ∈ D using rounded real (or complex) arithmetic. Here “accurately” means with relative error less than 1, i.e., with some correct leading digits. The answer depends on the model of rounded arithmetic: We assume that for any arithmetic operator op(a, b), for example a+b or a·b, its computed value is op(a, b)·(1+δ), where |δ| is bounded by some constant ε with 0 < ε ≪ 1, but δ is otherwise arbitrary. This model is the traditional one used to analyze the accuracy of floating point algorithms. Our ultimate goal is to establish a decision procedure that, for any p and D, either exhibits an accurate algorithm or proves that none exists. In contrast to the case where numbers are stored and manipulated as finite bit strings (e.g., as floating point numbers or rational numbers), we show that some polynomials p are impossible to evaluate accurately. The existence of an accurate algorithm will depend not just on p and D, but on which arithmetic operators and constants are available to the algorithm and whether branching is permitted in the algorithm. Toward this goal, we present necessary conditions on p for it to be accurately evaluable on open real or complex domains D. We also give sufficient conditions, and describe progress toward a complete decision procedure. We do present a complete decision procedure for homogeneous polynomials p with integer coefficients, D = C^n, using only arithmetic operations +, − and ·.
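The rounding model in the abstract can be simulated with exact rationals, letting an adversary choose δ. The toy machine epsilon, the example polynomial (x + y) + z, and the evaluation point below are all chosen here for illustration; they are not from the paper.

```python
from fractions import Fraction as F

EPS = F(1, 10**6)   # toy machine epsilon for the model

def rounded_add(a, b, delta):
    """The abstract's model: the computed value of a + b is
    (a + b) * (1 + delta), with |delta| <= EPS but delta otherwise
    arbitrary (here, adversary-chosen)."""
    assert abs(delta) <= EPS
    return (a + b) * (1 + delta)

# Evaluate p(x, y, z) = (x + y) + z at a point with heavy cancellation:
x, y, z = F(10**6), F(1), F(-10**6)
exact = x + y + z                      # = 1
# The adversary perturbs only the first addition:
computed = rounded_add(rounded_add(x, y, EPS), z, F(0))
rel_err = abs(computed - exact) / abs(exact)
# rel_err is about (10^6 + 1) * EPS > 1: no correct leading digits
# survive, so this expression is not an accurate algorithm for p here.
```

This is exactly the sense in which an algorithm can fail to be "accurate" (relative error at least 1) even though each individual operation is nearly exact.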
Approximate computation and implicit regularization for very large-scale data analysis
 In Proceedings of the 31st ACM Symposium on Principles of Database Systems
, 2012
Abstract

Cited by 1 (1 self)
Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learners, and others who work on what may be broadly termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. By using several case studies, I will illustrate, both theoretically and empirically, the non-obvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties.
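One standard instance of approximate computation acting as implicit regularization (assumed here as an illustration, not one of the article's case studies) is early-stopped gradient descent on least squares, which shrinks the fitted coefficient much like an explicit ridge penalty would.

```python
import random

# Fit y = w*x on noisy data; compare an early-stopped (approximate)
# gradient descent answer with a (near-)converged one.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(50)]
ys = [3.0 * x + random.gauss(0, 0.5) for x in xs]

def gd_weight(steps, lr=0.01):
    """Gradient descent on mean squared error for the 1-D model y = w*x."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

w_early = gd_weight(steps=20)     # approximate computation
w_full  = gd_weight(steps=5000)   # essentially the exact least-squares fit
# |w_early| < |w_full|: truncating the computation implicitly shrinks
# the estimate toward zero -- regularization without a penalty term.
```

The number of steps plays the role of a regularization parameter: fewer iterations mean a cheaper, more robust, more biased answer, which is the bicriteria trade-off the abstract describes.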
unknown title
Abstract
On the computational structure of the connected components of a hard problem
Computability of String Functions Over Algebraic Structures
, 1996
Abstract
We present a model of computation for string functions over single-sorted, total algebraic structures and study some features of a general theory of computability within this framework. Our concept generalizes the Blum–Shub–Smale setting of computability over the reals and other rings. By dealing with strings of arbitrary length instead of tuples of fixed length, some suppositions of deeper results within former approaches to generalized recursion theory become superfluous. Moreover, this gives the basis for introducing computational complexity in a BSS-like manner. Relationships both to classical computability and to Friedman's concept of eds computability are established. Two kinds of nondeterminism as well as several variants of recognizability are investigated with respect to interdependencies on each other and on properties of the underlying structures. For structures of finite signatures, there are universal programs with the usual characteristics. In the general case (of not...
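As a toy reading of the string-based view (an interpretation assumed here, not taken from the paper), a program over the real field can process input strings of arbitrary length using only the structure's ring operations, which is exactly what fixed-arity tuples cannot express.

```python
def horner(coeffs, x):
    """Evaluate the polynomial whose coefficient string is `coeffs`
    (highest degree first) at x.  The program walks a string of any
    length and uses only the ring operations + and * of the reals,
    in the spirit of a BSS-style machine extended to strings."""
    acc = 0.0
    for c in coeffs:        # iterate over the string, length unknown
        acc = acc * x + c   # one ring multiplication, one addition
    return acc
```

A fixed-arity BSS machine would need a separate program for each degree; the string model captures the whole family with one finite program.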