Results 1–10 of 130,619
Symmetric Functions and P-Recursiveness
, 2001
"... In his 1990 paper [1], Ira Gessel introduced a notion of Dfinite for symmetric functions, and showed how it could be used to determine Dfiniteness of combinatorial generating functions. In this context, a symmetric function is a polynomial function of finite degree (here, n) in infinitely many var ..."
Abstract
 Add to MetaCart
In his 1990 paper [1], Ira Gessel introduced a notion of Dfinite for symmetric functions, and showed how it could be used to determine Dfiniteness of combinatorial generating functions. In this context, a symmetric function is a polynomial function of finite degree (here, n) in infinitely many
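For orientation only (standard definitions, not quoted from the paper): a power series is D-finite when it satisfies a linear differential equation with polynomial coefficients, and its coefficient sequence is then P-recursive, i.e. it satisfies a linear recurrence with polynomial coefficients. A compact statement, with a coefficient field K assumed for concreteness:

```latex
% D-finite series: f(x) = \sum_{n \ge 0} a_n x^n with
p_r(x)\, f^{(r)}(x) + \cdots + p_1(x)\, f'(x) + p_0(x)\, f(x) = 0,
\qquad p_i \in \mathbb{K}[x],\ p_r \neq 0.
% P-recursive sequence: (a_n) with
q_s(n)\, a_{n+s} + \cdots + q_1(n)\, a_{n+1} + q_0(n)\, a_n = 0,
\qquad q_j \in \mathbb{K}[n],\ q_s \neq 0.
% Standard fact: f(x) is D-finite if and only if (a_n) is P-recursive.
```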
Effective Bounds for P-Recursive Sequences
"... We describe an algorithm that takes as input a complex sequence (un) given by a linear recurrence relation with polynomial coefficients along with initial values, and outputs a simple explicit upper bound (vn) such that un  ≤ vn for all n. Generically, the bound is tight, in the sense that its as ..."
Abstract

Cited by 13 (5 self)
 Add to MetaCart
asymptotic behaviour matches that of un. We discuss applications to the evaluation of power series with guaranteed precision. Key words: Algorithm, bounds, CauchyKovalevskaya majorant, certified evaluation, holonomic functions
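To make the objects concrete (this is not the paper's bound-computing algorithm, just a hedged illustration): the sketch below unrolls a P-recursive sequence from a recurrence with polynomial coefficients and checks it against a simple explicit bound. The example recurrence and the bound 4**n are standard facts about central binomial coefficients; the function name is made up.

```python
# Illustrative sketch (not the paper's algorithm): unrolling a P-recursive
# sequence from its recurrence and checking a simple explicit bound.
# Example: central binomial coefficients a_n = C(2n, n) satisfy
#   (n + 1) * a_{n+1} - 2 * (2n + 1) * a_n = 0,  a_0 = 1,
# together with the elementary bound a_n <= 4**n.

def p_recursive_terms(num_terms):
    """Generate a_0 .. a_{num_terms-1} from the recurrence above."""
    a = 1  # a_0
    for n in range(num_terms):
        yield a
        # recurrence step: a_{n+1} = 2 * (2n + 1) * a_n / (n + 1), exact here
        a = 2 * (2 * n + 1) * a // (n + 1)

if __name__ == "__main__":
    for n, a_n in enumerate(p_recursive_terms(10)):
        bound = 4 ** n
        assert a_n <= bound
        print(n, a_n, bound)
```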
Good Error-Correcting Codes based on Very Sparse Matrices
, 1999
"... We study two families of errorcorrecting codes defined in terms of very sparse matrices. "MN" (MacKayNeal) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The ..."
Abstract

Cited by 741 (23 self)
 Add to MetaCart
. The decoding of both codes can be tackled with a practical sumproduct algorithm. We prove that these codes are "very good," in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binarysymmetric channel
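The sum-product decoder the abstract refers to is too long to sketch here, so the following is a deliberately simpler stand-in for "decoding against a parity-check matrix": single-error syndrome decoding of the small (7,4) Hamming code. The matrix H, the codeword and the function names are illustrative only and do not come from the paper.

```python
# Minimal stand-in: a word c is valid iff H @ c = 0 (mod 2), and a single
# bit error at position j produces a syndrome equal to column j of H.
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0, 1, 0],
              [1, 0, 1, 1, 0, 0, 1]], dtype=int)

def syndrome(word):
    return H @ word % 2

def correct_single_error(word):
    s = syndrome(word)
    if not s.any():
        return word                      # all parity checks satisfied
    word = word.copy()
    for j in range(H.shape[1]):
        if np.array_equal(H[:, j], s):   # syndrome matches column j
            word[j] ^= 1                 # flip the offending bit
            break
    return word

codeword = np.array([1, 0, 1, 1, 0, 0, 1])     # satisfies all three checks
received = codeword.copy(); received[4] ^= 1   # inject one bit error
print(correct_single_error(received))          # recovers the codeword
```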
Learning the Kernel Matrix with Semidefinite Programming
, 2002
"... Kernelbased learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information ..."
Abstract

Cited by 780 (22 self)
 Add to MetaCart
is contained in the socalled kernel matrix, a symmetric and positive definite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input spaceclassical model selection
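A minimal sketch of the object the abstract centres on (not the paper's semidefinite-programming formulation): build a kernel (Gram) matrix K with K[i, j] = k(x_i, x_j) and verify the symmetry and positive semidefiniteness that make it a valid matrix of inner products. The RBF kernel, the random data and the function name are assumptions for illustration.

```python
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 points in R^3
K = rbf_kernel_matrix(X)

print(np.allclose(K, K.T))                       # symmetric
print(np.linalg.eigvalsh(K).min() >= -1e-10)     # eigenvalues >= 0, i.e. PSD
```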
Bundle Adjustment – A Modern Synthesis
 VISION ALGORITHMS: THEORY AND PRACTICE, LNCS
, 2000
"... This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics c ..."
Abstract

Cited by 555 (12 self)
 Add to MetaCart
covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than
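As a toy illustration of the quantity bundle adjustment minimizes (a plain squared reprojection error; the survey's robust cost functions, sparse Newton solvers and gauge handling are not reproduced here), the sketch below projects estimated 3D points through estimated pinhole cameras and sums the residuals against observed image points. All names and the example camera are made up.

```python
import numpy as np

def project(point_3d, R, t, f):
    """Pinhole projection of a world point into one camera."""
    p_cam = R @ point_3d + t          # world -> camera coordinates
    return f * p_cam[:2] / p_cam[2]   # perspective division

def reprojection_cost(points_3d, cameras, observations):
    """Sum of squared residuals over all (camera, point) observations.

    observations: list of (camera_index, point_index, observed_xy);
    cameras: list of (R, t, f) tuples.
    """
    cost = 0.0
    for cam_idx, pt_idx, observed_xy in observations:
        R, t, f = cameras[cam_idx]
        residual = project(points_3d[pt_idx], R, t, f) - observed_xy
        cost += residual @ residual
    return cost

# Tiny example: one camera at the origin looking down +z, two points.
cameras = [(np.eye(3), np.zeros(3), 500.0)]
points_3d = np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 4.0]])
observations = [(0, 0, np.array([25.0, 50.0])),
                (0, 1, np.array([-37.5, 12.5]))]
print(reprojection_cost(points_3d, cameras, observations))  # -> 0.0 (exact)
```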
Term Rewriting Systems
, 1992
"... Term Rewriting Systems play an important role in various areas, such as abstract data type specifications, implementations of functional programming languages and automated deduction. In this chapter we introduce several of the basic comcepts and facts for TRS's. Specifically, we discuss Abstra ..."
Abstract

Cited by 613 (18 self)
 Add to MetaCart
Term Rewriting Systems play an important role in various areas, such as abstract data type specifications, implementations of functional programming languages and automated deduction. In this chapter we introduce several of the basic comcepts and facts for TRS's. Specifically, we discuss
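A minimal, self-contained illustration of term rewriting (not taken from the chapter): terms are nested tuples, variables are plain strings, and rewriting repeatedly replaces redexes by instances of rule right-hand sides until a normal form is reached. The example rules compute addition on Peano numerals; the representation and function names are ad hoc.

```python
# Rules: add(0, y) -> y  and  add(s(x), y) -> s(add(x, y)).
RULES = [
    (("add", ("0",), "y"), "y"),
    (("add", ("s", "x"), "y"), ("s", ("add", "x", "y"))),
]

def is_var(t):
    return isinstance(t, str)

def match(pattern, term, subst):
    """Extend subst so that pattern instantiated by subst equals term, else None."""
    if is_var(pattern):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if is_var(term) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        subst = match(p, t, subst)
        if subst is None:
            return None
    return subst

def substitute(term, subst):
    if is_var(term):
        return subst[term]
    return (term[0],) + tuple(substitute(a, subst) for a in term[1:])

def rewrite(term):
    """Apply rules bottom-up until no rule applies (normal form)."""
    if not is_var(term):
        term = (term[0],) + tuple(rewrite(a) for a in term[1:])
    for lhs, rhs in RULES:
        s = match(lhs, term, {})
        if s is not None:
            return rewrite(substitute(rhs, s))
    return term

two = ("s", ("s", ("0",)))
one = ("s", ("0",))
print(rewrite(("add", two, one)))   # -> ('s', ('s', ('s', ('0',))))
```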
Locally weighted learning
 ARTIFICIAL INTELLIGENCE REVIEW
, 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memorybased learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Abstract

Cited by 594 (53 self)
 Add to MetaCart
This paper surveys locally weighted learning, a form of lazy learning and memorybased learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias
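A hedged sketch of one concrete instance of the methods surveyed, locally weighted linear regression: each query point induces distance-based Gaussian weights over the training data, and a weighted least-squares linear fit is solved locally at that query. The kernel choice, bandwidth and function name are assumptions, not the survey's prescriptions.

```python
import numpy as np

def lwlr_predict(X, y, x_query, bandwidth=0.5):
    """Predict y at x_query with a locally weighted linear fit."""
    # Gaussian weights based on distance to the query point
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    # Weighted least squares: minimize sum_i w_i * (y_i - [x_i, 1] @ theta)^2
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    sw = np.sqrt(w)[:, None]
    theta, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * y, rcond=None)
    return np.append(x_query, 1.0) @ theta

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # noisy nonlinear target
print(lwlr_predict(X, y, np.array([1.0])))          # roughly sin(1) ~ 0.84
```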
A new learning algorithm for blind signal separation

, 1996
"... A new online learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
Abstract

Cited by 614 (80 self)
 Add to MetaCart
of the sources. The GramCharlier expansion instead of the Edgeworth expansion is used in evaluating the MI. The natural gradient approach is used to minimize the MI. A novel activation function is proposed for the online learning algorithm which has an equivariant property and is easily implemented on a neural
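A hedged sketch of the natural-gradient family of updates the abstract refers to, in batch form: W <- W + eta * (I - phi(y) y^T) W. The tanh score function used below is a generic placeholder, not the paper's proposed activation function, and the toy mixing matrix and variable names are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_samples = 2, 5000
S = rng.laplace(size=(n_sources, n_samples))        # independent sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                          # unknown mixing matrix
X = A @ S                                           # observed mixtures

W = np.eye(n_sources)                               # unmixing estimate
eta = 0.01
for _ in range(2000):
    Y = W @ X
    phi = np.tanh(Y)                                # generic score function
    # natural-gradient update, averaged over the batch
    W += eta * (np.eye(n_sources) - phi @ Y.T / n_samples) @ W

print(np.round(W @ A, 2))   # should approach a scaled permutation of identity
```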
A scheduling model for reduced CPU energy
 ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE
, 1995
"... The energy usage of computer systems is becoming an important consideration, especially for batteryoperated systems. Various methods for reducing energy consumption have been investigated, both at the circuit level and at the operating systems level. In this paper, we propose a simple model of job s ..."
Abstract

Cited by 550 (3 self)
 Add to MetaCart
scheduling aimed at capturing some key aspects of energy minimization. In this model, each job is to be executed between its arrival time and deadline by a single processor with variable speed, under the assumption that energy usage per unit time, P, is a convex function of the processor speed s. We give
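A toy illustration of the model (not the paper's scheduling algorithm): a job with a given amount of work must finish inside its window on a variable-speed processor, and power P(s) is convex in the speed s. Taking P(s) = s**2 as an assumed convex power function, running at the constant speed work/window uses no more energy than finishing early at a higher speed and idling, which is the convexity observation underlying the model. Function names are illustrative.

```python
def power(s):
    return s ** 2          # assumed convex power function P(s)

def energy_constant_speed(work, window):
    s = work / window      # lowest speed that still meets the deadline
    return power(s) * window

def energy_run_then_idle(work, window, fast_speed):
    busy_time = work / fast_speed        # finish early, then idle
    return power(fast_speed) * busy_time

work, window = 10.0, 5.0
print(energy_constant_speed(work, window))         # 2.0**2 * 5   = 20.0
print(energy_run_then_idle(work, window, 4.0))     # 4.0**2 * 2.5 = 40.0
```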