Results 1–10 of 159
Connectionist Learning Procedures
Artificial Intelligence, 1989
"... A major goal of research on networks of neuronlike processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way ..."
Abstract

Cited by 339 (6 self)
A major goal of research on networks of neuronlike processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple, gradient-descent learning procedures work well for small tasks, and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks.
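The update rule the abstract describes (each connection strength moves against the derivative of a global error measure) can be sketched for a single sigmoid unit on a hypothetical toy task; this is a minimal illustration, not the multi-layer procedures the survey covers:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical toy task: learn the OR function with a single sigmoid unit.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
lr = 0.5

for _ in range(2000):
    for (x1, x2), t in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # Derivative of the squared error E = (y - t)^2 / 2 with respect
        # to each connection strength, via the chain rule:
        delta = (y - t) * y * (1 - y)
        w[0] -= lr * delta * x1   # adjust in the direction that decreases E
        w[1] -= lr * delta * x2
        b -= lr * delta

preds = [round(sigmoid(w[0] * a + w[1] * c + b)) for (a, c), _ in data]
```

After training, the unit classifies all four input patterns correctly.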
A Polynomial-Time Approximation Algorithm for the Permanent of a Matrix with Non-Negative Entries
Journal of the ACM, 2004
"... Abstract. We present a polynomialtime randomized algorithm for estimating the permanent of an arbitrary n ×n matrix with nonnegative entries. This algorithm—technically a “fullypolynomial randomized approximation scheme”—computes an approximation that is, with high probability, within arbitrarily ..."
Abstract

Cited by 324 (25 self)
We present a polynomial-time randomized algorithm for estimating the permanent of an arbitrary n × n matrix with non-negative entries. This algorithm, technically a “fully polynomial randomized approximation scheme”, computes an approximation that is, with high probability, within arbitrarily small specified relative error of the true value of the permanent.
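For contrast with the randomized approximation scheme, the permanent can be computed exactly, in exponential time, with Ryser's inclusion-exclusion formula; a minimal sketch:

```python
from itertools import combinations

def permanent(a):
    """Exact permanent via Ryser's formula, O(2^n * n^2) time."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

For `[[1, 2], [3, 4]]` this gives 1·4 + 2·3 = 10; the paper's FPRAS replaces this exponential enumeration with Markov-chain sampling.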
A comparative study of energy minimization methods for Markov random fields
In ECCV, 2006
"... Abstract. One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Ma ..."
Abstract

Cited by 245 (26 self)
One of the most exciting advances in early vision has been the development of efficient energy minimization algorithms. Many early vision tasks require labeling each pixel with some quantity such as depth or texture. While many such problems can be elegantly expressed in the language of Markov Random Fields (MRFs), the resulting energy minimization problems were widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. Unfortunately, most papers define their own energy function, which is minimized with a specific algorithm of their choice. As a result, the tradeoffs among different energy minimization algorithms are not well understood. In this paper we describe a set of energy minimization benchmarks, which we use to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods (graph cuts, LBP, and tree-reweighted message passing) as well as the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching and interactive segmentation. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods with minimal overhead. We expect that the availability of our benchmarks and interface will make it significantly easier for vision researchers to adopt the best method for their specific problems. Benchmarks, code, results and images are available at
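Of the methods compared, ICM is the simplest: each site is repeatedly reset to the label that minimizes its local energy, holding its neighbors fixed. A minimal sketch on a hypothetical 1-D chain with a quadratic data term and a Potts smoothness term:

```python
def icm(obs, labels, lam=3.0, sweeps=10):
    """Iterated conditional modes on a 1-D chain MRF with energy
    sum_i (x_i - obs_i)^2 + lam * sum_i [x_i != x_{i+1}]."""
    x = list(obs)
    n = len(x)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            def local_energy(v):
                e = (v - obs[i]) ** 2              # data term
                if i > 0:
                    e += lam * (v != x[i - 1])     # smoothness, left neighbor
                if i < n - 1:
                    e += lam * (v != x[i + 1])     # smoothness, right neighbor
                return e
            best = min(labels, key=local_energy)
            if best != x[i]:
                x[i], changed = best, True
        if not changed:
            break
    return x
```

On the noisy chain `[0, 0, 1, 5, 5]` with labels `{0, 5}`, ICM snaps the noisy middle value back to the smoother labeling `[0, 0, 0, 5, 5]`.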
Variable neighborhood search: Principles and applications
2001
"... Systematic change of neighborhood within a possibly randomized local search algorithm yields a simple and effective metaheuristic for combinatorial and global optimization, called variable neighborhood search (VNS). We present a basic scheme for this purpose, which can easily be implemented using an ..."
Abstract

Cited by 94 (9 self)
Systematic change of neighborhood within a possibly randomized local search algorithm yields a simple and effective metaheuristic for combinatorial and global optimization, called variable neighborhood search (VNS). We present a basic scheme for this purpose, which can easily be implemented using any local search algorithm as a subroutine. Its effectiveness is illustrated by solving several classical combinatorial or global optimization problems. Moreover, several extensions are proposed for solving large problem instances: using VNS within the successive approximation method yields a two-level VNS, called variable neighborhood decomposition search (VNDS); modifying the basic scheme to easily explore valleys far from the incumbent solution yields an efficient skewed VNS (SVNS) heuristic. Finally, we show how to stabilize column generation algorithms with the help of VNS and discuss various ways to use VNS in graph theory, e.g., to suggest, disprove or give hints on how to prove conjectures, an area where metaheuristics do not appear
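The basic VNS scheme (shake in the k-th neighborhood, apply local search, move and reset k on improvement, otherwise grow k) can be sketched for integer minimization; the objective and the neighborhood structure below are hypothetical illustrations, not from the paper:

```python
import random

def vns_minimize(f, x0, kmax=5, iters=100, seed=0):
    """Basic VNS sketch over the integers: shake with a jump from the k-th
    neighborhood, descend with a +/-1 local search, restart k on improvement."""
    rng = random.Random(seed)

    def local_search(x):
        while True:
            best = min((x - 1, x, x + 1), key=f)
            if best == x:
                return x
            x = best

    x = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= kmax:
            xp = x + rng.randint(-k, k) * k   # shaking in neighborhood k
            xpp = local_search(xp)            # local search from the shaken point
            if f(xpp) < f(x):
                x, k = xpp, 1                 # move and restart from k = 1
            else:
                k += 1
    return x

# Hypothetical rugged objective with local minima away from the optimum at 15.
f = lambda x: (x - 15) ** 2 + 10 * (x % 3)
best = vns_minimize(f, 0)
```

Plain descent from 0 gets trapped in a local minimum; the systematic change of neighborhood size in the shaking step is what lets the search escape to the global minimum.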
A local search approximation algorithm for k-means clustering
2004
"... In kmeans clustering we are given a set of n data points in ddimensional space ℜd and an integer k, and the problem is to determine a set of k points in ℜd, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomialtime algorithms are kno ..."
Abstract

Cited by 71 (1 self)
In k-means clustering we are given a set of n data points in d-dimensional space ℜ^d and an integer k, and the problem is to determine a set of k points in ℜ^d, called centers, to minimize the mean squared distance from each data point to its nearest center. No exact polynomial-time algorithms are known for this problem. Although asymptotically efficient approximation algorithms exist, these algorithms are not practical due to the very high constant factors involved. There are many heuristics that are used in practice, but we know of no bounds on their performance. We consider the question of whether there exists a simple and practical approximation algorithm for k-means clustering. We present a local improvement heuristic based on swapping centers in and out. We prove that this yields a (9 + ε)-approximation algorithm. We present an example showing that any approach based on performing a fixed number of swaps achieves an approximation factor of at least (9 − ε) in all sufficiently high dimensions. Thus, our approximation factor is almost tight for algorithms based on performing a fixed number of swaps. To establish the practical value of the heuristic, we present an empirical study that shows that, when combined with
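The single-swap local search can be sketched in a simplified form: restrict candidate centers to the data points themselves and swap one center for one non-center whenever the cost drops. This is a toy 2-D version under those assumptions, not the authors' implementation:

```python
def cost(points, centers):
    """Total squared distance from each point to its nearest center."""
    return sum(min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points)

def swap_heuristic(points, k):
    """Single-swap local search: seed with the first k points as centers,
    then swap a center for a non-center while the cost improves."""
    centers = list(points[:k])
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for p in points:
                if p in centers:
                    continue
                trial = centers[:i] + [p] + centers[i + 1:]
                if cost(points, trial) < cost(points, centers):
                    centers, improved = trial, True
    return centers

# Two well-separated clusters; the heuristic places one center in each.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
best = swap_heuristic(pts, 2)
```

Even though both initial centers start in the same cluster, a single swap moves one of them across, which is exactly the kind of move Lloyd-style iteration cannot make.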
A "Memetic" Approach for the Traveling Salesman Problem Implementation of a Computational Ecology for Combinatorial Optimization on MessagePassing Systems
In Proceedings of the International Conference on Parallel Computing and Transputer Applications, 1992
"... In this paper we present an approach for global combinatorial optimization applied to the TSP which combines local search heuristics with a populationbased strategy. Due to its intrinsic parallelism and the inherent asynchronicity of the method it is specially appealing for MIMD messagepassing par ..."
Abstract

Cited by 62 (8 self)
In this paper we present an approach for global combinatorial optimization applied to the TSP which combines local search heuristics with a population-based strategy. Due to its intrinsic parallelism and the inherent asynchronicity of the method, it is especially appealing for MIMD message-passing parallel computers, such as those constructed from transputers. The approach is similar to that used by Mühlenbein [14] [15] [16], Brown et al. [1], Gorges-Schleuter [3] and work performed by the Dynamics of Computation Group at Xerox PARC [4]. We consider them as prototype examples of "memetic" algorithms in the sense described in Ref. [12] (see also Ref. [5]). A preliminary description of our work can also be found in Ref. [17].
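A memetic algorithm in this sense couples a population-based search with local refinement of every individual. A minimal sketch for small TSP instances, using order crossover and 2-opt as assumed, generic operators (the paper's actual operators and parallel ecology may differ):

```python
import math
import random

def tour_len(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Local search: reverse segments while the tour gets strictly shorter."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                new = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_len(new, pts) < tour_len(tour, pts) - 1e-12:
                    tour, improved = new, True
    return tour

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a slice of p1, fill the rest from p2."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    rest = [c for c in p2 if c not in child[a:b + 1]]
    k = 0
    for i in range(n):
        if child[i] is None:
            child[i] = rest[k]
            k += 1
    return child

def memetic_tsp(pts, pop_size=8, gens=30, seed=0):
    """Population of tours, each offspring refined by 2-opt (the 'memetic' step)."""
    rng = random.Random(seed)
    n = len(pts)
    pop = [two_opt(rng.sample(range(n), n), pts) for _ in range(pop_size)]
    for _ in range(gens):
        p1, p2 = rng.sample(pop, 2)
        child = two_opt(order_crossover(p1, p2, rng), pts)
        worst = max(range(pop_size), key=lambda i: tour_len(pop[i], pts))
        if tour_len(child, pts) < tour_len(pop[worst], pts):
            pop[worst] = child
    return min(pop, key=lambda t: tour_len(t, pts))
```

On the four corners of a unit square the optimal tour has length 4, and every individual reaches it after local refinement.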
Molecular Modeling Of Proteins And Mathematical Prediction Of Protein Structure
SIAM Review, 1997
"... . This paper discusses the mathematical formulation of and solution attempts for the socalled protein folding problem. The static aspect is concerned with how to predict the folded (native, tertiary) structure of a protein, given its sequence of amino acids. The dynamic aspect asks about the possib ..."
Abstract

Cited by 47 (4 self)
This paper discusses the mathematical formulation of and solution attempts for the so-called protein folding problem. The static aspect is concerned with how to predict the folded (native, tertiary) structure of a protein, given its sequence of amino acids. The dynamic aspect asks about the possible pathways to folding and unfolding, including the stability of the folded protein. From a mathematical point of view, there are several main sides to the static problem: the selection of an appropriate potential energy function; the parameter identification by fitting to experimental data; and the global optimization of the potential. The dynamic problem entails, in addition, the solution of ordinary or stochastic differential equations (molecular dynamics simulation), which are very stiff because of multiple time scales, or, in the case of constrained molecular dynamics, of differential-algebraic equations. A theme connecting the static and dynamic aspects is the determination and formation of...
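The static side, choosing a potential and minimizing it, can be illustrated at toy scale: gradient descent on the separation of a single Lennard-Jones pair, whose analytic minimizer is 2^(1/6)·σ. This is a stand-in for the global optimization of a full protein potential, not anything from the paper itself:

```python
def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def minimize_pair(r0=1.5, lr=0.01, steps=5000, h=1e-6):
    """Gradient descent on the pair distance, using a central-difference
    derivative; the analytic minimizer is 2**(1/6) * sigma."""
    r = r0
    for _ in range(steps):
        grad = (lj(r + h) - lj(r - h)) / (2.0 * h)
        r -= lr * grad
    return r
```

For a real protein the landscape has vastly many such minima, which is why the abstract stresses global rather than local optimization.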
Cortical connections and parallel processing: Structure and function
Behavioral and Brain Sciences, 1986
"... This excerpt is provided, in screenviewable form, for personal use only by ..."
Abstract

Cited by 47 (3 self)
This excerpt is provided, in screen-viewable form, for personal use only by
Worst-case and Average-case Approximations by Simple Randomized Search Heuristics
In Proc. of STACS ’05, volume 3404 of LNCS, 2005
"... Abstract. In recent years, probabilistic analyses of algorithms have received increasing attention. Despite results on the averagecase complexity and smoothed complexity of exact deterministic algorithms, little is known about the averagecase behavior of randomized search heuristics (RSHs). In thi ..."
Abstract

Cited by 47 (12 self)
In recent years, probabilistic analyses of algorithms have received increasing attention. Despite results on the average-case complexity and smoothed complexity of exact deterministic algorithms, little is known about the average-case behavior of randomized search heuristics (RSHs). In this paper, two simple RSHs are studied on a simple scheduling problem. While it turns out that in the worst case both RSHs need exponential time to create solutions that are significantly better than 4/3-approximate, an average-case analysis for two input distributions reveals that one RSH converges to optimality in polynomial time. Moreover, it is shown that for both RSHs, parallel runs yield a PRAS.
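A typical RSH of the kind analyzed is the (1+1) EA. A sketch on a two-machine makespan scheduling instance (the paper's exact problem formulation and mutation operator may differ; the job sizes below are a hypothetical example):

```python
import random

def makespan(bits, jobs):
    """Completion time of the fuller machine under assignment bits (0/1)."""
    m0 = sum(j for b, j in zip(bits, jobs) if b == 0)
    return max(m0, sum(jobs) - m0)

def one_plus_one_ea(jobs, iters=5000, seed=0):
    """(1+1) EA: flip each bit independently with probability 1/n and
    keep the offspring if its makespan is no worse."""
    rng = random.Random(seed)
    n = len(jobs)
    x = [rng.randrange(2) for _ in range(n)]
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        if makespan(y, jobs) <= makespan(x, jobs):
            x = y
    return x

jobs = [3, 3, 2, 2, 2]   # hypothetical instance; optimum makespan is 6
best = one_plus_one_ea(jobs)
```

Accepting equally good offspring lets the search drift across plateaus, which matters for the convergence behavior the average-case analysis studies.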
Bargaining with Limited Computation: Deliberation Equilibrium
Artificial Intelligence, 2001
"... We develop a normative theory of interactionnegotiation in particularamong selfinterested computationally limited agents where computational actions are game theoretically treated as part of an agent's strategy. We focus on a 2agent setting where each agent has an intractable individual prob ..."
Abstract

Cited by 45 (19 self)
We develop a normative theory of interaction, negotiation in particular, among self-interested, computationally limited agents, where computational actions are treated game-theoretically as part of an agent's strategy. We focus on a 2-agent setting where each agent has an intractable individual problem, and there is a potential gain from pooling the problems, giving rise to an intractable joint problem. At any time, an agent can compute to improve its solution to its own problem, its opponent's problem, or the joint problem. At a deadline the agents then decide whether to implement the joint solution, and if so, how to divide its value (or cost). We present a fully normative model for controlling anytime algorithms where each agent has statistical performance profiles that are optimally conditioned on the problem instance as well as on the path of results of the algorithm run so far. Using this model, we introduce a solution concept, which we call deliberation equilibrium. It is the perfect Bayesian equilibrium of the game where deliberation actions are part of each agent's strategy. The equilibria differ based on whether the performance profiles are deterministic or stochastic, whether the deadline is known or not, and whether the proposer is known in advance or not. We present algorithms for finding the equilibria. Finally, we show that there exist instances of the deliberation-bargaining problem where no pure strategy equilibria exist and also instances where the unique equilibrium outcome is not Pareto efficient.
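The flavor of the deliberation-control problem can be sketched in a drastically simplified toy model (single decision maker, deterministic performance profiles given as value-per-computing-step lists, known deadline; all names and numbers below are hypothetical, not the paper's model):

```python
def best_allocation(own, joint, deadline):
    """Toy deliberation control: spend t steps on the joint problem and
    deadline - t steps on the own problem, then implement whichever
    solution is worth more. Profiles are deterministic lists indexed
    by the number of computing steps spent."""
    best = None
    for t in range(deadline + 1):
        value = max(own[deadline - t], joint[t])
        if best is None or value > best[0]:
            best = (value, t)
    return best  # (achieved value, steps devoted to the joint problem)

own = [0, 3, 5, 6, 6]      # hypothetical profile for the agent's own problem
joint = [0, 2, 6, 9, 11]   # hypothetical profile for the joint problem
result = best_allocation(own, joint, 4)
```

The full theory replaces this single-agent enumeration with equilibrium reasoning: each agent's allocation must be a best response given the other's deliberation strategy.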