Results 1 to 10 of 106
Wrappers for Feature Subset Selection
 AIJ SPECIAL ISSUE ON RELEVANCE
, 1997
"... In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a ..."
Abstract

Cited by 1133 (3 self)
In the feature subset selection problem, a learning algorithm is faced with the problem of selecting a relevant subset of features upon which to focus its attention, while ignoring the rest. To achieve the best possible performance with a particular learning algorithm on a particular training set, a feature subset selection method should consider how the algorithm and the training set interact. We explore the relation between optimal feature subset selection and relevance. Our wrapper method searches for an optimal feature subset tailored to a particular algorithm and a domain. We study the strengths and weaknesses of the wrapper approach and show a series of improved designs. We compare the wrapper approach to induction without feature subset selection and to Relief, a filter approach to feature subset selection. Significant improvement in accuracy is achieved for some datasets for the two families of induction algorithms used: decision trees and Naive Bayes.
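As a rough illustration of the wrapper idea, the sketch below greedily grows a feature subset, scoring each candidate subset with the learning algorithm itself. The names `wrapper_forward_selection` and `score`, and the toy scorer in the usage note, are ours; a real wrapper would plug in the cross-validated accuracy of, say, a decision tree or Naive Bayes on the training set.

```python
def wrapper_forward_selection(features, score, max_features=None):
    """Greedy forward search over feature subsets (a hill-climbing
    wrapper). `score(subset)` is assumed to estimate the accuracy of
    the target learning algorithm, e.g. via cross-validation."""
    selected, remaining = [], list(features)
    best_score = score(tuple(selected))
    while remaining and (max_features is None or len(selected) < max_features):
        # Try adding each remaining feature; keep the best single addition.
        cand_score, cand_feat = max(
            (score(tuple(selected + [f])), f) for f in remaining)
        if cand_score <= best_score:
            break  # no single feature improves the estimate: local optimum
        best_score = cand_score
        selected.append(cand_feat)
        remaining.remove(cand_feat)
    return selected, best_score
```

With a toy scorer that rewards two "relevant" features and slightly penalises the rest, the search recovers exactly the relevant pair.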
Guided local search and its application to the traveling salesman problem
, 1999
"... The Traveling Salesman Problem (TSP) is one of the most famous problems in combinatorial optimization. In this paper, we are going to examine how the techniques of Guided Local Search (GLS) and Fast Local Search (FLS) can be applied to the problem. GLS sits on top of local search heuristics and has ..."
Abstract

Cited by 53 (16 self)
The Traveling Salesman Problem (TSP) is one of the most famous problems in combinatorial optimization. In this paper, we examine how the techniques of Guided Local Search (GLS) and Fast Local Search (FLS) can be applied to the problem. GLS sits on top of local search heuristics, and its main aim is to guide these procedures in exploring the vast search spaces of combinatorial optimization problems efficiently and effectively. GLS can be combined with the neighborhood reduction scheme of FLS, which significantly speeds up the operations of the algorithm. The combination of GLS and FLS with TSP local search heuristics of different efficiency and effectiveness is studied in an effort to determine the dependence of GLS on the underlying local search heuristic used. Comparisons are made with some of the best TSP heuristic algorithms and general optimization techniques, which demonstrate the advantages of GLS over alternative heuristic approaches suggested for the problem.
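The core GLS machinery the abstract alludes to can be sketched in a few lines: local search minimises an augmented cost that adds penalties on solution features (here, tour edges), and when the search settles, the edge of highest utility is penalised to push the search elsewhere. The function names are ours; the utility formula cost(e)/(1 + p(e)) follows the standard GLS formulation, though the paper's implementation details may differ.

```python
def gls_augmented_cost(tour, dist, penalties, lam):
    """Augmented objective g(s) = f(s) + lam * (penalties on the tour's
    edges). `dist` is assumed to be a symmetric distance matrix."""
    edges = list(zip(tour, tour[1:] + tour[:1]))
    base = sum(dist[a][b] for a, b in edges)
    pen = sum(penalties.get(frozenset(e), 0) for e in edges)
    return base + lam * pen

def gls_penalise(tour, dist, penalties):
    """When local search stalls, penalise the edge of maximum utility
    util(e) = cost(e) / (1 + p(e)), making long, rarely-penalised
    edges the first to be discouraged."""
    edges = [frozenset(e) for e in zip(tour, tour[1:] + tour[:1])]
    def utility(e):
        a, b = tuple(e)
        return dist[a][b] / (1 + penalties.get(e, 0))
    worst = max(edges, key=utility)
    penalties[worst] = penalties.get(worst, 0) + 1
    return penalties
```

On a 4-city example, penalising once raises the augmented cost of the incumbent tour by exactly `lam`, steering the local search away from its longest edge.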
Evaluating the Quality of Approximations to the Non-Dominated Set
, 1998
"... : The growing interest in hard multiple objective combinatorial and nonlinear problems resulted in a significant number of heuristic methods aiming at generating sets of feasible solutions as approximations to the set of nondominated solutions. The issue of evaluating these approximations is addre ..."
Abstract

Cited by 51 (5 self)
The growing interest in hard multiple objective combinatorial and nonlinear problems has resulted in a significant number of heuristic methods aiming at generating sets of feasible solutions as approximations to the set of nondominated solutions. The issue of evaluating these approximations is addressed. Such evaluations are useful when performing experimental comparisons of different multiple objective heuristic algorithms, when defining stopping rules of multiple objective heuristic algorithms, and when adjusting parameters of heuristic algorithms to a given problem. A family of outperformance relations that can be used to compare approximations under very weak assumptions about a decision-maker's preferences is introduced. These outperformance relations define incomplete orders in the set of all approximations. It is shown that in order to compare approximations which are incomparable according to the outperformance relations, much stronger assumptions about the decision-maker's p...
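A minimal sketch of one such outperformance relation, under the usual minimisation convention: approximation A weakly outperforms B when every non-dominated point of A ∪ B comes from A and the two sets differ. This is our reading of the standard definition; the family introduced in the paper is richer.

```python
def dominates(a, b):
    """Pareto dominance (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(points):
    """The subset of points not dominated by any other point."""
    pts = {tuple(p) for p in points}
    return {p for p in pts if not any(dominates(q, p) for q in pts)}

def weakly_outperforms(A, B):
    """A sketch of weak outperformance: ND(A ∪ B) ⊆ A and A ≠ B."""
    A, B = {tuple(p) for p in A}, {tuple(p) for p in B}
    return A != B and nondominated(A | B) <= A
```

Note the relation is only a partial order: two approximations whose fronts mix are incomparable, which is exactly the case where the abstract says stronger preference assumptions are needed.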
Image classification using Markov Random Fields with two new relaxation methods: Deterministic pseudo . . .
, 1991
"... In this paper, we present two relaxation techniques: Deterministic PseudoAnnealing (DPA) and Modified Metropolis Dynamics (MMD) in order to do image classification using a Markov Random Field modelization. For the first algorithm (DPA), the a posteriori probability of a tentative labeling is genera ..."
Abstract

Cited by 42 (4 self)
In this paper, we present two relaxation techniques, Deterministic Pseudo-Annealing (DPA) and Modified Metropolis Dynamics (MMD), for image classification using a Markov Random Field model. In the first algorithm (DPA), the a posteriori probability of a tentative labeling is generalized to continuous labelings. The merit function thus defined has the same maxima under constraints yielding probability vectors. Changing these constraints convexifies the merit function. The algorithm solves this unambiguous maximization problem and then tracks the solution as the original constraints are restored, yielding a good, if suboptimal, solution to the original labeling assignment problem. The second method (MMD) is a modified version of the Metropolis algorithm: at each iteration the new state is chosen randomly, but the decision to accept it is purely deterministic. This too is a suboptimal technique, but it gives faster results than stochastic relaxation. Both methods have been implemented on a Connection Machine CM-2, and simulation results are shown for a synthetic noisy image and a SPOT image. These results are compared to those obtained with the Metropolis algorithm, the Gibbs sampler, and ICM (Iterated Conditional Modes).
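The contrast between the two acceptance rules can be made concrete. The threshold name `alpha` below is ours, chosen to match the abstract's description of a deterministic accept decision replacing the Metropolis coin flip; `delta` is the energy increase of the candidate state.

```python
import math
import random

def metropolis_accept(delta, T, rng=random):
    """Classical Metropolis rule: always accept improvements; accept a
    worse state with probability exp(-delta / T)."""
    return delta <= 0 or rng.random() < math.exp(-delta / T)

def mmd_accept(delta, T, alpha):
    """Modified Metropolis Dynamics, as we read the abstract: the
    candidate is drawn randomly, but acceptance is deterministic,
    comparing exp(-delta / T) against a fixed threshold alpha in (0, 1)
    instead of a random draw."""
    return delta <= 0 or math.exp(-delta / T) >= alpha
```

Because `mmd_accept` involves no randomness, repeated calls with the same arguments always agree, which is what makes the dynamics faster and reproducible at the cost of optimality.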
Data Perturbation for Escaping Local Maxima in Learning
 IN AAAI
, 2002
"... Almost all machine learning algorithmsbe they for regression, classification or density estimationseek hypotheses that optimize a score on training data. In most interesting cases, however, full global optimization is not feasible and local search techniques are used to discover reasonable ..."
Abstract

Cited by 35 (3 self)
Almost all machine learning algorithms, be they for regression, classification or density estimation, seek hypotheses that optimize a score on training data. In most interesting cases, however, full global optimization is not feasible and local search techniques are used to discover reasonable solutions. Unfortunately,
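The truncated abstract suggests the recipe: hill-climb to a local optimum, perturb the training data so the score landscape shifts, then climb again, keeping the best hypothesis under the original score. The sketch below is our own toy rendering of that loop, not the paper's algorithm; `perturb(score, rng)` stands in for data reweighting by returning a modified scoring function.

```python
import random

def perturbed_search(score, neighbours, x0, perturb, rounds=10, seed=0):
    """Alternate hill-climbing with perturbation of the objective.
    All names are illustrative; `score` plays the role of accuracy on
    the (unperturbed) training data."""
    rng = random.Random(seed)
    x, f = x0, score
    best_x, best_s = x0, score(x0)
    for _ in range(rounds):
        improved = True
        while improved:  # plain hill-climbing on the current landscape
            improved = False
            for n in neighbours(x):
                if f(n) > f(x):
                    x, improved = n, True
                    break
        if score(x) > best_s:          # judge by the unperturbed score
            best_x, best_s = x, score(x)
        f = perturb(score, rng)        # shift the landscape and re-climb
    return best_x, best_s
```

On a 1-D landscape with a small local peak shielding a larger one, a random linear tilt of the scores is enough to carry the search across the valley.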
Empirical Performance Evaluation Methodology and Its Application to Page Segmentation Algorithms
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2001
"... this paper, we use the following fivestep methodology to quantitatively compare the performance of page segmentation algorithms: 1) First, we create mutually exclusive training and test data sets with groundtruth, 2) we then select a meaningful and computable performance metric, 3) an optimizatio ..."
Abstract

Cited by 34 (5 self)
In this paper, we use the following five-step methodology to quantitatively compare the performance of page segmentation algorithms: 1) first, we create mutually exclusive training and test data sets with ground-truth; 2) we then select a meaningful and computable performance metric; 3) an optimization procedure is used to automatically search for the optimal parameter values of the segmentation algorithms on the training data set; 4) the segmentation algorithms are evaluated on the test data set; and, finally, 5) a statistical and error analysis is performed to give the statistical significance of the experimental results. In particular, instead of the ad hoc and manual approach typically used in the literature for training algorithms, we pose the automatic training of algorithms as an optimization problem and use the Simplex algorithm to search for the optimal parameter values. A paired-model statistical analysis and an error analysis are then conducted to provide confidence intervals for the experimental results of the algorithms. This methodology is applied to the evaluation of five page segmentation algorithms, of which three are representative research algorithms and the other two are well-known commercial products, on 978 images from the University of Washington III data set. It is found that the performance indices (average text-line accuracy) of the Voronoi, Docstrum, and Caere segmentation algorithms are not significantly different from each other, but they are significantly better than that of ScanSoft's segmentation algorithm, which, in turn, is significantly better than that of the X-Y cut algorithm.
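Steps 3 and 4 of the methodology can be sketched as follows. The paper searches parameter space with the Simplex (Nelder-Mead) algorithm; a plain grid search stands in here, and `segment`, `metric`, and the toy thresholding segmenter in the usage note are our own illustrative names.

```python
def tune_on_training(segment, metric, train, grid):
    """Step 3: pose training as optimisation, i.e. find the parameter
    value maximising the total metric over the training set. `train`
    is a list of (image, ground_truth) pairs."""
    return max(grid, key=lambda p: sum(metric(segment(img, p), gt)
                                       for img, gt in train))

def evaluate_on_test(segment, metric, test, params):
    """Step 4: average metric (e.g. text-line accuracy) of the tuned
    algorithm on the held-out test set."""
    scores = [metric(segment(img, params), gt) for img, gt in test]
    return sum(scores) / len(scores)
```

With a trivial threshold "segmenter" and a Jaccard metric, tuning picks the threshold that reproduces the training ground-truth, and the same parameter then scores on unseen data; the real study does exactly this, but with five full segmentation algorithms and 978 images.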
Markov approximation for combinatorial network optimization
, 2010
"... Many important network design problems can be formulated as a combinatorial optimization problem. A large number of such problems, however, cannot readily be tackled by distributed algorithms. The Markov approximation framework studied in this paper is a general technique for synthesizing distribut ..."
Abstract

Cited by 20 (12 self)
Many important network design problems can be formulated as combinatorial optimization problems. A large number of such problems, however, cannot readily be tackled by distributed algorithms. The Markov approximation framework studied in this paper is a general technique for synthesizing distributed algorithms. We show that when using the log-sum-exp function to approximate the optimal value of any combinatorial problem, we end up with a solution that can be interpreted as the stationary probability distribution of a class of time-reversible Markov chains. Certain carefully designed Markov chains in this class yield distributed algorithms that solve the log-sum-exp-approximated combinatorial network optimization problem. Through three case studies, we illustrate that the Markov approximation technique not only provides a fresh perspective on existing distributed solutions, but also helps us generate new distributed algorithms in various domains with provable performance. We believe the Markov approximation framework will find application in many network optimization problems, and this paper serves as a call for participation.
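The key approximation is concrete enough to state in code: the scaled log-sum-exp of the objective values smoothly approximates their maximum (within log n / beta), and normalising exp(beta * value) gives the product-form stationary distribution that the time-reversible Markov chains are engineered to have. Function names here are ours.

```python
import math

def log_sum_exp_max(values, beta):
    """(1 / beta) * log(sum(exp(beta * v))): a smooth upper
    approximation of max(values), off by at most log(n) / beta.
    Shifting by the max keeps exp() from overflowing."""
    m = max(values)
    return m + math.log(sum(math.exp(beta * (v - m)) for v in values)) / beta

def gibbs_distribution(values, beta):
    """Probability of each configuration under the approximating
    chains' stationary distribution: proportional to exp(beta * v)."""
    m = max(values)
    w = [math.exp(beta * (v - m)) for v in values]
    z = sum(w)
    return [x / z for x in w]
```

As beta grows, the distribution concentrates on the optimal configuration, so time-sharing among configurations according to it approaches solving the original combinatorial problem.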
Guided Local Search – An Illustrative Example in Function Optimisation
 In BT Technology Journal, Vol.16, No.3
, 1998
"... The Guided Local Search method has been successfully applied to a number of hard combinatorial optimisation problems from the wellknown TSP and QAP to real world problems such as Frequency Assignment and Workforce Scheduling. In this paper, we are demonstrating that the potential applications of GL ..."
Abstract

Cited by 18 (5 self)
The Guided Local Search method has been successfully applied to a number of hard combinatorial optimisation problems, from the well-known TSP and QAP to real-world problems such as Frequency Assignment and Workforce Scheduling. In this paper, we demonstrate that the potential applications of GLS are not limited to optimisation problems of a discrete nature but extend to difficult continuous optimisation problems. Continuous optimisation problems arise in many engineering disciplines (such as electrical and mechanical engineering) in the context of analysis, design or simulation tasks. The problem examined gives an illustrative example of the behaviour of GLS, providing insights into the mechanisms of the algorithm.
Parallel Strategies for Metaheuristics
"... We present a stateoftheart survey of parallel metaheuristic developments and results, discuss general design and implementation principles that apply to most metaheuristic classes, instantiate these principles for the three metaheuristic classes currently most extensively used  genetic metho ..."
Abstract

Cited by 16 (5 self)
We present a state-of-the-art survey of parallel metaheuristic developments and results, discuss general design and implementation principles that apply to most metaheuristic classes, instantiate these principles for the three metaheuristic classes currently most extensively used (genetic methods, simulated annealing, and tabu search), and identify a number of trends and promising research directions.