Results 1–10 of 67
Numerical solution of saddle point problems
 ACTA NUMERICA
, 2005
"... Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has b ..."
Abstract

Cited by 320 (25 self)
Large linear systems of saddle point type arise in a wide variety of applications throughout computational science and engineering. Due to their indefiniteness and often poor spectral properties, such linear systems represent a significant challenge for solver developers. In recent years there has been a surge of interest in saddle point problems, and numerous solution techniques have been proposed for solving this type of system. The aim of this paper is to present and discuss a large selection of solution methods for linear systems in saddle point form, with an emphasis on iterative methods for large and sparse problems.
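As a concrete illustration of the kind of system this abstract describes, a symmetric indefinite saddle point matrix K = [[A, Bᵀ], [B, 0]] can be assembled and solved with a Krylov method such as MINRES, which handles symmetric indefinite systems. The sizes and random test blocks below are invented for the sketch, not taken from the paper:

```python
import numpy as np
from scipy.sparse import bmat, eye, random as sparse_random
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n, m = 50, 20  # hypothetical block sizes

# Symmetric positive definite (1,1) block A and full-rank constraint block B
M = sparse_random(n, n, density=0.1, random_state=0)
A = M @ M.T + 10.0 * eye(n)            # M M^T is PSD; the shift makes A SPD
B = rng.standard_normal((m, n))        # dense Gaussian block is full rank a.s.

# K = [[A, B^T], [B, 0]] is symmetric but indefinite: a saddle point matrix
K = bmat([[A, B.T], [B, None]]).tocsr()

rhs = np.ones(n + m)
x, info = minres(K, rhs)               # MINRES tolerates the indefiniteness
residual = np.linalg.norm(K @ x - rhs)
```

The indefiniteness mentioned in the abstract is exactly why a method like CG (which requires positive definiteness) cannot be applied to K directly, while MINRES can.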
Vehicle dispatching with time-dependent travel times

, 2003
"... Most of the models for vehicle routing reported in the literature assume constant travel times. Clearly, ignoring the fact that the travel time between two locations does not depend only on the distance traveled, but on many other factors including the time of the day, impact the application of thes ..."
Abstract

Cited by 38 (1 self)
Most of the models for vehicle routing reported in the literature assume constant travel times. Clearly, ignoring the fact that the travel time between two locations depends not only on the distance traveled but also on many other factors, including the time of day, impacts the application of these models to real-world problems. In this paper, we present a model based on time-dependent travel speeds which satisfies the "first-in-first-out" property. An experimental evaluation of the proposed model is performed in a static and a dynamic setting, using a parallel tabu search heuristic. It is shown that the time-dependent model provides substantial improvements over a model based on fixed travel times.
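The "first-in-first-out" property means a vehicle that departs later can never arrive earlier. One standard way to obtain it (a minimal sketch under piecewise-constant speeds; the function name and data are illustrative, not the paper's model) is to step through the speed periods while covering the distance, rather than applying a single speed for the whole trip:

```python
def travel_time(dist, depart, breakpoints, speeds):
    """Time to cover `dist` departing at `depart`, where speeds[i] > 0
    applies from breakpoints[i] until breakpoints[i+1] (the last speed
    applies forever).  Stepping period by period like this yields
    arrival times that respect the first-in-first-out property."""
    t, remaining, i = depart, dist, 0
    while i + 1 < len(breakpoints) and breakpoints[i + 1] <= t:
        i += 1                                   # find the departure period
    while remaining > 1e-12:
        horizon = breakpoints[i + 1] if i + 1 < len(breakpoints) else float("inf")
        reachable = speeds[i] * (horizon - t)    # distance coverable this period
        if reachable >= remaining:
            t += remaining / speeds[i]
            remaining = 0.0
        else:
            remaining -= reachable
            t, i = horizon, i + 1                # continue in the next period
    return t - depart
```

For example, with speed 1 before t=10 and speed 2 after, departing at t=8 for a distance of 5 arrives at t=11.5, and departing at t=9 arrives at t=12: later departure, later arrival, as FIFO requires.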
Regression Models for Ordinal Data: A Machine Learning Approach
, 1999
"... In contrast to the standard machine learning tasks of classification and metric regression we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. The task of ordinal regression arises frequently in the social sciences and in information retr ..."
Abstract

Cited by 17 (4 self)
In contrast to the standard machine learning tasks of classification and metric regression, we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. The task of ordinal regression arises frequently in the social sciences and in information retrieval, where human preferences play a major role. Also, many multiclass problems are really problems of ordinal regression due to an ordering of the classes. Although the problem is rather novel to the machine learning community, it has been widely considered in statistics before. All the statistical methods rely on a probability model of a latent (unobserved) variable and on the condition of stochastic ordering. In this paper we develop a distribution-independent formulation of the problem and give uniform bounds for our risk functional. The main difference from classification is the restriction that the mapping of objects to ranks must be transitive and asymmetric. Combining our theoretical framework with results from measurement theory, we present an approach that is based on a mapping from objects to scalar utility values and thus guarantees transitivity and asymmetry. Applying the principle of Structural Risk Minimization as employed in Support Vector Machines, we derive a new learning algorithm based on large margin rank boundaries for the task of ordinal regression. Our method is easily extended to nonlinear utility functions. We give experimental results for an information retrieval task of learning the order of documents with respect to an initial query. Moreover, we show that our algorithm outperforms more naive approaches to ordinal regression, such as Support Vector Classification and Support Vector Regression, in the case of more than two ranks.
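The latent-utility view in this abstract can be made concrete: each object is mapped to a scalar utility, and ordered thresholds partition the utility axis into ranks, which automatically guarantees transitivity and asymmetry. A minimal sketch of that prediction rule only (the utilities and thresholds below are invented; this is not the paper's large-margin training algorithm):

```python
import numpy as np

def predict_rank(utilities, thresholds):
    """Rank of each object = number of thresholds its utility exceeds.
    `thresholds` must be sorted ascending; ranks run 0..len(thresholds).
    Because ranks derive from a single scalar per object, the induced
    ordering is automatically transitive and asymmetric."""
    u = np.asarray(utilities, dtype=float)
    th = np.asarray(thresholds, dtype=float)
    return (u[:, None] > th[None, :]).sum(axis=1)
```

For instance, with thresholds [0, 1, 2], utilities -1.0, 0.5 and 3.0 fall into ranks 0, 1 and 3 respectively.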
Placing registration marks
 In Proceedings of 1993 IEEE International Conference on Robotics and Automation
, 1993
"... ..."
(Show Context)
CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features
 Journal of Artificial Intelligence Research (JAIR)
, 2005
"... In this paper we propose a crossover operator for evolutionary algorithms with real values that is based on the statistical theory of population distributions. The operator is based on the theoretical distribution of the values of the genes of the best individuals in the population. The proposed ope ..."
Abstract

Cited by 12 (2 self)
In this paper we propose a crossover operator for evolutionary algorithms with real values that is based on the statistical theory of population distributions. The operator is based on the theoretical distribution of the values of the genes of the best individuals in the population. The proposed operator takes into account the localization and dispersion features of the best individuals of the population, with the objective that these features be inherited by the offspring. Our aim is the optimization of the balance between exploration and exploitation in the search process. In order to test the efficiency and robustness of this crossover, we have used a set of functions to be optimized with regard to different criteria, such as multimodality, separability, regularity and epistasis. With this set of functions we can draw conclusions as a function of the problem at hand. We analyze the results using ANOVA and multiple comparison statistical tests. As an example of how our crossover can be used to solve artificial intelligence problems, we have applied the proposed model to the problem of obtaining the weight of each network in an ensemble of neural networks. The results obtained exceed the performance of standard methods.
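A much-simplified sketch of the idea of inheriting localization and dispersion from the best individuals: fit a per-gene normal distribution to the top-n individuals and sample the offspring from it. The actual CIXL2 operator builds confidence intervals of localization and dispersion estimators, which this sketch does not reproduce, and all names and parameters here are invented:

```python
import numpy as np

def statistical_crossover(population, fitness, n_best=5, rng=None):
    """Sample each offspring gene from a normal distribution fit to that
    gene's values among the n_best individuals (minimization assumed).
    Only illustrates the localization (mean) / dispersion (std) idea
    behind crossovers based on population features, not CIXL2 itself."""
    rng = rng or np.random.default_rng()
    best = population[np.argsort(fitness)[:n_best]]  # (n_best, n_genes)
    mu = best.mean(axis=0)       # localization of the best individuals
    sigma = best.std(axis=0)     # dispersion of the best individuals
    return rng.normal(mu, sigma)
```

Sampling around the elite mean exploits good regions, while the elite standard deviation keeps some exploration alive, which is the exploration/exploitation balance the abstract refers to.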
Compression, Information Theory and Grammars: A Unified Approach
, 1990
"... Text compression is of considerable theoretical and practical interest. It is, for example, becoming increasingly important for satisfying the requirements of tting a large database onto a single CDROM. Many of the compression techniques discussed in the literature are model based. We here propose ..."
Abstract

Cited by 9 (5 self)
Text compression is of considerable theoretical and practical interest. It is, for example, becoming increasingly important for satisfying the requirements of fitting a large database onto a single CD-ROM. Many of the compression techniques discussed in the literature are model based. We here propose the notion of a formal grammar as a flexible model of text generation that encompasses most of the models offered before, as well as, in principle, extending the possibility of compression to a much more general class of languages. Assuming a general model of text generation, a derivation is given of the well-known Shannon entropy formula, making possible a theory of information based upon text representation rather than on communication. The ideas are shown to apply to a number of commonly used text models. Finally, we focus on a Markov model of text generation, suggest an information-theoretic measure of similarity between two probability distributions, and develop a clustering algorithm based on this measure. This algorithm allows us to cluster Markov states, and thereby base our compression algorithm on a smaller number of probability distributions than would otherwise have been required. A number of theoretical consequences of this approach to compression are explored, and a detailed example is given.
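For the Markov text model mentioned at the end, the compressibility limit is the Shannon entropy rate H = -Σᵢ πᵢ Σⱼ Pᵢⱼ log₂ Pᵢⱼ, where P is the transition matrix and π its stationary distribution. A self-contained sketch (the function name and example matrices are illustrative, not from the paper):

```python
import numpy as np

def markov_entropy_rate(P):
    """Entropy rate in bits/symbol of a stationary Markov chain with
    transition matrix P: H = -sum_i pi_i sum_j P_ij log2 P_ij."""
    # Stationary distribution: left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()
    # 0 * log 0 = 0 by convention: mask zero-probability transitions
    logs = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    return float(-(pi[:, None] * P * logs).sum())
```

Two sanity checks: an i.i.d. uniform binary source (all transitions 0.5) has rate 1 bit/symbol, while a deterministic two-state cycle has rate 0, since the next symbol is always predictable.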
Controlling Networks with Collaborative Nets
, 2000
"... Networks, such as the electric grid, are operated by sets of agents that are heterogeneous, local and distributed. (By "heterogeneous" we mean that the agents can range from simple devices, like relays, to very intelligent entities, like committees of humans. By "local and distribu ..."
Abstract

Cited by 8 (4 self)
Networks, such as the electric grid, are operated by sets of agents that are heterogeneous, local and distributed. (By "heterogeneous" we mean that the agents can range from simple devices, like relays, to very intelligent entities, like committees of humans. By "local and distributed" we mean that each agent can sense only a few of the network's state variables and influence only a few of its control variables.) We are concerned with two issues: the quality and speed of decision-making by heterogeneous, local and distributed agents. For quality, our standard of comparison is an ideal, centralized agent, which senses the state of the entire network and makes globally optimal decisions. (Of course, such a centralized agent is impractical for large networks.)
Rayleigh Mixture Model for Plaque Characterization
, 2010
"... Abstract—Vulnerable plaques are the major cause of carotid and coronary vascular problems, such as heart attack or stroke. A correct modeling of plaque echomorphology and composition can help the identification of such lesions. The Rayleigh distribution is widely used to describe (nearly) homogeneou ..."
Abstract

Cited by 7 (0 self)
Vulnerable plaques are the major cause of carotid and coronary vascular problems, such as heart attack or stroke. A correct modeling of plaque echomorphology and composition can help the identification of such lesions. The Rayleigh distribution is widely used to describe (nearly) homogeneous areas in ultrasound images. Since plaques may contain tissues with heterogeneous regions, more complex distributions depending on multiple parameters are usually needed, such as the Rice, K or Nakagami distributions. In such cases, the problem formulation becomes more complex, and the optimization procedure to estimate the plaque echomorphology is more difficult. Here, we propose to model the tissue echomorphology by means of a mixture of Rayleigh distributions, known as the Rayleigh mixture model (RMM). The problem formulation is still simple, but its ability to describe complex textural patterns is very powerful. In this paper, we present a method for the automatic estimation of the RMM mixture parameters by means of the expectation-maximization algorithm, which aims at characterizing tissue echomorphology in ultrasound (US). The performance of the proposed model is evaluated with a database of in vitro intravascular US cases. We show that the mixture coefficients and Rayleigh parameters explicitly derived from the mixture model are able to accurately describe different plaque types and to significantly improve the characterization performance of an already existing methodology. Index Terms—Echomorphology, intravascular ultrasound (IVUS), plaque characterization, Rayleigh mixture model (RMM), vulnerable plaque.
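A minimal sketch of EM for a mixture of Rayleigh components, the general idea behind the RMM: the E-step computes responsibilities, and the M-step has a closed form because the weighted Rayleigh MLE is σₖ² = Σᵢ rᵢₖ xᵢ² / (2 Σᵢ rᵢₖ). The function names, initialization scheme, and synthetic data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def rayleigh_pdf(x, s2):
    """Rayleigh density with scale^2 = s2: (x/s2) exp(-x^2 / (2 s2))."""
    return (x / s2) * np.exp(-x ** 2 / (2.0 * s2))

def rmm_em(x, K=2, iters=100):
    """EM for a K-component Rayleigh mixture on positive samples x.
    Returns (mixture weights, component scales sigma_k)."""
    alpha = np.full(K, 1.0 / K)
    # Spread initial scales around the single-Rayleigh MLE mean(x^2)/2
    s2 = np.mean(x ** 2) / 2.0 * np.linspace(0.5, 1.5, K)
    for _ in range(iters):
        # E-step: responsibilities r_ik, shape (N, K)
        dens = alpha * rayleigh_pdf(x[:, None], s2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form weight and scale updates
        nk = resp.sum(axis=0)
        alpha = nk / len(x)
        s2 = (resp * x[:, None] ** 2).sum(axis=0) / (2.0 * nk)
    return alpha, np.sqrt(s2)
```

On synthetic data drawn from two well-separated Rayleigh components, the recovered scales and weights approach the generating values; characterizing real IVUS texture would follow the same pattern on envelope samples.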