Results 1–10 of 27
Integration Graphs: A Class of Decidable Hybrid Systems
 In Hybrid Systems, volume 736 of Lecture Notes in Computer Science
, 1993
"... . Integration Graphs are a computational model developed in the attempt to identify simple Hybrid Systems with decidable analysis problems. We start with the class of constant slope hybrid systems (cshs), in which the right hand side of all differential equations is an integer constant. We refer to ..."
Abstract

Cited by 67 (9 self)
Integration Graphs are a computational model developed in the attempt to identify simple Hybrid Systems with decidable analysis problems. We start with the class of constant slope hybrid systems (cshs), in which the right-hand side of all differential equations is an integer constant. We refer to continuous variables whose right-hand side constants are always 1 as timers. All other continuous variables are called integrators. The first result shown in the paper is that simple questions such as reachability of a given state are undecidable even for this simple class of systems. To restrict the model even further, we impose the requirement that no test that refers to integrators may appear within a loop in the graph. This restricted class of cshs is called integration graphs. The main results of the paper are that the reachability problem of integration graphs is decidable for two special cases: the case of a single timer and the case of a single test involving integrators. The expres...
Managing Uncertainty and Vagueness in Description Logics for the Semantic Web
, 2007
"... Ontologies play a crucial role in the development of the Semantic Web as a means for defining shared terms in web resources. They are formulated in web ontology languages, which are based on expressive description logics. Significant research efforts in the semantic web community are recently direct ..."
Abstract

Cited by 58 (7 self)
Ontologies play a crucial role in the development of the Semantic Web as a means for defining shared terms in web resources. They are formulated in web ontology languages, which are based on expressive description logics. Significant research efforts in the Semantic Web community have recently been directed towards representing and reasoning with uncertainty and vagueness in ontologies for the Semantic Web. In this paper, we give an overview of approaches in this context to managing probabilistic uncertainty, possibilistic uncertainty, and vagueness in expressive description logics for the Semantic Web.
Nonunimodular Transformations of Nested Loops
 In Proc. Supercomputing '92
, 1992
"... This paper presents a linear algebraic approach to modeling loop transformations. The approach unifies apparently unrelated recent developments in supercompiler technology. Specifically we show the relationship between the dependence abstraction called dependence cones, and fully permutable loop nes ..."
Abstract

Cited by 44 (11 self)
This paper presents a linear algebraic approach to modeling loop transformations. The approach unifies apparently unrelated recent developments in supercompiler technology. Specifically, we show the relationship between the dependence abstraction called dependence cones and fully permutable loop nests. Compound transformations are modeled as matrices. The class of nonsingular linear transformations presented here subsumes the class of unimodular transformations. Nonunimodular transformations (with determinant different from ±1) create "holes" in the transformed iteration space. We change the step size of loops in order to "step aside from these holes" when traversing the transformed iteration space. For the class of nonunimodular loop transformations, we present algorithms for deriving the loop bounds, the array access expressions, and the step sizes of loops in the nest. The algorithms are based on the Hermite Normal Form of the transformation matrix. We illustrate the use of this approach in several problems such a...
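The "holes" the abstract mentions can be seen on a toy example. The matrix and loop bounds below are made up for illustration, not taken from the paper; the point is only that a transformation with |det| > 1 maps the iteration lattice onto a sparser set, and the diagonal entries of the (Hermite-form) matrix give the loop step sizes that avoid the holes.

```python
# Hypothetical illustration: a nonunimodular transformation T with
# |det T| = 2 applied to a small square iteration space.
T = [[2, 0],
     [1, 1]]   # lower triangular, det = 2, already in Hermite-like form

N = 4
image = {(T[0][0] * i + T[0][1] * j, T[1][0] * i + T[1][1] * j)
         for i in range(N) for j in range(N)}

# Stepping the outer loop by T[0][0] = 2 "steps aside" from the holes:
# points with an odd first coordinate are never produced by T.
holes = [(u, v) for u in range(2 * N) for v in range(2 * N)
         if u % 2 == 1 and (u, v) in image]
assert holes == []
```

Traversing the image with step size 1 would visit the holes; step size 2 (read off the diagonal) visits exactly the image points.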
Description Logics with Fuzzy Concrete Domains
, 2005
"... We present a fuzzy version of description logics with concrete domains. Main features are: (i) concept constructors are based on tnorm, tconorm, negation and implication; (ii) concrete domains are fuzzy sets; (iii) fuzzy modifiers are allowed; and (iv) the reasoning algorithm is based on a m ..."
Abstract

Cited by 43 (18 self)
We present a fuzzy version of description logics with concrete domains. Main features are: (i) concept constructors are based on t-norm, t-conorm, negation and implication; (ii) concrete domains are fuzzy sets; (iii) fuzzy modifiers are allowed; and (iv) the reasoning algorithm is based on a mixture of completion rules and bounded mixed integer programming.
Parallelization of the Vehicle Routing Problem with Time Windows
, 2001
"... Routing with time windows (VRPTW) has been an area of research that have
attracted many researchers within the last 10 { 15 years. In this period a number
of papers and technical reports have been published on the exact solution of the
VRPTW.
The VRPTW is a generalization of the wellknown capacitat ..."
Abstract

Cited by 24 (1 self)
Routing with time windows (VRPTW) has been an area of research that has
attracted many researchers within the last 10–15 years. In this period a number
of papers and technical reports have been published on the exact solution of the
VRPTW.
The VRPTW is a generalization of the well-known capacitated routing problem
(VRP or CVRP). In the VRP a fleet of vehicles must visit (service) a number
of customers. All vehicles start and end at the depot. For each pair of customers,
or customer and depot, there is a cost. The cost denotes how much it costs a
vehicle to drive from one customer to another. Every customer must be visited
exactly once. Additionally, each customer demands a certain quantity of goods
delivered (known as the customer demand). For the vehicles we have an upper
limit on the amount of goods that can be carried (known as the capacity). In
the most basic case all vehicles are of the same type and hence have the same
capacity. The problem is now, for a given scenario, to plan routes for the vehicles
in accordance with the mentioned constraints such that the cost accumulated
on the routes, the fixed costs (how much it costs to maintain a vehicle), or
a combination thereof is minimized.
In the more general VRPTW each customer has a time window, and between
all pairs of customers, or a customer and the depot, we have a travel time. The
vehicles now have to comply with the additional constraint that servicing of a
customer can only be started within the time window of that customer. It
is legal to arrive before a time window "opens", but the vehicle must wait, and
service will not start until the time window of the customer actually opens.
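The time-window semantics just described (wait on early arrival, infeasible on late arrival) can be sketched as a route feasibility check. The function name and data structures below are illustrative assumptions, not the thesis's implementation.

```python
def route_feasible(route, travel, windows, service_time=0.0):
    """Check VRPTW time-window feasibility of one route starting at the
    depot (node 0): early arrival waits, late arrival is infeasible."""
    t, prev = 0.0, 0
    for c in route:
        t += travel[(prev, c)]       # drive to the next customer
        opens, closes = windows[c]
        t = max(t, opens)            # wait if we arrive before the window opens
        if t > closes:               # window already closed: infeasible
            return False
        t += service_time
        prev = c
    return True

travel = {(0, 1): 5.0, (1, 2): 3.0}
windows = {1: (6.0, 10.0), 2: (0.0, 20.0)}
print(route_feasible([1, 2], travel, windows))   # → True (waits at customer 1)
```

Shrinking customer 1's window to (0.0, 4.0) makes the same route infeasible, since the vehicle arrives at time 5.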
For solving the problem exactly, four general types of solution methods have
evolved in the literature: dynamic programming, Dantzig-Wolfe decomposition (column generation),
Lagrangian decomposition, and solving the classical model formulation
directly.
Presently the algorithms that use Dantzig-Wolfe decomposition give the best results
(Desrochers, Desrosiers and Solomon, and Kohl), but the Ph.D. thesis of Kontoravdis
shows promising results for using the classical model formulation directly.
In this Ph.D. project we have used the Dantzig-Wolfe method. In the
Dantzig-Wolfe method the problem is split into two problems: a "master problem"
and a "subproblem". The master problem is a relaxed set partitioning
problem that guarantees that each customer is visited exactly once, while the
subproblem is a shortest path problem with additional constraints (capacity and
time windows). Using the master problem, the reduced costs are computed for
each arc, and these costs are then used in the subproblem in order to generate
routes from the depot and back to the depot again. The best (improving) routes
are then returned to the master problem and entered into the relaxed set partitioning
problem. As the set partitioning problem is relaxed by removing the
integer constraints, the solution is seldom integral; therefore the Dantzig-Wolfe
method is embedded in a separation-based solution technique.
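The pricing step of this scheme can be sketched in a few lines: a route's reduced cost is its cost minus the master-problem duals of the customers it visits, and only routes with negative reduced cost can improve the master problem. The names and toy data below are hypothetical, and a real implementation prices routes inside a constrained shortest-path solver rather than over an explicit candidate list.

```python
def improving_routes(candidate_routes, route_cost, duals, eps=1e-9):
    """Return (reduced_cost, route) pairs with negative reduced cost,
    most improving first. A sketch of the column-generation pricing idea."""
    out = []
    for r in candidate_routes:
        rc = route_cost[r] - sum(duals[c] for c in r)
        if rc < -eps:                 # only improving columns are returned
            out.append((rc, r))
    return sorted(out)

duals = {1: 5.0, 2: 3.0, 3: 2.0}      # from the relaxed set partitioning LP
costs = {(1, 2): 7.0, (3,): 4.0}
print(improving_routes([(1, 2), (3,)], costs, duals))  # → [(-1.0, (1, 2))]
```

Route (1, 2) has reduced cost 7 − (5 + 3) = −1 and is returned; route (3,) has reduced cost 4 − 2 = 2 and is discarded.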
In this Ph.D. project we have been trying to exploit structural properties in
order to speed up execution times, and we have been using parallel computers
to be able to solve problems faster or solve larger problems.
The thesis starts with a review of previous work within the field of VRPTW,
both with respect to heuristic solution methods and exact (optimal) methods.
Through a series of experimental tests we seek to define and examine a number
of structural characteristics.
The first series of tests examines the use of dividing time windows as the
branching principle in the separation-based solution technique. Instead of using
the methods previously described in the literature for dividing a problem into
smaller problems, we use a method developed for a variant of the VRPTW. The
results are unfortunately not positive.
Instead of dividing a problem into two smaller problems and trying to solve
these, we can try to get an integer solution without having to branch. A cut is an
inequality that separates the (non-integral) optimal solution from all the integer
solutions. By finding and inserting cuts we can try to avoid branching. For the
VRPTW, Kohl has developed the 2-path cuts. In the separation algorithm for
detecting 2-path cuts a number of tests are made. By structuring the order in
which we try to generate cuts, we achieved very positive results.
In the Dantzig-Wolfe process a large number of columns may be generated,
but a significant fraction of the columns introduced will not be interesting with
respect to the master problem. It is a priori not possible to determine which
columns are attractive and which are not, but if a column does not become part
of the basis of the relaxed set partitioning problem, we consider it to be of no
benefit for the solution process. These columns are subsequently removed from
the master problem. Experiments demonstrate a significant reduction of the running
time.
Positive results were also achieved by stopping the route-generation process
prematurely in the case of time-consuming shortest path computations. Often
this leads to stopping the shortest path subroutine in cases where the information
(from the dual variables) leads to "bad" routes. The premature exit
from the shortest path subroutine restricts the generation of "bad" routes
significantly. This produces very good results and has made it possible to solve
problem instances not solved to optimality before.
The parallel algorithm is based upon the sequential Dantzig-Wolfe based
algorithm developed earlier in the project. In an initial (sequential) phase, unsolved
problems are generated, and when there are enough unsolved problems
to start work on every processor, the parallel solution phase is initiated. In the
parallel phase each processor runs the sequential algorithm. To get a good workload,
a strategy based on balancing the load between neighbouring processors is
implemented. The resulting algorithm is efficient and capable of attaining good
speedup values. The load-balancing strategy shows an even distribution of work
among the processors. Due to the large demand for using the IBM SP2 parallel
computer at UNI-C it has unfortunately not been possible to run as many tests
as we would have liked. We have nevertheless managed to solve one problem not
solved before using our parallel algorithm.
Typical Properties of Winners and Losers in Discrete Optimization
, 2004
"... We present a probabilistic analysis for a large class of combinatorial optimization problems containing, e.g., all binary optimization problems defined by linear constraints and a linear objective function over {0,1} n. By parameterizing which constraints are of stochastic and which are of adversari ..."
Abstract

Cited by 20 (3 self)
We present a probabilistic analysis for a large class of combinatorial optimization problems containing, e.g., all binary optimization problems defined by linear constraints and a linear objective function over {0,1}^n. By parameterizing which constraints are of stochastic and which are of adversarial nature, we obtain a semi-random input model that enables us to do a general average-case analysis for a large class of optimization problems while at the same time taking care of the combinatorial structure of individual problems. Our analysis covers various probability distributions for the choice of the stochastic numbers and includes smoothed analysis with Gaussian and other kinds of perturbation models as a special case. In fact, we can exactly characterize the smoothed complexity of optimization problems in terms of their random worst-case complexity. A binary optimization problem has a polynomial smoothed...
Exact Side Effects for Interprocedural Dependence Analysis
 Communications of the ACM
, 1992
"... Exact side effects of array references in subroutines are essential for exact interprocedural dependence analysis. To summarize the side effects of multiple array references, a collective representation of all the array elements accessed is needed. So far all existing forms of collective summary of ..."
Abstract

Cited by 8 (0 self)
Exact side effects of array references in subroutines are essential for exact interprocedural dependence analysis. To summarize the side effects of multiple array references, a collective representation of all the array elements accessed is needed. So far, all existing forms of collective summary of the side effects of multiple array references are approximate. In this paper, we present a method to represent the exact side effects of multiple array references in the form of the projection of a single integer programming problem. Since the representation is collective, it dramatically reduces the number of dependence pairs to be checked compared with other methods of exact interprocedural analysis. The representation of the exact side effects proposed in this paper can be used by the Omega test to support exact interprocedural dependence analysis in parallelizing compilers. Keywords: exact side effects of array references, interprocedural dependence analysis, exact dependence tests, loop...
Different formulations for solving the heaviest k-subgraph problem
, 2002
"... Abstract. We consider the heaviest ksubgraph problem, i.e. determine a block of k nodes of a weighted graph (of n nodes) such that the total edge weight within the subgraph induced by the block is maximized. We compare from a theoretical and practical point of view different mixed integer programmi ..."
Abstract

Cited by 7 (1 self)
Abstract. We consider the heaviest k-subgraph problem, i.e., determining a block of k nodes of a weighted graph (of n nodes) such that the total edge weight within the subgraph induced by the block is maximized. We compare, from a theoretical and practical point of view, different mixed integer programming formulations of this problem. Computational experiments where the weight of each edge is equal to 1 are reported. Key words: heaviest k-subgraph problem, mixed integer linear programming, upper bounds, experiments.
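On small instances the problem statement itself can be checked by brute force. The sketch below is illustrative only, not one of the paper's MIP formulations: it enumerates every k-node block and sums the induced edge weights (exponential in n, so useful only as a reference check).

```python
from itertools import combinations

def heaviest_k_subgraph(n, k, weight):
    """Brute force: return (best total induced edge weight, best block).
    `weight` maps edges (u, v) with u < v to their weights."""
    best = (float("-inf"), None)
    for block in combinations(range(n), k):
        total = sum(weight.get((u, v), 0) for u, v in combinations(block, 2))
        best = max(best, (total, block))
    return best

# Unit edge weights, as in the paper's experiments: a triangle 0-1-2
# plus a pendant node 3 attached to node 2.
w = {(0, 1): 1, (0, 2): 1, (1, 2): 1, (2, 3): 1}
print(heaviest_k_subgraph(4, 3, w))   # → (3, (0, 1, 2)), the triangle
```

With k = 3 the triangle induces 3 unit edges, while any block containing node 3 induces at most 2.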
Dynamic programming approaches to the multiple criteria knapsack problem
 Naval Research Logistics
, 2000
"... Abstract In this paper we study the integer multiple criteria knapsack problem and propose dynamicprogrammingbased approaches to finding all the nondominated solutions. Different and more complex models are discussed including the binary multiple criteria knapsack problem, problems with more than ..."
Abstract

Cited by 6 (1 self)
Abstract. In this paper we study the integer multiple criteria knapsack problem and propose dynamic-programming-based approaches to finding all the nondominated solutions. Different and more complex models are discussed, including the binary multiple criteria knapsack problem, problems with more than one constraint, and multi-period as well as time-dependent models. The single criterion knapsack problem is a well-known combinatorial optimization problem with a wide range of applications (for an overview see e.g. [20, 21]).
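The dynamic-programming idea for the binary bicriteria case can be sketched as a state enumeration: extend reachable (weight, profit1, profit2) states item by item, then keep only the Pareto-nondominated profit vectors. The interface below is an assumption for illustration, not the paper's algorithm.

```python
def nondominated_knapsack(items, capacity):
    """items: list of (weight, profit1, profit2); returns the set of
    nondominated (profit1, profit2) vectors reachable within capacity."""
    states = {(0, 0, 0)}                          # (weight, p1, p2)
    for w, p1, p2 in items:
        # either skip the item (states kept) or take it (new states)
        states |= {(wt + w, a + p1, b + p2)
                   for wt, a, b in states if wt + w <= capacity}
    profits = {(a, b) for _, a, b in states}
    # a vector is nondominated if no other vector is >= in both criteria
    return {p for p in profits
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                       for q in profits)}

items = [(2, 3, 1), (3, 1, 4)]                    # (weight, profit1, profit2)
print(sorted(nondominated_knapsack(items, 4)))    # → [(1, 4), (3, 1)]
```

With capacity 4 the two items cannot both be packed, so each item alone yields a nondominated vector and the empty solution (0, 0) is dominated.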