Results 1–10 of 42
Petri Net Supervisors for DES with Uncontrollable and Unobservable Transitions
 IEEE Transactions on Automatic Control
, 1999
"... A supervisor synthesis technique for Petri net plants with uncontrollable and unobservable transitions that enforces the conjunction of a set of linear inequalities on the reachable markings of the plant is presented. The approach is based on the concept of Petri net place invariants. Each step o ..."
Abstract

Cited by 33 (12 self)
A supervisor synthesis technique for Petri net plants with uncontrollable and unobservable transitions that enforces the conjunction of a set of linear inequalities on the reachable markings of the plant is presented. The approach is based on the concept of Petri net place invariants. Each step of the procedure is illustrated through a running example involving the supervision of a robotic assembly cell. The controller is described by an auxiliary Petri net connected to the plant's transitions, providing a unified Petri net model of the closed loop system. The synthesis technique is based on the concept of admissible constraints. An inadmissible constraint cannot be directly enforced on a plant due to the uncontrollability or unobservability of certain plant transitions. Procedures are given for identifying all admissible linear constraints for a plant with uncontrollable and unobservable transitions, as well as methods for transforming inadmissible constraints into admissib...
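For the special case where every transition is controllable and observable, the invariant-based construction the abstract refers to reduces to a one-line matrix computation; a minimal sketch (function and argument names are mine, not the paper's):

```python
import numpy as np

def monitor_places(Dp, mu0, L, b):
    """Invariant-based supervisor synthesis, sketched for the fully
    controllable/observable special case: enforce L @ mu <= b on a
    plant with incidence matrix Dp by adding one monitor place per
    constraint, so that L @ mu_p + mu_c = b is a place invariant of
    the closed loop. Handling uncontrollable/unobservable transitions
    is the harder problem the paper actually solves."""
    L = np.atleast_2d(L)
    Dc = -L @ Dp                 # arcs connecting monitors to plant transitions
    muc0 = b - L @ mu0           # initial tokens of the monitor places
    if np.any(muc0 < 0):
        raise ValueError("initial marking already violates the constraint")
    return Dc, muc0
```

For a mutual-exclusion constraint mu1 + mu2 <= 1 on a two-loop plant, the returned monitor place consumes a token whenever either loop becomes active and returns it on exit.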
Apprenticeship learning using linear programming
 Proceedings of the 25th International Conference on Machine Learning
"... In apprenticeship learning, the goal is to learn a policy in a Markov decision process that is at least as good as a policy demonstrated by an expert. The difficulty arises in that the MDP’s true reward function is assumed to be unknown. We show how to frame apprenticeship learning as a linear progr ..."
Abstract

Cited by 26 (2 self)
In apprenticeship learning, the goal is to learn a policy in a Markov decision process that is at least as good as a policy demonstrated by an expert. The difficulty arises in that the MDP’s true reward function is assumed to be unknown. We show how to frame apprenticeship learning as a linear programming problem, and show that using an off-the-shelf LP solver to solve this problem results in a substantial improvement in running time over existing methods, up to two orders of magnitude faster in our experiments. Additionally, our approach produces stationary policies, while all existing methods for apprenticeship learning output policies that are “mixed”, i.e., randomized combinations of stationary policies. The technique used is general enough to convert any mixed policy to a stationary policy.
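The occupancy-measure view behind the LP formulation (and the stationary-policy guarantee) can be illustrated on a toy MDP. The sketch below is an assumption-laden simplification: it optimizes a known reward rather than matching expert feature expectations as the paper does, and all names and the MDP itself are illustrative; scipy's `linprog` stands in for the off-the-shelf solver.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-state, 2-action MDP: action 0 stays in the current state,
# action 1 switches state. Reward 1 for being in state 1, else 0.
gamma = 0.9
alpha = np.array([0.5, 0.5])            # start-state distribution
r = np.array([0.0, 0.0, 1.0, 1.0])      # rewards for (x00, x01, x10, x11)

# Bellman-flow equalities on the occupancy measures x[s, a]:
#   sum_a x[s, a] - gamma * (discounted flow into s) = alpha[s]
A_eq = np.array([
    [1 - gamma, 1.0, 0.0, -gamma],      # state 0: inflow is x00 + x11
    [0.0, -gamma, 1 - gamma, 1.0],      # state 1: inflow is x01 + x10
])
res = linprog(c=-r, A_eq=A_eq, b_eq=alpha, bounds=[(0, None)] * 4)
x = res.x.reshape(2, 2)
policy = x / x.sum(axis=1, keepdims=True)   # stationary policy, no mixing
```

Any feasible occupancy vector yields a stationary policy directly via normalization, which is the mechanism that avoids mixed policies.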
Mathematical Programming Algorithms for Regression-Based Nonlinear Filtering in R^N
 IEEE Transactions on Signal Processing
, 1999
"... This paper is concerned with regression under a "sum" of partial order constraints. Examples include locally monotonic, piecewise monotonic, runlength constrained, and unimodal and oligomodal regression. These are of interest not only in nonlinear filtering but also in density estimation a ..."
Abstract

Cited by 13 (2 self)
This paper is concerned with regression under a "sum" of partial order constraints. Examples include locally monotonic, piecewise monotonic, run-length constrained, and unimodal and oligomodal regression. These are of interest not only in nonlinear filtering but also in density estimation and chromatographic analysis. It is shown that under a least absolute error criterion, these problems can be transformed into appropriate finite problems, which can then be efficiently solved via dynamic programming techniques. Although the result does not carry over to least squares regression, hybrid programming algorithms can be developed to solve least squares counterparts of certain problems in the class. Index Terms: Dynamic programming, locally monotonic, monotone regression, nonlinear filtering, oligomodal, piecewise monotonic, regression under order constraints, run-length constrained, unimodal.
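For the simplest member of the class, nondecreasing (monotone) regression under the least absolute error criterion, the finite-problem reduction plus dynamic programming can be sketched directly: an optimal fit may be assumed to take values only among the input samples. This is an illustrative sketch of that reduction (names are mine), not the paper's general algorithm for the full constraint class.

```python
def l1_isotonic(x):
    """L1 (least-absolute-error) nondecreasing regression via dynamic
    programming over the finite set of candidate output levels, which
    can be restricted to the input sample values."""
    levels = sorted(set(x))
    m = len(levels)
    cost = [0.0] * m      # cost[j]: best cost so far ending at level j
    choice = []           # backpointers per stage
    for xi in x:
        best, bestj = float("inf"), 0
        prev, newc = [0] * m, [0.0] * m
        for j, v in enumerate(levels):
            if cost[j] < best:          # prefix-min over previous levels
                best, bestj = cost[j], j
            prev[j] = bestj
            newc[j] = abs(xi - v) + best
        cost = newc
        choice.append(prev)
    # backtrack from the cheapest final level
    j = min(range(m), key=lambda j: cost[j])
    out = []
    for prev in reversed(choice):
        out.append(levels[j])
        j = prev[j]
    out.reverse()
    return out
```

The prefix-min makes each stage O(m), so the whole fit is O(nm) for n samples and m distinct levels.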
Least Squares Algorithms Under Unimodality and Non-Negativity Constraints
"... In this paper a least squares method is developed for estimating a matrix B that will minimize #Y  XB# subject to the constraint that the rows of B are unimodal, i.e., each has only one peak, and 2 2 #M# being the sum of squares of all elements of M. This method is directly applicable in many ..."
Abstract

Cited by 11 (2 self)
In this paper a least squares method is developed for estimating a matrix B that will minimize ‖Y − XB‖² subject to the constraint that the rows of B are unimodal, i.e., each has only one peak, with ‖M‖² denoting the sum of squares of all elements of M. This method is directly applicable in many curve resolution problems, but also for stabilizing other problems where unimodality is known to be a valid assumption. Typical problems arise in certain types of time series analysis like chromatography or flow injection analysis. A fundamental and surprising result of this work is that unimodal least squares regression (including optimization of mode location) is no more difficult than two simple Kruskal monotone regressions. The new method is useful in and exemplified with two- and multi-way methods based on alternating least squares regression solving problems from fluorescence spectroscopy and flow injection analysis. Keywords: Unimodal Least Squares Regression, alternating l...
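A direct, if naive, way to see the connection to monotone (Kruskal) regressions: around each candidate mode, fit a nondecreasing prefix and a nonincreasing suffix with pool-adjacent-violators and keep the best. This O(n²) sketch is only for illustration; the paper's result is that the whole job costs essentially two monotone regressions. Function names are mine.

```python
def pava_increasing(y):
    """Least-squares nondecreasing fit via pool-adjacent-violators."""
    blocks = []                              # each block is [sum, count]
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()              # merge violating neighbors
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)              # each block fits its mean
    return out

def unimodal_ls(y):
    """Naive unimodal least-squares fit: try every mode location,
    fitting a nondecreasing prefix and a nonincreasing suffix."""
    best, best_fit = float("inf"), None
    n = len(y)
    for m in range(n):
        up = pava_increasing(y[:m + 1])
        down = pava_increasing(y[:m:-1])[::-1]   # nonincreasing suffix
        fit = up + down
        err = sum((a - b) ** 2 for a, b in zip(fit, y))
        if err < best:
            best, best_fit = err, fit
    return best_fit
```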
Delay insertion method in clock skew scheduling
 IEEE TCAD
, 2005
"... www.library.drexel.edu The following item is made available as a courtesy to scholars by the author(s) and Drexel University Library and may contain materials and content, including computer code and tags, artwork, text, graphics, images, and illustrations (Material) which may be protected by copyri ..."
Abstract

Cited by 10 (1 self)
 Add to MetaCart
(Show Context)
www.library.drexel.edu The following item is made available as a courtesy to scholars by the author(s) and Drexel University Library and may contain materials and content, including computer code and tags, artwork, text, graphics, images, and illustrations (Material) which may be protected by copyright law. Unless otherwise noted, the Material is made available for non profit and educational purposes, such as research, teaching and private study. For these limited purposes, you may reproduce (print, download or make copies) the Material without prior permission. All copies must include any copyright notice originally included with the Material. You must seek permission from the authors or copyright owners for all uses that are not allowed by fair use and other provisions of the U.S. Copyright Law. The responsibility for making an independent legal assessment and securing any necessary permission rests with persons desiring to reproduce or use the Material.
Measures and algorithms for best basis selection
 in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing (ICASSP)
, 1998
"... A general framework based on majorization, Schurconcavity, and concavity is given that facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed diversity measures useful for best basis selection. Admissible sparsity measures are given by the Schu ..."
Abstract

Cited by 10 (3 self)
A general framework based on majorization, Schur-concavity, and concavity is given that facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed diversity measures useful for best basis selection. Admissible sparsity measures are given by the Schur-concave functions, which are the class of functions consistent with the partial ordering on vectors known as majorization. Concave functions form an important subclass of the Schur-concave functions which attain their minima at sparse solutions to the basis selection problem. Based on a particular functional factorization of the gradient, we give a general affine scaling optimization algorithm that converges to a sparse solution for measures chosen from within this subclass.
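A minimal sketch of an affine-scaling iteration of the kind analyzed here (a FOCUSS-style update; the paper's gradient factorization is more general, and the function and parameter names below are mine):

```python
import numpy as np

def focuss(A, b, p=1.0, iters=50, eps=1e-12):
    """Affine-scaling (FOCUSS-style) iteration for a sparse solution of
    Ax = b under the diversity measure sum_i |x_i|^p, 0 < p <= 1.
    Each step reweights coordinates by |x_i|^(2-p), so small entries
    are driven toward zero while Ax = b is maintained exactly."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]         # minimum-norm start
    for _ in range(iters):
        Pi = np.diag(np.abs(x) ** (2.0 - p) + eps)   # affine scaling matrix
        x = Pi @ A.T @ np.linalg.solve(A @ Pi @ A.T, b)
    return x
```

On an underdetermined system with a one-sparse solution, the iterates concentrate their mass on that coordinate while the minimum-norm start spreads it evenly.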
Scalable, Balanced Model-based Clustering
"... This paper presents a general framework for adapting any generative (modelbased) clustering algorithm to provide balanced solutions, i.e., clusters of comparable sizes. Partitional, modelbased clustering algorithms are viewed as an iterative twostep optimization processiterative model reestim ..."
Abstract

Cited by 9 (1 self)
This paper presents a general framework for adapting any generative (model-based) clustering algorithm to provide balanced solutions, i.e., clusters of comparable sizes. Partitional, model-based clustering algorithms are viewed as an iterative two-step optimization process: iterative model re-estimation and sample reassignment. Instead of a maximum-likelihood (ML) assignment, a balance-constrained approach is used for the sample assignment step. An efficient iterative bipartitioning heuristic is developed to reduce the computational complexity of this step and make the balanced sample assignment algorithm scalable to large datasets. We demonstrate the superiority of this approach to regular ML clustering on complex data such as arbitrary-shape 2D spatial data, high-dimensional text documents, and EEG time series.
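The balance-constrained assignment step can be approximated with a simple capacity-capped greedy pass. This is a stand-in for, not a reproduction of, the paper's iterative bipartitioning heuristic; the margin ordering and all names are my choices.

```python
import numpy as np

def balanced_assign(scores, cap):
    """Capacity-constrained sample assignment: scores[i, k] is the
    log-likelihood of sample i under cluster model k, and each cluster
    may receive at most `cap` samples (requires cap * K >= n).
    Samples whose best cluster beats their second-best by the widest
    margin are committed first, so the cap costs them the least."""
    n, k = scores.shape
    top2 = np.sort(scores, axis=1)[:, -2:]        # (second-best, best)
    margin = top2[:, 1] - top2[:, 0]
    remaining = [cap] * k
    assign = [-1] * n
    for i in np.argsort(-margin):                 # widest margin first
        for c in np.argsort(-scores[i]):          # most-preferred open cluster
            if remaining[c] > 0:
                assign[i] = c
                remaining[c] -= 1
                break
    return assign
```

With cap = n/K this forces exactly equal cluster sizes; an unconstrained ML assignment would simply take each row's argmax.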
Criticality and QoS-Based Multiresource Negotiation and Adaptation
 Real-Time Systems
, 1998
"... Abstract. This paper presents design, analysis, and implementation of a multiresource management system that enables criticality and QoSbased resource negotiation and adaptation for missioncritical multimedia applications. With the goal of maximizing the number of highcriticality multimedia stre ..."
Abstract

Cited by 8 (2 self)
This paper presents design, analysis, and implementation of a multiresource management system that enables criticality- and QoS-based resource negotiation and adaptation for mission-critical multimedia applications. With the goal of maximizing the number of high-criticality multimedia streams and the degree of their QoS, it introduces a dynamic scheduling approach using online QoS adjustment and multiresource preemption. An integrated multiresource management infrastructure and a set of scheduling algorithms for multiresource preemption and online QoS adjustment are presented. The optimality and execution efficiency of two preemption algorithms are analyzed. A primal-dual-algorithm-based approximation solution is shown (1) to be comparable to the linear-programming-based solution, which is near optimal; (2) to outperform a criticality-cognitive baseline algorithm; and (3) to be feasible for online scheduling. In addition, the dynamic QoS adjustment scheme is shown to greatly improve the quality of service for video streams. The multiresource management system is part of the Presto multimedia system environment prototyped at Honeywell for mission-critical applications.
Solving Fuzzy Relation Equations with a Linear Objective Function
, 1996
"... An optimization model with a linear objective function subject to a system of fuzzy relation equations is presented. Due to the nonconvexity of its feasible domain defined by fuzzy relation equations, designing an efficient solution procedure for solving such problems is not a trivial job. In this ..."
Abstract

Cited by 7 (1 self)
An optimization model with a linear objective function subject to a system of fuzzy relation equations is presented. Due to the non-convexity of its feasible domain defined by fuzzy relation equations, designing an efficient solution procedure for solving such problems is not trivial. In this paper, we first characterize the feasible domain and then convert the problem to an equivalent problem involving 0-1 integer programming with a branch-and-bound solution technique. After presenting our solution procedure, a concrete example is included for illustration purposes. Key words: Fuzzy relation equations, branch-and-bound method, integer programming. This research work was supported, in part, by the North Carolina Supercomputing Center, Cray Research Grant, and the National Textile Center Research Grant S95-2. 1 Introduction. Let A = [a_ij], 0 ≤ a_ij ≤ 1, be an m × n fuzzy matrix and b = (b_1, …, b_n)^T, 0 ≤ b_j ≤ 1, be an n-dimensional ...
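The feasibility structure the authors build on can be sketched for the max-min system x ∘ A = b: the candidate below is the greatest possible solution, and the system is consistent exactly when that candidate works. Notation follows the abstract; the function name is mine.

```python
import numpy as np

def greatest_solution(A, b):
    """Greatest candidate solution of the max-min fuzzy relation system
    x o A = b, where (x o A)_j = max_i min(x_i, a_ij) and all entries
    lie in [0, 1]. The system is consistent iff this candidate is itself
    a solution, giving a quick feasibility test for the model."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    # xhat_i = min_j { b_j if a_ij > b_j else 1 }  (b broadcasts across rows)
    xhat = np.where(A > b, b, 1.0).min(axis=1)
    lhs = np.max(np.minimum(xhat[:, None], A), axis=0)   # xhat o A
    return xhat, bool(np.allclose(lhs, b))
```

Every solution of the system lies componentwise below this candidate, which is what lets branch-and-bound enumerate only the minimal solutions.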
A general approach to sparse basis selection: Majorization, concavity, and affine scaling
 In Proceedings of the Twelfth Annual Conference on Computational Learning Theory
, 1997
"... Measures for sparse best–basis selection are analyzed and shown to fit into a general framework based on majorization, Schurconcavity, and concavity. This framework facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed concentration measures use ..."
Abstract

Cited by 7 (3 self)
Measures for sparse best-basis selection are analyzed and shown to fit into a general framework based on majorization, Schur-concavity, and concavity. This framework facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed concentration measures useful for sparse basis selection. It also allows one to define new concentration measures, and several general classes of measures are proposed and analyzed in this paper. Admissible measures are given by the Schur-concave functions, which are the class of functions consistent with the so-called Lorentz ordering (a partial ordering on vectors also known as majorization). In particular, concave functions form an important subclass of the Schur-concave functions which attain their minima at sparse solutions to the best basis selection problem. A general affine scaling optimization algorithm obtained from a special factorization of the gradient function is developed and proved to converge to a sparse solution for measures chosen from within this subclass.