Results 1 - 10 of 216
A Survey of Computational Complexity Results in Systems and Control
, 2000
Abstract - Cited by 187 (18 self)
The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of undecidability, polynomial time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.
Computing Variance for Interval Data is NP-Hard
, 2002
Abstract - Cited by 67 (49 self)
When we have only interval ranges [x̲_i, x̄_i] of the sample values x_1, …, x_n, what is the interval [V̲, V̄] of possible values for the variance V of these values? We prove that the problem of computing the upper bound V̄ is NP-hard. We provide a feasible (quadratic-time) algorithm for computing the lower bound V̲ on the variance of interval data. We also provide a feasible algorithm that computes V̄ under reasonable, easily verifiable conditions.
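The hardness claim is easy to appreciate on a toy scale. Below is a minimal Python sketch (not the paper's algorithm) that computes the exact upper bound V̄ by checking every vertex of the box of possible inputs: the variance is a convex function of (x_1, …, x_n), so its maximum over a box is attained at a vertex, but there are 2^n vertices, and the NP-hardness result says that this kind of exponential blow-up cannot be avoided in general. The function name, the choice of population variance, and the example data are illustrative assumptions.

```python
# Illustrative brute force for the upper variance bound over interval data.
# This is NOT the paper's feasible algorithm; it only shows why naive
# enumeration is exponential, consistent with the NP-hardness of the upper bound.
# Assumption: population variance (1/n) * sum((x_i - mean)^2), with lo[i] <= hi[i].
from itertools import product
from statistics import pvariance

def variance_upper_bound_bruteforce(lo, hi):
    """Exact upper bound by checking every vertex of [lo_1,hi_1] x ... x [lo_n,hi_n].

    Variance is convex in (x_1, ..., x_n), so its maximum over a box is attained
    at some vertex -- but there are 2**n vertices to check.
    """
    assert len(lo) == len(hi) and all(a <= b for a, b in zip(lo, hi))
    return max(pvariance(vertex) for vertex in product(*zip(lo, hi)))

# Example: two overlapping intervals and one disjoint interval.
lo, hi = [0.0, 0.5, 3.0], [1.0, 1.5, 4.0]
print(variance_upper_bound_bruteforce(lo, hi))
# Easy special case for the lower bound: if all intervals share a common point,
# the minimum possible variance is 0 (set every x_i equal to that point); the
# general lower bound needs the quadratic-time algorithm described in the paper.
```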
Towards combining probabilistic and interval uncertainty in engineering . . .
, 2006
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
- 11733, SAND2007-0939
Abstract - Cited by 42 (20 self)
Sandia is a multiprogram laboratory operated by Sandia Corporation,
A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis
- Reliability Engineering and Systems Safety
, 2002
Abstract - Cited by 36 (18 self)
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to specify a set of possible values of each unknown quantity (usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about these probabilities; in real life, however, we often have only partial information about them. We therefore need methods for handling such partial information in risk analysis. Several such techniques have been proposed, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.
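For readers who want a concrete picture, here is a minimal sketch of the Cauchy-deviate idea as it can be set up for a black-box f under interval uncertainty |x_i - x̃_i| ≤ Δ_i: if f is roughly linear on that scale, then f(x̃ + δ) - f(x̃) with independent Cauchy deviates δ_i of scale Δ_i is again Cauchy, with scale Σ_i |∂f/∂x_i| Δ_i, which can be recovered by maximum likelihood. The sample size, the bisection solver, and the linearity assumption are our own simplifications, not details taken from the paper.

```python
# Hedged sketch: Monte Carlo estimate of the output half-width for a black-box f
# under interval input uncertainty, using Cauchy-distributed deviates.
# All names, the sample size, and the tolerance are illustrative.
import math
import random

def cauchy_halfwidth(f, x_tilde, deltas, n_samples=200, seed=0):
    rng = random.Random(seed)
    y0 = f(x_tilde)
    c = []
    for _ in range(n_samples):
        # Standard Cauchy deviate via tan(pi*(u - 1/2)), scaled by each half-width.
        # (We assume f stays roughly linear even for large deviates; a real
        # implementation would rescale them.)
        d = [D * math.tan(math.pi * (rng.random() - 0.5)) for D in deltas]
        c.append(f([x + di for x, di in zip(x_tilde, d)]) - y0)
    # Maximum-likelihood scale of a centered Cauchy sample:
    # solve sum_k 1 / (1 + (c_k/Delta)^2) = n_samples / 2 by bisection.
    def g(delta):
        return sum(1.0 / (1.0 + (ck / delta) ** 2) for ck in c) - n_samples / 2.0
    lo, hi = 1e-12, max(abs(ck) for ck in c) + 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

# Toy black box: y = x1 + 2*x2; the linearized half-width is 1*0.1 + 2*0.05 = 0.2.
print(cauchy_halfwidth(lambda x: x[0] + 2 * x[1], [1.0, 2.0], [0.1, 0.05]))
```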
Error Estimations For Indirect Measurements: Randomized Vs. Deterministic Algorithms For "Black-Box" Programs
- HANDBOOK ON RANDOMIZED COMPUTING, KLUWER, 2001
, 2000
Abstract - Cited by 32 (15 self)
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure the distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we measure them indirectly: we first measure some related quantities x_1, …, x_n, and then use the known relation between the x_i and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when the program that transforms the estimates x̃_1, …, x̃_n for the x_i into an estimate for y is available only as a black box (with no source code at hand). We consider this problem in two settings: statistical, when the measurement errors Δx_i = x̃_i − x_i are inde...
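As a point of reference for the comparison discussed above, the deterministic baseline is ordinary linearization: estimate each partial derivative of the black-box program by a finite difference (n + 1 calls to the program) and combine the input uncertainties. The sketch below shows only this textbook baseline, with assumed step sizes and combination rules; it is not one of the specific algorithms analyzed in the paper.

```python
# Hedged sketch of the deterministic linearization baseline for black-box error
# propagation: finite-difference gradients plus the standard combination rules.
# Step sizes, names, and the toy example are illustrative assumptions.
import math

def propagate_errors(f, x_tilde, spreads, rel_step=1e-6):
    """spreads[i] is read as a standard deviation for the statistical combination
    and as an interval half-width for the worst-case (interval) combination."""
    y0 = f(x_tilde)
    grads = []
    for i, xi in enumerate(x_tilde):
        h = rel_step * max(abs(xi), 1.0)
        xp = list(x_tilde)
        xp[i] = xi + h
        grads.append((f(xp) - y0) / h)          # forward-difference df/dx_i
    statistical = math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, spreads)))
    worst_case = sum(abs(g) * s for g, s in zip(grads, spreads))
    return statistical, worst_case

# Toy example: y = x1 * x2 at (3, 4) with spreads (0.1, 0.2).
print(propagate_errors(lambda x: x[0] * x[1], [3.0, 4.0], [0.1, 0.2]))
# -> roughly (0.72, 1.0): sqrt((4*0.1)^2 + (3*0.2)^2) and 4*0.1 + 3*0.2.
```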
Efficient and safe global constraints for handling numerical constraint systems
- SIAM J. NUMER. ANAL
, 2005
Abstract - Cited by 25 (9 self)
Numerical constraint systems are often handled by branch and prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad, a global constraint that works on a tight and safe linear relaxation of quadratic subsystems of constraints. Then, it introduces a generalization of Quad to polynomial constraint systems. It also introduces a method to get safe linear relaxations and shows how to compute safe bounds of the variables of the linear constraint system. Different linearization techniques are investigated to limit the number of generated constraints. QuadSolver, a new branch and prune algorithm that combines Quad, local consistencies, and interval methods, is introduced. QuadSolver has been evaluated on a variety of benchmarks from kinematics, mechanics, and robotics. On these benchmarks, it outperforms classical interval methods as well as constraint satisfaction problem solvers and it compares well with state-of-the-art optimization solvers.
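The branch-and-prune skeleton that such global constraints plug into is itself short. The sketch below shows a generic version in which plain interval evaluation is the only pruning step; the Quad relaxation, local consistencies, and safe linearizations described in the paper would replace or augment that pruning. The example system and the tolerance are illustrative assumptions.

```python
# Hedged sketch of a generic interval branch-and-prune loop (not Quad itself):
# discard a box whenever some constraint's interval enclosure excludes 0,
# otherwise bisect the widest variable until boxes are small enough.

def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
def i_sqr(a):
    lo, hi = min(a[0] ** 2, a[1] ** 2), max(a[0] ** 2, a[1] ** 2)
    return (0.0, hi) if a[0] <= 0.0 <= a[1] else (lo, hi)

def branch_and_prune(constraints, box, eps=1e-3):
    """constraints: functions box -> interval enclosure of g(box); solutions satisfy g = 0."""
    stack, answers = [box], []
    while stack:
        b = stack.pop()
        if any(not (g[0] <= 0.0 <= g[1]) for g in (c(b) for c in constraints)):
            continue                        # enclosure excludes 0: no solution in this box
        widths = [hi - lo for lo, hi in b]
        i = max(range(len(b)), key=widths.__getitem__)
        if widths[i] < eps:
            answers.append(b)               # small enough: keep as a candidate box
            continue
        lo, hi = b[i]
        mid = 0.5 * (lo + hi)
        stack.append(b[:i] + [(lo, mid)] + b[i + 1:])
        stack.append(b[:i] + [(mid, hi)] + b[i + 1:])
    return answers

# Example system: x^2 + y^2 - 1 = 0 and y - x^2 = 0, searched in [-2, 2] x [-2, 2].
c1 = lambda b: i_sub(i_add(i_sqr(b[0]), i_sqr(b[1])), (1.0, 1.0))
c2 = lambda b: i_sub(b[1], i_sqr(b[0]))
boxes = branch_and_prune([c1, c2], [(-2.0, 2.0), (-2.0, 2.0)])
print(len(boxes), "candidate boxes, e.g.", boxes[0])
```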
New methods for splice site recognition
, 2002
Abstract - Cited by 25 (4 self)
Splice sites are locations in DNA which separate protein-coding regions (exons) from noncoding regions (introns). Accurate splice site detectors thus form important components of computational gene finders. We pose splice site recognition as a classification problem with the classifier learnt from a labeled data set consisting of only local information around the potential splice site. Note that finding the correct position of splice sites without using global information is a rather hard task. We analyze the genomes of the nematode Caenorhabditis elegans and of humans using specially designed support vector kernels. One of the kernels is adapted from our previous work on detecting translation initiation sites in vertebrates and another uses an extension to the well-known Fisher-kernel. We find excellent performance on both data sets.
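To make the classification setup concrete, here is a hedged baseline sketch: one-hot encode a fixed window around each candidate site and train a standard SVM. The paper's contribution is the specially designed string kernels (a locality-based kernel adapted from translation-initiation-site detection and a Fisher-kernel extension), which this stand-in does not reproduce; the synthetic data, window length, and kernel choice below are illustrative assumptions.

```python
# Hedged baseline for splice-site classification from local sequence windows:
# one-hot encoding plus a standard SVM, standing in for the paper's string kernels.
import numpy as np
from sklearn.svm import SVC

BASES = "ACGT"

def one_hot(window):
    """Encode a DNA window as a flat len(window) x 4 binary vector."""
    vec = np.zeros((len(window), 4))
    for i, base in enumerate(window):
        vec[i, BASES.index(base)] = 1.0
    return vec.ravel()

# Tiny synthetic example: "true" donor sites carry the canonical GT dinucleotide
# at the window centre; negatives are random sequence.
rng = np.random.default_rng(0)
def random_window(n=20): return "".join(rng.choice(list(BASES), n))
pos = [w[:9] + "GT" + w[11:] for w in (random_window() for _ in range(100))]
neg = [random_window() for _ in range(100)]

X = np.array([one_hot(w) for w in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```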
Consistency-Based Characterization for IC Trojan Detection
- Proc. IEEE/ACM Int'l Conf. Computer-Aided Design (ICCAD 09), IEEE CS
, 2009
Abstract - Cited by 22 (8 self)
A Trojan attack maliciously modifies, alters, or embeds unplanned components inside the exploited chips. Given the original chip specifications, and process and simulation models, the goal of Trojan detection is to identify the malicious components. This paper introduces a new Trojan detection method based on nonintrusive external IC quiescent current measurements. We define a new metric called consistency. Based on the consistency metric and properties of the objective function, we present a robust estimation method that estimates the gate properties while simultaneously detecting the Trojans. Experimental evaluations on standard benchmark designs show the validity of the metric, and demonstrate the effectiveness of the new Trojan detection.
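The following sketch illustrates residual-based consistency checking in the same spirit, though it is not the paper's metric or estimation method: fit a simple per-gate leakage model to quiescent-current (IDDQ) measurements and flag the measurements that the fitted model cannot explain. The linear leakage model, the synthetic data, and the threshold are illustrative assumptions.

```python
# Hedged illustration of residual-based consistency checking for IDDQ data.
# Not the paper's consistency metric or robust estimator; just the generic idea
# that measurements inconsistent with the fitted gate model deserve a closer look.
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_gates = 40, 10
A = rng.uniform(0.5, 1.5, size=(n_meas, n_gates))    # simulated gate activity factors
true_leak = rng.uniform(1.0, 2.0, size=n_gates)       # unknown per-gate leakage currents
iddq = A @ true_leak + rng.normal(0.0, 0.01, size=n_meas)
iddq[[5, 17, 23]] += 0.8                               # extra current from a hidden Trojan

leak_hat, *_ = np.linalg.lstsq(A, iddq, rcond=None)    # estimate gate leakages from data
residuals = iddq - A @ leak_hat
threshold = 4.0 * np.median(np.abs(residuals))         # crude robust cutoff
print("flagged measurements:", np.flatnonzero(np.abs(residuals) > threshold))
```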
Non-Destructive Testing of Aerospace Structures: Granularity and Data Mining Approach
- PROCEEDINGS OF FUZZ-IEEE'2002
, 2002
Abstract - Cited by 22 (19 self)
For large aerospace structures, it is extremely important to detect faults, and nondestructive testing is the only practical way to do it. Based on measurements of ultrasonic waves, eddy currents, magnetic resonance, etc., we reconstruct the locations of the faults. The best (most efficient) known statistical methods for fault reconstruction are not perfect. We show that the use of expert knowledge-based granulation improves the quality of fault reconstruction.