Results 1–10 of 142
A Survey of Computational Complexity Results in Systems and Control
, 2000
Abstract
Cited by 112 (20 self)
The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of undecidability, polynomial-time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.
Computing Variance for Interval Data is NP-Hard
, 2002
Abstract
Cited by 61 (44 self)
When we have only interval ranges [x̲ᵢ, x̄ᵢ] of sample values x₁, …, xₙ, what is the interval [V̲, V̄] of possible values for the variance V of these values? We prove that the problem of computing the upper bound V̄ is NP-hard. We provide a feasible (quadratic-time) algorithm for computing the lower bound V̲ on the variance of interval data. We also provide a feasible algorithm that computes V̄ under reasonable, easily verifiable conditions.
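Although the general upper-bound problem is NP-hard, V̄ can still be computed exactly for small n by brute force, because the variance is convex in each xᵢ and so attains its maximum over the box of intervals at a vertex. A minimal sketch of this idea (function names are illustrative, not from the paper):

```python
from itertools import product

def variance(xs):
    """Population variance: (1/n) * sum((x - mean)^2)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def max_variance(intervals):
    """Exact upper bound on the variance of interval data [a_i, b_i].

    Variance is convex in each coordinate, so the maximum over the box
    is attained at one of the 2^n vertices; enumerating them all is
    exponential, consistent with the NP-hardness of the general problem.
    """
    return max(variance(vertex) for vertex in product(*intervals))
```

For example, `max_variance([(0, 1), (0, 1)])` picks opposite endpoints and returns 0.25.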
Towards combining probabilistic and interval uncertainty in engineering calculations: algorithms for computing statistics under interval uncertainty, and their computational complexity
 Reliable Computing
, 2006
Abstract
Cited by 41 (40 self)
In many engineering applications, we have to combine probabilistic and interval uncertainty. For example, in environmental analysis, we observe a pollution level x(t) in a lake at different moments of time t, and we would like to estimate standard statistical characteristics such as the mean, variance, autocorrelation, and correlation with other measurements. In environmental measurements, we often only measure the values with interval uncertainty. We must therefore modify the existing statistical algorithms to process such interval data. In this paper, we provide a survey of algorithms for computing various statistics under interval uncertainty and their computational complexity. The survey includes both known and new algorithms.
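Some of the statistics surveyed are easy for interval data: the sample mean is non-decreasing in every xᵢ, so its exact range comes from evaluating it at all lower endpoints and at all upper endpoints. An illustrative sketch (the function name is mine, not from the survey):

```python
def mean_bounds(intervals):
    """Exact range of the sample mean over interval data [a_i, b_i]:
    the mean is monotone in each x_i, so its extremes are attained at
    the all-lower and all-upper endpoint configurations."""
    n = len(intervals)
    lower = sum(a for a, _ in intervals) / n
    upper = sum(b for _, b in intervals) / n
    return lower, upper
```

Non-monotone statistics such as the variance are what make the general problem computationally hard.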
Error Estimations For Indirect Measurements: Randomized vs. Deterministic Algorithms For "Black-Box" Programs
 Handbook on Randomized Computing, Kluwer, 2001
, 2000
Abstract
Cited by 29 (13 self)
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure the distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we measure them indirectly: by first measuring some related quantities x₁, …, xₙ, and then using the known relation between the xᵢ and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when the program transforming the estimates x̃₁, …, x̃ₙ for the xᵢ into an estimate for y is only available as a black box (with no source code at hand). We consider this problem in two settings: statistical, when the measurement errors Δxᵢ = x̃ᵢ − xᵢ are inde...
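The deterministic baseline for this problem can be sketched as numerical differentiation of the black-box program: estimate each partial derivative cᵢ by a finite difference, then combine the input error bounds Δᵢ linearly as Δy ≈ Σ|cᵢ|·Δᵢ. A minimal sketch under a linearization assumption (the step size h and all names are illustrative, not from the paper):

```python
def linearized_error_bound(f, x, deltas, h=1e-6):
    """Estimate the worst-case error of y = f(x_1, ..., x_n) when each
    x_i is known only up to |dx_i| <= deltas[i], assuming f is
    approximately linear on that scale: dy ~= sum |df/dx_i| * delta_i."""
    y0 = f(x)
    bound = 0.0
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        c_i = (f(xp) - y0) / h  # forward-difference partial derivative
        bound += abs(c_i) * deltas[i]
    return bound
```

Note that this needs n + 1 calls to the black box; the randomized alternatives the paper compares aim to reduce this dependence on n.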
A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis
 in Risk Analysis, Reliability Engineering and Systems Safety
, 2002
Abstract
Cited by 25 (13 self)
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.
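The Cauchy-based idea can be sketched as follows: perturb the inputs with independent Cauchy deviates of scales Δᵢ; for an approximately linear black box f, the output perturbations are then again Cauchy-distributed, with scale Δ = Σ|cᵢ|·Δᵢ, which is exactly the desired worst-case error bound. The scale can be estimated robustly, here via the median of the absolute deviations (for a Cauchy(0, Δ) variable X, the median of |X| equals Δ). This sketch is my illustration of the general idea, not the paper's exact procedure; all names and the sample count are assumptions:

```python
import math
import random
import statistics

def cauchy_error_bound(f, x, deltas, n_samples=4000, seed=0):
    """Estimate Delta = sum |df/dx_i| * delta_i for a black-box f.

    Inputs are perturbed with Cauchy deviates of scales deltas[i]; for
    an approximately linear f, f(x + d) - f(x) is Cauchy(0, Delta), so
    the median of its absolute value estimates Delta."""
    rng = random.Random(seed)
    y0 = f(x)
    abs_diffs = []
    for _ in range(n_samples):
        # standard Cauchy deviate: tan(pi * (U - 1/2)) for uniform U
        perturbed = [xi + di * math.tan(math.pi * (rng.random() - 0.5))
                     for xi, di in zip(x, deltas)]
        abs_diffs.append(abs(f(perturbed) - y0))
    return statistics.median(abs_diffs)
```

Unlike the deterministic finite-difference approach, the number of samples needed does not grow with the number of inputs n.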
Non-Destructive Testing of Aerospace Structures: Granularity and Data Mining Approach
 PROCEEDINGS OF FUZZ-IEEE'2002
, 2002
Abstract
Cited by 21 (17 self)
For large aerospace structures, it is extremely important to detect faults, and non-destructive testing is the only practical way to do it. Based on measurements of ultrasonic waves, eddy currents, magnetic resonance, etc., we reconstruct the locations of the faults. The best (most efficient) known statistical methods for fault reconstruction are not perfect. We show that the use of expert knowledge-based granulation improves the quality of fault reconstruction.
New methods for splice site recognition
, 2002
Abstract
Cited by 21 (4 self)
Splice sites are locations in DNA which separate protein-coding regions (exons) from non-coding regions (introns). Accurate splice site detectors thus form important components of computational gene finders. We pose splice site recognition as a classification problem, with the classifier learnt from a labeled data set consisting of only local information around the potential splice site. Note that finding the correct position of splice sites without using global information is a rather hard task. We analyze the genomes of the nematode Caenorhabditis elegans and of humans using specially designed support vector kernels. One of the kernels is adapted from our previous work on detecting translation initiation sites in vertebrates, and another uses an extension to the well-known Fisher kernel. We find excellent performance on both data sets.
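The paper's kernels (a locality-improved kernel and a Fisher-kernel extension) are specialized, but the general flavor of string kernels for sequence classification can be illustrated with a simple k-mer spectrum kernel, which counts shared substrings of length k. This particular kernel is my illustration, not the one used in the paper:

```python
from collections import Counter

def kmer_counts(seq, k=3):
    """Count all length-k substrings (k-mers) of a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s, t, k=3):
    """Inner product of k-mer count vectors: a valid positive
    semi-definite kernel that can be plugged into an SVM."""
    cs, ct = kmer_counts(s, k), kmer_counts(t, k)
    return sum(count * ct[w] for w, count in cs.items())
```

A kernel like this compares sequences using only local composition, mirroring the paper's setting of classifying from local information around the candidate splice site.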
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
, 2007
Abstract
Cited by 20 (14 self)
This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics, and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with methods of inferential statistics such as outlier detection and regression. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
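Among the statistics covered, the median (and any percentile) is straightforward for interval data: it is non-decreasing in every sample value, so exact bounds come from taking the statistic of the lower endpoints and of the upper endpoints separately. A sketch (the function name is illustrative):

```python
import statistics

def median_bounds(intervals):
    """Exact range of the sample median over interval data: the median
    is monotone in each x_i, so its extremes are attained at the
    all-lower and all-upper endpoint configurations."""
    lower = statistics.median(a for a, _ in intervals)
    upper = statistics.median(b for _, b in intervals)
    return lower, upper
```

The variance, by contrast, is not monotone in the xᵢ, which is why its upper bound is computationally hard in general.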
Outlier Detection Under Interval Uncertainty: Algorithmic Solvability and Computational Complexity
, 2003
Abstract
Cited by 19 (12 self)
In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection is to start with some "normal" values x₁, …, xₙ, compute the sample average E and the sample standard deviation σ, and then mark a value x as an outlier if x is outside the k₀-sigma interval [E − k₀·σ, E + k₀·σ] (for some preselected parameter k₀). In real life, we often have only interval ranges [x̲ᵢ, x̄ᵢ] for the normal values x₁, …, xₙ. In this case, we only have intervals of possible values for the bounds E − k₀·σ and E + k₀·σ. We can therefore identify outliers as values that are outside all k₀-sigma intervals.
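Because E + k₀·σ is convex in (x₁, …, xₙ) (σ is a seminorm of the deviations from the mean) and E − k₀·σ is concave, the extreme values of both bounds over the box of intervals are attained at endpoint combinations. For small n this gives an exact, though exponential, computation of the guaranteed-outlier thresholds; a sketch (names are illustrative, not from the paper):

```python
import math
from itertools import product

def outlier_thresholds(intervals, k0=2.0):
    """Smallest possible E - k0*sigma and largest possible E + k0*sigma
    over interval data.  A value x below the first or above the second
    lies outside every feasible k0-sigma interval, i.e. it is a
    guaranteed outlier.  Enumerates all 2^n endpoint combinations."""
    n = len(intervals)
    lo = math.inf
    hi = -math.inf
    for xs in product(*intervals):
        m = sum(xs) / n
        s = math.sqrt(sum((x - m) ** 2 for x in xs) / n)
        lo = min(lo, m - k0 * s)
        hi = max(hi, m + k0 * s)
    return lo, hi
```

With degenerate (point) intervals this reduces to the classical k₀-sigma test on exact data.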
Exact bounds on finite populations of interval data
 Reliable Computing
, 2001
Abstract
Cited by 14 (10 self)
In this paper, we start research into using intervals to bound the impact of bounded measurement errors on the computation of bounds on finite population parameters (“descriptive statistics”). Specifically, we provide a feasible (quadratic-time) algorithm for computing the lower bound σ̲² on the finite population variance of interval data. We prove that the problem of computing the upper bound σ̄² is, in general, NP-hard. We provide a feasible algorithm that computes σ̄² under reasonable, easily verifiable conditions, and provide preliminary results on computing other functions of finite populations.
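The feasible lower-bound computation exploits the structure of the minimizer: variance is minimized by moving every xᵢ as close as possible to a common value t, i.e. xᵢ = clamp(t, aᵢ, bᵢ), and the optimal t can be found by examining each zone between consecutive sorted endpoints, where the variance is a convex quadratic in t. A sketch of this idea (not the paper's exact pseudocode; all names are mine):

```python
import math

def min_variance(intervals):
    """Exact lower bound on the population variance of interval data.

    The minimizing configuration clamps each x_i toward a common value
    t: x_i = clamp(t, a_i, b_i).  Between consecutive sorted endpoints
    the clamp pattern is fixed and the variance is a convex quadratic
    in t, so each zone is minimized in closed form.
    """
    n = len(intervals)
    pts = sorted({e for ab in intervals for e in ab})
    zones = [(pts[k], pts[k + 1]) for k in range(len(pts) - 1)]
    zones += [(p, p) for p in pts]  # degenerate zones at the endpoints
    best = math.inf
    for lo, hi in zones:
        mid = (lo + hi) / 2  # representative point fixing the pattern
        fsum = fsq = 0.0     # sums over the clamped ("fixed") x_i
        m = 0                # count of x_i free to equal t in this zone
        for a, b in intervals:
            if a <= mid <= b:
                m += 1
            else:
                v = b if b < mid else a  # clamp to the nearest endpoint
                fsum += v
                fsq += v * v
        if m == n:
            return 0.0  # all intervals share a common point
        t = fsum / (n - m) if m else mid  # unconstrained zone optimum
        t = min(max(t, lo), hi)           # keep t inside the zone
        s1 = fsum + m * t
        s2 = fsq + m * t * t
        best = min(best, s2 / n - (s1 / n) ** 2)
    return best
```

With O(n) endpoints and O(n) work per zone, this runs in quadratic time, matching the complexity claimed in the abstract.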