Computational complexity and feasibility of data processing and interval computations. Kluwer Academic Publishers, 1998

by P Kahl, V Kreinovich, A Lakeyev, J Rohn
Results 1 - 10 of 216

A Survey of Computational Complexity Results in Systems and Control

by Vincent D. Blondel, John N. Tsitsiklis , 2000
"... The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fi ..."
Abstract - Cited by 187 (18 self) - Add to MetaCart
The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of undecidability, polynomial time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control.

Computing Variance for Interval Data is NP-Hard

by Scott Ferson, Lev Ginzburg, Vladik Kreinovich, Luc Longpré, Monica Aviles , 2002
"... When we have only interval ranges [x i ; x i ] of sample values x 1 ; : : : ; xn , what is the interval [V ; V ] of possible values for the variance V of these values? We prove that the problem of computing the upper bound V is NP-hard. We provide a feasible (quadratic time) algorithm for computi ..."
Abstract - Cited by 67 (49 self) - Add to MetaCart
When we have only interval ranges $[\underline{x}_i, \overline{x}_i]$ of sample values $x_1, \dots, x_n$, what is the interval $[\underline{V}, \overline{V}]$ of possible values for the variance V of these values? We prove that the problem of computing the upper bound $\overline{V}$ is NP-hard. We provide a feasible (quadratic time) algorithm for computing the lower bound $\underline{V}$ on the variance of interval data. We also provide a feasible algorithm that computes $\overline{V}$ under reasonable, easily verifiable conditions.
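Since the abstract is about the tractability of these bounds, a tiny illustration may help (a brute-force sketch with assumed helper names, not the paper's algorithm): the sample variance is a convex quadratic in $x_1, \dots, x_n$, so its maximum over a box of intervals is attained with every $x_i$ at an endpoint, and enumerating all $2^n$ endpoint combinations gives the exact upper bound for small $n$. The exponential cost of this enumeration is consistent with the NP-hardness of the general problem; the lower bound needs a different method, since a minimum need not occur at a vertex.

    # Brute-force sketch (not the paper's algorithm). The sample variance is a
    # convex quadratic in (x_1, ..., x_n), so its maximum over a box of intervals
    # is attained at a vertex, i.e. with every x_i at an endpoint; enumerating
    # the 2^n vertices is exact but only workable for tiny n.
    from itertools import product

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def variance_upper_bound(intervals):
        """intervals: list of (lo, hi) pairs; exact upper bound by vertex enumeration."""
        return max(variance(corner) for corner in product(*intervals))

    # Example: three measurements, each known only to within +/- 0.1
    print(variance_upper_bound([(0.9, 1.1), (1.9, 2.1), (2.9, 3.1)]))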

Towards combining probabilistic and interval uncertainty in engineering . . .

by Vladik Kreinovich, Gang Xiang, Scott A. Starks, Luc Longpre, Martine Ceberio, Roberto Araiza, J. Beck, A. Nayak, R. Kandathi, R. Torres, J. G. Hajagos , 2006
"... ..."
Abstract - Cited by 45 (42 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...reover, if we want to compute the variance range with a given accuracy ε, the problem is still NP-hard. (For a more detailed description of NP-hardness in relation to interval uncertainty, see, e.g., [19].) Linearization. From the practical viewpoint, often, we may not need the exact range, we can often use approximate linearization techniques. For example, when the uncertainty comes from measurement ...
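For reference, the linearization mentioned here is the standard first-order propagation of interval widths (a generic formula, not taken from the cited paper): if $y = f(x_1, \dots, x_n)$ and each input is known with accuracy $\Delta_i$ around the estimate $\tilde x_i$, then
\[
\Delta \;\approx\; \sum_{i=1}^{n} \left| \frac{\partial f}{\partial x_i}(\tilde x_1, \dots, \tilde x_n) \right| \Delta_i ,
\qquad y \in [\tilde y - \Delta,\; \tilde y + \Delta], \quad \tilde y = f(\tilde x_1, \dots, \tilde x_n),
\]
which is cheap to compute but, unlike the exact range, is only an approximation when $f$ is noticeably nonlinear over the box.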

Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty

by Scott Ferson, Vladik Kreinovich, Janos Hajagos, William Oberkampf, Lev Ginzburg - 11733, SAND2007-0939. hal-00839639, version 1 - 28 Jun 2013
"... Sandia is a multiprogram laboratory operated by Sandia Corporation, ..."
Abstract - Cited by 42 (20 self) - Add to MetaCart
Sandia is a multiprogram laboratory operated by Sandia Corporation,
(Show Context)

Citation Context

...statistics The table below summarizes the computability results for statistics of data sets containing intervals that have been established in this report and elsewhere (Ferson et al. 2002a,b; 2005a; Kreinovich and Longpré 2003; 2004; Kreinovich et al. 2003; 2004a,b; 2005a,b,c; Wu et al. 2003; Xiang 2006; Xiang et al. 2006; Dantsin et al. 2006; Xiang et al. 2007a). The column headings refer to exemplar problems. For instanc...

A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis

by V. Kreinovich, S.A. Ferson - in Risk Analysis, Reliability Engineering and Systems Safety, 2002
"... Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. T ..."
Abstract - Cited by 36 (18 self) - Add to MetaCart
Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe a set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need to describe methods of handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.
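The black-box technique referred to in the title is Cauchy-based; the following is a rough sketch of the Cauchy-deviates idea (illustration only, with assumed function names and sample sizes, not the authors' exact procedure): if $f$ is roughly linear on the box $\tilde x_i \pm \Delta_i$ and each input is perturbed by a Cauchy-distributed deviate with scale $\Delta_i$, then $f(\tilde x + \delta) - f(\tilde x)$ is again Cauchy-distributed, with scale equal to the linearized error bound $\sum_i |\partial f/\partial x_i|\,\Delta_i$, so that bound can be estimated from a modest number of calls to $f$ regardless of the number of inputs.

    # Rough sketch of the Cauchy-deviates idea (illustration only, not the
    # authors' exact procedure). Perturb each input by a Cauchy deviate with
    # scale Delta_i; the differences f(x~ + delta) - f(x~) are then roughly
    # Cauchy-distributed with scale equal to the linearized error bound,
    # which we estimate from the samples below.
    import math
    import random

    def cauchy_error_bound(f, x_tilde, deltas, n_samples=200, seed=0):
        rng = random.Random(seed)
        y_tilde = f(x_tilde)
        diffs = []
        for _ in range(n_samples):
            perturbed = [x + d * math.tan(math.pi * (rng.random() - 0.5))  # Cauchy(0, d) deviate
                         for x, d in zip(x_tilde, deltas)]
            diffs.append(abs(f(perturbed) - y_tilde))
        diffs.sort()
        return diffs[len(diffs) // 2]  # median of |Cauchy(0, D)| samples estimates the scale D

    # Example black box: f(x) = x1 * x2 + x3 at (1, 2, 3) with Delta = (0.1, 0.1, 0.1);
    # the linearized bound is 2*0.1 + 1*0.1 + 1*0.1 = 0.4, and the estimate should be close.
    f = lambda x: x[0] * x[1] + x[2]
    print(cauchy_error_bound(f, [1.0, 2.0, 3.0], [0.1, 0.1, 0.1]))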

Citation Context

...] and references therein). In this case, the problem of error estimation for indirect measurements becomes computationally difficult (NP-hard) even when the function $f(x_1, \dots, x_n)$ is quadratic [17, 27]. However, in most real-life situations, the possibility to ignore quadratic terms is a reasonable assumption, because, e.g., for an error of 1% its square is a negligible 0.01%. With the above restri...

Error Estimations For Indirect Measurements: Randomized Vs. Deterministic Algorithms For "Black-Box" Programs

by Vladik Kreinovich, Raúl Trejo - Handbook on Randomized Computing, Kluwer, 2001, 2000
"... In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure a distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we can measure ..."
Abstract - Cited by 32 (15 self) - Add to MetaCart
In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure a distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we can measure them indirectly: by first measuring some related quantities $x_1, \dots, x_n$, and then by using the known relation between $x_i$ and y to reconstruct the value of the desired quantity y. In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when a program for transforming the estimates $\tilde x_1, \dots, \tilde x_n$ for $x_i$ into an estimate for y is only available as a black box (with no source code at hand). We consider this problem in two settings: statistical, when measurement errors $\Delta x_i = \tilde x_i - x_i$ are inde...
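As a point of comparison with the randomized (Cauchy) approach sketched above, here is a minimal sketch of a deterministic baseline for such black-box error estimation (assumed details, not necessarily the paper's exact algorithm): estimate the effect of each input's error bound with one extra call to the program and sum the linearized contributions, which costs $n + 1$ calls and grows with the number of inputs.

    # Deterministic baseline sketch (assumed details, not necessarily the
    # paper's exact algorithm): estimate the contribution of each input's
    # error bound Delta_i by one extra call to the black-box program f and
    # add up the linearized contributions, n + 1 calls to f in total.
    def deterministic_error_bound(f, x_tilde, deltas):
        y_tilde = f(x_tilde)
        bound = 0.0
        for i, d in enumerate(deltas):
            shifted = list(x_tilde)
            shifted[i] += d                     # perturb only the i-th input by Delta_i
            bound += abs(f(shifted) - y_tilde)  # roughly |df/dx_i| * Delta_i if f is near-linear
        return bound

    f = lambda x: x[0] * x[1] + x[2]
    print(deterministic_error_bound(f, [1.0, 2.0, 3.0], [0.1, 0.1, 0.1]))  # about 0.4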

Citation Context

... and references therein). In this case, the problem of error estimation for indirect measurements becomes computationally difficult (NP-hard) even when the function $f(x_1, \dots, x_n)$ is quadratic [43, 62]. However, in most real-life situations, the possibility to ignore quadratic terms is a reasonable assumption, because, e.g., for an error of 1% its square is a negligible 0.01%. With the above restri...

Efficient and safe global constraints for handling numerical constraint systems

by Yahia Lebbah, Claude Michel, Michel Rueher, David Daney, Jean-Pierre Merlet - SIAM J. Numer. Anal., 2005
"... Numerical constraint systems are often handled by branch and prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad, a global constraint that works on a tight and safe linear relaxation of quadratic subsystems ..."
Abstract - Cited by 25 (9 self) - Add to MetaCart
Numerical constraint systems are often handled by branch and prune algorithms that combine splitting techniques, local consistencies, and interval methods. This paper first recalls the principles of Quad, a global constraint that works on a tight and safe linear relaxation of quadratic subsystems of constraints. Then, it introduces a generalization of Quad to polynomial constraint systems. It also introduces a method to get safe linear relaxations and shows how to compute safe bounds of the variables of the linear constraint system. Different linearization techniques are investigated to limit the number of generated constraints. QuadSolver, a new branch and prune algorithm that combines Quad, local consistencies, and interval methods, is introduced. QuadSolver has been evaluated on a variety of benchmarks from kinematics, mechanics, and robotics. On these benchmarks, it outperforms classical interval methods as well as constraint satisfaction problem solvers and it compares well with state-of-the-art optimization solvers.
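As a reminder of what a safe linear relaxation of a quadratic term looks like (a generic textbook construction given here for orientation, not Quad's exact formulation): for $y = x^2$ with $x \in [l, u]$, convexity gives the valid linear inequalities
\[
y \;\ge\; 2c\,x - c^2 \quad \text{for any } c \in [l, u],
\qquad
y \;\le\; (l + u)\,x - l\,u ,
\]
so replacing the quadratic term by a few such inequalities, with outward rounding of the coefficients to keep them safe under floating point, yields a linear system whose solution set encloses that of the original constraint.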

Citation Context

...]. x − x = $[\underline{x} - \overline{x},\ \overline{x} - \underline{x}]$ = [−10, 10] instead of [0, 0] as one could expect. In general, it is not possible to compute the exact enclosure of the range for an arbitrary function over the real numbers [25]. Thus, Moore introduced the concept of interval extension: the interval extension of a function is an interval function that computes outer approximations on the range of the function over a domain [...
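A few lines of toy interval arithmetic make the dependency effect quoted above concrete (illustrative code with an assumed interval, e.g. x = [−5, 5], not the solver's implementation):

    # Toy interval arithmetic illustrating the dependency effect quoted above.
    # With x = [-5, 5], evaluating x - x interval-wise gives [-10, 10] instead
    # of [0, 0], because the two occurrences of x are treated as independent;
    # the same effect makes a natural interval extension overestimate the true
    # range of an expression.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)
        def __mul__(self, other):
            ps = [self.lo * other.lo, self.lo * other.hi,
                  self.hi * other.lo, self.hi * other.hi]
            return Interval(min(ps), max(ps))
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    x = Interval(-5, 5)
    print(x - x)      # [-10, 10], not [0, 0]
    print(x * x - x)  # encloses the range of t*t - t on [-5, 5], but overestimates it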

New methods for splice site recognition

by S. Sonnenburg, G. Rätsch, A. Jagota, K.-R. Müller , 2002
"... Splice sites are locations in DNA which separate protein-coding regions (exons) from noncoding regions (introns). Accurate splice site detectors thus form important components of computational gene finders. We pose splice site recognition as a classification problem with the classifier learnt from ..."
Abstract - Cited by 25 (4 self) - Add to MetaCart
Splice sites are locations in DNA which separate protein-coding regions (exons) from noncoding regions (introns). Accurate splice site detectors thus form important components of computational gene finders. We pose splice site recognition as a classification problem with the classifier learnt from a labeled data set consisting of only local information around the potential splice site. Note that finding the correct position of splice sites without using global information is a rather hard task. We analyze the genomes of the nematode Caenorhabditis elegans and of humans using specially designed support vector kernels. One of the kernels is adapted from our previous work on detecting translation initiation sites in vertebrates and another uses an extension to the well-known Fisher-kernel. We find excellent performance on both data sets.

Consistency-Based Characterization for IC Trojan Detection

by Yousra Alkabani, Farinaz Koushanfar - Proc. IEEE/ACM Int'l Conf. Computer-Aided Design (ICCAD 09), IEEE CS, 2009
"... A Trojan attack maliciously modifies, alters, or embeds un-planned components inside the exploited chips. Given the original chip specifications, and process and simulation mod-els, the goal of Trojan detection is to identify the malicious components. This paper introduces a new Trojan detection met ..."
Abstract - Cited by 22 (8 self) - Add to MetaCart
A Trojan attack maliciously modifies, alters, or embeds unplanned components inside the exploited chips. Given the original chip specifications, and process and simulation models, the goal of Trojan detection is to identify the malicious components. This paper introduces a new Trojan detection method based on nonintrusive external IC quiescent current measurements. We define a new metric called consistency. Based on the consistency metric and properties of the objective function, we present a robust estimation method that estimates the gate properties while simultaneously detecting the Trojans. Experimental evaluations on standard benchmark designs show the validity of the metric, and demonstrate the effectiveness of the new Trojan detection.

Citation Context

...values. Such problems contain an uncertainty about the values of the variables and the interval of the benign variables (because of the measurement error and PV) and have been shown to be NP-complete [7]. 4.2 The detection algorithm In this section, we develop efficient heuristics to address the above complex problem. For the MSE estimation, an influence function (IF) is defined which measures the im...

Non-Destructive Testing of Aerospace Structures: Granularity and Data Mining Approach

by Roberto Osegueda, Vladik Kreinovich, Lakshmi Potluri - Proceedings of FUZZ-IEEE'2002, 2002
"... For large aerospace structures, it is extremely important to detect faults, and nondestructive testing is the only practical way to do it. Based on measurements of ultrasonic waves, Eddy currents, magnetic resonance, etc., we reconstruct the locations of the faults. The best (most efficient) known s ..."
Abstract - Cited by 22 (19 self) - Add to MetaCart
For large aerospace structures, it is extremely important to detect faults, and nondestructive testing is the only practical way to do it. Based on measurements of ultrasonic waves, Eddy currents, magnetic resonance, etc., we reconstruct the locations of the faults. The best (most efficient) known statistical methods for fault reconstruction are not perfect. We show that the use of expert knowledge-based granulation improves the quality of fault reconstruction.