Results 1–10 of 14
Automatic Subspace Clustering of High Dimensional Data
Data Mining and Knowledge Discovery, 2005
Abstract

Cited by 600 (12 self)
Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the order of input records. We present CLIQUE, a clustering algorithm that satisfies each of these requirements. CLIQUE identifies dense clusters in subspaces of maximum dimensionality. It generates cluster descriptions in the form of DNF expressions that are minimized for ease of comprehension. It produces identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution. Through experiments, we show that CLIQUE efficiently finds accurate clusters in large high dimensional datasets.
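The bottom-up, grid-based idea the abstract describes can be sketched in a few lines. This is an illustrative toy, not the published CLIQUE algorithm: the function name, the fixed `bins` grid, and the density threshold `tau` are all assumptions; it finds dense one-dimensional units and joins them into two-dimensional candidates using the monotonicity property (every projection of a dense unit is dense).

```python
from itertools import combinations

def dense_units(points, bins=4, tau=2):
    """Toy bottom-up dense-unit search in the spirit of CLIQUE.
    A unit is a grid cell in some subspace; it is dense if it
    holds at least tau points. Coordinates are assumed in [0, 1)."""
    dims = len(points[0])
    cell = lambda x: min(int(x * bins), bins - 1)  # grid cell along one axis

    # 1-dimensional dense units, keyed by (dimension, cell index).
    counts = {}
    for p in points:
        for d in range(dims):
            u = (d, cell(p[d]))
            counts[u] = counts.get(u, 0) + 1
    dense1 = {u for u, c in counts.items() if c >= tau}

    # Candidate 2-dimensional units are joins of dense 1-d units
    # (monotonicity: every projection of a dense unit is dense).
    counts2 = {}
    for p in points:
        for d1, d2 in combinations(range(dims), 2):
            u1, u2 = (d1, cell(p[d1])), (d2, cell(p[d2]))
            if u1 in dense1 and u2 in dense1:
                counts2[(u1, u2)] = counts2.get((u1, u2), 0) + 1
    dense2 = {u for u, c in counts2.items() if c >= tau}
    return dense1, dense2
```

For points clustered only in dimensions 0 and 1, the sketch reports a dense unit in that two-dimensional subspace while the noisy third dimension contributes nothing.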
Relating data compression and learnability, 1986
Abstract

Cited by 56 (1 self)
We explore the learnability of two-valued functions from samples using the paradigm of Data Compression. A first algorithm (compression) chooses a small subset of the sample which is called the kernel. A second algorithm predicts future values of the function from the kernel, i.e. the algorithm acts as a hypothesis for the function to be learned. The second algorithm must be able to reconstruct the correct function values when given a point of the original sample. We demonstrate that the existence of a suitable data compression scheme is sufficient to ensure learnability. We express the probability that the hypothesis predicts the function correctly on a random sample point as a function of the sample and kernel sizes. No assumptions are made on the probability distributions according to which the sample points are generated. This approach provides an alternative to that of [BEHW86], which uses the Vapnik–Chervonenkis dimension to classify learnable geometric concepts. Our bounds are derived directly from the kernel size of the algorithms rather than from the Vapnik–Chervonenkis dimension of the hypothesis class. The proofs are simpler and the introduced compression scheme provides a rigorous model for studying data compression in connection with machine learning.
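A standard small example makes the kernel/reconstruction pair concrete. For the concept class of closed intervals on the line, a compression scheme of size two exists: keep only the leftmost and rightmost positive examples, and reconstruct by predicting positive exactly between them. The function names below are illustrative, not from the paper.

```python
def compress(sample):
    """Kernel for the class of closed intervals on the line: the
    leftmost and rightmost positively labelled points (size <= 2).
    `sample` is a list of (x, label) pairs."""
    pos = [x for x, y in sample if y]
    return (min(pos), max(pos)) if pos else None

def predict(kernel, x):
    """Reconstruction: the hypothesis labels x positive iff it lies
    between the two kernel points."""
    return kernel is not None and kernel[0] <= x <= kernel[1]
```

On any sample realizable by an interval, the reconstructed hypothesis reproduces every sample label from just the two kernel points, which is the consistency requirement the abstract states.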
Circuit Minimization Problem
In ACM Symposium on Theory of Computing (STOC), 1999
Abstract

Cited by 27 (1 self)
We study the complexity of the circuit minimization problem: given the truth table of a Boolean function f and a parameter s, decide whether f can be realized by a Boolean circuit of size at most s. We argue why this problem is unlikely to be in P (or even in P/poly) by giving a number of surprising consequences of such an assumption. We also argue that proving this problem to be NP-complete (if it is indeed true) would imply proving strong circuit lower bounds for the class E, which appears beyond the currently known techniques. Keywords: hard Boolean functions, derandomization, natural properties, NP-completeness. An n-variable Boolean function f_n : {0,1}^n → {0,1} can be given either by its truth table of size 2^n, or by a Boolean circuit whose size may be significantly smaller than 2^n. It is well known that most Boolean functions on n variables have circuit complexity at least 2^n/n [Sha49], but so far no family of sufficiently hard functions has ...
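The Shannon-style counting argument behind the 2^n/n lower bound can be checked numerically: there are 2^(2^n) functions, while a crude product bound limits how many circuits of size s exist. The constants in `circuit_bound` are deliberately generous assumptions; the point is only that the circuit count grows far slower than the function count.

```python
def num_functions(n):
    # There are 2^(2^n) Boolean functions on n inputs.
    return 2 ** (2 ** n)

def circuit_bound(n, s):
    """Crude upper bound on the number of fan-in-2 circuits with s
    gates: each gate picks one of 3 operations and two inputs drawn
    from the n variables, two constants, and the earlier gates."""
    count = 1
    for g in range(s):
        count *= 3 * (n + 2 + g) ** 2
    return count

def min_size_lower_bound(n):
    """Smallest s for which the crude bound could cover all functions;
    every smaller s provably leaves some function without a circuit."""
    s = 1
    while circuit_bound(n, s) < num_functions(n):
        s += 1
    return s
```

Already for n = 4 the count shows that two gates cannot realize all 65536 four-variable functions; asymptotically the same comparison forces size roughly 2^n/n for most functions.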
Covering Rectilinear Polygons with Axis-Parallel Rectangles, 1999
Abstract

Cited by 26 (1 self)
We give an O(sqrt(log n))-factor approximation algorithm for covering a rectilinear polygon with holes using axis-parallel rectangles. This is the first polynomial-time approximation algorithm for this problem with an o(log n) approximation factor.
A general method for sensor planning in multi-sensor systems: Extension to random occlusion, 2005
Abstract

Cited by 20 (1 self)
Systems utilizing multiple sensors are required in many domains. In this paper, we specifically concern ourselves with applications where dynamic objects appear randomly and the system is employed to obtain some user-specified characteristics of such objects. For such systems, we deal with the tasks of determining measures for evaluating their performance and of determining good sensor configurations that would maximize such measures for better system performance. We introduce a constraint in sensor planning that has not been addressed earlier: visibility in the presence of random occluding objects. Two techniques are developed to analyze such visibility constraints: a probabilistic approach to determine “average” visibility rates and a deterministic approach to address worst-case scenarios. Apart from this constraint, other important constraints to be considered include image resolution, field of view, capture orientation, and algorithmic constraints such as stereo matching and background appearance. Integration of such constraints is performed via the development of a probabilistic framework that allows one to reason about different occlusion events and integrates different multi-view capture and visibility constraints in a natural way. Integration of the thus obtained capture quality measure across the region of interest yields a measure for the effectiveness of a sensor configuration, and maximization of such a measure yields sensor configurations that are ...
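An "average" visibility rate of the kind the abstract mentions can be estimated by simulation. The sketch below is a hypothetical 2D setup, not the paper's framework: occluders are modelled as discs dropped uniformly in the unit square, and a target is visible when the sensor-target segment clears every disc.

```python
import random

def segment_hits_disc(p, q, c, r):
    """True if segment p-q passes within distance r of centre c."""
    (px, py), (qx, qy), (cx, cy) = p, q, c
    dx, dy = qx - px, qy - py
    # Parameter of the closest point on the segment to c, clamped to [0, 1].
    t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)))
    ex, ey = px + t * dx - cx, py + t * dy - cy
    return ex * ex + ey * ey <= r * r

def average_visibility(sensor, target, n_occluders, radius, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that `target` is visible
    from `sensor` when n_occluders discs of the given radius appear
    uniformly at random in the unit square."""
    rng = random.Random(seed)
    visible = 0
    for _ in range(trials):
        if all(not segment_hits_disc(sensor, target, (rng.random(), rng.random()), radius)
               for _ in range(n_occluders)):
            visible += 1
    return visible / trials
```

As expected, the estimated visibility decreases monotonically as the occluder radius grows, which is the qualitative behaviour a sensor-placement optimizer would trade off against resolution and field-of-view constraints.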
Complexities of Efficient Solutions of Rectilinear Polygon Cover Problems, 1994
Abstract

Cited by 13 (1 self)
The rectilinear polygon cover problem is one in which a certain class of features of a rectilinear polygon of n vertices has to be covered with the minimum number of rectangles included in the polygon. In particular, we consider covering the entire interior, the boundary and the set of corners of the polygon. These problems have important applications in storing images and in the manufacture of integrated circuits. Unfortunately, most of these problems are known to be NP-complete. Hence it is necessary to develop efficient heuristics for these problems or to show that the design of efficient heuristics is impossible. In this paper we show: (a) The corner cover problem is NP-complete. (b) The boundary and the corner cover problems can be approximated within a factor of 4 of the optimum in O(n log n) and O(n^1.5) time, respectively. (c) No polynomial-time approximation scheme exists for the interior and the boundary cover problems, unless P = NP.
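Since the corner cover problem is NP-complete, heuristics are the practical route. The sketch below is a generic greedy set-cover heuristic applied to corner covering, with its usual O(log n) guarantee; it is not the paper's 4-approximation, and the representation (a map from candidate rectangle ids to the corner sets they cover) is an assumption for illustration.

```python
def greedy_corner_cover(corners, rect_covers):
    """Generic greedy set cover for the corner cover problem:
    repeatedly pick the candidate rectangle covering the most
    still-uncovered corners. `rect_covers` maps a rectangle id to
    the set of polygon corners that rectangle covers."""
    uncovered = set(corners)
    chosen = []
    while uncovered:
        best = max(rect_covers, key=lambda r: len(rect_covers[r] & uncovered))
        if not rect_covers[best] & uncovered:
            raise ValueError("some corner is covered by no candidate rectangle")
        chosen.append(best)
        uncovered -= rect_covers[best]
    return chosen
```

On small instances the greedy choice often matches the optimum, but in general it only guarantees a logarithmic factor, which is why the dedicated constant-factor algorithms in the paper matter.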
Applying Partial Evaluation to VLSI Design Rule Checking, 1995
Abstract

Cited by 2 (0 self)
This report describes the design and implementation of a complete VLSI design rule checking program. We use formal techniques to develop a methodology for performing design rule checking, and implement this methodology in the Scheme programming language. We specify the requirements for a simplified VLSI design database and implement it, making use of the Hilbert R-tree. We describe the implementation of an efficient algorithm for the decomposition of rectilinear polygons into collections of rectangles. We apply partial evaluation techniques to our final design rule checking program in order to determine the effect this has on our program's structure and performance. Finally, we describe the implementation of a graphical user interface for the checker and summarise our experiences and insights gained during the course of this project.
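The core of a design rule check on decomposed rectangles is simple to illustrate. This hypothetical sketch (not the report's Scheme implementation, and O(n²) rather than R-tree indexed) flags pairs of rectangles on one layer that violate a minimum-spacing rule.

```python
import math

def spacing_violations(rects, min_space):
    """Naive minimum-spacing design-rule check on axis-aligned
    rectangles given as (x1, y1, x2, y2). Overlapping or abutting
    shapes are treated as one connected piece and skipped; disjoint
    pairs closer than min_space are reported by index."""
    bad = []
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            a, b = rects[i], rects[j]
            # Axis gaps: 0 when the projections overlap.
            gx = max(a[0] - b[2], b[0] - a[2], 0)
            gy = max(a[1] - b[3], b[1] - a[3], 0)
            d = math.hypot(gx, gy)
            if 0 < d < min_space:
                bad.append((i, j))
    return bad
```

A production checker would replace the quadratic pair loop with the spatial index the report describes (the Hilbert R-tree), which is exactly the kind of structure partial evaluation can then specialize per rule.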
Two Geometric Optimization Problems
in Ding-Zhu Du and Jie Sun (eds.), New Advances in Optimization and Approximation, 1994
Abstract

Cited by 1 (0 self)
We consider two optimization problems with geometric structures. The first one concerns the following minimization problem, termed the rectilinear polygon cover problem: “Cover certain features of a given rectilinear polygon (possibly with rectilinear holes) with the minimum number of rectangles included in the polygon.” Depending upon whether one wants to cover the interior, boundary or corners of the polygon, the problem is termed the interior, boundary or corner cover problem, respectively. Most of these problems are known to be NP-complete. In this chapter we survey some of the important previous results for these problems and provide a proof of impossibility of a polynomial-time approximation scheme for the interior and boundary cover problems. The second problem concerns routing in a segmented routing channel. The related problems are fundamental to routing and design automation for Field Programmable Gate Arrays (FPGAs), a new type of electrically programmable VLSI. In this chapter we survey the theoretical results on the combinatorial complexity and algorithm design for segmented channel routing. It is known that the segmented channel routing problem is in general NP-complete. Efficient polynomial-time algorithms for a number of important special cases are presented.
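For contrast with the NP-complete segmented case, the classical unsegmented channel is polynomial: the minimum track count equals the channel density, i.e. the maximum number of nets crossing any column. A minimal sweep-line sketch (nets as closed integer intervals; this baseline is an illustration, not one of the chapter's segmented-routing algorithms):

```python
def min_tracks(nets):
    """Minimum number of tracks for a fully unsegmented channel:
    equals the channel density, the maximum number of nets crossing
    any column. Nets are closed integer intervals (left, right)."""
    # +1 event where a net starts, -1 just past where it ends; ties
    # sort end-events first, so a net ending at r frees its track for
    # a net starting at r + 1.
    events = sorted([(l, 1) for l, r in nets] + [(r + 1, -1) for l, r in nets])
    density = best = 0
    for _, delta in events:
        density += delta
        best = max(best, density)
    return best
```

Segmentation breaks this neat equality: a net must fit entirely inside the pre-fabricated segments of its track, which is what makes the general problem NP-complete.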
The Class Cover Problem with Boxes, 2012
Abstract

Cited by 1 (0 self)
In this paper we study the following problem: Given sets R and B of r red and b blue points respectively in the plane, find a minimum-cardinality set H of axis-aligned rectangles (boxes) so that every point in B is covered by at least one rectangle of H, and no rectangle of H contains a point of R. We prove the NP-hardness of the stated problem, and give either exact or approximate algorithms depending on the type of rectangles considered. If the covering boxes are vertical or horizontal strips we give an efficient algorithm that runs in O(r log r + b log b + √(rb)) time. For covering with oriented half-strips an optimal O((r + b) log(min{r, b}))-time algorithm is shown. We prove that the problem remains NP-hard if the covering boxes are half-strips oriented in any of the four orientations, and show that there exists an O(1)-approximation algorithm. We also give an NP-hardness proof if the covering boxes are squares. In this situation, we show that there exists an O(1)-approximation algorithm.
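The vertical-strip case has a clean structure worth sketching: a strip may span any red-free x-interval, so one strip per maximal red-free gap containing blue points is both necessary and sufficient. This reduction to x-coordinates is an illustrative sketch (assuming no red and blue point share an x-coordinate), not the paper's full algorithm.

```python
from bisect import bisect_left

def min_vertical_strips(red_xs, blue_xs):
    """Minimum number of vertical strips covering every blue point
    while containing no red point. Each blue x falls into the gap
    between consecutive sorted red x-coordinates; the answer is the
    number of distinct gaps that hold at least one blue point."""
    bounds = sorted(red_xs)
    return len({bisect_left(bounds, b) for b in blue_xs})
```

Necessity holds because two blue points separated by a red x-coordinate cannot share a strip; sufficiency because one strip can be widened to fill its entire red-free gap.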
Handling and Archiving of Theses and Dissertations, 2009
Abstract
This is to certify that the thesis entitled Polygon and fortress guarding by diffuse reflection, angular visibility and direct visibility, submitted by Arindam Khan (04CS3001) to the Department of Computer Science and Engineering in partial fulfilment of the requirements for the award of the degree of Master of Technology, is a bona fide record of work carried out by him under my supervision and guidance. The thesis has fulfilled all the requirements as per the regulations of this institute and, in my opinion, has reached the standard needed for submission.