Results 11-20 of 538
Translating pseudo-Boolean constraints into SAT
 Journal on Satisfiability, Boolean Modeling and Computation
, 2006
"... In this paper, we describe and evaluate three different techniques for translating pseudoboolean constraints (linear constraints over boolean variables) into clauses that can be handled by a standard SATsolver. We show that by applying a proper mix of translation techniques, a SATsolver can perfor ..."
Abstract

Cited by 121 (2 self)
In this paper, we describe and evaluate three different techniques for translating pseudo-Boolean constraints (linear constraints over Boolean variables) into clauses that can be handled by a standard SAT solver. We show that by applying a proper mix of translation techniques, a SAT solver can perform on a par with the best existing native pseudo-Boolean solvers. This is particularly valuable in those cases where the constraint problem of interest is naturally expressed as a SAT problem, except for a handful of constraints. Translating those constraints to get a pure clausal problem will take full advantage of the latest improvements in SAT research. A particularly interesting result of this work is the efficiency of sorting networks to express pseudo-Boolean constraints. Although tangential to this presentation, the result gives a suggestion as to how synthesis tools may be modified to produce arithmetic circuits more suitable for SAT-based reasoning. Keywords: pseudo-Boolean, SAT solver, SAT translation, integer linear programming
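As a concrete illustration of what such a translation produces, here is a minimal Python sketch that converts a single pseudo-Boolean "less-than-or-equal" constraint into DIMACS-style clauses by enumerating minimal over-weight subsets and blocking each one. This enumeration is exponential in general; the sorting-network encoding highlighted in the abstract (along with BDD and adder-network encodings) is the compact alternative and is not reproduced here.

```python
from itertools import combinations

def pb_to_cnf(weights, bound):
    """Translate sum(weights[i] * x_i) <= bound over Boolean x_i into CNF.

    Naive sketch: for each *minimal* subset of variables whose weights exceed
    the bound, emit one clause forbidding all of them being true at once.
    Literals are 1-based DIMACS style: -i means "x_i is false".
    """
    n = len(weights)
    clauses = []
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            total = sum(weights[i] for i in subset)
            if total > bound:
                # minimal: dropping any single member restores the bound
                if all(total - weights[i] <= bound for i in subset):
                    clauses.append([-(i + 1) for i in subset])
    return clauses

# 2*x1 + 3*x2 + 4*x3 <= 5 forbids {x1,x3} (weight 6) and {x2,x3} (weight 7)
print(pb_to_cnf([2, 3, 4], 5))
```

The resulting clauses can be handed directly to any CNF SAT solver alongside the rest of the problem.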
Multiple Resolution Segmentation of Textured Images
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1991
"... This paper presents a multiple resolution algorithm for segmenting images into regions with differing statistical behavior. In addition, an algorithm is developed for determining the number of statistically distinct regions in an image and estimating the parameters of those regions. Both algorithms ..."
Abstract

Cited by 119 (7 self)
This paper presents a multiple resolution algorithm for segmenting images into regions with differing statistical behavior. In addition, an algorithm is developed for determining the number of statistically distinct regions in an image and estimating the parameters of those regions. Both algorithms use a causal Gaussian autoregressive (AR) model to describe the mean, variance and spatial correlation of the image textures. Together the algorithms may be used to perform unsupervised texture segmentation. The multiple resolution segmentation algorithm first segments images at coarse resolution and then progresses to finer resolutions until individual pixels are classified. This method results in accurate segmentations and requires significantly less computation than some previously known methods. The field containing the classification of each pixel in the image is modeled as a Markov random field (MRF). Segmentation at each resolution is then performed by maximizing the a posteriori prob...
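The coarse-to-fine idea can be sketched briefly. In this sketch, block means and a global threshold stand in for the paper's Gaussian AR texture model and MAP/MRF estimation (both are assumptions of the sketch, not the paper's method): confident blocks are labeled at coarse resolution, and only ambiguous blocks are re-examined at finer resolutions.

```python
import numpy as np

def segment(img, levels=2):
    """Two-class coarse-to-fine segmentation sketch.

    A block is labeled wholesale when its mean is far from the decision
    threshold (relative to its spread); otherwise its quadrants are
    re-examined at the next finer level, down to individual pixels.
    """
    thresh = img.mean()
    labels = np.zeros(img.shape, dtype=int)

    def refine(r0, r1, c0, c1, level):
        block = img[r0:r1, c0:c1]
        m, margin = block.mean(), block.std()
        if level == 0 or abs(m - thresh) > margin:
            labels[r0:r1, c0:c1] = int(m > thresh)   # confident: label whole block
        else:
            rm, cm = (r0 + r1) // 2, (c0 + c1) // 2  # ambiguous: recurse on quadrants
            for a, b in ((r0, rm), (rm, r1)):
                for c, d in ((c0, cm), (cm, c1)):
                    if a < b and c < d:
                        refine(a, b, c, d, level - 1)

    refine(0, img.shape[0], 0, img.shape[1], levels)
    return labels
```

On a synthetic two-region image the coarse pass resolves everything after one split, which is where the computational savings over per-pixel classification come from.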
Reducing the Space Requirement of Suffix Trees
 Software – Practice and Experience
, 1999
"... We show that suffix trees store various kinds of redundant information. We exploit these redundancies to obtain more space efficient representations. The most space efficient of our representations requires 20 bytes per input character in the worst case, and 10.1 bytes per input character on average ..."
Abstract

Cited by 118 (10 self)
We show that suffix trees store various kinds of redundant information. We exploit these redundancies to obtain more space efficient representations. The most space efficient of our representations requires 20 bytes per input character in the worst case, and 10.1 bytes per input character on average for a collection of 42 files of different type. This is an advantage of more than 8 bytes per input character over previous work. Our representations can be constructed without extra space, and as fast as previous representations. The asymptotic running times of suffix tree applications are retained. Copyright © 1999 John Wiley & Sons, Ltd. KEY WORDS: data structures; suffix trees; implementation techniques; space reduction
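The kind of redundancy being exploited can be made concrete. In a trie of all suffixes, every unary chain of nodes carries no branching information, and path compression keeps only branching nodes and leaves; the sketch below counts both (this is a generic illustration, not the paper's representation).

```python
def trie_nodes(text):
    """Insert every suffix of `text` into a plain trie (dict of dicts).
    Returns (root, node_count); node_count includes the root."""
    root, count = {}, 1
    for i in range(len(text)):
        node = root
        for ch in text[i:]:
            if ch not in node:
                node[ch] = {}
                count += 1
            node = node[ch]
    return root, count

def compressed_nodes(node):
    """Count the nodes that survive path compression: leaves (0 children)
    and branching nodes (>= 2 children). These are exactly the nodes of a
    compressed suffix tree; unary chain nodes are redundant."""
    keep = 1 if len(node) != 1 else 0
    return keep + sum(compressed_nodes(child) for child in node.values())

root, n = trie_nodes("banana$")
print(n, compressed_nodes(root))
```

For "banana$" the trie holds 23 nodes while the compressed tree needs only 11, and a compressed suffix tree never exceeds 2n nodes for a text of length n with a unique terminator; the paper's representations push well past this by also compacting what each surviving node stores.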
Algorithmic Self-Assembly of DNA
, 1998
"... How can molecules compute? In his early studies of reversible computation, Bennett imagined an enzymatic Turing Machine which modified a heteropolymer (such as DNA) to perform computation with asymptotically low energy expenditures. Adleman's recent experimental demonstration of a DNA computation, ..."
Abstract

Cited by 104 (6 self)
How can molecules compute? In his early studies of reversible computation, Bennett imagined an enzymatic Turing Machine which modified a heteropolymer (such as DNA) to perform computation with asymptotically low energy expenditures. Adleman's recent experimental demonstration of a DNA computation, using an entirely different approach, has led to a wealth of ideas for how to build DNA-based computers in the laboratory, whose energy efficiency, information density, and parallelism may have potential to surpass conventional electronic computers for some purposes. In this thesis, I examine one mechanism used in all designs for DNA-based computers, the self-assembly of DNA by hybridization and formation of the double helix, and show that this mechanism alone can in theory perform universal computation. To do so, I borrow an important result in the mathematical theory of tiling: Wang showed how jigsaw-shaped tiles can be designed to simulate the operation of any Turing Machine. I propose...
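The tiling result can be illustrated with a toy tile set (a hypothetical encoding, not taken from the thesis) that assembles the XOR cellular automaton: each tile matches the two exposed edges above it and presents their XOR below, so the growing aggregate computes a Pascal's-triangle-mod-2 (Sierpinski) pattern. Wang's construction generalizes exactly this edge-matching mechanism to arbitrary Turing machines.

```python
from itertools import product

# Hypothetical tile set for the XOR rule: a tile is keyed by its two input
# edges (values inherited from the row above) and exposes one output edge.
TILES = {(a, b): a ^ b for a, b in product((0, 1), repeat=2)}

def assemble(top_row, rows):
    """Grow an assembly row by row. At each site exactly one tile matches
    the pair of exposed edges above it, so assembly is deterministic and
    the aggregate records the XOR cellular automaton's space-time history."""
    grid = [list(top_row)]
    for _ in range(rows):
        prev = grid[-1]
        grid.append([TILES[(prev[i], prev[i + 1])] for i in range(len(prev) - 1)])
    return grid

for row in assemble([1, 0, 0, 0], 3):
    print(row)
```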
Improving Memory Hierarchy Performance for Irregular Applications Using Data and Computation Reorderings
 International Journal of Parallel Programming
, 2001
"... The performance of irregular applications on modern computer systems is hurt by the wide gap between CPU and memory speeds because these applications typically underutilize multilevel memory hierarchies, which help hide this gap. This paper investigates using data and computation reorderings to i ..."
Abstract

Cited by 89 (2 self)
The performance of irregular applications on modern computer systems is hurt by the wide gap between CPU and memory speeds because these applications typically underutilize multilevel memory hierarchies, which help hide this gap. This paper investigates using data and computation reorderings to improve memory hierarchy utilization for irregular applications. We evaluate the impact of reordering on data reuse at different levels in the memory hierarchy. We focus on coordinated data and computation reordering based on space-filling curves, and we introduce a new architecture-independent multilevel blocking strategy for irregular applications. For two particle codes we studied, the most effective reorderings reduced overall execution time by a factor of two and four, respectively. Preliminary experience with a scatter benchmark derived from a large unstructured mesh application showed that careful data and computation ordering reduced primary cache misses by a factor of two compared to a random ordering.
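One concrete space-filling-curve reordering is to sort particles by their Morton (Z-order) key, obtained by interleaving coordinate bits; nearby keys then tend to mean nearby points, so a linear sweep over the reordered data revisits the same cache lines. (The abstract does not say which curve the paper prefers, so take Morton order as an illustrative choice.)

```python
def morton(x, y, bits=16):
    """Interleave the bits of non-negative ints (x, y) into a Z-order key.
    Sorting points by this key lays them out along a space-filling curve,
    improving spatial locality of a subsequent linear traversal."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)        # x bits go to even positions
        key |= ((y >> i) & 1) << (2 * i + 1)    # y bits go to odd positions
    return key

# Data reordering: store particles in curve order before the compute loop.
particles = [(3, 1), (0, 0), (1, 2), (2, 2)]
reordered = sorted(particles, key=lambda p: morton(*p))
print(reordered)
```

Computation reordering is the complementary step: iterate over interactions in the same curve order so both the data layout and the access sequence are coordinated.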
Universal computation via self-assembly of DNA: Some theory and experiments
 In DNA Based Computers II, volume 44 of DIMACS
, 1996
"... In this paper we examine the computational capabilities inherent inthehybridization of DNA molecules. First we consider theoretical models, and show that the selfassembly of oligonucleotides into linear duplex DNA can only generate sets of sequences equivalent to regular languages. If branched DNA ..."
Abstract

Cited by 88 (11 self)
In this paper we examine the computational capabilities inherent in the hybridization of DNA molecules. First we consider theoretical models, and show that the self-assembly of oligonucleotides into linear duplex DNA can only generate sets of sequences equivalent to regular languages. If branched DNA is used for self-assembly of dendrimer structures, only sets of sequences equivalent to context-free languages can be achieved. In contrast, the self-assembly of double-crossover molecules into two-dimensional sheets or three-dimensional solids is theoretically capable of universal computation. The proof relies on a very direct simulation of a universal class of cellular automata. In the second part of this paper, we present results from preliminary experiments which investigate the critical computational step in a two-dimensional self-assembly process.
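The regular-language claim for linear assembly is easy to see in code: a linear tile can be modeled as a (left sticky end, symbol, right sticky end) triple, so complete assemblies are exactly the accepting walks of a finite automaton over sticky-end states. The toy tile set below is my own example (not from the paper) and generates the regular language a b* c.

```python
# Hypothetical linear tile set: (left sticky end, emitted symbol, right end).
# "L" and "R" play the role of start and terminating caps.
TILES = [("L", "a", "s1"), ("s1", "b", "s1"), ("s1", "c", "R")]

def assemblies(max_len):
    """Enumerate every complete assembly (cap L ... cap R) of at most
    max_len tiles by walking the sticky-end graph; the emitted strings are
    exactly the words of a regular language."""
    out = []

    def grow(end, word):
        if len(word) > max_len:
            return
        if end == "R":                      # terminating cap: assembly complete
            out.append("".join(word))
            return
        for left, sym, right in TILES:      # any tile whose left end matches
            if left == end:
                grow(right, word + [sym])

    grow("L", [])
    return out

print(sorted(assemblies(3)))
```

Branched (dendrimer) assembly corresponds in the same way to context-free derivations, and two-dimensional double-crossover assembly escapes this hierarchy entirely, which is the paper's universality result.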
Rate-Controlled Static-Priority Queueing
 In Proc. IEEE Infocom '93
, 1993
"... We propose a new service discipline, called the RateControlled StaticPriority (RCSP) queueing discipline, that can provide throughput, delay, delay jitter, and loss free guarantees in a connectionoriented packetswitching network. Previously proposed solutions are based on either a timeframing s ..."
Abstract

Cited by 88 (2 self)
We propose a new service discipline, called the Rate-Controlled Static-Priority (RCSP) queueing discipline, that can provide throughput, delay, delay-jitter, and loss-free guarantees in a connection-oriented packet-switching network. Previously proposed solutions are based on either a time-framing strategy or a sorted priority queue mechanism. Time-framing schemes suffer from the dependencies that they introduce between the queueing delay and the granularity of bandwidth allocation; a sorted priority queue may be difficult to implement. The proposed RCSP queueing discipline avoids both time-framing and sorted priority queues; it achieves flexibility in the allocation of delay and bandwidth, as well as simplicity of implementation. The key idea is to separate rate-control and delay-control functions in the design of the server. Applying this separation of functions will result in a class of service disciplines, of which RCSP is an instance. This research was supported by the National Sci...
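The two-stage structure can be sketched as an event-driven simulation: a per-connection regulator holds each packet until it becomes eligible (previous packet's eligibility plus 1/rate), and a static-priority scheduler then always serves the highest-priority eligible packet, FIFO within a class. The fixed unit service time and the simple rate model are assumptions of this sketch, not the paper's exact server.

```python
import heapq

def rcsp(packets, rates):
    """RCSP sketch. packets: (arrival, conn, priority), priority 1 highest;
    rates: per-connection rate in packets per unit time. Returns packets
    in service order as (conn, arrival). Service time is fixed at 1 unit."""
    # Stage 1, rate control: compute per-packet eligibility times.
    last, eligible = {}, []
    for seq, (arr, conn, prio) in enumerate(sorted(packets)):
        t = arr if conn not in last else max(arr, last[conn] + 1.0 / rates[conn])
        last[conn] = t
        eligible.append((t, prio, seq, conn, arr))
    eligible.sort()
    # Stage 2, static priority: serve best eligible packet, FIFO in class.
    ready, order, clock, i = [], [], 0.0, 0
    while i < len(eligible) or ready:
        if not ready:                        # idle: jump to next eligibility
            clock = max(clock, eligible[i][0])
        while i < len(eligible) and eligible[i][0] <= clock:
            heapq.heappush(ready, eligible[i][1:] + (eligible[i][0],))
            i += 1
        prio, seq, conn, arr, t = heapq.heappop(ready)
        order.append((conn, arr))
        clock += 1.0                         # fixed packet service time
    return order
```

Note how the regulator alone delays the second packet of connection A past its arrival, which is what decouples delay bounds from bandwidth granularity.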
Nearest-neighbor searching and metric space dimensions
 In Nearest-Neighbor Methods for Learning and Vision: Theory and Practice
, 2006
"... Given a set S of n sites (points), and a distance measure d, the nearest neighbor searching problem is to build a data structure so that given a query point q, the site nearest to q can be found quickly. This paper gives a data structure for this problem; the data structure is built using the distan ..."
Abstract

Cited by 87 (0 self)
Given a set S of n sites (points), and a distance measure d, the nearest neighbor searching problem is to build a data structure so that given a query point q, the site nearest to q can be found quickly. This paper gives a data structure for this problem; the data structure is built using the distance function as a “black box”. The structure is able to speed up nearest neighbor searching in a variety of settings, for example: points in low-dimensional or structured Euclidean space, strings under Hamming and edit distance, and bit-vector data from an OCR application. The data structures are observed to need linear space, with a modest constant factor. The preprocessing time needed per site is observed to match the query time. The data structure can be viewed as an application of a “kd-tree” approach in the metric space setting, using Voronoi regions of a subset in place of axis-aligned boxes.
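The black-box-distance idea can be shown with the simplest pivot index: precompute d(p, s) for a single pivot p, and prune with the triangle-inequality lower bound d(q, s) >= |d(p, s) - d(p, q)|. The paper's structure partitions space by Voronoi regions of a whole subset of sites; this single-pivot version only illustrates the pruning principle it rests on.

```python
import math

def build_index(sites, dist):
    """Precompute distances from one pivot to every site; dist is an
    arbitrary metric used purely as a black box."""
    pivot = sites[0]
    return pivot, [(dist(pivot, s), s) for s in sites]

def nearest(index, dist, q):
    """Scan sites in order of |d(p,s) - d(p,q)|. The triangle inequality
    gives d(q,s) >= |d(p,s) - d(p,q)|, so once that lower bound exceeds the
    best distance found, no remaining site can win."""
    pivot, table = index
    dq = dist(pivot, q)
    best, best_d = None, math.inf
    for dp, s in sorted(table, key=lambda e: abs(e[0] - dq)):
        if abs(dp - dq) >= best_d:
            break                      # prune the rest of the scan
        d = dist(q, s)
        if d < best_d:
            best, best_d = s, d
    return best, best_d

sites = [0.0, 3.0, 7.0, 10.0]
dist = lambda a, b: abs(a - b)
print(nearest(build_index(sites, dist), dist, 6.0))
```

Because only `dist` is ever called on the data, the same index works unchanged for strings under edit distance or bit vectors under Hamming distance.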
The Tiger Video Fileserver
, 1996
"... Tiger is a distributed, faulttolerant realtime fileserver. It provides data streams at a constant, guaranteed rate to a large number of clients, in addition to supporting more traditional filesystem operations. It is intended to be the basis for multimedia (video on demand) fileservers, but may al ..."
Abstract

Cited by 80 (6 self)
Tiger is a distributed, fault-tolerant, real-time fileserver. It provides data streams at a constant, guaranteed rate to a large number of clients, in addition to supporting more traditional filesystem operations. It is intended to be the basis for multimedia (video-on-demand) fileservers, but may also be used in other applications needing constant-rate data delivery. The fundamental problem addressed by the Tiger design is that of efficiently balancing user load against limited disk, network and I/O bus resources. Tiger accomplishes this balancing by striping file data across all disks and all computers in the (distributed) system, and then allocating data streams in a schedule that rotates across the disks. This paper describes the Tiger design and an implementation that runs on a collection of personal computers connected by an ATM switch.
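The striping-plus-rotating-schedule idea in miniature (function names and the slot model below are mine, not Tiger's): blocks go round-robin across disks, and each admitted stream is pinned to a slot offset so that at any instant streams with distinct offsets read from distinct disks.

```python
def stripe_layout(num_blocks, num_disks):
    """Round-robin striping: block b of a file lives on disk b mod D, so a
    sequential read touches every disk equally."""
    return [b % num_disks for b in range(num_blocks)]

def rotating_schedule(streams, num_disks, slots):
    """Sketch of a rotating schedule: a stream admitted at slot offset o
    reads from disk (o + t) mod D at time t. Streams with distinct offsets
    therefore never contend for the same disk in the same slot."""
    return {s: [(off + t) % num_disks for t in range(slots)]
            for s, off in streams.items()}

print(stripe_layout(6, 3))
print(rotating_schedule({"s0": 0, "s1": 1}, 3, 3))
```

Admission control then reduces to finding a free offset, which is how the design balances user load against fixed disk and bus bandwidth.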