Results 1–10 of 294
Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions
, 2008
Cited by 235 (4 self)
Abstract:
In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem. The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. The problem is of significant interest in a wide variety of areas.
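The preprocess-then-query pattern the abstract describes can be sketched with a minimal locality-sensitive-hashing index (random hyperplane signatures). This is an illustrative toy, not the article's specific construction; the function names and parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_lsh_index(points, n_bits=16):
    # hash each point to a bit signature via random hyperplanes;
    # nearby points tend to land in the same bucket
    planes = rng.normal(size=(n_bits, points.shape[1]))
    buckets = {}
    for i, sig in enumerate(points @ planes.T > 0):
        buckets.setdefault(sig.tobytes(), []).append(i)
    return planes, buckets

def query(points, planes, buckets, q):
    # probe the query's bucket; fall back to brute force on an empty bucket
    sig = (q @ planes.T > 0).tobytes()
    cand = list(buckets.get(sig, range(len(points))))
    dists = np.linalg.norm(points[cand] - q, axis=1)
    return cand[int(np.argmin(dists))]

pts = rng.normal(size=(1000, 32))
planes, buckets = build_lsh_index(pts)
nearest = query(pts, planes, buckets, pts[42])  # a point is its own nearest neighbor
```

Only the query's own bucket is scanned, which is what makes the query fast at the cost of approximation.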
Network Information Flow with Correlated Sources
 IEEE Trans. Inform. Theory
, 2006
Cited by 64 (9 self)
Abstract:
Consider the following network communication setup, originating in a sensor networking application we refer to as the "sensor reachback" problem. We have a directed graph G = (V, E), where V = {v0, v1, ..., vn} and E ⊆ V × V. If (vi, vj) ∈ E, then node i can send messages to node j over a discrete memoryless channel (Xij, pij(y|x), Yij) of capacity Cij. The channels are independent. Each node vi gets to observe a source of information Ui (i = 0...M), with joint distribution p(U0, U1, ..., UM). Our goal is to solve an incast problem in G: nodes exchange messages with their neighbors, and after a finite number of communication rounds, one of the M + 1 nodes (v0 by convention) must have received enough information to reproduce the entire field of observations (U0, U1, ..., UM) with arbitrarily small probability of error. In this paper, we prove that such perfect reconstruction is possible if and only if H(U_S | U_{S^c}) < ∑_{i∈S, j∈S^c} C_ij for all S ⊆ {0...M}, S ≠ ∅, 0 ∈ S^c. Close examination of our achievability proof reveals that in this setup, Shannon information behaves as a classical network flow, identical in nature to the flow of water in pipes. This "information as flow" view provides an algorithmic interpretation for our results, among which we …
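The cut condition in the abstract can be checked mechanically on a toy network: for every nonempty set S of source nodes not containing the collector v0, the conditional entropy H(U_S | U_{S^c}) must be strictly below the total capacity crossing from S to its complement. A small sketch, with helper names invented for the example:

```python
from itertools import combinations
import math

def H(pmf):
    # Shannon entropy (bits) of a distribution given as {outcome: prob}
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, keep):
    out = {}
    for x, p in pmf.items():
        k = tuple(x[i] for i in keep)
        out[k] = out.get(k, 0.0) + p
    return out

def reachback_feasible(pmf, n, cap):
    # cap[(i, j)] = capacity C_ij of the channel from node i to node j;
    # node 0 is the collector v0, so 0 always lies in the complement S^c
    for r in range(1, n):
        for S in combinations(range(1, n), r):
            Sc = set(range(n)) - set(S)
            h = H(pmf) - H(marginal(pmf, sorted(Sc)))  # H(U_S | U_{S^c})
            cut = sum(cap.get((i, j), 0.0) for i in S for j in Sc)
            if h >= cut:
                return False
    return True

# toy field: U1 = U2 = one fair bit, U0 deterministic; keys are (u0, u1, u2)
pmf = {(0, 0, 0): 0.5, (0, 1, 1): 0.5}
```

With two direct links of capacity 0.6 each into v0 the total cut (1.2) exceeds the joint entropy (1 bit) and every singleton cut is satisfied, so reconstruction is feasible; at 0.4 each it is not.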
Efficient algorithms for web services selection with end-to-end QoS constraints
 ACM Transactions on the Web (TWEB)
Cited by 62 (0 self)
Abstract:
Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standards-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multi-dimension multi-choice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multi-constraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article, and their performance is studied by simulations. We also compare the pros and cons of the two models.
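The selection problem can be illustrated by its simplest special case: one candidate must be chosen per service component, maximizing total utility under a single end-to-end latency budget. The sketch below is an exact dynamic program for that single-constraint multi-choice knapsack; the article's MMKP has several QoS dimensions and is attacked with heuristics, and all names here are invented for the example (it also assumes at least one feasible selection exists).

```python
def select_services(components, budget):
    # components[i] = candidate list [(utility, latency), ...] for service i
    NEG = float("-inf")
    best = [NEG] * (budget + 1)
    best[0] = 0.0
    choice = [[None] * (budget + 1) for _ in components]
    for ci, cands in enumerate(components):
        nxt = [NEG] * (budget + 1)
        for used in range(budget + 1):
            if best[used] == NEG:
                continue
            for k, (u, lat) in enumerate(cands):
                t = used + lat
                if t <= budget and best[used] + u > nxt[t]:
                    nxt[t] = best[used] + u
                    choice[ci][t] = (k, used)  # remember pick and prior budget
        best = nxt
    # recover one optimal selection by walking the choice table backwards
    t = max(range(budget + 1), key=lambda i: best[i])
    picks = []
    for ci in reversed(range(len(components))):
        k, used = choice[ci][t]
        picks.append(k)
        t = used
    return picks[::-1], max(best)

picks, util = select_services([[(5, 3), (3, 1)], [(4, 2), (2, 1)]], budget=4)
```

Here the cheap first candidate plus the high-utility second candidate fit the budget of 4 for a total utility of 7.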
Evaluation of Machine Translation and its Evaluation
 In Proceedings of MT Summit IX
, 2003
Cited by 51 (4 self)
Abstract:
Evaluation of MT evaluation measures is limited by inconsistent human judgment data. Nonetheless, machine translation can be evaluated using the well-known measures precision, recall, and the F-measure. The F-measure has significantly higher correlation with human judgments than recently proposed alternatives. More importantly, the standard measures have an intuitive graphical interpretation, which can facilitate insight into how MT systems might be improved. The relevant software is publicly available from http://nlp.cs.nyu.edu/GTM/.
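The three standard measures are easy to state concretely. The sketch below computes unigram precision, recall, and the balanced F-measure over bag-of-words overlap; it is a simplification of the GTM measure, which additionally rewards longer contiguous matches.

```python
from collections import Counter

def unigram_f(candidate, reference):
    # clipped unigram overlap between the candidate and reference translations
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    prec = overlap / sum(c.values())   # matched fraction of candidate tokens
    rec = overlap / sum(r.values())    # matched fraction of reference tokens
    return prec, rec, 2 * prec * rec / (prec + rec)

p, r, f = unigram_f("the cat sat", "the cat sat down")
```

For this pair, precision is 1.0 (every candidate token matches) while recall is 0.75 (the reference word "down" is missed).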
On the Maximum Stable Throughput Problem in Random Networks with Directional Antennas
 In Proc. ACM MobiHoc
, 2003
Cited by 51 (8 self)
Abstract:
We consider the problem of determining rates of growth for the maximum stable throughput achievable in dense wireless networks. We formulate this problem as one of finding maximum flows on random unit-disk graphs. Equipped with the max-flow/min-cut theorem as our basic analysis tool, we obtain rates of growth under three models of communication: (a) omnidirectional transmissions; (b) "simple" directional transmissions, in which sending nodes generate a single beam aimed at a particular receiver; and (c) "complex" directional transmissions, in which sending nodes generate multiple beams aimed at multiple receivers. Our main finding is that an increase of 54 in maximum stable throughput is all that can be achieved by allowing arbitrarily complex signal processing (in the form of generation of directed beams) at the transmitters and receivers. We conclude, therefore, that neither directional antennas nor the ability to communicate simultaneously with multiple nodes can be expected in practice to effectively circumvent the constriction on capacity in dense networks that results from the geometric layout of nodes in space.
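The basic machinery behind the analysis — a random unit-disk graph plus a max-flow computation — can be sketched directly. This is a minimal Edmonds-Karp flow on a randomly generated geometric graph, not the paper's asymptotic analysis; function names are invented for the example.

```python
import random
from collections import deque

def unit_disk_graph(n, radius, seed=1):
    # n nodes uniform in the unit square; directed unit-capacity links
    # between every pair of nodes within the given radius
    rnd = random.Random(seed)
    pts = [(rnd.random(), rnd.random()) for _ in range(n)]
    cap = {}
    for i in range(n):
        for j in range(n):
            dx, dy = pts[i][0] - pts[j][0], pts[i][1] - pts[j][1]
            if i != j and dx * dx + dy * dy <= radius * radius:
                cap[(i, j)] = 1.0
    return pts, cap

def max_flow(n, cap, s, t):
    # Edmonds-Karp: repeatedly augment along shortest residual paths
    res = dict(cap)
    flow = 0.0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and res.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(res[e] for e in path)  # bottleneck on the augmenting path
        for (u, v) in path:
            res[(u, v)] -= b
            res[(v, u)] = res.get((v, u), 0) + b
        flow += b

pts, cap = unit_disk_graph(30, 0.3)
throughput = max_flow(30, cap, 0, 1)  # s-t flow equals the min cut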
Maximizing Static Network Lifetime of Wireless Broadcast Ad-hoc Networks
Cited by 46 (3 self)
Abstract:
We investigate the problem of energy-efficient broadcast routing over a static wireless ad-hoc network, where host mobility is not involved. We define the lifetime of a network as the duration of time until the first node failure due to battery depletion. We provide a globally optimal solution to the problem of maximizing static network lifetime through a graph-theoretic approach. We also provide extensive comparative simulation studies.
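The lifetime definition above is simple to evaluate for a fixed broadcast tree: each transmitting node's power is set by its farthest child (one transmission reaches all closer children as well, the wireless broadcast advantage), and the tree's lifetime is the minimum battery/power ratio over transmitters. A sketch, with the path-loss exponent alpha and all names assumed for illustration:

```python
def tree_lifetime(children, battery, dist, alpha=2.0):
    # lifetime = time until the first transmitting node empties its battery;
    # a transmitter's power is dictated by its farthest child: power = d**alpha
    life = float("inf")
    for u, kids in children.items():
        if kids:
            power = max(dist[(u, k)] for k in kids) ** alpha
            life = min(life, battery[u] / power)
    return life

tree = {0: [1, 2], 1: [3], 2: [], 3: []}
batt = {0: 100.0, 1: 100.0, 2: 100.0, 3: 100.0}
d = {(0, 1): 2.0, (0, 2): 3.0, (1, 3): 1.0}
life = tree_lifetime(tree, batt, d)
```

Node 0 must reach its farthest child at distance 3 (power 9), so it dies first and caps the lifetime at 100/9; maximizing lifetime means searching over trees to raise this minimum.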
Out-of-core algorithms for scientific visualization and computer graphics
 In Visualization '02 Course Notes
, 2002
Cited by 46 (11 self)
Abstract:
Recently, several external-memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, which penalizes algorithms that do not optimize for coherence of access. For these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This paper surveys fundamental issues, current problems, and unresolved questions, and aims to provide graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own.
Keywords: out-of-core algorithms, scientific visualization, computer graphics, interactive rendering, volume rendering, surface simplification.
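The core out-of-core idea — stream the dataset through memory in fixed-size chunks instead of loading it whole — can be shown in a few lines. This toy computes the mean of a flat binary file of float64 values; it is a generic illustration, not a technique from the survey.

```python
import os
import struct
import tempfile

def chunked_mean(path, chunk_items=1 << 16):
    # stream little-endian float64s one chunk at a time, so memory
    # stays O(chunk) no matter how large the file on disk is
    total, count = 0.0, 0
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_items * 8)
            if not buf:
                break
            vals = struct.unpack(f"<{len(buf) // 8}d", buf)
            total += sum(vals)
            count += len(vals)
    return total / count

# write a tiny demo file, then stream it back two values per chunk
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(struct.pack("<4d", 1.0, 2.0, 3.0, 4.0))
mean = chunked_mean(path, chunk_items=2)
os.remove(path)
```

Real out-of-core visualization systems add the layout work the survey discusses (reordering data on disk so each chunk is accessed coherently), but the access pattern is the same.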
Exact and approximate algorithms for the extension of embedded processor instruction sets
 IEEE Trans. on CAD of Integrated Circuits and Systems
Cited by 46 (17 self)
Abstract:
In embedded computing, cost, power, and performance constraints call for the design of specialized processors rather than for the use of existing off-the-shelf solutions. While the design of these application-specific CPUs could be tackled from scratch, a cheaper and more effective option is that of extending existing processors and toolchains. Extensibility is indeed a feature now offered in real designs, e.g., by processors such as Tensilica Xtensa [T. R. Halfhill, Microprocess…]
A Novel Coevolutionary Approach to Automatic Software Bug Fixing
 In Proceedings of the IEEE Congress on Evolutionary Computation (CEC '08)
, 2008
Cited by 43 (8 self)
Abstract:
… expensive, and that has led to investigation of how to automate them. In particular, software testing can take up to half of the resources of the development of new software. Although there has been a lot of work on automating the testing phase, fixing a bug after its presence has been discovered is still a duty of the programmers. In this paper, we propose an evolutionary approach to automate the task of fixing bugs. This novel evolutionary approach is based on coevolution, in which programs and test cases coevolve, influencing each other with the aim of fixing the bugs of the programs. This competitive coevolution is similar to what happens in nature between predators and prey. The user needs only to provide a buggy program and a formal specification of it. No other information is required. Hence, the approach may work for any implementable software. We show some preliminary experiments in which bugs in an implementation of a sorting algorithm are automatically fixed.
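The competitive loop — tests survive by exposing failures, program variants survive by passing tests — can be caricatured in a few lines. Below, the "program" is an insertion sort parameterised by its comparison operator, the oracle is Python's sorted() standing in for a formal specification, and everything here is a toy invented for the example (the paper evolves genetic-programming trees, not a single operator).

```python
import random

rnd = random.Random(0)
OPS = {"<": lambda a, b: a < b, ">": lambda a, b: a > b,
       ">=": lambda a, b: a >= b, "==": lambda a, b: a == b}

def make_sorter(op_name):
    # candidate "program": insertion sort with a mutable comparison
    def sorter(xs):
        xs = list(xs)
        for i in range(1, len(xs)):
            j = i
            while j > 0 and OPS[op_name](xs[j], xs[j - 1]):
                xs[j], xs[j - 1] = xs[j - 1], xs[j]
                j -= 1
        return xs
    return sorter

def coevolve(rounds=20):
    prog, tests = ">", [[3, 1, 2]]  # buggy seed: sorts descending
    for _ in range(rounds):
        # test population: keep inputs the current program still fails,
        # then add a fresh random input
        tests = [t for t in tests if make_sorter(prog)(t) != sorted(t)]
        tests.append([rnd.randint(0, 9) for _ in range(5)])
        # program population: score every variant, keep the fittest
        scores = {op: sum(make_sorter(op)(t) == sorted(t) for t in tests)
                  for op in OPS}
        prog = max(scores, key=scores.get)
        if scores[prog] == len(tests):
            break
    return prog

fixed = coevolve()
```

The failing input [3, 1, 2] kills every incorrect variant, so the correct "<" comparison wins in the first round.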
SINGLE MACHINE SCHEDULING WITH RELEASE DATES
, 2002
Cited by 38 (12 self)
Abstract:
We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that an O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concept of common and independent α-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least e/(e − 1) ≈ 1.5819. Both algorithms may be derandomized; their deterministic versions run in O(n²) time. The randomized algorithms also apply to the online setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards.
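The greedy-plus-α-point pipeline can be sketched for a fixed α: build the preemptive schedule that always runs, among released unfinished jobs, the one with the largest weight-to-processing-time ratio, record the time at which an α-fraction of each job completes (its α-point), then list-schedule the jobs nonpreemptively in α-point order. This is an illustrative reading of the scheme with a deterministic α, not the paper's randomized analysis; all names are invented for the example.

```python
def alpha_point_schedule(jobs, alpha=0.5):
    # jobs: list of (release, processing_time, weight)
    n = len(jobs)
    rem = [p for (_, p, _) in jobs]
    apoint = [None] * n
    t = min(r for (r, _, _) in jobs)
    finished = 0
    while finished < n:
        avail = [j for j in range(n) if jobs[j][0] <= t and rem[j] > 0]
        if not avail:  # idle until the next release
            t = min(jobs[j][0] for j in range(n) if rem[j] > 0)
            continue
        j = max(avail, key=lambda k: jobs[k][2] / jobs[k][1])  # greedy ratio rule
        nxt = [jobs[k][0] for k in range(n) if rem[k] > 0 and jobs[k][0] > t]
        step = min([rem[j]] + [r - t for r in nxt])  # run j until done or a release
        done_before = jobs[j][1] - rem[j]
        if apoint[j] is None and done_before + step >= alpha * jobs[j][1]:
            apoint[j] = t + (alpha * jobs[j][1] - done_before)
        rem[j] -= step
        t += step
        if rem[j] == 0:
            finished += 1
    # nonpreemptive list schedule in alpha-point order
    order = sorted(range(n), key=lambda k: apoint[k])
    t, completion = 0, [0] * n
    for j in order:
        t = max(t, jobs[j][0]) + jobs[j][1]
        completion[j] = t
    return order, sum(jobs[j][2] * completion[j] for j in range(n))

order, obj = alpha_point_schedule([(0, 2, 1), (1, 1, 10)])
```

On this two-job instance the heavy late job preempts the light one in the LP-style schedule, and the α-point ordering determines the final nonpreemptive sequence.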