Results 1–10 of 91
Three-dimensional object recognition from single two-dimensional images
 Artificial Intelligence
, 1987
Abstract

Cited by 399 (7 self)
A computer vision system has been implemented that can recognize three-dimensional objects from unknown viewpoints in single gray-scale images. Unlike most other approaches, the recognition is accomplished without any attempt to reconstruct depth information bottom-up from the visual input. Instead, three other mechanisms are used that can bridge the gap between the two-dimensional image and knowledge of three-dimensional objects. First, a process of perceptual organization is used to form groupings and structures in the image that are likely to be invariant over a wide range of viewpoints. Second, a probabilistic ranking method is used to reduce the size of the search space during model-based matching. Finally, a process of spatial correspondence brings the projections of three-dimensional models into direct correspondence with the image by solving for unknown viewpoint and model parameters. A high level of robustness in the presence of occlusion and missing data can be achieved through full application of a viewpoint consistency constraint. It is argued that similar mechanisms and constraints form the basis for recognition in human vision. This paper has been published in Artificial Intelligence, 31, 3 (March 1987), pp. 355–395.
A linear-time probabilistic counting algorithm for database applications
 ACM Transactions on Database Systems
, 1990
Abstract

Cited by 94 (5 self)
We present a probabilistic algorithm for counting the number of unique values in the presence of duplicates. This algorithm has O(q) time complexity, where q is the number of values including duplicates, and produces an estimation with an arbitrary accuracy prespecified by the user using only a small amount of space. Traditionally, accurate counts of unique values were obtained by sorting, which has O(q log q) time complexity. Our technique, called linear counting, is based on hashing. We present a comprehensive theoretical and experimental analysis of linear counting. The analysis reveals an interesting result: A load factor (number of unique values/hash table size) much larger than 1.0 (e.g., 12) can be used for accurate estimation (e.g., 1% error). We present this technique with two important applications to database problems: namely, (1) obtaining the column cardinality (the number of unique values in a column of a relation) and (2) obtaining the join selectivity (the number of unique values in the join column resulting from an unconditional join divided by the number of unique join column values in the relation to be joined). These two parameters are important statistics that are used in relational query optimization and physical database design.
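The linear-counting estimator described in this abstract can be sketched in a few lines: hash each value into a bitmap of m buckets, then estimate the distinct count as n̂ = −m·ln(V), where V is the fraction of buckets left empty. The hash function and bitmap size below are illustrative choices, not from the paper.

```python
import hashlib
import math

def linear_counting_estimate(values, m=1024):
    """Estimate the number of distinct values via linear counting:
    mark hashed buckets, then use n_hat = -m * ln(fraction empty)."""
    bitmap = [False] * m
    for v in values:
        bucket = int(hashlib.md5(str(v).encode()).hexdigest(), 16) % m
        bitmap[bucket] = True
    empty = bitmap.count(False)
    if empty == 0:
        raise ValueError("bitmap saturated; increase m")
    return -m * math.log(empty / m)

# 500 distinct values, each appearing twice (1000 values total)
data = list(range(500)) * 2
est = linear_counting_estimate(data, m=1024)
```

Note the load factor here is about 0.5; the paper's point is that even load factors well above 1.0 still give accurate estimates.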
Efficient Algorithms for Approximating Polygonal Chains
Abstract

Cited by 39 (2 self)
We consider the problem of approximating a polygonal chain C by another polygonal chain C′ whose vertices are constrained to be a subset of the set of vertices of C. The goal is to minimize the number of vertices needed in the approximation C′. Based on a framework introduced by Imai and Iri [25], we define an error criterion for measuring the quality of an approximation. We consider two problems. (1) Given a polygonal chain C and a parameter ε ≥ 0, compute an approximation of C, among all approximations whose error is at most ε, that has the smallest number of vertices. We present an O(n^{4/3+δ})-time algorithm to solve this problem, for any δ > 0; the constant of proportionality in the running time depends on δ. (2) Given a polygonal chain C and an integer k, compute an approximation of C with at most k vertices whose error is the smallest among all approximations with at most k vertices. We present a simple randomized algorithm, with expected running time O(n^{4/3+δ}), to solve this problem.
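The Imai–Iri framework that this abstract builds on can be illustrated naively (this is not the paper's O(n^{4/3+δ}) algorithm): under a common tolerance-zone error criterion, segment (i, j) is a valid shortcut when every skipped vertex lies within ε of it, and a breadth-first search over valid shortcuts yields the approximation with the fewest vertices. The error criterion and the O(n^3) brute-force construction are assumptions for illustration.

```python
import math
from collections import deque

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    if L2 == 0.0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def min_vertex_approximation(chain, eps):
    """Min-# approximation in the Imai-Iri style: build the graph of
    valid shortcuts, then BFS from first to last vertex for the
    approximation using the fewest vertices."""
    n = len(chain)
    valid = [[all(point_segment_dist(chain[k], chain[i], chain[j]) <= eps
                  for k in range(i + 1, j))
              for j in range(n)] for i in range(n)]
    prev = [-1] * n
    seen = [False] * n; seen[0] = True
    dq = deque([0])
    while dq:
        i = dq.popleft()
        for j in range(i + 1, n):
            if valid[i][j] and not seen[j]:
                seen[j] = True; prev[j] = i; dq.append(j)
    path, j = [], n - 1            # reconstruct kept vertex indices
    while j != -1:
        path.append(j); j = prev[j]
    return path[::-1]

chain = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (4, 2)]
approx = min_vertex_approximation(chain, eps=0.2)  # indices of kept vertices
```

Here the two nearly collinear interior vertices can be skipped, but the final bend cannot, so the approximation keeps vertices 0, 3, and 4.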
Efficient Piecewise-Linear Function Approximation Using the Uniform Metric
 Discrete & Computational Geometry
, 1994
Abstract

Cited by 39 (0 self)
We give an O(n log n)-time method for finding a best k-link piecewise-linear function approximating an n-point planar data set using the well-known uniform metric to measure the error, ε ≥ 0, of the approximation. Our method is based upon new characterizations of such functions, which we exploit to design an efficient algorithm using a plane sweep in "ε space" followed by several applications of the parametric searching technique. The previous best running time for this problem was O(n^2). 1 Introduction Approximating a set S = {(x_1, y_1), (x_2, y_2), …, (x_n, y_n)} of points in the plane by a function is a classic problem in applied mathematics. The general goals in this area of research are to find a function F belonging to a class of functions F such that each F ∈ F is simple to describe, represent, and compute and such that the chosen F approximates S well. For example, one may desire that F be the class of linear or piecewise-linear functions, and, for any parti...
Randomized Competitive Algorithms for the List Update Problem
 Algorithmica
, 1992
Abstract

Cited by 39 (2 self)
We prove upper and lower bounds on the competitiveness of randomized algorithms for the list update problem of Sleator and Tarjan. We give a simple and elegant randomized algorithm that is more competitive than the best previous randomized algorithm due to Irani. Our algorithm uses randomness only during an initialization phase, and from then on runs completely deterministically. It is the first randomized competitive algorithm with this property to beat the deterministic lower bound. We generalize our approach to a model in which access costs are fixed but update costs are scaled by an arbitrary constant d. We prove lower bounds for deterministic list update algorithms and for randomized algorithms against oblivious and adaptive online adversaries. In particular, we show that for this problem adaptive online and adaptive offline adversaries are equally powerful. 1 Introduction Recently much attention has been given to competitive analysis of online algorithms [7, 20, 22, 25]. Ro...
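The "randomness only during initialization" idea can be illustrated with the BIT algorithm from this line of work on list update: each item receives one random bit up front; every access flips the item's bit, and the item is moved to the front exactly when the flipped bit is 1. This sketch (class shape, cost accounting as 1-based position) is an illustration, not the paper's exact presentation.

```python
import random

class BitListUpdate:
    """BIT-style list update: one random bit per item at initialization,
    fully deterministic afterwards.  On access, flip the item's bit and
    move the item to the front iff the flipped bit is 1."""
    def __init__(self, items, seed=0):
        rng = random.Random(seed)
        self.lst = list(items)
        self.bit = {x: rng.randrange(2) for x in items}

    def access(self, x):
        """Return the access cost (1-based position), then update."""
        pos = self.lst.index(x) + 1
        self.bit[x] ^= 1
        if self.bit[x] == 1:
            self.lst.remove(x)
            self.lst.insert(0, x)
        return pos

l = BitListUpdate(["a", "b", "c", "d"])
costs = [l.access("d") for _ in range(3)]
```

Whatever the initial bit, two consecutive accesses to the same item trigger at least one move-to-front, which is what caps the cost of repeated accesses.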
Numerical Methods for Neuronal Modeling
 In Methods in Neuronal Modeling
, 1989
Abstract

Cited by 24 (1 self)
Introduction In this chapter we will discuss some practical and technical aspects of numerical methods that can be used to solve the equations that neuronal modelers frequently encounter. We will consider numerical methods for ordinary differential equations (ODEs) and for partial differential equations (PDEs) through examples. A typical case where ODEs arise in neuronal modeling is when one uses a single lumped-soma compartmental model to describe a neuron. Arguably the most famous PDE system in neuronal modeling is the phenomenological model of the squid giant axon due to Hodgkin and Huxley. The difference between ODEs and PDEs is that ODEs are equations in which the rate of change of an unknown function of a single variable is prescribed, usually the derivative with respect to time. In contrast, PDEs involve the rates of change of the solution with respect to two or more independent variables, such as time and space. The numerical methods we will discuss for both ODEs and
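As a concrete instance of the ODE case mentioned above, a single passive lumped-soma compartment obeying τ dV/dt = −(V − E_rest) + R·I can be stepped with forward Euler, the simplest method such a chapter would cover. The parameter values are illustrative, and the closed-form solution of this linear ODE is used only to check the step size.

```python
import math

def euler_membrane(v0, t_end, dt, tau=10.0, e_rest=-70.0, drive=15.0):
    """Forward Euler for the passive membrane ODE
    tau * dV/dt = -(V - e_rest) + drive  (one lumped-soma compartment).
    Parameter names and values are illustrative."""
    v = v0
    for _ in range(round(t_end / dt)):
        v += dt * (-(v - e_rest) + drive) / tau
    return v

# Closed-form solution at t = 50 starting from rest:
# V(t) = e_rest + drive * (1 - exp(-t / tau))
exact = -70.0 + 15.0 * (1.0 - math.exp(-50.0 / 10.0))
approx = euler_membrane(v0=-70.0, t_end=50.0, dt=0.1)
```

With dt = 0.1 (one-hundredth of τ) the explicit method tracks the exact exponential closely; much larger steps relative to τ would make it inaccurate or unstable, which is the kind of trade-off the chapter discusses.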
A Framework for Optimal Battery Management for Wireless Nodes
, 2002
Abstract

Cited by 19 (3 self)
The focus of this paper is to extend the lifetime of a battery-powered node in a wireless context. The lifetime of a battery depends on both the manner of discharge and the transmission power requirements. We present a framework for computing the optimal discharge strategy which maximizes the lifetime of a node by exploiting the battery characteristics and adapting to the varying power requirements for wireless operations. The complexity of the optimal computation is linear in the number of system states. However, since the number of states can be large, the optimal strategy can only be computed offline and executed via a table lookup. We present a simple discharge strategy which can be executed online without any table lookup and attains near-maximum lifetime.
A survey of Monte Carlo algorithms for maximizing the likelihood of a two-stage hierarchical model
, 2001
Abstract

Cited by 11 (4 self)
Likelihood inference with hierarchical models is often complicated by the fact that the likelihood function involves intractable integrals. Numerical integration (e.g. quadrature) is an option if the dimension of the integral is low but quickly becomes unreliable as the dimension grows. An alternative approach is to approximate the intractable integrals using Monte Carlo averages. Several different algorithms based on this idea have been proposed. In this paper we discuss the relative merits of simulated maximum likelihood, Monte Carlo EM, Monte Carlo Newton-Raphson and stochastic approximation. Key words and phrases: Efficiency, Monte Carlo EM, Monte Carlo Newton-Raphson, Rate of convergence, Simulated maximum likelihood, Stochastic approximation. All three authors partially supported by NSF Grant DMS-0072827.
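Of the four approaches surveyed, simulated maximum likelihood is the easiest to sketch: replace the intractable marginal-likelihood integral by a Monte Carlo average over draws of the random effect, reusing the same draws for every candidate parameter (common random numbers) so the simulated likelihood is smooth in θ. The toy two-stage model below, y | u ~ N(u, 1) with u ~ N(θ, 1), is an assumption chosen because its exact marginal N(θ, 2) makes the answer checkable; it is not an example from the paper.

```python
import math
import random

def make_simulated_loglik(y, n_draws=5000, seed=1):
    """Simulated log-likelihood for the toy two-stage model
    y | u ~ N(u, 1), u ~ N(theta, 1): the integral of
    f(y|u) f(u|theta) du is replaced by a Monte Carlo average over
    u = theta + z, with the draws z held fixed across theta values."""
    rng = random.Random(seed)
    zs = [rng.gauss(0.0, 1.0) for _ in range(n_draws)]  # common random numbers
    def loglik(theta):
        avg = sum(math.exp(-0.5 * (y - theta - z) ** 2) for z in zs) / n_draws
        return math.log(avg / math.sqrt(2.0 * math.pi))
    return loglik

y_obs = 1.3
loglik = make_simulated_loglik(y_obs)
grid = [g / 10.0 for g in range(-20, 41)]  # candidate theta values
theta_hat = max(grid, key=loglik)          # simulated MLE; exact MLE is y_obs
```

Without common random numbers each evaluation of the simulated likelihood would use fresh noise, and maximizing the resulting jittery surface is exactly the kind of efficiency issue the paper's comparison addresses.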
A Revised Simplex Search Procedure For Stochastic Simulation Response-Surface Optimization
 INFORMS Journal on Computing
, 2000
Abstract

Cited by 11 (1 self)
We develop a variant of the... this paper consists of a three-phase application of the NM method in which: (a) the ending values for one phase become the starting values for the next phase; (b) the size of the initial simplex (respectively, the shrink coefficient) decreases geometrically (respectively, increases linearly) over successive phases; and (c) the final estimated optimum is the best of the ending values for the three phases. To compare RSS versus the NM procedure and RS9 (a simplex search procedure recently proposed by Barton and Ivey), we summarize a simulation study based on separate factorial experiments and follow-up multiple-comparisons tests for four selected performance measures computed on each of six test problems, with three levels of problem dimensionality and noise variability used in each problem. The experimental results provide substantial evidence of RSS's improved performance with only marginally higher computational effort.
Curriculum and Course Syllabi for a High-School Program in Computer Science
 Computer Science Education
, 1999
Abstract

Cited by 10 (5 self)
The authors served on a committee that designed a high-school curriculum in computer science and supervised the preparation of a comprehensive study program based on it. The new program is intended for the Israeli high-school system and is expected to replace the old one by the end of 1999. The program emphasizes the foundations of algorithmics and teaches programming as a way to get the computer to carry out an algorithm. The purpose of this paper is to describe the curriculum and syllabi in some detail.