Results 11–20 of 454
Dynamic NURBS with Geometric Constraints for Interactive Sculpting
, 1994
Abstract

Cited by 105 (26 self)
This article develops a dynamic generalization of the non-uniform rational B-spline (NURBS) model. NURBS have become a de facto standard in commercial modeling systems because of their power to represent free-form shapes as well as common analytic shapes. To date, however, they have been viewed as purely geometric primitives that require the user to manually adjust multiple control points and associated weights in order to design shapes. Dynamic NURBS, or D-NURBS, are physics-based models that incorporate mass distributions, internal deformation energies, and other physical quantities into the popular NURBS geometric substrate. Using D-NURBS, a modeler can interactively sculpt curves and surfaces and design complex shapes to required specifications not only in the traditional indirect fashion, by adjusting control points and weights, but also through direct physical manipulation, by applying simulated forces and local and global shape constraints. D-NURBS move and deform in a physically intuitive manner in response to the user's direct manipulations. Their dynamic behavior results from the numerical integration of a set of nonlinear differential equations that automatically evolve the control points and weights in response to the applied forces and constraints. To derive these equations, we employ Lagrangian mechanics and a finite-element-like discretization. Our approach supports the trimming of D-NURBS surfaces using D-NURBS curves. We demonstrate D-NURBS models and constraints in applications including the rounding of solids, optimal surface fitting to unstructured data, surface design from cross-sections, and free-form deformation. We also introduce a new technique for 2D shape metamorphosis using constrained D-NURBS surfaces.
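A toy sketch of the direct-manipulation idea in this abstract (not the paper's Lagrangian D-NURBS formulation): the control points of a plain, non-rational uniform cubic B-spline segment evolve under a damped simulated force that pulls the curve's midpoint toward a target. The mass, damping, and force-distribution choices below are hypothetical simplifications.

```python
import numpy as np

# Toy sketch only: a plain (non-rational, unweighted) uniform cubic B-spline
# segment whose four control points evolve under a damped simulated force that
# pulls the curve midpoint toward a user target. The mass/damping constants and
# the way force is distributed by basis weights are hypothetical simplifications,
# not the paper's Lagrangian D-NURBS equations of motion.

def bspline_point(ctrl, t):
    """Evaluate one uniform cubic B-spline segment (4 control points) at t in [0,1]."""
    M = np.array([[-1.0,  3.0, -3.0, 1.0],
                  [ 3.0, -6.0,  3.0, 0.0],
                  [-3.0,  0.0,  3.0, 0.0],
                  [ 1.0,  4.0,  1.0, 0.0]]) / 6.0
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ ctrl

def basis_weights(t):
    """B-spline basis values at t (how strongly each control point acts at t)."""
    return np.array([bspline_point(np.eye(4)[:, [i]], t)[0] for i in range(4)])

def simulate(ctrl, target, steps=200, mass=1.0, damp=0.9, k=5.0, dt=0.02):
    """Integrate damped point dynamics that pull the curve midpoint to `target`."""
    vel = np.zeros_like(ctrl)
    w = basis_weights(0.5)
    for _ in range(steps):
        force = k * (target - bspline_point(ctrl, 0.5))  # simulated user force
        vel = damp * vel + dt * np.outer(w, force) / mass
        ctrl = ctrl + dt * vel
    return ctrl

ctrl0 = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])
target = np.array([1.5, 1.0])
final = simulate(ctrl0, target)
```

Running the loop moves the curve toward the target without the user ever touching an individual control point, which is the interaction style the abstract describes.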
Efficient Re-rendering of Naturally Illuminated Environments
 IN FIFTH EUROGRAPHICS WORKSHOP ON RENDERING
, 1994
Abstract

Cited by 101 (4 self)
We present a method for the efficient re-rendering of a scene under a directional illuminant at an arbitrary orientation. We take advantage of the linearity of the rendering operator with respect to illumination for a fixed scene and camera geometry. Re-rendering is accomplished via linear combination of a set of pre-rendered "basis" images. The theory of steerable functions provides the machinery to derive an appropriate set of basis images. We demonstrate the technique on both simple and complex scenes illuminated by an approximation to natural skylight. We show re-rendering simulations under conditions of varying sun position and cloudiness.
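The linearity argument at the heart of this abstract can be demonstrated in a few lines: under a purely linear shading model, an image lit by any light in the span of a set of basis lights equals the corresponding linear combination of the pre-rendered basis images. The Lambertian setup below is an illustrative stand-in, not the paper's renderer or its steerable skylight basis.

```python
import numpy as np

# Minimal demonstration of the linearity being exploited: with a purely linear
# shading model, the image under any directional light in the span of a set of
# basis lights equals the same linear combination of the pre-rendered basis
# images. The Lambertian term below (with the usual clamp at zero deliberately
# omitted, since clamping would break strict linearity) is an illustrative
# stand-in for the paper's renderer and steerable skylight basis.

rng = np.random.default_rng(0)
normals = rng.normal(size=(64, 64, 3))
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
albedo = rng.uniform(0.2, 1.0, size=(64, 64))

def render_linear(light):
    """Unclamped Lambertian shading: strictly linear in the light vector."""
    return albedo * (normals @ light)

basis_lights = np.eye(3)                      # three axis-aligned directional lights
basis_images = np.stack([render_linear(l) for l in basis_lights])

new_light = np.array([0.3, -0.5, 0.8])        # arbitrary new sun direction/intensity
coeffs = new_light                            # coordinates in the light basis
relit = np.tensordot(coeffs, basis_images, axes=1)
direct = render_linear(new_light)             # ground truth: render from scratch
```

Here `relit` and `direct` agree exactly; the practical payoff is that the three basis renders can be expensive offline computations while the linear combination is essentially free.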
Quantized Overcomplete Expansions in R^N: Analysis, Synthesis, and Algorithms
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1998
Abstract

Cited by 94 (15 self)
Coefficient quantization has peculiar qualitative effects on representations of vectors in R^N with respect to overcomplete sets of vectors. These effects are investigated in two settings: frame expansions (representations obtained by forming inner products with each element of the set) and matching pursuit expansions (approximations obtained by greedily forming linear combinations). In both cases, based on the concept of consistency, it is shown that traditional linear reconstruction methods are suboptimal, and better consistent reconstruction algorithms are given. The proposed consistent reconstruction algorithms were in each case implemented, and experimental results are included. For frame expansions, results are proven to bound distortion as a function of frame redundancy r and quantization step size for linear, consistent, and optimal reconstruction methods. Taken together, these suggest that optimal reconstruction methods will yield O(1/r^2) mean-squared error (MSE), and that consistency is sufficient to ensure this asymptotic behavior. A result on the asymptotic tightness of random frames is also proven. Applicability of quantized matching pursuit to lossy vector compression is explored. Experiments demonstrate the likelihood that a linear reconstruction is inconsistent, the MSE reduction obtained with a nonlinear (consistent) reconstruction algorithm, and generally competitive performance at low bit rates.
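A minimal numerical sketch of the frame-expansion setting (the paper's consistent reconstruction algorithms are not reproduced): expand a vector against a redundant frame, quantize the coefficients with a uniform scalar quantizer, apply the traditional linear (dual-frame / pseudoinverse) reconstruction, and then check whether that reconstruction is consistent, i.e. re-quantizes to the observed codes. The random frame and step size are arbitrary choices.

```python
import numpy as np

# Sketch of a quantized frame expansion in R^2: expand, quantize, reconstruct
# linearly with the dual frame (pseudoinverse), then test consistency of the
# reconstruction with the observed quantization codes. The frame and step size
# are arbitrary; the paper's consistent reconstruction algorithms are omitted.

rng = np.random.default_rng(1)
M, N = 7, 2                                   # 7 frame vectors in R^2, redundancy r = 3.5
F = rng.normal(size=(M, N))                   # rows are the frame vectors

x = np.array([0.37, -0.82])
delta = 0.05
y = F @ x                                     # frame coefficients <x, f_k>
yq = delta * np.round(y / delta)              # uniform scalar quantization

x_lin = np.linalg.pinv(F) @ yq                # linear (dual-frame) reconstruction
consistent = bool(np.all(np.round(F @ x_lin / delta) == np.round(y / delta)))
err = np.linalg.norm(x - x_lin)
```

When `consistent` comes out False, the linear estimate lies outside the set of vectors that could have produced the observed codes, which is exactly the suboptimality the abstract's consistent algorithms repair.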
An Active Contour Model For Mapping The Cortex
 IEEE TRANS. ON MEDICAL IMAGING
, 1995
Abstract

Cited by 91 (15 self)
A new active contour model for finding and mapping the outer cortex in brain images is developed. A cross-section of the brain cortex is modeled as a ribbon, and a constant speed mapping of its spine is sought. A variational formulation, an associated force balance condition, and a numerical approach are proposed to achieve this goal. The primary difference between this formulation and that of snakes is in the specification of the external force acting on the active contour. A study of the uniqueness and fidelity of solutions is made through convexity and frequency domain analyses, and a criterion for selection of the regularization coefficient is developed. Examples demonstrating the performance of this method on simulated and real data are provided.
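For contrast with the modified external force this paper proposes, the sketch below implements only the internal-energy step of a classical Kass-style snake: a semi-implicit update built from the circulant stiffness matrix of the tension and rigidity energies, which smooths a noisy closed contour. The coefficients are illustrative values; the ribbon model and cortex-specific external force are not reproduced.

```python
import numpy as np

# Internal-energy step of a classical Kass-style snake on a closed contour: the
# circulant stiffness matrix of the tension (alpha) and rigidity (beta) energies
# drives a semi-implicit smoothing update. The paper's ribbon model and its
# modified external force are NOT reproduced; alpha, beta, gamma are illustrative.

def snake_matrix(n, alpha=0.5, beta=0.2):
    """Circulant Euler-Lagrange stencil: beta*(1,-4,6,-4,1) + alpha*(-1,2,-1)."""
    row = np.zeros(n)
    row[[0, 1, 2]] = 2*alpha + 6*beta, -alpha - 4*beta, beta
    row[[-1, -2]] = -alpha - 4*beta, beta
    return np.array([np.roll(row, i) for i in range(n)])

n = 100
theta = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
noisy = np.stack([np.cos(theta), np.sin(theta)], axis=1)      # unit circle
noisy += 0.05 * np.random.default_rng(2).normal(size=(n, 2))  # contour noise

gamma = 1.0
step = np.linalg.inv(snake_matrix(n) + gamma * np.eye(n))
smoothed = noisy.copy()
for _ in range(20):
    smoothed = step @ (gamma * smoothed)      # semi-implicit step, no external force

def roughness(c):
    """Norm of the wrap-around difference vectors (smaller = smoother contour)."""
    return np.linalg.norm(np.diff(c, axis=0, append=c[:1]))
```

In a full snake, an external-force term is added inside the loop; this paper's contribution lies precisely in how that term is specified.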
Wavelet transforms versus Fourier transforms
 Department of Mathematics, MIT, Cambridge MA
, 1993
Abstract

Cited by 82 (2 self)
Abstract. This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The "wavelet transform" maps each f(x) to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them — always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform — or its 8 by 8 windowed version, the Discrete Cosine Transform — is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory. 1. The Haar wavelet To explain wavelets we start with an example. It has every property we hope for, except one. If that one defect is accepted, the construction is simple and the computations are fast. By trying to remove the defect, we are led to dilation equations and recursively defined functions and a small world of fascinating new problems — many still unsolved. A sensible person would stop after the first wavelet, but fortunately mathematics goes on. The basic example is easier to draw than to describe: W(x)
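The Haar construction the note opens with is short enough to write out: one level replaces a signal of even length by pairwise averages and half-differences, and recursing on the averages gives the complete transform in O(n) operations, versus O(n log n) for the FFT. The sketch below uses plain averages rather than the orthonormal 1/sqrt(2) scaling; both conventions are standard.

```python
import numpy as np

# The Haar wavelet transform sketched above: one level replaces a signal of even
# length by pairwise averages and half-differences; recursing on the averages
# yields the full transform in O(n) operations. Plain-average normalization is
# used here (a common variant of the orthonormal 1/sqrt(2) scaling).

def haar_forward(x):
    x = np.asarray(x, dtype=float)
    coeffs = []
    while len(x) > 1:
        coeffs.append((x[0::2] - x[1::2]) / 2.0)   # detail (wavelet) coefficients
        x = (x[0::2] + x[1::2]) / 2.0              # coarse averages
    coeffs.append(x)                               # final overall average
    return coeffs

def haar_inverse(coeffs):
    x = coeffs[-1]
    for diff in reversed(coeffs[:-1]):
        out = np.empty(2 * len(diff))
        out[0::2] = x + diff                       # avg + diff recovers even samples
        out[1::2] = x - diff                       # avg - diff recovers odd samples
        x = out
    return x

signal = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0])
rec = haar_inverse(haar_forward(signal))
```

The round trip is exact, which is the "every property we hope for" part; the one defect the note describes is that piecewise-constant approximation is poor.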
Digital inpainting based on the Mumford-Shah-Euler image model
 European J. Appl. Math
, 2002
Abstract

Cited by 81 (23 self)
Abstract. Image inpainting is an image restoration problem, in which image models play a critical role, as demonstrated by Chan, Kang and Shen’s recent inpainting schemes based on the bounded variation [10] and the elastica [9] image models. In the present paper, we propose two novel inpainting models based on the Mumford-Shah image model [37], and its high order correction — the Mumford-Shah-Euler image model. We also present their efficient numerical realization based on the Γ-convergence approximations of Ambrosio and Tortorelli, and De Giorgi [18]. Key words. Inpainting, Bayesian, image model, Euler’s elastica, Γ-convergence
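For orientation, the simplest possible inpainting model fits in a few lines: harmonic inpainting, which fills the hole by solving Laplace's equation with the surrounding known pixels as boundary data. It poses the same restoration problem the abstract describes but, having no edge set, cannot continue edges into the hole, which is precisely what the Mumford-Shah and Mumford-Shah-Euler models add.

```python
import numpy as np

# Baseline for comparison only: *harmonic* inpainting fills the hole by solving
# Laplace's equation (here by Jacobi iteration) with the known pixels fixed.
# Unlike the Mumford-Shah / Mumford-Shah-Euler models above, it has no edge set
# and therefore blurs across the hole instead of continuing edges into it.

def harmonic_inpaint(img, mask, iters=500):
    """Fill pixels where mask is True; the hole must not touch the border."""
    u = img.copy()
    u[mask] = img[~mask].mean()               # neutral initial guess inside the hole
    for _ in range(iters):
        nbr = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                      np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[mask] = nbr[mask]                   # relax only the missing pixels
    return u

yy, xx = np.mgrid[0:32, 0:32]
img = (xx + yy) / 62.0                        # smooth (harmonic) test image
mask = np.zeros_like(img, dtype=bool)
mask[12:20, 12:20] = True                     # square hole
damaged = img.copy()
damaged[mask] = 0.0
restored = harmonic_inpaint(damaged, mask)
```

On this smooth ramp the harmonic fill is essentially exact; on an image with an edge crossing the hole it would fail, which motivates the free-boundary models of the paper.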
Computational Differential Equations
, 1996
Abstract

Cited by 65 (4 self)
Introduction This first part has two main purposes. The first is to review some mathematical prerequisites needed for the numerical solution of differential equations, including material from calculus, linear algebra, numerical linear algebra, and approximation of functions by (piecewise) polynomials. The second purpose is to introduce the basic issues in the numerical solution of differential equations by discussing some concrete examples. We start by proving the Fundamental Theorem of Calculus by proving the convergence of a numerical method for computing an integral. We then introduce Galerkin's method for the numerical solution of differential equations in the context of two basic model problems from population dynamics and stationary heat conduction.
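The kind of concrete model problem this introduction builds toward can be sketched directly: Galerkin's method with piecewise-linear hat functions for -u'' = f on (0, 1) with homogeneous Dirichlet data, which reduces to a tridiagonal stiffness system. The uniform mesh and the lumped load vector b_i = h f(x_i) below are simplifying assumptions.

```python
import numpy as np

# Concrete Galerkin instance in the spirit of the book's model problems:
# piecewise-linear "hat" functions for -u'' = f on (0, 1), u(0) = u(1) = 0,
# giving the classic tridiagonal stiffness system. Uniform mesh and a lumped
# load vector b_i = h * f(x_i) are simplifications of the exact load integrals.

def galerkin_poisson_1d(f, n):
    """Solve -u'' = f with n interior nodes on a uniform mesh of width h."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)            # interior mesh nodes
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h   # stiffness
    b = h * f(x)                              # lumped load vector
    return x, np.linalg.solve(A, b)

# Manufactured solution: f = pi^2 sin(pi x) gives exact u = sin(pi x).
x, u = galerkin_poisson_1d(lambda t: np.pi**2 * np.sin(np.pi * t), n=99)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

The observed error decreases like O(h^2) as the mesh is refined, the standard convergence rate for this discretization.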
A Posteriori Finite Element Bounds for Linear-Functional Outputs of Elliptic Partial Differential Equations
 Computer Methods in Applied Mechanics and Engineering
, 1997
Abstract

Cited by 63 (9 self)
We present a domain decomposition finite element technique for efficiently generating lower and upper bounds to outputs which are linear functionals of the solutions to symmetric or nonsymmetric second-order elliptic linear partial differential equations in two space dimensions. The method is based upon the construction of an augmented Lagrangian, in which the objective is a quadratic "energy" reformulation of the desired output, and the constraints are the finite element equilibrium equations and inter-subdomain continuity requirements. The bounds on the output for a suitably fine "truth-mesh" discretization are then derived by appealing to a dual max-min relaxation evaluated for optimally chosen adjoint and hybrid-flux candidate Lagrange multipliers generated by a K-element coarser "working-mesh" approximation. Independent of the form of the original partial differential equation, the computation on the truth mesh is reduced to K decoupled subdomain-local, symmetric Neumann problems...
Circulant Preconditioners for Hermitian Toeplitz Systems
 SIAM J. Matrix Anal. Appl
, 1989
Abstract

Cited by 62 (18 self)
We study the solutions of Hermitian positive definite Toeplitz systems Ax = b by the preconditioned conjugate gradient method for three families of circulant preconditioners C. The convergence rates of these iterative methods depend on the spectrum of C^{-1}A. For a Toeplitz matrix A with entries which are Fourier coefficients of a positive function f in the Wiener class, we establish the invertibility of C, and that the spectrum of the preconditioned matrix C^{-1}A clusters around one. We prove that if f is (l + 1)-times differentiable, with l > 0, then the error after 2q conjugate gradient steps will decrease like ((q - 1)!)^{-2l}. We also show that if C copies the central diagonals of A, then C minimizes ||C - A||_1 and ||C - A||_inf.
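A sketch of this setting, using T. Chan's Frobenius-optimal circulant (a well-known circulant family, though not necessarily one of this paper's three): build a symmetric positive definite Toeplitz matrix from a summable generating symbol, form the circulant preconditioner, and run preconditioned conjugate gradients with C^{-1} applied in O(n log n) via the FFT. The symbol below is an arbitrary positive choice in the Wiener class.

```python
import numpy as np

# Sketch of circulant-preconditioned CG for a symmetric positive definite
# Toeplitz system. The preconditioner is T. Chan's Frobenius-optimal circulant
# (a standard family, though not necessarily one of this paper's three), and
# C^{-1} is applied by FFT diagonalization. The symbol is an arbitrary positive
# Wiener-class choice for illustration.

def toeplitz_sym(col):
    n = len(col)
    return np.array([[col[abs(i - j)] for j in range(n)] for i in range(n)])

def tchan_circulant_col(col):
    """First column of the circulant minimizing ||C - A||_F for symmetric Toeplitz A."""
    n = len(col)
    c = np.empty(n)
    c[0] = col[0]
    for j in range(1, n):
        c[j] = ((n - j) * col[j] + j * col[n - j]) / n
    return c

def pcg(A, b, circ_col, tol=1e-10, maxit=200):
    """Conjugate gradients; preconditioner solves are done via the FFT."""
    eig = np.fft.fft(circ_col).real           # circulant eigenvalues (real: C symmetric)
    solve_C = lambda r: np.fft.ifft(np.fft.fft(r) / eig).real
    x = np.zeros_like(b)
    r = b - A @ x
    z = solve_C(r)
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        z_new = solve_C(r_new)
        p = z_new + ((r_new @ z_new) / (r @ z)) * p
        r, z = r_new, z_new
    return x

n = 64
col = 1.0 / (1.0 + np.arange(n)) ** 2         # absolutely summable, positive symbol
A = toeplitz_sym(col)
b = np.ones(n)
x = pcg(A, b, tchan_circulant_col(col))
```

Because the preconditioned spectrum clusters around one, CG converges in far fewer iterations than on the unpreconditioned system, which is the behavior the abstract's theorems quantify.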
Noise estimation from a single image
 In Proceedings of CVPR
, 2006
Abstract

Cited by 62 (5 self)
In order to work well, many computer vision algorithms require that their parameters be adjusted according to the image noise level, making it an important quantity to estimate. We show how to estimate an upper bound on the noise level from a single image based on a piecewise smooth image prior model and measured CCD camera response functions. We also learn the space of noise level functions – how noise level changes with respect to brightness – and use Bayesian MAP inference to infer the noise level function from a single image. We illustrate the utility of this noise estimation for two algorithms: edge detection and feature-preserving smoothing through bilateral filtering. For a variety of different noise levels, we obtain good results for both these algorithms with no user-specified inputs.
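By way of contrast with the learned, brightness-dependent estimator described above, a classical single-image baseline fits in a few lines: Donoho's robust estimate sigma ≈ MAD(finest-scale wavelet details) / 0.6745, here computed from the Haar diagonal detail. It assumes additive i.i.d. Gaussian noise at a constant level, exactly the assumption the paper's noise level functions relax.

```python
import numpy as np

# Contrast baseline, not the paper's method: Donoho's classical single-image
# noise estimate, sigma ~= median(|finest-scale wavelet details|) / 0.6745,
# computed from the 2x2 Haar diagonal detail. It assumes additive i.i.d.
# Gaussian noise at a *constant* level, the very assumption that the paper's
# brightness-dependent noise level functions relax.

def estimate_sigma(img):
    """Robust noise estimate from the 2x2 Haar diagonal detail coefficients."""
    d = (img[0::2, 0::2] - img[0::2, 1::2]
         - img[1::2, 0::2] + img[1::2, 1::2]) / 2.0   # detail noise std = sigma
    return np.median(np.abs(d)) / 0.6745              # 0.6745 = Gaussian MAD factor

rng = np.random.default_rng(3)
clean = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256))  # smooth image
sigma_true = 0.05
noisy = clean + rng.normal(scale=sigma_true, size=clean.shape)
sigma_hat = estimate_sigma(noisy)
```

On a smooth image this recovers the true sigma closely; on textured images, or when noise varies with brightness as on real CCDs, it overestimates or misestimates, which is the failure mode the paper addresses.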