Results 1–10 of 40
N.: A Log-Euclidean framework for statistics on diffeomorphisms
 In: Proc. MICCAI’06. (2006) 924–931
"... Abstract. In this article, we focus on the computation of statistics of invertible geometrical deformations (i.e., diffeomorphisms), based on the generalization to this type of data of the notion of principal logarithm. Remarkably, this logarithm is a simple 3D vector field, and is welldefined for ..."
Abstract

Cited by 66 (34 self)
Abstract. In this article, we focus on the computation of statistics of invertible geometrical deformations (i.e., diffeomorphisms), based on the generalization to this type of data of the notion of principal logarithm. Remarkably, this logarithm is a simple 3D vector field, and is well-defined for diffeomorphisms close enough to the identity. This allows us to perform vectorial statistics on diffeomorphisms, while preserving the invertibility constraint, contrary to Euclidean statistics on displacement fields. We also present here two efficient algorithms to compute logarithms of diffeomorphisms and exponentials of vector fields, whose accuracy is studied on synthetic data. Finally, we apply these tools to compute the mean of a set of diffeomorphisms, in the context of a registration experiment between an atlas and a database of 9 T1 MR images of the human brain.
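The vector-field exponential mentioned in the abstract can be sketched by scaling and squaring of deformations: scale the field until it is near zero, then repeatedly compose the deformation with itself. A minimal 2D sketch, assuming a stationary velocity field in pixel units (function and variable names are illustrative, not the authors' code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def exp_field(v, n_steps=6):
    """Displacement field of exp(v) for a stationary 2D velocity field v of shape (2, H, W)."""
    H, W = v.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)
    # Scaling: for a small field, exp(v / 2^n) is approximately id + v / 2^n.
    phi = v / 2.0 ** n_steps
    # Squaring: replace phi by phi composed with itself, n_steps times,
    # using linear interpolation to evaluate phi at the displaced grid points.
    for _ in range(n_steps):
        coords = grid + phi
        phi = phi + np.stack([
            map_coordinates(phi[i], coords, order=1, mode='nearest')
            for i in range(2)
        ])
    return phi
```

For a constant field the result is exactly the corresponding translation; for general fields the accuracy depends on `n_steps` and the interpolation order.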
Supplement for real-time soft shadows in dynamic scenes using spherical harmonic exponentiation
 Microsoft Corporation. Available on the SIGGRAPH 2006 Conference DVD
, 2006
"... Previous methods for soft shadows numerically integrate over many light directions at each receiver point, testing blocker visibility in each direction. We introduce a method for realtime soft shadows in dynamic scenes illuminated by large, lowfrequency light sources where such integration is impr ..."
Abstract

Cited by 34 (5 self)
Previous methods for soft shadows numerically integrate over many light directions at each receiver point, testing blocker visibility in each direction. We introduce a method for real-time soft shadows in dynamic scenes illuminated by large, low-frequency light sources where such integration is impractical. Our method operates on vectors representing low-frequency visibility of blockers in the spherical harmonic basis. Blocking geometry is modeled as a set of spheres; relatively few spheres capture the low-frequency blocking effect of complicated geometry. At each receiver point, we compute the product of visibility vectors for these blocker spheres as seen from the point. Instead of computing an expensive SH product per blocker as in previous work, we perform inexpensive vector sums to accumulate the log of blocker visibility. SH exponentiation then yields the product visibility vector over all blockers. We show how the SH exponentiation required can be approximated accurately and efficiently for low-order SH, accelerating previous CPU-based methods by a factor of 10 or more, depending on blocker complexity, and allowing real-time GPU implementation.
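The cost shift described above — summing log-visibility vectors per blocker and exponentiating once, instead of computing one product per blocker — is easiest to see in the scalar analogue (the actual SH-vector log and exp require triple-product machinery not shown here):

```python
import numpy as np

# Scalar stand-ins for per-blocker SH visibility vectors.
vis = np.array([0.9, 0.7, 0.8, 0.95])

# Naive accumulation: one (expensive, in the SH case) product per blocker.
direct = np.prod(vis)

# Log-space accumulation: cheap sums per blocker, one exponentiation at the end.
via_log = np.exp(np.log(vis).sum())

assert np.isclose(direct, via_log)
```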
Error estimation and evaluation of matrix functions via the Faber transform
 SIAM J. Numer. Anal
"... Abstract. The need to evaluate expressions of the form f(A) orf(A)b, wheref is a nonlinear function, A is a large sparse n × n matrix, and b is an nvector, arises in many applications. This paper describes how the Faber transform applied to the field of values of A can be used to determine improved ..."
Abstract

Cited by 16 (8 self)
Abstract. The need to evaluate expressions of the form f(A) or f(A)b, where f is a nonlinear function, A is a large sparse n × n matrix, and b is an n-vector, arises in many applications. This paper describes how the Faber transform applied to the field of values of A can be used to determine improved error bounds for popular polynomial approximation methods based on the Arnoldi process. Applications of the Faber transform to rational approximation methods and, in particular, to the rational Arnoldi process also are discussed.
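For context, the polynomial Arnoldi approximation of f(A)b that these error bounds address can be sketched as follows for f = exp (a generic textbook sketch, not the paper's Faber-transform method):

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expm(A, b, m):
    """Approximate expm(A) @ b from the m-dimensional Krylov space of (A, b)."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    Hm = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    V[:, 0] = b / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt orthogonalization
            Hm[i, j] = V[:, i] @ w
            w -= Hm[i, j] * V[:, i]
        Hm[j + 1, j] = np.linalg.norm(w)
        if Hm[j + 1, j] < 1e-12:          # happy breakdown: Krylov space is invariant
            m = j + 1
            break
        V[:, j + 1] = w / Hm[j + 1, j]
    # f(A) b is approximated by beta * V_m f(H_m) e_1, with f on the small m x m matrix.
    return beta * V[:, :m] @ expm(Hm[:m, :m])[:, 0]
```

With m equal to the matrix dimension the approximation is exact up to rounding; the regime of interest is m much smaller than n.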
A NEW SCALING AND SQUARING ALGORITHM FOR THE MATRIX EXPONENTIAL
, 2009
"... Abstract. The scaling and squaring method for the matrix exponential is based on the approximation eA ≈ (rm(2−sA)) 2s,whererm(x) isthe[m/m] Padéapproximant to ex and the integers m and s are to be chosen. Several authors have identified a weakness of existing scaling and squaring algorithms termed o ..."
Abstract

Cited by 15 (12 self)
Abstract. The scaling and squaring method for the matrix exponential is based on the approximation e^A ≈ (r_m(2^−s A))^{2^s}, where r_m(x) is the [m/m] Padé approximant to e^x and the integers m and s are to be chosen. Several authors have identified a weakness of existing scaling and squaring algorithms termed overscaling, in which a value of s much larger than necessary is chosen, causing a loss of accuracy in floating point arithmetic. Building on the scaling and squaring algorithm of Higham [SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1179–1193], which is used by the MATLAB function expm, we derive a new algorithm that alleviates the overscaling problem. Two key ideas are employed. The first, specific to triangular matrices, is to compute the diagonal elements in the squaring phase as exponentials instead of from powers of r_m. The second idea is to base the backward error analysis that underlies the algorithm on members of the sequence {‖A^k‖^{1/k}} instead of ‖A‖, since for nonnormal matrices it is possible that ‖A^k‖^{1/k} is much smaller than ‖A‖, and indeed this is likely when overscaling occurs in existing algorithms. The terms ‖A^k‖^{1/k} are estimated without computing powers of A by using a matrix 1-norm estimator in conjunction with a bound of the form ‖A^k‖^{1/k} ≤ max(‖A^p‖^{1/p}, ‖A^q‖^{1/q}) that holds for certain fixed p and q less than k. The improvements to the truncation error bounds have to be balanced by the potential for a large ‖A‖
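A bare-bones version of scaling and squaring — with a fixed [6/6] Padé degree and the crude rule s = ⌈log2 ‖A‖₁⌉, and none of the overscaling safeguards the paper develops — might look like:

```python
import numpy as np
from math import ceil, factorial, log2

def expm_ss(A, m=6):
    """Matrix exponential by scaling and squaring with a fixed [m/m] Pade approximant."""
    nrm = np.linalg.norm(A, 1)
    s = max(0, ceil(log2(nrm))) if nrm > 1 else 0   # crude scaling rule (illustrative only)
    As = A / 2.0 ** s
    n = A.shape[0]
    P = np.zeros((n, n))                            # numerator polynomial p(As)
    Q = np.zeros((n, n))                            # denominator polynomial q(As) = p(-As)
    X = np.eye(n)                                   # running power As^k
    for k in range(m + 1):
        c = factorial(2 * m - k) * factorial(m) / (
            factorial(2 * m) * factorial(k) * factorial(m - k))
        P += c * X
        Q += (-1) ** k * c * X
        X = X @ As
    R = np.linalg.solve(Q, P)                       # r_m(As) = q(As)^-1 p(As)
    for _ in range(s):                              # undo the scaling by repeated squaring
        R = R @ R
    return R
```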
A fast and Log-Euclidean polyaffine framework for locally affine registration. Research report RR-5865
, 2006
"... Abstract. In this article, we focus on the parameterization of nonrigid geometrical deformations with a small number of flexible degrees of freedom. In previous work, we proposed a general framework called polyaffine to parameterize deformations with a small number of rigid or affine components, whi ..."
Abstract

Cited by 14 (3 self)
Abstract. In this article, we focus on the parameterization of non-rigid geometrical deformations with a small number of flexible degrees of freedom. In previous work, we proposed a general framework called polyaffine to parameterize deformations with a small number of rigid or affine components, while guaranteeing the invertibility of global deformations. However, this framework lacks some important properties: the inverse of a polyaffine transformation is not polyaffine in general, and the polyaffine fusion of affine components is not invariant with respect to a change of coordinate system. We present here a novel general framework, called Log-Euclidean polyaffine, which overcomes these defects. We also detail a simple algorithm, the Fast Polyaffine Transform, which allows one to compute Log-Euclidean polyaffine transformations and their inverses very efficiently on a regular grid. The results presented here on real 3D locally affine registration suggest that our novel framework provides a general and efficient way of fusing local rigid or affine deformations into a global invertible transformation without introducing artifacts, independently of the way local deformations are first estimated.
A Fast and Log-Euclidean Polyaffine Framework for Locally Linear Registration
 JOURNAL OF MATHEMATICAL IMAGING AND VISION
"... In this article, we focus on the parameterization of nonrigid geometrical deformations with a small number of flexible degrees of freedom. In previous work, we proposed a general framework called polyaffine to parameterize deformations with a finite number of rigid or affine components, while guar ..."
Abstract

Cited by 14 (4 self)
In this article, we focus on the parameterization of non-rigid geometrical deformations with a small number of flexible degrees of freedom. In previous work, we proposed a general framework called polyaffine to parameterize deformations with a finite number of rigid or affine components, while guaranteeing the invertibility of global deformations. However, this framework lacks some important properties: the inverse of a polyaffine transformation is not polyaffine in general, and the polyaffine fusion of affine components is not invariant with respect to a change of coordinate system. We present here a novel general framework, called Log-Euclidean polyaffine, which overcomes these defects. We also detail a simple algorithm, the Fast Polyaffine Transform, which allows one to compute Log-Euclidean polyaffine transformations and their inverses very efficiently on regular grids. The results presented here on real 3D locally affine registration suggest that our novel framework provides a general and efficient way of fusing local rigid or affine deformations into a global invertible transformation without introducing artifacts, independently of the way local deformations are first estimated.
COMPUTING THE FRÉCHET DERIVATIVE OF THE MATRIX EXPONENTIAL, WITH AN APPLICATION TO CONDITION NUMBER ESTIMATION
, 2008
"... Abstract. The matrix exponential is a muchstudied matrix function having many applications. The Fréchet derivative of the matrix exponential describes the first order sensitivity of e A to perturbations in A and its norm determines a condition number for e A. Among the numerous methods for computin ..."
Abstract

Cited by 12 (9 self)
Abstract. The matrix exponential is a much-studied matrix function having many applications. The Fréchet derivative of the matrix exponential describes the first-order sensitivity of e^A to perturbations in A, and its norm determines a condition number for e^A. Among the numerous methods for computing e^A the scaling and squaring method is the most widely used. We show that the implementation of the method in [N. J. Higham. The scaling and squaring method for the matrix exponential revisited. SIAM J. Matrix Anal. Appl., 26(4):1179–1193, 2005] can be extended to compute both e^A and the Fréchet derivative at A in the direction E, denoted by L(A, E), at a cost about three times that for computing e^A alone. The algorithm is derived from the scaling and squaring method by differentiating the Padé approximants and the squaring recurrence, reusing quantities computed during the evaluation of the Padé approximant, and intertwining the recurrences in the squaring phase. To guide the choice of algorithmic parameters an extension of the existing backward error analysis for the scaling and squaring method is developed which shows that, modulo rounding errors, the approximations obtained are e^{A+ΔA} and L(A+ΔA, E+ΔE), with the same ΔA in both cases, and with computable bounds on ‖ΔA‖ and ‖ΔE‖. The algorithm for L(A, E) is used to develop an algorithm that computes e^A together with an estimate of its condition number.
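Independently of the differentiated-recurrence algorithm described above, L(A, E) satisfies the classical 2n×2n block identity exp([[A, E], [0, A]]) = [[e^A, L(A, E)], [0, e^A]], which gives a simple (though roughly 8× more expensive) reference implementation:

```python
import numpy as np
from scipy.linalg import expm

def expm_frechet_block(A, E):
    """Return (e^A, L(A, E)) via the 2n x 2n block-triangular identity."""
    n = A.shape[0]
    M = np.block([[A, E], [np.zeros_like(A), A]])
    X = expm(M)
    # Top-left block is e^A; top-right block is the Frechet derivative L(A, E).
    return X[:n, :n], X[:n, n:]
```

SciPy's `scipy.linalg.expm_frechet` implements an efficient algorithm of this kind and can serve as a cross-check.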
A Schur–Padé algorithm for fractional powers of a matrix
 SIAM J. Matrix Anal. Appl
"... And by contacting: ..."
Efficient algorithms for the matrix cosine and sine. Numerical Analysis Report 461
 Numer. Algorithms
, 2005
"... Several improvements are made to an algorithm of Higham and Smith for computing the matrix cosine. The original algorithm scales the matrix by a power of 2 to bring the ∞norm to 1 or less, evaluates the [8/8] Padé approximant, then uses the doubleangle formula cos(2A) = 2cos2A − I to recover the c ..."
Abstract

Cited by 10 (6 self)
Several improvements are made to an algorithm of Higham and Smith for computing the matrix cosine. The original algorithm scales the matrix by a power of 2 to bring the ∞-norm to 1 or less, evaluates the [8/8] Padé approximant, then uses the double-angle formula cos(2A) = 2cos²A − I to recover the cosine of the original matrix. The first improvement is to phrase truncation error bounds in terms of ‖A²‖^{1/2} instead of the (no smaller and potentially much larger) quantity ‖A‖. The second is to choose the degree of the Padé approximant to minimize the computational cost subject to achieving a desired truncation error. A third improvement is to use an absolute, rather than relative, error criterion in the choice of Padé approximant; this allows the use of higher-degree approximants without worsening an a priori error bound. Our theory and experiments show that each of these modifications brings a reduction in computational cost. Moreover, because the modifications tend to reduce the number of double-angle steps they usually result in a more accurate computed cosine in floating point arithmetic. We also derive an algorithm for computing both cos(A) and sin(A), by adapting the ideas developed for the cosine and intertwining the cosine and sine double-angle recurrences.
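The scale-approximate-recover structure of the cosine algorithm can be sketched as follows, with a truncated Taylor series standing in for the [8/8] Padé approximant and a simplified scaling rule (both are illustrative substitutions, not the paper's choices):

```python
import numpy as np
from math import ceil, log2

def cosm_da(A, m=8):
    """Matrix cosine via scaling, a truncated Taylor series, and the double-angle formula."""
    nrm = np.linalg.norm(A, np.inf)
    s = max(0, ceil(log2(nrm))) if nrm > 1 else 0   # bring the inf-norm to 1 or less
    As = A / 2.0 ** s
    n = A.shape[0]
    A2 = As @ As
    C = np.eye(n)
    term = np.eye(n)
    for k in range(1, m + 1):       # cos(x) = sum over k of (-1)^k x^{2k} / (2k)!
        term = -term @ A2 / ((2 * k - 1) * (2 * k))
        C = C + term
    I = np.eye(n)
    for _ in range(s):              # double-angle recovery: cos(2A) = 2 cos(A)^2 - I
        C = 2 * C @ C - I
    return C
```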
Functions of matrices
 Society for Industrial and Applied Mathematics (SIAM)
, 2008
"... Reports available from: And by contacting: ..."