Results 11–20 of 29
Parameter estimation using sparse reconstruction with dynamic dictionaries
 in Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP '11), May 22–27, 2011
Abstract

Cited by 1 (1 self)
We consider the problem of parameter estimation for signals characterized by sums of parameterized functions. We present a dynamic dictionary subset selection approach to parameter estimation where we iteratively select a small number of dictionary elements and then alter the parameters of these dictionary elements to achieve better signal model fit. The proposed approach avoids the use of highly oversampled (and highly correlated) dictionary elements, which are needed in fixed dictionary approaches to reduce parameter bias associated with dictionary quantization. We demonstrate estimation performance on a sinusoidal signal estimation example. Index Terms — Sparse reconstruction, Parameter estimation,
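The select-then-refine idea in this abstract can be sketched on its own sinusoidal example. The sketch below is an illustrative toy, not the paper's algorithm: it picks the best-matching atom from a deliberately coarse cosine dictionary, then re-parameterizes that one atom by a shrinking local search instead of relying on a highly oversampled fixed grid. All names (`correlate`, `estimate_frequency`) and parameter choices are ours.

```python
import math

def correlate(signal, freq, n):
    # inner product of the signal with a unit-norm cosine atom at `freq`
    atom = [math.cos(2 * math.pi * freq * t / n) for t in range(n)]
    norm = math.sqrt(sum(a * a for a in atom))
    return sum(s * a / norm for s, a in zip(signal, atom))

def estimate_frequency(signal, n, coarse_step=0.5, refinements=20):
    # 1) select the best atom from a coarse, weakly correlated dictionary
    grid = [coarse_step * k for k in range(1, int(n / (2 * coarse_step)))]
    best = max(grid, key=lambda f: abs(correlate(signal, f, n)))
    # 2) iteratively re-parameterize the selected atom (shrinking search),
    #    which removes the quantization bias of the fixed grid
    step = coarse_step
    for _ in range(refinements):
        step /= 2.0
        candidates = [best - step, best, best + step]
        best = max(candidates, key=lambda f: abs(correlate(signal, f, n)))
    return best

n = 256
true_freq = 12.3  # cycles over the observation window (off the coarse grid)
signal = [math.cos(2 * math.pi * true_freq * t / n) for t in range(n)]
f_hat = estimate_frequency(signal, n)
```

With a 0.5-cycle coarse grid the best fixed atom is biased by up to 0.25 cycles; the refinement stage recovers the off-grid frequency to far better accuracy.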
A POLYNOMIAL-TIME INTERIOR-POINT METHOD FOR CONIC OPTIMIZATION, WITH INEXACT BARRIER EVALUATIONS
Abstract

Cited by 1 (0 self)
We consider a primal-dual short-step interior-point method for conic convex optimization problems for which exact evaluation of the gradient and Hessian of the primal and dual barrier functions is either impossible or prohibitively expensive. As our main contribution, we show that if approximate gradients and Hessians of the primal barrier function can be computed, and the relative errors in such quantities are not too large, then the method has polynomial worst-case iteration complexity. (In particular, polynomial iteration complexity ensues when the gradient and Hessian are evaluated exactly.) In addition, the algorithm requires no evaluation—or even approximate evaluation—of quantities related to the barrier function for the dual cone, even for problems in which the underlying cone is not self-dual.
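The core claim, that a barrier method tolerates bounded relative error in its derivatives, can be illustrated on a one-variable toy problem. This is not the paper's primal-dual method: it is a damped Newton iteration on the log-barrier subproblem min c·x − μ·log(x) over x > 0 (exact minimizer x = μ/c), with a synthetic multiplicative error injected into the gradient.

```python
def barrier_newton(c, mu, x0, iters, grad_err=0.0):
    # minimize c*x - mu*log(x) over x > 0 with (possibly inexact) derivatives
    x = x0
    for k in range(iters):
        g = c - mu / x          # exact gradient of the barrier subproblem
        h = mu / (x * x)        # exact Hessian (positive for x > 0)
        # inject a bounded, alternating relative error into the gradient
        g_approx = g * (1.0 + grad_err * (-1) ** k)
        step = -g_approx / h
        # damp the step so the iterate stays inside the cone x > 0
        while x + step <= 0:
            step *= 0.5
        x += step
    return x

mu, c = 0.1, 2.0                 # exact minimizer: x* = mu / c = 0.05
x_exact = barrier_newton(c, mu, 1.0, 50)
x_inexact = barrier_newton(c, mu, 1.0, 50, grad_err=0.05)
```

A multiplicative gradient error vanishes exactly where the true gradient vanishes, so the perturbed iteration still converges to x* = 0.05, only with a degraded contraction rate; this mirrors the paper's "relative errors not too large" condition.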
Strain-tunable Photonic Band Gap Microcavity Waveguides in Silicon at 1.55 µm
Abstract
The majority of photonic crystals developed to date are not dynamically tunable, especially in silicon-based structures. Dynamic tunability is required not only for reconfiguration of the optical characteristics based on user demand, but also for compensation against external disturbances and relaxation of tight device fabrication tolerances. Recent developments in photonic crystals have suggested interesting possibilities for static small-strain modulations to affect the optical characteristics [1-3], including a proposal for dynamic strain-tunability [4]. Here we report the theoretical analysis, device fabrication, and experimental measurements of tunable silicon photonic band gap microcavities in optical waveguides, through direct application of dynamic strain to the periodic structures [5]. The device concept consists of embedding the microcavity waveguide [6] on a deformable SiO2 membrane. The membrane is strained through integrated thin-film piezoelectric microactuators. We show a 1.54 nm shift in cavity resonances at 1.56 µm wavelengths for an applied piezoelectric strain of 0.04%. This is in excellent agreement with our modeling, predicted through first-order semi-analytical perturbation theory [7] and finite-difference time-domain calculations. The measured microcavity transmission shows resonances between 1.55 and 1.57 µm, with Q factors ranging from 159 to 280. For operation at infrared wavelengths, we integrate X-ray and electron-beam lithography (for critical 100 nm feature sizes) with thin-film piezoelectric surface micromachining. This level of integration permits realizable silicon-based photonic chip devices, such as high-density optical filters and spontaneous-emission enhancement devices with tunable configurations.
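The quoted figures can be cross-checked with two elementary relations: the fractional tuning Δλ/λ, and the resonance linewidth implied by Q = λ/Δλ_FWHM. The snippet below only reworks arithmetic already stated in the abstract; the variable names are ours.

```python
lambda_res_nm = 1560.0                      # resonance near 1.56 um, in nm
shift_nm = 1.54                             # measured shift at 0.04% strain
tuning_fraction = shift_nm / lambda_res_nm  # fractional resonance shift, ~0.1%

# Q = lambda / (FWHM linewidth)  =>  implied linewidth = lambda / Q
linewidth_nm_hi_q = lambda_res_nm / 280.0   # narrowest reported resonance
linewidth_nm_lo_q = lambda_res_nm / 159.0   # broadest reported resonance
```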
A Probabilistic Framework For Object Recognition In Video
 in: Proc. of International Conference on Image Processing, 2004
Abstract
We propose a solution to the problem of object recognition given a continuous video sequence containing multiple views of an object. Initially, object models are acquired from images of the objects taken from different views. Recognition is achieved from the video sequences by employing a multiple-hypothesis approach. Appearance similarity and pose-transition smoothness constraints are used to estimate the probability of the measurement being generated from a certain model hypothesis at each time instant. A smooth gradient direction feature that is quasi-invariant to illumination changes and noise is used to represent the appearance of the object. The pose of the object at each time instant is modelled as a von Mises-Fisher distribution. Recognition is achieved by choosing the hypothesis set that has accumulated the maximum evidence at the end of the sequence. We have performed detailed experiments demonstrating the viability of the proposed approach.
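The von Mises-Fisher pose model mentioned above has a simple closed-form density on the unit sphere. A minimal sketch (ours, not the paper's code), using the S² normalizing constant κ/(4π·sinh κ):

```python
import math

def vmf_logpdf(x, mu, kappa):
    # log-density of the von Mises-Fisher distribution on the unit sphere S^2:
    #   f(x) = kappa / (4*pi*sinh(kappa)) * exp(kappa * dot(mu, x))
    # x and mu must be unit 3-vectors; kappa > 0 is the concentration
    dot = sum(a * b for a, b in zip(mu, x))
    log_norm = math.log(kappa) - math.log(4 * math.pi * math.sinh(kappa))
    return log_norm + kappa * dot

mu = (0.0, 0.0, 1.0)          # mean pose direction
aligned = vmf_logpdf((0.0, 0.0, 1.0), mu, kappa=10.0)
opposite = vmf_logpdf((0.0, 0.0, -1.0), mu, kappa=10.0)
```

The log-density gap between the aligned and opposite directions is exactly 2κ, which is what makes κ a natural knob for the pose-smoothness constraint.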
Model-Constrained Optimization methods for . . .
, 2007
Abstract
Most model reduction techniques employ a projection framework that utilizes a reduced-space basis. The basis is usually formed as the span of a set of solutions of the large-scale system, which are computed for selected values (samples) of input parameters and forcing inputs. In existing model reduction techniques, choosing where and how many samples to generate has been, in general, an ad hoc process. A key challenge is therefore how to systematically sample the input space, which is of high dimension for many applications of interest. This thesis proposes and analyzes a model-constrained, greedy-based adaptive sampling approach in which the parametric input sampling problem is formulated as an optimization problem that targets an error estimate of the reduced-model output prediction. The method solves the optimization problem to find a locally optimal point
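The greedy adaptive sampling loop described above can be sketched in a few lines. This toy (not the thesis's method: the "full model" is a cheap parameterized vector, and the error estimate is the exact projection residual rather than an output-error surrogate) shows the structure: repeatedly sample the parameter where the current reduced basis is worst, then enrich the basis with that snapshot.

```python
import math

def snapshot(p, n=32):
    # stand-in for a full-model solve at parameter p (a toy parameterized vector)
    return [math.sin((k + 1) * p) / (k + 1) for k in range(n)]

def residual_norm(v, basis):
    # distance from v to span(basis); the basis is kept orthonormal, so this
    # doubles as the error estimate driving the greedy selection
    r = list(v)
    for b in basis:
        coef = sum(x * y for x, y in zip(r, b))
        r = [x - coef * y for x, y in zip(r, b)]
    return math.sqrt(sum(x * x for x in r))

def greedy_sample(candidates, n_samples):
    basis, picked = [], []
    for _ in range(n_samples):
        # greedy step: sample where the current reduced basis is worst
        p = max(candidates, key=lambda q: residual_norm(snapshot(q), basis))
        picked.append(p)
        # orthonormalize the new snapshot into the basis (Gram-Schmidt)
        r = list(snapshot(p))
        for b in basis:
            coef = sum(x * y for x, y in zip(r, b))
            r = [x - coef * y for x, y in zip(r, b)]
        nrm = math.sqrt(sum(x * x for x in r))
        basis.append([x / nrm for x in r])
    return picked, basis

params = [0.1 * k for k in range(1, 31)]        # candidate parameter grid
picked, basis = greedy_sample(params, 4)
worst = max(residual_norm(snapshot(p), basis) for p in params)
```

Each pick drives its own residual to zero, so successive picks are distinct and the worst-case residual over the candidate set is monotonically non-increasing.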
Multiple Shell Q-Ball Imaging within Constant Solid Angle
, 2009
Abstract
Q-ball imaging (QBI) is a high angular resolution diffusion imaging (HARDI) technique which has proven very successful in resolving multiple intravoxel fiber orientations in MR images. The standard computation of the orientation distribution function (ODF, the probability of diffusion in a given direction) from q-ball data uses linear radial projection, neglecting the change in the volume element along each direction. This results in spherical distributions that are different from the true ODFs. For instance, they are neither normalized nor as sharp as expected, and generally require post-processing, such as artificial sharpening or spherical deconvolution. In this paper, a new technique is proposed that, by considering the solid angle factor, uses the mathematically correct definition of the ODF and results in a dimensionless and normalized ODF expression. Our model is flexible enough that ODFs can either be estimated from single q-shell datasets, or exploit the greater information available from multiple q-shell acquisitions. We show that this can be achieved by using a more accurate multi-exponential model for the diffusion signal. The improved performance of the proposed method is demonstrated on artificial data and real HARDI volumes.
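For a single Gaussian (tensor) diffusion propagator, the solid-angle ODF has the closed form ODF(u) = (1/4π)·det(D)^(-1/2)·(uᵀD⁻¹u)^(-3/2), which integrates to 1 over the sphere by construction: exactly the normalization property that linear radial projection lacks. A numerical check of that property (our own sketch, restricted to a diagonal tensor):

```python
import math

def odf_csa(u, d_inv, det_d):
    # solid-angle ODF of a Gaussian propagator (closed form):
    #   ODF(u) = (1/(4*pi)) * det(D)^(-1/2) * (u^T D^-1 u)^(-3/2)
    # d_inv holds the diagonal of D^-1 (diagonal tensor assumed here)
    q = sum(d_inv[i] * u[i] * u[i] for i in range(3))
    return (1.0 / (4.0 * math.pi)) / (math.sqrt(det_d) * q ** 1.5)

# axially symmetric diagonal tensor: fast diffusion along z (the "fiber")
d = (0.3, 0.3, 1.7)
d_inv = tuple(1.0 / x for x in d)
det_d = d[0] * d[1] * d[2]

# midpoint-rule integration of the ODF over the sphere: should give 1
n_theta, n_phi, total = 200, 200, 0.0
for i in range(n_theta):
    theta = math.pi * (i + 0.5) / n_theta
    for j in range(n_phi):
        phi = 2 * math.pi * (j + 0.5) / n_phi
        u = (math.sin(theta) * math.cos(phi),
             math.sin(theta) * math.sin(phi),
             math.cos(theta))
        total += odf_csa(u, d_inv, det_d) * math.sin(theta)
total *= (math.pi / n_theta) * (2 * math.pi / n_phi)

peak = odf_csa((0.0, 0.0, 1.0), d_inv, det_d)   # along the fiber
perp = odf_csa((1.0, 0.0, 0.0), d_inv, det_d)   # across the fiber
```

The dimensionless ODF is sharply peaked along the fiber direction (the peak-to-perpendicular ratio is (d_z/d_x)^(3/2)) while still summing to unit probability, with no artificial sharpening needed.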
NOVEL MODELS FOR HOURLY SOLAR RADIATION USING A 2D APPROACH
, 2008
Abstract
In this work, one year of hourly solar radiation data is analyzed and modeled using a novel visualization method. Using a 2-D (two-dimensional) surface fitting approach, the general behavior of the solar radiation over a year is modeled. With the help of the newly adopted visualization approach, a total of 9 analytical surface models are obtained and compared. The Gaussian surface model with proper model parameters is found to be the most accurate of the tested analytical models for data characterization purposes. The accuracy of this surface model is tested and compared with a dynamic surface model obtained from a feedforward Neural Network (NN). The analytical surface models and the NN surface model are compared in terms of Root Mean Square Error (RMSE). The NN surface model is found to give better results, with smaller RMSE values. However, unlike the specificity of the NN surface model, the analytical surface model provides a simple, intuitive, and more generalized form that can be suitable for several geographical locations on Earth.
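The Gaussian-surface idea and the RMSE comparison can be made concrete on synthetic data. Everything below is a hedged sketch, not the paper's dataset or fitted parameters: `gaussian_surface` is our guess at the general shape of such a model (peak amplitude, center hour and day, and two widths), and the RMSE is computed exactly as named.

```python
import math, random

def gaussian_surface(hour, day, amp, h0, d0, sh, sd):
    # 2-D Gaussian surface over (hour-of-day, day-of-year)
    return amp * math.exp(-((hour - h0) ** 2 / (2 * sh ** 2)
                            + (day - d0) ** 2 / (2 * sd ** 2)))

def rmse(data, params):
    # Root Mean Square Error of the surface model against the data
    err = [(z - gaussian_surface(h, d, *params)) ** 2 for h, d, z in data]
    return math.sqrt(sum(err) / len(err))

random.seed(0)
# hypothetical "true" surface: ~900 W/m^2 peak at solar noon, midsummer
true = (900.0, 12.5, 172.0, 3.5, 80.0)
data = [(h, d, gaussian_surface(h, d, *true) + random.gauss(0, 20))
        for d in range(1, 366, 10) for h in range(6, 19)]

fit_rmse = rmse(data, true)   # surface with the generating parameters
flat_rmse = rmse(data, (400.0, 12.5, 172.0, 6.0, 120.0))  # a poorer surface
```

With the generating parameters the RMSE reduces to the injected noise level, illustrating how RMSE separates a well-parameterized surface from a badly-parameterized one.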
A Basic Foundation for Unravelling Quantity Discounts: Gaining more Insight into Supplier Cost Mechanisms
Abstract
Selling organizations often offer quantity discount schedules, but do not provide the underlying Quantity Discount Price Functions (QDPF). An analysis of how QDPF can be derived from discount schedules is lacking in the literature. This is remarkable, as QDPF contain useful information for buying organizations. QDPF give more insight into the fixed and variable costs of selling organizations and can be a useful tool for buying organizations in selection and negotiation processes. Furthermore, QDPF can be used for calculating and allocating price savings in group purchasing and multiple sourcing decisions. In this paper we develop one general QDPF and two related measures for negotiating spaces. We show that our QDPF gives a highly reliable approximation of 66 quantity discount schedules from different selling organizations. Finally, we compare the QDPF parameters of the 66 schedules and discuss their basic properties.

Educator and practitioner summary: In this paper we develop a general quantity discount price function and two indicators for negotiating spaces. These instruments provide more insight into the fixed and variable costs of selling organizations and can be used (1) as a tool in selection and negotiation processes, and (2) to calculate and allocate price savings in multiple sourcing decisions and group purchasing.
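The abstract does not give the functional form of its QDPF, so the sketch below assumes a common textbook shape as a stand-in: unit price p(q) = F/q + v, a fixed cost F spread over the order quantity plus a variable unit cost v, fitted to a discount schedule by ordinary least squares in the regressor 1/q. Both `fit_qdpf` and the toy schedule are hypothetical, not the paper's function or data.

```python
def fit_qdpf(schedule):
    # hypothetical QDPF form: unit price p(q) = F/q + v
    # fitted by ordinary least squares on the regressor x = 1/q
    xs = [1.0 / q for q, _ in schedule]
    ys = [p for _, p in schedule]
    n = len(schedule)
    mx, my = sum(xs) / n, sum(ys) / n
    fixed = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    variable = my - fixed * mx
    return fixed, variable

# toy discount schedule: (quantity, unit price), consistent with F=100, v=5
schedule = [(10, 15.0), (50, 7.0), (100, 6.0), (500, 5.2), (1000, 5.1)]
F, v = fit_qdpf(schedule)
predicted_100 = F / 100 + v   # interpolated unit price at q = 100
```

Recovering F and v from a posted schedule is the kind of insight into supplier fixed and variable costs the abstract describes: once the function is known, a buyer can evaluate quantities the schedule never listed.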
on Aluminum Alloys
Abstract
We present a coupled simulation–optimization procedure for the improvement of the laser welding process. This is achieved by introducing a functional to measure the quality of a weld and then performing a mathematical optimization of it. The welding process entering the functional is simulated using an adaptive finite element method for the thermal and mechanical subproblems. The functional is optimized using a constrained mathematical optimization method, and the optimized parameters giving the desired properties of the welds are found. In this paper, the results obtained for two different optimization goals are presented: a general test problem in which all good properties of the welds are assumed to have the same importance, and another in which higher importance is given to the residual stress and the full penetration of the weld.

Key words: welding, laser welding. PACS: 44.05.+e, 46.35.+z, 81.20.Vj
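The simulate-then-optimize loop can be caricatured without any finite element solver: replace the simulation by a cheap analytic stand-in for the weld-quality functional and minimize it over the process parameters. Everything below (the functional itself, the penetration proxy power/speed, the parameter ranges) is invented for illustration and is not the paper's model.

```python
def weld_quality(power, speed):
    # toy stand-in for the simulated weld functional: penalizes deviation
    # from a target penetration proxy (power/speed) and residual stress,
    # which is taken to grow with laser power
    penetration = power / speed
    residual_stress = 0.0002 * power ** 2
    return (penetration - 50.0) ** 2 + residual_stress

def grid_minimize(f, powers, speeds):
    # crude constrained optimization: exhaustive search over the feasible grid
    best = min((f(p, s), p, s) for p in powers for s in speeds)
    return best[1], best[2]

powers = [100 + 10 * i for i in range(41)]    # feasible laser powers, W
speeds = [1.0 + 0.25 * i for i in range(37)]  # feasible weld speeds, mm/s
p_opt, s_opt = grid_minimize(weld_quality, powers, speeds)
```

With these (invented) weights, the optimum sits at the lowest power that still reaches the target penetration, reflecting the trade-off between full penetration and residual stress that the second optimization goal in the abstract emphasizes.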