Results 1-10 of 81
Optimal motion and structure estimation
IEEE Trans. Pattern Anal. Mach. Intell., 1993
Cited by 150 (5 self)
This paper studies optimal estimation for motion and structure from point correspondences. (1) A study of the characteristics of the problem provides insight into the need for optimal estimation. (2) Methods have been developed for optimal estimation with known or unknown noise distribution. Simulations showed that the optimal estimators achieve remarkable improvement over the preliminary estimates given by the linear algorithm. (3) An approach to estimating errors in the optimized solution is presented. (4) The performance of the algorithm is compared with a theoretical lower bound, the Cramér-Rao bound. Simulations show that the actual errors essentially reach the bound. (5) A batch least-squares technique (Levenberg-Marquardt) and a sequential least-squares technique (iterated extended Kalman filtering) are analyzed and compared. The analysis and experiments show that, in general, a batch technique performs better than a sequential technique for nonlinear problems. A recursive batch processing technique is proposed for nonlinear problems that require recursive estimation.
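The batch least-squares (Levenberg-Marquardt) idea in this abstract can be illustrated with a minimal sketch. This is a hypothetical one-parameter toy, fitting y = exp(a·x), not the paper's motion-and-structure estimator; the data, damping schedule, and model are assumptions made only for illustration.

```python
import math

def levenberg_marquardt(xs, ys, a0, n_iter=50):
    """Minimal scalar Levenberg-Marquardt for fitting y = exp(a*x).

    Toy sketch only: the paper applies batch LM to the much larger
    motion-and-structure problem; this just shows the damped
    Gauss-Newton update on a single parameter.
    """
    a, lam = a0, 1e-3

    def sse(a):
        return sum((y - math.exp(a * x)) ** 2 for x, y in zip(xs, ys))

    for _ in range(n_iter):
        # residuals r_i = y_i - f(x_i; a), Jacobian J_i = df/da = x*exp(a*x)
        g = sum((y - math.exp(a * x)) * x * math.exp(a * x) for x, y in zip(xs, ys))
        h = sum((x * math.exp(a * x)) ** 2 for x in xs)
        step = g / (h * (1.0 + lam))          # damped Gauss-Newton step
        if sse(a + step) < sse(a):
            a, lam = a + step, lam / 10       # accept: move toward Gauss-Newton
        else:
            lam *= 10                         # reject: move toward gradient descent
    return a

xs = [0.1 * i for i in range(10)]
ys = [math.exp(0.5 * x) for x in xs]          # noise-free data, true a = 0.5
a_hat = levenberg_marquardt(xs, ys, a0=0.0)
```

On this noise-free toy the damped iteration recovers the true parameter to high accuracy within a few accepted steps.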
Interpreting neuronal population activity by reconstruction: unified framework with application to hippocampal place cells
J. Neurophysiol., 1998
Cited by 112 (7 self)
… such as the orientation of a line in the visual field or the location of the body in space are coded as activity levels in populations of neurons. Reconstruction or decoding is an inverse problem in which the physical variables are estimated from observed neural activity. Reconstruction is useful first in quantifying how much information about the physical variables is present in the population and, second, in providing insight into how the brain might use distributed representations in solving related computational problems such as visual object recognition and spatial navigation. Two classes of reconstruction methods, namely, probabilistic or Bayesian methods and basis function methods, are discussed. They include important existing methods … Two main goals for reconstruction are approached in this paper. The first goal is technical and … the population vector method applied to motor cortical activities during various reaching tasks (Georgopoulos et al. 1986, 1989; Schwartz 1994) and the template matching method applied to disparity-selective cells in the visual cortex (Lehky and Sejnowski 1990) and hippocampal place cells during rapid learning of place fields in a novel environment (Wilson and McNaughton 1993). In these examples, reconstruction extracts information from noisy neuronal population activity and transforms it to a …
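A minimal sketch of the probabilistic (Bayesian) reconstruction method this abstract mentions, assuming Poisson neurons with Gaussian tuning curves and a flat prior. The tuning parameters, the grid search, and the rounded noise-free "spike counts" are illustrative assumptions, not the paper's data or exact estimator.

```python
import math

def decode_map(counts, centers, s_grid, amp=10.0, width=1.0, dt=1.0):
    """Maximum a posteriori (flat prior) decoding of a scalar stimulus
    from a population of Poisson neurons with Gaussian tuning curves.
    """
    def rate(c, s):
        # Gaussian tuning curve plus a small baseline to keep log defined
        return amp * math.exp(-0.5 * ((s - c) / width) ** 2) + 0.1

    best_s, best_ll = None, -float("inf")
    for s in s_grid:
        # Poisson log-likelihood: sum_i [ n_i*log(f_i(s)*dt) - f_i(s)*dt ]
        ll = sum(n * math.log(rate(c, s) * dt) - rate(c, s) * dt
                 for n, c in zip(counts, centers))
        if ll > best_ll:
            best_s, best_ll = s, ll
    return best_s

centers = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]            # preferred stimuli
true_s = 1.3
counts = [round(10.0 * math.exp(-0.5 * (true_s - c) ** 2) + 0.1)
          for c in centers]                            # idealized spike counts
s_grid = [i * 0.01 for i in range(-300, 401)]
s_hat = decode_map(counts, centers, s_grid)
```

With counts rounded from the tuning curves, the decoded value lands close to the true stimulus; with real spike data the same likelihood would simply be evaluated on observed counts.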
A Sequential Procedure for Multihypothesis Testing
IEEE Trans. Inform. Theory, 1994
Cited by 74 (4 self)
Abstract. The sequential testing of more than two hypotheses has important applications in direct-sequence spread-spectrum signal acquisition, multiple-resolution-element radar, and other areas. A useful sequential test, which we term the MSPRT, is studied in this paper. The test is shown to be a generalization of the Sequential Probability Ratio Test. Under Bayesian assumptions, it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large. Bounds on error probabilities are derived, and asymptotic expressions for the stopping time and error probabilities are given. A design procedure is presented for determining the parameters of the MSPRT. Two examples involving Gaussian densities are included, and comparisons are made between simulation results and asymptotic expressions. Comparisons with Bayesian fixed-sample-size tests are also made, and it is found that the MSPRT requires two to three times fewer samples on average. Index Terms: Sequential analysis, hypothesis testing, informational divergence, nonlinear renewal theory.
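The core MSPRT idea, keep sampling until some posterior probability crosses a threshold, can be sketched for a Gaussian-mean example. The threshold choice, densities, and the fixed observation sequence below are illustrative assumptions, not the paper's design procedure.

```python
import math

def msprt(samples, means, sigma=1.0, eps=0.01):
    """Sketch of the MSPRT for K simple Gaussian-mean hypotheses.

    Uniform priors; stops the first time some posterior exceeds
    1/(1+eps) and accepts that hypothesis.
    """
    k = len(means)
    log_post = [math.log(1.0 / k)] * k            # uniform prior
    j_star = 0
    for n, x in enumerate(samples, start=1):
        for j in range(k):                        # accumulate log-likelihoods
            log_post[j] += -0.5 * ((x - means[j]) / sigma) ** 2
        norm = math.log(sum(math.exp(lp) for lp in log_post))
        post = [math.exp(lp - norm) for lp in log_post]
        j_star = max(range(k), key=lambda j: post[j])
        if post[j_star] >= 1.0 / (1.0 + eps):     # MSPRT stopping rule
            return j_star, n
    return j_star, len(samples)

# deterministic observations near mean 2.0, to keep the sketch reproducible
samples = [2.1, 1.9, 2.2, 2.0, 1.8, 2.05, 1.95, 2.1]
decision, stop_time = msprt(samples, means=[0.0, 2.0, 4.0])
```

For this sequence the posterior of the middle hypothesis crosses the threshold after three observations, so the test stops early with decision 1, which is the sequential-savings behavior the abstract quantifies.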
From experiment design to closed-loop control
2005
Cited by 45 (1 self)
The links between identification and control are examined. The main trends in this research area are summarized, with particular focus on the design of low-complexity controllers from a statistical perspective. It is argued that a guiding principle should be to model as well as possible before any model or controller simplifications are made, as this ensures the best statistical accuracy. This does not necessarily mean that a full-order model is always necessary, as well-designed experiments allow restricted-complexity models to be near-optimal. Experiment design can therefore be seen as the key to successful applications. For this reason, particular attention is given to the interaction between experimental constraints and performance specifications.
An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions
SIAM J. Appl. Math., 1978
Cited by 36 (1 self)
Abstract. This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step size lies between 0 and 2. We also show that the step size which yields optimal local convergence rates for large samples is determined in a sense by the "separation" of the component normal densities and is bounded below by a number between 1 and 2.
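The successive-approximations procedure with step size 1 is what is now usually called EM for normal mixtures, and it can be sketched briefly. Known, equal component variances are an assumption made to keep the sketch short; the paper treats the general case and general step sizes.

```python
import math
import random

def em_normal_mixture(data, n_iter=200):
    """EM for a two-component normal mixture: the likelihood equations
    iterated as a fixed point (the paper's procedure with step size 1).
    Assumes known, equal component variances (sigma = 1) for brevity.
    """
    p, mu1, mu2, sigma = 0.5, min(data), max(data), 1.0
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        resp = []
        for x in data:
            a = p * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            b = (1 - p) * math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            resp.append(a / (a + b))
        # M-step: reweighted means and mixing proportion
        w = sum(resp)
        p = w / len(data)
        mu1 = sum(r * x for r, x in zip(resp, data)) / w
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - w)
    return p, mu1, mu2

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(300)]
        + [random.gauss(5.0, 1.0) for _ in range(300)])
p_hat, mu1_hat, mu2_hat = em_normal_mixture(data)
```

With well-separated components the fixed-point iteration converges to estimates close to the generating parameters; the paper's local-convergence result explains why, and why over-relaxed step sizes between 1 and 2 can converge faster.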
Bootstrap estimate of Kullback-Leibler information for model selection
Statistica Sinica, 1997
Cited by 27 (0 self)
Estimation of the Kullback-Leibler amount of information is a crucial part of deriving a statistical model selection procedure based on the likelihood principle, like AIC. To discriminate nested models, we have to estimate it up to a term of constant order, while the Kullback-Leibler information itself is of the order of the number of observations. The correction term employed in AIC is one way to fulfill this requirement, but it is a simple-minded bias correction to the log maximum likelihood. Therefore there is no assurance that such a bias correction yields a good estimate of the Kullback-Leibler information. In this paper, as an alternative, bootstrap-type estimation is considered. We first show that the bootstrap estimates proposed by Efron (1983, 1986, 1993) and Cavanaugh and Shumway (1994) are asymptotically equivalent, and that there exist many other equivalent bootstrap estimates. We also show that all such methods are asymptotically equivalent to a non-bootstrap method known as TIC (Takeuchi's Information Criterion), which is a generalization of AIC.
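The quantity being bias-corrected can be made concrete with a bootstrap estimate of the optimism of the log maximum likelihood, the term AIC replaces with the constant k. A simple Gaussian model is an assumed stand-in for the paper's general setting, and the resampling scheme below is the plain Efron-style bootstrap, not any specific variant the paper analyzes.

```python
import math
import random

def gauss_loglik(data, mu, var):
    """Gaussian log-likelihood of data under N(mu, var)."""
    n = len(data)
    return (-0.5 * n * math.log(2 * math.pi * var)
            - sum((x - mu) ** 2 for x in data) / (2 * var))

def ml_fit(data):
    """ML estimates (mean, variance) for a Gaussian."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return mu, var

def bootstrap_optimism(data, n_boot=200, seed=1):
    """Bootstrap estimate of the optimism of the log maximum likelihood,
    i.e. the quantity AIC corrects with the constant k (here k = 2)."""
    rng = random.Random(seed)
    n = len(data)
    total = 0.0
    for _ in range(n_boot):
        boot = [data[rng.randrange(n)] for _ in range(n)]
        mu_b, var_b = ml_fit(boot)
        # fit evaluated on the resample minus fit on the original sample
        total += gauss_loglik(boot, mu_b, var_b) - gauss_loglik(data, mu_b, var_b)
    return total / n_boot

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(200)]
optimism_hat = bootstrap_optimism(data)
```

For this two-parameter model the bootstrap average comes out near k = 2, the same constant AIC uses, illustrating the asymptotic equivalence the paper establishes.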
Maximum likelihood estimation of signal amplitude and noise variance from MR data
Magnetic Resonance in Medicine, 2004
Cited by 22 (0 self)
In magnetic resonance imaging, the raw data, which are acquired in spatial frequency space, are intrinsically complex-valued and corrupted by Gaussian-distributed noise. After applying an inverse Fourier transform, the data remain complex-valued and Gaussian-distributed. If the signal amplitude is to be estimated, one has two options. It can be estimated directly from the complex-valued data set, or one can first perform a magnitude operation on this data set, which changes the distribution of the data from Gaussian to Rician, and estimate the signal amplitude from the magnitude image thus obtained. Similarly, the noise variance can be estimated from both the complex and magnitude data sets. This paper addresses the question of whether it is better to use complex-valued data or magnitude data for the estimation of these parameters using the maximum likelihood method. As a performance criterion, the mean-squared error (MSE) is used.
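The complex-versus-magnitude issue can be seen in a small Monte-Carlo sketch. The zero-phase signal and the naive magnitude average below are simplifying assumptions (the paper uses full ML estimators for both cases); the sketch only exhibits the upward bias the Rician magnitude data introduce.

```python
import math
import random

def simulate(A=2.0, sigma=1.0, n=5000, seed=0):
    """Estimate a signal amplitude A from complex-valued data vs. from
    magnitude (Rician) data, under an assumed zero-phase signal.
    """
    rng = random.Random(seed)
    re = [A + rng.gauss(0, sigma) for _ in range(n)]    # real channel
    im = [rng.gauss(0, sigma) for _ in range(n)]        # imaginary channel
    # complex-data estimate: average the real channel (ML when phase is known)
    a_complex = sum(re) / n
    # magnitude-data estimate: naive average of |z|; biased upward because
    # the magnitude operation turns the Gaussian noise into Rician noise
    a_magnitude = sum(math.hypot(r, i) for r, i in zip(re, im)) / n
    return a_complex, a_magnitude

a_c, a_m = simulate()
```

The complex-data average is essentially unbiased, while the magnitude average overshoots the true amplitude, which is why a proper Rician likelihood is needed when only magnitude images are available.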
Rethinking biased estimation: Improving maximum likelihood and the Cramér-Rao bound
Trends Signal Process., 2007
Cited by 21 (12 self)
One of the prime goals of statistical estimation theory is the development of performance bounds when estimating parameters of interest in a given model, as well as the construction of estimators that achieve these limits. When the parameters to be estimated are deterministic, a popular approach is to bound the mean-squared error (MSE) achievable within the class of unbiased estimators. Although it is well known that lower MSE can be obtained by allowing for a bias, in applications it is typically unclear how to choose an appropriate bias. In this survey we introduce MSE bounds that are lower than the unbiased Cramér-Rao bound (CRB) for all values of the unknowns. We then present a general framework for constructing biased estimators with smaller MSE than the standard maximum-likelihood (ML) approach, regardless of the true unknown values. Specializing the results to the linear Gaussian model, we derive a class of estimators that dominate least-squares in terms of MSE. We also introduce methods for choosing regularization parameters in penalized ML estimators that outperform standard techniques such as cross-validation.
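A classic instance of the biased-estimation idea is James-Stein shrinkage, which dominates the ML estimator in the Gaussian location model. The sketch below is a Monte-Carlo comparison under an assumed identity covariance and a particular true parameter; it illustrates the dominance phenomenon, not the survey's more general constructions.

```python
import random

def mse_ml_vs_js(theta, n_trials=2000, seed=0):
    """Monte-Carlo MSE of maximum likelihood (x itself) vs. the
    positive-part James-Stein estimator for x ~ N(theta, I_p).
    """
    rng = random.Random(seed)
    p = len(theta)
    mse_ml = mse_js = 0.0
    for _ in range(n_trials):
        x = [t + rng.gauss(0, 1) for t in theta]
        s = sum(v * v for v in x)
        shrink = max(0.0, 1.0 - (p - 2) / s)      # positive-part James-Stein
        mse_ml += sum((v - t) ** 2 for v, t in zip(x, theta))
        mse_js += sum((shrink * v - t) ** 2 for v, t in zip(x, theta))
    return mse_ml / n_trials, mse_js / n_trials

mse_ml, mse_js = mse_ml_vs_js(theta=[1.0] * 10)
```

The ML risk sits near the dimension p = 10, while the shrinkage estimator achieves visibly lower MSE at this (and in fact every) true parameter value, which is the sense in which a deliberately biased estimator can beat the CRB-matching unbiased one.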
Automatic spectral analysis with time series models
IEEE Trans. Instrum. Meas., 2002
Cited by 21 (4 self)
Abstract. Increased computational speed and developments in the robustness of algorithms have created the possibility of automatically identifying a well-fitting time series model for stochastic data. It is possible to compute more than 500 models and to select only one, which is certainly among the better models, if not the very best. That model characterizes the spectral density of the data. Time series models are excellent for random data if the model type and the model order are known. For unknown data characteristics, a large number of candidate models have to be computed. This necessarily includes too low or too high model orders and models of the wrong types, thus requiring robust estimation methods. The computer selects a model order for each of the three model types. From those three, the model type with the smallest expectation of the prediction error is selected. That uniquely selected model includes precisely the statistically significant details present in the data.
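The order-selection step can be sketched for the AR case alone: fit several candidate orders and keep the one with the best penalized prediction-error score. The least-squares fit and AIC-style penalty below are generic assumptions; the paper's criteria and its three model types (AR, MA, ARMA) are more elaborate.

```python
import math
import random

def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    xs = [0.0] * n
    for r in range(n - 1, -1, -1):
        xs[r] = (M[r][n] - sum(M[r][k] * xs[k] for k in range(r + 1, n))) / M[r][r]
    return xs

def fit_ar(x, p):
    """Least-squares AR(p) fit; returns (coefficients, residual variance)."""
    n = len(x)
    A = [[sum(x[t - i] * x[t - j] for t in range(p, n)) for j in range(1, p + 1)]
         for i in range(1, p + 1)]
    b = [sum(x[t] * x[t - i] for t in range(p, n)) for i in range(1, p + 1)]
    a = solve(A, b)
    rss = sum((x[t] - sum(a[i] * x[t - 1 - i] for i in range(p))) ** 2
              for t in range(p, n))
    return a, rss / (n - p)

# simulate AR(2): x_t = 0.75 x_{t-1} - 0.5 x_{t-2} + e_t
rng = random.Random(3)
x = [0.0, 0.0]
for _ in range(1000):
    x.append(0.75 * x[-1] - 0.5 * x[-2] + rng.gauss(0, 1))
x = x[2:]

# score candidate orders with an AIC-style penalty and keep the best
scores = {}
for p in range(1, 7):
    _, v = fit_ar(x, p)
    scores[p] = len(x) * math.log(v) + 2 * p
best_p = min(scores, key=scores.get)
best_a, _ = fit_ar(x, best_p)
```

On this simulated AR(2) series the selected order is at least 2 (order 1 is heavily penalized by its inflated residual variance), and the fitted leading coefficients recover the generating values closely.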
Strength of Two Data Encryption Standard Implementations under Timing Attacks
ACM Transactions on Information and System Security, 1998
Cited by 19 (0 self)
We study the vulnerability of several implementations of the Data Encryption Standard (DES) cryptosystem under a timing attack. A timing attack is a method designed to break cryptographic systems that was recently proposed by Paul Kocher. It exploits the engineering aspects involved in the implementation of cryptosystems and might succeed even against cryptosystems that remain impervious to sophisticated cryptanalytic techniques. A timing attack is, essentially, a way of obtaining some user's private information by carefully measuring the time it takes the user to carry out cryptographic operations. In this work we analyze two implementations of DES. We show that a timing attack yields the Hamming weight of the key used by both DES implementations. Moreover, the attack is computationally inexpensive. We also show that all the design characteristics of the target system, necessary to carry out the timing attack, can be inferred from timing measurements.
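The Hamming-weight leak can be illustrated with a toy simulation: if execution time grows linearly with the key's Hamming weight, averaging many timings recovers the weight. The linear timing model and its constants are assumptions made for the sketch; real DES timing behavior, as the paper shows, is far messier and requires careful measurement.

```python
import random

def hamming_weight(k):
    """Number of set bits in the integer key."""
    return bin(k).count("1")

def simulate_attack(key, n_obs=4000, base=100.0, per_bit=1.0, noise=5.0, seed=0):
    """Toy timing attack: observe noisy times t = base + per_bit*w + e,
    where w is the key's Hamming weight, and invert the (assumed
    calibrated) linear model to estimate w.
    """
    rng = random.Random(seed)
    w = hamming_weight(key)
    times = [base + per_bit * w + rng.gauss(0, noise) for _ in range(n_obs)]
    mean_t = sum(times) / len(times)
    return round((mean_t - base) / per_bit)

secret_key = 0x1234567890AB        # toy "key"; only its weight leaks here
w_hat = simulate_attack(secret_key)
```

Even with per-measurement noise much larger than the per-bit timing difference, averaging a few thousand observations pins down the Hamming weight exactly, which is why the attack in the paper is computationally inexpensive.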