Results 1–10 of 20
Sequential Monte Carlo Methods to Train Neural Network Models
, 2000
Abstract

Cited by 35 (8 self)
We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent/sampling importance resampling algorithm (HySIR). In terms of computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimization strategy that allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving online, nonlinear, and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the option prices.
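The hybrid gradient-descent/SIR update described above can be sketched as follows, with a one-neuron linear "network" standing in for the real model; all names and parameter values (learning rate, jitter scale, noise level) are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def net(w, x):
    # Toy "network": a single linear neuron, y = w[0]*x + w[1].
    return w[..., 0] * x + w[..., 1]

def hysir_step(particles, x, y, lr=0.05, jitter=0.02, noise_std=0.1):
    """One hybrid gradient-descent / SIR update (a sketch of HySIR).

    Each particle is a candidate weight vector. It first takes a small
    gradient step toward the new observation (the 'hybrid' part), then
    is weighted by the observation likelihood and resampled.
    """
    # Gradient of the squared error for the toy linear neuron.
    err = net(particles, x) - y                      # shape (N,)
    grad = np.stack([err * x, err], axis=-1)         # shape (N, 2)
    particles = particles - lr * grad
    particles = particles + jitter * rng.standard_normal(particles.shape)

    # Importance weights from a Gaussian observation likelihood.
    logw = -0.5 * ((y - net(particles, x)) / noise_std) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()

    # Multinomial resampling.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Track the weights of a noisy linear signal y = 2x + 1.
particles = rng.standard_normal((500, 2))
for t in range(200):
    x = rng.uniform(-1.0, 1.0)
    y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal()
    particles = hysir_step(particles, x, y)

print(particles.mean(axis=0))  # posterior mean of [slope, intercept]
```

Because each particle is a full weight hypothesis, the resampled cloud approximates the sequential posterior over weights rather than a single point estimate.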
Biomolecular dynamics at long timesteps: Bridging the timescale gap between simulation and experimentation
 ANNU. REV. BIOPHYS. BIOMOL. STRUCT
, 1997
Abstract

Cited by 25 (10 self)
Innovative algorithms have been developed during the past decade for simulating Newtonian physics for macromolecules. A major goal is alleviation of the severe requirement that the integration timestep be small enough to resolve the fastest components of the motion and thus guarantee numerical stability. This timestep problem is challenging if strictly faster methods with the same all-atom resolution at small timesteps are sought. Mathematical techniques that have worked well in other multiple-timescale contexts (where the fast motions are rapidly decaying or largely decoupled from others) have not been as successful for biomolecules, where vibrational coupling is strong. This review examines general issues that limit the timestep and describes available methods (constrained, reduced-variable, implicit, symplectic, multiple-timestep, and normal-mode-based schemes). A section compares results of selected integrators for a model dipeptide, assessing physical and numerical performance. Included is our dual-timestep method LN, which relies on an approximate linearization of the equations of motion every Δt interval (5 fs or less), the solution of which is obtained by explicit integration at the inner timestep Δτ (e.g., 0.5 fs). LN is computationally competitive, providing speedup factors of 4–5, with results in good agreement with 0.5 fs trajectories. These collective algorithmic efforts help fill the gap between the time range that can be simulated and the timespans of major biological interest (milliseconds and longer). Still, only a hierarchy of models and methods, along with ...
Robust Full Bayesian Learning for Radial Basis Networks
, 2001
Abstract

Cited by 25 (4 self)
We propose a hierarchical full Bayesian model for radial basis networks. This model treats the model dimension (number of neurons), model parameters, ...
Robust Full Bayesian Learning for Neural Networks
, 1999
Abstract

Cited by 12 (9 self)
In this paper, we propose a hierarchical full Bayesian model for neural networks. This model treats the model dimension (number of neurons), model parameters, regularisation parameters and noise parameters as random variables that need to be estimated. We develop a reversible jump Markov chain Monte Carlo (MCMC) method to perform the necessary computations. We find that the results obtained using this method are not only better than those reported previously, but also appear to be robust with respect to the prior specification. In addition, we propose a novel and computationally efficient reversible jump MCMC simulated annealing algorithm to optimise neural networks. This algorithm enables us to maximise the joint posterior distribution of the network parameters and the number of basis functions. It performs a global search in the joint space of the parameters and number of parameters, thereby surmounting the problem of local minima. We show that by calibrating the full hierarchical ...
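A minimal sketch of the dimension-jumping idea (birth/death/update moves over the number of basis functions), assuming fixed amplitudes and widths and uniform priors on the centres; the paper's actual model and proposal mechanism are considerably richer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data: two Gaussian bumps plus noise. Amplitudes
# and widths are fixed here (a deliberate simplification; the paper also
# samples those, along with regularisation and noise parameters).
x = np.linspace(0.0, 1.0, 80)

def bumps(centers):
    if not centers:
        return np.zeros_like(x)
    return sum(np.exp(-0.5 * ((x - c) / 0.05) ** 2) for c in centers)

y = bumps([0.3, 0.7]) + 0.05 * rng.standard_normal(x.size)

def log_post(centers, lam=2.0, sigma=0.05):
    k = len(centers)
    ll = -0.5 * np.sum(((y - bumps(centers)) / sigma) ** 2)
    # Poisson(lam) prior on the number of basis functions k.
    return ll + k * np.log(lam) - np.sum(np.log(np.arange(1, k + 1)))

centers = [rng.uniform()]
for _ in range(4000):
    move = rng.choice(["birth", "death", "update"])
    prop = list(centers)
    if move == "birth":
        prop.append(rng.uniform())              # propose a new centre
    elif move == "death" and prop:
        prop.pop(rng.integers(len(prop)))       # remove a random centre
    elif move == "update" and prop:
        i = rng.integers(len(prop))
        prop[i] = float(np.clip(prop[i] + 0.05 * rng.standard_normal(), 0, 1))
    else:
        continue
    # With uniform birth proposals and a uniform prior on each centre,
    # the proposal densities cancel the centre prior, so the acceptance
    # ratio reduces to the posterior ratio (Jacobian = 1).
    if np.log(rng.uniform()) < log_post(prop) - log_post(centers):
        centers = prop

print(sorted(round(float(c), 2) for c in centers))  # chain settles near k = 2
```

The chain visits model dimensions as well as parameter values, so the sampler searches the joint space of k and the centres rather than optimising within a fixed architecture.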
Hybrid Monte Carlo with Adaptive Temperature in a Mixed-Canonical Ensemble: Efficient Conformational Analysis of RNA
 J. COMPUT. CHEM
, 1997
Abstract

Cited by 12 (4 self)
A hybrid Monte Carlo method with adaptive temperature choice is presented, which exactly generates the distribution of a mixed-canonical ensemble composed of two canonical ensembles at low and high temperature. Analysis of the resulting Markov chains with the reweighting technique shows efficient sampling of the canonical distribution at low temperature, whereas the high-temperature component facilitates conformational transitions, allowing shorter simulation times. The algorithm was tested by comparing analytical and numerical results for the small n-butane molecule before simulations were performed for a triribonucleotide. Sampling the complex multi-minima energy landscape of this small RNA segment, we observe enforced crossing of energy barriers.
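The mixed-canonical construction can be illustrated on a toy double-well potential: hybrid Monte Carlo samples a mixture of Boltzmann factors at two temperatures, and reweighting recovers the low-temperature averages. All potentials and parameter values below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def V(q):                      # double-well potential, barrier V(0) = 3
    return 3.0 * (q**2 - 1.0) ** 2

def U_mix(q, bL=6.0, bH=0.5, w=0.5):
    # Potential of the mixed-canonical ensemble: a mixture of Boltzmann
    # factors at a low (bL) and a high (bH) inverse temperature.
    a, b = -bL * V(q), -bH * V(q)
    m = max(a, b)
    return -(m + np.log(w * np.exp(a - m) + (1.0 - w) * np.exp(b - m)))

def grad_U(q, h=1e-5):         # numerical gradient keeps the sketch short
    return (U_mix(q + h) - U_mix(q - h)) / (2.0 * h)

def hmc_step(q, eps=0.05, L=15):
    p = rng.standard_normal()
    H0 = U_mix(q) + 0.5 * p * p
    qn, pn = q, p - 0.5 * eps * grad_U(q)       # leapfrog: half kick ...
    for i in range(L):
        qn = qn + eps * pn                      # ... drift ...
        pn = pn - eps * grad_U(qn) * (1.0 if i < L - 1 else 0.5)
    H1 = U_mix(qn) + 0.5 * pn * pn
    return qn if np.log(rng.uniform()) < H0 - H1 else q

q, qs = 1.0, []
for _ in range(2000):
    q = hmc_step(q)
    qs.append(q)
qs = np.array(qs)

# The high-temperature component lowers the effective barrier, so the
# chain crosses q = 0; reweighting recovers low-temperature averages.
crossings = int(np.sum(np.sign(qs[1:]) != np.sign(qs[:-1])))
logr = np.array([U_mix(v) - 6.0 * V(v) for v in qs])
r = np.exp(logr - logr.max())
mean_q2 = float(np.sum(r * qs**2) / np.sum(r))
print(crossings, mean_q2)
```

A pure low-temperature chain (barrier 6·V(0) = 18 in units of kT) would essentially never cross; the mixture reduces the effective barrier to about 2.2, which is why transitions appear within a short run.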
Bayesian Methods for Neural Networks
, 1999
Abstract

Cited by 10 (0 self)
Summary: The application of the Bayesian learning paradigm to neural networks results in a flexible and powerful nonlinear modelling framework that can be used for regression, density estimation, prediction and classification. Within this framework, all sources of uncertainty are expressed and measured by probabilities. This formulation allows for a probabilistic treatment of our a priori knowledge, domain-specific knowledge, model selection schemes, parameter estimation methods and noise estimation techniques. Many researchers have contributed towards the development of the Bayesian learning approach for neural networks. This thesis advances this research by proposing several novel extensions in the areas of sequential learning, model selection, optimisation and convergence assessment. The first contribution is a regularisation strategy for sequential learning based on extended Kalman filtering and noise estimation via evidence maximisation. Using the expectation maximisation (EM) algorithm, a similar algorithm is derived for batch learning. Much of the thesis is, however, devoted to Monte Carlo simulation methods. A robust Bayesian method is proposed to estimate ...
Sequential Monte Carlo Methods For Optimisation Of Neural Network Models
, 1998
Abstract

Cited by 10 (0 self)
We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent/sampling importance resampling algorithm (HySIR). In terms of both computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimisation strategy, which allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving online, nonlinear and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the option prices.
Algorithm and data structures for efficient energy maintenance during Monte Carlo simulation of proteins
 Journal of Computational Biology
, 2004
Abstract

Cited by 9 (2 self)
Monte Carlo simulation (MCS) is a common methodology to compute pathways and thermodynamic properties of proteins. A simulation run is a series of random steps in conformation space, each perturbing some degrees of freedom of the molecule. A step is accepted with a probability that depends on the change in value of an energy function. Typical energy functions sum many terms; the most costly to compute are contributed by atom pairs closer than some cutoff distance. This paper introduces a new method that speeds up MCS by exploiting the facts that proteins are long kinematic chains and that few degrees of freedom are changed at each step. A novel data structure, called the ChainTree, captures both the kinematics and the shape of a protein at successive levels of detail. It is used to efficiently detect self-collision (steric clash between atoms) and/or find all atom pairs contributing to the energy. It also makes it possible to identify partial energy sums left unchanged by a perturbation, thus allowing the energy value to be updated incrementally. Computational tests on four proteins, ranging in size from 68 to 755 amino acids, show that MCS with the ChainTree method is significantly faster (as much as 10 times faster for the largest protein) than with the widely used grid method. They also indicate that the speedup increases with protein size.
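The incremental partial-sum idea can be sketched with a plain segment tree over per-link energy terms. This is a hypothetical, heavily reduced stand-in for the ChainTree, which additionally caches transforms and bounding volumes at each node to prune pairwise nonbonded terms and detect steric clashes:

```python
import numpy as np

class EnergyTree:
    """Segment tree caching partial energy sums over chain links.

    Only the incremental partial-sum machinery is shown, for a single
    per-link energy term; names and the interface are illustrative.
    """
    def __init__(self, link_energies):          # length must be a power of 2
        self.n = len(link_energies)
        self.tree = np.zeros(2 * self.n)
        self.tree[self.n:] = link_energies
        for i in range(self.n - 1, 0, -1):
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

    def total(self):
        return self.tree[1]                     # cached total energy

    def update(self, i, e):
        # A single-link change touches only O(log n) cached sums,
        # mirroring how a Monte Carlo step perturbs few degrees of freedom.
        i += self.n
        self.tree[i] = e
        while i > 1:
            i //= 2
            self.tree[i] = self.tree[2 * i] + self.tree[2 * i + 1]

rng = np.random.default_rng(3)
link_E = lambda a: 1.0 + np.cos(3.0 * a)        # toy torsion-angle energy
angles = rng.uniform(-np.pi, np.pi, 8)
tree = EnergyTree(link_E(angles))

for _ in range(1000):                           # toy Metropolis MCS at kT = 1
    i = rng.integers(8)
    new = angles[i] + 0.3 * rng.standard_normal()
    dE = link_E(new) - link_E(angles[i])
    if np.log(rng.uniform()) < -dE:
        angles[i] = new
        tree.update(i, link_E(new))

print(np.isclose(tree.total(), link_E(angles).sum()))  # → True
```

Each accepted step pays O(log n) to refresh the cached sums instead of the O(n) cost of recomputing every term, which is the source of the reported speedup.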
Hierarchical Uncoupling-Coupling of Metastable Conformations
, 2002
Abstract

Cited by 6 (2 self)
Uncoupling-coupling Monte Carlo (UCMC) combines uncoupling techniques for finite Markov chains with Markov chain Monte Carlo methodology. UCMC aims at avoiding the typical metastable or trapping behavior of Monte Carlo techniques. From the viewpoint of Monte Carlo, a slowly converging long-time Markov chain is replaced by a limited number of rapidly mixing short-time ones. To this end, the state space of the chain has to be hierarchically decomposed into its metastable conformations. This is done by combining the conformation-analysis technique recently introduced by the authors with appropriate annealing strategies. We present a detailed examination of the uncoupling-coupling procedure that uncovers its theoretical background and illustrates the hierarchical algorithmic approach. Furthermore, application of the UCMC algorithm to the n-pentane molecule allows us to discuss the effect of its crucial steps in a typical molecular scenario.
Improved Sampling for Biological Molecules Using Shadow Hybrid Monte Carlo
 Accepted in International Conference on Computational Science (ICCS 2004
, 2004
Abstract

Cited by 6 (4 self)
Shadow Hybrid Monte Carlo (SHMC) is a new method for sampling the phase space of large biological molecules. It improves sampling by allowing larger time steps and system sizes in the molecular dynamics (MD) step of Hybrid Monte Carlo (HMC). This is achieved by sampling from high-order approximations to the modified Hamiltonian, which is exactly integrated by a symplectic MD integrator. SHMC requires extra storage, modest computational overhead, and a reweighting step to obtain averages from the canonical ensemble. Numerical experiments are performed on biological molecules, ranging from a small peptide with 66 atoms to a large solvated protein with 14281 atoms. Experimentally, SHMC achieves an order-of-magnitude speedup in sampling efficiency for medium-sized proteins.
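Only the final reweighting step lends itself to a compact illustration: given samples drawn under a shadow (modified) Hamiltonian, canonical averages follow by importance reweighting. Both Hamiltonians below are toy quadratics chosen for the sketch, not the paper's high-order expansions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-ins: the true Hamiltonian and a "shadow" Hamiltonian (here
# just a slightly perturbed quadratic; the real shadow Hamiltonian is a
# high-order expansion in the timestep that the symplectic integrator
# conserves more accurately than H itself).
H       = lambda q: 0.5 * q**2
H_tilde = lambda q: 0.5 * q**2 / 1.21

# Samples drawn from the shadow ensemble exp(-H_tilde): N(0, 1.1^2).
q = 1.1 * rng.standard_normal(200_000)

# Reweight to the canonical ensemble exp(-H): weight = exp(H_tilde - H).
logw = H_tilde(q) - H(q)
w = np.exp(logw - logw.max())
q2 = float(np.sum(w * q**2) / np.sum(w))
print(q2)   # ≈ 1.0, the canonical <q^2> at kT = 1
```

Because the shadow and true Hamiltonians differ only slightly, the weights stay well conditioned, which is why the reweighting overhead in SHMC is modest.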