Results 1–10 of 13
Non-quantum uncertainty relations of stochastic dynamics
Chaos, Solitons & Fractals, 2005
Abstract

Cited by 7 (2 self)
After a recapitulation of an information-action method for the study of stochastic dynamics of hamiltonian systems perturbed by thermal noise and chaotic instability, we show that, for the ensemble of all the possible paths between two state points, the action principle acquires a statistical form 〈δA〉 = 0. The main objective of this paper is to prove that, via this information-action description, some uncertainty relations such as 〈∆A〉 ≥ 1/(2η) for action, a corresponding relation for position and momentum, and 〈∆H〉〈∆t〉 ≥ 1/(2η) for hamiltonian and time, exist for stochastic dynamics of hamiltonian systems. These relations describe, through action or its conjugate variables, the fluctuation of stochastic dynamics due to random perturbation characterized by the parameter η.
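As an illustration of the kind of ensemble fluctuation these relations quantify, the following sketch simulates many noisy free-particle paths and measures the mean action fluctuation 〈∆A〉 across the ensemble. The dynamics, the parameters (m, dt, sigma) and the free-particle Lagrangian L = K are illustrative assumptions for this sketch, not the paper's model, and no particular value of η is implied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters, not taken from the paper
m, dt, steps, n_paths = 1.0, 0.01, 200, 5000
sigma = 0.5                       # strength of the Gaussian random force

v = np.ones(n_paths)              # ensemble of paths, common initial velocity
actions = np.zeros(n_paths)       # A = sum of L dt, with L = K since V = 0

for _ in range(steps):
    actions += 0.5 * m * v**2 * dt
    v += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Mean fluctuation of the action over the ensemble of noisy paths; the
# relations quoted above bound such fluctuations from below by 1/(2*eta)
dA = np.mean(np.abs(actions - actions.mean()))
print(f"<|dA|> over {n_paths} paths = {dA:.4f}")
```

Increasing `sigma` (a stronger random perturbation) widens the spread of sampled actions, which is the qualitative behavior the parameter η encodes.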
Action principle and Jaynes’ guess method
Abstract

Cited by 5 (5 self)
A path information is defined in connection with the probability distribution of paths of nonequilibrium hamiltonian systems moving in phase space from an initial cell to different final cells. On the basis of the assumption that these paths are physically characterized by their action, we show that the maximum path information leads to an exponential probability distribution of action which implies that the most probable paths are just the paths of stationary action. We also show that the averaged (over initial conditions) path information between an initial cell and all the possible final cells can be related to the entropy change defined with natural invariant measures for dynamical systems. Hence the principle of maximum path information suggests maximum entropy and entropy change which, in other words, is just an application of the action principle of classical mechanics to the cases of stochastic or unstable dynamics.
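The maximization summarized in this abstract can be written out as a standard maximum-entropy calculation with Lagrange multipliers; the notation below (multipliers α and η, path label k) is assumed for the sketch, not taken from the paper.

```latex
% Path probabilities p_k, one per discretized path k with action A_k.
% Maximize the path information (Shannon form) subject to normalization
% and a fixed mean action:
S = -\sum_k p_k \ln p_k,
\qquad \sum_k p_k = 1,
\qquad \sum_k p_k A_k = \langle A \rangle .
% Stationarity with respect to each p_k:
\frac{\partial}{\partial p_k}\Big[\, S - \alpha \sum_j p_j - \eta \sum_j p_j A_j \Big]
  = -\ln p_k - 1 - \alpha - \eta A_k = 0
\;\Longrightarrow\;
p_k = \frac{e^{-\eta A_k}}{Z}, \qquad Z = \sum_k e^{-\eta A_k}.
```

The exponential distribution of action follows directly, and its maxima sit on the paths of stationary action, which is the link to the classical action principle the abstract draws.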
Review: The Gouy-Stodola Theorem in Bioenergetic Analysis of Living Systems (Irreversibility in Bioenergetics of Living Systems)
2014
"... www.mdpi.com/journal/energies ..."
Article: A Link between Nano and Classical Thermodynamics: Dissipation Analysis (The Entropy Generation Approach in Nano-Thermodynamics)
"... www.mdpi.com/journal/entropy ..."
Path Probability and An Extension of Least Action Principle to Random Motion (thesis submitted for the degree of Doctor of Science)
2013
Abstract
The present thesis is devoted to the study of path probability of random motion on the basis of an extension of Hamiltonian/Lagrangian mechanics to stochastic dynamics. The path probability is first investigated by numerical simulation for Gaussian stochastic motion of non-dissipative systems. This ideal dynamical model implies that, apart from the Gaussian random forces, the system is only subject to conservative forces. This model can be applied to underdamped real random motion in the presence of friction force when the dissipated energy is negligible with respect to the variation of the potential energy. We find that the path probability decreases exponentially with increasing action, i.e., P(A) ∼ e^(−γA), where γ is a constant characterizing the sensitivity of the action dependence of the path probability, and the action is given by A = ∫₀ᵀ L dt, a time integral of the Lagrangian L = K − V over a fixed time period T, where K is the kinetic energy and V is the potential energy. This result is a confirmation of the existence of a classical analogue of the Feynman factor e^(iA/ħ) for the path integral formalism of quantum mechanics.
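A toy version of the numerical experiment this abstract describes can be sketched as follows: simulate Gaussian stochastic motion of a free particle, keep only the paths arriving in one small final cell (the fixed-endpoint setup), and probe the action dependence of their distribution with a crude log-linear fit. All parameters (m, dt, sigma, the cell bounds) and the fitting procedure are illustrative assumptions, not the thesis's actual protocol.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed toy parameters, not the thesis's setup
m, dt, steps, n_paths = 1.0, 0.01, 100, 20000
sigma = 0.2                       # Gaussian random-force strength

x = np.zeros(n_paths)             # all paths start at x = 0
v = np.ones(n_paths)              # common initial velocity
A = np.zeros(n_paths)             # A = sum of L dt with L = K - V, V = 0

for _ in range(steps):
    A += 0.5 * m * v**2 * dt      # accumulate the action along each path
    x += v * dt
    v += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

# Keep only the paths arriving in one small final cell
in_cell = (x > 0.95) & (x < 1.05)
A_cell = A[in_cell]

# Crude probe of P(A) ~ exp(-gamma * A): log-linear fit of the histogram
counts, edges = np.histogram(A_cell, bins=15)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
gamma = -np.polyfit(centers[mask], np.log(counts[mask]), 1)[0]
print(f"{in_cell.sum()} paths in cell, crude fitted gamma = {gamma:.2f}")
```

A simple whole-range fit like this mixes the rise near the minimal action with the exponential tail, so it only gives a rough order of magnitude; the thesis's simulations are the authoritative measurement.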
Bayesian inference . . .
2007
Abstract
The subject of this work is the parametric inference problem, i.e. how to infer, from data, the parameters of the data likelihood of a random process whose parametric form is known a priori. The assumption that Bayes’ theorem has to be used to add new data samples reduces the problem to the question of how to specify a prior before having seen any data. For this subproblem three theorems are stated. The first is that Jaynes’ Maximum Entropy Principle requires at least a constraint on the expected data likelihood entropy, which gives entropic priors without the need of further axioms. Second, I show that maximizing Shannon entropy under an expected data likelihood entropy constraint is equivalent to maximizing relative entropy and is therefore reparametrization invariant for continuous-valued data likelihoods. Third, I propose that, in the state of absolute ignorance of the data likelihood entropy, one should choose the hyperparameter α of an entropic prior such that the change of expected data likelihood entropy is maximized. Among other beautiful properties, this principle is equivalent to the maximization of the mean-squared entropy error and is invariant against any reparametrization of the data likelihood. Altogether we get a Bayesian inference procedure that incorporates special prior knowledge if available, but also has a sound solution if not, and leaves no hyperparameters unspecified.
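A minimal numerical sketch of the ingredients named above, under stated assumptions: on a grid of Bernoulli parameters θ, maximizing the Shannon entropy of a prior subject to a constraint on the expected data-likelihood entropy yields a prior of the exponential form p(θ) ∝ e^(−α H(θ)). The grid, the value α = 2, and the head/tail counts below are hypothetical illustrations; the paper's precise construction (relative entropy, the rule for choosing α) is more refined than this sketch.

```python
import numpy as np

# Grid of Bernoulli parameters and the entropy of each data likelihood
theta = np.linspace(0.01, 0.99, 99)
H = -(theta * np.log(theta) + (1 - theta) * np.log(1 - theta))

def entropic_prior(alpha):
    """Lagrange solution of max-entropy under an E_p[H] constraint:
    discrete prior p(theta) proportional to exp(-alpha * H(theta))."""
    w = np.exp(-alpha * H)
    return w / w.sum()

p = entropic_prior(2.0)
print(f"E_p[H] = {np.sum(p * H):.4f}")       # expected likelihood entropy

# Bayes update with hypothetical observed counts of heads and tails
heads, tails = 7, 3
post = p * theta**heads * (1 - theta)**tails
post /= post.sum()
print(f"posterior mean = {np.sum(post * theta):.4f}")
```

Sweeping `alpha` interpolates between a flat prior (α = 0) and priors concentrated where the likelihood entropy is low, which is the role the hyperparameter plays in the abstract's third proposal.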