Results 11–20 of 71
A New Entropy Measure Based on the Wavelet Transform . . .
, 1998
Abstract

Cited by 17 (15 self)
We present in this brief a new way to measure the information in a signal, based on noise modeling. We show that the use of such an entropy-related measure leads to good results for signal restoration. I. INTRODUCTION The term "entropy" is due to Clausius (1865), and the concept of entropy was introduced by Boltzmann into statistical mechanics, in order to measure the number of microscopic ways that a given macroscopic state can be realized. Shannon [11] founded the mathematical theory of communication when he suggested that the information gained in a measurement depends on the number of possible outcomes out of which one is realized. Shannon also suggested that the entropy can be used to maximize the bits transferred under a quality constraint. Jaynes [7] proposed to use the entropy measure for radio interferometric image deconvolution, in order to select, from a set of possible solutions, that which contains the minimum of information, or, following his entropy definition, that which h...
Can the Maximum Entropy Principle Be Explained as a Consistency Requirement?
, 1997
Abstract

Cited by 16 (1 self)
The principle of maximum entropy is a general method to assign values to probability distributions on the basis of partial information. This principle, introduced by Jaynes in 1957, forms an extension of the classical principle of insufficient reason. It has been further generalized, both in mathematical formulation and in intended scope, into the principle of maximum relative entropy or of minimum information. It has been claimed that these principles are singled out as unique methods of statistical inference that agree with certain compelling consistency requirements. This paper reviews these consistency arguments and the surrounding controversy. It is shown that the uniqueness proofs are flawed, or rest on unreasonably strong assumptions. A more general class of inference rules, maximizing the so-called Rényi entropies, is exhibited which also fulfills the reasonable part of the consistency assumptions. 1 Introduction In any application of probability theory to the pro...
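The Rényi family this abstract refers to reduces to the Shannon entropy in the limit α → 1, which is why it can generalize the classical maximum-entropy rule. A minimal numeric sketch of that relationship (the example distribution below is an arbitrary illustration, not taken from the paper):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha for a discrete distribution p.

    For alpha -> 1 this reduces to the Shannon entropy -sum p log p.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # 0 * log 0 = 0 convention
    if np.isclose(alpha, 1.0):
        return -np.sum(p * np.log(p))             # Shannon limit
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

p = [0.5, 0.25, 0.25]
shannon = renyi_entropy(p, 1.0)       # Shannon entropy of p
near_one = renyi_entropy(p, 1.0001)   # Renyi entropy just off alpha = 1
```

For the uniform distribution the Rényi entropy is log n for every order α, so all members of the family agree on which distribution is "maximally uninformative"; they differ in how they rank the rest.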
Multiscale Entropy Filtering
, 1999
Abstract

Cited by 15 (8 self)
We present in this paper a new method for filtering an image, based on a new definition of its entropy. A large number of examples illustrate the results. Comparisons are performed with other wavelet-based methods.
The Promise of Bayesian Inference for Astrophysics
, 1992
Abstract

Cited by 15 (0 self)
The 'frequentist' approach to statistics, currently dominating statistical practice in astrophysics, is compared to the historically older Bayesian approach, which is now growing in popularity in other scientific disciplines, and which provides unique, optimal solutions to well-posed problems. The two approaches address the same questions with very different calculations, but in simple cases often give the same final results, confusing the issue of whether one is superior to the other. Here frequentist and Bayesian methods are applied to problems where such a mathematical coincidence does not occur, allowing assessment of their relative merits based on their performance, rather than on philosophical argument. Emphasis is placed on a key distinction between the two approaches: Bayesian methods, based on comparisons among alternative hypotheses using the single observed data set, consider averages over hypotheses; frequentist methods, in contrast, average over hypothetical alternative...
A New Look at the Entropy for Solving Linear Inverse Problems
 IEEE Transactions on Information Theory
, 1994
Abstract

Cited by 14 (4 self)
Entropy-based methods are widely used for solving inverse problems, especially when the solution is known to be positive. We address here the linear ill-posed and noisy inverse problem y = Ax + n with a more general convex constraint x ∈ C, where C is a convex set. Although projective methods are well adapted to this context, we study here alternative methods which rely heavily on "information-based" criteria. Our goal is to highlight the role played by entropy in this framework, and to present a new and deeper point of view on entropy, using general tools and results of convex analysis and large deviations theory. Then, we present a new and broad scheme of entropy-based inversion of linear noisy inverse problems. This scheme was introduced by Navaza in 1985 [48] in connection with a physical modeling for crystallographic applications, and further studied by Dacunha-Castelle and Gamboa [13]. Important features of this paper are (i) a unified presentation of many well kno...
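The setting y = Ax + n with a positivity constraint can be illustrated with a small entropy-penalized least-squares sketch. This is not the paper's scheme; the blur operator, the toy data, and the penalty weight below are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical toy problem: A is a Gaussian blur, x_true a sparse positive source.
n = 20
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
x_true = np.zeros(n)
x_true[5], x_true[14] = 1.0, 0.5
y = A @ x_true + 0.01 * rng.standard_normal(n)      # y = A x + n

lam = 1e-3  # entropy weight (an assumed value, chosen for illustration)

def objective(x):
    r = A @ x - y
    # negative Shannon entropy, sum x log x, acts as the information penalty
    return 0.5 * r @ r + lam * np.sum(x * np.log(x))

def gradient(x):
    return A.T @ (A @ x - y) + lam * (np.log(x) + 1.0)

# Positivity (x in the convex set C = {x : x > 0}) enforced via bounds.
res = minimize(objective, np.full(n, 0.1), jac=gradient,
               method="L-BFGS-B", bounds=[(1e-9, None)] * n)
x_hat = res.x
```

The entropy term pulls the reconstruction toward low-information (flat, positive) solutions, while the data term keeps Ax close to y; this is the basic trade-off the entropy-based methods in the abstract exploit.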
Maximum entropy, fluctuations and priors
, 2000
Abstract

Cited by 13 (6 self)
The method of maximum entropy (ME) is extended to address the following problem: Once one accepts that the ME distribution is to be preferred over all others, the question is to what extent distributions with lower entropy are supposed to be ruled out. Two applications are given. The first is to the theory of thermodynamic fluctuations. The formulation is exact, covariant under changes of coordinates, and allows fluctuations of both the extensive and the conjugate intensive variables. The second application is to the construction of an objective prior for Bayesian inference. The prior obtained by following the ME method to its inevitable conclusion turns out to be a special case (α = 1) of what are currently known under the name of entropic priors.
Relative Entropy and Inductive Inference
 in Bayesian Inference and Maximum Entropy Methods in Science and Engineering
, 2004
Abstract

Cited by 11 (6 self)
We discuss how the method of maximum entropy, MaxEnt, can be extended beyond its original scope, as a rule to assign a probability distribution, to a full-fledged method for inductive inference. The main concept is the (relative) entropy S[p|q] which is designed as a tool to update from a prior probability distribution q to a posterior probability distribution p when new information in the form of a constraint becomes available. The extended method goes beyond the mere selection of a single posterior p, but also addresses the question of how much less probable other distributions might be. Our approach clarifies how the entropy S[p|q] is used while avoiding the question of its meaning. Ultimately, entropy is a tool for induction which needs no interpretation. Finally, being a tool for generalization from special examples, we ask whether the functional form of the entropy depends on the choice of the examples and we find that it does. The conclusion is that there is no single general theory of inductive inference and that alternative expressions for the entropy are possible.
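The update the abstract describes, minimizing the relative entropy S[p|q] subject to a new expectation constraint, has a closed form: p ∝ q·exp(λf), with the multiplier λ fixed by the constraint. A minimal sketch for the classic die example (the uniform prior, the constraint value 4.5, and the root bracket are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq

# Discrete prior q over outcomes x; new information: the expected face is 4.5.
x = np.arange(1, 7)                  # faces of a die
q = np.full(6, 1 / 6)                # uniform prior
F = 4.5                              # constraint <x> = F

def tilted(lam):
    """Exponentially tilted posterior q * exp(lam * x), normalized."""
    w = q * np.exp(lam * x)
    return w / w.sum()

# Choose lam so the tilted posterior satisfies the mean constraint.
lam = brentq(lambda l: tilted(l) @ x - F, -5.0, 5.0)
p = tilted(lam)
```

Among all distributions with mean 4.5, this p is the one closest to the prior q in the relative-entropy sense, which is exactly the selection rule the extended MaxEnt method formalizes.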
Overview and construction of meshfree basis functions: From moving least squares to entropy approximants
 Int. J. Numer. Methods Engrg
, 2007
Abstract

Cited by 11 (11 self)
In this paper, an overview of the construction of meshfree basis functions is presented, with particular emphasis on moving least-squares approximants, natural neighbour-based polygonal interpolants, and entropy approximants. The use of information-theoretic variational principles to derive approximation schemes is a recent development. In this setting, data approximation is viewed as an inductive inference problem, with the basis functions being synonymous with a discrete probability distribution and the polynomial reproducing conditions acting as the linear constraints. The maximization (minimization) of the Shannon–Jaynes entropy functional (relative entropy functional) is used to unify the construction of globally and locally supported convex approximation schemes. A JAVA applet is used to visualize the meshfree basis functions, and comparisons and links between different meshfree approximation schemes ...
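The entropy approximants mentioned here treat the basis functions at a point x as a probability distribution constrained to reproduce constants and linear fields. A minimal 1D sketch under those two constraints (the node layout and root bracket are assumptions for illustration, not the paper's construction):

```python
import numpy as np
from scipy.optimize import brentq

nodes = np.array([0.0, 0.5, 1.0])   # 1D nodal points (assumed layout)

def maxent_basis(x):
    """Max-entropy shape functions p_i(x) for interior x, satisfying
    sum p_i = 1 (partition of unity) and sum p_i * nodes_i = x
    (linear reproducing condition)."""
    def mean_gap(lam):
        w = np.exp(-lam * (nodes - x))
        p = w / w.sum()
        return p @ nodes - x
    # One Lagrange multiplier in 1D; fix it by a scalar root-find.
    lam = brentq(mean_gap, -200.0, 200.0)
    w = np.exp(-lam * (nodes - x))
    return w / w.sum()

p = maxent_basis(0.3)
```

The basis values are non-negative by construction, which is what makes the resulting scheme a convex approximation scheme; at the midpoint of a symmetric node set the multiplier vanishes and the basis is uniform.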
Maximum Entropy and Bayesian Data Analysis: Entropic Priors
, 2003
Abstract

Cited by 11 (0 self)
The problem of assigning probability distributions which objectively reflect the prior information available about experiments is one of the major stumbling blocks in the use of Bayesian methods of data analysis. In this paper the method of Maximum (relative) Entropy (ME) is used to translate the information contained in the known form of the likelihood into a prior distribution for Bayesian inference. The argument is inspired and guided by intuition gained from the successful use of ME methods in statistical mechanics. For experiments that cannot be repeated the resulting “entropic prior” is formally identical with the Einstein fluctuation formula. For repeatable experiments, however, the expected value of the entropy of the likelihood turns out to be relevant information that must be included in the analysis. The important case of a Gaussian likelihood is treated in detail.