Results 1–10 of 10
Region Competition: Unifying Snakes, Region Growing, and Bayes/MDL for Multiband Image Segmentation
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1996
Cited by 627 (20 self)
We present a novel statistical and variational approach to image segmentation based on a new algorithm named region competition. This algorithm is derived by minimizing a generalized Bayes/MDL criterion using the variational principle. The algorithm is guaranteed to converge to a local minimum and combines aspects of snakes/balloons and region growing. Indeed the classic snakes/balloons and region growing algorithms can be directly derived from our approach. We provide theoretical analysis of region competition including accuracy of boundary location, criteria for initial conditions, and the relationship to edge detection using filters. It is straightforward to generalize the algorithm to multiband segmentation and we demonstrate it on grey level images, color images and texture images. The novel color model allows us to eliminate intensity gradients and shadows, thereby obtaining segmentation based on the albedos of objects. It also helps detect highlight regions.
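The competition idea can be illustrated with a toy 1-D sketch. Everything below is an illustrative assumption, not the paper's algorithm: two regions with Gaussian intensity models meet at a boundary, and the boundary pixel is claimed by whichever region's model explains it better, a crude stand-in for the variational flow.

```python
import math

def gaussian_loglik(x, mean, var):
    # Log-likelihood of intensity x under a Gaussian region model.
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def region_compete_1d(signal, boundary, iters=50):
    """Toy 1-D region competition: two regions meet at `boundary`.
    The boundary moves toward the region whose Gaussian model explains
    the adjacent pixel better; it stops at a local minimum."""
    for _ in range(iters):
        left, right = signal[:boundary], signal[boundary:]
        if not left or not right:
            break
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        v1 = sum((x - m1) ** 2 for x in left) / len(left) + 1e-6
        v2 = sum((x - m2) ** 2 for x in right) / len(right) + 1e-6
        x = signal[boundary]          # pixel just right of the boundary
        if gaussian_loglik(x, m1, v1) > gaussian_loglik(x, m2, v2):
            boundary += 1             # left region claims the pixel
        else:
            x = signal[boundary - 1]
            if gaussian_loglik(x, m2, v2) > gaussian_loglik(x, m1, v1):
                boundary -= 1         # right region claims the pixel
            else:
                break                 # local minimum: neither side wins
    return boundary

# Step edge at index 6 with slightly noisy plateaus:
sig = [0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 1.0, 0.9, 1.1, 1.0]
print(region_compete_1d(sig, boundary=4))   # converges to 6
```

As in the abstract's guarantee, convergence is only to a local minimum: a badly placed initial boundary can get stuck, which is why the paper analyzes criteria for initial conditions.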
The Design and Analysis of Efficient Lossless Data Compression Systems
, 1993
Cited by 49 (0 self)
Our thesis is that high compression efficiency for text and images can be obtained by using sophisticated statistical compression techniques, and that greatly increased speed can be achieved at only a small cost in compression efficiency. Our emphasis is on elegant design and mathematical as well as empirical analysis. We analyze arithmetic coding as it is commonly implemented and show rigorously that almost no compression is lost in the implementation. We show that high-efficiency lossless compression of both text and grayscale images can be obtained by using appropriate models in conjunction with arithmetic coding. We introduce a four-component paradigm for lossless image compression and present two methods that give state-of-the-art compression efficiency. In the text compression area, we give a small improvement on the preferred method in the literature. We show that we can often obtain significantly improved throughput at the cost of slightly reduced compression. The extra speed c...
Practical Implementations of Arithmetic Coding
 In Image and Text Compression
, 1992
Cited by 34 (6 self)
We provide a tutorial on arithmetic coding, showing how it provides nearly optimal data compression and how it can be matched with almost any probabilistic model. We indicate the main disadvantage of arithmetic coding, its slowness, and give the basis of a fast, space-efficient, approximate arithmetic coder with only minimal loss of compression efficiency. Our coder is based on the replacement of arithmetic by table lookups coupled with a new deterministic probability estimation scheme.
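The interval-narrowing idea behind arithmetic coding can be sketched in a few lines. This is a deliberately simplified illustration, not the coder described above: it uses exact rationals and a fixed two-symbol model, whereas practical coders use the fixed-precision integer arithmetic (and table-lookup approximations) the abstract discusses.

```python
from fractions import Fraction

# Static model: symbol -> (cumulative low, cumulative high) probabilities.
MODEL = {"a": (Fraction(0), Fraction(3, 4)),
         "b": (Fraction(3, 4), Fraction(1))}

def encode(symbols):
    # Narrow [low, high) once per symbol; any number in the final
    # interval identifies the message (given its length).
    low, high = Fraction(0), Fraction(1)
    for s in symbols:
        width = high - low
        lo_s, hi_s = MODEL[s]
        low, high = low + width * lo_s, low + width * hi_s
    return (low + high) / 2          # a representative code point

def decode(code, n):
    # Re-run the same interval narrowing, choosing at each step the
    # symbol whose sub-interval contains the code point.
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(n):
        width = high - low
        for s, (lo_s, hi_s) in MODEL.items():
            if low + width * lo_s <= code < low + width * hi_s:
                out.append(s)
                low, high = low + width * lo_s, low + width * hi_s
                break
    return "".join(out)

msg = "aabab"
print(decode(encode(msg), len(msg)))   # round-trips to "aabab"
```

Because a probable symbol ("a" here, probability 3/4) narrows the interval only slightly, probable messages end in wide intervals that need few bits to identify, which is where the near-optimality comes from.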
A Formal Definition of Intelligence Based on an Intensional Variant of Algorithmic Complexity
 In Proceedings of the International Symposium on Engineering of Intelligent Systems (EIS'98)
, 1998
Cited by 30 (17 self)
Due to the current technology of the computers we can use, we have chosen an extremely abridged emulation of the machine that will effectively run the programs, instead of more proper languages, like λ-calculus (or LISP). We have adapted the "toy RISC" machine of [Hernández & Hernández 1993] with two remarkable features inherited from its object-oriented coding in C++: it is easily tunable for our needs, and it is efficient. We have made it even more reduced, removing any operand in the instruction set, even for the loop operations. We have only three registers, which are AX (the accumulator), BX and CX. The operations Q_b we have used for our experiment are in Table 1: LOOPTOP decrements CX; if it is not equal to the first element, it jumps to the program top.
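An operand-free three-register machine of this kind is easy to emulate. The sketch below is a guess at the flavor of such a machine: only LOOPTOP's behavior (decrement CX, conditionally jump to the top) is described in the abstract; the other opcode names and semantics here are invented for illustration.

```python
def run(program, ax=0, bx=0, cx=0, max_steps=1000):
    """Interpreter for a hypothetical operand-free machine with three
    registers (AX accumulator, BX, CX). Opcodes other than LOOPTOP are
    illustrative, not taken from the paper."""
    regs = {"AX": ax, "BX": bx, "CX": cx}
    pc = steps = 0
    while pc < len(program) and steps < max_steps:
        op = program[pc]
        steps += 1
        if op == "INCAX":                # AX <- AX + 1
            regs["AX"] += 1
        elif op == "ADDBX":              # AX <- AX + BX
            regs["AX"] += regs["BX"]
        elif op == "SWAPAB":             # exchange AX and BX
            regs["AX"], regs["BX"] = regs["BX"], regs["AX"]
        elif op == "LOOPTOP":            # decrement CX; if nonzero, jump to top
            regs["CX"] -= 1
            if regs["CX"] != 0:
                pc = 0
                continue
        pc += 1
    return regs

# Multiply 3 * 4 by repeated addition: AX += BX, looped four times via CX.
print(run(["ADDBX", "LOOPTOP"], ax=0, bx=3, cx=4)["AX"])   # prints 12
```

The `max_steps` cap matters in this setting: randomly generated programs for such a machine often fail to halt, so any experiment must bound execution.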
Reduced complexity rule induction
 In Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI-91)
, 1991
Cited by 30 (4 self)
We present an architecture for rule induction that emphasizes compact, reduced-complexity rules. A new heuristic technique for finding a covering rule set of sample data is described. This technique refines a set of production rules by iteratively replacing a component of a rule with its single best replacement. A method for rule induction has been developed that combines this covering and refinement scheme with other techniques known to help reduce the complexity of rule sets, such as weakest-link pruning, resampling, and the judicious use of linear discriminants. Published results on several real-world datasets are reviewed where decision trees have performed relatively poorly. It is shown that far simpler decision rules can be found with predictive performance that exceeds those previously reported for various learning models, including neural nets and decision trees.
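The covering idea can be sketched with a much simpler stand-in than the paper's architecture: greedy sequential covering over boolean features, scoring each candidate condition by precision weighted by positives covered. The scoring heuristic and one-condition rules are simplifications for illustration, not the paper's refinement scheme.

```python
def learn_rules(examples, labels, features):
    """Greedy sequential covering: repeatedly pick the single feature whose
    presence best separates the remaining positive examples, emit it as a
    one-condition rule, and remove the examples it covers."""
    remaining = [(x, y) for x, y in zip(examples, labels)]
    rules = []
    while any(y == 1 for _, y in remaining):
        best, best_score = None, -1.0
        for f in features:
            covered = [(x, y) for x, y in remaining if x[f]]
            pos = sum(1 for _, y in covered if y == 1)
            # Score = precision * positives covered (favors accurate,
            # broad rules over narrow or noisy ones).
            if covered and pos / len(covered) * pos > best_score:
                best, best_score = f, pos / len(covered) * pos
        if best is None:
            break
        rules.append(best)
        remaining = [(x, y) for x, y in remaining if not x[best]]
    return rules

# Toy data: label is 1 iff "wings" or "fins" is present.
data = [{"wings": 1, "fins": 0, "legs": 1},
        {"wings": 0, "fins": 1, "legs": 0},
        {"wings": 1, "fins": 0, "legs": 0},
        {"wings": 0, "fins": 0, "legs": 1}]
labels = [1, 1, 1, 0]
print(learn_rules(data, labels, ["wings", "fins", "legs"]))   # ['wings', 'fins']
```

A two-rule set covering all positives is exactly the kind of compact result the abstract argues for; the paper's additional machinery (component replacement, weakest-link pruning, resampling) exists to keep such rule sets both small and predictive on real data.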
Cramér-Rao Bounds for Parametric Shape Estimation in Inverse Problems
 IEEE Trans. on Image Processing
, 2003
Cited by 9 (2 self)
We address the problem of computing fundamental performance bounds for estimation of object boundaries from noisy measurements in inverse problems, when the boundaries are parameterized by a finite number of unknown variables. Our model applies to multiple unknown objects, each with its own unknown gray level, or color, and boundary parameterization, on an arbitrary known background. While such fundamental bounds on the performance of shape estimation algorithms can in principle be derived from the Cramér-Rao lower bounds, very few results have been reported due to the difficulty of computing the derivatives of a functional with respect to shape deformation. In this paper, we provide a general formula for computing Cramér-Rao lower bounds in inverse problems where the observations are related to the object by a general linear transform, followed by a possibly nonlinear and noisy measurement system.
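The "general linear transform" setting has a textbook special case that is easy to compute: for y = Hθ + n with white Gaussian noise of variance σ², the bound on the covariance of any unbiased estimator is σ²(HᵀH)⁻¹. This sketch shows that classical bound for a 2-parameter problem; it does not involve the paper's shape-derivative machinery.

```python
def crb_linear_gaussian(H, sigma2):
    """Cramér-Rao lower bound for y = H*theta + noise, noise ~ N(0, sigma2*I):
    CRB = sigma2 * (H^T H)^(-1), computed here for a 2-column H."""
    # Fisher information matrix J = H^T H / sigma2.
    j11 = sum(r[0] * r[0] for r in H) / sigma2
    j12 = sum(r[0] * r[1] for r in H) / sigma2
    j22 = sum(r[1] * r[1] for r in H) / sigma2
    det = j11 * j22 - j12 * j12
    # Invert the 2x2 information matrix to bound the estimator covariance.
    return [[j22 / det, -j12 / det], [-j12 / det, j11 / det]]

H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # three measurements, two parameters
bound = crb_linear_gaussian(H, sigma2=0.5)
print(round(bound[0][0], 4))   # variance bound on the first parameter: 0.3333
```

The hard part the paper addresses is that when θ parameterizes an object *boundary*, H depends on θ through a shape integral, so forming the Fisher information requires differentiating a functional with respect to shape deformation rather than a simple matrix product like the one above.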
Clutter Modeling and Performance Analysis in Automatic Target Recognition
 In Proceedings Workshop on Detection and Classification of Difficult Targets. Redstone Arsenal
, 1998
Cited by 9 (1 self)
The past decade has witnessed rapid development in accurate modeling of 3D targets and multiple sensor fusion in automatic target recognition (ATR); however, the scientific study of quantifying non-target objects in a cluttered scene has made very limited progress, due to its enormous difficulties. In this paper, we study two important themes in ATR: I) clutter modeling: how can we build generic and low-dimensional probabilistic models for cluttered scenes, and how can we automatically learn such models from observed images? II) performance analysis: how can we quantify the effects of clutter on the performance of ATR algorithms, and how much do the learned clutter models improve ATR performance? We answer the above questions by combining two important trends which have emerged in the past few years. The first is the minimax entropy learning theory, proposed by Zhu, Wu and Mumford [12]. According to this theory, cluttered scenes are defined on random fields with features character...
Does Algorithmic Probability Solve the Problem of Induction?
, 2001
Cited by 5 (0 self)
We will begin with a definition of Algorithmic Probability (ALP), and discuss some of its properties. From these remarks it will become clear that it is extremely effective for computing probabilities of future events: the best technique we have. As such, it gives us an ideal theoretical solution to the problem of inductive inference. I say "theoretical" because any device as accurate as ALP must necessarily be incomputable. For practical induction we use a set of approximations of increasing power to approach ALP. This set is called Resource Bounded Probability (RBP), and it constitutes a general solution to the problem of practical induction. Some of its properties are quite different from those of ALP. The rest of the paper will discuss philosophical and practical implications of the properties of ALP and RBP. It should be noted that the only argument that need be considered for the use of these techniques in induction is their effectiveness in getting good probability values for future events. Whether their properties are in accord with our intuitions about induction is a peripheral issue. The main question is "do they work?" As we shall see, they do work.
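The flavor of a resource-bounded approximation can be shown with a toy. ALP weights each program p by 2^(-len(p)) and sums the weights of programs whose output extends the observed data; bounding program length makes this computable. The "universal machine" below is deliberately trivial (a program is a bit string that outputs itself repeated forever) and is purely illustrative, not Solomonoff's construction.

```python
from itertools import product

def alp_next_bit(prefix, max_len=8):
    """Toy, resource-bounded analogue of algorithmic probability: sum the
    2^-len(p) priors of all 'programs' p (bit strings, output = p repeated)
    whose output extends `prefix`, crediting each to the bit it predicts."""
    w0 = w1 = 0.0
    for n in range(1, max_len + 1):          # resource bound: program length
        for p in product("01", repeat=n):
            out = "".join(p) * (len(prefix) // n + 2)   # enough repetitions
            if out.startswith(prefix):
                nxt = out[len(prefix)]       # the bit this program predicts
                if nxt == "0":
                    w0 += 2.0 ** -n
                else:
                    w1 += 2.0 ** -n
    return w1 / (w0 + w1)    # P(next bit is 1), normalized

# After observing "010101", short repetitive programs strongly favor "0" next.
print(alp_next_bit("010101"))
```

Raising `max_len` admits more programs and refines the estimate, which is the sense in which RBP is "a set of approximations of increasing power": each bound gives a computable predictor, and the bounds approach ALP in the limit.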
Sparse Representations for Image Decompositions
, 1999
Cited by 3 (0 self)
We are given an image I and a library L of templates such that L is an overcomplete basis for I. The templates can represent objects, faces, features, analytical functions, or be single-pixel templates (canonical templates). There are infinitely many ways to decompose I as a linear combination of the library templates. Each decomposition defines a representation for the image I, given L.
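One standard way to pick among the infinitely many decompositions is greedy matching pursuit: repeatedly select the template most correlated with the residual and subtract its projection. The sketch below (1-D "images", tiny hand-made library) illustrates that idea only; it is not necessarily the selection criterion the paper uses.

```python
import math

def matching_pursuit(signal, templates, n_terms=3):
    """Greedy sparse decomposition: at each step pick the template with
    the largest correlation against the residual (inner product with the
    unit-normalized atom), then subtract its projection."""
    residual = list(signal)
    decomposition = []
    for _ in range(n_terms):
        best_i, best_coef, best_score = None, 0.0, 0.0
        for i, t in enumerate(templates):
            energy = sum(v * v for v in t)
            inner = sum(r * v for r, v in zip(residual, t))
            score = abs(inner) / math.sqrt(energy)   # correlation with unit atom
            if score > best_score:
                best_i, best_coef, best_score = i, inner / energy, score
        if best_i is None or best_score < 1e-12:
            break                                    # residual fully explained
        decomposition.append((best_i, best_coef))
        residual = [r - best_coef * v
                    for r, v in zip(residual, templates[best_i])]
    return decomposition, residual

# Overcomplete library of three 4-sample templates for a 4-sample "image".
lib = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 1, 1, 1]]
terms, res = matching_pursuit([2, 2, 3, 3], lib, n_terms=3)
print(terms)   # (template index, coefficient) pairs
```

Note the library is overcomplete: the same signal also equals 2·lib[0] + 3·lib[1] exactly, so the greedy choice is genuinely selecting one representation among several.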
Detection of the Number of Signals in Noise with Banded Covariance Matrices
, 1996
Cited by 2 (1 self)
A new approach is presented to the array signal processing problem of detecting the number of incident signals in unknown coloured noise environments with banded covariance structure. The principle of ...
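For background, the classical detector this line of work builds on is the MDL estimator of Wax and Kailath for the white-noise case: choose the number of signals k that minimizes a likelihood term (comparing geometric and arithmetic means of the smallest sample eigenvalues) plus a complexity penalty. The sketch below is that classical white-noise detector, not the banded-covariance extension of this entry.

```python
import math

def mdl_num_signals(eigs, n_snapshots):
    """Classic MDL detection of the number of signals in white noise, from
    the eigenvalues of the sample covariance matrix (Wax & Kailath)."""
    p = len(eigs)
    eigs = sorted(eigs, reverse=True)
    best_k, best_mdl = 0, float("inf")
    for k in range(p):                    # hypothesis: k signals present
        tail = eigs[k:]                   # presumed noise eigenvalues
        geo = math.exp(sum(math.log(l) for l in tail) / len(tail))
        arith = sum(tail) / len(tail)
        # Equal noise eigenvalues make geo/arith -> 1 and the term -> 0.
        loglik = -n_snapshots * len(tail) * math.log(geo / arith)
        penalty = 0.5 * k * (2 * p - k) * math.log(n_snapshots)
        mdl = loglik + penalty
        if mdl < best_mdl:
            best_k, best_mdl = k, mdl
    return best_k

# Two strong signal eigenvalues above a roughly flat noise floor:
print(mdl_num_signals([9.0, 6.0, 1.1, 1.0, 0.9, 1.0], n_snapshots=200))   # 2
```

The white-noise assumption is exactly what fails in coloured noise: the noise eigenvalues are no longer equal, so the geometric/arithmetic-mean test misfires, motivating methods that exploit known structure such as a banded noise covariance.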