CiteSeerX
Results 1 - 10 of 1,697

High performance scalable image compression with EBCOT

by David Taubman - IEEE Trans. Image Processing, 2000
"... A new image compression algorithm is proposed, based on independent Embedded Block Coding with Optimized Truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich feature set, including resolution and SNR ..."
Abstract - Cited by 586 (11 self)

SPEA2: Improving the Strength Pareto Evolutionary Algorithm

by Eckart Zitzler, Marco Laumanns, Lothar Thiele, 2001
"... The Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele 1999) is a relatively recent technique for finding or approximating the Pareto-optimal set for multiobjective optimization problems. In different studies (Zitzler and Thiele 1999; Zitzler, Deb, and Thiele 2000) SPEA has shown very ..."
Abstract - Cited by 708 (19 self)
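
As a quick illustration of the Pareto-dominance test that algorithms in this family build on (a generic Python sketch, not code from the SPEA2 paper; the function names are my own):

from typing import Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if `a` Pareto-dominates `b`: no worse in every objective
    (minimization assumed) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points, i.e. an approximation of the Pareto-optimal set."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Two conflicting objectives (cost, error), both minimized:
print(pareto_front([(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]))
# [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)] -- (3.0, 4.0) is dominated by (2.0, 3.0)

SPEA2 itself layers fitness assignment, density estimation, and archive truncation on top of this basic dominance relation.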

Lag length selection and the construction of unit root tests with good size and power

by Serena Ng, Pierre Perron - Econometrica, 2001
"... It is widely known that when there are errors with a moving-average root close to −1, a high order augmented autoregression is necessary for unit root tests to have good size, but that information criteria such as the AIC and the BIC tend to select a truncation lag (k) that is very small. We consider ..."
Abstract - Cited by 558 (14 self)
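
For orientation, a minimal sketch of generic lag selection by information criterion in an autoregression (my own illustration using plain OLS with standard AIC/BIC penalties; it does not reproduce Ng and Perron's modified criteria or their test construction):

import numpy as np

def select_lag(y, k_max=8, criterion="bic"):
    """Fit AR(k) by OLS for k = 1..k_max on a common sample and return the
    lag minimizing the chosen information criterion."""
    y = np.asarray(y, dtype=float)
    t0 = k_max                                   # common estimation sample start
    best_k, best_ic = None, np.inf
    for k in range(1, k_max + 1):
        X = np.column_stack([np.ones(len(y) - t0)] +
                            [y[t0 - j:len(y) - j] for j in range(1, k + 1)])
        target = y[t0:]
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        resid = target - X @ beta
        n = len(target)
        sigma2 = resid @ resid / n
        penalty = (k + 1) * (np.log(n) if criterion == "bic" else 2.0)
        ic = n * np.log(sigma2) + penalty
        if ic < best_ic:
            best_k, best_ic = k, ic
    return best_k

# Example on simulated AR(2) data; BIC typically recovers k = 2 here.
rng = np.random.default_rng(0)
y = np.zeros(500)
e = rng.standard_normal(500)
for t in range(2, 500):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + e[t]
print(select_lag(y))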

K-theory for operator algebras

by Bruce Blackadar - Mathematical Sciences Research Institute Publications, 1998
"... p. XII line-5: since p. 1-2: I blew this simple formula: should be α = −〈ξ, η〉/〈η, η〉. p. 2 I.1.1.4: The Riesz-Fischer Theorem is often stated this way today, but neither Riesz nor Fischer (who worked independently) phrased it in terms of completeness of the orthogonal system {e^{int}}. If [a, b] is a ..."
Abstract - Cited by 558 (0 self)
... is nonseparable. In fact, I. Farah (private communication) has shown that a Hilbert space of dimension 2^ℵ0 has a dense subspace which does not contain any uncountable orthonormal set. A similar example was obtained by Dixmier [Dix53]. p. 8-9 I.2.4.3(i): Some of the statements on p. 9 can be false if the measure ...

Five Facts About Prices: A Reevaluation of Menu Cost Models

by Emi Nakamura, Jón Steinsson - Quarterly Journal of Economics, 2008
"... We establish five facts about prices in the U.S. economy: 1) The median frequency of nonsale price change is 9-12% per month, roughly half of what it is including sales. This implies an uncensored median duration of regular prices of 8-11 months. Product turnover plays an important role in truncating price spells in durable goods. The median frequency of price change for finished goods producer prices is roughly 11% per month. 2) One-third of regular price changes are price decreases. 3) The frequency of price increases covaries strongly with inflation while the frequency of price decreases ..."
Abstract - Cited by 326 (9 self)
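
A small arithmetic sketch of how a monthly frequency of price change maps into an implied duration, assuming the constant-hazard conversion duration = -1/ln(1 - f); this is a common convention and only approximately mirrors the authors' calculation:

import math

def implied_duration_months(f):
    """Implied duration under a constant monthly hazard of price change f."""
    return -1.0 / math.log(1.0 - f)

for f in (0.09, 0.12):
    print(f, round(implied_duration_months(f), 1))
# 0.09 -> about 10.6 months, 0.12 -> about 7.8 months,
# roughly the 8-11 month range quoted in the abstract.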

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

by N. Halko, P. G. Martinsson, J. A. Tropp
"... Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for ..."
Abstract - Cited by 253 (6 self) - Add to MetaCart
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool
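
A minimal sketch of the basic randomized range-finder plus small-SVD recipe that this line of work surveys (my own simplified NumPy implementation; parameter names are illustrative):

import numpy as np

def randomized_svd(A, rank, oversample=10, seed=None):
    """Approximate rank-`rank` SVD of A via a Gaussian sketch of its range."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)
    Omega = rng.standard_normal((n, k))          # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for the sampled range
    B = Q.T @ A                                  # small k x n projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Example: a rank-5 matrix is recovered to near machine precision.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, rank=5)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))   # on the order of 1e-14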

Truncated Gaussians as Tolerance Sets

by Fabio Cozman, Eric Krotkov
"... This paper benefited from several suggestions made by Mike Erdmann in the reading of a first draft. References ..."
Abstract - Cited by 11 (0 self) - Add to MetaCart
This paper benefited from several suggestions made by Mike Erdmann in the reading of a first draft. References

Truncated Gaussians as Tolerance Sets

by Fabio Cozman, Eric Krotkov
"... This work focuses on the use of truncated Gaussian distributions as models for bounded data -- measurements that are constrained to appear between fixed limits. We prove that the truncated Gaussian can be viewed as a maximum entropy distribution for truncated bounded data, when mean and covariance a ..."
Abstract
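
For concreteness, a one-dimensional illustration of a Gaussian truncated to fixed limits (my own sketch using scipy.stats.truncnorm; the paper itself develops the multivariate case and its use as a tolerance set):

from scipy.stats import truncnorm

mu, sigma = 0.0, 1.0
lo, hi = -0.5, 2.0
a, b = (lo - mu) / sigma, (hi - mu) / sigma      # truncation bounds in standardized units
dist = truncnorm(a, b, loc=mu, scale=sigma)

print(dist.mean(), dist.var())                   # moments differ from the untruncated N(mu, sigma^2)
samples = dist.rvs(size=1000, random_state=0)
print(samples.min() >= lo and samples.max() <= hi)   # True: every draw respects the limits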

Sparse Online Learning via Truncated Gradient

by John Langford, Lihong Li, Tong Zhang
"... We propose a general method called truncated gradient to induce sparsity in the weights of online-learning algorithms with convex loss. This method has several essential properties. First, the degree of sparsity is continuous—a parameter controls the rate of sparsification from no sparsification to ..."
Abstract - Cited by 107 (4 self) - Add to MetaCart
We propose a general method called truncated gradient to induce sparsity in the weights of online-learning algorithms with convex loss. This method has several essential properties. First, the degree of sparsity is continuous—a parameter controls the rate of sparsification from no sparsification
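
A hedged sketch of the kind of update the abstract describes: a gradient step followed by a truncation operator that shrinks small weights toward zero (parameter names and the exact truncation schedule here are illustrative, not taken from the paper):

import numpy as np

def truncate(w, alpha, theta):
    """Shrink coordinates with |w_j| <= theta by alpha, clipping at zero."""
    out = w.copy()
    small = np.abs(w) <= theta
    out[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - alpha, 0.0)
    return out

def truncated_gradient_step(w, grad, eta=0.1, g=0.05, theta=0.5):
    """One online update: gradient descent on the loss, then truncation with gravity eta*g."""
    return truncate(w - eta * grad, eta * g, theta)

w = np.array([0.30, -0.02, 1.50, 0.004])
grad = np.array([0.1, 0.0, -0.2, 0.0])
print(truncated_gradient_step(w, grad))
# [0.285, -0.015, 1.52, 0.0] -- tiny weights are pulled to (or exactly onto) zero,
# while the large weight is left untouched by the truncation.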

Efficient Simulation from the Multivariate Normal and Student-t Distributions Subject to Linear Constraints and the Evaluation of Constraint Probabilities

by John Geweke, 1991
"... The construction and implementation of a Gibbs sampler for efficient simulation from the truncated multivariate normal and Student-t distributions is described. It is shown how the accuracy and convergence of integrals based on the Gibbs sample may be constructed, and how an estimate of the probability of the constraint set under the unrestricted distribution may be produced. Keywords: Bayesian inference; Gibbs sampler; Monte Carlo; multiple integration; truncated normal. This paper was prepared for a presentation at the meeting Computing Science and Statistics: the Twenty-Third Symposium ..."
Abstract - Cited by 211 (10 self)
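
A minimal sketch of a coordinate-wise Gibbs sampler for a truncated multivariate normal, restricted to box constraints as a simple special case (my own illustration; the paper treats general linear constraints and the Student-t case as well):

import numpy as np
from scipy.stats import truncnorm

def gibbs_tmvn(mu, Sigma, lo, hi, n_iter=2000, seed=None):
    """Gibbs sampling for x ~ N(mu, Sigma) truncated to lo <= x <= hi (element-wise)."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    P = np.linalg.inv(Sigma)                          # precision matrix
    x = np.clip(np.array(mu, dtype=float), lo, hi)    # feasible starting point
    draws = np.empty((n_iter, d))
    for t in range(n_iter):
        for i in range(d):
            others = [j for j in range(d) if j != i]
            cond_var = 1.0 / P[i, i]
            cond_mean = mu[i] - cond_var * P[i, others] @ (x[others] - mu[others])
            sd = np.sqrt(cond_var)
            a, b = (lo[i] - cond_mean) / sd, (hi[i] - cond_mean) / sd
            x[i] = truncnorm.rvs(a, b, loc=cond_mean, scale=sd, random_state=rng)
        draws[t] = x
    return draws

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
lo, hi = np.array([0.0, -np.inf]), np.array([np.inf, 1.0])
samples = gibbs_tmvn(mu, Sigma, lo, hi, seed=0)
print(samples.mean(axis=0))   # the truncation pulls the first component's mean above 0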