Results 1 - 10 of 2,664
Ideal spatial adaptation by wavelet shrinkage - Biometrika, 1994
"... With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle o ers dramatic ad ..."
Cited by 1269 (5 self)
is the sample size. Moreover no estimator can give a better guarantee than this. Within the class of spatially adaptive procedures, RiskShrink is essentially optimal. Relying only on the data, it comes within a factor log 2 n of the performance of piecewise polynomial and variable-knot spline methods equipped
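The shrinkage this line of work refers to is coordinatewise thresholding of empirical wavelet coefficients. A minimal worked form of the soft-thresholding rule, with the sqrt(2 log n) threshold commonly paired with it (notation assumed here, not quoted from the abstract):

\eta_t(y) = \operatorname{sgn}(y)\,\bigl(|y| - t\bigr)_{+}, \qquad t = \sigma\sqrt{2\log n},

where y is a noisy empirical wavelet coefficient, sigma the noise level, and n the sample size.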
Adapting to unknown smoothness via wavelet shrinkage - JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 1995
"... We attempt to recover a function of unknown smoothness from noisy, sampled data. We introduce a procedure, SureShrink, which suppresses noise by thresholding the empirical wavelet coefficients. The thresholding is adaptive: a threshold level is assigned to each dyadic resolution level by the princip ..."
Cited by 1006 (18 self)
by the principle of minimizing the Stein Unbiased Estimate of Risk (Sure) for threshold estimates. The computational effort of the overall procedure is order N log(N) as a function of the sample size N. SureShrink is smoothness-adaptive: if the unknown function contains jumps, the reconstruction (essentially) does
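A minimal sketch of how a SURE-minimizing soft threshold can be computed for the coefficients of one dyadic resolution level, assuming i.i.d. Gaussian noise of known level and numpy; the function names are illustrative and this is not the paper's full hybrid SureShrink procedure:

import numpy as np

def sure_soft_threshold(coeffs, sigma):
    # Rescale to unit noise variance; the SURE formula below assumes N(theta, 1) coordinates.
    x = np.abs(np.asarray(coeffs, dtype=float)) / sigma
    d = x.size
    best_t, best_risk = 0.0, np.inf
    # The minimizer of SURE lies at 0 or at one of the |x_i|, so scan those candidates.
    for t in np.concatenate(([0.0], x)):
        risk = d - 2.0 * np.sum(x <= t) + np.sum(np.minimum(x, t) ** 2)
        if risk < best_risk:
            best_t, best_risk = t, risk
    return best_t * sigma

def soft_threshold(coeffs, t):
    # Shrink every coefficient toward zero by t, zeroing the small ones.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)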
Informed Content Delivery Across Adaptive Overlay Networks, 2002
"... Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize through-put of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmit-ted packet. We first make the case for ..."
Cited by 247 (8 self)
for transmitting encoded content in this scenario, arguing for the digital fountain approach which en-ables end-hosts to efficiently restitute the original content of size n from a subset of any n symbols from a large universe of encoded symbols. Such an approach affords reliability and a substantial de
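The "digital fountain" property argued for in the snippet, recovering the original n blocks from roughly any n received encoded symbols, can be illustrated with a toy random-XOR code over GF(2). The paper itself builds on purpose-designed erasure codes; this toy decoder typically needs a few symbols more than n:

import numpy as np

rng = np.random.default_rng(0)

def encode(blocks, num_symbols):
    # blocks: list of equal-length np.uint8 arrays. Each encoded symbol is the XOR
    # of a random nonempty subset of blocks, tagged with the subset mask.
    n = len(blocks)
    symbols = []
    for _ in range(num_symbols):
        mask = rng.integers(0, 2, size=n, dtype=np.uint8)
        if not mask.any():
            mask[rng.integers(n)] = 1
        payload = np.bitwise_xor.reduce([b for b, m in zip(blocks, mask) if m])
        symbols.append((mask, payload))
    return symbols

def decode(symbols, n):
    # Gauss-Jordan elimination over GF(2): succeeds once n linearly independent
    # symbols have been collected, regardless of which ones they are.
    A = np.array([m for m, _ in symbols], dtype=np.uint8)
    B = np.array([p for _, p in symbols], dtype=np.uint8)
    row = 0
    for col in range(n):
        pivots = np.nonzero(A[row:, col])[0]
        if pivots.size == 0:
            return None  # not enough independent symbols received yet
        p = row + pivots[0]
        A[[row, p]] = A[[p, row]]
        B[[row, p]] = B[[p, row]]
        for r in range(A.shape[0]):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return [B[i] for i in range(n)]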
Measurement and modeling of the temporal dependence in packet loss, 1999
"... Abstract — Understanding and modelling packet loss in the Internet is especially relevant for the design and analysis of delay-sensitive multimedia applications. In this paper, we present analysis of ¢¡¤ £ hours of endto-end unicast and multicast packet loss measurement. From these we selected ¥§ ¦ ..."
Cited by 270 (11 self)
chain model of order [...] or greater was found to be necessary to accurately model the rest of the segments. For the case of adaptive applications which track loss, we address two issues of on-line loss estimation: the required memory size and whether to use exponential smoothing or a sliding window
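A minimal sketch of the two on-line loss estimators the snippet contrasts, exponential smoothing versus a sliding window over per-packet loss indicators; the smoothing factor and window length are illustrative defaults, not values from the paper:

from collections import deque

class EwmaLossEstimator:
    # Exponentially weighted moving average: O(1) memory, older packets decay smoothly.
    def __init__(self, alpha=0.01):
        self.alpha = alpha
        self.estimate = 0.0

    def update(self, lost):
        self.estimate += self.alpha * ((1.0 if lost else 0.0) - self.estimate)
        return self.estimate

class SlidingWindowLossEstimator:
    # Loss fraction over the last `window` packets: memory grows with the window length.
    def __init__(self, window=1000):
        self.samples = deque(maxlen=window)

    def update(self, lost):
        self.samples.append(1 if lost else 0)
        return sum(self.samples) / len(self.samples)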
An Empirical Bayes Approach to Inferring Large-Scale Gene Association Networks - BIOINFORMATICS, 2004
"... Motivation: Genetic networks are often described statistically by graphical models (e.g. Bayesian networks). However, inferring the network structure offers a serious challenge in microarray analysis where the sample size is small compared to the number of considered genes. This renders many standar ..."
Cited by 237 (6 self)
) that are now frequently used to describe gene association networks and to detect conditionally dependent genes. Our new approach is based on (i) improved (regularized) small-sample point estimates of partial correlation, (ii) an exact test of edge inclusion with adaptive estimation of the degree of freedom
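A minimal sketch of turning a regularized covariance estimate into the partial correlations mentioned in the snippet (numpy assumed; the shrinkage target and intensity below are placeholders, the paper derives its own small-sample regularized estimator and an edge-inclusion test on top of it):

import numpy as np

def partial_correlations(X, shrinkage=0.2):
    # X: samples x genes data matrix. Shrink the off-diagonal of the sample
    # covariance toward zero so it is safely invertible when samples << genes.
    X = X - X.mean(axis=0)
    S = X.T @ X / (X.shape[0] - 1)
    S_reg = (1.0 - shrinkage) * S + shrinkage * np.diag(np.diag(S))
    Omega = np.linalg.inv(S_reg)                 # precision matrix
    d = np.sqrt(np.diag(Omega))
    pcor = -Omega / np.outer(d, d)               # standard precision-to-partial-correlation map
    np.fill_diagonal(pcor, 1.0)
    return pcor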
Practical selectivity estimation through adaptive sampling - In Proc. ACM SIGMOD International Conf. on Management of Data, 1990
"... Recently we have proposed an adaptive, random sampling algorithm for general query size estlmatlon In earlier work we analyzed the asymptotic ef’l?clency and accuracy of the algorithm, m this paper we mvestlgate Its practlcahty as applied to selects and Jams First, we extend our previous analysis to ..."
Cited by 164 (7 self)
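A generic sequential-sampling sketch of the idea in the snippet, drawing random tuples until a confidence bound on the selectivity estimate is tight enough; the stopping rule below is a textbook normal-approximation one, not the paper's exact adaptive bound:

import math

def estimate_selectivity(sample_row, predicate, z=1.96, rel_err=0.1,
                         min_hits=30, max_samples=100_000):
    # Draw rows one at a time; stop once the confidence half-width is within
    # rel_err of the running selectivity estimate (or a sample cap is reached).
    hits = n = 0
    while n < max_samples:
        n += 1
        if predicate(sample_row()):
            hits += 1
        if hits >= min_hits:
            p = hits / n
            if z * math.sqrt(p * (1.0 - p) / n) <= rel_err * p:
                return p
    return hits / n

Called, for example, as estimate_selectivity(lambda: random.choice(rows), lambda r: r["age"] > 40) for a hypothetical in-memory table rows.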
Optimal aggregation of classifiers in statistical learning - ANN. STATIST., 2004
"... Classification can be considered as nonparametric estimation of sets, where the risk is defined by means of a specific distance between sets associated with misclassification error. It is shown that the rates of convergence of classifiers depend on two parameters: the complexity of the class of cand ..."
Cited by 220 (6 self)
of candidate sets and the margin parameter. The dependence is explicitly given, indicating that optimal fast rates approaching O(n⁻¹) can be attained, where n is the sample size, and that the proposed classifiers have the property of robustness to the margin. The main result of the paper concerns optimal
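In standard statistical-learning notation (assumed here, not quoted from the abstract), the rates of convergence refer to how fast the excess misclassification risk of a data-driven classifier \hat f_n vanishes with the sample size n:

R(f) = \mathbb{P}\bigl(Y \neq f(X)\bigr), \qquad \mathcal{E}(\hat f_n) = \mathbb{E}\,R(\hat f_n) - \inf_{f} R(f),

and the snippet's fast-rate statement says this excess risk can approach O(n⁻¹) under a favorable margin parameter.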
Adapting the Sample Size in Particle Filters Through KLD-Sampling - International Journal of Robotics Research, 2003
"... Over the last years, particle filters have been applied with great success to a variety of state estimation problems. In this paper we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process. ..."
Cited by 150 (9 self)
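A minimal sketch of the sample-size bound used in the usual presentation of KLD-sampling: particles are drawn one at a time into a grid histogram, k is the number of bins currently occupied, and sampling stops once the particle count exceeds the chi-square-based bound below (Wilson-Hilferty approximation); epsilon and the normal quantile are illustrative defaults:

import math

def kld_sample_bound(k, epsilon=0.05, z_delta=2.326):
    # Number of particles so that, with probability about 1 - delta
    # (z_delta is the standard-normal quantile, 2.326 for delta = 0.01),
    # the KL divergence between the sample-based and the true discretized
    # posterior stays below epsilon; k = occupied histogram bins.
    if k <= 1:
        return 1
    a = 2.0 / (9.0 * (k - 1))
    return math.ceil((k - 1) / (2.0 * epsilon) * (1.0 - a + math.sqrt(a) * z_delta) ** 3)

In the filter loop the bound is re-evaluated as new bins fill, so the sample set grows for diffuse beliefs and shrinks when the posterior is concentrated.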
Charting a Manifold - Advances in Neural Information Processing Systems 15, 2003
"... this paper we use m i ( j ) N ( j ; i , s ), with the scale parameter s specifying the expected size of a neighborhood on the manifold in sample space. A reasonable choice is s = r/2, so that 2erf(2) > 99.5% of the density of m i ( j ) is contained in the area around y i where the manifold i ..."
Cited by 206 (7 self)
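A small sketch of the Gaussian neighborhood weighting described in the snippet, using an isotropic kernel of scale s around a sample y_i; this is a generic reading of the partially garbled formula, not the paper's exact local model:

import numpy as np

def neighborhood_weights(Y, i, s):
    # Unnormalized Gaussian weight of every sample y_j around y_i with scale s;
    # with s = r/2, points farther than about 2s contribute almost nothing.
    d2 = np.sum((Y - Y[i]) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * s ** 2))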
Managing Multi-Configurable Hardware via Dynamic Working Set Analysis - In 29th Annual International Symposium on Computer Architecture, 2002
"... Microprocessors are designed to provide good average performance over a variety of workloads. This can lead to inefficiencies both in power and performance for individual programs and during individual phases within the same program. Microarchitectures with multi-configuration units (e.g. caches, pr ..."
Cited by 192 (3 self)
algorithms that use working set signatures to 1) detect working set changes and trigger re-tuning; 2) identify recurring working sets and re-install saved optimal reconfigurations, thus avoiding the time-consuming tuning process; 3) estimate working set sizes to configure caches directly to the proper size
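A minimal sketch of working set signatures as the snippet uses them: hash the cache-line addresses touched in an interval into a bit vector, compare consecutive signatures to detect phase changes, and invert the fill fraction to estimate working set size. The bit-vector size, granularity and Python-level hashing are illustrative stand-ins for the hardware mechanism:

import math

SIG_BITS = 1024          # signature length (illustrative)
LINE_BYTES = 64          # working-set granularity (illustrative)

def make_signature(addresses):
    # OR in one hashed bit per distinct cache line touched during the interval.
    sig = 0
    for addr in addresses:
        sig |= 1 << (hash(addr // LINE_BYTES) % SIG_BITS)
    return sig

def relative_distance(sig_a, sig_b):
    # Fraction of differing bits among the bits set in either signature;
    # a value above a threshold (e.g. 0.5) signals a working-set change and triggers re-tuning.
    union = bin(sig_a | sig_b).count("1")
    return bin(sig_a ^ sig_b).count("1") / union if union else 0.0

def estimated_size(sig):
    # With k lines hashed into SIG_BITS positions, the expected fill fraction is
    # about 1 - exp(-k / SIG_BITS); invert it to estimate the number of distinct lines.
    fill = bin(sig).count("1") / SIG_BITS
    return float("inf") if fill >= 1.0 else -SIG_BITS * math.log(1.0 - fill)

Saved signatures also let recurring working sets be recognized so a previously tuned configuration can be re-installed, matching the snippet's second and third uses.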