Results 11 - 20 of 18,300
Equivariant Adaptive Source Separation
IEEE Trans. on Signal Processing, 1996
"... Source separation consists in recovering a set of independent signals when only mixtures with unknown coefficients are observed. This paper introduces a class of adaptive algorithms for source separation which implements an adaptive version of equivariant estimation and is henceforth called EASI ..."
Cited by 449 (9 self)
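The adaptive rule the abstract describes can be sketched as a stochastic update on the separating matrix B. The toy version below (plain Python, 2x2 case) uses a cubic nonlinearity g(y) = y^3 and a fixed step size, both of which are assumptions for illustration, not values taken from the paper; it only shows the form of the relative-gradient EASI update.

```python
import random

def easi_step(B, x, lam=0.003):
    """One EASI update: B <- B - lam * ((y y' - I) + g(y) y' - y g(y)') B, with g(y) = y^3."""
    y = [sum(B[i][k] * x[k] for k in range(2)) for i in range(2)]
    g = [v ** 3 for v in y]
    # Relative-gradient term H = (y y' - I) + g(y) y' - y g(y)'
    H = [[y[i] * y[j] - (1.0 if i == j else 0.0) + g[i] * y[j] - y[i] * g[j]
          for j in range(2)] for i in range(2)]
    HB = [[sum(H[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return [[B[i][j] - lam * HB[i][j] for j in range(2)] for i in range(2)]

random.seed(0)
A = [[1.0, 0.6], [0.5, 1.0]]   # unknown mixing matrix (illustrative)
B = [[1.0, 0.0], [0.0, 1.0]]   # separating matrix, adapted online
ys = []
for t in range(40000):
    s = [random.uniform(-1.7, 1.7), random.uniform(-1.7, 1.7)]  # independent sources
    x = [A[0][0] * s[0] + A[0][1] * s[1], A[1][0] * s[0] + A[1][1] * s[1]]
    B = easi_step(B, x)
    if t >= 30000:
        ys.append([sum(B[i][k] * x[k] for k in range(2)) for i in range(2)])

# The (y y' - I) part of the update drives the outputs toward unit
# variance and decorrelation, which we can check empirically.
var0 = sum(y[0] ** 2 for y in ys) / len(ys)
cov01 = sum(y[0] * y[1] for y in ys) / len(ys)
```

Note the update multiplies the gradient term by B itself; that is the equivariant ("relative gradient") structure that makes the algorithm's behavior independent of the mixing matrix.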
Strictly Proper Scoring Rules, Prediction, and Estimation
2007
"... Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide ..."
Cited by 373 (28 self)
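Propriety as defined in the abstract is easy to check numerically for a concrete rule. The sketch below uses the Brier score on a binary event (a standard strictly proper rule; the grid search and the value p = 0.3 are just illustration, not from the paper): the expected loss under the true probability p is uniquely minimized by forecasting q = p.

```python
def expected_brier_loss(q, p):
    """Expected Brier loss of forecast q when the event occurs with true probability p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p_true = 0.3
grid = [i / 100 for i in range(101)]
best_q = min(grid, key=lambda q: expected_brier_loss(q, p_true))
# best_q equals p_true: honest forecasting uniquely minimizes the expected
# loss, which is exactly the "strictly proper" property described above.
```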
Mixed Logit with Repeated Choices: Households' Choices Of Appliance Efficiency Level
1997
"... Mixed logit models, also called random-parameters or error-components logit, are a generalization of standard logit that do not exhibit the restrictive "independence from irrelevant alternatives" property and explicitly account for correlations in unobserved utility over repeated choices by each customer. Mixed logits are estimated for households' choices of appliances under utility-sponsored programs that offer rebates or loans on high-efficiency appliances. JEL Codes: C15, C23, C25, D12, L68, L94, Q40 ..."
Cited by 338 (10 self)
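Mixed logit choice probabilities have no closed form; they are typically simulated by averaging standard-logit probabilities over draws of the random coefficients. A minimal sketch with one normally distributed taste coefficient (the attribute values and the N(0.5, 1) taste distribution are made-up illustration, not estimates from the paper):

```python
import math
import random

def mixed_logit_probs(attributes, mu, sigma, draws=5000, seed=1):
    """Simulate P(choice i) = E_beta[ softmax(beta * x)_i ] with beta ~ N(mu, sigma)."""
    rng = random.Random(seed)
    sums = [0.0] * len(attributes)
    for _ in range(draws):
        beta = rng.gauss(mu, sigma)        # one taste draw per simulated decision-maker
        utils = [beta * x for x in attributes]
        m = max(utils)                     # subtract the max for numerical stability
        exps = [math.exp(u - m) for u in utils]
        z = sum(exps)
        for i, e in enumerate(exps):
            sums[i] += e / z
    return [s / draws for s in sums]

# Three hypothetical appliance efficiency levels with increasing efficiency attribute.
probs = mixed_logit_probs([0.0, 1.0, 2.0], mu=0.5, sigma=1.0)
```

Because each draw mixes a whole logit probability rather than a single utility, the averaged model escapes the independence-from-irrelevant-alternatives property of plain logit.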
Background and Foreground Modeling Using Nonparametric Kernel Density Estimation for Visual Surveillance
Proceedings of the IEEE, 2002
"... This paper focuses on two issues related to this problem. First, we construct a statistical representation of the scene background that supports sensitive detection of moving objects in the scene, but is robust to clutter arising out of natural scene variations. Second, we build statistical repr ... utilize general nonparametric kernel density estimation techniques for building these statistical representations of the background and the foreground. These techniques estimate the pdf directly from the data without any assumptions about the underlying distributions. Example results from applications ..."
Cited by 294 (8 self)
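The core mechanism described above can be sketched for a single pixel: estimate the density of its recent intensity samples with Gaussian kernels, and flag a new value as foreground when the background model assigns it low probability. The bandwidth, threshold, and sample history below are placeholder assumptions, not values from the paper.

```python
import math

def kde_density(value, samples, bandwidth=5.0):
    """Nonparametric density estimate at `value` from recent pixel samples (Gaussian kernel)."""
    norm = 1.0 / (math.sqrt(2 * math.pi) * bandwidth)
    return sum(norm * math.exp(-0.5 * ((value - s) / bandwidth) ** 2)
               for s in samples) / len(samples)

def is_foreground(value, samples, threshold=1e-3):
    # Low probability under the background model => likely a moving object.
    return kde_density(value, samples) < threshold

history = [100, 102, 99, 101, 103, 98, 100, 102]  # recent intensities of one pixel
```

No parametric form (e.g. a single Gaussian per pixel) is assumed; the estimate adapts to whatever multimodal distribution the samples exhibit, which is what makes the model robust to natural scene variation such as swaying branches.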
An Empirical Study of Operating System Errors
2001
"... We present a study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels. Our approach differs from previous studies that consider errors found by manual inspection of logs, testing, and surveys because static analysis is applied uniformly to the entire kernel source, though our approach necessarily considers a less comprehensive variety of errors than previous studies. In addition, automation allows us to track errors over multiple versions of the kernel source to estimate how long errors remain in the system before they are fixed ..."
Cited by 363 (9 self)
Minimax Estimation via Wavelet Shrinkage
1992
"... We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel and Besov-type smoothness constraints, and asymptotically minimax over Besov bodies with p ≤ q. Linear estimates cannot achieve even the minimax rates over Triebel and Besov classes with p < 2, so our method can significantly outperform every linear ..."
Cited by 321 (29 self)
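The shrinkage step the abstract refers to can be sketched concretely: transform to the wavelet domain, soft-threshold the detail coefficients, and invert. The one-level Haar transform and the fixed threshold t = 0.2 below are simplifying assumptions; the paper's contribution is choosing near-minimax thresholds, which this sketch does not attempt.

```python
import math

def soft_threshold(c, t):
    """Shrink coefficient c toward zero by t (the nonlinear shrinkage step)."""
    return math.copysign(max(abs(c) - t, 0.0), c)

def haar_denoise(signal, t):
    """One-level Haar transform, soft-threshold the detail coefficients, invert."""
    s2 = math.sqrt(2.0)
    approx = [(a + b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [soft_threshold(d, t) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s2, (a - d) / s2])
    return out

# Small noisy wiggles are killed by the threshold; the jump at index 4 survives.
noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
clean = haar_denoise(noisy, t=0.2)
```

Because thresholding acts coefficient-by-coefficient, it is nonlinear in exactly the way the abstract's last sentence exploits: it suppresses noise while preserving sharp features that any linear (i.e. convolution-type) smoother would blur.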
Multicast-Based Inference of Network-Internal Characteristics: Accuracy of Packet Loss Estimation
IEEE Transactions on Information Theory, 1998
"... We explore the use of end-to-end multicast traffic as measurement probes to infer network-internal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique explo ... In particular, we report on the error between inferred loss rates and actual loss rates as we vary the network topology, propagation delay, packet drop policy, background traffic mix, and probe traffic type. In all but one case, estimated losses and probe losses agree to within 2 percent on average. We feel ..."
Cited by 323 (40 self)
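For the smallest topology, a shared link feeding two receivers, the multicast inference idea has a simple closed form: if γ1 and γ2 are the per-receiver reception rates and γ12 the joint rate, independence of link losses gives γ1 = a·b1, γ2 = a·b2, γ12 = a·b1·b2, so the shared-link pass probability is a = γ1γ2/γ12. The simulation below checks this on Bernoulli losses (the loss rates and sample size are made-up illustration, and this is a simplification of the paper's general-tree MLE).

```python
import random

def simulate_and_infer(a, b1, b2, n=200000, seed=7):
    """Infer link pass probabilities in a two-receiver multicast tree from
    end-to-end receptions only (a = shared link, b1/b2 = leaf links)."""
    rng = random.Random(seed)
    r1 = r2 = r12 = 0
    for _ in range(n):
        shared = rng.random() < a            # probe survives the shared link
        o1 = shared and rng.random() < b1    # ... and then each leaf link
        o2 = shared and rng.random() < b2
        r1 += o1
        r2 += o2
        r12 += o1 and o2
    g1, g2, g12 = r1 / n, r2 / n, r12 / n
    a_hat = g1 * g2 / g12                    # shared-link estimate
    return a_hat, g1 / a_hat, g2 / a_hat     # ... and the leaf-link estimates

a_hat, b1_hat, b2_hat = simulate_and_infer(a=0.9, b1=0.95, b2=0.8)
```

The key point is that only the end-to-end reception indicators are used; the per-link outcomes are never observed, yet correlation between receivers identifies where the loss happened.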
Approximating the permanent
SIAM J. Computing, 1989
"... A randomised approximation scheme for the permanent of a 0-1 matrix is presented. The task of estimating a permanent is reduced to that of almost uniformly generating perfect matchings in a graph; the latter is accomplished by simulating a Markov chain whose states are the matchings in the ..."
Cited by 345 (26 self)
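To make the object of the approximation concrete: the permanent is a determinant without the alternating signs, and for a 0-1 biadjacency matrix it counts the perfect matchings of the bipartite graph, which is the connection the abstract exploits. The brute-force computation below is illustrative only; its exponential cost is exactly why the paper resorts to a randomised Markov-chain scheme instead.

```python
from itertools import permutations
from math import prod

def permanent(M):
    """per(M) = sum over permutations sigma of prod_i M[i][sigma(i)].
    #P-hard in general; feasible here only because the matrices are tiny."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

# For the complete bipartite graph K_{3,3}, every one of the 3! = 6
# permutations is a perfect matching.
ones3 = [[1, 1, 1]] * 3
```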
Dependency tree kernels for relation extraction
In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), 2004
"... We extend previous work on tree kernels to estimate the similarity between the dependency trees of sentences. Using this kernel within a Support Vector Machine, we detect and classify relations between entities in the Automatic Content Extraction (ACE) corpus of news articles. We examine the utility ..."
Cited by 263 (2 self)
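The flavor of a tree kernel can be conveyed with a heavily simplified recursive similarity on dependency trees given as (label, children) pairs: zero when the node labels differ, otherwise one plus the kernel over the children. This toy index-aligned matching is an assumption for brevity; the paper's kernel matches children as subsequences and scores node features more finely.

```python
def tree_kernel(t1, t2):
    """Toy similarity on dependency trees given as (label, [children]) tuples.
    Zero if the roots differ; otherwise 1 plus the kernel over position-aligned
    children (a simplification of the ACL-04 subsequence matching)."""
    label1, kids1 = t1
    label2, kids2 = t2
    if label1 != label2:
        return 0
    return 1 + sum(tree_kernel(a, b) for a, b in zip(kids1, kids2))

# "John ate an apple" vs "John ate soup": shared root and subject, different object.
s1 = ("ate", [("John", []), ("apple", [("an", [])])])
s2 = ("ate", [("John", []), ("soup", [])])
```

Such a recursively defined similarity can be plugged into an SVM directly as the kernel function, which is how the paper uses it for relation detection without ever mapping trees to explicit feature vectors.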
Consistency of the group lasso and multiple kernel learning
Journal of Machine Learning Research, 2007
"... We consider the least-square regression problem with regularization by a block 1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger than one. This problem, referred to as the group Lasso, extends the usual regularization by the 1-norm where all spaces have dimension one, where it ... are replaced by functions and reproducing kernel Hilbert norms, the problem is usually referred to as multiple kernel learning and is commonly used for learning from heterogeneous data sources and for nonlinear variable selection. Using tools from functional analysis, and in particular covariance operators ..."
Cited by 274 (33 self)
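The block 1-norm penalty acts on each group of coefficients through block soft-thresholding, the proximal step used by most group-lasso solvers (a standard operator, sketched here as illustration rather than anything taken from the paper): a group is shrunk in Euclidean norm, and zeroed out entirely when its norm falls below the threshold, which is what selects whole groups of variables at once.

```python
import math

def group_soft_threshold(v, t):
    """Proximal step for the block 1-norm: shrink the group v by t in Euclidean
    norm, zeroing it out when ||v|| <= t (whole-group variable selection)."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= t:
        return [0.0] * len(v)
    scale = 1.0 - t / norm
    return [scale * x for x in v]

strong_group = group_soft_threshold([3.0, 4.0], t=2.0)  # norm 5 -> scaled by 3/5
weak_group = group_soft_threshold([0.3, 0.4], t=2.0)    # norm 0.5 -> dropped entirely
```

When each "group" is an RKHS norm instead of a Euclidean one, the same shrink-or-kill behavior selects whole kernels, which is the multiple-kernel-learning view the abstract describes.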