Results 11 – 20 of 3,502
Noise power spectral density estimation based on optimal smoothing and minimum statistics
IEEE Trans. Speech and Audio Processing, 2001
Cited by 276 (7 self)
Abstract: "We describe a method to estimate the power spectral density of nonstationary noise when a noisy speech signal is given. The method can be combined with any speech enhancement algorithm which requires a noise power spectral density estimate. In contrast to other methods, our approach does not use a ..."
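The minimum-statistics idea behind this estimator can be sketched in a few lines: recursively smooth the short-time power spectrum and track its minimum over a sliding window of frames, which follows the noise floor even while speech is present. This is a minimal illustration with a fixed smoothing constant; the paper's actual contribution, the optimal time-varying smoothing and bias compensation, is omitted, and all names here are ours.

```python
import numpy as np

def min_stats_noise_psd(power_spec, alpha=0.85, win=96):
    """Rough minimum-statistics noise PSD tracker.

    power_spec: (frames, bins) short-time power spectrum |Y(k, l)|^2.
    alpha: fixed smoothing constant (the paper derives an optimal,
           time-varying one; fixed here for simplicity).
    win: number of past frames over which the minimum is searched.
    """
    frames, _ = power_spec.shape
    smoothed = np.empty_like(power_spec)
    smoothed[0] = power_spec[0]
    for l in range(1, frames):
        # First-order recursive smoothing of the periodogram.
        smoothed[l] = alpha * smoothed[l - 1] + (1 - alpha) * power_spec[l]
    noise = np.empty_like(power_spec)
    for l in range(frames):
        lo = max(0, l - win + 1)
        # The minimum of the smoothed PSD over the window tracks the
        # noise floor, since speech is absent often enough within it.
        noise[l] = smoothed[lo:l + 1].min(axis=0)
    return noise
```

With stationary noise the estimate settles near the true noise power; the paper additionally corrects the downward bias that taking a minimum introduces.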
CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling
1999
Cited by 268 (19 self)
Abstract: "Clustering in data mining is a discovery process that groups a set of data such that the intra-cluster similarity is maximized and the inter-cluster similarity is minimized. Existing clustering algorithms, such as K-means, PAM, CLARANS, DBSCAN, CURE, and ROCK, are designed to find clusters that fit some static models. These algorithms can break down if the choice of parameters in the static model is incorrect with respect to the data set being clustered, or if the model is not adequate to capture the characteristics of clusters. Furthermore, most of these algorithms break down when the data ..."
Algorithms and Complexity Concerning the Preemptive Scheduling of Periodic, Real-Time Tasks on One Processor
Real-Time Systems, 1990
Cited by 248 (15 self)
Abstract: "We investigate the preemptive scheduling of periodic, real-time task systems on one processor. First, we show that when all parameters to the system are integers, we may assume without loss of generality that all preemptions occur at integer time values. We then assume, for the remainder of the paper, that all parameters are indeed integers. We then give as our main lemma both necessary and sufficient conditions for a task system to be feasible on one processor. Although these conditions cannot, in general, be tested efficiently (unless P = NP), they do allow us to give efficient algorithms ..."
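As a rough illustration of what a feasibility condition for such systems looks like, the sketch below implements the classical processor-demand test for preemptive EDF scheduling of synchronous periodic tasks with integer parameters, checking demand at every integer point up to the hyperperiod. This brute-force check is exponential in the input size, consistent with the abstract's remark about efficiency; it is our paraphrase of the standard test, not the paper's exact formulation.

```python
from math import gcd
from functools import reduce

def feasible_edf(tasks):
    """Processor-demand feasibility test for synchronous periodic tasks
    (C_i, D_i, T_i) with integer parameters on one preemptive processor.

    Feasibility holds iff the demand
        h(t) = sum_i max(0, floor((t - D_i) / T_i) + 1) * C_i
    never exceeds t; here we check all integer t up to the hyperperiod.
    """
    hyper = reduce(lambda a, b: a * b // gcd(a, b),
                   (t for _, _, t in tasks), 1)
    if sum(c / t for c, _, t in tasks) > 1:
        return False  # utilization above 1 is immediately infeasible
    for t in range(1, hyper + 1):
        demand = sum(max(0, (t - d) // p + 1) * c for c, d, p in tasks)
        if demand > t:
            return False
    return True
```

For example, tasks (C=3, D=3, T=6) and (C=3, D=4, T=6) have utilization exactly 1 yet fail the demand test at t = 4, so the utilization bound alone is not sufficient when deadlines are shorter than periods.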
Hidden Markov processes
IEEE Trans. Inform. Theory, 2002
Cited by 264 (5 self)
Abstract: "An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter ..."
Supervised learning of semantic classes for image annotation and retrieval
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007
Cited by 223 (18 self)
Abstract: "A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images ..."
Learning from Labeled and Unlabeled Data with Label Propagation
2002
Cited by 195 (0 self)
Abstract: "We investigate the use of unlabeled data to help labeled data in classification. We propose a simple iterative algorithm, label propagation, to propagate labels through the dataset along high-density areas defined by unlabeled data. We give the analysis of the algorithm, show its solution, and its c ..."
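The iterative algorithm described here is easy to sketch: build a row-normalized transition matrix from pairwise affinities, repeatedly multiply the label distribution by it, and clamp the labeled points after every step. A minimal version, assuming a dense affinity matrix and a fixed iteration count rather than a convergence test (all names ours):

```python
import numpy as np

def label_propagation(W, labels, n_iter=200):
    """W: (n, n) symmetric affinity matrix with nonzero row sums.
    labels: length-n int array; class index for labeled points, -1 for
    unlabeled. Propagates labels along the graph, clamping labeled data."""
    n = len(labels)
    classes = sorted(set(labels[labels >= 0]))
    # Row-normalize W into a transition matrix T.
    T = W / W.sum(axis=1, keepdims=True)
    Y = np.zeros((n, len(classes)))
    for i, c in enumerate(labels):
        if c >= 0:
            Y[i, classes.index(c)] = 1.0
    clamp = Y.copy()
    labeled = labels >= 0
    for _ in range(n_iter):
        Y = T @ Y                    # propagate along the graph
        Y[labeled] = clamp[labeled]  # clamp the labeled points
    return np.array(classes)[Y.argmax(axis=1)]
```

On a four-node path graph with the two endpoints labeled differently, each interior node converges to the label of its nearer endpoint, which is the "propagate through high-density regions" behavior the abstract describes.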
Variational Inference for Bayesian Mixtures of Factor Analysers
In Advances in Neural Information Processing Systems 12, 2000
Cited by 191 (22 self)
Abstract: "We present an algorithm that infers the model structure of a mixture of factor analysers using an efficient and deterministic variational approximation to full Bayesian integration over model parameters. This procedure can automatically determine the optimal number of components and the local dimension ..."
MIMIC: Finding Optima by Estimating Probability Densities
Advances in Neural Information Processing Systems, 1996
Cited by 154 (1 self)
Abstract: "In many optimization problems, the structure of solutions reflects complex relationships between the different input parameters. For example, experience may tell us that certain parameters are closely related and should not be explored independently. Similarly, experience may establish that a subset of parameters must take on particular values. Any search of the cost landscape should take advantage of these relationships. We present MIMIC, a framework in which we analyze the global structure of the optimization landscape. A novel and efficient algorithm for the estimation of this structure ..."
Relative Loss Bounds for Online Density Estimation with the Exponential Family of Distributions
Machine Learning, 2000
Cited by 152 (12 self)
Abstract: "We consider online density estimation with a parameterized density from the exponential family. The online algorithm receives one example at a time and maintains a parameter that is essentially an average of the past examples. After receiving an example the algorithm incurs a loss, which is the n ..."
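For one concrete member of the exponential family, the "parameter as an average of the past examples" can be sketched with a unit-variance Gaussian: the maintained mean is the running average, and each round's loss is the negative log-likelihood of the incoming example under the current parameter. This is our illustrative special case, not the paper's general setting or its relative loss bounds.

```python
from math import log, pi

def online_gaussian_mean(xs):
    """Online density estimation for a 1-D Gaussian with known unit
    variance. The parameter theta is the running average of the past
    examples; each round incurs the negative log-likelihood of the new
    example under the current theta, before the update."""
    theta, losses = 0.0, []
    for t, x in enumerate(xs, start=1):
        losses.append(0.5 * (x - theta) ** 2 + 0.5 * log(2 * pi))
        theta += (x - theta) / t  # incremental update of the running mean
    return theta, losses
```

After seeing examples 1.0, 2.0, 3.0 the maintained parameter is their mean, 2.0; the cumulative loss of this online estimator is what such papers compare against the best fixed parameter chosen in hindsight.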
Optimisation of Density Estimation Models with Evolutionary Algorithms
1998
Abstract: "We propose a new optimisation method for estimating both the parameters and the structure, i.e. the number of components, of a finite mixture model for density estimation. We employ a hybrid method consisting of an evolutionary algorithm for structure optimisation in conjunction with a gradient-b ..."