Results 1–8 of 8
Learning minimum volume sets
 J. Machine Learning Res.
, 2006
Cited by 28 (9 self)
Abstract
Given a probability measure P and a reference measure µ, one is often interested in the minimum µ-measure set with P-measure at least α. Minimum volume sets of this type summarize the regions of greatest probability mass of P, and are useful for detecting anomalies and constructing confidence regions. This paper addresses the problem of estimating minimum volume sets based on independent samples distributed according to P. Other than these samples, no other information is available regarding P, but the reference measure µ is assumed to be known. We introduce rules for estimating minimum volume sets that parallel the empirical risk minimization and structural risk minimization principles in classification. As in classification, we show that the performances of our estimators are controlled by the rate of uniform convergence of empirical to true probabilities over the class from which the estimator is drawn. Thus we obtain finite sample size performance bounds in terms of VC dimension and related quantities. We also demonstrate strong universal consistency and an oracle inequality. Estimators based on histograms and dyadic partitions illustrate the proposed rules.
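The histogram rule mentioned in this abstract admits a compact illustration. The sketch below is a toy under stated assumptions, not the paper's estimator: uniform bins over the sample range stand in for a known reference measure µ (Lebesgue), and equal-volume bins are ranked by empirical mass and kept, densest first, until their total empirical P-measure reaches α.

```python
import numpy as np

def minimum_volume_set(samples, alpha, bins=20):
    """Histogram estimate of a minimum volume set with P-measure >= alpha.

    With equal-volume bins, ranking by count equals ranking by density, so
    the greedy prefix of densest bins minimizes volume at the target mass.
    Returns the bin edges, a boolean mask over bins, and the set's volume.
    """
    counts, edges = np.histogramdd(samples, bins=bins)
    n = samples.shape[0]
    bin_volume = np.prod([e[1] - e[0] for e in edges])  # uniform grid
    mass = counts.ravel() / n                # empirical P-measure per bin
    order = np.argsort(mass)[::-1]           # densest bins first
    cum = np.cumsum(mass[order])
    k = np.searchsorted(cum, alpha) + 1      # smallest prefix with mass >= alpha
    selected = np.zeros(mass.size, dtype=bool)
    selected[order[:k]] = True
    return edges, selected.reshape(counts.shape), k * bin_volume

rng = np.random.default_rng(0)
x = rng.normal(size=(5000, 2))
edges, mask, volume = minimum_volume_set(x, alpha=0.9, bins=15)
```

For Gaussian data the selected bins form a small central blob: a strict subset of the grid that still carries at least 90% of the empirical mass.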
Anomaly detection through a Bayesian support vector machine
 IEEE Trans. Reliab., 2010
Cited by 4 (2 self)
Abstract
This paper investigates the use of a one-class support vector machine algorithm to detect the onset of system anomalies, and trend output classification probabilities, as a way to monitor the health of a system. In the absence of “unhealthy” (negative class) information, a marginal kernel density estimate of the “healthy” (positive class) distribution is used to construct an estimate of the negative class. The output of the one-class support vector classifier is calibrated to posterior probabilities by fitting a logistic distribution to the support vector predictor model in an effort to manage false alarms.
Index Terms—Anomaly detection, Bayesian linear models, Bayesian posterior class probabilities, kernel density estimation, one-class classifier, support vector machine.
Acronyms: PHM, Prognostics and Health Management; SVM, Support vector machine; OSH, Optimal separating hyperplane; PCA, Principal component analysis; SVD, Singular value decomposition; GLM, Generalized linear model; BLM, Bayesian linear model; PoF, Physics of failure; KDE, Kernel density estimate; RBF, Radial basis function; MAP, Maximum a posteriori; Minimum volume sets.
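The calibration idea in this abstract can be sketched in a few lines. In the toy below, a Gaussian KDE log-density stands in for the one-class SVM score, and uniform pseudo-negatives over the input box stand in for the paper's KDE-based negative-class construction (both substitutions are assumptions for illustration); a Platt-style sigmoid then maps raw scores to posterior probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Healthy" training data (positive class). The paper scores data with a
# one-class SVM; here a Gaussian KDE log-density stands in for that score.
healthy = rng.normal(0.0, 1.0, size=(500, 2))

def kde_log_density(x, data, bw=0.4):
    # Unnormalised KDE: mean of isotropic Gaussian kernels at training points.
    d2 = ((x[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    return np.log(np.exp(-d2 / (2.0 * bw ** 2)).mean(1) + 1e-300)

# Pseudo-negatives from a broad uniform box around the data: a crude stand-in
# for the paper's construction of the "unhealthy" class.
lo, hi = healthy.min(0) - 2.0, healthy.max(0) + 2.0
negatives = rng.uniform(lo, hi, size=(500, 2))

s = np.concatenate([kde_log_density(healthy, healthy),
                    kde_log_density(negatives, healthy)])
y = np.concatenate([np.ones(500), np.zeros(500)])
mu_s, sd_s = s.mean(), s.std()        # standardise scores for a stable fit
z = (s - mu_s) / sd_s

# Platt-style calibration: fit p(healthy | score) = sigmoid(a*z + b) by
# gradient descent on the logistic log-loss.
a, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(a * z + b)))
    a -= 0.5 * ((p - y) * z).mean()
    b -= 0.5 * (p - y).mean()

def posterior_healthy(x):
    zx = (kde_log_density(x, healthy) - mu_s) / sd_s
    return 1.0 / (1.0 + np.exp(-(a * zx + b)))
```

Thresholding the calibrated posterior, rather than the raw score, is what lets false-alarm rates be managed explicitly: points near the training cloud map to probabilities near 1, points far from it to probabilities near 0.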
RKHS classification for multivariate extreme-value analysis
, 2008
Abstract
In many engineering applications, data samples are expensive to get and limited in number. In such a difficult context, this paper shows how classification based on Reproducing Kernel Hilbert Space (RKHS) can be used in conjunction with Extreme Value Theory (EVT) to estimate extreme multivariate quantiles and small probabilities of failure. For estimating extreme multivariate quantiles, RKHS one-class classification makes it possible to map vector-valued data onto R, so as to estimate a high quantile of a univariate distribution by means of EVT. In order to estimate small probabilities of failure, we basically apply multivariate EVT; however, EVT is hampered by the fact that many samples may be needed before observing a single tail event. By means of a new method again based on RKHS classification, we can partially solve this problem and increase the proportion of tail events in the samples collected.
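The map-then-extrapolate idea can be illustrated with a minimal peaks-over-threshold sketch. Here the Mahalanobis distance stands in for the RKHS one-class score (an assumption; the paper uses a kernel classifier), and a generalised Pareto distribution fitted to threshold excesses by moment estimators extrapolates a high quantile of the scalar scores.

```python
import numpy as np

rng = np.random.default_rng(2)

# Heavy-tailed multivariate samples. The paper maps data to R with an RKHS
# one-class score; the Mahalanobis distance to the sample mean stands in
# for that mapping here.
X = rng.standard_t(df=4, size=(20000, 3))
centred = X - X.mean(0)
cov_inv = np.linalg.inv(np.cov(X.T))
scores = np.sqrt(np.einsum('ij,jk,ik->i', centred, cov_inv, centred))

def pot_quantile(s, p, u_quantile=0.95):
    """Peaks-over-threshold estimate of the quantile exceeded with probability p.

    Excesses over a high threshold u are modelled by a generalised Pareto
    distribution fitted with moment estimators, then extrapolated to level p.
    """
    u = np.quantile(s, u_quantile)
    excess = s[s > u] - u
    m, v = excess.mean(), excess.var()
    xi = 0.5 * (1.0 - m * m / v)       # GPD shape, moment estimator
    sigma = m * (1.0 - xi)             # GPD scale, moment estimator
    zeta = (s > u).mean()              # empirical exceedance rate of u
    if abs(xi) < 1e-6:                 # exponential-tail limit
        return u + sigma * np.log(zeta / p)
    return u + (sigma / xi) * ((zeta / p) ** xi - 1.0)

q = pot_quantile(scores, p=1e-3)       # quantile far beyond the threshold
```

The point of the extrapolation is visible in the exponent: only the 5% of samples above the threshold are used to fit the tail, yet the estimate reaches exceedance levels fifty times rarer than the threshold itself.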
Learning High-Density Regions for a Generalized Kolmogorov-Smirnov Test in High-Dimensional Data
Contributions à la simulation des évènements rares dans les systèmes complexes [Contributions to the simulation of rare events in complex systems]
, 2013
Learning to recognise. A study on one-class classification and active learning
Prof. dr. ir. M.J.T. Reinders
, 2006
Abstract
Learning to recognise. A study on one-class classification and active learning. Dissertation for obtaining the degree of doctor at Delft University of Technology, by authority of the Rector Magnificus Prof. dr. ir. J.T. Fokkema, chairman of the Board for Doctorates.
Functional Models and Probability Density Functions
Abstract
There exist many approaches to discern a functional relationship between two variables. A functional model is useful for two reasons: Firstly, if the function is a relatively simple model in the plane, it provides us with qualitative information about the relationship. Secondly, given a fixed value for one variable, the other one can be calculated as a means for prediction. In this paper an approach for the extraction of functional models from probability density functions is proposed. The transformation of the conditional probability density function into a single value or a set of values is the basis for our discussion. Several transformations such as the mean value, the median and the modal intervals are well established. Regression models are compared to the functional models introduced here and as a consequence, two indicators to relate functional models to probability density functions are provided.
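The transformations this abstract mentions (conditional mean, median, and mode) can be sketched with a simple histogram estimate of p(y | x); the data-generating relationship sin(x) below is an assumption for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.0, 2.0 * np.pi, 4000)
y = np.sin(x) + rng.normal(0.0, 0.2, x.size)   # assumed toy relationship

# Joint histogram; each row, normalised, estimates the conditional p(y | x-bin).
H, xe, ye = np.histogram2d(x, y, bins=(30, 60))
xc = 0.5 * (xe[:-1] + xe[1:])                  # x bin centres
yc = 0.5 * (ye[:-1] + ye[1:])                  # y bin centres
cond = H / H.sum(axis=1, keepdims=True)

# Three functional models extracted from the same conditional density:
mean_model = cond @ yc                            # conditional mean per x-bin
mode_model = yc[np.argmax(cond, axis=1)]          # conditional mode per x-bin
cdf = np.cumsum(cond, axis=1)
median_model = yc[np.argmax(cdf >= 0.5, axis=1)]  # conditional median per x-bin
```

With symmetric noise all three models roughly agree and recover sin(x); they diverge exactly when the conditional density is skewed or multimodal, which is where the choice of transformation carries the qualitative information the abstract refers to.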