Results 11–20 of 31
Website fingerprinting in onion routing based anonymization networks
 in Proceedings of the 18th ACM Conference on Computer and Communications Security (ACM CCS) Workshop on Privacy in the Electronic Society (WPES 2011)
, 2011
Abstract

Cited by 16 (1 self)
Low-latency anonymization networks such as Tor and JAP claim to hide the recipient and the content of communications from a local observer, i.e., an entity that can eavesdrop on the traffic between the user and the first anonymization node. Users in totalitarian regimes in particular depend strongly on such networks to communicate freely. For these people, anonymity is particularly important, and an analysis of the anonymization methods against various attacks is necessary to ensure adequate protection. In this paper we show that anonymity in Tor and JAP is not as strong as previously expected and cannot resist website fingerprinting attacks under certain circumstances. We first define features for website fingerprinting based solely on the volume, time, and direction of the traffic. As a result, the subsequent classification becomes much easier. We apply support vector machines with the introduced features. We are able to improve the recognition results of existing works on a given state-of-the-art dataset in Tor from 3% to 55% and in JAP from 20% to 80%. The datasets assume a closed world with 775 websites only. In a next step, we transfer our findings to a more complex and realistic open-world scenario, i.e., recognition of several websites within a set of thousands of random unknown websites. To the best of our knowledge, this work is the first successful attack in the open-world scenario. We achieve a surprisingly high true positive rate of up to 73% for a false positive rate of 0.05%. Finally, we show preliminary results of a proof-of-concept implementation that applies camouflage as a countermeasure to hamper the fingerprinting attack. For JAP, the detection rate decreases from 80% to 4%, and for Tor it drops from 55% to about 3%.
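The abstract's core idea — summarizing a traffic trace using only volume, time, and direction — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the trace representation (a list of `(timestamp, size, direction)` tuples) and the specific summary statistics are assumptions made for the example.

```python
# Hypothetical sketch: per-trace features from volume, time, and direction only.
# The trace format and feature choices are illustrative assumptions, not the
# paper's actual feature set.

def trace_features(trace):
    """Summarize a packet trace.

    trace: list of (timestamp_seconds, size_bytes, direction) tuples,
           direction +1 = outgoing (client -> server), -1 = incoming.
    """
    if not trace:
        return {"total_bytes": 0, "duration": 0.0,
                "frac_incoming": 0.0, "n_packets": 0}
    total_bytes = sum(size for _, size, _ in trace)
    duration = trace[-1][0] - trace[0][0]
    incoming = sum(size for _, size, d in trace if d < 0)
    return {
        "total_bytes": total_bytes,               # volume
        "duration": duration,                     # time
        "frac_incoming": incoming / total_bytes,  # direction balance
        "n_packets": len(trace),
    }

# Example: three packets of a hypothetical page load.
trace = [(0.00, 600, +1), (0.05, 1500, -1), (0.12, 1500, -1)]
feats = trace_features(trace)
```

A feature vector like this would then be fed to an SVM classifier, one class per candidate website.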
A tutorial on ν-Support Vector Machines
 APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY
, 2005
Abstract

Cited by 13 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines (SVMs), and kernel feature spaces. We place particular emphasis on a description of the so-called ν-SVM, including details of the algorithm and its implementation, theoretical results, and practical applications.
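For reference, the ν-SVC primal as standardly stated in the SVM literature replaces the cost parameter C with a parameter ν ∈ (0, 1] (notation here follows the common convention of ℓ training pairs (x_i, y_i); the tutorial's exact presentation may differ):

```latex
\begin{aligned}
\min_{w,\,b,\,\xi,\,\rho}\quad & \tfrac{1}{2}\|w\|^2 \;-\; \nu\rho \;+\; \tfrac{1}{\ell}\sum_{i=1}^{\ell} \xi_i \\
\text{s.t.}\quad & y_i\,\bigl(\langle w, x_i\rangle + b\bigr) \;\ge\; \rho - \xi_i, \\
& \xi_i \ge 0, \qquad \rho \ge 0 .
\end{aligned}
```

The appeal of this parameterization is that ν has a direct interpretation: it is an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors.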
Adaptive model generation: An architecture for the deployment of data mining-based intrusion detection systems
, 2002
Statistical Learning and Kernel Methods in Bioinformatics
 in Artificial Intelligence and Heuristic Methods in Bioinformatics 183, (Eds.) P. Frasconi and R. Shamir, IOS
, 2000
Abstract

Cited by 12 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines, and kernel feature spaces. In addition, we present an overview of applications of kernel methods in bioinformatics.
Entropy Numbers, Operators and Support Vector Kernels
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1998
Abstract

Cited by 11 (3 self)
We derive new bounds for the generalization error of feature space machines, such as support vector machines and related regularization networks, by obtaining new bounds on their covering numbers. The proofs are based on a viewpoint that is apparently novel in the field of statistical learning theory. The hypothesis class is described in terms of a linear operator mapping from a possibly infinite-dimensional unit ball in feature space into a finite-dimensional space. The covering numbers of the class are then determined via the entropy numbers of the operator. These numbers, which characterize the degree of compactness of the operator, can be bounded in terms of the eigenvalues of an integral operator induced by the kernel function used by the machine. As a consequence, we are able to theoretically explain the effect of the choice of kernel functions on the generalization performance of support vector machines.
A short introduction to learning with kernels
 IN ADVANCED LECTURES ON MACHINE LEARNING, S. MENDELSON
, 2002
Abstract

Cited by 10 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines, and kernel feature spaces.
Support Vector Methods in Learning and Feature Extraction
, 1998
Abstract

Cited by 10 (1 self)
Recent years have witnessed an increasing interest in Support Vector (SV) machines, which use Mercer kernels to perform computations efficiently in high-dimensional spaces. In pattern recognition, the SV algorithm constructs nonlinear decision functions by training a classifier to perform a linear separation in some high-dimensional space which is nonlinearly related to input space. Recently, we have developed a technique for Nonlinear Principal Component Analysis (Kernel PCA) based on the same types of kernels. This way, we can, for instance, efficiently extract polynomial features of arbitrary order by computing projections onto principal components in the space of all products of n pixels of images. We explain the idea of Mercer kernels and associated feature spaces, and describe connections to the theory of reproducing kernels and to regularization theory, followed by an overview of the above algorithms employing these kernels.
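The Kernel PCA idea described in this abstract can be sketched in a few lines of NumPy. This is an independent toy implementation, not the authors' code; the degree-2 polynomial kernel and the tiny dataset are arbitrary choices for illustration.

```python
# Toy Kernel PCA sketch: project data onto principal components of the
# feature space induced by the polynomial kernel k(x, y) = (x . y + 1)^degree.
# Illustrative only; kernel choice and data are assumptions.
import numpy as np

def kernel_pca(X, degree=2, n_components=2):
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree             # kernel (Gram) matrix
    # Center the kernel matrix, i.e., center the data in feature space.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues of the symmetric Kc in ascending order.
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Scale eigenvectors so projections carry the standard normalization.
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas                        # projected training points

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Z = kernel_pca(X, degree=2, n_components=2)
```

Note that everything is done through the n×n kernel matrix: the (possibly very high-dimensional) feature space is never materialized, which is the point of the kernel trick.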
Mathematical Programming Approaches To Machine Learning And Data Mining
, 1998
Abstract

Cited by 5 (0 self)
Machine learning problems of supervised classification, unsupervised clustering, and parsimonious approximation are formulated as mathematical programs. The feature selection problem arising in the supervised classification task is effectively addressed by calculating a separating plane that minimizes both the separation error and the number of problem features utilized. The support vector machine approach is formulated using various norms to measure the margin of separation. The clustering problem of assigning m points in n-dimensional real space to k clusters is formulated as minimizing a piecewise-linear concave function over a polyhedral set. This problem is also formulated in a novel fashion by minimizing the sum of squared distances of data points to the nearest cluster planes characterizing the k clusters. The problem of obtaining a parsimonious solution to a linear system whose right-hand-side vector may be corrupted by noise is formulated as minimizing the system residual plus either the number of nonzero elements in the solution vector or the norm of the solution vector. The feature selection problem, the clustering problem, and the parsimonious approximation problem can all be stated as the minimization of a concave function over a polyhedral region and are solved by a theoretically justifiable, fast, and finite successive linearization algorithm. Numerical tests indicate the utility and efficiency of these formulations on real-world databases. In particular, the feature selection approach via concave minimization computes a separating-plane-based classifier that improves upon the generalization ability of a separating plane computed without feature suppression. This approach produces classifiers utilizing fewer original problem features than the support vector machine...
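As a point of reference for the clustering formulation above, the familiar sum-of-squared-distances objective for assigning m points x_1, …, x_m ∈ R^n to k clusters can be written as follows (standard notation, not necessarily the paper's; the paper's polyhedral concave formulation additionally works with cluster planes and 1-norm distances):

```latex
\min_{c_1,\dots,c_k \in \mathbb{R}^n} \;\; \sum_{i=1}^{m} \; \min_{1 \le l \le k} \, \left\| x_i - c_l \right\|_2^2
```

The inner minimum over cluster representatives is what gives the objective its nonconvex, piecewise structure, which the mathematical-programming treatment exploits.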
Structured learning and prediction in computer vision
 In Foundations and Trends in Computer Graphics and Vision
, 2011
Application of data mining techniques for remote sensing image analysis
 in Proc. 4th Int. Conf. Hydroinformatics, 2000, [CD-ROM]
Abstract

Cited by 3 (1 self)
The paper studies the applicability of various data mining techniques on aerial remote sensing imagery for automatic land-cover classification. Four techniques are applied, namely Adaptive Dynamic K-means (ADK), the Self-Organizing Feature Map (SOFM), a machine learning induction algorithm (C4.5), and Support Vector Machines (SVM). Special attention is drawn to the usefulness of these data mining classification techniques for automatic land-cover recognition, that is, for physical interpretation of the classes. A novel, hybrid ADK-SOFM-SVM data mining procedure suitable for automated land-cover cluster analysis is presented.