
## Using Machine Learning for Operational Decisions in Adversarial Environments

Citations: 1 (0 self)

### Citations

744 | Statistical comparisons of classifiers over multiple data sets.
- Demsar
- 2006
Citation Context: ...) V(x) = 10, c = 0.3. We performed a statistical comparison between our approach and a corresponding classifier p(x) on which it is based using Friedman's test with the post-hoc Bonferroni correction [12]. For all classifier pairs of the form {C, E[OPT(C)]} with C ∈ {Naïve Bayes, Bruckner, Dalvi} and for c ∈ {0.1, 0.3}, V(x) ∈ {1, 2, 10}, our approach is statistically better than the alternative at t...
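The comparison methodology in the context above (Friedman's test followed by a post-hoc correction, per Demsar) can be sketched as follows. The accuracy numbers are illustrative, not taken from the paper; with k = 3 classifiers the statistic is approximately chi-square with 2 degrees of freedom, whose survival function is exactly exp(-x/2):

```python
import math

# Hedged sketch with illustrative accuracies (not the paper's numbers):
# Friedman's test ranks k classifiers on each of N data sets and asks
# whether the mean ranks differ (Demsar, 2006).
acc = [  # rows: data sets, columns: classifiers
    [0.91, 0.93, 0.95],
    [0.88, 0.90, 0.94],
    [0.85, 0.89, 0.92],
    [0.90, 0.91, 0.96],
    [0.87, 0.88, 0.93],
]

def friedman_statistic(scores):
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        # rank within the row: 1 = lowest score ... k = highest (no ties here)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

chi2 = friedman_statistic(acc)
# For k = 3 (df = 2) the chi-square survival function is exactly exp(-x/2).
p = math.exp(-chi2 / 2)
print(f"Friedman chi-square = {chi2:.2f}, p = {p:.4f}")
# When p < alpha, reject the null that all classifiers are equivalent and
# follow up with a post-hoc test (e.g. Bonferroni-corrected pairwise tests).
```

Here every data set ranks the three classifiers identically, so the statistic reaches its maximum for N = 5 and the null of equivalent classifiers is rejected at alpha = 0.05.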

224 | On comparing classifiers: Pitfalls to avoid and a recommended approach.
- Salzberg
- 1997
Citation Context: ...our approach with respect to the corresponding classifier. We use the post-hoc Bonferroni test, which does not alter α as α/(k−1) = α when k = 2, as in all of our comparisons. As detailed by Salzberg [13], the feature criteria were chosen to optimize the performance of Naïve Bayes on the TREC 2005 spam corpus. Feature vectors were generated from the raw emails, and the same criteria were used for eac...

141 | Adversarial classification.
- Dalvi, Domingos, et al.
- 2004
Citation Context: ...the number of feature values to alter in order to bypass defensive activities has this characteristic, as do models which use a regularization term to reduce the scale of attack manipulation of data [11, 2, 4, 6, 7]. 3 The problem of adversarial tampering of such training data is outside the scope of our work, and can be viewed as an extension of our setup. Suppose that if an attack x succeeds, the attacker gai...

82 | Outside the Closed World: On Using Machine Learning for Network Intrusion Detection.
- Sommer, Paxson
- 2010
Citation Context: ...e reasonable to assume that the attackers are interested in false negatives (i.e., not being detected); 3) methods to date fail to account for operational constraints, making them largely impractical [8]; 4) current methods make strong restrictions on transformation of test data by attackers (e.g., a common assumption is a linear transformation); and 5) current methods are almost universally restric...

81 | Nightmare at test time: Robust learning by feature deletion.
- Globerson, Roweis
- 2006
Citation Context: ...e response to their activities while achieving some malicious end. The issue of learning in adversarial environments has been addressed from a variety of angles, such as robustness to data corruption [1], analysis of the problem of manipulating a learning algorithm [2, 3], and design of learning algorithms in adversarial settings [4–7]. The approaches aspiring to adjust learning algorithms to cope wi...

81 | Playing games for security: An efficient exact algorithm for solving Bayesian Stackelberg games.
- Paruchuri, Pearce, et al.
- 2008
Citation Context: ...transformation); and 5) current methods are almost universally restricted to deterministic decisions for the learner, and cannot take advantage of the power of randomization in adversarial settings [9]. Most of these limitations are due to the fact that past approaches attempt to modify a learning algorithm to account for adversarial behavior. In contrast, the approach we take separates the problem...

79 | Adversarial learning.
- Lowd, Meek
- 2005
Citation Context: ...The issue of learning in adversarial environments has been addressed from a variety of angles, such as robustness to data corruption [1], analysis of the problem of manipulating a learning algorithm [2, 3], and design of learning algorithms in adversarial settings [4–7]. The approaches aspiring to adjust learning algorithms to cope with adversarial response suffer from the following five problems: 1) t...

52 | ifile: An Application of Machine Learning to E-Mail Filtering.
- Rennie
- 2000
Citation Context: ...eeds the threshold from Proposition 1). This policy is optimal when there are only stationary attackers, as we showed in Proposition 2. We call this "Naïve Ranking". We used the ifile tool by Rennie [14] to select tokens for the feature vectors. Many of the desirable tokens for the TREC 2005 corpus are specific to the company where the emails were collected. Since our experiments evaluate performance...

19 | Adversarial machine learning.
- Huang, Joseph, et al.
- 2011
Citation Context: ...The issue of learning in adversarial environments has been addressed from a variety of angles, such as robustness to data corruption [1], analysis of the problem of manipulating a learning algorithm [2, 3], and design of learning algorithms in adversarial settings [4–7]. The approaches aspiring to adjust learning algorithms to cope with adversarial response suffer from the following five problems: 1) t...

16 | Stackelberg games for adversarial prediction problems.
- Brückner, Scheffer
- 2011
Citation Context: ...the number of feature values to alter in order to bypass defensive activities has this characteristic, as do models which use a regularization term to reduce the scale of attack manipulation of data [11, 2, 4, 6, 7]. 3 The problem of adversarial tampering of such training data is outside the scope of our work, and can be viewed as an extension of our setup. Suppose that if an attack x succeeds, the attacker gai...

14 | Nash equilibria of static prediction games.
- Brückner, Scheffer
Citation Context: ...the number of feature values to alter in order to bypass defensive activities has this characteristic, as do models which use a regularization term to reduce the scale of attack manipulation of data [11, 2, 4, 6, 7]. 3 The problem of adversarial tampering of such training data is outside the scope of our work, and can be viewed as an extension of our setup. Suppose that if an attack x succeeds, the attacker gai...

9 | Mining Adversarial Patterns via Regularized Loss Minimization.
- Liu, Chawla
Citation Context: ...(x) (note that we augment the utility functions to depend on input vector x). The odds-ratio test used by Dalvi et al. therefore checks

p(x) / (1 − p(x)) ≥ (U_C(−,−) − U_C(+,−)) / (U_C(+,+) − U_C(−,+)) = G(x) / V(x),   (5)

and it is easy to verify that inequality (5) is equivalent to the threshold test in Proposition 1. Consider now a more general setting where P_A = 0, but now with a budget constraint. In this context, w...
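The odds-ratio test in inequality 5 can be sketched as a simple threshold check. The function name and the example numbers are ours, not the paper's; p_x stands for the classifier's probability p(x) that instance x is malicious, gain for G(x), and value for V(x):

```python
# Hedged sketch of the odds-ratio test in inequality (5): flag x as an
# attack when p(x)/(1 - p(x)) >= G(x)/V(x). Names are illustrative.
def flag_as_attack(p_x: float, gain: float, value: float) -> bool:
    """Return True when the odds ratio meets the G(x)/V(x) threshold."""
    # Cross-multiplied form of p(x)/(1 - p(x)) >= G(x)/V(x), which
    # avoids dividing by zero when p(x) == 1.
    return p_x * value >= gain * (1.0 - p_x)

# Example: with G(x) = V(x) = 10 the threshold is odds ratio >= 1,
# i.e. flag whenever p(x) >= 0.5.
print(flag_as_attack(0.6, 10.0, 10.0))  # True
print(flag_as_attack(0.4, 10.0, 10.0))  # False
```

Written this way, the equivalence to a threshold test on p(x) (as in Proposition 1) is immediate: for fixed G(x) and V(x), the condition is monotone in p(x).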

4 | Predictive defense against evolving adversaries.
- Colbaugh, Glass
- 2012

1 | Cyberweapons, cyber wars: Is there too much of it in the air?
- Filshtinskiy
Citation Context: ...ational policy q(·) is known to attackers reflects threats that have significant time and/or resources to probe and respond to defensive measures, a feature characteristic of advanced cyber criminals [10]. We view the data set I of labeled malware instances as representing revealed preferences of a sample of attackers, that is, their preference for input vectors x (if an attacker preferred another inp...