Results 1–10 of 13
SIA: Secure Information Aggregation in Sensor Networks
, 2003
"... Sensor networks promise viable solutions to many monitoring problems. However, the practical deployment of sensor networks faces many challenges imposed by realworld demands. Sensor nodes often have limited computation and communication resources and battery power. Moreover, in many applications se ..."
Abstract

Cited by 203 (12 self)
Sensor networks promise viable solutions to many monitoring problems. However, the practical deployment of sensor networks faces many challenges imposed by real-world demands. Sensor nodes often have limited computation and communication resources and battery power. Moreover, in many applications sensors are deployed in open environments and hence are vulnerable to physical attacks, potentially compromising a sensor's cryptographic keys. One of the basic and indispensable functionalities of sensor networks is the ability to answer queries over the data acquired by the sensors. The resource constraints and security issues make designing mechanisms for information aggregation in large sensor networks particularly challenging.
Entropy waves, the zig-zag graph product, and new constant-degree expanders
, 2002
"... ..."
(Show Context)
On Constructing Locally Computable Extractors and Cryptosystems In The Bounded Storage Model
 Journal of Cryptology
, 2002
"... We consider the problem of constructing randomness extractors which are locally computable, i.e. only read a small number of bits from their input. As recently shown by Lu (CRYPTO `02 ), locally computable extractors directly yield secure privatekey cryptosystems in Maurer's bounded storage ..."
Abstract

Cited by 72 (7 self)
We consider the problem of constructing randomness extractors which are locally computable, i.e., read only a small number of bits from their input. As recently shown by Lu (CRYPTO '02), locally computable extractors directly yield secure private-key cryptosystems in Maurer's bounded storage model (J. Cryptology, 1992).
Active learning in the non-realizable case
 NIPS Workshop on Foundations of Active Learning
, 2006
"... Abstract. Most of the existing active learning algorithms are based on the realizability assumption: The learner’s hypothesis class is assumed to contain a target function that perfectly classifies all training and test examples. This assumption can hardly ever be justified in practice. In this pape ..."
Abstract

Cited by 39 (0 self)
Most of the existing active learning algorithms are based on the realizability assumption: the learner's hypothesis class is assumed to contain a target function that perfectly classifies all training and test examples. This assumption can hardly ever be justified in practice. In this paper, we study how relaxing the realizability assumption affects the sample complexity of active learning. First, we extend existing results on query learning to show that any active learning algorithm for the realizable case can be transformed to tolerate random bounded-rate class noise. Thus, bounded-rate class noise adds little extra complication to active learning, and in particular exponential label complexity savings over passive learning are still possible. However, it is questionable whether this noise model is any more realistic in practice than assuming no noise at all. Our second result shows that if we move to the truly non-realizable model of statistical learning theory, then the label complexity of active learning has the same dependence Ω(1/ε²) on the accuracy parameter ε as the passive learning label complexity. More specifically, we show that under the assumption that the best classifier in the learner's hypothesis class has generalization error at most β > 0, the label complexity of active learning is Ω(β²/ε² log(1/δ)), where the accuracy parameter ε measures how close to optimal within the hypothesis class the active learner has to get and δ is the confidence parameter. The implication of this lower bound is that exponential savings should not be expected in realistic models of active learning, and thus the label complexity goals in active learning should be refined.
More Efficient PAC-learning of DNF with Membership Queries Under the Uniform Distribution
, 1999
"... An efficient algorithm exists for learning disjunctive normal form (DNF) expressions in the uniformdistribution PAC learning model with membership queries [15], but in practice the algorithm can only be applied to small problems. We present several modications to the algorithm that substantially im ..."
Abstract

Cited by 38 (2 self)
An efficient algorithm exists for learning disjunctive normal form (DNF) expressions in the uniform-distribution PAC learning model with membership queries [15], but in practice the algorithm can only be applied to small problems. We present several modifications to the algorithm that substantially improve its asymptotic efficiency. First, we show how to significantly improve the time and sample complexity of a key subprogram, resulting in similar improvements in the bounds on the overall DNF algorithm. We also apply known methods to convert the resulting algorithm to an attribute-efficient algorithm. Furthermore, we develop techniques for lower bounding the sample size required for PAC learning with membership queries under a fixed distribution, and apply this technique to the uniform-distribution DNF learning problem. Finally, we present a learning algorithm for DNF that is attribute efficient in its use of random bits.
SIA: secure information aggregation in sensor networks
 Proc. of ACM SenSys 2003
, 2003
"... ..."
(Show Context)
Towards a Theory of Variable Privacy
, 2003
"... We define "variable privacy" as the use of nonperfect protocols with parameters controlled by Alice. Variable privacy enables Alice to choose the amount of information leaked to Bob, in situations where information revelation bears a privacy cost and also provides a benefit. We propose ..."
Abstract

Cited by 2 (0 self)
We define "variable privacy" as the use of nonperfect protocols with parameters controlled by Alice. Variable privacy enables Alice to choose the amount of information leaked to Bob, in situations where information revelation bears a privacy cost and also provides a benefit. We propose a framework for the study of variable privacy, using a security perspective to obtain a privacy measure of the binary symmetric randomization protocol (flipping a bit with probability 1#). We define an attack as any sequence of protocol instances that decreases estimation error on a bit beyond that possible with a single instance. Viewing the protocol as a communication channel for the data to be protected, we show that channel codes  i.e. errorcorrecting and errordetecting codes  are particularly e#cient attacks. In particular, they can be more e#cient than the repeated query attack.
Non-Interactive Proofs of Proximity
, 2013
"... We initiate a study of noninteractive proofs of proximity. These proofsystems consist of a verifier that wishes to ascertain the validity of a given statement, using a short (sublinear length) explicitly given proof, and a sublinear number of queries to its input. Since the verifier cannot even re ..."
Abstract

Cited by 1 (0 self)
We initiate a study of non-interactive proofs of proximity. These proof systems consist of a verifier that wishes to ascertain the validity of a given statement, using a short (sublinear-length) explicitly given proof and a sublinear number of queries to its input. Since the verifier cannot even read the entire input, we only require it to reject inputs that are far from being valid. Thus, the verifier is only assured of the proximity of the statement to a correct one. Such proof systems can be viewed as the NP (or, more accurately, MA) analogue of property testing. We explore both the power and limitations of non-interactive proofs of proximity. We show that such proof systems can be exponentially stronger than property testers, but are exponentially weaker than the interactive proofs of proximity studied by Rothblum, Vadhan and Wigderson (STOC 2013). In addition, we show a natural problem that has a full and (almost) tight multiplicative trade-off between the length of the proof and the verifier's query complexity. On the negative side, we also show that there exist properties for which even a linearly long (non-interactive) proof of proximity cannot significantly reduce the query complexity.
More Efficient PAC-learning of DNF with Membership Queries Under the Uniform Distribution, Nader H. Bshouty, Technion
"... 2 1 Introduction Jackson [15] gave the first polynomialtime PAC learning algorithm for DNF with membership queries under the uniform distribution. However, the algorithm's time and sample complexity make it impractical for all but relatively small problems. The algorithm is also not particular ..."
Abstract
 Add to MetaCart
Jackson [15] gave the first polynomial-time PAC learning algorithm for DNF with membership queries under the uniform distribution. However, the algorithm's time and sample complexity make it impractical for all but relatively small problems. The algorithm is also not particularly efficient in its use of random bits.
ALGORITHMS FOR BAYESIAN NETWORKS
, 2001
"... Graphical models are increasingly popular tools for modeling problems involving uncertainty. They deal with uncertainty by modeling and reasoning about degrees of uncertainty explicitly based on probability theory. Practical models based on graphical models often reach the size of hundreds of varia ..."
Abstract
 Add to MetaCart
Graphical models are increasingly popular tools for modeling problems involving uncertainty. They deal with uncertainty by modeling and reasoning about degrees of uncertainty explicitly, based on probability theory. Practical models based on graphical models often reach the size of hundreds of variables. Although a number of ingenious inference algorithms have been developed, the problem of exact belief updating in graphical models is NP-hard. Approximate inference schemes may often be the only feasible alternative for large and complex models. The family of stochastic sampling algorithms is a promising subclass of approximate algorithms. However, previous stochastic sampling algorithms cannot converge to reasonable estimates of the posterior probabilities within a reasonable amount of time when the evidence is very unlikely, so their results cannot be used in such cases. This thesis addresses this problem by proposing new sampling algorithms for approximate inference. First, an adaptive importance sampling algorithm for Bayesian networks, AIS-BN, was developed. It shows promising convergence rates even under extreme conditions and appears to outperform the existing sampling algorithms consistently.
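AIS-BN itself is beyond a few lines, but the family of stochastic sampling algorithms it improves on can be illustrated with plain likelihood weighting on a toy two-node network. The network structure and probabilities below are invented for illustration and are not from the thesis:

```python
import random

# Toy network: Rain -> WetGrass (probabilities are illustrative).
P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.1}   # P(WetGrass=true | Rain)

def likelihood_weighting(n_samples, seed=0):
    """Estimate P(Rain=true | WetGrass=true) by sampling the
    non-evidence node and weighting each sample by the likelihood
    of the observed evidence."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        rain = rng.random() < P_RAIN       # sample Rain from its prior
        weight = P_WET_GIVEN[rain]         # likelihood of the evidence
        num += weight * rain
        den += weight
    return num / den

posterior = likelihood_weighting(100_000)  # exact answer is 0.18/0.26
```

When the evidence is very unlikely, almost all samples receive near-zero weight and the estimate converges slowly; this is exactly the failure mode the abstract describes, and the one adaptive importance sampling addresses by steering the sampling distribution toward the evidence.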