Results 1–10 of 207
Solving multiclass learning problems via error-correcting output codes
Journal of Artificial Intelligence Research, 1995
"... Multiclass learning problems involve nding a de nition for an unknown function f(x) whose range is a discrete set containing k>2values (i.e., k \classes"). The de nition is acquired by studying collections of training examples of the form hx i;f(x i)i. Existing approaches to multiclass learning ..."
Abstract

Cited by 564 (9 self)
 Add to MetaCart
Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying collections of training examples of the form ⟨x_i, f(x_i)⟩. Existing approaches to multiclass learning problems include direct application of multiclass algorithms such as the decision-tree algorithms C4.5 and CART, application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and application of binary concept learning algorithms with distributed output representations. This paper compares these three approaches to a new technique in which error-correcting codes are employed as a distributed output representation. We show that these output representations improve the generalization performance of both C4.5 and backpropagation on a wide range of multiclass learning tasks. We also demonstrate that this approach is robust with respect to changes in the size of the training sample, the assignment of distributed representations to particular classes, and the application of overfitting avoidance techniques such as decision-tree pruning. Finally, we show that, like the other methods, the error-correcting code technique can provide reliable class probability estimates. Taken together, these results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
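The decoding step of the output-coding technique can be sketched in a few lines. The 4-class, 7-bit codeword matrix and the hand-picked predictions below are illustrative only, not the paper's constructions (the authors build codes with guaranteed row and column separation):

```python
# Sketch of error-correcting output code (ECOC) decoding.
# One codeword (row) per class; each column defines one binary subproblem
# that a separate binary learner would be trained on.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

CODEWORDS = {
    0: (0, 0, 0, 0, 0, 0, 0),
    1: (0, 1, 1, 1, 1, 0, 0),
    2: (1, 0, 1, 1, 0, 1, 0),
    3: (1, 1, 0, 1, 0, 0, 1),
}

def decode(bit_predictions):
    """Map the 7 binary classifiers' outputs to the nearest codeword's class."""
    return min(CODEWORDS, key=lambda c: hamming(CODEWORDS[c], bit_predictions))

# Even with two of seven bit-learners wrong, the right class is recovered,
# because the codewords are at least Hamming distance 4 apart.
noisy = (0, 1, 1, 0, 1, 0, 1)   # class 1's codeword with bits 3 and 6 flipped
print(decode(noisy))            # -> 1
```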
A Pairwise Key Pre-Distribution Scheme for Wireless Sensor Networks
2003
"... this paper, we provide a framework in which to study the security of key predistribution schemes, propose a new key predistribution scheme which substantially improves the resilience of the network compared to previous schemes, and give an indepth analysis of our scheme in terms of network resili ..."
Abstract

Cited by 373 (13 self)
 Add to MetaCart
In this paper, we provide a framework in which to study the security of key pre-distribution schemes, propose a new key pre-distribution scheme which substantially improves the resilience of the network compared to previous schemes, and give an in-depth analysis of our scheme in terms of network resilience and associated overhead. Our scheme exhibits a nice threshold property: when the number of compromised nodes is less than the threshold, the probability that communications between any additional nodes are compromised is close to zero. This desirable property lowers the initial payoff of smaller-scale network breaches to an adversary, and makes it necessary for the adversary to attack a large fraction of the network before it can achieve any significant gain.
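As context, the baseline random key-pool pre-distribution idea (Eschenauer-Gligor style) that such schemes are compared against can be simulated directly; this is not the paper's own construction, and the pool and ring sizes are illustrative:

```python
import random

# Toy simulation of random key-pool pre-distribution: each sensor node is
# pre-loaded with a random key ring drawn from a common pool, and two
# neighbours can establish a secure link iff their rings share a key.
POOL_SIZE, RING_SIZE, TRIALS = 10_000, 75, 2_000
rng = random.Random(0)

def key_ring():
    return set(rng.sample(range(POOL_SIZE), RING_SIZE))

hits = sum(1 for _ in range(TRIALS) if key_ring() & key_ring())
print(f"empirical link probability: {hits / TRIALS:.2f}")  # ~0.43 analytically
```

The point of the simulation is the trade-off the paper improves on: with small key rings, connectivity is probabilistic, and every pool key a captured node reveals weakens links between other, uncaptured nodes.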
Understanding Fault-Tolerant Distributed Systems
Communications of the ACM, 1993
"... We propose a small number of basic concepts that can be used to explain the architecture of faulttolerant distributed systems and we discuss a list of architectural issues that we find useful to consider when designing or examining such systems. For each issue we present known solutions and design ..."
Abstract

Cited by 311 (23 self)
 Add to MetaCart
We propose a small number of basic concepts that can be used to explain the architecture of fault-tolerant distributed systems, and we discuss a list of architectural issues that we find useful to consider when designing or examining such systems. For each issue we present known solutions and design alternatives, discuss their relative merits, and give examples of systems which adopt one approach or the other. The aim is to introduce some order into the complex discipline of designing and understanding fault-tolerant distributed systems.

1 Introduction

Computing systems consist of a multitude of hardware and software components that are bound to fail eventually. In many systems, such component failures can lead to unanticipated, potentially disruptive failure behavior and to service unavailability. Some systems are designed to be fault-tolerant: they either exhibit a well-defined failure behavior when components fail or mask component failures to users, that is, continue to provid...
Checking Computations in Polylogarithmic Time
1991
"... . Motivated by Manuel Blum's concept of instance checking, we consider new, very fast and generic mechanisms of checking computations. Our results exploit recent advances in interactive proof protocols [LFKN92], [Sha92], and especially the MIP = NEXP protocol from [BFL91]. We show that every nondete ..."
Abstract

Cited by 261 (10 self)
 Add to MetaCart
Motivated by Manuel Blum's concept of instance checking, we consider new, very fast and generic mechanisms of checking computations. Our results exploit recent advances in interactive proof protocols [LFKN92], [Sha92], and especially the MIP = NEXP protocol from [BFL91]. We show that every nondeterministic computational task S(x, y), defined as a polynomial-time relation between the instance x, representing the input and output combined, and the witness y, can be modified to a task S′ such that: (i) the same instances remain accepted; (ii) each instance/witness pair becomes checkable in polylogarithmic Monte Carlo time; and (iii) a witness satisfying S′ can be computed in polynomial time from a witness satisfying S. Here the instance and the description of S have to be provided in error-correcting code (since the checker will not notice slight changes). A modification of the MIP proof was required to achieve polynomial time in (iii); the earlier technique yields N^{O(log log N)}...
LogTM: Log-based transactional memory
In HPCA, 2006
"... Transactional memory (TM) simplifies parallel programming by guaranteeing that transactions appear to execute atomically and in isolation. Implementing these properties includes providing data version management for the simultaneous storage of both new (visible if the transaction commits) and old (r ..."
Abstract

Cited by 207 (10 self)
 Add to MetaCart
Transactional memory (TM) simplifies parallel programming by guaranteeing that transactions appear to execute atomically and in isolation. Implementing these properties includes providing data version management for the simultaneous storage of both new (visible if the transaction commits) and old (retained if the transaction aborts) values. Most (hardware) TM systems leave old values "in place" (the target memory address) and buffer new values elsewhere until commit. This makes aborts fast, but penalizes (the much more frequent) commits. In this paper, we present a new implementation of transactional memory, Log-based Transactional Memory (LogTM), that makes commits fast by storing old values to a per-thread log in cacheable virtual memory and storing new values in place. LogTM makes two additional contributions. First, LogTM extends a MOESI directory protocol to enable both fast conflict detection on evicted blocks and fast commit (using lazy cleanup). Second, LogTM handles aborts in (library) software with little performance penalty. Evaluations running micro- and SPLASH-2 benchmarks on a 32-way multiprocessor support our decision to optimize for commit by showing that only 1–2% of transactions abort.
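The version-management policy described above (new values in place, old values in a per-thread undo log) can be sketched as follows. The class and method names are illustrative; real LogTM does this in hardware, per cache block:

```python
# Sketch of LogTM-style eager version management. Commit just discards the
# undo log (the fast, common case); abort walks the log backwards to
# restore memory (slower, but aborts are rare).

class TxMemory:
    def __init__(self):
        self.mem = {}
        self.log = []          # per-thread undo log of (addr, old_value)

    def tx_write(self, addr, value):
        self.log.append((addr, self.mem.get(addr)))  # save old value first
        self.mem[addr] = value                       # new value in place

    def commit(self):
        self.log.clear()       # fast path: nothing to copy back

    def abort(self):
        for addr, old in reversed(self.log):         # undo in reverse order
            if old is None:
                self.mem.pop(addr, None)
            else:
                self.mem[addr] = old
        self.log.clear()

m = TxMemory()
m.mem["x"] = 1
m.tx_write("x", 2)
m.abort()
print(m.mem["x"])   # -> 1 (old value restored)
m.tx_write("x", 3)
m.commit()
print(m.mem["x"])   # -> 3 (new value was already in place)
```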
Entropy and Information Theory
1990
"... Contents Prologue xi 1 Information Sources 1 1.1 Introduction .............................. 1 1.2 Probability Spaces and Random Variables ............. 1 1.3 Random Processes and Dynamical Systems ............ 5 1.4 Distributions ............................. 6 1.5 Standard Alphabets .............. ..."
Abstract

Cited by 201 (4 self)
 Add to MetaCart
Contents:
Prologue
1 Information Sources
   1.1 Introduction
   1.2 Probability Spaces and Random Variables
   1.3 Random Processes and Dynamical Systems
   1.4 Distributions
   1.5 Standard Alphabets
   1.6 Expectation
   1.7 Asymptotic Mean Stationarity
   1.8 Ergodic Properties
2 Entropy and Information
   2.1 Introduction
   2.2 Entropy and Entropy Rate
   2.3 Basic Properties of Entropy
   2.4 Entropy Rate
   2.5 Conditional Entropy and Information
   2.6 Entropy Rate Revisited
   2.7 Relative Entropy Densities
3 The Entropy Ergodic Theorem
   3.1 Introduction ...
Biometric Cryptosystems: Issues and Challenges
Proceedings of the IEEE, 2004
"... this paper, we present various methods that monolithically bind a cryptographic key with the biometric template of a user stored in the database in such a way that the key cannot be revealed without a successful biometric authentication. We assess the performance of one of these biometric key bindin ..."
Abstract

Cited by 103 (7 self)
 Add to MetaCart
In this paper, we present various methods that monolithically bind a cryptographic key with the biometric template of a user stored in the database in such a way that the key cannot be revealed without a successful biometric authentication. We assess the performance of one of these biometric key binding/generation algorithms using the fingerprint biometric. We illustrate the challenges involved in biometric key generation, primarily due to drastic acquisition variations in the representation of a biometric identifier and the imperfect nature of biometric feature extraction and matching algorithms. We elaborate on the suitability of these algorithms for digital rights management systems.
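One classic key-binding construction in this family is the fuzzy commitment of Juels and Wattenberg; the toy below uses a 3x repetition code and hand-made bit strings, not the paper's fingerprint-based algorithms:

```python
# Toy fuzzy-commitment key binding: the stored helper data hides the key
# behind the enrolled template, and an error-correcting code (here a weak
# 3x repetition code) absorbs small biometric noise at release time.

def encode(bits):                       # repetition-3 encoder
    return [b for b in bits for _ in range(3)]

def decode(bits):                       # majority vote per 3-bit group
    return [1 if sum(bits[i:i+3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key      = [1, 0, 1, 1]
template = [0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0]   # enrolled biometric bits
helper   = xor(encode(key), template)             # stored; reveals neither part

# A fresh reading differs from the enrolled template in two positions.
query = template[:]
query[2] ^= 1
query[7] ^= 1

recovered = decode(xor(helper, query))   # = decode(encode(key) XOR noise)
print(recovered == key)                  # -> True
```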
A new algorithm for finding minimum-weight words in a linear code: application to primitive narrow-sense BCH codes of length 511
1998
"... : An algorithm for finding smallweight words in large linear codes is developed. It is in particular able to decode random [512,256,57]linear codes in 9 hours on a DEC alpha computer. We determine with it the minimum distance of some binary BCH codes of length 511, which were not known. Keywords ..."
Abstract

Cited by 85 (2 self)
 Add to MetaCart
An algorithm for finding small-weight words in large linear codes is developed. It is in particular able to decode random [512,256,57] linear codes in 9 hours on a DEC Alpha computer. We determine with it the minimum distance of some binary BCH codes of length 511, which were not previously known.

Keywords: error-correcting codes, decoding algorithm, minimum weight, random linear codes, BCH codes.

Submitted to IEEE Transactions on Information Theory.
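At small lengths the minimum weight can simply be enumerated, which makes clear why length 511 demands a cleverer approach; the brute force below (on the [7,4] Hamming code) illustrates the problem being solved, not the paper's probabilistic algorithm:

```python
from itertools import product

# Brute-force minimum weight of a small binary linear code from its
# generator matrix: enumerate all 2^k - 1 nonzero codewords. Infeasible
# for the [512,256] codes the paper targets (2^256 codewords).

G = [  # generator matrix of the [7,4] Hamming code
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def codeword(msg):
    # msg . G over GF(2): one parity sum per column of G
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

min_weight = min(
    sum(codeword(msg))
    for msg in product([0, 1], repeat=len(G))
    if any(msg)
)
print(min_weight)   # -> 3
```

For a linear code the minimum distance equals the minimum nonzero codeword weight, which is why "finding minimum-weight words" settles the distance question.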
Self-testing/correcting for polynomials and for approximate functions
In Proceedings of the 23rd Annual Symposium on Theory of Computing (STOC), 1991
"... The study of selftesting/correcting programs was introduced in [8] in order to allow one to use program P to compute function f without trusting that P works correctly. A selftester for f estimates the fraction of x for which P (x) = f(x); and a selfcorrector for f takes a program that is correc ..."
Abstract

Cited by 81 (15 self)
 Add to MetaCart
The study of self-testing/correcting programs was introduced in [8] in order to allow one to use a program P to compute a function f without trusting that P works correctly. A self-tester for f estimates the fraction of x for which P(x) = f(x); and a self-corrector for f takes a program that is correct on most inputs and turns it into a program that is correct on every input with high probability. Both access P only as a black box and in some precise way are not allowed to compute the function f. Self-correcting is usually easy when the function has the random self-reducibility property. One class of functions with this property is the class of multivariate polynomials over finite fields [4], [12]. We extend this result in two directions. First, we show that polynomials are randomly self-reducible over more general domains: specifically, over the rationals and over noncommutative rings. Second, we show that one can get self-correctors even when the program satisfies weaker conditions, i.e., when the program has more errors, or when the program behaves in a more adversarial manner by changing the function it computes between successive calls. Self-testing is a much harder task. Previously it was known how to self-test for only a few special examples of functions, such as the class of linear functions. We show that one can self-test the whole class of polynomial functions over Z_p for prime p.
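For the linear case mentioned above, random self-reducibility makes self-correction only a few lines: since f(x) = f(x + r) - f(r) for a linear f over Z_p, a majority vote over random shifts r converts a mostly-correct program into an almost-always-correct one. The buggy program and parameters below are invented for illustration:

```python
import random

# Self-correction of a linear function f(x) = a*x mod p, using an
# untrusted black-box program P that is wrong on ~5% of inputs.

p, a = 101, 7
rng = random.Random(1)
BAD = set(rng.sample(range(p), 5))           # inputs where P misbehaves

def P(x):                                    # the untrusted program
    return (a * x + 1) % p if x in BAD else (a * x) % p

def self_correct(x, trials=15):
    votes = {}
    for _ in range(trials):
        r = rng.randrange(p)
        v = (P((x + r) % p) - P(r)) % p      # correct unless a call hits BAD
        votes[v] = votes.get(v, 0) + 1
    return max(votes, key=votes.get)         # plurality answer

x = next(iter(BAD))                          # an input where P itself is wrong
print(P(x) == (a * x) % p)                   # -> False
print(self_correct(x) == (a * x) % p)        # correct with high probability
```

Note the corrector only calls P on uniformly random points, exactly the black-box access the abstract describes.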
Good Codes based on Very Sparse Matrices
In Cryptography and Coding: 5th IMA Conference, number 1025 in Lecture Notes in Computer Science, 1995
"... . We present a new family of errorcorrecting codes for the binary symmetric channel. These codes are designed to encode a sparse source, and are defined in terms of very sparse invertible matrices, in such a way that the decoder can treat the signal and the noise symmetrically. The decoding proble ..."
Abstract

Cited by 80 (11 self)
 Add to MetaCart
We present a new family of error-correcting codes for the binary symmetric channel. These codes are designed to encode a sparse source, and are defined in terms of very sparse invertible matrices, in such a way that the decoder can treat the signal and the noise symmetrically. The decoding problem involves only very sparse matrices and sparse vectors, and so is a promising candidate for practical decoding. It can be proved that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. We give experimental results using a free-energy minimization algorithm and a belief propagation algorithm for decoding, demonstrating practical performance superior to that of both Bose-Chaudhuri-Hocquenghem codes and Reed-Muller codes over a wide range of noise levels. We regret that lack of space prevents presentation of all our theoretical and experimental results. The full text of this paper may be found elsewher...