Results 1 - 10 of 131,498
Phylogenetic identification and in situ detection of individual microbial cells without cultivation
- Microbiol. Rev.
, 1995
"... cultivation.of individual microbial cells without Phylogenetic identification and in situ detection ..."
Cited by 1070 (29 self)
Text Chunking using Transformation-Based Learning
, 1995
"... Eric Brill introduced transformation-based learning and showed that it can do part-ofspeech tagging with fairly high accuracy. The same method can be applied at a higher level of textual interpretation for locating chunks in the tagged text, including non-recursive "baseNP" chunks. For ..."
Cited by 509 (0 self)
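The snippet above names transformation-based learning but shows none of its mechanics, so here is a toy Python sketch of the idea: start from a per-POS baseline chunk tagging, then greedily learn rewrite rules that reduce errors against the gold tags. The miniature data set, the single rule template, and the I/O tag scheme are invented for illustration and are not the paper's actual setup.

```python
# Toy transformation-based learning (Brill-style) for baseNP chunking.
# "I" = inside a baseNP, "O" = outside. Data and template are illustrative only.
from collections import Counter, defaultdict

pos  = ["DT", "NN", "VBZ", "VBG", "DT", "JJ", "NN", "IN", "VBG", "NN"]
gold = ["I",  "I",  "O",   "O",   "I",  "I",  "I",  "O",  "I",   "I"]

# 1. Baseline: tag each token with the chunk tag most often seen for its POS.
by_pos = defaultdict(Counter)
for p, g in zip(pos, gold):
    by_pos[p][g] += 1
baseline = {p: c.most_common(1)[0][0] for p, c in by_pos.items()}
tags = [baseline[p] for p in pos]

# 2. One rule template: "change tag from A to B when the previous POS is P".
def apply_rule(tags, rule):
    a, b, prev_pos = rule
    out = list(tags)
    for i in range(1, len(tags)):
        if tags[i] == a and pos[i - 1] == prev_pos:
            out[i] = b
    return out

def errors(tags):
    return sum(t != g for t, g in zip(tags, gold))

# 3. Greedy learning: repeatedly keep the rule with the largest error reduction.
learned = []
while True:
    candidates = [(a, b, p) for a in "IO" for b in "IO" if a != b for p in set(pos)]
    best = min(candidates, key=lambda r: errors(apply_rule(tags, r)))
    if errors(apply_rule(tags, best)) >= errors(tags):
        break
    tags = apply_rule(tags, best)
    learned.append(best)

print("learned rules:", learned, "| remaining errors:", errors(tags))
```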
Stable isotope labeling by amino acids in cell culture, SILAC, as a simple and accurate approach to expression proteomics
- Mol. Cell. Proteomics
, 2002
"... The abbreviations used are: SILAC: Stable isotope labeling by amino acids in cell culture, 2DE: two dimensional (isoelectric focusing/SDS-PAGE) gel electrophoresis: ICATTM: isotope-coded affinity tag; MS: mass spectrometry; MALDI-TOF: matrix assisted laser desorption ionization-time of flight; PMF: ..."
Abstract
-
Cited by 581 (23 self)
- Add to MetaCart
: peptide mass fingerprinting; LC-MS: liquid chromatography-MS; Copyright 2002 by The American Society for Biochemistry and Molecular Biology, Inc. Quantitative proteomics has traditionally been performed by 2D gel electrophoresis but recently, mass spectrometric methods based on stable isotope quantitation
High confidence visual recognition of persons by a test of statistical independence
- IEEE Trans. on Pattern Analysis and Machine Intelligence
, 1993
"... Abstruct- A method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a samp ..."
Cited by 596 (8 self)
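As a rough illustration of the decision rule this abstract sketches (not the paper's implementation), the following Python fragment compares two binary iris codes by normalized Hamming distance and declares a match when the codes agree far more often than statistically independent bit patterns would. The code length, noise level, and 0.32 threshold are assumptions made here for the example.

```python
# Recognition as a failed test of statistical independence: codes from different
# eyes agree on ~50% of bits, codes from the same eye agree on far more.
import random

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length bit lists."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b)) / len(code_a)

def same_eye(code_a, code_b, threshold=0.32):
    """Match if the distance is too small to come from independent patterns."""
    return hamming_distance(code_a, code_b) < threshold

random.seed(0)
enrolled = [random.randint(0, 1) for _ in range(2048)]
# A new sample of the same eye: the enrolled bits with ~10% measurement noise.
same = [b ^ (random.random() < 0.10) for b in enrolled]
# An unrelated eye: independent random bits, agreeing about half the time.
other = [random.randint(0, 1) for _ in range(2048)]

print(same_eye(enrolled, same))   # expected True
print(same_eye(enrolled, other))  # expected False
```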
Face Recognition: A Literature Survey
, 2000
"... ... This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into ..."
Cited by 1363 (21 self)
Machine Learning in Automated Text Categorization
- ACM COMPUTING SURVEYS
, 2002
"... The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this p ..."
Abstract
-
Cited by 1658 (22 self)
- Add to MetaCart
to this problem is based on machine learning techniques: a general inductive process automatically builds a classifier by learning, from a set of preclassified documents, the characteristics of the categories. The advantages of this approach over the knowledge engineering approach (consisting in the manual
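A minimal sketch of the inductive approach this abstract describes, assuming scikit-learn is available; the tiny corpus and the two category labels are invented for illustration.

```python
# An inductive learner builds a text classifier from preclassified documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Preclassified training documents: the examples the learner generalizes from.
docs = [
    "the striker scored twice in the final match",
    "the goalkeeper saved a late penalty",
    "the central bank raised interest rates again",
    "quarterly earnings beat analyst forecasts",
]
labels = ["sports", "sports", "finance", "finance"]

# General inductive process: learn the characteristics of each category.
classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
classifier.fit(docs, labels)

# The learned classifier assigns categories to previously unseen documents.
print(classifier.predict(["the club signed a new striker",
                          "rates were cut to boost earnings"]))
```

Any vectorizer/learner pair that fits from labeled documents plays the same role; the specific pipeline here is just one common choice.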
The Protection of Information in Computer Systems
, 1975
"... This tutorial paper explores the mechanics of protecting computer-stored information from unauthorized use or modification. It concentrates on those architectural structures--whether hardware or software--that are necessary to support information protection. The paper develops in three main sections ..."
Abstract
-
Cited by 815 (2 self)
- Add to MetaCart
sections. Section I describes desired functions, design principles, and examples of elementary protection and authentication mechanisms. Any reader familiar with computers should find the first section to be reasonably accessible. Section II requires some familiarity with descriptor-based computer
Goal-directed Requirements Acquisition
- SCIENCE OF COMPUTER PROGRAMMING
, 1993
"... Requirements analysis includes a preliminary acquisition step where a global model for the specification of the system and its environment is elaborated. This model, called requirements model, involves concepts that are currently not supported by existing formal specification languages, such as goal ..."
Abstract
-
Cited by 572 (17 self)
- Add to MetaCart
Requirements analysis includes a preliminary acquisition step where a global model for the specification of the system and its environment is elaborated. This model, called requirements model, involves concepts that are currently not supported by existing formal specification languages, such as goals to be achieved, agents to be assigned, alternatives to be negotiated, etc. The paper presents an approach to requirements acquisition which is driven by such higher-level concepts. Requirements models are acquired as instances of a conceptual meta-model. The latter can be represented as a graph where each node captures an abstraction such as, e.g., goal, action, agent, entity, or event, and where the edges capture semantic links between such abstractions. Well-formedness properties on nodes and links constrain their instances - that is, elements of requirements models. Requirements acquisition processes then correspond to particular ways of traversing the meta-model graph to acquire approp...
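To make the meta-model idea concrete, here is a small illustrative sketch in Python (not the paper's formalism or tooling): typed nodes, typed links, and well-formedness rules restricting which node types each link type may connect. The node, link, and example names are invented for this sketch.

```python
# A requirements model as an instance of a conceptual meta-model:
# typed nodes, typed links, and well-formedness constraints on both.
from dataclasses import dataclass, field

NODE_TYPES = {"goal", "agent", "action", "entity", "event"}
# Well-formedness: each link type may only connect particular node types.
LINK_RULES = {
    "reduces":         ("goal", "goal"),    # a subgoal reduces a parent goal
    "responsibility":  ("agent", "goal"),   # an agent is assigned a goal
    "operationalizes": ("action", "goal"),  # an action operationalizes a goal
    "concerns":        ("goal", "entity"),
}

@dataclass
class RequirementsModel:
    nodes: dict = field(default_factory=dict)   # name -> node type
    links: list = field(default_factory=list)   # (link type, source, target)

    def add_node(self, name, node_type):
        assert node_type in NODE_TYPES, f"unknown node type {node_type}"
        self.nodes[name] = node_type

    def add_link(self, link_type, source, target):
        src_type, dst_type = LINK_RULES[link_type]   # unknown link types raise KeyError
        assert self.nodes[source] == src_type, "ill-formed link source"
        assert self.nodes[target] == dst_type, "ill-formed link target"
        self.links.append((link_type, source, target))

# Acquisition then corresponds to traversing the meta-model and instantiating it.
model = RequirementsModel()
model.add_node("BookAvailable", "goal")
model.add_node("CopyBorrowed", "goal")
model.add_node("Library", "agent")
model.add_link("reduces", "CopyBorrowed", "BookAvailable")
model.add_link("responsibility", "Library", "BookAvailable")
print(len(model.nodes), "nodes,", len(model.links), "links are well-formed")
```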
Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. Technical Report 2003/235, Cryptology ePrint archive, http://eprint.iacr.org, 2006. Previous version appeared at EUROCRYPT 2004
- Yevgeniy Dodis, Leonid Reyzin, and Adam Smith
, 2004
"... We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying mater ..."
Abstract
-
Cited by 532 (38 self)
- Add to MetaCart
We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of “closeness” of input data, such as Hamming distance, edit distance, and set difference.
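The following toy Python sketch mirrors the two primitives this abstract describes for the Hamming-distance setting, using the classic code-offset idea with a simple repetition code. It is illustrative only: the block size, the use of SHA-256 in place of a proper randomness extractor, and the sample inputs are assumptions made here, not the paper's constructions.

```python
# Toy secure sketch (code-offset with a repetition code) and fuzzy extractor.
import hashlib
import secrets

REP = 5  # each underlying bit stored REP times; corrects < REP/2 flips per block

def _encode(bits):                     # repetition-code encoder
    return [b for b in bits for _ in range(REP)]

def _decode(codeword):                 # majority vote per block
    return [int(sum(codeword[i:i + REP]) * 2 > REP)
            for i in range(0, len(codeword), REP)]

def sketch(w):
    """Secure sketch SS(w): publish s = w XOR c for a random codeword c."""
    assert len(w) % REP == 0
    c = _encode([secrets.randbelow(2) for _ in range(len(w) // REP)])
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    """Rec(w', s): decode w' XOR s back to the codeword, then XOR with s to get w."""
    c_noisy = [wi ^ si for wi, si in zip(w_noisy, s)]
    c = _encode(_decode(c_noisy))
    return [ci ^ si for ci, si in zip(c, s)]

def extract(w):
    """Gen: derive a key from w; keep the sketch as public helper data."""
    s = sketch(w)
    key = hashlib.sha256(bytes(w)).hexdigest()   # stand-in for a strong extractor
    return key, s

def reproduce(w_noisy, s):
    """Rep: recover w from a close reading w', then re-derive the same key."""
    return hashlib.sha256(bytes(recover(w_noisy, s))).hexdigest()

# Enrollment with a reference reading, reproduction from a slightly noisy one.
w = [secrets.randbelow(2) for _ in range(40)]
key, helper = extract(w)
w_noisy = list(w)
w_noisy[3] ^= 1
w_noisy[27] ^= 1
assert reproduce(w_noisy, helper) == key
print("same key recovered from a noisy reading")
```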