Results 1–10 of 204,148
Machine Learning in Automated Text Categorization
 ACM COMPUTING SURVEYS
, 2002
"... The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach to this p ..."
Abstract

Cited by 1658 (22 self)
 Add to MetaCart
The automated categorization (or classification) of texts into predefined categories has witnessed a booming interest in the last ten years, due to the increased availability of documents in digital form and the ensuing need to organize them. In the research community the dominant approach
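As a loose illustration of the kind of machine-learning approach this survey covers, here is a minimal multinomial Naive Bayes text categorizer. The categories, documents, and whitespace tokenization are all invented for this sketch; the survey itself covers many classifier families.

```python
from collections import Counter
from math import log

# Toy multinomial Naive Bayes categorizer (categories and documents are
# invented for illustration; tokenization is plain whitespace splitting).
def train(docs_by_cat):
    priors, word_counts, totals = {}, {}, {}
    n_docs = sum(len(docs) for docs in docs_by_cat.values())
    vocab = {w for docs in docs_by_cat.values() for d in docs for w in d.split()}
    for cat, docs in docs_by_cat.items():
        priors[cat] = log(len(docs) / n_docs)
        counts = Counter(w for d in docs for w in d.split())
        word_counts[cat] = counts
        totals[cat] = sum(counts.values())
    return priors, word_counts, totals, vocab

def classify(text, model):
    priors, word_counts, totals, vocab = model
    def score(cat):
        s = priors[cat]
        for w in text.split():
            # Laplace smoothing over the shared vocabulary.
            s += log((word_counts[cat][w] + 1) / (totals[cat] + len(vocab)))
        return s
    return max(priors, key=score)

model = train({
    "sports": ["the team won the game", "a great goal in the match"],
    "tech": ["the new chip runs machine learning models", "software release"],
})
print(classify("machine learning software", model))  # → tech
```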
Querying Heterogeneous Information Sources Using Source Descriptions
, 1996
"... We witness a rapid increase in the number of structured information sources that are available online, especially on the WWW. These sources include commercial databases on product information, stock market information, real estate, automobiles, and entertainment. We would like to use the data stored ..."
Abstract

Cited by 727 (35 self)
 Add to MetaCart
We witness a rapid increase in the number of structured information sources that are available online, especially on the WWW. These sources include commercial databases on product information, stock market information, real estate, automobiles, and entertainment. We would like to use the data
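A minimal sketch of routing a query using source descriptions: each source advertises which relation and attributes it can answer, plus a constraint on the content it holds, and the router keeps only sources consistent with the query. The source names, schema, and constraint encoding are invented for illustration; the paper's description language is considerably richer.

```python
# Toy "source descriptions": each source advertises the relation it covers,
# the attributes it exports, and a content constraint (all names invented).
SOURCES = {
    "cars-a": {"relation": "car", "attrs": {"make", "model", "price"},
               "constraint": lambda q: q.get("country") in (None, "US")},
    "cars-b": {"relation": "car", "attrs": {"make", "price", "country"},
               "constraint": lambda q: True},
    "stocks": {"relation": "stock", "attrs": {"ticker", "price"},
               "constraint": lambda q: True},
}

def relevant_sources(relation, needed_attrs, query):
    """Return sources that cover the relation, export every needed
    attribute, and whose content constraint is consistent with the query."""
    return [name for name, d in SOURCES.items()
            if d["relation"] == relation
            and needed_attrs <= d["attrs"]
            and d["constraint"](query)]

print(relevant_sources("car", {"make", "price"}, {"country": "DE"}))  # → ['cars-b']
```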
Symbolic Model Checking without BDDs
, 1999
"... Symbolic Model Checking [3, 14] has proven to be a powerful technique for the verification of reactive systems. BDDs [2] have traditionally been used as a symbolic representation of the system. In this paper we show how boolean decision procedures, like Stalmarck's Method [16] or the Davis ..."
Abstract

Cited by 910 (74 self)
 Add to MetaCart
& Putnam Procedure [7], can replace BDDs. This new technique avoids the space blow up of BDDs, generates counterexamples much faster, and sometimes speeds up the verification. In addition, it produces counterexamples of minimal length. We introduce a bounded model checking procedure for LTL
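The paper encodes a depth-k unrolling of the transition relation as a propositional formula for a SAT-style decision procedure. The sketch below conveys only the shape of that idea, bounded unrolling with a minimal-length counterexample, by brute-force path enumeration over a toy 2-bit counter whose transition system and invariant are invented here; no SAT solver or BDD is involved.

```python
# Bounded search for a counterexample: unroll the transition relation of a
# toy 2-bit counter to depth k and look for a path violating "counter != 3".
INIT = {(0, 0)}                                    # states are (b1, b0) bit pairs

def step(s):                                       # counter increments mod 4
    v = (s[0] * 2 + s[1] + 1) % 4
    return {(v // 2, v % 2)}

def bmc(bad, k):
    """Return a length-minimal path from an initial state to a bad state
    within k steps, or None if none exists at this bound."""
    frontier = [[s] for s in INIT]
    for _ in range(k + 1):
        for path in frontier:
            if bad(path[-1]):
                return path                        # shortest witness first
        frontier = [p + [t] for p in frontier for t in step(p[-1])]
    return None

print(bmc(lambda s: s == (1, 1), k=5))  # → [(0, 0), (0, 1), (1, 0), (1, 1)]
```

A real BMC tool would express the same unrolling symbolically and hand it to a boolean decision procedure, which is what makes the approach scale past explicit enumeration.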
A Volumetric Method for Building Complex Models from Range Images
, 1996
"... A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robus ..."
Abstract

Cited by 1018 (18 self)
 Add to MetaCart
with one range image at a time, we first scanconvert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a runlength encoding of the volume. To achieve time efficiency, we resample the range image to align
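A one-dimensional sketch of the additive merging scheme: each observation contributes a truncated signed distance and a weight at every voxel, and the fused surface is the zero crossing of the averaged field. The grid, truncation value, and observations are invented; the paper operates on a 3-D run-length-encoded volume with per-sample directional weights.

```python
# 1-D additive volumetric fusion sketch (all numbers invented): each scan
# observes a surface depth; voxels accumulate truncated signed distances.
GRID = [x * 0.1 for x in range(11)]                # voxel centers 0.0 .. 1.0

def fuse(observations, trunc=0.25):
    dsum = [0.0] * len(GRID)
    wsum = [0.0] * len(GRID)
    for surface in observations:
        for i, x in enumerate(GRID):
            d = surface - x                        # signed distance to surface
            if abs(d) <= trunc:                    # ignore voxels far away
                dsum[i] += d                       # simple additive scheme
                wsum[i] += 1.0                     # unit weight per observation
    return [ds / ws if ws else None for ds, ws in zip(dsum, wsum)]

field = fuse([0.48, 0.52, 0.50])
# The zero crossing of `field` near x = 0.5 marks the fused surface;
# voxels no observation reached stay None (gaps the paper's method fills).
```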
The Large-Scale Organization of Metabolic Networks
, 2000
"... In a cell or microorganism the processes that generate mass, energy, information transfer, and cell fate specification are seamlessly integrated through a complex network of various cellular constituents and reactions. However, despite the key role these networks play in sustaining various cellular ..."
Abstract

Cited by 599 (7 self)
 Add to MetaCart
In a cell or microorganism the processes that generate mass, energy, information transfer, and cell fate specification are seamlessly integrated through a complex network of various cellular constituents and reactions. However, despite the key role these networks play in sustaining various cellular functions, their largescale structure is essentially unknown. Here we present the first systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variances in their individual constituents and pathways, these metabolic networks display the same topologic scaling properties demonstrating striking similarities to the inherent organization of complex nonbiological systems. This suggests that the metabolic organization is not only identical for all living organisms, but complies with the design principles of robust and errortolerant networks, and may represent a common blueprint for the largescale organization of interactions among all cellular constituents.
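The core measurement behind such scaling claims is the degree distribution P(k): in a scale-free network it falls off roughly as a power law k^(-γ). A minimal version of that computation, on a small invented graph rather than real metabolic data:

```python
from collections import Counter

# Degree distribution P(k) of a toy undirected graph (edges invented);
# the paper computes this for the metabolic networks of 43 organisms.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),
         ("B", "C"), ("D", "E"), ("E", "F")]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

n = len(degree)
pk = {k: c / n for k, c in Counter(degree.values()).items()}
print(dict(sorted(pk.items())))   # fraction of nodes having each degree k
```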
Random Oracles are Practical: A Paradigm for Designing Efficient Protocols
, 1995
"... We argue that the random oracle model  where all parties have access to a public random oracle  provides a bridge between cryptographic theory and cryptographic practice. In the paradigm we suggest, a practical protocol P is produced by first devising and proving correct a protocol P R for the ..."
Abstract

Cited by 1643 (75 self)
 Add to MetaCart
We argue that the random oracle model  where all parties have access to a public random oracle  provides a bridge between cryptographic theory and cryptographic practice. In the paradigm we suggest, a practical protocol P is produced by first devising and proving correct a protocol P R for the random oracle model, and then replacing oracle accesses by the computation of an "appropriately chosen" function h. This paradigm yields protocols much more efficient than standard ones while retaining many of the advantages of provable security. We illustrate these gains for problems including encryption, signatures, and zeroknowledge proofs.
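To make the paradigm concrete, here is a generic hash commitment, not one of the paper's protocols: the scheme is designed assuming a random oracle H, then the oracle accesses are replaced by an "appropriately chosen" concrete function, with SHA-256 standing in as an illustrative (not endorsed) choice.

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    """Stand-in for the random oracle; SHA-256 is an illustrative choice."""
    return hashlib.sha256(data).digest()

def commit(message: bytes):
    r = os.urandom(16)                  # fresh randomness hides the message
    return H(r + message), r            # (commitment, opening value)

def verify(commitment: bytes, r: bytes, message: bytes) -> bool:
    return H(r + message) == commitment

c, r = commit(b"bid: 42")
print(verify(c, r, b"bid: 42"), verify(c, r, b"bid: 43"))  # → True False
```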
A Threshold of ln n for Approximating Set Cover
 JOURNAL OF THE ACM
, 1998
"... Given a collection F of subsets of S = f1; : : : ; ng, set cover is the problem of selecting as few as possible subsets from F such that their union covers S, and max kcover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NPhar ..."
Abstract

Cited by 778 (5 self)
 Add to MetaCart
Given a collection F of subsets of S = f1; : : : ; ng, set cover is the problem of selecting as few as possible subsets from F such that their union covers S, and max kcover is the problem of selecting k subsets from F such that their union has maximum cardinality. Both these problems are NPhard. We prove that (1 \Gamma o(1)) ln n is a threshold below which set cover cannot be approximated efficiently, unless NP has slightly superpolynomial time algorithms. This closes the gap (up to low order terms) between the ratio of approximation achievable by the greedy algorithm (which is (1 \Gamma o(1)) ln n), and previous results of Lund and Yannakakis, that showed hardness of approximation within a ratio of (log 2 n)=2 ' 0:72 lnn. For max kcover we show an approximation threshold of (1 \Gamma 1=e) (up to low order terms), under the assumption that P != NP .
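The greedy algorithm whose (1 − o(1)) ln n ratio the paper shows to be essentially optimal is simple: repeatedly pick the subset covering the most still-uncovered elements. The instance below is invented for illustration.

```python
# Greedy set cover: pick the subset with maximum marginal coverage until
# the universe is covered (instance is invented for illustration).
def greedy_set_cover(universe, subsets):
    uncovered, chosen = set(universe), []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            return None                  # instance is not coverable
        chosen.append(best)
        uncovered -= set(best)
    return chosen

S = range(1, 7)
F = [[1, 2, 3], [3, 4], [4, 5, 6], [1, 6]]
print(greedy_set_cover(S, F))  # → [[1, 2, 3], [4, 5, 6]]
```

Stopping the same loop after k iterations gives the classical greedy (1 − 1/e)-approximation for max k-cover, which the paper's second threshold shows is also essentially best possible.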
Reopening the Convergence Debate: A new look at cross-country growth empirics
 JOURNAL OF ECONOMIC GROWTH
, 1996
"... ..."
The lexical nature of syntactic ambiguity resolution
 Psychological Review
, 1994
"... Ambiguity resolution is a central problem in language comprehension. Lexical and syntactic ambiguities are standardly assumed to involve different types of knowledge representations and be resolved by different mechanisms. An alternative account is provided in which both types of ambiguity derive fr ..."
Abstract

Cited by 556 (23 self)
 Add to MetaCart
Ambiguity resolution is a central problem in language comprehension. Lexical and syntactic ambiguities are standardly assumed to involve different types of knowledge representations and be resolved by different mechanisms. An alternative account is provided in which both types of ambiguity derive from aspects of lexical representation and are resolved by the same processing mechanisms. Reinterpreting syntactic ambiguity resolution as a form of lexical ambiguity resolution obviates the need for special parsing principles to account for syntactic interpretation preferences, reconciles a number of apparently conflicting results concerning the roles of lexical and contextual information in sentence processing, explains differences among ambiguities in terms of ease of resolution, and provides a more unified account of language comprehension than was previously available. One of the principal goals for a theory of language compre third section we consider processing issues: how information is hension is to explain how the reader or listener copes with a processed within the mental lexicon and how contextual inforpervasive ambiguity problem. Languages are structured at mation can influence processing. The central processing mechmultiple levels simultaneously, including lexical, phonological, anism we invoke is the constraint satisfaction process that has morphological, syntactic, and text or discourse levels. At any been realized in interactiveactivation models (e.g., Elman &
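A toy settling network in the spirit of the interactive-activation constraint-satisfaction mechanism the abstract invokes: two candidate readings of an ambiguous word compete, each pushed by lexical bias and contextual support while inhibiting the other. All weights, units, and update rules here are invented for illustration, not taken from the paper's models.

```python
# Toy constraint-satisfaction settling: two readings compete; activation
# is driven by (invented) lexical bias and contextual support, with
# mutual inhibition, until the network settles on one interpretation.
def settle(bias, context, steps=50, rate=0.1):
    a = [0.5, 0.5]                             # activations of readings 1 and 2
    for _ in range(steps):
        net1 = bias[0] + context[0] - a[1]     # each reading inhibits the other
        net2 = bias[1] + context[1] - a[0]
        a[0] = min(1.0, max(0.0, a[0] + rate * net1))
        a[1] = min(1.0, max(0.0, a[1] + rate * net2))
    return a

# Lexical frequency favors reading 1, but strong context supports reading 2;
# after settling, reading 2 wins the competition.
result = settle(bias=[0.6, 0.4], context=[0.0, 0.5])
print(result)
```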
Pseudo-Random Generation from One-Way Functions
 PROC. 20TH STOC
, 1988
"... Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show howto construct a pseudorandom generator from any oneway function. Since it is easy to construct a oneway function from a pseudorandom generator, this result shows that there is a pseudorandom gene ..."
Abstract

Cited by 887 (22 self)
 Add to MetaCart
Pseudorandom generators are fundamental to many theoretical and applied aspects of computing. We show howto construct a pseudorandom generator from any oneway function. Since it is easy to construct a oneway function from a pseudorandom generator, this result shows that there is a pseudorandom generator iff there is a oneway function.
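The paper's construction from an arbitrary one-way function is intricate; the sketch below shows only the standard stretching step that such results rely on: given any length-doubling generator G, iterating it yields arbitrarily many output bits. SHA-512 stands in for G purely as a placeholder, with no claim of provable security.

```python
import hashlib

def G(seed: bytes) -> bytes:
    """Placeholder length-doubler: 32 bytes in, 64 bytes out.  A hash is
    used only to make the sketch runnable; it is not a proven PRG."""
    assert len(seed) == 32
    return hashlib.sha512(seed).digest()

def prg(seed: bytes, nbytes: int) -> bytes:
    """Stretch a short seed: half of each G output re-seeds the state,
    the other half is emitted as output."""
    out, state = b"", seed
    while len(out) < nbytes:
        block = G(state)
        state, chunk = block[:32], block[32:]
        out += chunk
    return out[:nbytes]

stream = prg(b"\x00" * 32, 80)
print(len(stream))  # → 80
```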