Results 1 - 10 of 2,490
Ontology Learning for the Semantic Web - IEEE Intelligent Systems, 2001
"... The Semantic Web relies heavily on the formal ontologies that structure underlying data for the purpose of comprehensive and transportable machine understanding. Therefore, the success of the Semantic Web depends strongly on the proliferation of ontologies, which requires fast and easy engineering o ..."
Abstract - Cited by 492 (16 self)
... semi-structured and fully structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, refinement, and evaluation, giving the ontology engineer a wealth of coordinated tools for ontology modeling
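As a rough Python illustration of the import, extraction, pruning, refinement, and evaluation cycle the abstract names, here is a toy pipeline; every function body (and the two-sentence corpus) is an invented stand-in for illustration, not the paper's actual framework.

```python
# Toy sketch of a five-stage ontology learning cycle: import, extract,
# prune, refine, evaluate. All functions are hypothetical stand-ins.

def import_ontology(sources):
    # Stage 1: reuse concepts from existing ontologies as a seed set.
    return {concept for source in sources for concept in source}

def extract_concepts(corpus):
    # Stage 2 stand-in for term extraction: capitalized tokens as candidates.
    return {tok for doc in corpus for tok in doc.split() if tok.istitle()}

def prune(concepts, corpus, min_freq=2):
    # Stage 3: drop candidates that appear too rarely to be domain-relevant.
    text = " ".join(corpus)
    return {c for c in concepts if text.count(c) >= min_freq}

def refine(concepts):
    # Stage 4: placeholder for adding relations/hierarchy between concepts.
    return sorted(concepts)

def evaluate(ontology, gold):
    # Stage 5: simple recall against a hand-built reference set.
    return len(set(ontology) & gold) / len(gold) if gold else 0.0

corpus = ["The Semantic Web relies on Ontologies", "Ontologies structure Data"]
concepts = prune(import_ontology([{"Thing"}]) | extract_concepts(corpus), corpus)
print(evaluate(refine(concepts), gold={"Ontologies", "Data"}))
```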
The Extraction of Refined Rules from Knowledge-Based Neural Networks - Machine Learning, 1993
"... Neural networks, despite their empirically-proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge mus ..."
Abstract - Cited by 230 (4 self)
Neural networks, despite their empirically-proven abilities, have been little used for the refinement of existing knowledge because this task requires a three-step process. First, knowledge in some form must be inserted into a neural network. Second, the network must be refined. Third, knowledge
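The three-step loop outlined here can be made concrete with a toy sketch: compile a symbolic rule into network weights, refine the weights on data, then read a refined rule back out. The encoding below is a loose, KBANN-flavored assumption for illustration only, not the paper's algorithm.

```python
# Toy three-step refinement loop: (1) insert knowledge, (2) refine, (3) extract.
import numpy as np

def rules_to_weights(n_inputs, rule):          # step 1: insert knowledge
    w = np.zeros(n_inputs)
    w[list(rule)] = 4.0                        # strong links for antecedents
    # Bias encodes an AND: positive net input only when all antecedents fire.
    return w, -4.0 * (len(rule) - 0.5)

def refine_weights(w, b, X, y, lr=0.1, epochs=200):   # step 2: train
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid unit
        w += lr * X.T @ (y - p)
        b += lr * np.sum(y - p)
    return w, b

def weights_to_rule(w, threshold=2.0):         # step 3: extract refined rule
    return {i for i, wi in enumerate(w) if wi > threshold}

X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]], float)
y = np.array([1, 1, 0, 0], float)              # label depends only on feature 0
w, b = rules_to_weights(3, rule={0, 1})        # initial (imperfect) rule: 0 AND 1
w, b = refine_weights(w, b, X, y)
print(weights_to_rule(w))                      # refined antecedents (may shrink toward {0})
```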
What is the Best Multi-Stage Architecture for Object Recognition?
"... In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where the filter ..."
Abstract - Cited by 252 (22 self)
In many recent object recognition systems, feature extraction stages are generally composed of a filter bank, a non-linear transformation, and some sort of feature pooling layer. Most systems use only one stage of feature extraction in which the filters are hard-wired, or two stages where
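The stage this snippet describes (a filter bank, then a pointwise non-linearity, then feature pooling) is easy to sketch. Below is a minimal NumPy illustration of one such stage; the random 3x3 filters, the tanh-plus-absolute-value rectification, and the 2x2 max pooling are assumptions for the example, not the paper's configuration.

```python
# One feature-extraction stage: filter bank -> non-linearity -> pooling.
import numpy as np

def feature_stage(image, filters, pool=2):
    maps = []
    for f in filters:                               # 1) filter bank (valid conv)
        fh, fw = f.shape
        h, w = image.shape[0] - fh + 1, image.shape[1] - fw + 1
        out = np.array([[np.sum(image[i:i+fh, j:j+fw] * f)
                         for j in range(w)] for i in range(h)])
        out = np.abs(np.tanh(out))                  # 2) non-linear transformation
        out = out[:h - h % pool, :w - w % pool]     # trim to pooling grid
        out = out.reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
        maps.append(out)                            # 3) max pooling over 2x2 cells
    return np.stack(maps)

rng = np.random.default_rng(0)
filters = rng.standard_normal((4, 3, 3))            # 4 random 3x3 filters
print(feature_stage(rng.standard_normal((16, 16)), filters).shape)  # (4, 7, 7)
```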
Kalman Filter-based Algorithms for Estimating Depth from Image Sequences, 1989
"... Using known camera motion to estimate depth from image sequences is an important problem in robot vision. Many applications of depth-from-motion, including navigation and manipulation, require algorithms that can estimate depth in an on-line, incremental fashion. This requires a representation that ..."
Abstract - Cited by 259 (26 self)
at the location of a sparse set of features. In this paper, we introduce a new, pixel-based (iconic) algorithm that estimates depth and depth uncertainty at each pixel and incrementally refines these estimates over time. We describe the algorithm and contrast its formulation and performance to that of a feature
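As a rough sketch of the pixel-based (iconic) idea in this snippet, the toy Python below keeps a depth estimate and an uncertainty at every pixel and refines both with a scalar Kalman update each frame; the synthetic scene and noise model are invented for illustration.

```python
# Per-pixel depth with uncertainty, refined incrementally via Kalman updates.
import numpy as np

def kalman_depth_update(depth, var, meas, meas_var):
    """One incremental refinement step at every pixel."""
    gain = var / (var + meas_var)           # Kalman gain per pixel
    depth = depth + gain * (meas - depth)   # blend estimate and measurement
    var = (1.0 - gain) * var                # uncertainty shrinks over time
    return depth, var

rng = np.random.default_rng(1)
true_depth = np.full((8, 8), 5.0)
depth = np.zeros((8, 8))                    # initial estimate
var = np.full((8, 8), 1e3)                  # initially very uncertain
for _ in range(10):                         # ten frames of noisy measurements
    meas = true_depth + rng.normal(0.0, 0.2, size=(8, 8))
    depth, var = kalman_depth_update(depth, var, meas, 0.2**2)
print(depth.mean(), var.mean())             # converges near 5.0, small variance
```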
WebMate: A Personal Agent for Browsing and Searching - In Proceedings of the Second International Conference on Autonomous Agents, 1998
"... The World-Wide Web is developing very fast. Currently, finding useful information on the Web is a time consuming process. In this paper, we present WebMate, an agent that helps users to effectively browse and search the Web. WebMate extends the state of the art in Web-based information retrieval in ..."
Abstract - Cited by 239 (10 self)
in many ways. First, it uses multiple TF-IDF vectors to keep track of user interests in different domains. These domains are automatically learned by WebMate. Second, WebMate uses the Trigger Pair Model to automatically extract keywords for refining document search. Third, during search, the user can
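The first idea in this snippet (one TF-IDF vector per learned interest domain) can be sketched compactly. In the toy Python below, the tokenizer, the two hard-coded domains, and the smoothed IDF formula are illustrative assumptions, not WebMate's actual implementation.

```python
# One TF-IDF profile per interest domain; match a new page to the closest one.
import math
from collections import Counter

def tfidf(tokens, df, n_docs):
    tf = Counter(tokens)
    # Smoothed IDF so unseen and single-document terms stay non-zero.
    return {t: tf[t] * math.log(1.0 + n_docs / (1 + df.get(t, 0))) for t in tf}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = {"sports": "league game score team game".split(),
        "ml": "gradient model training model loss".split()}
df = Counter(t for toks in docs.values() for t in set(toks))
profiles = {d: tfidf(toks, df, len(docs)) for d, toks in docs.items()}

page = "training loss curve for the model".split()
scores = {d: cosine(tfidf(page, df, len(docs)), p) for d, p in profiles.items()}
print(max(scores, key=scores.get))          # -> "ml"
```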
Modular verification of software components in C - IEEE Transactions on Software Engineering, 2003
"... We present a new methodology for automatic verification of C programs against finite state machine specifications. Our approach is compositional, naturally enabling us to decompose the verification of large software systems into subproblems of manageable complexity. The decomposition reflects the mo ..."
Abstract - Cited by 233 (23 self)
the modularity in the software design. We use weak simulation as the notion of conformance between the program and its specification. Following the abstract-verify-refine paradigm, our tool MAGIC first extracts a finite model from C source code using predicate abstraction and theorem proving. Subsequently
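The abstract-verify-refine loop named here follows the familiar counterexample-guided shape. The Python skeleton below is only a schematic stand-in: real predicate abstraction, theorem proving, and the weak-simulation check are replaced by toy set operations, and none of the names come from MAGIC itself.

```python
# Schematic abstract-verify-refine loop with toy stand-ins for each step.

def abstract(program, predicates):
    # Stand-in for predicate abstraction: track only the chosen predicates.
    return frozenset(p for p in predicates if p in program)

def check_conformance(model, spec):
    # Stand-in for the weak-simulation check; returns a "counterexample"
    # (an untracked spec predicate) or None when the model conforms.
    missing = spec - model
    return next(iter(missing), None)

def verify(program, spec):
    predicates = set()
    while True:
        model = abstract(program, predicates)            # 1) abstract
        cex = check_conformance(model, spec)             # 2) verify
        if cex is None:
            return "conforms"
        if cex not in program:
            return f"violates spec: {cex!r} is unreachable"
        predicates.add(cex)                              # 3) refine and repeat

program = {"lock", "unlock", "return"}
print(verify(program, spec={"lock", "unlock"}))          # -> "conforms"
```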
Patterns of Search: Analyzing and Modeling Web Query Refinement, 1998
"... We discuss the construction of probabilistic models centering on temporal patterns of query refinement. Our analyses are derived from a large corpus of Web search queries extracted from server logs recorded by a popular Internet search service. We frame the modeling task in terms of pursuing an ..."
Abstract - Cited by 130 (10 self)
We discuss the construction of probabilistic models centering on temporal patterns of query refinement. Our analyses are derived from a large corpus of Web search queries extracted from server logs recorded by a popular Internet search service. We frame the modeling task in terms of pursuing
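One simple way to model temporal refinement patterns of this kind is a first-order Markov chain over refinement actions estimated from session logs; the sketch below uses an invented action alphabet and toy sessions, not the paper's corpus or model family.

```python
# Fit a first-order Markov chain over query-refinement actions from sessions.
from collections import Counter, defaultdict

sessions = [["new", "specialize", "specialize", "done"],
            ["new", "generalize", "specialize", "done"],
            ["new", "specialize", "done"]]

counts = defaultdict(Counter)
for s in sessions:
    for prev, nxt in zip(s, s[1:]):         # count observed transitions
        counts[prev][nxt] += 1

def p_next(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# Probability that a user specializes again right after specializing:
print(p_next("specialize", "specialize"))   # 1/4 with the toy sessions above
```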
Automatically Refining the Wikipedia Infobox Ontology, 2008
"... The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more se ..."
Abstract - Cited by 102 (7 self)
The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more
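As a small illustration of what "machine-harvestable object-attribute-value triples" look like in practice, the Python sketch below pulls triples out of simplified infobox-style markup; the regular expression and the sample wikitext are assumptions for the example, not Kylin's extractor.

```python
# Read object-attribute-value triples from simplified infobox markup.
import re

def infobox_triples(title, wikitext):
    """Yield (object, attribute, value) triples from '| key = value' lines."""
    for key, value in re.findall(r"^\|\s*(\w+)\s*=\s*(.+)$", wikitext, re.M):
        yield (title, key.strip(), value.strip())

wikitext = """{{Infobox settlement
| name = Seattle
| population = 737015
| state = Washington
}}"""
for triple in infobox_triples("Seattle", wikitext):
    print(triple)   # e.g. ('Seattle', 'population', '737015')
```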