Results 1–10 of 205
Comparing Performance of Algorithms for Generating Concept Lattices
Journal of Experimental and Theoretical Artificial Intelligence, 2002
Cited by 91 (8 self)
Several algorithms that generate the set of all formal concepts and diagram graphs of concept lattices are considered. Some modifications of well-known algorithms are proposed. Algorithmic complexity of the algorithms is studied both theoretically (in the worst case) and experimentally. Conditions of preferable use of some algorithms are given in terms of density/sparseness of underlying formal contexts. Principles of comparing practical performance of algorithms are discussed.
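As a point of reference for the algorithms this abstract compares, the brute-force baseline (not one of the paper's optimized algorithms) enumerates every formal concept of a small context by closing each subset of objects; the toy context below is invented for illustration:

```python
from itertools import combinations

def object_derivation(A, context, attributes):
    """A': attributes shared by every object in A."""
    return {m for m in attributes if all(m in context[g] for g in A)}

def attribute_derivation(B, context):
    """B': objects possessing every attribute in B."""
    return {g for g, attrs in context.items() if B <= attrs}

def all_concepts(context, attributes):
    """Brute force: close every subset of objects; each resulting
    (extent, intent) pair is a formal concept."""
    concepts = set()
    objects = list(context)
    for r in range(len(objects) + 1):
        for A in combinations(objects, r):
            intent = object_derivation(set(A), context, attributes)
            extent = attribute_derivation(intent, context)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

# Invented toy context: objects mapped to their attribute sets
ctx = {"duck": {"flies", "swims"}, "swan": {"flies", "swims"}, "dog": {"runs"}}
attrs = {"flies", "swims", "runs"}
print(len(all_concepts(ctx, attrs)))  # 4 concepts for this context
```

This exponential-time baseline is exactly what the surveyed algorithms (NextClosure, Bordat, and others) improve upon by generating each concept at most a polynomial number of times.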
Pattern Structures and Their Projections
2001
Cited by 39 (11 self)
Pattern structures consist of objects with descriptions (called patterns) that allow a semilattice operation on them. Pattern structures arise naturally from ordered data, e.g., from labeled graphs ordered by graph morphisms. It is shown that pattern structures can be reduced to formal contexts; however, processing the former is often more efficient and intuitive than processing the latter. Concepts, implications, plausible hypotheses, and classifications are defined for data given by pattern structures. Since computation in pattern structures may be intractable, approximations of patterns by means of projections are introduced.
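A minimal sketch of the idea, assuming the simplest instance where descriptions are sets and the semilattice operation is intersection (the names `pattern_intent` and `project` are illustrative, not from the paper):

```python
from functools import reduce

def meet(p, q):
    """The semilattice operation on patterns; here patterns are sets
    and the meet is set intersection."""
    return p & q

def pattern_intent(objects, delta):
    """Common pattern of a set of objects: the meet of their descriptions."""
    return reduce(meet, (delta[g] for g in objects))

def project(pattern, vocab):
    """A projection approximating patterns: map each pattern to its
    restriction to a fixed vocabulary (monotone, contractive, idempotent)."""
    return pattern & vocab

# Invented descriptions delta(g) for two objects
delta = {"g1": frozenset("abc"), "g2": frozenset("bcd")}
print(sorted(pattern_intent(["g1", "g2"], delta)))   # ['b', 'c']
print(sorted(project(delta["g1"], frozenset("ab"))))  # ['a', 'b']
```

For richer patterns (e.g., sets of labeled graphs ordered by morphisms) the meet is more expensive to compute, which is what motivates the projections the abstract mentions.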
Combining Formal Concept Analysis with Information Retrieval for Concept Location in Source Code
In Proc. of ICPC'07, 2007
Cited by 37 (16 self)
The paper addresses the problem of concept location in source code by presenting an approach which combines Formal Concept Analysis (FCA) and Latent Semantic Indexing (LSI). In the proposed approach, LSI is used to map the concepts expressed in queries written by the programmer to relevant parts of the source code, presented as a ranked list of search results. Given the ranked list of source code elements, our approach selects the most relevant attributes from these documents and organizes the results in a concept lattice, generated via FCA. The approach is evaluated in a case study on concept location in the source code of Eclipse, an industrial-size integrated development environment. The results of the case study show that the proposed approach is effective in organizing the different concepts and their relationships present in the subset of the search results. The proposed concept location method outperforms the simple ranking of the search results, reducing the programmers' effort.
Comparing Conceptual, Divisive and Agglomerative Clustering for Learning Taxonomies from Text
2004
Cited by 36 (4 self)
The application of clustering methods for automatic taxonomy construction from text requires knowledge about the trade-off between (i) their effectiveness (quality of the result), (ii) their efficiency (runtime behaviour), and (iii) the traceability of the taxonomy construction by the ontology engineer. Along these lines, we present an original conceptual clustering method based on Formal Concept Analysis for automatic taxonomy construction and compare it with hierarchical agglomerative clustering and hierarchical divisive clustering.
Applied Lattice Theory: Formal Concept Analysis
In General Lattice Theory, G. Grätzer (ed.), Birkhäuser, 1997
Cited by 32 (0 self)
then the theory. Thereby, Formal Concept Analysis has created results that may be of interest even without considering the applications by which they were motivated. For proofs, citations, and further details we refer to [2]. 1 Formal contexts and concept lattices. A triple (G, M, I) is called a formal context if G and M are sets and I ⊆ G × M is a binary relation between G and M. We call the elements of G objects, those of M attributes, and I the incidence of the context (G, M, I). For A ⊆ G, we define A′ := {m ∈ M | (g, m) ∈ I for all g ∈ A}
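The derivation operator A′ defined in this snippet can be computed directly from the incidence relation; the tiny context below is a made-up example, not one from the chapter:

```python
def derivation(A, I, M):
    """A': attributes m in M with (g, m) in I for every g in A."""
    return {m for m in M if all((g, m) in I for g in A)}

# Hypothetical context: G = {g1, g2}, M = {m1, m2},
# incidence I given as a set of (object, attribute) pairs
I = {("g1", "m1"), ("g1", "m2"), ("g2", "m2")}
M = {"m1", "m2"}
print(derivation({"g1", "g2"}, I, M))  # {'m2'}
```

Note that the empty object set derives to all of M, which is why the top and bottom concepts always exist in a concept lattice.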
IF-Map: An Ontology-Mapping Method Based on Information-Flow Theory
2003
Cited by 31 (10 self)
In order to tackle the need of sharing knowledge within and across organisational boundaries, the last decade has seen researchers both in academia and industry advocating the use of ontologies as a means for providing a shared understanding of common domains. But with the generalised use of large distributed environments such as the World Wide Web came the proliferation of many different ontologies, even for the same or similar domains, hence setting forth a new need of sharing: that of sharing ontologies. In addition, if visions such as the Semantic Web are ever going to become a reality, it will be necessary to provide as much automated support as possible for the task of mapping different ontologies. Although many efforts in ontology mapping have already been carried out, we have noticed that few of them are based on strong theoretical grounds and on principled methodologies. Furthermore, many of them are based only on syntactical criteria. In this paper we present a theory and method for automated ontology mapping based on channel theory, a mathematical theory of semantic information flow.
New Algorithms for Enumerating All Maximal Cliques
2004
Cited by 31 (1 self)
Abstract. In this paper, we consider the problems of generating all maximal (bipartite) cliques in a given (bipartite) graph G = (V, E) with n vertices and m edges. We propose two algorithms for enumerating all maximal cliques. One runs with O(M(n)) time delay and in O(n²) space, and the other runs with O(Δ⁴) time delay and in O(n + m) space, where Δ denotes the maximum degree of G, M(n) denotes the time needed to multiply two n × n matrices, and the latter algorithm requires O(nm) time as preprocessing. For a given bipartite graph G, we propose three algorithms for enumerating all maximal bipartite cliques. The first algorithm runs with O(M(n)) time delay and in O(n²) space, which immediately follows from the algorithm for the non-bipartite case. The second runs with O(Δ³) time delay and in O(n + m) space, and the last runs with O(Δ²) time delay and in O(n + m + NΔ) space, where N denotes the number of all maximal bipartite cliques in G; both algorithms require O(nm) time as preprocessing. Our algorithms improve upon all existing algorithms when G is either dense or sparse. Furthermore, computational experiments show that our algorithms for sparse graphs perform very well on graphs that are generated randomly or that appear in real-world problems.
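For orientation, here is a sketch of the classical Bron-Kerbosch recursion for maximal-clique enumeration, which solves the same problem but without the time-delay guarantees of the algorithms proposed in the paper; the example graph is invented:

```python
def bron_kerbosch(R, P, X, adj, out):
    """Grow clique R; P are candidate extensions, X are excluded vertices.
    Report R as maximal when no vertex can extend it (P and X empty)."""
    if not P and not X:
        out.append(frozenset(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}  # v handled: exclude it from further branches
        X = X | {v}
    # (no pivoting here; pivoting prunes the branching in practice)

# Invented example graph: a triangle {1,2,3} plus a pendant edge 3-4
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(sorted(c) for c in cliques))  # [[1, 2, 3], [3, 4]]
```

Bron-Kerbosch can take exponential time between consecutive outputs on adversarial inputs, which is exactly the behaviour that the delay bounds O(M(n)) and O(Δ⁴) above rule out.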
Ontology Research and Development, Part 2: A Review of Ontology Mapping and Evolving
2002
Cited by 30 (1 self)
This is the second of a two-part paper to review ontology research and development, in particular, ontology mapping and evolving. Ontology is defined as a formal explicit specification of a shared conceptualization. An ontology itself is not a static model, so it must have the potential to capture changes of meanings and relations. As such, mapping and evolving ontologies is an essential part of ontology learning and development. Ontology mapping is concerned with reusing existing ontologies, expanding and combining them by some means, and enabling a larger pool of information and knowledge in different domains to be integrated to support new communication and use. Ontology evolving, likewise, is concerned with maintaining existing ontologies and extending them as appropriate when new information or knowledge is acquired. It is apparent from the reviews that current research into semi-automatic or automatic ontology development in all three aspects of generation, mapping and evolving has so far achieved limited success.
Leveraging Web-Services and Peer-to-Peer Networks
In Proc. of the 15th Int. Conf. on Advanced Information Systems Engineering (CAiSE 2003), 2002
Cited by 28 (0 self)
Peer-oriented computing is an attempt to weave interconnected machines into the fabric of the Internet. Service-oriented computing (exemplified by web services), on the other hand, is an attempt to provide a loosely coupled paradigm for distributed processing. In this paper we present an event-notification-based architecture and formal framework towards unifying these two computing paradigms to provide essential functions required for automating e-business applications and facilitating service publication, discovery and exchange.
Lattice-based Information Retrieval
Knowledge Organization, 2000
Cited by 21 (2 self)
A lattice-based model for information retrieval was suggested in the 1960s, but has since been seen as a theoretical possibility that is hard to apply in practice. This paper attempts to revive the lattice model and demonstrate its applicability in an information retrieval system, FaIR, that incorporates a graphical representation of a faceted thesaurus. It shows how Boolean queries can be lattice-theoretically related to the concepts of the thesaurus and visualized within the thesaurus display. An advantage of FaIR is that it allows for a high level of transparency of the system, which can be controlled by the user.