Results 1 - 10 of 1,504
Discovering Linkage Points over Web Data
"... A basic step in integration is the identification of linkage points, i.e., finding attributes that are shared (or related) between data sources, and that can be used to match records or entities across sources. This is usually performed using a match operator, that associates attributes of one datab ..."
Cited by 10 (1 self)
for effective and efficient identification of linkage points over Web data. We experimentally evaluate the effectiveness of our proposed algorithms in real-world integration scenarios in several domains.
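As a rough illustration of the linkage-point idea sketched in this abstract, the snippet below scores attribute pairs across two sources by the Jaccard overlap of their value sets and keeps high-scoring pairs as candidate linkage points. The function names and threshold are illustrative; this is a minimal sketch of the general idea, not the paper's algorithm.

    # Toy linkage-point search: score every attribute pair across two sources
    # by the Jaccard overlap of their value sets and keep pairs above a threshold.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def candidate_linkage_points(source1, source2, threshold=0.3):
        """source1/source2: dicts mapping attribute name -> list of observed values."""
        candidates = []
        for attr1, values1 in source1.items():
            for attr2, values2 in source2.items():
                score = jaccard(values1, values2)
                if score >= threshold:
                    candidates.append((attr1, attr2, score))
        # highest-scoring attribute pairs are the most promising linkage points
        return sorted(candidates, key=lambda c: -c[2])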
LabelMe: A Database and Web-Based Tool for Image Annotation
, 2008
"... We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sha ..."
Cited by 679 (46 self)
We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant
ANALYSIS OF WIRELESS SENSOR NETWORKS FOR HABITAT MONITORING
, 2004
"... We provide an in-depth study of applying wireless sensor networks (WSNs) to real-world habitat monitoring. A set of system design requirements were developed that cover the hardware design of the nodes, the sensor network software, protective enclosures, and system architecture to meet the require ..."
Cited by 1490 (19 self)
the requirements of biologists. In the summer of 2002, 43 nodes were deployed on a small island off the coast of Maine streaming useful live data onto the web. Although researchers anticipate some challenges arising in real-world deployments of WSNs, many problems can only be discovered through experience. We
DBpedia -- A Crystallization Point for the Web of Data
, 2009
"... The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier ..."
Cited by 374 (36 self)
that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity
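As a hedged sketch of what dereferencing such an identifier looks like in practice, the snippet below requests an RDF description of a DBpedia entity via HTTP content negotiation and reads its human-readable labels. It assumes the third-party requests and rdflib packages and uses an illustrative entity URI.

    # Dereference a DBpedia entity URI via content negotiation and parse the RDF.
    import requests
    from rdflib import Graph, URIRef
    from rdflib.namespace import RDFS

    uri = "http://dbpedia.org/resource/Berlin"   # illustrative entity
    response = requests.get(uri, headers={"Accept": "text/turtle"}, timeout=30)
    graph = Graph()
    graph.parse(data=response.text, format="turtle")

    # Print the labels attached to the entity, with their language tags.
    for label in graph.objects(URIRef(uri), RDFS.label):
        print(label, label.language)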
A fast and flexible statistical model for large-scale population genotype data: Applications to inferring missing genotypes and haplotype phase
- American Journal of Human Genetics
, 2005
"... We present a statistical model for patterns of genetic variation in samples of unrelated individuals from natural populations. This model is based on the idea that, over short regions, haplotypes in a population tend to cluster into groups of similar haplotypes. To capture the fact that, because of ..."
Cited by 408 (10 self)
We present a statistical model for patterns of genetic variation in samples of unrelated individuals from natural populations. This model is based on the idea that, over short regions, haplotypes in a population tend to cluster into groups of similar haplotypes. To capture the fact that, because
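The clustering idea described here can be written down generically as a cluster-based hidden Markov model; the notation below is an illustrative sketch, not necessarily the paper's own parameterization.

    \[
      P(h \mid \theta, \alpha, r)
        = \sum_{z_1,\dots,z_M} P(z_1 \mid \alpha)
          \prod_{m=2}^{M} P(z_m \mid z_{m-1}, \alpha, r)
          \prod_{m=1}^{M} P(h_m \mid z_m, \theta),
    \]

where h = (h_1, ..., h_M) is a haplotype over M markers, z_m is its hidden cluster membership at marker m (one of K clusters), \alpha gives cluster frequencies, r governs how often the mosaic switches clusters along the chromosome, and \theta specifies per-cluster allele frequencies at each marker.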
Open information extraction from the web
- IN IJCAI
, 2007
"... Traditionally, Information Extraction (IE) has focused on satisfying precise, narrow, pre-specified requests from small homogeneous corpora (e.g., extract the location and time of seminars from a set of announcements). Shifting to a new domain requires the user to name the target relations and to ma ..."
Cited by 373 (39 self)
and to manually create new extraction rules or hand-tag new training examples. This manual labor scales linearly with the number of target relations. This paper introduces Open IE (OIE), a new extraction paradigm where the system makes a single data-driven pass over its corpus and extracts a large set
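To make the "single data-driven pass" concrete, the toy extractor below scans a corpus of sentences once and emits (argument, relation, argument) tuples using a crude surface pattern. It only illustrates the paradigm of extracting tuples without pre-specified relations; it is not the paper's system, and the pattern is a deliberate oversimplification.

    # One pass over the corpus, emitting candidate (arg1, relation, arg2) tuples.
    import re

    PATTERN = re.compile(
        r"([A-Z][\w]+(?: [A-Z][\w]+)*)\s+"   # arg1: capitalized phrase
        r"((?:\w+ ){0,2}?\w+)\s+"            # relation: short connecting phrase
        r"([A-Z][\w]+(?: [A-Z][\w]+)*)"      # arg2: capitalized phrase
    )

    def extract_tuples(corpus):
        """Single pass over an iterable of sentences; yields candidate tuples."""
        for sentence in corpus:
            for arg1, rel, arg2 in PATTERN.findall(sentence):
                yield (arg1, rel, arg2)

    print(list(extract_tuples(["Edison invented the Phonograph in Menlo Park."])))
    # -> [('Edison', 'invented the', 'Phonograph')]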
A framework for clustering evolving data streams. In:
- Proc of VLDB’03,
, 2003
"... Abstract The clustering problem is a difficult problem for the data stream domain. This is because the large volumes of data arriving in a stream renders most traditional algorithms too inefficient. In recent years, a few one-pass clustering algorithms have been developed for the data stream proble ..."
Cited by 359 (36 self)
algorithm requires much greater functionality in discovering and exploring clusters over different portions of the stream. The widely used practice of viewing data stream clustering algorithms as a class of one-pass clustering algorithms is not very useful from an application point of view. For example, a
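A hedged sketch of the online side of such a scheme: maintain compact micro-cluster summaries (count, linear sum, squared sum) that can be updated in one pass and later re-clustered over a chosen portion of the stream. This mirrors the general cluster-feature idea rather than the paper's full method; the radius and capacity parameters are illustrative.

    import math

    class MicroCluster:
        """Compact summary of the points absorbed so far."""
        def __init__(self, point, timestamp):
            self.n = 1
            self.ls = list(point)                # per-dimension linear sum
            self.ss = [x * x for x in point]     # per-dimension squared sum
            self.last_update = timestamp

        def absorb(self, point, timestamp):
            self.n += 1
            for i, x in enumerate(point):
                self.ls[i] += x
                self.ss[i] += x * x
            self.last_update = timestamp

        def centroid(self):
            return [s / self.n for s in self.ls]

    def update(micro_clusters, point, timestamp, radius=1.0, max_clusters=100):
        """Absorb the point into the nearest summary or start a new one."""
        if micro_clusters:
            nearest = min(micro_clusters,
                          key=lambda mc: math.dist(mc.centroid(), point))
            if math.dist(nearest.centroid(), point) <= radius:
                nearest.absorb(point, timestamp)
                return
        if len(micro_clusters) < max_clusters:
            micro_clusters.append(MicroCluster(point, timestamp))
        # a full system would merge or retire stale summaries here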
World-wide web: The information universe
- Electronic Networking: Research, Applications and Policy, 8(2), Westport, Spring
, 1992
"... The World-Wide Web (W 3) initiative is a practical project to bring a global information universe into existence using available technology. This article describes the aims, data model, and protocols needed to implement the “web”, and compares them with various contemporary systems. The Dream Pick u ..."
Cited by 226 (1 self)
The World-Wide Web (W3) initiative is a practical project to bring a global information universe into existence using available technology. This article describes the aims, data model, and protocols needed to implement the “web”, and compares them with various contemporary systems. The Dream Pick
Pixy: A Static Analysis Tool for Detecting Web Application Vulnerabilities (Short Paper)
- IN 2006 IEEE SYMPOSIUM ON SECURITY AND PRIVACY
, 2006
"... The number and the importance of Web applications have increased rapidly over the last years. At the same time, the quantity and impact of security vulnerabilities in such applications have grown as well. Since manual code reviews are time-consuming, error-prone and costly, the need for automated so ..."
Cited by 212 (23 self)
solutions has become evident. In this paper, we address the problem of vulnerable Web applications by means of static source code analysis. More precisely, we use flow-sensitive, interprocedural and context-sensitive data flow analysis to discover vulnerable points in a program. In addition, alias
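A toy sketch of the taint-style data flow analysis that such tools build on: propagate "tainted" marks from user-controlled sources through assignments and flag any flow into a sensitive sink. This is a crude single-pass illustration over a made-up intermediate representation, not Pixy's flow-sensitive, interprocedural analysis; the source, sanitizer, and sink names are assumptions.

    SOURCES = {"get_param"}            # functions returning attacker-controlled data
    SANITIZERS = {"htmlspecialchars"}  # functions assumed to neutralize taint
    SINKS = {"echo", "mysql_query"}    # functions where tainted data is dangerous

    def analyze(statements):
        """statements: list of ('assign', var, func, arg) or ('call', func, arg)."""
        tainted = set()
        findings = []
        for stmt in statements:
            if stmt[0] == "assign":
                _, var, func, arg = stmt
                if func in SOURCES:
                    tainted.add(var)
                elif func in SANITIZERS:
                    tainted.discard(var)
                elif arg in tainted:
                    tainted.add(var)       # plain copy propagates taint
                else:
                    tainted.discard(var)
            elif stmt[0] == "call":
                _, func, arg = stmt
                if func in SINKS and arg in tainted:
                    findings.append((func, arg))
        return findings

    program = [("assign", "name", "get_param", "name"),
               ("call", "echo", "name")]
    print(analyze(program))   # -> [('echo', 'name')]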
Discovering and maintaining links on the web of data
- In The Semantic Web – ISWC 2009: 8th International Semantic Web Conference
, 2009
"... Abstract. The Web of Data is built upon two simple ideas: Employ the RDF data model to publish structured data on the Web and to create explicit data links between entities within different data sources. This pa-per presents the Silk – Linking Framework, a toolkit for discovering and maintaining dat ..."
Cited by 88 (3 self)
Abstract. The Web of Data is built upon two simple ideas: Employ the RDF data model to publish structured data on the Web and to create explicit data links between entities within different data sources. This paper presents the Silk – Linking Framework, a toolkit for discovering and maintaining
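A hand-rolled illustration of the link-discovery idea: compare entities from two data sources with a similarity measure over their labels and emit owl:sameAs candidates above a threshold. Silk itself expresses such comparisons declaratively in a link specification; the sketch below is only a minimal stand-in, and the example URIs and threshold are illustrative.

    from difflib import SequenceMatcher

    def label_similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def discover_links(source, target, threshold=0.9):
        """source/target: dicts mapping entity URI -> label string."""
        links = []
        for s_uri, s_label in source.items():
            for t_uri, t_label in target.items():
                if label_similarity(s_label, t_label) >= threshold:
                    links.append((s_uri, "owl:sameAs", t_uri))
        return links

    dbpedia = {"http://dbpedia.org/resource/Berlin": "Berlin"}
    geonames = {"http://sws.geonames.org/2950159/": "Berlin"}
    print(discover_links(dbpedia, geonames))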