Results 11 - 20 of 47
PAV ontology: provenance, authoring and versioning, 2016
"... (Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. ..."
Abstract
- Add to MetaCart
(Show Context)
(Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters.
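As a quick illustration of the vocabulary this title refers to, here is a minimal sketch of PAV-style provenance and versioning annotations in Python with rdflib; the PAV namespace and property names are the vocabulary's own, while the dataset and agent IRIs are hypothetical placeholders.

```python
# Minimal sketch: annotating a dataset with PAV provenance/versioning terms.
# The ex: resource and agent IRIs are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

PAV = Namespace("http://purl.org/pav/")   # the PAV vocabulary namespace
EX = Namespace("http://example.org/")     # hypothetical resources

g = Graph()
g.bind("pav", PAV)

dataset = EX["dataset/v2"]
g.add((dataset, PAV.authoredBy, EX["people/alice"]))     # who created the content
g.add((dataset, PAV.createdBy, EX["people/bob"]))        # who made this digital resource
g.add((dataset, PAV.createdOn, Literal("2016-03-01T09:00:00", datatype=XSD.dateTime)))
g.add((dataset, PAV.version, Literal("2.0")))
g.add((dataset, PAV.previousVersion, EX["dataset/v1"]))  # version chain

print(g.serialize(format="turtle"))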
Decentralized provenance-aware publishing with nanopublications
"... ABSTRACT Publication and archival of scientific results is still commonly considered the responsability of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there exis ..."
Abstract
- Add to MetaCart
Publication and archival of scientific results is still commonly considered the responsibility of classical publishing companies. Classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. In particular, there currently exist no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. In this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies. Based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an RDF-based format to represent scientific data. We show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the Semantic Web in general. Our evaluation of the current network shows that this system is efficient and reliable.
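The nanopublication structure the abstract describes (an assertion plus its provenance and publication info, wired together as RDF named graphs) can be sketched as below. This is an illustrative toy, not the authors' implementation; the np: schema terms are the standard nanopublication vocabulary, and all ex: IRIs are hypothetical.

```python
# Illustrative sketch of a nanopublication's named-graph structure.
from rdflib import Dataset, Namespace
from rdflib.namespace import RDF

NP = Namespace("http://www.nanopub.org/nschema#")
EX = Namespace("http://example.org/np1#")   # hypothetical IRIs

ds = Dataset()
ds.bind("np", NP)
ds.bind("ex", EX)

head = ds.graph(EX["head"])
assertion = ds.graph(EX["assertion"])
provenance = ds.graph(EX["provenance"])
pubinfo = ds.graph(EX["pubinfo"])

# The head graph wires the three content graphs together.
head.add((EX["np"], RDF.type, NP.Nanopublication))
head.add((EX["np"], NP.hasAssertion, EX["assertion"]))
head.add((EX["np"], NP.hasProvenance, EX["provenance"]))
head.add((EX["np"], NP.hasPublicationInfo, EX["pubinfo"]))

# The assertion: a single scientific statement as RDF (toy example).
assertion.add((EX["geneA"], EX["upregulates"], EX["geneB"]))

print(ds.serialize(format="trig"))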
Informatic system for a global tissue-fluid biorepository with a graph theory-oriented graphical user interface, 2014
"... The Richard Floor Biorepository supports collaborative studies of extracellular vesicles (EVs) found in human fluids and tissue specimens. The current emphasis is on biomarkers for central nervous system neoplasms but its structure may serve as a template for collaborative EV translational studies i ..."
Abstract
- Add to MetaCart
The Richard Floor Biorepository supports collaborative studies of extracellular vesicles (EVs) found in human fluids and tissue specimens. The current emphasis is on biomarkers for central nervous system neoplasms, but its structure may serve as a template for collaborative EV translational studies in other fields. The informatic system provides specimen inventory tracking, with bar codes assigned to specimens, containers, and projects; is hosted on globalized cloud computing resources; and embeds a suite of shared documents, calendars, and video-conferencing features. Clinical data are recorded in relation to molecular EV attributes and may be tagged with terms drawn from a network of externally maintained ontologies, thus allowing expansion of the system as the field matures. We fashioned the graphical user interface (GUI) around a web-based data visualization package. This system is now in an early stage of deployment, mainly focused on specimen tracking and clinical, laboratory, and imaging data capture in support of studies to optimize detection and analysis of brain tumour-specific mutations. It currently includes 4,392 specimens drawn from 611 subjects, the majority with brain tumours. As EV science evolves, we plan biorepository changes which may reflect multi-institutional collaborations, proteomic interfaces, additional biofluids, changes in operating procedures and kits for specimen handling, novel procedures for detection of tumour-specific EVs and for RNA extraction, and changes in the taxonomy of EVs. We have used an ontology-driven data model and web-[…]
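A hedged sketch of the kind of record such a specimen-tracking system might maintain: barcoded specimens in barcoded containers, tagged with terms from externally maintained ontologies. All class and field names here are illustrative assumptions, not the biorepository's actual schema.

```python
# Hypothetical specimen-tracking data model (illustrative only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class OntologyTag:
    term_iri: str   # IRI of a term in an externally maintained ontology
    label: str

@dataclass
class Container:
    barcode: str
    location: str

@dataclass
class Specimen:
    barcode: str
    subject_id: str
    fluid_type: str                  # e.g. "CSF", "plasma"
    container: Container
    tags: List[OntologyTag] = field(default_factory=list)

# Example usage with invented identifiers:
box = Container(barcode="CT-0001", location="Freezer 3, Shelf B")
s = Specimen(barcode="SP-4392", subject_id="SUBJ-0611",
             fluid_type="plasma", container=box)
s.tags.append(OntologyTag("http://purl.obolibrary.org/obo/DOID_1319",
                          "brain cancer"))   # disease-ontology tag, for illustration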
"... Abstract. Evidence-based policy is policy informed by rigorously established objective evidence. An important aspect of evidence-based policy is the use of scientifically rigorous studies to identify programs and practices capable of improving policy relevant outcomes. Statistics represent a crucial ..."
Abstract
- Add to MetaCart
(Show Context)
Evidence-based policy is policy informed by rigorously established objective evidence. An important aspect of evidence-based policy is the use of scientifically rigorous studies to identify programs and practices capable of improving policy-relevant outcomes. Statistics represent a crucial means to determine whether progress is made towards policy targets. In May 2010, the European Commission adopted the Digital Agenda for Europe, a strategy to take advantage of the potential offered by the rapid progress of digital technologies. The Digital Agenda contains commitments to undertake a number of specific policy actions intended to stimulate a cycle of investment in, and usage of, digital technologies. It identifies 13 key performance targets. In order to chart the progress of both the announced policy actions and the key performance targets, a scoreboard is published, thus allowing the monitoring and benchmarking of the main developments of the information society in European countries. In addition to these human-readable browsing, visualization, and exploration methods, machine-readable access facilitating re-use and interlinking of the underlying data is provided by means of RDF and Linked Open Data. We sketch the transformation process from raw data up to rich, interlinked RDF, describe its publication, and report the lessons learned.
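The abstract does not name the vocabulary used in the raw-data-to-RDF transformation; the sketch below assumes the RDF Data Cube vocabulary (qb:), the usual choice for publishing statistics as Linked Data. The indicator names and IRIs are invented for illustration, not the Scoreboard's own.

```python
# Hedged sketch: one raw scoreboard record mapped to an RDF Data Cube observation.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/scoreboard/")   # hypothetical namespace

g = Graph()
g.bind("qb", QB)

row = {"indicator": "broadband_coverage", "country": "DE",
       "year": 2012, "value": 0.92}   # one raw indicator record

obs = EX[f"obs/{row['indicator']}/{row['country']}/{row['year']}"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, EX.indicator, EX[row["indicator"]]))
g.add((obs, EX.refArea, EX[f"country/{row['country']}"]))
g.add((obs, EX.refPeriod, Literal(row["year"], datatype=XSD.gYear)))
g.add((obs, EX.value, Literal(row["value"], datatype=XSD.decimal)))

print(g.serialize(format="turtle"))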
"... Abstract. One of the current issues in the bioinformatics domain is to identify genomic variations underlying the complex diseases. There are millions of genetic variations as well as environmental factors that may cause human diseases. Semantic web interlinks diverse data that may reveal many hidde ..."
Abstract
- Add to MetaCart
One of the current issues in the bioinformatics domain is to identify the genomic variations underlying complex diseases. There are millions of genetic variations, as well as environmental factors, that may cause human diseases. The Semantic Web interlinks diverse data that may reveal many hidden relations and can be utilized for personalized medicine. This requires discovering relationships between phenotypes and genotypes, to answer how the genotype of an individual affects his or her health. Additionally, through identification of genomic variations based on an individual's genotype, we can predict the response to a selected drug therapy and accordingly suggest treatment or drug regimens. A personalized medicine knowledgebase built with the presented approach can interlink genotypic variations and their possible somatic changes that affect drug targets, in order to select the best treatment and drug regimens for individuals; it may also help to identify the factors that best explain the association between genotype and phenotype. We used SPARQL queries to weight the factors that link genotype and phenotype via indirect relationships, and the paths of those relationships.
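The abstract gives no concrete queries, so the following is only a plausible sketch of weighting an indirect genotype-phenotype link by counting the two-hop paths between them. The schema (ex:affects, ex:associatedWith) and the data are invented for the example.

```python
# Toy sketch: score indirect genotype->phenotype links by path counting.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/pm/")   # hypothetical schema and data
g = Graph()
g.bind("ex", EX)

# Toy data: variant -> gene -> phenotype.
g.add((EX.rs123, EX.affects, EX.geneA))
g.add((EX.geneA, EX.associatedWith, EX.glioma))
g.add((EX.rs123, EX.affects, EX.geneB))
g.add((EX.geneB, EX.associatedWith, EX.glioma))

q = """
PREFIX ex: <http://example.org/pm/>
SELECT ?variant ?phenotype (COUNT(?mid) AS ?paths)
WHERE {
  ?variant ex:affects ?mid .
  ?mid ex:associatedWith ?phenotype .
}
GROUP BY ?variant ?phenotype
ORDER BY DESC(?paths)
"""
for variant, phenotype, paths in g.query(q):
    print(variant, phenotype, paths)   # more distinct paths ~ stronger indirect link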
eNanoMapper: Opportunities and challenges in using ontologies to enable data integration for nanomaterial risk assessment
"... Engineered nanomaterials (ENMs) are being developed to meet specific application needs in diverse domains across the engineering and biomedical sciences (e.g. drug delivery). However, accompanying the exciting proliferation of novel nanomaterials is a challenging race to understand and predict their ..."
Abstract
- Add to MetaCart
(Show Context)
Engineered nanomaterials (ENMs) are being developed to meet specific application needs in diverse domains across the engineering and biomedical sciences (e.g. drug delivery). However, accompanying the exciting proliferation of novel nanomaterials is a challenging race to understand and predict their possibly detrimental effects on human health and the environment. The eNanoMapper project (www.enanomapper.net) is creating a pan-European computational infrastructure for toxicological data management for ENMs, based on semantic web standards and ontologies. Here, we describe our strategy to adopt and extend ontologies in support of data integration for eNanoMapper. ENM safety is at the boundary between engineering and the life sciences, and at the boundary between molecular granularity and bulk granularity. This creates challenges for the definition of key entities in the domain, which we will also discuss.
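As a hedged illustration of ontology-backed data integration in this spirit: a toxicology record whose material type and assay endpoint point at shared term IRIs, so that records from different sources line up. The namespaces and terms below are placeholders, not actual eNanoMapper ontology terms.

```python
# Illustrative only: ontology-typed nanomaterial assay record.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/enm/")        # hypothetical data namespace
ONT = Namespace("http://example.org/enm-ont/")   # stand-in for a real ontology

g = Graph()
result = EX["assay/0042"]
material = EX["material/TiO2-batch-7"]

g.add((result, RDF.type, ONT.CytotoxicityAssay))       # endpoint typed by ontology class
g.add((result, EX.material, material))
g.add((material, RDF.type, ONT.MetalOxideNanoparticle))
g.add((result, EX.diameterNm, Literal(21.0, datatype=XSD.decimal)))
g.add((result, EX.ec50MicroMolar, Literal(3.4, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))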
Machines first, humans second: on the importance of algorithmic interpretation of open chemistry data, 2015
"... im tha so no in try process involved scouring the peer reviewed literature, a handful. The inherently low scalability of scientists ’ time Clark et al. Journal of Cheminformatics (2015) 7:9 DOI 10.1186/s13321-015-0057-7hard-won experimental result would have its chance toQC, Canada Full list of auth ..."
Abstract
- Add to MetaCart
(Show Context)
[…] The process involved scouring the peer-reviewed literature, either online through paywalls or physically within the walls of a library, and in some cases perusing privately collected data on the subject [8]. The reuse of such data may require data licensing, and we have suggested some rules that could be helpful [9]. Despite the major shift that is trending right now, there is an important caveat: many of the hosts of online […] The inherently low scalability of scientists' time is in stark contrast with the ever-increasing ability of software algorithms to assimilate vast quantities of data and deliver meaningful insights that could not have been observed by more traditional means. The ability of a well-designed informatics platform to productively use as much data as can be made available means that in principle every publicly available scientific data point that is relevant to a machine learning algorithm's domain should be injected into the training set. Were this ideal state of affairs to be achieved, it would mean that every hard-won experimental result would have its chance to […]
Integrated Bio-Search: challenges and trends for the integration, search and comprehensive processing of biological information
Docosahexaenoic acid attenuates the early inflammatory response following spinal cord injury in mice: in-vivo and in-vitro studies