Results 1 - 10 of 974,360
FAST VOLUME RENDERING USING A SHEAR-WARP FACTORIZATION OF THE VIEWING TRANSFORMATION
1995
"... Volume rendering is a technique for visualizing 3D arrays of sampled data. It has applications in areas such as medical imaging and scientific visualization, but its use has been limited by its high computational expense. Early implementations of volume rendering used brute-force techniques that req ..."
Abstract
-
Cited by 541 (2 self)
- Add to MetaCart
Volume rendering is a technique for visualizing 3D arrays of sampled data. It has applications in areas such as medical imaging and scientific visualization, but its use has been limited by its high computational expense. Early implementations of volume rendering used brute-force techniques that require on the order of 100 seconds to render typical data sets on a workstation. Algorithms with optimizations that exploit coherence in the data have reduced rendering times to the range of ten seconds but are still not fast enough for interactive visualization applications. In this thesis we present a family of volume rendering algorithms that reduces rendering times to one second. First we present a scanline-order volume rendering algorithm that exploits coherence in both the volume data and the image. We show that scanline-order algorithms are fundamentally more efficient than commonly-used ray casting algorithms because the latter must perform analytic geometry calculations (e.g. intersecting rays with axis-aligned boxes). The new scanline-order algorithm simply streams through the volume and the image in storage order. We describe variants of the algorithm for both parallel and perspective projections and
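As a rough illustration of the scanline-order, shear-warp idea, the sketch below (my own simplification, not the thesis's optimized, run-length-encoded implementation) shears each slice of a toy volume so that viewing rays become axis-aligned, composites the slices front-to-back in storage order, and omits the final 2D warp; the shear amounts, opacity model, and nearest-neighbor resampling are all assumptions made for illustration.

```python
# Simplified shear-warp sketch: shear slices, composite front-to-back, skip the warp.
import numpy as np

def shear_composite(volume, shear_x=0.3, shear_y=0.1, opacity=0.05):
    """volume: (Z, Y, X) array of scalar samples in [0, 1]."""
    nz, ny, nx = volume.shape
    pad = int(np.ceil(max(abs(shear_x), abs(shear_y)) * nz)) + 1
    acc_color = np.zeros((ny + 2 * pad, nx + 2 * pad))
    acc_alpha = np.zeros_like(acc_color)
    for k in range(nz):                        # stream through slices in storage order
        dy = pad + int(round(shear_y * k))     # integer shear of this slice
        dx = pad + int(round(shear_x * k))
        sl = volume[k]
        a = opacity * sl                       # per-sample opacity from the scalar value
        region = (slice(dy, dy + ny), slice(dx, dx + nx))
        trans = 1.0 - acc_alpha[region]        # remaining transparency at these pixels
        acc_color[region] += trans * a * sl    # front-to-back "over" compositing
        acc_alpha[region] += trans * a
    return acc_color                           # intermediate (sheared) image

print(shear_composite(np.random.rand(16, 32, 32)).shape)
```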
PROMPT: Algorithm and Tool for Automated Ontology Merging and Alignment
2000
"... Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the WorldWide Web where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led t ..."
Abstract
-
Cited by 495 (12 self)
- Add to MetaCart
Researchers in the ontology-design field have developed the content for ontologies in many domain areas. Recently, ontologies have become increasingly common on the World Wide Web, where they provide semantics for annotations in Web pages. This distributed nature of ontology development has led to a large number of ontologies covering overlapping domains. In order for these ontologies to be reused, they first need to be merged or aligned to one another. The processes of ontology alignment and merging are usually handled manually and often constitute a large and tedious portion of the sharing process. We have developed and implemented PROMPT, an algorithm that provides a semi-automatic approach to ontology merging and alignment. PROMPT performs some tasks automatically and guides the user in performing other tasks for which his intervention is required.
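To convey the semi-automatic flavor described above, here is a minimal sketch (an illustration only, not the PROMPT algorithm, which also exploits the structure of the ontologies and the user's earlier decisions) that proposes merge candidates between two hypothetical class lists by lexical similarity; in a PROMPT-style tool the user would then accept or reject each suggestion.

```python
# Propose merge candidates between two hypothetical ontologies by name similarity.
from difflib import SequenceMatcher

ontology_a = ["Person", "Publication", "Conference-Paper"]
ontology_b = ["Human", "Publication", "ConferencePaper"]

def similarity(a, b):
    """Crude lexical similarity between two class names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Rank candidate pairs; accepted merges would normally trigger follow-up suggestions.
candidates = sorted(
    ((similarity(a, b), a, b) for a in ontology_a for b in ontology_b),
    reverse=True,
)
for score, a, b in candidates:
    if score >= 0.6:
        print(f"suggest merging '{a}' and '{b}' (lexical similarity {score:.2f})")
```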
The SWISS-MODEL Workspace: A web-based environment for protein structure homology modelling
- BIOINFORMATICS, 2005
"... Motivation: Homology models of proteins are of great interest for planning and analyzing biological experiments when no experimental three-dimensional structures are available. Building homology models requires specialized programs and up-to-date sequence and structural databases. Integrating all re ..."
Abstract
-
Cited by 555 (5 self)
- Add to MetaCart
databases necessary for modelling are accessible from the workspace and are updated in regular intervals. Tools for template selection, model building, and structure quality evaluation can be invoked from within the workspace. Workflow and usage of the workspace are illustrated by modelling human Cyclin A1
Modeling Strategic Relationships for Process Reengineering
1995
"... Existing models for describing a process (such as a business process or a software development process) tend to focus on the \what " or the \how " of the process. For example, a health insurance claim process would typically be described in terms of a number of steps for assessing and appr ..."
Abstract
-
Cited by 545 (40 self)
- Add to MetaCart
Existing models for describing a process (such as a business process or a software development process) tend to focus on the \what " or the \how " of the process. For example, a health insurance claim process would typically be described in terms of a number of steps for assessing
Assessing coping strategies: A theoretically based approach
- Journal of Personality and Social Psychology, 1989
"... We developed a multidimensional coping inventory to assess the different ways in which people respond to stress. Five scales (of four items each) measure conceptually distinct aspects of problem-focused coping (active coping, planning, suppression of competing activities, restraint coping, seek-ing ..."
Abstract
-
Cited by 610 (5 self)
- Add to MetaCart
We developed a multidimensional coping inventory to assess the different ways in which people respond to stress. Five scales (of four items each) measure conceptually distinct aspects of problem-focused coping (active coping, planning, suppression of competing activities, restraint coping, seek-ing
A Compositional Approach to Performance Modelling
1996
"... Performance modelling is concerned with the capture and analysis of the dynamic behaviour of computer and communication systems. The size and complexity of many modern systems result in large, complex models. A compositional approach decomposes the system into subsystems that are smaller and more ea ..."
Abstract
-
Cited by 746 (102 self)
- Add to MetaCart
Performance modelling is concerned with the capture and analysis of the dynamic behaviour of computer and communication systems. The size and complexity of many modern systems result in large, complex models. A compositional approach decomposes the system into subsystems that are smaller and more
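As one concrete, if generic, illustration of compositionality (my own example using a Kronecker-sum construction for independent components, not the thesis's PEPA process algebra), two small hypothetical subsystem Markov chains compose into a joint model whose generator, and hence steady state, is built from the parts rather than specified monolithically.

```python
# Compose two independent CTMC subsystems via the Kronecker sum of their generators.
import numpy as np

def kronecker_sum(q1, q2):
    """Generator of two independent CTMCs evolving in parallel."""
    i1, i2 = np.eye(q1.shape[0]), np.eye(q2.shape[0])
    return np.kron(q1, i2) + np.kron(i1, q2)

def steady_state(q):
    """Solve pi Q = 0 with sum(pi) = 1 (least squares on the augmented system)."""
    n = q.shape[0]
    a = np.vstack([q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(a, b, rcond=None)[0]

server = np.array([[-2.0, 2.0], [3.0, -3.0]])   # hypothetical idle/busy subsystem
link = np.array([[-1.0, 1.0], [4.0, -4.0]])     # hypothetical up/down subsystem

pi = steady_state(kronecker_sum(server, link))
print(pi.reshape(2, 2))  # joint distribution; rows index server state, columns link state
```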
Learning probabilistic relational models
- In IJCAI, 1999
"... A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat " data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much ..."
Abstract
-
Cited by 619 (31 self)
- Add to MetaCart
of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related
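A toy sketch of the underlying idea (not the paper's learning algorithm): an attribute's conditional distribution is estimated from values reached through a relational link, rather than from a flattened table. The schema, data, and choice of parent attribute below are hypothetical.

```python
# Estimate P(grade | difficulty of the linked course) from relational data by counting,
# the way a PRM's CPD for Registration.grade could take Course.difficulty as a parent.
from collections import defaultdict

courses = {"c1": {"difficulty": "high"}, "c2": {"difficulty": "low"}}
registrations = [
    {"student": "s1", "course": "c1", "grade": "B"},
    {"student": "s1", "course": "c2", "grade": "A"},
    {"student": "s2", "course": "c1", "grade": "C"},
    {"student": "s2", "course": "c2", "grade": "B"},
]

counts = defaultdict(lambda: defaultdict(int))
for reg in registrations:
    parent = courses[reg["course"]]["difficulty"]   # follow the relational link
    counts[parent][reg["grade"]] += 1

cpd = {
    parent: {g: n / sum(dist.values()) for g, n in dist.items()}
    for parent, dist in counts.items()
}
print(cpd)  # e.g. {'high': {'B': 0.5, 'C': 0.5}, 'low': {'A': 0.5, 'B': 0.5}}
```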
Base-calling of automated sequencer traces using phred. I. Accuracy Assessment
- GENOME RES, 1998
"... The availability of massive amounts of DNA sequence information has begun to revolutionize the practice of biology. As a result, current large-scale sequencing output, while impressive, is not adequate to keep pace with growing demand and, in particular, is far short of what will be required to obta ..."
Abstract
-
Cited by 1602 (4 self)
- Add to MetaCart
The availability of massive amounts of DNA sequence information has begun to revolutionize the practice of biology. As a result, current large-scale sequencing output, while impressive, is not adequate to keep pace with growing demand and, in particular, is far short of what will be required to obtain the 3-billion-base human genome sequence by the target date of 2005. To reach this goal, improved automation will be essential, and it is particularly important that human involvement in sequence data processing be significantly reduced or eliminated. Progress in this respect will require both improved accuracy of the data processing software and reliable accuracy measures to reduce the need for human involvement in error correction and make human review more efficient. Here, we describe one step toward that goal: a base-calling program for automated sequencer traces, phred, with improved accuracy. phred appears to be the first base-calling program to achieve a lower error rate than the ABI software, averaging 40%–50 % fewer errors in the data sets examined independent of position in read, machine running conditions, or sequencing chemistry.
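For intuition about the kind of comparison reported above, the toy sketch below computes per-base error rates for two hypothetical sets of base calls against a reference and the relative reduction; the sequences are made up purely for illustration, whereas the paper evaluates reads aligned against finished sequence.

```python
# Toy per-base error-rate comparison between two base callers (hypothetical data).
def error_rate(calls, reference):
    """Fraction of positions where the called base differs from the reference."""
    assert len(calls) == len(reference)
    return sum(c != r for c, r in zip(calls, reference)) / len(reference)

reference  = "ACGTACGTACGTACGTACGT"
caller_a   = "ACGAACGAACGTTCGTACGA"   # hypothetical calls with 4 errors
caller_b   = "ACGTACGAACGTTCGTACGT"   # hypothetical calls with 2 errors

e_a, e_b = error_rate(caller_a, reference), error_rate(caller_b, reference)
print(f"error rates: {e_a:.3f} vs {e_b:.3f}; "
      f"reduction: {100 * (1 - e_b / e_a):.0f}%")
```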
Maximum entropy markov models for information extraction and segmentation
2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Abstract
-
Cited by 554 (18 self)
- Add to MetaCart
Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled
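The sketch below gives a minimal maximum-entropy-Markov-model flavor (the states and hand-set feature weights are hypothetical; the paper learns the per-state exponential models from training data): each previous state and observation feed a softmax over next states, and Viterbi recovers the best label sequence.

```python
# Minimal MEMM-style decoder: per-state softmax P(s' | s, o) plus Viterbi search.
import math
from collections import defaultdict

STATES = ["B", "I", "O"]  # hypothetical segment labels

# Hypothetical feature weights: (prev_state, observation_feature, next_state) -> weight
WEIGHTS = defaultdict(float, {
    ("O", "capitalized", "B"): 2.0,
    ("B", "capitalized", "I"): 1.5,
    ("I", "lowercase", "O"): 1.0,
})

def p_next(prev_state, obs):
    """Softmax over next states given the previous state and one observation."""
    feat = "capitalized" if obs[0].isupper() else "lowercase"
    scores = {s: WEIGHTS[(prev_state, feat, s)] for s in STATES}
    z = sum(math.exp(v) for v in scores.values())
    return {s: math.exp(v) / z for s, v in scores.items()}

def viterbi(observations, start_state="O"):
    """Most likely state sequence under the per-state conditional models."""
    best = {start_state: (0.0, [])}          # state -> (log prob, best path so far)
    for obs in observations:
        nxt = {}
        for prev, (lp, path) in best.items():
            for s, p in p_next(prev, obs).items():
                cand = (lp + math.log(p), path + [s])
                if s not in nxt or cand[0] > nxt[s][0]:
                    nxt[s] = cand
        best = nxt
    return max(best.values(), key=lambda t: t[0])[1]

print(viterbi(["John", "Smith", "works", "here"]))
```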
Modeling and simulation of genetic regulatory systems: A literature review
- JOURNAL OF COMPUTATIONAL BIOLOGY, 2002
"... In order to understand the functioning of organisms on the molecular level, we need to know which genes are expressed, when and where in the organism, and to which extent. The regulation of gene expression is achieved through genetic regulatory systems structured by networks of interactions between ..."
Abstract
-
Cited by 729 (15 self)
- Add to MetaCart
DNA, RNA, proteins, and small molecules. As most genetic regulatory networks of interest involve many components connected through interlocking positive and negative feedback loops, an intuitive understanding of their dynamics is hard to obtain. As a consequence, formal methods and computer tools
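As a small example of one formalism such reviews cover, the sketch below integrates hypothetical nonlinear ODEs for a two-gene loop with one activating and one repressing interaction; the parameter values and Hill-type rate laws are assumptions chosen purely for illustration.

```python
# Toy two-gene feedback loop modeled with nonlinear ODEs and simulated numerically.
import numpy as np
from scipy.integrate import solve_ivp

def hill_repression(x, k=1.0, n=2):
    """Hill-type repression term: a high repressor concentration lowers the rate."""
    return k**n / (k**n + x**n)

def network(t, y, a1=2.0, a2=2.0, d1=1.0, d2=1.0):
    """Gene 1 activates gene 2; gene 2 represses gene 1 (negative feedback)."""
    x1, x2 = y
    dx1 = a1 * hill_repression(x2) - d1 * x1   # production of x1 repressed by x2
    dx2 = a2 * x1 / (1.0 + x1) - d2 * x2       # production of x2 activated by x1
    return [dx1, dx2]

sol = solve_ivp(network, (0.0, 20.0), [0.1, 0.1], dense_output=True)
print("final concentrations:", sol.y[:, -1])
```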