Results 1–10 of 334
Processing of gene expression data generated by quantitative real-time RT-PCR
 Biotechniques 32(6):1372–1374, 1376, 1378–1379
, 2002
Cited by 92 (0 self)
Real-time quantitative PCR represents a highly sensitive and powerful technique for the quantitation of nucleic acids. It has a tremendous potential for the high-throughput analysis of gene expression in research and routine diagnostics. However, the major hurdle is not the practical performance of the experiments themselves but rather the efficient evaluation and the mathematical and statistical analysis of the enormous amount of data gained by this technology, as these functions are not included in the software provided by the manufacturers of the detection systems. In this work, we focus on the mathematical evaluation and analysis of the data generated by real-time quantitative PCR, the calculation of the final results, the propagation of experimental variation of the measured values to the final results, and the statistical analysis. We developed a Microsoft® Excel®-based software application coded in Visual Basic for Applications, called Q-Gene, which addresses these points. Q-Gene manages and expedites the planning, performance, and evaluation of real-time quantitative PCR experiments, as well as the mathematical and statistical analysis, storage, and graphical presentation of the data. The Q-Gene software application is a tool to cope with complex real-time quantitative PCR experiments at a high-throughput scale and considerably expedites and rationalizes the experimental setup, data analysis, and data management while ensuring highest reproducibility.
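The core calculation in this kind of qPCR analysis can be sketched in Python using a standard efficiency-corrected normalization formula (this is a generic form, not necessarily the exact equation implemented in Q-Gene; the numbers below are illustrative):

```python
def normalized_expression(e_target, ct_target, e_ref, ct_ref):
    """Relative expression of a target gene normalized to a reference gene.

    Uses the efficiency-corrected ratio E_ref^Ct_ref / E_target^Ct_target,
    a standard formulation for real-time qPCR data; E is the amplification
    efficiency (2.0 means perfect doubling per cycle), Ct the threshold cycle.
    """
    return e_ref ** ct_ref / e_target ** ct_target

# With perfect efficiency (E = 2) for both genes the ratio reduces to
# 2^(Ct_ref - Ct_target), i.e. the familiar 2^-dCt form:
ratio = normalized_expression(2.0, 22.0, 2.0, 20.0)   # 2^20 / 2^22 = 0.25
```

A lower Ct for the target than for the reference means more starting template, so the ratio rises as ct_target falls.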
Continuity analysis of programs
 SIGPLAN Notices
Cited by 41 (13 self)
We present an analysis to automatically determine if a program represents a continuous function, or equivalently, if infinitesimal changes to its inputs can only cause infinitesimal changes to its outputs. The analysis can be used to verify the robustness of programs whose inputs can have small amounts of error and uncertainty, e.g., embedded controllers processing slightly unreliable sensor data, or handheld devices using slightly stale satellite data. Continuity is a fundamental notion in mathematics. However, it is difficult to apply continuity proofs from real analysis to functions that are coded as imperative programs, especially when they use diverse data types and features such as assignments, branches, and loops. We associate data types with metric spaces as opposed to just sets of values, and continuity of typed programs is phrased in terms of these spaces. Our analysis reduces questions about continuity ...
GADT: A Probability Space ADT for Representing and Querying the Physical World
 In ICDE
, 2002
Cited by 37 (1 self)
Large sensor networks are being widely deployed for measurement, detection, and monitoring applications. Many of these applications involve database systems to store and process data from the physical world. This data has inherent measurement uncertainties that are properly represented by continuous probability distribution functions (pdfs). We introduce a new object-relational data type, the Gaussian ADT (GADT), that models physical data as Gaussian pdfs, and we show that existing index structures can be used as fast access methods for GADT data. We also present a measure-theoretic model of probabilistic data and evaluate GADT in its light.
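The idea of storing a measurement as a Gaussian pdf and querying it probabilistically can be sketched as follows; the class and method names here are illustrative, not the paper's actual GADT interface:

```python
import math

class GaussianADT:
    """A physical-world value stored as a Gaussian pdf (mean, std),
    loosely modeled on the Gaussian ADT idea described above."""

    def __init__(self, mean, std):
        self.mean, self.std = mean, std

    def _cdf(self, v):
        # Gaussian CDF via the error function.
        return 0.5 * (1.0 + math.erf((v - self.mean) / (self.std * math.sqrt(2.0))))

    def prob_in(self, lo, hi):
        """P(lo <= X <= hi): the probability mass a range query would match."""
        return self._cdf(hi) - self._cdf(lo)

# A temperature reading of 20.0 with sensor noise sigma = 0.5:
t = GaussianADT(20.0, 0.5)
p = t.prob_in(19.0, 21.0)   # about 0.954 (a two-sigma interval)
```

A probabilistic range query ("readings between 19 and 21") then returns tuples weighted by this mass instead of a hard yes/no, which is the point of representing the uncertainty explicitly.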
Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical CT images
 IEEE Trans. Med. Imaging
Cited by 31 (5 self)
Quantifying uncertainty in tracer‐based hydrograph separations
 Water Resour. Res
, 1998
Cited by 30 (0 self)
A method is presented for quantifying the uncertainty in two- and three-component tracer-based hydrograph separations. The method relates the uncertainty in computed mixing fractions to both the tracer concentrations used to perform the hydrograph separation and the uncertainties in those concentrations. A two-component example and a three-component example illustrate the application of the method. The three-component example yields uncertainty results very similar to those from a previously published Monte Carlo analysis and requires less computation.
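For the two-component case, relating mixing-fraction uncertainty to tracer-concentration uncertainty reduces to first-order Gaussian error propagation through f1 = (C - C2)/(C1 - C2); the sketch below uses illustrative concentrations and uncertainties, not values from the paper:

```python
import math

def mixing_fraction(c_stream, c1, c2):
    """Fraction of component 1 in a two-component separation:
    f1 = (C - C2) / (C1 - C2), from the mass-balance equations."""
    return (c_stream - c2) / (c1 - c2)

def fraction_uncertainty(c, c1, c2, w_c, w_c1, w_c2):
    """First-order propagation of the tracer-concentration uncertainties
    (w_c, w_c1, w_c2) into f1: square-root of the sum of squared
    partial-derivative terms. A standard Gaussian propagation sketch."""
    d = c1 - c2
    term_c1 = ((c2 - c) / d**2) * w_c1   # df1/dC1 * w_C1
    term_c2 = ((c - c1) / d**2) * w_c2   # df1/dC2 * w_C2
    term_c = (1.0 / d) * w_c             # df1/dC  * w_C
    return math.sqrt(term_c1**2 + term_c2**2 + term_c**2)

# Stream tracer at 50, end-members at 100 and 10, each measured +/- 2:
f1 = mixing_fraction(50.0, 100.0, 10.0)                      # 40/90 ~ 0.444
w = fraction_uncertainty(50.0, 100.0, 10.0, 2.0, 2.0, 2.0)   # ~ 0.027
```

Note how the denominator d = C1 - C2 appears in every term: the closer the two end-member concentrations, the larger the propagated uncertainty, which is the practical limit on tracer choice.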
A linear approximation method for the Shapley value
 Artificial Intelligence
Cited by 29 (3 self)
The Shapley value is a key solution concept for coalitional games in general and voting games in particular. Its main advantage is that it provides a unique and fair solution, but its main drawback is the complexity of computing it (e.g., for voting games this complexity is #P-complete). However, given the importance of the Shapley value and voting games, a number of approximation methods have been developed to overcome this complexity. Among these, Owen's multilinear extension method is the most time efficient, being linear in the number of players. Now, in addition to speed, the other key criterion for an approximation algorithm is its approximation error. On this dimension, the multilinear extension method is less impressive. Against this background, this paper presents a new approximation algorithm, based on randomization, for computing the Shapley value of voting games. This method has time complexity linear in the number of players, but has an approximation error that is, on average, lower than Owen's. In addition to this comparative study, we empirically evaluate the error for our method and show how the different parameters of the voting game affect it. Specifically, we show the following effects. First, as the number of players in a voting game increases, the average percentage error decreases. Second, as the quota increases, the average percentage error decreases. Third, the error is different for players with different weights; players with weight closer to the mean weight have a lower error than those with weight further away. We then extend our approximation to the more general k-majority voting games and show that, for n players, the method has time complexity O(k²n) and the upper bound on its approximation error is O(k²/√n).
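For intuition about what these approximation methods avoid, the exact Shapley value of a small weighted voting game can be computed by enumerating all n! player orderings and crediting the pivotal player; this brute force is a generic sketch (not Owen's method or the paper's algorithm) and is exactly what becomes infeasible as n grows:

```python
from itertools import permutations
from math import factorial

def shapley_exact(weights, quota):
    """Exact Shapley values of a weighted voting game: for every ordering,
    the player whose weight first pushes the running total to the quota is
    pivotal; the Shapley value is each player's fraction of pivots."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for p in order:
            total += weights[p]
            if total >= quota:   # p turned the coalition from losing to winning
                pivots[p] += 1
                break
    return [c / factorial(n) for c in pivots]

# Toy game: one heavy player (weight 2) and two light ones, quota 3.
phi = shapley_exact([2, 1, 1], quota=3)   # [2/3, 1/6, 1/6]
```

The heavy player is pivotal in 4 of the 6 orderings, hence its value 2/3; the cost is n! orderings, which motivates the linear-time approximations discussed in the abstract.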
Outline of a Theory of Strongly Semantic Information
 Floridi, L. 1999, Philosophy and Computing – An Introduction (London)
, 2003
Cited by 26 (2 self)
This paper outlines a quantitative theory of strongly semantic information (TSSI) based on truth-values rather than probability distributions. The main hypothesis supported in the paper is that (i) the classic quantitative theory of weakly semantic information (TWSI) is based on probability distributions because (ii) it assumes that truth-values supervene on information, yet (iii) this principle is too weak and generates a well-known semantic paradox, whereas (iv) TSSI, according to which information encapsulates truth, can avoid the paradox and is more in line with the standard conception of what counts as information. After a brief introduction, section two outlines the semantic paradox entailed by TWSI, analysing it in terms of an initial conflict between two requisites of a quantitative theory of semantic information. In section three, three criteria of information equivalence are used to provide a taxonomy of quantitative approaches to semantic information and introduce TSSI. In section four, some further desiderata that should be fulfilled by a quantitative TSSI are explained. From section five to section seven, TSSI is developed on the basis of a calculus of truth-values and semantic discrepancy with respect to a given situation. In section eight, it is shown how TSSI succeeds in solving the paradox. Section nine summarises the main results of the paper and indicates some future developments.
Minimal Subset Evaluation: Rapid Warmup for Simulated Hardware State
 In Proceedings of the 2001 International Conference on Computer Design
, 2001
Cited by 22 (1 self)
This paper introduces minimal subset evaluation (MSE) as a way to reduce time spent on large-structure warm-up during the fast-forwarding portion of processor simulations. Warm-up is commonly used prior to full-detail simulation to avoid cold-start bias in large structures like caches and branch predictors. Unfortunately, warm-up can be very time consuming, often representing 50% or more of total simulation time. Previous techniques have used the entire fast-forward interval to obtain accurate warm-up, which may be prohibitive for large parameter-space searches, or chosen a short but ad hoc warm-up length that reduces simulation time but may sacrifice accuracy. MSE probabilistically determines a minimally sufficient fraction of the set of fast-forward transactions that must be executed for warm-up to accurately produce state as it would have appeared had the entire fast-forward interval been used for warm-up. The paper describes the mathematical underpinnings of MSE and demonstrates its effectiveness for both single-large-sample and multiple-sample simulation styles. In our experiments, MSE yields errors of less than 1% in IPC measurements with cycle-accurate simulation, while reducing simulation times by an average factor of two or more.
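The cold-start bias that warm-up removes can be demonstrated with a toy LRU cache simulation; this illustrates the problem MSE addresses, not the MSE technique itself, and the stream below is a deliberately simple synthetic example:

```python
from collections import OrderedDict

def miss_rate(addresses, cache_size, warmup):
    """Simulate an LRU cache over an address stream and measure the miss
    rate only over the accesses after the first `warmup` accesses."""
    cache = OrderedDict()
    misses = measured = 0
    for i, a in enumerate(addresses):
        hit = a in cache
        if hit:
            cache.move_to_end(a)          # refresh LRU position
        else:
            cache[a] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
        if i >= warmup:
            measured += 1
            misses += 0 if hit else 1
    return misses / measured

# A cyclic sweep that fits the cache exactly: every access after the first
# pass is a hit, so any measured miss is purely a cold-start artifact.
stream = [i % 32 for i in range(2000)]
cold = miss_rate(stream, 32, 0)    # 32/2000 = 0.016, all from the empty cache
warm = miss_rate(stream, 32, 100)  # 0.0 once the cache state is warmed
```

Measuring from a cold cache overstates the miss rate; warming the state first removes the bias, and MSE's contribution is determining how little of the fast-forward stream suffices to do that warming accurately.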
Use of the weighted histogram analysis method for the analysis of simulated and parallel tempering simulations
, 2009
Cited by 22 (1 self)
The growing adoption of generalized-ensemble algorithms for biomolecular simulation has resulted in a resurgence in the use of the weighted histogram analysis method (WHAM) to make use of all data generated by these simulations. Unfortunately, the original presentation of WHAM by Kumar et al. is not directly applicable to data generated by these methods. WHAM was originally formulated to combine data from independent samplings of the canonical ensemble, whereas many generalized-ensemble algorithms sample from mixtures of canonical ensembles at different temperatures. Sorting configurations generated from a parallel tempering simulation by temperature obscures the temporal correlation in the data and results in an improper treatment of the statistical uncertainties used in constructing the estimate of the density of states. Here we present variants of WHAM, ST-WHAM and PT-WHAM, derived with the same set of assumptions, that can be directly applied to several generalized-ensemble algorithms, including simulated tempering, parallel tempering (better known as replica exchange among temperatures), and replica-exchange simulated tempering. We present methods that explicitly capture the considerable temporal correlation in sequentially generated configurations using autocorrelation analysis. This allows estimation of the statistical uncertainty in WHAM estimates of expectations for the canonical ensemble. We test the method with a one-dimensional model system and then apply it to the estimation of potentials of mean force from parallel tempering simulations of the alanine dipeptide in both implicit and explicit solvent.
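The autocorrelation analysis mentioned here can be sketched with a simple statistical-inefficiency estimator (g = 1 + 2·Σ autocorrelations, so N/g is the effective number of independent samples); this is a generic truncated estimator for illustration, not the paper's ST-WHAM/PT-WHAM machinery:

```python
import random

def statistical_inefficiency(x):
    """Estimate g = 1 + 2 * sum of normalized autocorrelations, truncated
    at the first non-positive lag. N/g approximates the effective number
    of independent samples in a correlated time series."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    if var == 0.0:
        return float(n)   # constant series: one effective sample
    g = 1.0
    for t in range(1, n):
        c = sum((x[i] - mu) * (x[i + t] - mu) for i in range(n - t)) / ((n - t) * var)
        if c <= 0.0:      # truncate once correlation decays into noise
            break
        g += 2.0 * c * (1.0 - t / n)
    return g

rng = random.Random(3)
iid = [rng.random() for _ in range(1000)]                       # independent draws
blocky = [v for v in (rng.random() for _ in range(100))
          for _ in range(10)]                                   # each value repeated 10x
g_iid = statistical_inefficiency(iid)    # close to 1
g_blk = statistical_inefficiency(blocky) # near 10: strong temporal correlation
```

Treating the blocky series as 1000 independent samples would understate the uncertainty by roughly a factor of sqrt(10); dividing by g corrects for this, which is the role autocorrelation analysis plays in the uncertainty estimates above.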
A randomized method for the Shapley value for the voting game
 In the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007)
, 2007
Cited by 20 (6 self)
The Shapley value is one of the key solution concepts for coalition games. Its main advantage is that it provides a unique and fair solution, but its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is known to be #P-complete in the general case. However, in this paper, we show that there are some specific voting games for which the problem is computationally tractable. For other general voting games, we overcome the problem of computational complexity by presenting a new randomized method for determining the approximate Shapley value. The time complexity of this method is linear in the number of players. We also show, through empirical studies, that the percentage error for the proposed method is always less than 20% and, in most cases, less than 5%.
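A generic randomized estimator in the spirit of this abstract samples random orderings of the players and credits the pivotal player; the plain Monte Carlo scheme below is a sketch of that idea, not necessarily the paper's exact randomization:

```python
import random

def shapley_mc(weights, quota, samples=20000, seed=7):
    """Monte Carlo Shapley estimate for a weighted voting game: sample a
    random ordering, find the player whose weight first pushes the running
    total to the quota, and average the pivot indicator over samples.
    Each sample costs O(n), so total time is linear in the player count."""
    rng = random.Random(seed)
    n = len(weights)
    pivots = [0] * n
    players = list(range(n))
    for _ in range(samples):
        rng.shuffle(players)
        total = 0
        for p in players:
            total += weights[p]
            if total >= quota:   # p is pivotal in this ordering
                pivots[p] += 1
                break
    return [c / samples for c in pivots]

# Toy game with weights [2, 1, 1] and quota 3; the exact values are
# [2/3, 1/6, 1/6], and the estimate converges as samples grow.
phi = shapley_mc([2, 1, 1], quota=3)
```

Because exactly one player is pivotal per sampled ordering, the estimates always sum to 1, and the per-player standard error shrinks as 1/sqrt(samples).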