Results 11 - 20 of 116
Table 5. Statistical analysis of the performance of Prof using different three-state decompositions
1999
"... In PAGE 9: ... They translated E as ~E!,Has~H!, and the rest into ~C!, including EE and HHHH. Table5 shows the results. With this decomposition, Prof achieves an accuracy per residue of 77.... ..."
Table 1. SM-prof classification of cache line accesses [1]
1996
"... In PAGE 3: ...1 Using CLARISSA The clarissa tool is based on [5]. Input parameters include cache line size, class threshold (the N value in Table1 ), phase type (barrier or time-slot), time- slot length and overlap. A classi cation system is needed for summarising the wealth of data.... In PAGE 3: ... A classi cation system is needed for summarising the wealth of data. Table1 gives the classi cation used in the SM-prof performance debugging tool, which reports cache line access for xed time-slots in terms of read or write accesses and the number of CPUs involved [1]. In clarissa,an enhanced version of this categorisation is used, where the sharing categories... ..."
Cited by 1
Table 1. Run Nt Nn TWB Adap. Prof. Size PRF Docs
2002
"... In PAGE 2: ... Table1 : Run parameters for RMIT runs submitted. 4.... In PAGE 2: ... 4. Results A comparison of results of the runs shown in Table1 is shown in Table 2 (these are taken from the official NIST TDT evaluation). Statistics displayed are topic weighted and macroaveraged.... ..."
Cited by 2
Table 3. Statistical analysis of all the classifiers forming the second stage of Prof
1999
"... In PAGE 5: ... Using such a procedure, it is possible to boost the GOR method to 71.4% ~using the per-residue accuracy! for the unbal- anced trained network and to 70% for the balanced one, which represents an improvement of 2% over linear discrimination and more than 5% over any individual GOR algorithm; the Sov is also improved ~ Table3 !. The increase of the global accuracy is ex- plained by the fact that the subset of residues without consensus is predicted correctly at 61% after the neural network step, which represents an improvement of 7% on this subset.... In PAGE 5: ... The architecture of these networks is the same as the one used for single sequences. This produces different classifiers whose characteristics are shown in Table3 . Their accuracies per residue are at ;71%, which represents an improvement of 5% over the neural networks using only single sequences, as in the case of GOR.... In PAGE 6: ...6 and 72.5%, respectively, which represents an im- provement of 2% over NN-GOR and 2 to 3% over the neural network using a standard profile ~profile 1 or 2!~ Table3 !. It is also an improvement of more than 7 to 8% over the neural network using only single sequences.... ..."
Table 1: The input data used by the ANNs in the different ProfNet methods.
2006
Table 3. Profile and structural parameters for the hygroscopic and monohydrate phases of α-lactose obtained with FullProf after Rietveld refinements.