Results 1 - 10 of 18,689
Table 1: Features of verification tools
"... In PAGE 25: ...Table 1, a brief comparison of verification tools that were used for parametric verification of the PGM protocol is shown. Acknowledgements The work presented in the paper was done at laboratory LIAFA under the supervision of professor Ahmed Bouajjani and in collaboration with Mihaela Sighireanu from the same institute.... ..."
Table 1: Classification of some replay debuggers. Two criteria were omitted: the failure of the replay (criterion 6) is not a concern in most debugger papers, but we believe that most debuggers are of type 6b, and all debuggers are of type 7a regarding the class of instrumented instructions (criterion 7).
1996
Cited by 3
Table 1. Experimental results comparing our approach implemented in S-VeT against the BLAST verification tool
"... In PAGE 8: ... For comparison in this paper, we use the same options. Table 1 shows the comparison of various benchmarks running BLAST against our approach. The size of the program can be defined by the number of branching conditions used.... ..."
Table 2. Key characteristics of different MPI verification approaches
"... In PAGE 22: ... Apart from MARMOT and Umpire all of them use distributed checking, just like TAC correctness checking. Table 2 is an updated version of the same table in [10], taking into account more recent publications and adding information about TAC correctness checking. What sets TAC correctness checking apart from all other tools is that it covers all the important MPI-1 checks and pays special attention to usability: it is the only tool which provides special support for investigating problems interactively with a debugger and that uses debug information, function names and stack back traces to provide problem reports that directly map to the source code.... ..."
Table 1: A selection of verification tools
"... In PAGE 2: ... The strongest postcondition (weakest precondition) can then be compared to the start state (final state) given in the specification using a first-order theorem prover. Representative state-of-the-art systems (see also Table 1) that employ the embedding approach include the Spec# system and the KeY system. The first, although referred to by its authors as a static program verifier, is an advanced verification condition generator with a theorem prover backend for C# programs annotated with specifications written in a language called Spec#.... ..."
Table 2 Results of the verifications performed in this paper.
"... In PAGE 16: ...1. The results of these verifications are summarised in Table 2, with the runtime again given as hours:minutes:seconds.... ..."
Table 1: Verification performance for selected circuits.
"... In PAGE 5: ... Next, the performance of the presented technique was measured for a number of IBM internal circuits. The tests were based on the verification tool Verity and performed on a RS/6000 workstation model 390; the results are shown in Table 1. The second and third columns report the design complexity in terms of the number of inputs, outputs, gates, and transistors.... In PAGE 6: ... Overall, the proposed engine greatly extends the class of designs which can be handled automatically. For example, with the exception of D9000, none of the industrial designs of Table 1 could be verified in a reasonable amount of time using a BDD engine only. 8 Conclusions The paper presents a new method to perform functional comparison of combinational circuits using BDDs, circuit graph hashing, cutpoint guessing, and false negative elimination.... ..."