Results 1 - 10 of 111,688

Table 1: Representative performance and area results for a set of multimedia benchmarks

in Organizing Committee: Program Committee:
by Aneesh Aggarwal, David Albonesi (Cornell), Babak Falsafi (CMU), Paolo Faraboschi (HP), Rajiv Gupta (Arizona), Sudhanva Gurumurthi (UVA), Mohamed Zahran, Mary Hall (ISI), Mary Lou Soffa (UVA) 2006
"... In PAGE 9: ... Methodology This section describes the evaluation of the design methodology presented in previous sections. An application set, shown in Table1 , is selected from a wide range of media applications related to video compression, color processing, and image processing. Key compute intensive kernels from this application set is chosen for implementation.... In PAGE 9: ... The generated hardware is synthesized and mapped onto a Xilinx Virtex-4 FPGA, and the quality metrics of the produced bitstream (area, clock frequency) are recorded to assess the Pareto-optimality of the design B. Discussion The results of Table1 show the total number of FPGA, the average I/O bandwidth in bytes per cycle between the data path and the stream interfaces, and the clock frequency in MHz after synthesis. These results enforce our initial premise that template-based approach can produce fast and area efficient designs.... In PAGE 13: ...Table1 . Benchmark applications evaluated.... In PAGE 13: ...cc 4.0.2, Binutils 2.16, and Newlib 1.14.0 that target variations of the 32-bits MIPS I [16] ISA; integer division is implemented in software, and for now interrupts are not supported. Using the 20 embedded benchmark applications described in Table1 , we evaluate our compiler techniques for generating custom code for varying soft processor architec- tures. We use the SPREE system [8] to generate a wide range of soft processor architectures (full details are available in a previous publication [17]).... ..."

Table 5: Performance of micro benchmarks. All benchmarks were run with and without the storage IDS functionality. Each number represents the average of 1000 trials in milliseconds.

in Storage-based intrusion detection: watching storage activity for suspicious behavior
by Adam G. Pennington, John D. Strunk, John Linwood Griffin, Craig A. N. Soules, Garth R. Goodson, Gregory R. Ganger 2003
"... In PAGE 23: ... Microbenchmarks on specific filesystem actions help explain the overheads. Table5 shows results for the most expensive operations, which all affect the namespace. The performance differences are caused by redundancy in the implementation.... ..."
Cited by 31
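
The storage-IDS captions above describe timing each operation with and without the detection functionality enabled and reporting the mean of 1000 trials in milliseconds. A rough sketch of such a measurement loop, using a placeholder namespace operation rather than the authors' actual harness:

    import os
    import time

    def average_ms(operation, trials=1000):
        """Run `operation` `trials` times and return the mean latency in milliseconds."""
        start = time.perf_counter()
        for _ in range(trials):
            operation()
        return (time.perf_counter() - start) / trials * 1000.0

    def create_and_delete_file():
        # Placeholder namespace operation; the paper measures filesystem actions
        # (e.g., create, delete) that the storage IDS rules inspect.
        with open("bench_tmp.txt", "w") as f:
            f.write("x")
        os.remove("bench_tmp.txt")

    print(f"average latency: {average_ms(create_and_delete_file):.3f} ms")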

Table 2: Average accuracy on Benchmark Datasets. The number in parentheses represents the relative rank of each of the algorithms (performance-wise) in the corresponding dataset

in Multiple instance learning for computer aided diagnosis
by Glenn Fung, Murat Dundar, Balaji Krishnapuram, R. Bharat Rao 2006
"... In PAGE 7: ... Results for mi-SVM, MI-SVM and EM-DD are taken from [15]. Table2 shows that CH-FD is comparable to other techniques on all datasets, even though it ignores the negative bag information. Furthermore, CH-FD appears to be the most stable of the algorithms, at least on these 5 datasets, achieving the most consistent performance as indicated by the Aver- age Rank column.... ..."
Cited by 2
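
The Average Rank column the excerpt refers to is simply each algorithm's per-dataset performance rank (1 = best accuracy) averaged over all datasets. A small illustration with invented accuracy values, not the paper's results:

    # Accuracy of each algorithm on each dataset (illustrative numbers only).
    results = {
        "CH-FD":  [0.82, 0.78, 0.90],
        "mi-SVM": [0.80, 0.79, 0.85],
        "EM-DD":  [0.75, 0.81, 0.88],
    }

    n_datasets = len(next(iter(results.values())))
    avg_rank = {}
    for algo in results:
        ranks = []
        for d in range(n_datasets):
            ordered = sorted(results, key=lambda a: results[a][d], reverse=True)
            ranks.append(ordered.index(algo) + 1)  # rank 1 = best accuracy on dataset d
        avg_rank[algo] = sum(ranks) / n_datasets

    print(avg_rank)  # a lower average rank indicates more consistently strong performance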

Table 4: Performance of micro benchmarks. All benchmarks were run with and without the storage IDS functionality. Each number represents the average of 1000 trials in milliseconds.

in Storage-based intrusion detection: watching storage activity for suspicious behavior
by Adam G. Pennington, John D. Strunk, John Linwood Griffin, Craig A. N. Soules, Garth R. Goodson, Gregory R. Ganger 2003
"... In PAGE 17: ...ionality. Each number represents the average of 1000 trials in milliseconds. Microbenchmarks on specific filesystem actions help explain the overheads. Table4 shows results for the most expensive operations, which all affect the namespace. The performance dif- ferences are caused by redundancy in the implementation.... ..."
Cited by 31

Table 1. The average numbers of collision checks performed by the PRM planners for the versions of the Hwang and Ahuja benchmark problem. Each data point represents an average of at least 15 runs.

in Adaptive Strategies for Probabilistic Roadmap Construction
by P. Isto, J. Tuominen, M. Mäntylä 1997
Cited by 1

Table 5: The average numbers of collision checks performed by the uni-heuristic PRM planners for the versions of the Hwang and Ahuja benchmark problem. Each data point represents an average of at least 15 runs.

in Adaptive Probabilistic Roadmap Construction with Multi-Heuristic Local Planning
by Pekka Isto

Table 7. Speed Benchmarks for Various Compression Algorithms Representing Approximate Time in Seconds to Perform 100 Queries Across a Random Set of 50 000 Molecules from ChemDB (5 Million Similarity Calculations) with Nhash = 2^30 Using Binary Fingerprints and Tanimoto Similarity Measure

in Lossless Compression of Chemical Fingerprints Using Integer Entropy Codes Improves Storage and Retrieval
by Pierre Baldi, Ryan W. Benz, Daniel S. Hirschberg, S. Joshua Swamidass 2007
"... In PAGE 9: ... Thus, the only issue left to address is the speed of decoding and computing similarity measures across large numbers of molecules. Speed benchmarks are given in Table7 comparing the performance in seconds of various compression algorithms when computing 5 million Tanimoto similarity measures using binary fingerprints. All compression schemes are implemented using byte-arithmetic and run on the same 2.... ..."

Table 4: Benchmark suite

in Register Deprivation Measurements
by Manuel E. Benitez, Jack W. Davidson 1993
"... In PAGE 21: ...4.7 Test Suite The 13-program benchmark suite described in Table4 was used to perform all of the register deprivation experi- ments shown here. These benchmarks were chosen to represent the kinds of codes that consume most of the cycles in a professional development and educational environment.... ..."
Cited by 4

Table 2: Benchmark Summary (B represents Billion). Benchmarks are grouped into HIGH-MEM and LOW-MEM categories.

in Microarchitecture-Based Introspection: A Technique for Transient-Fault Tolerance in Microprocessors
by Moinuddin K. Qureshi, Onur Mutlu, Yale N. Patt 2005
"... In PAGE 12: ... We perform our studies by fast-forwarding the initial part of each benchmark and simulating it for 200M instructions using the reference input set. Table2 shows the category, the type, and the number of instructions fast-forwarded for each benchmark. Table 2: Benchmark Summary.... ..."
Cited by 7