CiteSeerX

Results 1 - 10 of 58,359

Table 1 lists different pipeline techniques according to their minimal data cycle. Some of the techniques, e.g. wave-pipelining, are not applicable to SR/FIFO implementations, since they cannot be stopped by the input control signal and cannot store all internal states (some waves disappear). Wave-pipelining is a latch-less technique that supports very high throughput, where only the final result at the pipeline output is sampled. ...

in Fast Asynchronous Shift Register for Bit-Serial Communication
by Rostislav (Reuven) Dobkin, Ran Ginosar, Avinoam Kolodny, 2006
"... In PAGE 2: ... Table 1: Data Cycle Mapping for Bit-Serial Versions of Several Pipelines. Columns: Name; Data Cycle (# of FO4 Inv. Delays); Data Cycle (# of Transitions); Family; Reference. PCHB 18. ... In PAGE 2: ... These numbers are scaled to FO4 inverter delays, based on the FO3 NAND delay model provided by the ITRS [18]. They are used to compute the data cycle in terms of the number of FO4 inverter delays in Table 1. Next, we introduce a single-gate-delay shift register that meets the high-rate requirements. ... ..."
Cited by 2
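The FO4-delay normalization mentioned in the snippet above can be made concrete with a short sketch. The per-process FO4 delay below is an assumed illustrative value (roughly a 90 nm node), not a figure from the paper, and the 18-FO4 data cycle is a round stand-in for the truncated PCHB entry:

```python
# Sketch: converting a data cycle given in FO4 inverter delays (the
# normalization used in Table 1 of the paper) into cycle time and bit rate.
# FO4_DELAY_PS is an ASSUMED illustrative value, not from the paper.

FO4_DELAY_PS = 25.0  # assumption: ~25 ps per FO4 delay at a 90 nm node

def data_cycle_ps(n_fo4: float) -> float:
    """Cycle time in picoseconds for a pipeline whose data cycle is n_fo4 FO4 delays."""
    return n_fo4 * FO4_DELAY_PS

def throughput_ghz(n_fo4: float) -> float:
    """Maximum bit rate (GHz) implied by the data cycle: 1000 ps/ns divided by the cycle."""
    return 1000.0 / data_cycle_ps(n_fo4)

# Illustrative: an 18-FO4 data cycle (PCHB-like) at the assumed FO4 delay.
print(f"cycle time  = {data_cycle_ps(18):.0f} ps")
print(f"throughput  = {throughput_ghz(18):.2f} GHz")
```

The point of the normalization is exactly this: the table entries stay technology-independent, and a single multiplication by the target process's FO4 delay recovers absolute timing.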

Table VII. Reference passing (ms) and its overhead (%) in a quad-core processor environment. Columns: Reference; GDP+; GDP; GDP+LGC.

in A Non-Blocking Reference Listing Algorithm for Mobile Active Object Garbage Collection
by Wei-jen Wang, Carlos A. Varela

Table 3 Optimization of conventional high-performance processors vs. ReAl

in The ReAl Computer Architecture – An Introduction (ReAl = Resource-Algebra)
by Prof. Dr. Wolfgang Matthes, 2006
"... In PAGE 22: ... Table 3 ... ..."

Table 1. MPI-IO demands for correctness and high-performance and DAFS capabilities

in MPI-IO on DAFS over VIA: Implementation and Performance Evaluation
by Jiesheng Wu, Dhabaleswar K. Panda
"... In PAGE 3: ... 2. Memory management: A mismatch between MPI-IO and DAFS, not shown in Table 1, is as follows: as a basic requirement for memory-to-memory networks, all read or write buffers are required to be in registered memory regions in DAFS. To enable applications to flexibly manage their buffers by their buffer access patterns, memory registration is exported to the DAFS applications by DAFS memory management APIs. ... ..."

Table 2. Lines of code used in implementing high-performance server applications described in Section 5, not counting whitespace or comments.

in Expressing and Exploiting Concurrency in Networked Applications with Aspen
by unknown authors
"... In PAGE 8: ... 6.4 Language Usability: Table 2 reports the number of lines of code required to implement the various high-performance web servers used in the benchmark tests. (Apache is not included, but its code length is far greater. ... ..."

Table 1. OPEN SGI/CRAY HIGH PERFORMANCE COMPUTING SYSTEM

in Towards Petabytes of High Performance Storage at Los Alamos
by Gary Lee, Gary Grider, Mark Roschke, Lynn Jones
"... In PAGE 5: ... Configuration at Los Alamos: The High Performance Computing Environment at Los Alamos includes SGI/Crays and HPSS in both the secure and open networks. As shown in Table 1, the open SGI/Crays are configured as nodes with n × 32 MIPS R10K processors, where n = 1-4, for a total of 768 processors and 192 GB of memory. As shown in Table 2, the secure SGI/Crays are configured as nodes with 64 MIPS R10K processors for a total of 1024 processors and 256 GB of memory. ... ..."

Table 2. SECURE SGI/CRAY HIGH PERFORMANCE COMPUTING SYSTEM

in Towards Petabytes of High Performance Storage at Los Alamos
by Gary Lee, Gary Grider, Mark Roschke, Lynn Jones
"... In PAGE 5: ... As shown in Table 1, the open SGI/Crays are configured as nodes with n × 32 MIPS R10K processors, where n = 1-4, for a total of 768 processors and 192 GB of memory. As shown in Table 2, the secure SGI/Crays are configured as nodes with 64 MIPS R10K processors for a total of 1024 processors and 256 GB of memory. These configurations are periodically changed by splitting or merging nodes. ... ..."

Table 1: The latency and bandwidth of the memory system of a high performance computer.

in A case for intelligent RAM: IRAM
by David Patterson, Thomas Anderson, Neal Cardwell, Richard Fromm, Kimberly Keeton, Christoforos Kozyrakis, Randi Thomas, Katherine Yelick, 1997
"... In PAGE 2: ... System architects have attempted to bridge the processor-memory performance gap by introducing deeper and deeper cache memory hierarchies; unfortunately, this makes the memory latency even longer in the worst case. For example, Table 1 shows CPU and memory performance in a recent high-performance computer system. Note that the main memory latency in this system is a factor of four larger than the raw DRAM access time; this difference is due to the time to drive the address off the microprocessor, the time to multiplex the addresses to the DRAM, the time to turn around the bidirectional data bus, the overhead of the memory controller, the latency of the SIMM connectors, and the time to drive the DRAM pins first with the address and then with the return data. ... In PAGE 11: ... The system measured included a third-level 4 MB cache off chip. Table 1 on page 3 describes the chip and system. If we were designing an IRAM ... In PAGE 12: ... or the speed of SRAM in a DRAM process of 1.1 to 1.3 times slower. Finally, the time to main memory should be 5 to 10 times faster in IRAM than the 253 ns of the Alpha system in Table 1. ... ..."
Cited by 41
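The latency claims quoted in the IRAM snippet above reduce to simple arithmetic; everything below except the stated 253 ns figure and the quoted ratios (factor of four, 5-10×) is derived for illustration, not taken from the paper:

```python
# Back-of-envelope check of the latency figures quoted in the IRAM excerpt.
# Only the 253 ns number and the stated ratios come from the excerpt;
# the derived values are illustrative.

main_memory_latency_ns = 253.0  # Alpha system, Table 1 of the paper

# "main memory latency ... is a factor of four larger than the raw DRAM
# access time" -> implied raw DRAM access time:
raw_dram_access_ns = main_memory_latency_ns / 4  # 63.25 ns

# "the time to main memory should be 5 to 10 times faster in IRAM":
iram_latency_range_ns = (main_memory_latency_ns / 10,   # best case, 25.3 ns
                         main_memory_latency_ns / 5)    # worst case, 50.6 ns

print(f"raw DRAM access       ≈ {raw_dram_access_ns:.1f} ns")
print(f"projected IRAM latency ≈ {iram_latency_range_ns[0]:.1f}-{iram_latency_range_ns[1]:.1f} ns")
```

Note that the projected IRAM latency range sits below even the implied raw DRAM access time of the conventional system, which is the paper's argument: moving the processor onto the DRAM die removes the bus, controller, and pin-driving overheads itemized in the excerpt.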

Table 2: Problem tested with no noise using High Performance Fortran

in Computational Issues in Damping Identification for Large Scale Problems
by Deborah Pilkey, Kevin P. Roe, Daniel J. Inman, 1997
"... In PAGE 5: ... After that, the execution time increases at a much higher rate than the iterative method until it no longer fits in memory. The maximum speedup obtained (Table 2) for: n = 100 is 1.70 using 4 processors, n = 250 is 3. ... ..."
Cited by 2

Table 1. A comparison of the three high performance computing platforms

in Computing Pool: a Simplified and Practical Computational Grid Model
by Peng Liu, Yao Shi, San-li Li
"... In PAGE 7: ... or applications can be reduced from 0.9 hour to 0.2 hour, or utilization of each supercomputer can be increased from 50% to 83% without prolonging waiting time. Although the computing pool is a simple idea, it is worth emphasizing its advantages (as shown in Table 1) and importance here. Some grid applications have been developed, tried, and then abandoned, since they were designed for the future. ... ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University