
CiteSeerX

Results 1 - 10 of 62,520

Table 1: Workpackages and tasks in High-Performance Computing IV.

in unknown title
by unknown authors
"... In PAGE 4: ... The project also defines a number of tasks to achieve the objectives above, all of them with a duration of 3 years. Table 1 lists the tasks and includes information about the executing subproject ... In PAGE 5: ... 8 Memory management in the database engine 1 11 AA2.9 Database engine, operating system and architecture 1 11 Table 1 (continuation): Workpackages and tasks in High-Performance Computing IV. 2 Success degree in achieving project objectives Measuring success in a big project like High-Performance Computing IV is not a trivial task. ... ..."

Table 1 (continuation): Workpackages and tasks in High-Performance Computing IV.

in unknown title
by unknown authors
"... In PAGE 4: ... The project also defines a number of tasks to achieve the objectives above, all of them with a duration of 3 years. Table 1 lists the tasks and includes information about the executing subproject as well as the objective that the task contributes to. Task Title Subproject Contribution to objective: WP1: Computer architecture AC1.... In PAGE 4: ... 7 MPI scalability 1 5 SS1.8 Definition and implementation of GRID programming models 1 5 Table 1: Workpackages and tasks in High-Performance Computing IV. ... ..."

Table 1.1 High-performance computing platforms

in High Performance Automatic Image Registration For Remote Sensing
by Prachya Chalermwat

Table 1 lists different pipeline techniques according to their minimal data cycle. Some of the techniques, e.g. wave-pipelining, are not applicable to SR/FIFO implementations, since they cannot be stopped by the input control signal and cannot store all internal states (some waves disappear). Wave-pipelining is a latch-less technique that supports very high throughput, where only the final result at the pipeline output is sampled. ...

in Fast Asynchronous Shift Register for Bit-Serial Communication
by Rostislav (Reuven) Dobkin, Ran Ginosar, Avinoam Kolodny, 2006
"... In PAGE 2: ... Table 1: Data Cycle Mapping For Bit-Serial Versions of Several Pipelines Name Data Cycle (# of FO4 Inv. Delays) Data Cycle (# of Transitions) Family Reference PCHB 18.... In PAGE 2: ... These numbers are scaled for FO4 inverter delays, based on the FO3 NAND delay model provided by the ITRS [18]. They are used to compute the data cycle in terms of the number of FO4 inverter delays in Table 1. Next, we introduce a single gate-delay shift register that meets the high-rate requirements. ... ..."
Cited by 2

Table 1 High Performance Computing Technologies for Embedded Systems

in Lx: A Technology Platform for Customizable VLIW Embedded Processing
by Paolo Faraboschi, Geoffrey Brown, Joseph A. Fisher, Giuseppe Desoli, Fred (Mark Owen) Homewood
"... In PAGE 2: ... 2 Competing Technologies It is important to compare customizable VLIW architectures to other competing high-performance computing technologies in the embedded space. Table 1 summarizes the situation and shows how the advantages of high performance, ease of use and flexibility uniquely position this technology. This is particularly true in a world where time-to-market is rapidly becoming the dominant factor in the success of a new technology. ... ..."

Table 1. OPEN SGI/CRAY HIGH PERFORMANCE COMPUTING SYSTEM

in Towards Petabytes of High Performance Storage at Los Alamos
by Gary Lee, Gary Grider, Mark Roschke, Lynn Jones
"... In PAGE 5: ... Configuration at Los Alamos The High Performance Computing Environment at Los Alamos includes SGI/Crays and HPSS in both the secure and open networks. As shown in Table 1, the open SGI/Crays are configured as nodes with n x 32 MIPS R10K processors, where n=1-4, for a total of 768 processors and 192 GB of memory. As shown in Table 2, the secure SGI/Crays are configured as nodes with 64 MIPS R10K processors for a total of 1024 processors and 256 GB of memory. ... ..."

Table 1: The latency and bandwidth of the memory system of a high performance computer.

in A case for intelligent RAM: IRAM
by David Patterson, Thomas Anderson, Neal Cardwell, Richard Fromm, Kimberly Keeton, Christoforos Kozyrakis, Randi Thomas, Katherine Yelick, 1997
"... In PAGE 2: ... System architects have attempted to bridge the processor-memory performance gap by introducing deeper and deeper cache memory hierarchies; unfortunately, this makes the memory latency even longer in the worst case. For example, Table 1 shows CPU and memory performance in a recent high performance computer system. Note that the main memory latency in this system is a factor of four larger than the raw DRAM access time; this difference is due to the time to drive the address off the microprocessor, the time to multiplex the addresses to the DRAM, the time to turn around the bidirectional data bus, the overhead of the memory controller, the latency of the SIMM connectors, and the time to drive the DRAM pins first with the address and then with the return data. ... In PAGE 11: ... The system measured included a third level 4 MB cache off chip. Table 1 on page 3 describes the chip and system. If we were designing an IRAM ... In PAGE 12: ... or the speed of SRAM in a DRAM process of 1.1 to 1.3 times slower. Finally, the time to main memory should be 5 to 10 times faster in IRAM than the 253 ns of the Alpha system in Table 1. ... ..."
Cited by 41
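The IRAM snippet above attributes the gap between main-memory latency and raw DRAM access time to a chain of additive overheads (address drive, bus turnaround, controller, connectors, pin drive). A rough sketch of that accounting follows; the individual component values are assumed for illustration only — from the snippet itself we have just the 253 ns Alpha main-memory figure and the "factor of four" ratio.

```python
# Illustrative sketch: main-memory latency as the raw DRAM access time plus a
# chain of additive overheads. Component values are ASSUMED for illustration;
# only the 253 ns total and the ~4x ratio are taken from the snippet above.
raw_dram_access_ns = 63  # assumed raw DRAM access time

overheads_ns = {
    "drive address off microprocessor": 30,
    "multiplex address to DRAM": 35,
    "turn around bidirectional data bus": 25,
    "memory controller overhead": 40,
    "SIMM connector latency": 25,
    "drive DRAM pins (address, then return data)": 35,
}

total_latency_ns = raw_dram_access_ns + sum(overheads_ns.values())
ratio = total_latency_ns / raw_dram_access_ns
print(f"total main-memory latency: {total_latency_ns} ns "
      f"({ratio:.1f}x the raw DRAM access time)")
# → total main-memory latency: 253 ns (4.0x the raw DRAM access time)
```

The point of the sketch is only that the overheads are additive and off-chip, which is why an on-chip IRAM design could plausibly cut the total by 5-10x while the DRAM core itself changes little.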

Table 1. A comparison of the three high performance computing platforms

in Computing Pool: a Simplified and Practical Computational Grid Model
by Peng Liu, Yao Shi, San-li Li
"... In PAGE 7: ... or applications can be reduced from 0.9 hour to 0.2 hour, or utilization of each supercomputer can be increased from 50% to 83% without prolonging waiting time. Although the computing pool is a simple idea, it is worth emphasizing its advantages (as shown in Table 1) and importance here. Some grid applications have been developed, tried, and then abandoned, because they were designed for the future. ... ..."
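The computing-pool snippet claims that sharing machines can cut waiting time (or raise utilization) without extra hardware. A standard way to see this pooling effect is the Erlang C formula for an M/M/k queue — a generic illustration, not the paper's own analysis; the service rate below is an assumed value. Two isolated servers at 50% utilization are compared against one shared two-server pool carrying the same total load.

```python
from math import factorial

def erlang_c(k, a):
    """Probability an arrival must wait in an M/M/k queue (Erlang C).
    k = number of servers, a = offered load lambda/mu (requires a < k)."""
    num = a**k / factorial(k) * k / (k - a)
    den = sum(a**n / factorial(n) for n in range(k)) + num
    return num / den

def mean_wait(k, lam, mu):
    """Mean queueing delay W_q for an M/M/k queue."""
    return erlang_c(k, lam / mu) / (k * mu - lam)

mu = 1.0  # assumed service rate per server (jobs/hour)
# Two isolated servers, each its own queue at 50% utilization:
w_separate = mean_wait(1, 0.5, mu)
# One shared pool of two servers carrying the same total load (still 50% each):
w_pooled = mean_wait(2, 1.0, mu)
print(f"separate M/M/1 wait: {w_separate:.2f} h, pooled M/M/2 wait: {w_pooled:.2f} h")
# → separate M/M/1 wait: 1.00 h, pooled M/M/2 wait: 0.33 h
```

At identical per-server utilization, the pooled queue waits about a third as long — equivalently, the pool can run at higher utilization before the wait climbs back to the unpooled level, which is the trade-off the snippet's 50% → 83% figure describes.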

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University