Results 1 - 10 of 2,687

Bootstrapping trust in commodity computers.

by Bryan Parno, Jonathan M. McCune, Adrian Perrig - In IEEE Symposium on Security and Privacy (S&P), 2010
"... Abstract Trusting a computer for a security-sensitive task (such as checking email or banking online) requires the user to know something about the computer's state. We examine research on securely capturing a computer's state, and consider the utility of this information both for improvi ..."
Cited by 48 (5 self)

Trust Extension for Commodity Computers

by Bryan Parno
"... ..."
Abstract - Add to MetaCart
Abstract not found

Navier-Stokes Computations on Commodity Computers

by Veer N. Vatsa, Thomas R. Faulkner, 1998
"... In this paper we discuss and demonstrate the feasibility of solving high-fidelity, nonlinear computational fluid dynamics (CFD) problems of practical interest on commodity machines, namely Pentium Pro PC's. Such calculations have now become possible due to the progress in computational power an ..."

A Scalable, Commodity Data Center Network Architecture

by Mohammad Al-Fares, Alexander Loukissas, Amin Vahdat , 2008
"... Today’s data centers may contain tens of thousands of computers with significant aggregate bandwidth requirements. The network architecture typically consists of a tree of routing and switching elements with progressively more specialized and expensive equipment moving up the network hierarchy. Unfo ..."
Cited by 466 (18 self)
overall system performance. In this paper, we show how to leverage largely commodity Ethernet switches to support the full aggregate bandwidth of clusters consisting of tens of thousands of elements. Similar to how clusters of commodity computers have largely replaced more specialized SMPs and MPPs, we
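
The fat-tree design sketched above builds the whole fabric from identical k-port commodity switches. As a rough illustrative calculation (the k^3/4-host figure is the standard result for this topology rather than a quote from the abstract, and the helper function below is hypothetical, not from the paper):

    # Illustrative sketch: element counts for a k-ary fat-tree built
    # entirely from identical k-port commodity switches (k even).
    def fat_tree_capacity(k: int) -> dict:
        assert k % 2 == 0, "k must be even"
        return {
            "hosts": k ** 3 // 4,            # k pods * (k/2)^2 hosts per pod
            "edge_switches": k * (k // 2),   # k pods * k/2 edge switches
            "agg_switches": k * (k // 2),    # k pods * k/2 aggregation switches
            "core_switches": (k // 2) ** 2,
        }

    # e.g. 48-port switches -> 27,648 hosts at full aggregate bandwidth
    print(fat_tree_capacity(48))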

Pregel: A system for large-scale graph processing

by Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski - In SIGMOD, 2010
"... Many practical computing problems concern large graphs. Standard examples include the Web graph and various social networks. The scale of these graphs—in some cases billions of vertices, trillions of edges—poses challenges to their efficient processing. In this paper we present a computational model ..."
Cited by 496 (0 self)
is flexible enough to express a broad set of algorithms. The model has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier. Distribution-related details are hidden behind
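
As a rough illustration of the vertex-centric, superstep-based model the Pregel abstract describes, the sketch below runs a "propagate the maximum value" program on a toy in-memory graph. All names are illustrative; this is not Pregel's actual C++ API, and the driver only mimics synchronous message delivery on one machine.

    # Each vertex runs the same compute function once per superstep and
    # communicates only via messages delivered in the next superstep.
    def compute_max(value, incoming, out_edges, superstep):
        new_value = max([value] + incoming)
        changed = new_value != value or superstep == 0
        outgoing = [(dst, new_value) for dst in out_edges] if changed else []
        return new_value, outgoing, changed   # value, messages, still active?

    def run_supersteps(values, edges, compute):
        inbox = {v: [] for v in values}
        superstep, active = 0, True
        while active:
            active, next_inbox = False, {v: [] for v in values}
            for v in values:
                values[v], msgs, busy = compute(
                    values[v], inbox[v], edges.get(v, []), superstep)
                for dst, m in msgs:
                    next_inbox[dst].append(m)
                active = active or busy
            inbox, superstep = next_inbox, superstep + 1
        return values

    # The largest value (9) reaches every vertex after a few supersteps.
    print(run_supersteps({"a": 3, "b": 9, "c": 1},
                         {"a": ["b"], "b": ["c"], "c": ["a"]}, compute_max))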

Commodity Computing Results From the Swiss-Tx project

by Ralf Gruber, Pieter Volgers, Alessandro De Vita, Massimiliano Stengel, The Swiss-Tx Team, 2001
"... The aim of the Swiss-Tx project was to build, install, test and use high performance commodity computers. The biggest machine is the 70 Compaq Alpha processors Swiss-T1 machine installed at the EPFL computing centre and running in production mode since July 2000. This parallel MPI computer is well b ..."
Cited by 1 (0 self)

HPcc as High Performance Commodity Computing on top of . . .

by G. C. Fox, W. Furmanski, T. Haupt, E. Akarsu, H. Ozdemir - of Integrated Java, CORBA, COM and Web Standards. Proc. of EuroPar 1998, 1998
"... We review the growing power and capability of commodity computing and communication technologies largely driven by commercial distributed information systems. These systems are built from CORBA, Microsoft's COM, JavaBeans, and rapidly advancing Web approaches. One can abstract these to a three- ..."
Cited by 4 (1 self)

Xen and the art of virtualization

by Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, Andrew Warfield - In SOSP, 2003
"... Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100 % binary compatibility at the expense of performance. Others sacrifice security or fun ..."
Cited by 2010 (35 self)

MapReduce: Simplified data processing on large clusters.

by Jeffrey Dean, Sanjay Ghemawat - In Proceedings of the Sixth Symposium on Operating System Design and Implementation (OSDI-04), 2004
"... Abstract MapReduce is a programming model and an associated implementation for processing and generating large data sets. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of ..."
Cited by 3439 (3 self)
distributed system. Our implementation of MapReduce runs on a large cluster of commodity machines and is highly scalable: a typical MapReduce computation processes many terabytes of data on thousands of machines. Programmers find the system easy to use: hundreds of MapReduce programs have been implemented
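
As a rough illustration of the functional style the MapReduce abstract describes, the canonical word-count example below pairs user-written map and reduce functions with a toy single-machine driver that stands in for the distributed runtime (the names are illustrative, not the actual library API).

    # The user supplies map_fn and reduce_fn; the runtime handles partitioning,
    # shuffling and execution. This in-memory driver only mimics that contract.
    from collections import defaultdict

    def map_fn(_key, text):
        for word in text.split():
            yield word.lower(), 1           # emit (word, 1) per occurrence

    def reduce_fn(word, counts):
        return word, sum(counts)            # total occurrences of one word

    def run_mapreduce(records, map_fn, reduce_fn):
        shuffled = defaultdict(list)        # stand-in for the shuffle phase
        for key, value in records:
            for out_key, out_value in map_fn(key, value):
                shuffled[out_key].append(out_value)
        return dict(reduce_fn(k, vs) for k, vs in shuffled.items())

    docs = [("doc1", "commodity machines run MapReduce"),
            ("doc2", "clusters of commodity machines")]
    print(run_mapreduce(docs, map_fn, reduce_fn))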

Scalable molecular dynamics with NAMD.

by James C. Phillips, Rosemary Braun, Wei Wang, James Gumbart, Emad Tajkhorshid, Elizabeth Villa, Christophe Chipot, Robert D. Skeel, Laxmikant Kalé, Klaus Schulten - J Comput Chem, 2005
"... Abstract: NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD scales to hundreds of processors on high-end parallel platforms, as well as tens of processors on low-cost commodity clusters, and also runs on individual desktop and la ..."
Cited by 849 (63 self)