Results 1 - 10 of 546
Understanding Code Mobility
- IEEE Computer Science Press, 1998
"... The technologies, architectures, and methodologies traditionally used to develop distributed applications exhibit a variety of limitations and drawbacks when applied to large scale distributed settings (e.g., the Internet). In particular, they fail in providing the desired degree of configurability, ..."
Abstract
-
Cited by 560 (34 self)
- Add to MetaCart
(Show Context)
The technologies, architectures, and methodologies traditionally used to develop distributed applications exhibit a variety of limitations and drawbacks when applied to large scale distributed settings (e.g., the Internet). In particular, they fail in providing the desired degree of configurability, scalability, and customizability. To address these issues, researchers are investigating a variety of innovative approaches. The most promising and intriguing ones are those based on the ability of moving code across the nodes of a network, exploiting the notion of mobile code. As an emerging research field, code mobility is generating a growing body of scientific literature and industrial developments. Nevertheless, the field is still characterized by the lack of a sound and comprehensive body of concepts and terms. As a consequence, it is rather difficult to understand, assess, and compare the existing approaches. In turn, this limits our ability to fully exploit them in practice, and to further promote the research work on mobile code. Indeed, a significant symptom of this situation is the lack of a commonly accepted and sound definition of the term "mobile code" itself. This paper presents a conceptual framework for understanding code mobility. The framework is centered around a classification that introduces three dimensions: technologies, design paradigms, and applications. The contribution of the paper is twofold. First, it provides a set of terms and concepts to understand and compare the approaches based on the notion of mobile code. Second, it introduces criteria and guidelines that support the developer in the identification of the classes of applications that can leverage mobile code, in the design of these applications, and, finally, in the selection of the most appropriate implementation technologies. The presentation of the classification is intertwined with a review of the state of the art in the field. Finally, the use of the classification is exemplified in a case study.
Orca: A language for parallel programming of distributed systems
- IEEE Transactions on Software Engineering, 1992
"... Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data ..."
Abstract
-
Cited by 332 (46 self)
- Add to MetaCart
(Show Context)
Orca is a language for implementing parallel applications on loosely coupled distributed systems. Unlike most languages for distributed programming, it allows processes on different machines to share data. Such data are encapsulated in data-objects, which are instances of user-defined abstract data types. The implementation of Orca takes care of the physical distribution of objects among the local memories of the processors. In particular, an implementation may replicate and/or migrate objects in order to decrease access times to objects and increase parallelism. This paper gives a detailed description of the Orca language design and motivates the design choices. Orca is intended for applications programmers rather than systems programmers. This is reflected in its design goals to provide a simple, easy to use language that is type-secure and provides clean semantics. The paper discusses three example parallel applications in Orca, one of which is described in detail. It also describes one of the existing implementations, which is based on reliable broadcasting. Performance measurements of this system are given for three parallel applications. The measurements show that significant speedups can be obtained for all three applications. Finally, the paper compares Orca with several related languages and systems.
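The data-object idea translates naturally into an object whose operations execute indivisibly. A minimal sketch in Python (our illustration with an invented IntObject type; Orca is its own language, and a real implementation would replicate or migrate the object across machines rather than guard it with a local lock):

    import threading

    class IntObject:
        """A user-defined abstract data type shared by many workers."""
        def __init__(self, value=0):
            self._value = value
            self._lock = threading.Lock()  # stands in for Orca's per-operation atomicity

        def inc(self):
            with self._lock:
                self._value += 1

        def read(self):
            with self._lock:
                return self._value

    counter = IntObject()
    workers = [threading.Thread(target=counter.inc) for _ in range(8)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(counter.read())  # 8: each inc() ran as one indivisible operation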
Maui: Making smartphones last longer with code offload
- In Proceedings of ACM MobiSys, 2010
"... This paper presents MAUI, a system that enables fine-grained energy-aware offload of mobile code to the infrastructure. Previous approaches to these problems either relied heavily on programmer support to partition an application, or they were coarse-grained requiring full process (or full VM) migra ..."
Abstract
-
Cited by 246 (8 self)
- Add to MetaCart
(Show Context)
This paper presents MAUI, a system that enables fine-grained energy-aware offload of mobile code to the infrastructure. Previous approaches to these problems either relied heavily on programmer support to partition an application, or they were coarse-grained, requiring full process (or full VM) migration. MAUI uses the benefits of a managed code environment to offer the best of both worlds: it supports fine-grained code offload to maximize energy savings with minimal burden on the programmer. MAUI decides at runtime which methods should be remotely executed, driven by an optimization engine that achieves the best energy savings possible under the mobile device's current connectivity constraints. In our evaluation, we show that MAUI enables: 1) a resource-intensive face recognition application that consumes an order of magnitude less energy, 2) a latency-sensitive arcade game application that doubles its refresh rate, and 3) a voice-based language translation application that bypasses the limitations of the smartphone environment by executing unsupported components remotely.
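The runtime decision can be caricatured as an energy comparison per method call. A back-of-the-envelope sketch in Python, with invented numbers and a deliberately simplified model (the real engine optimizes over the whole call graph, not one method in isolation):

    def should_offload(local_energy_j, state_bytes, bandwidth_bps, radio_power_w):
        """Offload when shipping the method's state costs less energy than running it."""
        transfer_s = (state_bytes * 8) / bandwidth_bps   # time the radio stays on
        offload_energy_j = transfer_s * radio_power_w    # energy spent on transfer
        return offload_energy_j < local_energy_j

    # Heavy compute, small state, fast Wi-Fi: offloading wins.
    print(should_offload(25.0, state_bytes=200_000,
                         bandwidth_bps=5_000_000, radio_power_w=1.0))  # True
    # The same method over a slow link: shipping state costs more than computing.
    print(should_offload(25.0, state_bytes=200_000,
                         bandwidth_bps=50_000, radio_power_w=1.0))     # False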
Optimizing the migration of virtual computers
- In Proceedings of the 5th Symposium on Operating Systems Design and Implementation, 2002
"... Abstract This paper shows how to quickly move the state of a running computer across a network, including the state in its disks, memory, CPU registers, and I/O devices. We call this state a capsule. Capsule state is hardware state, so it includes the entire operating system as well as applications ..."
Abstract
-
Cited by 238 (5 self)
- Add to MetaCart
This paper shows how to quickly move the state of a running computer across a network, including the state in its disks, memory, CPU registers, and I/O devices. We call this state a capsule. Capsule state is hardware state, so it includes the entire operating system as well as applications and running processes. We have chosen to move x86 computer states because x86 computers are common, cheap, run the software we use, and have tools for migration. Unfortunately, x86 capsules can be large, containing hundreds of megabytes of memory and gigabytes of disk data. We have developed techniques to reduce the amount of data sent over the network: copy-on-write disks track just the updates to capsule disks, "ballooning" zeros unused memory, demand paging fetches only needed blocks, and hashing avoids sending blocks that already exist at the remote end. We demonstrate these optimizations in a prototype system that uses the VMware GSX Server virtual machine monitor to create and run x86 capsules. The system targets networks as slow as 384 kbps. Our experimental results suggest that efficient capsule migration can improve user mobility and system management. Software updates or installations on a set of machines can be accomplished simply by distributing a capsule with the new changes. Assuming the presence of a prior capsule, the amount of traffic incurred is commensurate with the size of the update or installation package itself. Capsule migration makes it possible for machines to start running an application within 20 minutes on a 384 kbps link, without having to first install the application or even the underlying operating system. Furthermore, users' capsules can be migrated during a commute between home and work in even less time.
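Of the four optimizations, hashing is the easiest to show in miniature. A toy Python sketch of the idea (our own version, not the paper's implementation): the sender transmits a digest first, and ships the full block only when the receiver does not already hold data with that digest.

    import hashlib

    def push_block(block, receiver_store, wire_log):
        digest = hashlib.sha256(block).hexdigest()
        if digest in receiver_store:              # block already exists remotely
            wire_log.append(("hash", len(digest)))
        else:                                     # cold block: ship the data itself
            wire_log.append(("data", len(block)))
            receiver_store[digest] = block

    receiver, wire = {}, []
    os_block = b"\x90" * 4096                     # e.g. a block of the guest OS image
    push_block(os_block, receiver, wire)          # first capsule: full 4096-byte send
    push_block(os_block, receiver, wire)          # later capsule: 64-byte digest only
    print(wire)                                   # [('data', 4096), ('hash', 64)]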
The Amber System: Parallel Programming on a Network of Multiprocessors
- In Proceedings of the 12th ACM Symposium on Operating Systems Principles, 1989
"... Microprocessor-based shared-memory multiprocessors are becoming widely available and promise to provide cost-effective high-performance computing. This paper describes a programming system called Amber which permits a single application program to use a homogeneous network of multiprocessors in a un ..."
Abstract
-
Cited by 231 (15 self)
- Add to MetaCart
Microprocessor-based shared-memory multiprocessors are becoming widely available and promise to provide cost-effective high-performance computing. This paper describes a programming system called Amber which permits a single application program to use a homogeneous network of multiprocessors in a uniform way, making the network appear to the application as an integrated, non-uniform memory access, shared-memory multiprocessor. This simplifies the development of applications and allows compute-intensive parallel programs to effectively harness the potential of multiple nodes. Amber programs are written using an object-oriented subset of the C++ programming language, supplemented with primitives for managing concurrency and distribution. Amber provides a network-wide shared-object virtual memory in which coherence is provided by hardware means for locally-executing threads, and by software means for remote accesses. Amber runs on top of the Topaz operating system on a network of DEC SRC ...
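A single-process toy rendition of that shared-object model in Python (the Node and SharedObject API is invented; Amber does this for C++ objects across real machines, with hardware coherence for local threads and software coherence for remote accesses):

    class Node:
        def __init__(self, name):
            self.name, self.objects = name, {}

    class SharedObject:
        def __init__(self, home, oid, state):
            self.node, self.oid = home, oid
            home.objects[oid] = state

        def move_to(self, target):       # explicit placement for locality
            target.objects[self.oid] = self.node.objects.pop(self.oid)
            self.node = target

        def invoke(self, fn):            # operations run where the object lives
            return self.node.name, fn(self.node.objects[self.oid])

    a, b = Node("a"), Node("b")
    grid = SharedObject(a, "grid", [0.0] * 16)
    print(grid.invoke(lambda s: sum(s)))  # ('a', 0.0)
    grid.move_to(b)                       # colocate the data with threads on b
    print(grid.invoke(lambda s: sum(s)))  # ('b', 0.0)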
The Polylith Software Bus
- ACM Transactions on Programming Languages and Systems, 1991
"... We describe a system called Polylith that helps programmers prepare and interconnect mixed-language software components for execution in heterogeneous environments. Polylith's principal benefit is that programmers are free to implement functional requirements separately from their treatment of ..."
Abstract
-
Cited by 195 (19 self)
- Add to MetaCart
(Show Context)
We describe a system called Polylith that helps programmers prepare and interconnect mixed-language software components for execution in heterogeneous environments. Polylith's principal benefit is that programmers are free to implement functional requirements separately from their treatment of interfacing requirements; this means that once an application has been developed for use in one execution environment (such as a distributed network) it can be adapted for reuse in other environments (such as a shared-memory multiprocessor) by automatic techniques. This flexibility is provided without loss of performance. We accomplish this by creating a new run-time organization for software. An abstract decoupling agent, called the software bus, is introduced between the system components. Heterogeneity in language and architecture is accommodated since program units are prepared to interface directly to the bus, not to other program units. Programmers specify application structure in terms of ...
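The bus idea fits in a few lines of Python (our sketch, not Polylith's actual interface): components register named services with the bus and call one another only by name, so no component binds directly to a peer's language, location, or transport.

    class SoftwareBus:
        def __init__(self):
            self._services = {}

        def register(self, name, handler):
            self._services[name] = handler

        def call(self, name, *args):
            # A real bus would marshal the arguments here and could route the
            # call to a remote process or a local thread; callers cannot tell.
            return self._services[name](*args)

    bus = SoftwareBus()
    bus.register("uppercase", lambda s: s.upper())
    bus.register("greet", lambda who: "hello, " + bus.call("uppercase", who))
    print(bus.call("greet", "polylith"))  # hello, POLYLITH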
Midway: Shared Memory Parallel Programming with Entry Consistency for Distributed Memory Multiprocessors
1991
"... Distributed memory multiprocessing offers a cost-effective and scalable solution for a large class of scientific and numeric applications. Unfortunately, the performance of current distributed memory programming environments suffers because the frequency of communication between processors can excee ..."
Abstract
-
Cited by 194 (0 self)
- Add to MetaCart
Distributed memory multiprocessing offers a cost-effective and scalable solution for a large class of scientific and numeric applications. Unfortunately, the performance of current distributed memory programming environments suffers because the frequency of communication between processors can exceed that required to ensure a correctly functioning program. Midway is a shared memory parallel programming system which addresses the problem of excessive communication in a distributed memory multiprocessor. Midway programs are written using a conventional MIMD-style programming model executing within a single globally shared memory. Local memories on each processor cache recently used data to counter the effects of network latency. Midway is based on a new model of memory consistency called entry consistency. Entry consistency exploits the relationship between synchronization objects and the data which they protect. Updates to shared data are communicated between processors only when not ...
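Entry consistency in miniature, as a Python sketch (the EntryLock API is invented; Midway does this for C programs): each lock is bound to the data it guards, a processor's copy of that data is refreshed only when it acquires the lock, and its writes are published only when it releases the lock.

    import copy

    class EntryLock:
        def __init__(self, data):
            self.master = copy.deepcopy(data)         # version at the last release

        def acquire(self, cache):
            cache.update(copy.deepcopy(self.master))  # pull updates for this data only

        def release(self, cache):
            self.master = copy.deepcopy(cache)        # publish writes made under the lock

    hist_lock = EntryLock({"bins": [0, 0, 0]})

    p0 = {}                      # processor 0's local cache
    hist_lock.acquire(p0)
    p0["bins"][1] += 5           # write while holding the lock
    hist_lock.release(p0)

    p1 = {}                      # processor 1 sees the update only at its
    hist_lock.acquire(p1)        # own acquire of hist_lock
    print(p1["bins"])            # [0, 5, 0]

The point is that no consistency traffic occurs at individual reads and writes; communication piggybacks on the synchronization the program already performs.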
Designing Distributed Applications with Mobile Code Paradigms
- In Proceedings of the 19th International Conference on Software Engineering, 1997
"... Large scale distributed systems are becoming of paramount importance, due to the evolution of technology and to the interest of market. Their development, however, is not yet supported by a sound technological and methodological background, as the results developed for small size distributed systems ..."
Abstract
-
Cited by 177 (11 self)
- Add to MetaCart
(Show Context)
Large scale distributed systems are becoming of paramount importance, due to the evolution of technology and to market interest. Their development, however, is not yet supported by a sound technological and methodological background, as the results developed for small size distributed systems often do not scale up. Recently, mobile code languages (MCLs) have been proposed as a technological answer to the problem. In this work, we abstract away from the details of these languages by deriving design paradigms exploiting code mobility that are independent of any particular technology. We present such design paradigms, together with a discussion of their features, their application domain, and some hints about the selection of the correct paradigm for a given distributed application.
Keywords: Mobile code, design paradigms, distributed applications.
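Two of those paradigms compress nicely into Python (hypothetical helpers, with the network elided; in a real system the code would be shipped between machines): remote evaluation sends code to where the data lives, while code on demand fetches code to where it will run.

    SERVER_DATA = list(range(1_000_000))   # resides on the "server"

    def remote_evaluation(code):
        # The client ships `code`; only the small result travels back.
        return code(SERVER_DATA)

    CODE_REPOSITORY = {"checksum": lambda xs: sum(xs) % 97}

    def code_on_demand(name, local_data):
        # The client fetches code and runs it locally, as a browser runs scripts.
        fetched = CODE_REPOSITORY[name]
        return fetched(local_data)

    print(remote_evaluation(lambda xs: max(xs)))        # 999999
    print(code_on_demand("checksum", [3, 1, 4, 1, 5]))  # 14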
Supporting Dynamic Data Structures on Distributed-Memory Machines
1995
"... this article, we describe an execution model for supporting programs that use pointer-based dynamic data structures. This model uses a simple mechanism for migrating a thread of control based on the layout of heap-allocated data and introduces parallelism using a technique based on futures and lazy ..."
Abstract
-
Cited by 166 (8 self)
- Add to MetaCart
In this article, we describe an execution model for supporting programs that use pointer-based dynamic data structures. This model uses a simple mechanism for migrating a thread of control based on the layout of heap-allocated data and introduces parallelism using a technique based on futures and lazy task creation. We intend to exploit this execution model using compiler analyses and automatic parallelization techniques. We have implemented a prototype system, which we call Olden, that runs on the Intel iPSC/860 and the Thinking Machines CM-5. We discuss our implementation and report on experiments with five benchmarks.
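The futures half of that model can be sketched with Python's thread pool (our illustration only; Olden compiles C programs and migrates the thread of control between distributed-memory nodes, which a single-process sketch cannot show). Work on one subtree becomes a future while the current thread continues with the other, and execution blocks only when the result is actually needed:

    from concurrent.futures import ThreadPoolExecutor

    tree = ("node", 1, ("node", 2, None, None), ("node", 3, None, None))

    def tree_sum(t, pool):
        if t is None:
            return 0
        _, value, left, right = t
        left_future = pool.submit(tree_sum, left, pool)   # future for the far half
        right_sum = tree_sum(right, pool)                 # keep working locally
        return value + left_future.result() + right_sum  # touch the future last

    with ThreadPoolExecutor(max_workers=4) as pool:
        print(tree_sum(tree, pool))  # 1 + 2 + 3 = 6

Lazy task creation refines this: a future becomes a separate task only when an idle processor steals it, so small subtrees never pay the spawning overhead.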