Results 1 - 10 of 43
The WISE approach to Electronic Commerce
- International Journal of Computer Systems Science & Engineering, special issue on Flexible Workflow Technology Driving the Networked Economy, 2000
Cited by 50 (2 self)

Abstract
The growing interest in Electronic Commerce practices has led to a wide variety of models trying to capture the subtleties and complexities of the electronic marketplace. In this paper, we discuss a model based on trading communities, virtual business processes, and virtual enterprises. These concepts are at the heart of the WISE (Workflow based Internet SErvices) project, where we have used them to drive the design and implementation of software tools for business-to-business electronic commerce. The paper briefly describes the model and shows how it is being used in practice as part of the WISE research effort.

1 Introduction
Electronic commerce is a long-established practice among companies which use information and communication technology to drive their everyday business transactions. In fact, some decades-old retail chains are the direct result of electronic commerce practices. In spite of this proven success, electronic commerce has not been widely adopted until very rece...
Atomicity and Isolation for Transactional Processes
2002
Cited by 48 (19 self)

Abstract
In this paper, we deal with the problem of atomicity and isolation in the context of processes. We propose a unified model for concurrency control and recovery for processes and show how this model can be implemented in practice, thereby providing a complete framework for developing middleware applications using processes.
Building Reliable Web Services Compositions
- In International Workshop Web Services Research, Standardization, and Deployment, 2002
Cited by 28 (0 self)

Abstract
The recent evolution of internet technologies, mainly guided by the Extensible Markup Language (XML) and its related technologies, is extending the role of the World Wide Web from information interaction to service interaction. This next wave of the internet era is being driven by a concept named Web services. The Web services technology provides the underpinning for a new business opportunity, i.e., the possibility of providing value-added Web services. However, building value-added services in this new environment is not a trivial task. Due to the many singularities of the Web service environment, such as the inherent structural and behavioral heterogeneity of Web services, as well as their strict autonomy, it is not possible to rely on current models and solutions to build and coordinate compositions of Web services. In this paper, we present a framework for building reliable Web service compositions on top of heterogeneous and autonomous Web services.
Infrastructure for information spaces
- In Proceedings of Advances in Databases and Information Systems, 6th East European Conference (ADBIS 2002), 2002
Cited by 19 (18 self)

Abstract
The amount of stored information is exploding while, at the same time, tools for accessing relevant information are rather under-developed. Usually, all users have a pre-defined view on a global information space and have to access data by the same primitive means. However, a more convenient solution from a user's point of view considers her/his individual context and interests by mapping the global information space to a personal one. Yet, the organization and personalization of information spaces induces a set of tightly related problems: First, user interfaces have to present personalized information in a user-friendly way and have to be enriched by sophisticated, context-sensitive navigation techniques. Second, the personal information space has to be organized automatically, by exploiting similarities between multimedia documents. Third, in order to allow the user to influence the automatic organization of her/his information space, relevance feedback techniques for multimedia similarity search have to be provided. Finally, taking into account that information is replicated at several sources and is subject to modification, sophisticated coordination mechanisms have to guarantee consistent views on the global information space. In this paper, we introduce the vision of hyperdatabases as the core infrastructure to support these requirements in a holistic way. Moreover, we present the ETHWorld project and its sub-projects, in which we apply hyperdatabase concepts for managing and organizing the information space of a virtual campus.
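The relevance-feedback idea mentioned in this abstract can be illustrated with the classic Rocchio update on vector representations of documents. This is a generic sketch of relevance feedback for similarity search, not the paper's actual technique; the vectors and weights below are made up for illustration.

```python
from typing import List

def rocchio(query: List[float], relevant: List[List[float]],
            irrelevant: List[List[float]],
            alpha: float = 1.0, beta: float = 0.75,
            gamma: float = 0.15) -> List[float]:
    """Shift the query vector toward documents the user marked
    relevant and away from those marked irrelevant (classic
    Rocchio weights)."""
    new_q = [alpha * q for q in query]
    for doc in relevant:
        for i, x in enumerate(doc):
            new_q[i] += beta * x / len(relevant)
    for doc in irrelevant:
        for i, x in enumerate(doc):
            new_q[i] -= gamma * x / len(irrelevant)
    return new_q

# A 2-dimensional toy feature space: one document marked relevant,
# one irrelevant; the query drifts toward the relevant one.
updated = rocchio([1.0, 0.0], relevant=[[0.0, 1.0]],
                  irrelevant=[[1.0, 0.0]])
# updated == [0.85, 0.75]
```

Successive feedback rounds repeat the update, pulling the query ever closer to the region of the space the user cares about.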
Transactional Coordination Agents for Composite Systems
- International Database Engineering and Applications Symposium (IDEAS '99), 1999
Cited by 17 (10 self)

Abstract
Composite systems are collections of autonomous, heterogeneous, and distributed software applications. In these systems, data dependencies are continuously violated by local operations and therefore, coordination processes are necessary to guarantee overall correctness and consistency. Such coordination processes must be endowed with some form of execution guarantees, which require the participating subsystems to have certain database functionality (such as atomicity of local operations, order-preservation, and either compensation of operations or the deferment of their commit). However, this functionality is not present in many applications and must be implemented by a transactional coordination agent coupled with the application. In this paper, we discuss the requirements to be met by the applications and their associated transactional coordination agents. We identify a minimal set of functionality the applications must provide in order to participate in transactional coordination processes and we also discuss how the missing database functionality can be added to arbitrary applications using transactional coordination agents. Then, we identify the structure of a generic transactional coordination agent and provide an implementation example of a transactional coordination agent tailored to SAP R/3.
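The compensation functionality such an agent supplies can be sketched generically: intercept every update to the wrapped application and log an undo action so a coordinator can roll the application back. The class and operations below are hypothetical, a minimal sketch of the idea rather than the paper's SAP R/3 agent.

```python
class CoordinationAgent:
    """Wraps an application's state, logging an undo action for every
    update so a coordinator can compensate local operations later.
    (Illustrative sketch; not the paper's actual agent.)"""

    def __init__(self, app_state):
        self.app = app_state
        self.undo_log = []          # compensation actions, newest last

    def execute(self, key, value):
        """Apply an update and remember how to compensate it."""
        if key in self.app:
            old = self.app[key]     # overwrite: undo restores old value
            self.undo_log.append(lambda: self.app.__setitem__(key, old))
        else:                       # insert: undo deletes the key
            self.undo_log.append(lambda: self.app.pop(key, None))
        self.app[key] = value

    def compensate(self):
        """Undo every logged operation in reverse order."""
        while self.undo_log:
            self.undo_log.pop()()

app = {"price": 10}
agent = CoordinationAgent(app)
agent.execute("price", 20)
agent.execute("qty", 5)
agent.compensate()
# app == {"price": 10}
```

A real agent would also have to make the log itself durable and enforce the ordering and atomicity requirements the paper discusses; this sketch shows only the compensation bookkeeping.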
WISE: Process based E-Commerce
- IEEE Data Engineering Bulletin, 2001
Cited by 17 (0 self)

Abstract
Electronic commerce is a business practice that is experiencing extraordinary growth. Unfortunately, there is a severe lack of adequate software tools. The WISE project (Workflow based Internet SErvices) at ETH Zurich is an attempt to address this problem by providing a software platform for process-based business-to-business electronic commerce. The final objective of the project is to develop a coherent solution for enterprise networks that can be easily and seamlessly deployed in small and medium enterprises. As a first step in this direction, we have developed a simple but powerful model for electronic commerce to be used as the overall design principle [LASS00]. To support this model, we have extended OPERA [Hag99, AHST97], a process support kernel built at ETH that provides basic workflow engine functionality and a number of programming language extensions, with the capability to implement trading communities that interact using virtual bu...
Supporting reliable transactional business processes by publish/subscribe techniques
- In TES, 2001
Cited by 15 (1 self)

Abstract
Processes have increasingly become an important design principle for complex intra- and inter-organizational e-services. In particular, processes make it possible to provide value-added services by seamlessly combining existing e-services into a coherent whole, even across corporate boundaries. Process management approaches support the definition and execution of predefined processes as distributed applications. They ensure that execution guarantees are observed even in the presence of failures and concurrency. The implementation of a process management execution environment is challenging in several respects. First, the processes to be executed are not necessarily static and do not always follow a predefined pattern, but must be generated dynamically (e.g., choosing the best offer in a pre-sales interaction). Second, deferring the execution of some application services in case of overload or unavailability is often not acceptable and must be avoided by exploiting replicated services or even by automatically adding such services, and by monitoring and balancing the load. Third, in order to avoid a bottleneck at the process coordinator level, a centralized implementation must be avoided as much as possible. Hence, a framework is needed which supports both the modularization of the process coordinator's functionality and the flexibility needed for dynamically generating and adapting processes. In this paper, we show how publish/subscribe techniques can be used to implement process management. We show what the overall architecture looks like when using a computer cluster and publish/subscribe components as the basic infrastructure to drive the enactment of processes. In particular, we describe how load balancing, process navigation, failure handling, and process monitoring are supported with minimal intervention of a centralized coordinator.
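The process-navigation idea described here, i.e., driving a process through pub/sub messages instead of a central coordinator, can be sketched with an in-process broker. The broker, topics, and process definition below are illustrative assumptions, not the paper's system.

```python
# Minimal in-process sketch of publish/subscribe-driven process
# navigation: workers subscribe to activity topics, and finishing a
# step publishes the next one, so no central coordinator walks the
# process. All names here are hypothetical.

class Broker:
    def __init__(self):
        self.subscribers = {}          # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)

broker = Broker()
trace = []

# A linear three-step process; each worker records its step and then
# publishes the next activity, which triggers the next worker.
process = ["check_order", "reserve_stock", "ship"]

def make_worker(step, next_step):
    def worker(msg):
        trace.append(step)
        if next_step is not None:
            broker.publish(next_step, msg)
    return worker

for i, step in enumerate(process):
    nxt = process[i + 1] if i + 1 < len(process) else None
    broker.subscribe(step, make_worker(step, nxt))

broker.publish("check_order", {"order": 42})
# trace == ["check_order", "reserve_stock", "ship"]
```

With a real message broker, multiple workers could subscribe to the same topic, which is where the load balancing and failure handling the abstract mentions come in.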
How can we support Grid Transactions? Towards Peer-to-Peer Transaction Processing
- In Proceedings of the Second Conference on Innovative Data Systems Research (CIDR 2005), 2005
Cited by 12 (2 self)

Abstract
Today, we witness a merger of Web services and grid technology into an open grid service infrastructure that satisfies the demands of complex computations on huge volumes of data. Such applications are specified as combinations of services and are executed as workflow processes. While transactional support was neglected for (business) workflows, in the grid domain we observe not only a more general usage of workflow technology but also a stronger awareness of transactional guarantees. The rigid database notions of atomicity and isolation are, however, not suited for composite services in grid applications because of their complexity and duration. Moreover, the level of abstraction in the grid is far above that of database pages, so two-phase commit combined with two-phase locking, the state of the art for distributed transactions, is not adequate. Rather, compensation of services, restarting services, and invoking alternative services are needed. In this context, many questions remain open: How does the infrastructure detect and handle conflicts? What happens if a service is unavailable? Can we locally decide whether a distributed execution of transactions is globally correct? In this paper, we tackle some of these questions and sketch an approach to ensuring globally correct executions of transactional processes without a global coordinator.
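The three failure-handling strategies the abstract names (compensation, restarting, and alternative services) can be combined in a simple executor sketch. The step structure and service stand-ins below are assumptions for illustration, not a real grid API.

```python
# Sketch of failure handling for a composite service: try the primary
# service, fall back to an alternative, and compensate all completed
# steps in reverse order if a step cannot be completed at all.

def run_process(steps):
    """Each step is (primary, alternative_or_None, compensation).
    Returns True on success; on failure, compensates and returns False."""
    done = []                            # compensations for completed steps
    for primary, alternative, compensation in steps:
        for attempt in (primary, alternative):
            if attempt is not None and attempt():
                done.append(compensation)
                break
        else:                            # neither invocation succeeded
            for comp in reversed(done):  # undo in reverse order
                comp()
            return False
    return True

log = []
steps = [
    (lambda: log.append("reserve") or True, None,
     lambda: log.append("cancel_reserve")),
    (lambda: False,                               # primary fails
     lambda: log.append("alt_transfer") or True,  # alternative works
     lambda: log.append("undo_transfer")),
    (lambda: False, lambda: False,                # both fail -> rollback
     lambda: None),
]
ok = run_process(steps)
# ok is False; log == ["reserve", "alt_transfer",
#                      "undo_transfer", "cancel_reserve"]
```

Restarting a failed service would simply mean retrying `primary` before moving on to `alternative`; the key point is that recovery happens at the service level, not via page-level locking.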
Decentralized coordination of transactional processes in peer-to-peer environments
- In Proceedings of the Fourteenth ACM Conference on Information and Knowledge Management (CIKM 2005), 2005
Cited by 12 (3 self)

Abstract
Business processes executing in peer-to-peer environments usually invoke Web services on different, independent peers. Although peer-to-peer environments inherently lack global control, some business processes nevertheless require global transactional guarantees, i.e., atomicity and isolation applied at the level of processes. This paper introduces a new decentralized serialization graph testing protocol to ensure concurrency control and recovery in peer-to-peer environments. The uniqueness of the proposed protocol is that it ensures global correctness without relying on a global serialization graph. Essentially, each transactional process is equipped with partial knowledge that allows the transactional processes to coordinate. Globally correct execution is achieved by communication among dependent transactional processes and the peers they have accessed. In case of failures, a combination of partial backward and forward recovery is applied. Experimental results exhibit a significant performance gain over traditional distributed locking-based protocols with respect to the execution of transactions encompassing Web service requests.
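The serialization-graph-testing idea underlying this protocol can be shown in miniature: conflicts between transactions add precedence edges, and an operation is rejected if it would close a cycle. The paper's contribution is doing this without a global graph; for clarity, this hedged sketch keeps a single centralized graph.

```python
# Centralized sketch of serialization graph testing (SGT): an edge
# "before -> after" means `before` must serialize before `after`; a
# new conflict is rejected if it would create a cycle. The class and
# transaction names are illustrative only.

class SerializationGraph:
    def __init__(self):
        self.edges = {}                  # txn -> set of successor txns

    def _reachable(self, src, dst):
        """Depth-first search: can we reach dst from src?"""
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(self.edges.get(node, set()))
        return False

    def add_conflict(self, before, after):
        """Record that `before` precedes `after`; reject (return
        False) if that edge would close a cycle."""
        if self._reachable(after, before):
            return False
        self.edges.setdefault(before, set()).add(after)
        return True

g = SerializationGraph()
ok1 = g.add_conflict("T1", "T2")   # T1 -> T2: fine
ok2 = g.add_conflict("T2", "T3")   # T2 -> T3: fine
ok3 = g.add_conflict("T3", "T1")   # would close T1 -> T2 -> T3 -> T1
# ok1 and ok2 are True; ok3 is False
```

In the decentralized setting the abstract describes, each process holds only the edges it participates in and discovers cycles by exchanging dependency information with its neighbors, which is exactly the hard part the paper addresses.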
Transaction Synchronization in Knowledge Bases: Concepts, Realization and Quantitative Evaluation
1995
Cited by 11 (8 self)

Abstract
Large knowledge bases that are intended for applications such as CAD, corporate repositories, or process control will have to be shared by multiple users. For these systems to scale up, give acceptable performance, and exhibit consistent behavior, it is mandatory to synchronize user transactions using a concurrency control algorithm. Transactions in knowledge bases often access a large number of entities and perform complex inferences that may last for a long time. In such a situation, conventional concurrency control methods, which require a transaction to hold its locks until it has acquired all the locks it will ever need, do not lead to good performance. This thesis examines the problem of concurrency control for such long transactions in a knowledge-base setting. Using a directed graph as a general model of a knowledge base, we develop an algorithm, called the Dynamic Directed Graph (DDG) policy, that allows release of locks by a transaction before it has ...
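The contrast the abstract draws can be made concrete with a toy lock table: under strict two-phase locking a transaction keeps every lock until it finishes, whereas an early-release policy (in the spirit of DDG, though this is not the actual algorithm) frees a lock as soon as the transaction is done with that entity, letting waiters proceed sooner.

```python
# Toy lock table illustrating early lock release vs. holding locks to
# the end of the transaction. Entity and transaction names are made up.

class LockTable:
    def __init__(self):
        self.owner = {}                 # entity -> owning transaction id

    def acquire(self, txn, entity):
        """Grant the lock if free or already held by txn."""
        if self.owner.get(entity, txn) != txn:
            return False                # held by another txn: would block
        self.owner[entity] = txn
        return True

    def release(self, txn, entity):
        if self.owner.get(entity) == txn:
            del self.owner[entity]

locks = LockTable()
locks.acquire("T1", "node_a")
locks.acquire("T1", "node_b")

# Conventional policy: T2 blocks on node_a until T1 finishes entirely.
blocked = not locks.acquire("T2", "node_a")

# Early release: T1 no longer needs node_a, so it frees that lock
# before finishing, and T2 can proceed while T1 still holds node_b.
locks.release("T1", "node_a")
granted = locks.acquire("T2", "node_a")
# blocked is True, granted is True
```

The hard part, and the subject of the thesis, is deciding *when* early release is safe, since releasing too early can expose other transactions to uncommitted changes.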