Results 1 - 10 of 50
An Overview of Standards and Related Technology in Web Services
, 2002
Abstract - Cited by 52 (4 self)
The Internet is revolutionizing business by providing an affordable and efficient way to link companies with their partners as well as customers. Nevertheless, there are problems that degrade the profitability of the Internet: closed markets that cannot use each other's services; incompatible applications and frameworks that cannot interoperate or build upon each other; difficulties in exchanging business data. Web Services is a new paradigm for e-business that is expected to change the way business applications are developed and interoperate. A Web Service is a self-describing, self-contained, modular application accessible over the web. It exposes an XML interface and is registered in a Web Service registry, through which it can be located. Finally, it communicates with other services using XML messages over standard Web protocols. This paper presents the Web Service model and gives an overview of existing standards. It then sketches the Web Service life-cycle, discusses related technical challenges and how they are addressed by current standards, commercial products and research efforts. Finally, it gives some concluding remarks regarding the state of the art of Web Services.
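The interaction style this abstract describes (XML messages exchanged over standard Web protocols) can be illustrated with a minimal sketch. The operation name, parameter, and the client-side helper below are hypothetical illustrations, not taken from the paper; only the SOAP envelope namespace is a real, standard identifier.

```python
# Minimal sketch of the Web Service messaging style described above:
# a client wraps an operation call in a SOAP-style XML envelope and
# would send it to a service endpoint over HTTP. "GetQuote" and its
# parameter are made-up examples for illustration.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(operation: str, params: dict) -> str:
    """Build a SOAP-style XML request envelope for one operation call."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

msg = build_request("GetQuote", {"symbol": "ACME"})
print(msg)
```

In the full model the paper surveys, the service's interface would be described in a machine-readable form and discovered through a registry before any such message is sent.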
The RETSINA MAS Infrastructure
- the special joint issue of Autonomous Agents and MAS, Volume 7, Nos. 1 and 2
, 2001
Abstract - Cited by 40 (2 self)
RETSINA is an implemented Multi-Agent System infrastructure that has been developed for several years and applied in many domains ranging from financial portfolio management to logistic planning. In this paper, we distill from our experience in developing MASs to clearly define a generic MAS infrastructure as the domain-independent and reusable substratum that supports the agents' social interactions. In addition, we show that the MAS infrastructure imposes requirements on an individual agent if the agent is to be a member of a MAS and take advantage of various components of the MAS infrastructure. Although agents are expected to enter a MAS and seamlessly and effortlessly interact with the agents in the MAS infrastructure, the current state of the art demands that agents be programmed with knowledge of which infrastructure they will utilize and which fallback and recovery mechanisms that infrastructure provides. By providing an abstract MAS infrastructure model and a concrete implemented instance of the model, RETSINA, we contribute towards the development of principles and practice to make the MAS infrastructure "invisible" and ubiquitous to the interacting agents.
Matchmaking for autonomous agents in electronic marketplaces
- In Proceedings of the Fifth International Conference on Autonomous Agents
, 2001
Abstract - Cited by 23 (2 self)
Matchmaking is the process of mediating demand and supply based on profile information. Matchmaking plays a crucial role in agent-based electronic marketplaces: the problem to be solved is to find the most appropriate agents, products, or services for a task, negotiation, or market transaction. Most real-world problems require multidimensional matchmaking, i.e., the ability to combine various dimensions of decision-making to define an overall solution to a matchmaking problem, requiring the interplay of multiple matchmaking algorithms. In addition, in order to be applicable for real-world applications, the matchmaking component must be easily integrated into standard industrial marketplace platforms. The work described in this paper aims at deploying agent-based matchmaking for industrial electronic business applications. The main contributions of this work are the following: (i) we provide a configurable framework called GRAPPA (Generic Request Architecture for Passive Provider Agents) which is designed to be adapted to electronic marketplace applications. Using GRAPPA, system designers can easily specify demand and supply profiles as XML objects; (ii) within GRAPPA we provide an extensible library of matchmaking functions (building blocks) that can be used for rapid development of matchmaking solutions that include standard information retrieval algorithms. Areas: artificial market systems and electronic commerce, middle agents, agent-based software engineering
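The multidimensional matchmaking idea in this abstract can be sketched as a weighted combination of per-dimension matching functions. The dimensions, profiles, and weights below are hypothetical illustrations, not GRAPPA's actual building-block interface.

```python
# Illustrative sketch of multidimensional matchmaking: each profile
# dimension gets its own matching function, and a weighted sum yields
# the overall match score. All names and numbers are made up.

def keyword_overlap(demand, supply):
    """Jaccard similarity between two keyword sets (an IR-style measure)."""
    d, s = set(demand), set(supply)
    return len(d & s) / len(d | s) if d | s else 0.0

def price_fit(demand_max, supply_price):
    """1.0 if the offer is within budget, decaying as it exceeds it."""
    return 1.0 if supply_price <= demand_max else demand_max / supply_price

def match_score(demand, supply, weights):
    """Combine per-dimension scores into a single overall match value."""
    scores = {
        "keywords": keyword_overlap(demand["keywords"], supply["keywords"]),
        "price": price_fit(demand["max_price"], supply["price"]),
    }
    return sum(weights[dim] * scores[dim] for dim in scores)

demand = {"keywords": {"laptop", "16gb"}, "max_price": 1000}
offer = {"keywords": {"laptop", "16gb", "ssd"}, "price": 1200}
score = match_score(demand, offer, {"keywords": 0.6, "price": 0.4})
```

Running one matching function per dimension and aggregating afterwards is what lets a library of interchangeable building blocks, as the abstract describes, be recombined per application.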
Information Aggregation and Agent Interaction Patterns in InfoSleuth
- In cia99
, 1998
Abstract - Cited by 16 (5 self)
The MCC InfoSleuth Project is an agent-based system for information gathering and analysis tasks performed over networks of autonomous information sources. A key motivation of the InfoSleuth system is that real information gathering applications require long-running monitoring and integration of information at various levels of abstraction. To this end, InfoSleuth agents enable a loose integration of technologies allowing: (1) extraction of semantic concepts from autonomous information sources; (2) registration and integration of semantically annotated information from diverse sources; and (3) temporal monitoring, information routing, and identification of trends appearing across sources in the information network. In this paper we discuss the agents in InfoSleuth applications and the goal-driven interaction patterns that enable them to dynamically organize and cooperate to perform integrated and temporal information-gathering tasks. Keywords: Agent Technology, Information-Gatheri...
e-Services: Current Technologies and Open Issues
- In Proc.ofVLDB-TES 2001
Abstract - Cited by 15 (0 self)
The Internet changes the way business is conducted. It provides an affordable and easy way to link companies with their trading and distribution partners as well as customers. However, the Internet's potential is jeopardized by the rising digital anarchy: closed markets that cannot use each other's services; incompatible applications and frameworks that cannot interoperate or build upon each other; difficulties in exchanging business data; lack of highly available servers and secure communication. One solution to these problems is a new paradigm for e-business in which a rich array of modular electronic services (called e-services) is accessible by virtually anyone and any device. This new paradigm is currently the focus of the efforts of many researchers and software vendors. This paper presents the e-services architecture and its advantages over today's applications, and gives an overview of evolving standards. It then presents the related technical challenges, the way some of them are addressed by existing technology, and the remaining open issues.
Towards Evaluation of Peer-to-Peer-based Distributed Information Management
, 2003
Abstract - Cited by 12 (6 self)
Distributed knowledge management systems (DKMS) have been suggested to meet the requirements of today's knowledge management. Peer-to-peer systems offer technical foundations for such distributed systems. To estimate the value of P2P-based knowledge management, evaluation criteria that measure the performance of such DKMS are required. We suggest a concise framework for evaluation of such systems within different usage scenarios. Our approach is based on standard measures from the information retrieval and the databases community. These measures serve as input to a general evaluation function which is used to measure the efficiency of P2P-based KM systems. We describe test scenarios as well as the simulation software and data sets one can use for this purpose.
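The evaluation approach this abstract outlines, standard IR measures fed into a general evaluation function, can be sketched minimally. The F1/speed blend and its weights below are illustrative assumptions, not the paper's actual framework.

```python
# Sketch of an evaluation function in the style described above:
# per-query IR measures (precision, recall) are computed and then
# combined with a response-time term into one efficiency value.
# The weighting scheme is a made-up example.

def precision_recall(retrieved, relevant):
    """Standard IR measures over sets of document identifiers."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def efficiency(retrieved, relevant, answer_time, w_quality=0.7, w_speed=0.3):
    """Blend retrieval quality (F1) with response speed into one score."""
    p, r = precision_recall(retrieved, relevant)
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    speed = 1.0 / (1.0 + answer_time)  # maps seconds into (0, 1]
    return w_quality * f1 + w_speed * speed

score = efficiency(retrieved=["d1", "d2", "d3"], relevant=["d1", "d2"],
                   answer_time=0.5)
```

Varying the weights per usage scenario mirrors the abstract's point that the same measures can evaluate a P2P-based KM system under different scenarios.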
An Agent Infrastructure for Knowledge Discovery and Event Detection
, 1999
Abstract - Cited by 11 (0 self)
Data mining and data analysis are often a sub-component of a larger knowledge discovery process within an organization. Although toolkits for data mining generally provide some form of support for knowledge discovery, most don't provide the capabilities needed to adequately integrate data mining activities into the broader scope of the knowledge discovery process. This paper describes the expanded support for knowledge discovery being developed within MCC's InfoSleuth system, an agent-based, distributed infrastructure for information gathering and analysis over the Internet. Building upon emerging agent, Internet, and event detection technologies, our knowledge discovery infrastructure dynamically locates and integrates data resources accessible on the Internet, incorporates data analysis tasks into composite event detection activities, and integrates data analysis with continuous monitoring and "smart push" capabilities. We describe the infrastructure in the context of an example from...
A Knowledge-based Framework for Dynamic Semantic Web Services Brokering and Management
- in International Workshop on Web Semantics - WebS 2004
, 2004
Abstract - Cited by 11 (7 self)
The concept of automating Web services, specifically the brokering activities, is an active research topic. We need a comprehensive and overarching framework that handles the discovery, differentiation, negotiation and selection processing within the context of workflow management, and addresses the issues related to Virtual Organizations. The goal is to add semantics to Web services to endow them with capabilities currently lacking in the literature, but necessary for their successful deployment in future systems. This paper references how such a framework, called the KDSWS Framework, addresses in an integrated end-to-end manner the life-cycle of activities involved in brokering and managing Semantic Web Services. The following issues are addressed: semantic specification of services' capabilities; brokering of the services; workflow management; resource management; interoperation; and evolution of the Virtual Organization.
Using Methods of Declarative Logic Programming for Intelligent Information Agents
- TPLP
, 2002
Abstract - Cited by 10 (4 self)
At present, the search for specific information on the World Wide Web is faced with several problems, which arise on the one hand from the vast number of information sources available, and on the other hand from their intrinsic heterogeneity, since standards are missing. A promising approach for solving the complex problems emerging in this context is the use of multi-agent systems of information agents, which cooperatively solve advanced information-retrieval problems. This requires advanced capabilities to address complex tasks, such as search and assessment of information sources, query planning, information merging and fusion, dealing with incomplete information, and handling of inconsistency. In this paper, our interest lies in the role which some methods from the field of declarative logic programming can play in the realization of reasoning capabilities for information agents. In particular, we are interested in seeing how they can be used, extended, and further developed for the specific needs of this application domain. We review some existing systems and current projects, which typically address information-integration problems. We then focus on declarative knowledge-representation methods, and review and evaluate approaches and methods from logic programming and nonmonotonic reasoning for information agents. We discuss advantages and drawbacks, and point out possible extensions and open issues.
Query Processing and Optimization on the Web
, 2004
Abstract - Cited by 9 (1 self)
The advent of the Internet and the Web and their subsequent ubiquity have brought forth opportunities to connect information sources across all types of boundaries (local, regional, organizational, etc.). Examples of such information sources include databases, XML documents, and other unstructured sources. Uniformly querying those information sources has been extensively investigated. A major challenge relates to query optimization. Indeed, querying multiple information sources scattered on the Web raises several barriers to achieving efficiency. This is due to the characteristics of Web information sources, which include volatility, heterogeneity, and autonomy. Those characteristics impede a straightforward application of classical query optimization techniques. They add new dimensions to the optimization problem such as the choice of objective function, selection of relevant information sources, limited query capabilities, and unpredictable events. In this paper, we survey the current research on fundamental problems to efficiently process queries over Web data integration systems. We also outline a classification for optimization techniques and a framework for evaluating them.
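One of the optimization dimensions this abstract lists, selection of relevant sources under an objective function, can be sketched as a greedy trade-off between estimated coverage and access cost. The source statistics and the scoring rule below are hypothetical illustrations, not techniques from the survey.

```python
# Illustrative sketch of source selection for Web query processing:
# rank candidate sources by estimated coverage per unit cost, then
# pick greedily within a cost budget. All statistics are made up.

def select_sources(sources, max_cost):
    """Greedy selection: best coverage-per-cost first, within budget."""
    ranked = sorted(sources, key=lambda s: s["coverage"] / s["cost"],
                    reverse=True)
    chosen, spent = [], 0.0
    for s in ranked:
        if spent + s["cost"] <= max_cost:
            chosen.append(s["name"])
            spent += s["cost"]
    return chosen

sources = [
    {"name": "A", "coverage": 0.9, "cost": 5.0},  # broad but expensive
    {"name": "B", "coverage": 0.4, "cost": 1.0},  # cheap partial source
    {"name": "C", "coverage": 0.3, "cost": 4.0},  # poor coverage-per-cost
]
plan = select_sources(sources, max_cost=6.0)
```

In a real Web setting, the abstract's point is precisely that such static cost estimates are unreliable (volatility, autonomy, limited query capabilities), which is what makes the optimization problem harder than its classical counterpart.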