Results 1 - 10 of 226
Architectural Styles and the Design of Network-based Software Architectures
2000
"...
The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The Web has been iteratively developed over the past ten years through a series of modifications to the standards that define its ..."
Cited by 1119 (1 self)
Abstract:
The World Wide Web has succeeded in large part because its software architecture has been designed to meet the needs of an Internet-scale distributed hypermedia system. The Web has been iteratively developed over the past ten years through a series of modifications to the standards that define its architecture. In order to identify those aspects of the Web that needed improvement and avoid undesirable modifications, a model for the modern Web architecture was needed to guide its design, definition, and deployment.
Software architecture research investigates methods for determining how best to partition a system, how components identify and communicate with each other, how information is communicated, how elements of a system can evolve independently, and how all of the above can be described using formal and informal notations. My work is motivated by the desire to understand and evaluate the architectural design of network-based application software through principled use of architectural constraints, thereby obtaining the functional, performance, and social properties desired of an architecture. An architectural style is a named, coordinated set of architectural constraints.
This dissertation defines a framework for understanding software architecture via architectural styles and demonstrates how styles can be used to guide the architectural design of network-based application software. A survey of architectural styles for network-based applications is used to classify styles according to the architectural properties they induce on an architecture for distributed hypermedia. I then introduce the Representational State Transfer (REST) architectural style and describe how REST has been used to guide the design and development of the architecture for the modern Web.
REST emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems. I describe the software engineering principles guiding REST and the interaction constraints chosen to retain those principles, contrasting them to the constraints of other architectural styles. Finally, I describe the lessons learned from applying REST to the design of the Hypertext Transfer Protocol and Uniform Resource Identifier standards, and from their subsequent deployment in Web client and server software.
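To make the uniform-interface and cache-validation constraints concrete, here is a minimal sketch (an illustration prompted by the abstract, not code from the dissertation) of a client interacting with a resource through generic HTTP methods and an opaque validator; the URL and the server's support for ETags are assumptions.

```python
# Minimal sketch of REST's uniform interface plus cache validation.
# The resource URL is hypothetical; any origin server that emits ETags works.
import urllib.request
import urllib.error

RESOURCE = "http://example.org/articles/42"    # a resource identified by a URI

# First transfer of a representation: a generic GET, no service-specific API.
with urllib.request.urlopen(RESOURCE) as resp:
    cached_body = resp.read()
    etag = resp.headers.get("ETag")            # opaque validator from the origin

# Later revalidation: ask whether the cached representation is still current.
headers = {"If-None-Match": etag} if etag else {}
req = urllib.request.Request(RESOURCE, headers=headers)
try:
    with urllib.request.urlopen(req) as resp:
        cached_body = resp.read()              # 200: changed, replace the cached copy
except urllib.error.HTTPError as err:
    if err.code != 304:                        # 304 Not Modified: cached copy still valid
        raise
```

Because the methods are generic and the validator is opaque, an intermediary cache along the request path could perform the same revalidation without any knowledge of the resource's semantics.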
The World-Wide Web
- Communications of the ACM, 1994
"... Abstract Berners-Lee, T.J., R. Cailliau and J.-F. Groff, The world-wide web, Computer Networks and ISDN Systems 25 (1992) 454-459. This paper describes the World-Wide Web (W3) global information system initiative, its protocols and data formats, and how it is used in practice. It discusses the plet ..."
Cited by 334 (0 self)
Abstract:
Berners-Lee, T.J., R. Cailliau and J.-F. Groff, The world-wide web, Computer Networks and ISDN Systems 25 (1992) 454-459. This paper describes the World-Wide Web (W3) global information system initiative, its protocols and data formats, and how it is used in practice. It discusses the plethora of different but similar information systems which exist, and how the web unifies them, creating a single information space. We describe the difficulties of information sharing between colleagues, and the basic W3 model of hypertext and searchable indexes. We list the protocols used by W3 and describe a new simple search and retrieve protocol (HTTP), and the SGML-style document encoding used. We summarize the current status of the X11, NeXTStep, dumb terminal and other clients, and of the available server and gateway software.
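As a rough illustration of the "simple search and retrieve protocol" (HTTP) mentioned in the abstract: the earliest protocol amounted to a single request line over a TCP connection. The sketch below sends a minimal HTTP/1.0-style request by hand; the host and path are hypothetical.

```python
# Sketch of an early-style HTTP retrieval: open a TCP connection, send one
# request, read the document, close. Host and path are hypothetical.
import socket

HOST, PATH = "example.org", "/hypertext/WWW/TheProject.html"

with socket.create_connection((HOST, 80), timeout=10) as sock:
    # HTTP/0.9 consisted of just "GET <path>"; HTTP/1.0 added a version,
    # request headers, and a status line. A minimal 1.0-style request:
    sock.sendall(f"GET {PATH} HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode("ascii"))
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)

print(b"".join(chunks)[:200])   # status line and headers, then the document
```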
Characterizing Browsing Strategies in the World-Wide Web
- Computer Networks and ISDN Systems, 1995
"... This paper presents the results of a study conducted at Georgia Institute of Technology that captured client-side user events of NCSA's XMosaic. Actual user behavior, as determined from clientside log file analysis, supplemented our understanding of user navigation strategies as well as provide ..."
Cited by 278 (4 self)
Abstract:
This paper presents the results of a study conducted at Georgia Institute of Technology that captured client-side user events of NCSA's XMosaic. Actual user behavior, as determined from client-side log file analysis, supplemented our understanding of user navigation strategies as well as providing real interface usage data. Log file analysis also yielded design and usability suggestions for WWW pages, sites, and browsers. The methodology of the study and findings are discussed along with future research directions.
Keywords: Hypertext Navigation, Log Files, User Modeling
Introduction: With the prolific growth of the World-Wide Web (WWW) [Berners-Lee et al., 1992] in the past year there has been an increased demand for an understanding of the WWW audience. Several studies exist that determine demographics and some behavioral characteristics of WWW users via self-selection [Pitkow and Recker 1994a & 1994b]. Though highly informative, such studies only provide high-level trends in Web use (e...
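The kind of client-side event-log analysis the study relies on can be sketched roughly as follows; the one-line-per-event log format (timestamp, session, action, URL) is a hypothetical stand-in, not the study's actual XMosaic instrumentation format.

```python
# Rough sketch of client-side log analysis: tally navigation actions per
# session from a hypothetical whitespace-delimited event log.
from collections import Counter, defaultdict

def summarize(log_path):
    """Return {session: Counter of navigation actions} from the event log."""
    actions_per_session = defaultdict(Counter)
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 4:
                continue                      # skip malformed lines
            _timestamp, session, action, _url = parts[:4]
            actions_per_session[session][action] += 1
    return actions_per_session

if __name__ == "__main__":
    for session, counts in summarize("mosaic_events.log").items():
        total = sum(counts.values())
        back_share = counts["back"] / total if total else 0.0
        print(session, dict(counts), f"back-button share: {back_share:.0%}")
```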
The case for geographical push caching.
- Fifth Annual Workshop on Hot Topics in Operating Systems, 1995
"... ..."
(Show Context)
Harvest: A Scalable, Customizable Discovery and Access System
1995
"... Rapid growth in data volume, user base, and data diversity render Internet-accessible information increasingly difficult to use effectively. In this paper we introduce Harvest, a system that provides an integrated set of customizable tools for gathering information from diverse repositories, buil ..."
Cited by 178 (8 self)
Abstract:
Rapid growth in data volume, user base, and data diversity renders Internet-accessible information increasingly difficult to use effectively. In this paper we introduce Harvest, a system that provides an integrated set of customizable tools for gathering information from diverse repositories, building topic-specific content indexes, flexibly searching the indexes, widely replicating them, and caching objects as they are retrieved across the Internet. The system interoperates with WWW clients and with HTTP, FTP, Gopher, and NetNews information resources. We discuss the design and implementation of Harvest and its subsystems, give examples of its uses, and provide measurements indicating that Harvest can significantly reduce server load, network traffic, and space requirements when building indexes, compared with previous systems. We also discuss several popular indexes we have built using Harvest, underscoring the customizability and scalability of the system.
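As a loose illustration of the gather-then-index pattern described in the abstract (not Harvest's actual gatherer/broker code), the sketch below fetches a few documents and builds a tiny inverted index; the seed URLs are hypothetical.

```python
# Loose sketch of a gather-and-index pipeline: fetch documents, tokenize,
# and build an inverted index. Not Harvest's implementation; URLs are made up.
import re
import urllib.request
from collections import defaultdict

SEEDS = ["http://example.org/a.html", "http://example.org/b.html"]

def gather(urls):
    """Yield (url, text) pairs for each document we can retrieve."""
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                yield url, resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue                          # skip unreachable repositories

def build_index(documents):
    """Map each term to the set of document URLs containing it."""
    index = defaultdict(set)
    for url, text in documents:
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            index[term].add(url)
    return index

if __name__ == "__main__":
    index = build_index(gather(SEEDS))
    print(sorted(index.get("harvest", ())))   # documents mentioning "harvest"
```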
Scalable Internet Resource Discovery: Research Problems and Approaches
1994
"... Over the past several years, a number of information discovery and access tools have been introduced in the Internet, including Archie, Gopher, Netfind, and WAIS. These tools have become quite popular, and are helping to redefine how people think about wide-area network applications. Yet, they ar ..."
Cited by 145 (3 self)
Abstract:
Over the past several years, a number of information discovery and access tools have been introduced in the Internet, including Archie, Gopher, Netfind, and WAIS. These tools have become quite popular, and are helping to redefine how people think about wide-area network applications. Yet, they are not well suited to supporting the future information infrastructure, which will be characterized by enormous data volume, rapid growth in the user base, and burgeoning data diversity. In this paper we indicate trends in these three dimensions and survey problems these trends will create for current approaches. We then suggest several promising directions of future resource discovery research, along with some initial results from projects carried out by members of the Internet Research Task Force Research Group on Resource Discovery and Directory Service.
Alex -- a global filesystem
- In Proceedings of the 1992 USENIX File System Workshop, 1992
"... The Alex filesystem provides users and applications transparent read access to files in Internet anonymous FTP sites. Today there are thousands of anonymous FTP sites with a total of a few million files and roughly a terabyte of data. The standard approach to accessing these files involves logging i ..."
Cited by 145 (0 self)
Abstract:
The Alex filesystem provides users and applications transparent read access to files in Internet anonymous FTP sites. Today there are thousands of anonymous FTP sites with a total of a few million files and roughly a terabyte of data. The standard approach to accessing these files involves logging in to the remote machine. This means that an application cannot access remote files and that users do not have any of their aliases or local tools available when connected to a remote site. Users who want to use an application on a remote file must first manually make a local copy of the file. Not only is this inconvenient, it creates two more problems. First, there is no mechanism for automatically updating this local copy when the remote file changes. Users must keep track of where they got their files, check for updates, and then fetch them. Second, many different users at the same site may have made copies of the same remote file, thus wasting disk space. Alex addresses the problems with the above approach while maintaining compatibility with the existing FTP protocol so that the large collection of currently available files can be accessed. To get reasonable performance, long-term file caching must be used. Thus consistency must be addressed. Traditional solutions to the cache consistency problem do not work in the Internet FTP domain: callbacks are not an
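A crude sketch of the cache-plus-refetch behavior a system like Alex needs is shown below (my illustration, not Alex's actual design, which as the abstract notes must confront consistency more carefully than a fixed time-to-live does); host, path, and cache directory are hypothetical.

```python
# Crude sketch of cached read access to anonymous FTP files with a fixed
# time-to-live. Illustrates the consistency trade-off discussed above; this
# is not Alex's design. Host, path, and cache directory are hypothetical.
import os
import time
from ftplib import FTP

CACHE_DIR = "/tmp/ftp-cache"
TTL_SECONDS = 3600                       # trust a cached copy for an hour

def cached_read(host, path):
    local = os.path.join(CACHE_DIR, host + path.replace("/", "_"))
    os.makedirs(CACHE_DIR, exist_ok=True)

    age_ok = os.path.exists(local) and time.time() - os.path.getmtime(local) < TTL_SECONDS
    if age_ok:
        with open(local, "rb") as f:     # serve from cache, no network traffic
            return f.read()

    with FTP(host) as ftp:               # stale or missing: refetch over FTP
        ftp.login()                      # anonymous login
        with open(local, "wb") as f:
            ftp.retrbinary(f"RETR {path}", f.write)
    with open(local, "rb") as f:
        return f.read()

# Example call (hypothetical server):
# data = cached_read("ftp.example.org", "/pub/README")
```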
VIKI: Spatial Hypertext Supporting Emergent Structure
1994
"... The emergent nature of structure is a crucial, but often ignored, constraint on authoring hypertexts. VIKI is a spatial hypertext system that supports the emergent qualities of structure and the abstractions that guide its creation. We have found that a visual/spatial metaphor for hypertext allows p ..."
Cited by 136 (16 self)
Abstract:
The emergent nature of structure is a crucial, but often ignored, constraint on authoring hypertexts. VIKI is a spatial hypertext system that supports the emergent qualities of structure and the abstractions that guide its creation. We have found that a visual/spatial metaphor for hypertext allows people to express the nuances of structure, especially ambiguous, partial, or emerging structure, more easily. VIKI supports interpretation of a collected body of materials, a task that becomes increasingly important with the availability of on-line information sources. The tool's data model includes semi-structured objects, collections that provide the basis for spatial navigation, and object composites, all of which may evolve into types. A spatial parser supports this evolution and enhances user interaction with changing, visually apparent organizations.
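To make the data model concrete, here is a rough sketch (my reading of the abstract, not VIKI's actual implementation) of semi-structured objects with spatial placement, composites, and collections; all class and field names are hypothetical.

```python
# Rough sketch of a spatial-hypertext data model in the spirit of the abstract:
# semi-structured objects placed in space, composites, and collections.
# Field names are hypothetical, not VIKI's actual schema.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Union

@dataclass
class InfoObject:
    """A semi-structured object: free-form attributes plus a position in space."""
    position: Tuple[float, float]
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class Composite:
    """A visually grouped cluster of objects that may later be treated as a type."""
    parts: List[InfoObject] = field(default_factory=list)

    def implied_type(self) -> List[str]:
        """Attribute names shared by every part, the structure a spatial parser might infer."""
        keys = [set(p.attributes) for p in self.parts]
        return sorted(set.intersection(*keys)) if keys else []

@dataclass
class Collection:
    """A spatial workspace of members that provides the basis for navigation."""
    name: str
    members: List[Union[InfoObject, Composite]] = field(default_factory=list)

# Example: two cards placed near each other form a composite whose shared
# attributes (author, title) suggest an emergent "reference" type.
a = InfoObject((10, 20), {"title": "Paper A", "author": "Smith"})
b = InfoObject((12, 22), {"title": "Paper B", "author": "Jones", "year": "1994"})
workspace = Collection("references", [Composite([a, b])])
print(workspace.members[0].implied_type())    # ['author', 'title']
```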