Results 1 - 10 of 108
Authoritative Sources in a Hyperlinked Environment - Journal of the ACM, 1999
"... The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and repo ..."
Abstract - Cited by 3632 (12 self)
The network structure of a hyperlinked environment can be a rich source of information about the content of the environment, provided we have effective means for understanding it. We develop a set of algorithmic tools for extracting information from the link structures of such environments, and report on experiments that demonstrate their effectiveness in a variety of contexts on the World Wide Web. The central issue we address within our framework is the distillation of broad search topics, through the discovery of “authoritative” information sources on such topics. We propose and test an algorithmic formulation of the notion of authority, based on the relationship between a set of relevant authoritative pages and the set of “hub pages” that join them together in the link structure. Our formulation has connections to the eigenvectors of certain matrices associated with the link graph; these connections in turn motivate additional heuristics for link-based analysis.
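The hub/authority formulation described in this abstract can be illustrated with a short mutual-reinforcement iteration. The sketch below is illustrative only (the adjacency-list input and names are assumptions, not taken from the paper), but it shows how the two score vectors the abstract mentions reinforce each other and converge toward eigenvectors of matrices derived from the link graph.

```python
# A minimal sketch, assuming the link graph is given as an adjacency list
# mapping each page to the set of pages it links to (names are illustrative).

def hubs_and_authorities(links, iterations=50):
    pages = set(links) | {q for targets in links.values() for q in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score: sum of hub scores of the pages that link to it.
        auth = {p: 0.0 for p in pages}
        for p, targets in links.items():
            for q in targets:
                auth[q] += hub[p]
        # Hub score: sum of authority scores of the pages it links to.
        hub = {p: sum(auth[q] for q in links.get(p, ())) for p in pages}
        # Normalize so both vectors settle toward principal eigenvectors,
        # matching the eigenvector connection the abstract alludes to.
        for scores in (hub, auth):
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth

# Example: pages "a" and "b" both point to "c", so "c" emerges as the authority.
print(hubs_and_authorities({"a": {"c"}, "b": {"c"}}))
```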
Cluster-Based Scalable Network Services, 1997
"... This paper has benefited from the detailed and perceptive comments of our reviewers, especially our shepherd Hank Levy. We thank Randy Katz and Eric Anderson for their detailed readings of early drafts of this paper, and David Culler for his ideas on TACC's potential as a model for cluster prog ..."
Abstract - Cited by 400 (36 self)
This paper has benefited from the detailed and perceptive comments of our reviewers, especially our shepherd Hank Levy. We thank Randy Katz and Eric Anderson for their detailed readings of early drafts of this paper, and David Culler for his ideas on TACC's potential as a model for cluster programming. Ken Lutz and Eric Fraser configured and administered the test network on which the TranSend scaling experiments were performed. Cliff Frost of the UC Berkeley Data Communications and Networks Services group allowed us to collect traces on the Berkeley dialup IP network and has worked with us to deploy and promote TranSend within Berkeley. Undergraduate researchers Anthony Polito, Benjamin Ling, and Andrew Huang implemented various parts of TranSend's user profile database and user interface. Ian Goldberg and David Wagner helped us debug TranSend, especially through their implementation of the rewebber
SPAND: Shared Passive Network Performance Discovery - USENIX Symposium on Internet Technologies and Systems, 1997
"... In the Internet today, users and applications must often make decisions based on the performance they expect to receive from other Internet hosts. For example, users can often view many Web pages in low-bandwidth or high-bandwidth versions, while other pages present users with long lists of mirror s ..."
Abstract - Cited by 221 (8 self)
In the Internet today, users and applications must often make decisions based on the performance they expect to receive from other Internet hosts. For example, users can often view many Web pages in low-bandwidth or high-bandwidth versions, while other pages present users with long lists of mirror sites to choose from. Current techniques for making these decisions are often ad hoc or poorly designed. The most common solution used today is to require the user to make decisions manually, based on their own experience and whatever information is provided by the application. Previous efforts to automate this decision-making process have relied on isolated, active network probes from a host. Unfortunately, this method of making measurements has several problems. Active probing introduces unnecessary network traffic that can quickly become a significant part of the total traffic handled by busy Web servers. Probing from a single host results in less accurate information and more redundant network probes than a system that shares information with nearby hosts. In this paper, we propose a system called SPAND (Shared Passive Network Performance Discovery) that determines network characteristics by making shared, passive measurements from a collection of hosts. We show why using passive measurements from a collection of hosts has advantages over using active measurements from a single host. We also show that sharing measurements can significantly increase the accuracy and timeliness of predictions. In addition, we present an initial prototype design of SPAND, the current implementation status of our system, and initial performance results that show the potential benefits of SPAND.
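The core idea of the abstract, sharing passive observations instead of probing actively, can be sketched with a small shared repository. This is an illustrative sketch under assumed names and data shapes, not the actual SPAND protocol or API.

```python
# Illustrative sketch (not the actual SPAND design): hosts contribute passive
# observations of transfer performance to a shared repository, and queries
# return a prediction built from what nearby hosts have already seen.
from collections import defaultdict
from statistics import median

class SharedPerformanceCache:
    def __init__(self):
        # server name -> throughputs (bytes/sec) observed by nearby hosts
        self._observations = defaultdict(list)

    def report(self, server, throughput_bps):
        """Record a measurement taken passively during a normal transfer."""
        self._observations[server].append(throughput_bps)

    def predict(self, server):
        """Return a shared estimate, or None if no host has contacted this server."""
        samples = self._observations.get(server)
        return median(samples) if samples else None

# Example: pick the mirror with the best shared estimate instead of probing.
cache = SharedPerformanceCache()
cache.report("mirror-a.example.net", 120_000)
cache.report("mirror-b.example.net", 40_000)
mirrors = ["mirror-a.example.net", "mirror-b.example.net"]
print(max(mirrors, key=lambda m: cache.predict(m) or 0))
```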
System Design Issues for Internet Middleware Services: Deductions from a Large Client Trace, 1997
"... In this thesis, we present the analysis of a large client-side web trace gathered from the Home IP service at the University of California at Berkeley. Specifically, we demonstrate the heterogeneity of web clients, the existence of a strong and very predictable diurnal cycle in the clients' we ..."
Abstract - Cited by 194 (10 self)
In this thesis, we present the analysis of a large client-side web trace gathered from the Home IP service at the University of California at Berkeley. Specifically, we demonstrate the heterogeneity of web clients, the existence of a strong and very predictable diurnal cycle in the clients' web activity, the burstiness of clients' requests at small time scales (but not large time scales, implying a lack of self-similarity), the presence of locality of reference in the clients' requests that is a strong function of the client population size, and the high latency that services encounter when delivering data to clients, implying that services will need to maintain a very large number of simultaneously active requests. We then present system design issues for Internet midd...
Adapting to Network and Client Variation Using Active Proxies: Lessons and Perspectives - IEEE Personal Communications, 1998
"... luding screen size, color depth, effective bandwidth, processing power, and ability to handle specific data encodings, e.g., GIF, PostScript, or MPEG. As shown in tables 1 and 2, each type of variation often spans orders of magnitude. High-volume devices such as smart phones [12] and smart two-way p ..."
Abstract - Cited by 124 (10 self)
... including screen size, color depth, effective bandwidth, processing power, and ability to handle specific data encodings, e.g., GIF, PostScript, or MPEG. As shown in tables 1 and 2, each type of variation often spans orders of magnitude. High-volume devices such as smart phones [12] and smart two-way pagers will soon constitute an increasing fraction of Internet clients, making the variation even more pronounced. These conditions make it difficult for servers to provide a level of service that is appropriate for every client. Application-level adaptation is required to provide a meaningful Internet experience across the range of client capabilities. Although we expect clients to improve over time, there will always be older systems still in use that represent relatively obsolete clients, and the high end will advance roughly in parallel with the low end, effectively maintaining a gap between the two: there will always be a large difference between the very best laptop and t...
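The application-level adaptation this abstract argues for can be pictured as a proxy choosing among content variants based on a client's declared capabilities. The sketch below uses assumed field names and thresholds purely for illustration; it is not the paper's proxy implementation.

```python
# A minimal sketch, with assumed field names, of capability-aware adaptation:
# a proxy picks the content variant that best fits the requesting client.

def choose_variant(variants, client):
    """variants: dicts with 'encoding', 'bytes', 'width'; client: capability dict."""
    usable = [v for v in variants
              if v["encoding"] in client["accepted_encodings"]
              and v["width"] <= client["screen_width"]]
    if not usable:
        return None  # e.g. fall back to on-the-fly transcoding
    # Prefer the richest variant the client's bandwidth can fetch quickly.
    byte_budget = client["bandwidth_bps"] * client.get("max_wait_s", 2) / 8
    affordable = [v for v in usable if v["bytes"] <= byte_budget] or usable
    return max(affordable, key=lambda v: v["width"])

# Example: a small-screen, low-bandwidth client gets the small GIF variant.
client = {"accepted_encodings": {"gif", "jpeg"}, "screen_width": 160,
          "bandwidth_bps": 19_200}
variants = [{"encoding": "jpeg", "bytes": 400_000, "width": 1024},
            {"encoding": "gif", "bytes": 15_000, "width": 160}]
print(choose_variant(variants, client))
```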
ScentTrails: Integrating Browsing and Searching on the Web - ACM Transactions on Computer-Human Interaction, 2003
"... ..."
(Show Context)
Ontology-Based Personalized Search and Browsing - Web Intelligence and Agent Systems, 2003
"... This paper has not been submitted elsewhere in identical or similar form, nor will it be during the first three months after its submission to UMUAI. As the number of Internet users and the number of accessible Web pages grows, it is becoming increasingly difficult for users to find documents that a ..."
Abstract - Cited by 105 (1 self)
This paper has not been submitted elsewhere in identical or similar form, nor will it be during the first three months after its submission to UMUAI. As the number of Internet users and the number of accessible Web pages grow, it is becoming increasingly difficult for users to find documents that are relevant to their particular needs. Users must either browse through a large hierarchy of concepts to find the information for which they are looking or submit a query to a publicly available search engine and wade through hundreds of results, most of them irrelevant. The core of the problem is that whether the user is browsing or searching, and whether they are an eighth grade student or a Nobel prize winner, the identical information is selected and presented the same way. In this paper, we report on research that adapts information navigation based on a user profile structured as a weighted concept hierarchy. A user may create his or her own concept hierarchy and use it for browsing Web sites, or the user profile may be created from a reference ontology by ‘watching over the user’s shoulder’ while they browse. We show that these automatically created profiles reflect the user’s interests quite well and are able to produce moderate improvements when applied to search results. Current work is investigating the interaction between user profiles and conceptual search, wherein documents are indexed by their concepts in addition to their keywords.
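The re-ranking step the abstract mentions, applying a concept-weighted profile to search results, can be sketched compactly. The data structures and blending weight below are assumptions for illustration (a flattened view of the weighted concept hierarchy), not the paper's implementation.

```python
# A minimal sketch, with assumed data structures, of re-ranking search results
# against a user profile kept as weighted concepts.

def personalized_rank(results, profile, alpha=0.5):
    """results: (doc_id, engine_score, concepts) tuples; profile: {concept: weight}."""
    def combined(item):
        _doc_id, engine_score, concepts = item
        profile_score = sum(profile.get(c, 0.0) for c in concepts)
        # Blend the search engine's own score with the profile match.
        return alpha * engine_score + (1 - alpha) * profile_score
    return sorted(results, key=combined, reverse=True)

# Example: a profile that favors "machine learning" promotes doc2 over doc1.
profile = {"machine learning": 0.9, "gardening": 0.1}
results = [("doc1", 0.7, ["gardening"]), ("doc2", 0.6, ["machine learning"])]
print(personalized_rank(results, profile))
```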
Automation and Customization of Rendered Web Pages - Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology, 2005
"... ABSTRACT On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page, once delivered to the browser. This creates an opportunity for end-users who want to automate ..."
Abstract - Cited by 97 (13 self)
On the desktop, an application can expect to control its user interface down to the last pixel, but on the World Wide Web, a content provider has no control over how the client will view the page, once delivered to the browser. This creates an opportunity for end-users who want to automate and customize their web experiences, but the growing complexity of web pages and standards prevents most users from realizing this opportunity. We describe Chickenfoot, a programming system embedded in the Firefox web browser, which enables end-users to automate, customize, and integrate web applications without examining their source code. One way Chickenfoot addresses this goal is a novel technique for identifying page components by keyword pattern matching. We motivate this technique by studying how users name web page components, and present a heuristic keyword matching algorithm that identifies the desired component from the user's name.
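Keyword pattern matching over page components, as described in the abstract, can be sketched as scoring candidate elements by how many of the user's keywords appear in their visible text. This is an illustration with assumed element fields, not Chickenfoot's actual heuristic.

```python
# A minimal sketch of finding a page component by keyword matching, with
# assumed element fields ('label', 'nearby_text'); illustrative only.

def find_component(elements, keywords):
    """elements: dicts like {'id': ..., 'label': ..., 'nearby_text': ...}."""
    keywords = [k.lower() for k in keywords]
    def score(el):
        haystack = f"{el.get('label', '')} {el.get('nearby_text', '')}".lower()
        return sum(1 for k in keywords if k in haystack)
    best = max(elements, key=score)
    return best if score(best) > 0 else None

# Example: asking for the "search button" picks the element labeled "Search".
elements = [
    {"id": "q", "label": "", "nearby_text": "type your query here"},
    {"id": "go", "label": "Search", "nearby_text": "button"},
]
print(find_component(elements, ["search", "button"]))
```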
Puppeteer: Component-Based Adaptation for Mobile Computing - Proceedings of the 3rd USENIX Symposium on Internet Technologies and Systems, 2001
"... Puppeteer is a system for adapting component-based applications in mobile environments. Puppeteer takes advantage of the exported interfaces of these applications and the structured nature of the documents they manipulate to perform adaptation without modifying the applications. The system is struct ..."
Abstract - Cited by 97 (10 self)
Puppeteer is a system for adapting component-based applications in mobile environments. Puppeteer takes advantage of the exported interfaces of these applications and the structured nature of the documents they manipulate to perform adaptation without modifying the applications. The system is structured in a modular fashion, allowing easy addition of new applications and adaptation policies. Our initial prototype focuses on adaptation to limited bandwidth. It runs on Windows NT, and includes support for a variety of adaptation policies for Microsoft PowerPoint and Internet Explorer 5. We demonstrate that Puppeteer can support complex policies without any modification to the application and with little overhead. To the best of our knowledge, previous implementations of adaptations of this nature have relied on modifying the application.
Adapting to Network and Client Variation Using Infrastructural Proxies: Lessons and Perspectives - IEEE Personal Communications, 1998
"... many axes, including screen size, color depth, effective bandwidth, processing power, and ability to handle specific data encodings, e.g., GIF, PostScript, or MPEG. As shown in tables 1 and 2, each type of variation often spans orders of magnitude. High-volume devices such as smart phones [12] and s ..."
Abstract - Cited by 87 (0 self)
... many axes, including screen size, color depth, effective bandwidth, processing power, and ability to handle specific data encodings, e.g., GIF, PostScript, or MPEG. As shown in tables 1 and 2, each type of variation often spans orders of magnitude. High-volume devices such as smart phones [12] and smart two-way pagers will soon constitute an increasing fraction of Internet clients, making the variation even more pronounced. These conditions make it difficult for servers to provide a level of service that is appropriate for every client. Application-level adaptation is required to provide a meaningful Internet experience across the range of client capabilities. Despite continuing improvements in client computing power and connectivity, we expect the high end to advance roughly in parallel with the low end, effectively maintaining a gap between the two and therefore the need for application-level adaptation.