Results 1–10 of 175
The Cyclical Behavior of Equilibrium Unemployment and Vacancies
 American Economic Review
, 2005
"... This paper argues that a broad class of search models cannot generate the observed businesscyclefrequency fluctuations in unemployment and job vacancies in response to shocks of a plausible magnitude. In the U.S., the vacancyunemployment ratio is 20 times as volatile as average labor productivity ..."
Abstract

Cited by 401 (11 self)
This paper argues that a broad class of search models cannot generate the observed business-cycle-frequency fluctuations in unemployment and job vacancies in response to shocks of a plausible magnitude. In the U.S., the vacancy-unemployment ratio is 20 times as volatile as average labor productivity, while under weak assumptions, search models predict that the vacancy-unemployment ratio and labor productivity have nearly the same variance. I establish this claim both using analytical comparative statics in a very general deterministic search model and using simulations of a stochastic version of the model. I show that a shock that changes average labor productivity primarily alters the present value of wages, generating only a small movement along a downward-sloping Beveridge curve (unemployment-vacancy locus). A shock to the job destruction rate generates a counterfactually positive correlation between unemployment and vacancies. In both cases, the shock is only slightly amplified and the model exhibits virtually no propagation. I reconcile these findings with an existing literature and argue that the source of the model’s failure is a lack of wage rigidity, a consequence of the assumption that wages are determined by Nash bargaining.
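The amplification failure the abstract describes can be illustrated with a back-of-the-envelope comparative-statics exercise on the textbook matching model with Nash bargaining. The sketch below uses illustrative parameter values (not the paper's exact calibration) and measures how elastically market tightness θ = v/u responds to productivity p:

```python
import math

def tightness(p, z=0.4, beta=0.72, alpha=0.72, r=0.012, s=0.1, c=0.213, mu=1.35):
    """Solve the free-entry condition with a Nash-bargained wage,
    c*(r+s)/q(theta) = (1-beta)*(p-z) - beta*c*theta,
    for matching function q(theta) = mu * theta**(-alpha), by bisection."""
    f = lambda th: (1 - beta) * (p - z) - beta * c * th - c * (r + s) * th**alpha / mu
    lo, hi = 1e-6, 100.0          # f is decreasing: f(lo) > 0 > f(hi)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Elasticity of market tightness with respect to productivity around p = 1
eps = (math.log(tightness(1.05)) - math.log(tightness(1.0))) / math.log(1.05)
```

With these illustrative numbers the elasticity comes out near 1.7, an order of magnitude short of what the observed 20-fold relative volatility of the vacancy-unemployment ratio would require.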
The Evolution of the Web and Implications for an Incremental Crawler
, 1999
"... In this paper we study how to build an effective incremental crawler. The crawler selectively and incrementally updates its index and/or local collection of web pages, instead of periodically refreshing the collection in batch mode. The incremental crawler can improve the "freshness" of th ..."
Abstract

Cited by 219 (17 self)
In this paper we study how to build an effective incremental crawler. The crawler selectively and incrementally updates its index and/or local collection of web pages, instead of periodically refreshing the collection in batch mode. The incremental crawler can improve the "freshness" of the collection significantly and bring in new pages in a more timely manner. We first present results from an experiment conducted on more than half a million web pages over four months, to estimate how web pages evolve over time. Based on these experimental results, we compare various design choices for an incremental crawler and discuss their tradeoffs. We propose an architecture for the incremental crawler that combines the best design choices.
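The contrast with batch refreshing can be sketched as a tiny scheduling loop (the page names and change rates below are hypothetical) that always refreshes the page with the largest expected number of unseen changes:

```python
def incremental_crawl(change_rates, steps):
    """Greedy incremental scheduler: at each tick, refresh the single page
    whose expected staleness (change rate * time since last refresh) is
    largest, instead of refreshing every page periodically in batch."""
    last = {page: 0 for page in change_rates}
    refreshes = {page: 0 for page in change_rates}
    for t in range(1, steps + 1):
        page = max(change_rates, key=lambda p: change_rates[p] * (t - last[p]))
        last[page] = t
        refreshes[page] += 1
    return refreshes

counts = incremental_crawl({"news": 5.0, "blog": 1.0, "static": 0.1}, 100)
```

The fixed crawl budget is automatically concentrated on the fast-changing page, which is the intuition behind the selective update policies the paper compares.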
Synchronizing a Database to Improve Freshness
, 1999
"... In this paper we study how to refresh a local copy of an autonomous data source to maintain the copy uptodate. As the size of the data grows, it becomes more di#cult to maintain the copy "fresh," making it crucial to synchronize the copy e#ectively. We define two freshness metrics, chang ..."
Abstract

Cited by 160 (15 self)
In this paper we study how to refresh a local copy of an autonomous data source to keep the copy up-to-date. As the size of the data grows, it becomes more difficult to maintain a "fresh" copy, making it crucial to synchronize the copy effectively. We define two freshness metrics, change models of the underlying data, and synchronization policies. We analytically study how effective the various policies are. We also experimentally verify our analysis, based on data collected from 270 web sites for more than four months, and we show that our new policy improves freshness very significantly compared to policies currently in use.
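For the common special case of elements that change as Poisson processes, the time-averaged freshness of an element synced f times per time unit has a closed form, (1 − e^(−λ/f))/(λ/f), and it already exposes a counterintuitive finding of this line of work: allocating a fixed sync budget uniformly can beat allocating it proportionally to change rates. A minimal sketch with hypothetical rates:

```python
import math

def expected_freshness(lam, f):
    """Time-averaged probability a copy is fresh when the source changes as a
    Poisson process with rate lam and the copy is synced at frequency f."""
    if f == 0:
        return 0.0
    r = lam / f
    return (1 - math.exp(-r)) / r

def avg_freshness(rates, freqs):
    return sum(expected_freshness(l, f) for l, f in zip(rates, freqs)) / len(rates)

rates = [0.1, 10.0]          # one slowly and one rapidly changing element
budget = 2.0                 # total syncs per time unit, to be divided up
uniform = [budget / len(rates)] * len(rates)
proportional = [budget * l / sum(rates) for l in rates]
```

Spending almost the whole budget on the fast element barely helps it (it is stale most of the time regardless) while starving the slow element, so the uniform policy wins on average.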
Dynamic Power Management for Portable Systems
, 2000
"... Portable systems require long battery lifetime while still delivering high performance. Dynamic power management (DPM) policies trade off the performance for the power consumption at the system level in portable devices. In this work we present the timeindexed SMDP model (TISMDP) that we use to der ..."
Abstract

Cited by 136 (11 self)
Portable systems require long battery lifetime while still delivering high performance. Dynamic power management (DPM) policies trade off performance for power consumption at the system level in portable devices. In this work we present the time-indexed SMDP (TISMDP) model that we use to derive an optimal policy for DPM in portable systems. The TISMDP model is needed to handle the non-exponential user request interarrival times we observed in practice. We use our policy to control power consumption on three different devices: the SmartBadge portable device [18], the Sony Vaio laptop hard disk, and a WLAN card. Simulation results show large savings for all three devices when using our algorithm. In addition, we measured the power consumption and performance of our algorithm and compared it with other DPM algorithms for the laptop hard disk and WLAN card. The algorithm based on our TISMDP model consumes 1.7 times less power than the default Windows timeout policy for the hard disk and three times less power than the default algorithm for the WLAN card.
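The basic tradeoff that any DPM policy navigates — sleeping saves power during long idle periods but costs a wake-up penalty after short ones — can be seen in a small trace-driven energy model. The power numbers and idle-time trace below are illustrative, not measurements from the paper:

```python
def timeout_energy(idle_times, timeout, p_on=1.0, p_sleep=0.1, wake_cost=5.0):
    """Energy of a timeout DPM policy on a trace of idle-period lengths:
    stay on for `timeout` time units, then sleep; pay wake_cost when the
    next request arrives and the device must power back up."""
    e = 0.0
    for t in idle_times:
        if t <= timeout:
            e += t * p_on                                    # never slept
        else:
            e += timeout * p_on + (t - timeout) * p_sleep + wake_cost
    return e

trace = [1, 2, 50, 1, 60, 2]            # hypothetical idle periods
always_on = timeout_energy(trace, float("inf"))
eager     = timeout_energy(trace, 0)    # sleep immediately, pay many wake-ups
tuned     = timeout_energy(trace, 5)    # sleep only when idleness persists
```

A tuned timeout beats both extremes on a mixed trace; the optimal stochastic policies the abstract describes pick such decisions from a model of the interarrival distribution rather than a fixed timeout.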
Estimating Frequency of Change
 ACM Transactions on Internet Technology
, 2000
"... Many online data sources are updated autonomously and independently. In this paper, we make the case for estimating the change frequency of the data, to improve web crawlers, web caches and to help data mining. We first identify various scenarios, where different applications have different requirem ..."
Abstract

Cited by 112 (9 self)
Many online data sources are updated autonomously and independently. In this paper, we make the case for estimating the change frequency of the data, to improve web crawlers and web caches, and to help data mining. We first identify various scenarios in which different applications place different requirements on the accuracy of the estimated frequency. Then we develop several "frequency estimators" for the identified scenarios. In developing the estimators, we analytically show how precise and effective the estimators are, and we show that the estimators we propose can improve precision significantly.
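The core difficulty the paper addresses shows up in any visit-based monitor: if you only observe *whether* a Poisson-changing element changed between visits, the naive "changes seen per unit time" estimator is biased downward (it cannot count more than one change per visit), whereas inverting P(change in interval) = 1 − e^(−λI) is consistent. A seeded sketch with hypothetical parameters:

```python
import math
import random

def estimate_rates(true_rate, interval, n, seed=0):
    """Simulate n visits spaced `interval` apart to a page that changes as a
    Poisson process with rate true_rate; return (naive, improved) estimates."""
    rng = random.Random(seed)
    changed = sum(1 for _ in range(n)
                  if rng.random() < 1 - math.exp(-true_rate * interval))
    naive = changed / (n * interval)                 # biased: caps at 1/interval
    improved = -math.log(1 - changed / n) / interval # inverts 1 - e^{-rate*I}
    return naive, improved

naive, improved = estimate_rates(2.0, 1.0, 10_000)
```

With a true rate of 2 changes per interval, the naive estimate saturates well below 1 per interval, while the improved estimator recovers the true rate.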
The Time-Rescaling Theorem and Its Application to Neural Spike Train Data Analysis
 Neural Computation
, 2001
"... Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model’s validity prior to using it to make inferences about a particular neural system. Assessing goodnessoffit is a challenging problem for point pro ..."
Abstract

Cited by 75 (15 self)
Measuring agreement between a statistical model and a spike train data series, that is, evaluating goodness of fit, is crucial for establishing the model’s validity prior to using it to make inferences about a particular neural system. Assessing goodness-of-fit is a challenging problem for point process neural spike train models, especially for histogram-based models such as peristimulus time histograms (PSTH) and rate functions estimated by spike train smoothing. The time-rescaling theorem is a well-known result in probability theory, which states that any point process with an integrable conditional intensity function may be transformed into a Poisson process with unit rate. We describe how the theorem may be used to develop goodness-of-fit tests for both parametric and histogram-based point process models of neural spike trains. We apply these tests in two examples: a comparison of PSTH, inhomogeneous Poisson, and inhomogeneous Markov interval models of neural spike trains from the sup ...
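The theorem's content is easy to demonstrate numerically: rescale spike times by the cumulative conditional intensity and the resulting intervals should be i.i.d. Exp(1), which is the basis of the goodness-of-fit tests described here. A sketch with an assumed sinusoidal intensity (the intensity, horizon, and seed are all made up for illustration):

```python
import math
import random

def thinning_spikes(rate, rate_max, horizon, seed=1):
    """Sample an inhomogeneous Poisson spike train on [0, horizon] by thinning:
    propose candidates at rate_max, accept each with probability rate(t)/rate_max."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t > horizon:
            return spikes
        if rng.random() < rate(t) / rate_max:
            spikes.append(t)

lam = lambda t: 5.0 * (1.0 + math.sin(t))       # assumed intensity, max 10
Lam = lambda t: 5.0 * (t + 1.0 - math.cos(t))   # its integral (cumulative intensity)

spikes = thinning_spikes(lam, 10.0, 200.0)
# Time-rescaling: intervals of the transformed process should be Exp(1)
taus = [Lam(b) - Lam(a) for a, b in zip([0.0] + spikes[:-1], spikes)]
mean_tau = sum(taus) / len(taus)
```

In a real goodness-of-fit test one would further map z = 1 − e^(−τ) and compare the z-values against the uniform distribution with a Kolmogorov-Smirnov statistic; here the Exp(1) mean near 1 is the quick sanity check.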
Navigating Large Networks with Hierarchies
 In Visualization '93 Conference Proceedings
, 1993
"... This paper is aimed at the exploratory visualization of networks where there is a strength or weight associated with each link, and makes use of any hierarchy present on the nodes to aid the investigation of large networks. It describes a method of placing nodes on the plane that gives meaning to th ..."
Abstract

Cited by 74 (9 self)
This paper is aimed at the exploratory visualization of networks where there is a strength or weight associated with each link, and makes use of any hierarchy present on the nodes to aid the investigation of large networks. It describes a method of placing nodes on the plane that gives meaning to their relative positions. The paper discusses how linking and interaction principles aid the user in the exploration. Two examples are given: one of electronic mail communication over eight months within a department, and another concerned with changes to a large section of a computer program.

I. THE PROBLEM

It has almost become a cliché to start a paper with the observation that the amount of data in the world is growing rapidly, and that current efforts to extract useful information from data lag far behind the ability to create data. However, the cliché is true, and no less so in the field of network analysis and visualization than in any other. In many areas, scientists are realizing that the tools they have been using are limited in utility when applied to large, information-rich networks. Not only are networks of interest large in size (as measured by the number of nodes or links between nodes), but also in the amount of data collected for each node or link. The ability to examine statistics on the nodes and relate them to the network is of crucial importance. Examples of areas in which the analysis of large networks is important include:

i. Trade flows. The concern in this area is monitoring imports and exports of various products at several levels: international, interstate, and local. Besides examining many types of trade goods, there is also strong interest in spotting temporal patterns.

ii. Communication networks. This is an important and wide category, covering not only telecommunication networks, but also electronic mail (email), financial transaction, ATM/bank data transfer, and other data distribution networks.
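The placement idea — using the node hierarchy to make relative position meaningful — can be approximated in a few lines: lay leaves around a circle but keep each cluster's members contiguous, so hierarchy siblings end up spatially close. This is a simplified stand-in for the paper's method, with hypothetical cluster data:

```python
import math

def hierarchical_circle_layout(hierarchy):
    """Place leaf nodes evenly on a unit circle, grouped by parent cluster,
    so siblings in the hierarchy occupy adjacent angular positions."""
    leaves = [leaf for cluster in hierarchy.values() for leaf in cluster]
    pos = {}
    for i, leaf in enumerate(leaves):
        angle = 2 * math.pi * i / len(leaves)
        pos[leaf] = (math.cos(angle), math.sin(angle))
    return pos

clusters = {"deptA": ["a1", "a2", "a3"], "deptB": ["b1", "b2", "b3"]}
pos = hierarchical_circle_layout(clusters)
```

Because each cluster occupies a contiguous arc, average distances within a cluster are smaller than across clusters, so position alone hints at hierarchy membership before any links are drawn.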
Stochastic Roadmap Simulation: An Efficient Representation and Algorithm for Analyzing Molecular Motion
, 2002
"... Classic molecular motion simulation techniques, such as Monte Carlo (MC) simulation, generate motion pathways one at a time and spend most of their time in the local minima of the energy landscape defined over a molecular conformation space. Their high computational cost prevents them from being ..."
Abstract

Cited by 58 (16 self)
Classic molecular motion simulation techniques, such as Monte Carlo (MC) simulation, generate motion pathways one at a time and spend most of their time in the local minima of the energy landscape defined over a molecular conformation space. Their high computational cost prevents them from being used to compute ensemble properties, i.e., properties requiring the analysis of many pathways. This paper introduces Stochastic Roadmap Simulation (SRS) as a new computational approach for exploring the kinetics of molecular motion by simultaneously examining multiple pathways. These pathways are compactly encoded in a graph, which is constructed by sampling a molecular conformation space at random. This computation, which does not trace any particular pathway explicitly, circumvents the local-minima problem. Each edge in the graph represents a potential transition of the molecule and is associated with a probability indicating the likelihood of this transition. By viewing the graph as a Markov chain, ensemble properties can be efficiently computed over the entire molecular energy landscape.
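The "view the roadmap as a Markov chain" step can be illustrated on a toy one-dimensional energy landscape, a drastic simplification of a real conformation roadmap: Metropolis-style edge probabilities, absorbing unfolded/folded endpoints, and a fixed-point solve for the probability of reaching the folded state first, without tracing any single pathway:

```python
import math

def roadmap_fold_probability(energies, kT=1.0):
    """Tiny 1-D stand-in for a stochastic roadmap: states 0..N-1 in a chain,
    Metropolis-style transition probabilities from the energy landscape,
    absorbing 'unfolded' state 0 and 'folded' state N-1. Solves
    p_i = sum_j P_ij * p_j for the probability of folding first."""
    n = len(energies)

    def step_prob(i, j):
        # half the attempts go each way; uphill moves damped by the Boltzmann factor
        return 0.5 * min(1.0, math.exp(-(energies[j] - energies[i]) / kT))

    p = [0.0] * n
    p[-1] = 1.0
    for _ in range(5000):                 # fixed-point (Jacobi) iteration
        new = p[:]
        for i in range(1, n - 1):
            left, right = step_prob(i, i - 1), step_prob(i, i + 1)
            new[i] = left * p[i - 1] + right * p[i + 1] + (1 - left - right) * p[i]
        p = new
    return p

p = roadmap_fold_probability([0.0, 1.0, 2.0, 1.0, 0.0])   # barrier in the middle
```

Solving the linear system over the whole graph yields the ensemble property (here, a folding probability per start state) in one computation, which is the efficiency argument SRS makes against pathway-at-a-time simulation.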
Hybrid Bayesian Networks for Reasoning about Complex Systems
, 2002
"... Many realworld systems are naturally modeled as hybrid stochastic processes, i.e., stochastic processes that contain both discrete and continuous variables. Examples include speech recognition, target tracking, and monitoring of physical systems. The task is usually to perform probabilistic inferen ..."
Abstract

Cited by 48 (0 self)
Many real-world systems are naturally modeled as hybrid stochastic processes, i.e., stochastic processes that contain both discrete and continuous variables. Examples include speech recognition, target tracking, and monitoring of physical systems. The task is usually to perform probabilistic inference, i.e., to infer the hidden state of the system given some noisy observations. For example, we can ask what the probability is that a certain word was pronounced given the readings of our microphone, what the probability is that a submarine is trying to surface given our sonar data, or what the probability is of a valve being open given our pressure and flow readings. Bayesian networks are ...
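The valve example admits a minimal concrete instance of hybrid inference: one discrete hidden variable (valve open or closed) and one continuous Gaussian observation (a pressure reading), combined by Bayes' rule. All numbers below are made up for illustration:

```python
import math

def mode_posterior(obs, priors, means, sigma):
    """Posterior over discrete modes given one continuous Gaussian observation:
    P(mode | obs) proportional to P(mode) * N(obs; mean[mode], sigma^2)."""
    def gauss(x, mu):
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    joint = [p * gauss(obs, m) for p, m in zip(priors, means)]
    z = sum(joint)
    return [j / z for j in joint]

# Hypothetical model: open valve -> mean pressure 1.0, closed -> 5.0
post = mode_posterior(obs=1.2, priors=[0.5, 0.5], means=[1.0, 5.0], sigma=1.0)
```

A full hybrid Bayesian network chains many such discrete-continuous interactions over time; this one-step update is the building block.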
Event-Driven Power Management
 IEEE Transactions on Computer-Aided Design
, 2001
"... Energy consumption of electronic devices has become a serious concern in recent years. Power management (PM) algorithms aim at reducing energy consumption at the systemlevel by selectively placing components into lowpower states. Formerly, two classes of heuristic algorithms have been proposed for ..."
Abstract

Cited by 35 (6 self)
Energy consumption of electronic devices has become a serious concern in recent years. Power management (PM) algorithms aim at reducing energy consumption at the system level by selectively placing components into low-power states. Formerly, two classes of heuristic algorithms were proposed for PM: timeout and predictive. Later, a category of algorithms based on stochastic control was proposed for PM. These algorithms guarantee optimal results as long as the power-managed system can be modeled well with exponential distributions. We show that there is a large mismatch between measurements and simulation results if the exponential distribution is used to model all user request arrivals. We develop two new approaches that better model system behavior for general user request distributions. Our approaches are event-driven and give optimal results verified by measurements. The first approach we present is based on renewal theory. This model assumes that the decision to transition to a low-power state can be made in only one state. The other method we developed is based on the time-indexed semi-Markov decision process (TISMDP) model. This model has wider applicability because it assumes that a decision to transition into a lower-power state can be made upon each event occurrence from any number of states. This model allows for transitions into low-power states from any state, but it is also more complex than our other approach. It is important to note that the results obtained by the renewal model are guaranteed to match the results obtained by the TISMDP model, as both approaches give globally optimal solutions. We implemented our PM algorithms on two different classes of devices: two different hard disks and client-server wireless local area network systems such as the SmartB...
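The exponential-mismatch point is easy to reproduce in miniature: fit an exponential to heavy-tailed idle times by matching the mean, and it sharply underestimates how often long (sleep-worthy) idle periods occur. A seeded sketch with Pareto-distributed stand-in data — the paper's traces are real measurements; these are not:

```python
import math
import random

rng = random.Random(0)
# Stand-in "measured" idle periods: heavy-tailed Pareto(alpha=2.5, x_m=1)
observed = [rng.paretovariate(2.5) for _ in range(100_000)]
mean = sum(observed) / len(observed)

threshold = 15.0
# Empirical fraction of long idle periods in the heavy-tailed data
observed_tail = sum(1 for x in observed if x > threshold) / len(observed)
# An exponential model fitted to the same mean predicts far fewer of them
expo_tail = math.exp(-threshold / mean)
```

A policy tuned on the exponential model would rarely expect idle periods this long and would under-use the sleep state, which is the measurement-versus-simulation gap the abstract reports.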