Results 1–10 of 11
Near-optimal sensor placements in Gaussian processes
In ICML, 2005
Abstract

Cited by 174 (27 self)
When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in the GP model, and A-, D-, or E-optimal design. In this paper, we tackle the combinatorial optimization problem of maximizing the mutual information between the chosen locations and the locations which are not selected. We prove that the problem of finding the configuration that maximizes mutual information is NP-complete. To address this issue, we describe a polynomial-time approximation that is within (1 − 1/e) of the optimum by exploiting the submodularity of mutual information. We also show how submodularity can be used to obtain online bounds, and design branch-and-bound search procedures. We then extend our algorithm to exploit lazy evaluations and local structure in the GP, yielding significant speedups. We also extend our approach to find placements which are robust against node failures and uncertainties in the model. These extensions are again associated with rigorous theoretical approximation guarantees, exploiting the submodularity of the objective function. We demonstrate the advantages of our approach towards optimizing mutual information in a very extensive empirical study on two real-world data sets.
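The greedy strategy behind the abstract's (1 − 1/e) guarantee can be sketched in a few lines: repeatedly add the site whose conditional variance given the chosen set is largest relative to its conditional variance given the unchosen remainder. This is an illustrative reimplementation from the description above, not the authors' code; the squared-exponential kernel, length-scale, and noise level are all assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.3):
    """Squared-exponential covariance between two sets of points."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def cond_var(K, y, S):
    """Conditional variance of site y given the sites in S under the GP."""
    if not S:
        return K[y, y]
    Kss = K[np.ix_(S, S)]
    kys = K[y, S]
    return K[y, y] - kys @ np.linalg.solve(Kss, kys)

def greedy_mi_placement(X, k, lengthscale=0.3, noise=1e-4):
    """Greedily pick k sensor sites from candidate locations X,
    maximizing mutual information between chosen and unchosen sites."""
    n = len(X)
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(n)
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for y in range(n):
            if y in chosen:
                continue
            rest = [i for i in range(n) if i != y and i not in chosen]
            # MI gain of adding y is (1/2) log of this variance ratio,
            # so maximizing the ratio maximizes the gain.
            gain = cond_var(K, y, chosen) / cond_var(K, y, rest)
            if gain > best_gain:
                best, best_gain = y, gain
        chosen.append(best)
    return chosen
```

Each greedy step maximizes the mutual-information gain, which for Gaussians is half the log of the variance ratio computed above; the lazy-evaluation speedup mentioned in the abstract would cache these gains rather than recomputing them every round.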
Identifying patterns in spatial information: a survey of methods
Abstract

Cited by 3 (0 self)
Explosive growth in geospatial data and the emergence of new spatial technologies emphasize the need for automated discovery of spatial knowledge. Spatial data mining is the process of discovering interesting and previously unknown, but potentially useful, patterns from large spatial databases. The complexity of spatial data and implicit spatial relationships limit the usefulness of conventional data mining techniques for extracting spatial patterns. In this paper, we explore the emerging field of spatial data mining, focusing on different methods to extract patterns from spatial information. We conclude with a look at future research directions.
Variation-Tolerant Non-Uniform 3D Cache Management in Die Stacked Multicore Processor
Abstract

Cited by 1 (0 self)
Process variations in integrated circuits have a significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. DRAMs are built using minimized transistors with presumably uniform speed in an organized array structure. Process variation can introduce latency disparity among different memory arrays. With the proliferation of 3D stacking technology, DRAMs become a favorable choice for stacking on top of a multicore processor as a last-level cache for large capacity, high bandwidth, and low power. Hence, variation in bank speed creates a unique problem of non-uniform cache accesses in 3D space. In this paper, we investigate cache management techniques for tolerating process variation in a 3D DRAM stacked onto a multicore processor. We model the process variation in a 4-layer DRAM memory to characterize the latency variations among different banks. As a result, the notion of fast and slow banks from the core's standpoint is no longer determined by their physical distance from the core, but by the differing bank latencies due to process variation. We develop cache migration schemes that utilize fast banks while limiting the cost of migration. Our experiments show that there is a great performance benefit in exploiting fast memory banks through migration. On average, variation-aware management can improve the performance of a workload over the baseline (where the slowest bank speed is assumed for all banks) by 17.8%. We are also only 0.45% away in performance from an ideal memory with no process variation present.
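The migration idea in this abstract, moving frequently accessed blocks to the banks that process variation left fast, while limiting migration cost, can be illustrated with a toy cost-benefit policy. Everything here (function name, parameters, latency figures) is hypothetical and only sketches the trade-off; it is not the paper's actual scheme.

```python
from collections import Counter

def plan_migrations(accesses, block_bank, bank_latency,
                    fast_banks, migrate_cost):
    """Return the blocks worth moving to a fast bank.

    accesses     : iterable of block ids (the observed access stream)
    block_bank   : dict mapping block -> bank it currently lives in
    bank_latency : dict mapping bank -> access latency in cycles
    fast_banks   : set of banks that are fast despite process variation
    migrate_cost : one-time cost, in cycles, of moving a block
    """
    counts = Counter(accesses)
    fast_lat = min(bank_latency[b] for b in fast_banks)
    plan = []
    for block, n in counts.most_common():
        cur_lat = bank_latency[block_bank[block]]
        # Migrate only when the projected latency savings (past access
        # count as a proxy for future accesses) beat the one-time cost.
        if n * (cur_lat - fast_lat) > migrate_cost:
            plan.append(block)
    return plan
```

For example, a block with 10 observed accesses in a 50-cycle bank is worth moving to a 20-cycle bank if the migration costs less than 300 cycles, while a block accessed twice is not.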
This
, 2009
Abstract
Scientists and investigators in such diverse fields as geological and environmental sciences, ecology, forestry, disease mapping, and economics often encounter spatially referenced data collected over a fixed set of locations with coordinates (latitude–longitude, Easting–Northing, etc.) in a region of study. Such point-referenced or geostatistical data are often best analyzed with Bayesian hierarchical models. Unfortunately, fitting such models involves computationally intensive Markov chain Monte Carlo (MCMC) methods whose efficiency depends upon the specific problem at hand. This requires extensive coding on the part of the user, and the situation is not helped by the lack of available software for such algorithms. Here, we introduce a statistical software package, spBayes, built upon the R statistical computing platform, that implements a generalized template encompassing a wide variety of Gaussian spatial process models for univariate as well as multivariate point-referenced data. We discuss the algorithms behind our package and illustrate its use with a synthetic and a real data example.
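spBayes itself is an R package, so as a language-agnostic sketch of the kind of MCMC it automates, the following fragment fits the decay parameter of an exponential spatial covariance to synthetic point-referenced data with a random-walk Metropolis sampler. The data, parameter values, and step size are invented for illustration; this is not the package's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical point-referenced data: 30 locations s in the unit
# square, observations y drawn from a GP with exponential covariance
# C(d) = sigma2 * exp(-phi * d).
s = rng.uniform(0, 1, size=(30, 2))
D = np.linalg.norm(s[:, None] - s[None, :], axis=-1)
phi_true, sigma2 = 3.0, 1.0
C = sigma2 * np.exp(-phi_true * D) + 1e-8 * np.eye(30)
y = rng.multivariate_normal(np.zeros(30), C)

def log_lik(phi):
    """Zero-mean Gaussian-process log-likelihood of y given decay phi."""
    Cp = sigma2 * np.exp(-phi * D) + 1e-8 * np.eye(len(y))
    sign, logdet = np.linalg.slogdet(Cp)
    return -0.5 * (logdet + y @ np.linalg.solve(Cp, y))

# Random-walk Metropolis over the decay parameter phi (flat prior on
# phi > 0): propose, then accept with probability min(1, ratio).
phi, samples = 1.0, []
ll = log_lik(phi)
for _ in range(2000):
    prop = phi + 0.3 * rng.normal()
    if prop > 0:
        ll_prop = log_lik(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            phi, ll = prop, ll_prop
    samples.append(phi)
```

The "extensive coding" the abstract mentions is exactly this kind of loop, plus tuning of the proposal scale; templated packages like spBayes exist to hide it behind a model-specification interface.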
Generalizing Scales
Abstract
Instead of considering scales to be linearly ordered structures, it is proposed that scales are better conceived of as metrics (dissimilarity matrices). Further, to be considered a scale of typological interest, there should be a significant correlation between a meaning-scale and a form-scale. This conceptualisation allows for a fruitful generalization of the concept "scale". As a hands-on example of the proposals put forward in this paper, the "scale of likelihood of spontaneous occurrence" (Haspelmath 1993) is reanalyzed. This scale describes the prototypical agentivity of the subject of a predicate.
1. Scales as restrictions on form-function mapping
Scales of linguistic structure are one of the more promising avenues of research into the unification of worldwide linguistic diversity. Although our growing understanding of the diversity of the world's languages seems to cast more and more doubt on many grandiose attempts at universally valid generalizations, the significance of scales for human languages (like the well-known animacy scale) still appears to stand strong. So, what actually is a scale? A scale seems to be mostly thought of as an asymmetrical one-dimensional arrangement (a "total order" in mathematical parlance) on certain cross-linguistic categories/functions. Put differently, a scale is a linear ordering of functions with a "high end" and a "low end". To be considered an interesting scale, the formal encoding of these functions in actual languages should be related to this linear ordering. In this paper, I will argue that this concept of a scale can be fruitfully generalized. In a very general sense, all linguistic structure consists of forms expressing particular functions. If we find restrictions, across languages, on the kinds of forms that are used to express certain functions, then this ...
(Footnote: The term "scale" is used here synonymously with what is also known as an "implicational hierarchy", "markedness hierarchy", or simply "hierarchy" in linguistics.)
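The abstract's proposal, that a scale of typological interest shows a significant correlation between a meaning-scale and a form-scale, each treated as a dissimilarity matrix, amounts to correlating the entries of two metrics. A minimal, hypothetical sketch (Mantel-style, comparing upper-triangle entries; the function name and data are invented):

```python
import numpy as np

def matrix_correlation(meaning_dist, form_dist):
    """Pearson correlation between two dissimilarity matrices,
    computed over their upper-triangle entries (Mantel-style)."""
    iu = np.triu_indices_from(meaning_dist, k=1)
    return np.corrcoef(meaning_dist[iu], form_dist[iu])[0, 1]
```

A real typological test would additionally assess significance by permuting the rows and columns of one matrix, since the pairwise entries are not independent.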
Open Access
Abstract
Exposures to fine particulate air pollution and respiratory outcomes in adults using two national datasets: a cross-sectional study