CiteSeerX

Towards real-time information processing of sensor network data using computationally efficient multi-output Gaussian processes (2008)

by M A Osborne, S J Roberts, A Rogers, S D Ramchurn, N R Jennings
Results 1 - 10 of 43

Gaussian processes for global optimization

by Michael A. Osborne, Roman Garnett, Stephen J. Roberts - In LION, 2009
Abstract - Cited by 41 (7 self)
Abstract. We introduce a novel Bayesian approach to global optimization using Gaussian processes. We frame the optimization of both noisy and noiseless functions as sequential decision problems, and introduce myopic and non-myopic solutions to them. Here our solutions can be tailored to exactly the degree of confidence we require of them. The use of Gaussian processes allows us to benefit from the incorporation of prior knowledge about our objective function, and also from any derivative observations. Using this latter fact, we introduce an innovative method to combat conditioning problems. Our algorithm demonstrates a significant improvement over its competitors in overall performance across a wide range of canonical test problems.
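As a rough illustration of the kind of procedure this abstract describes, the numpy sketch below runs a myopic GP optimization loop on a toy 1-D objective. It uses the standard expected-improvement acquisition and a squared-exponential covariance as stand-ins; the paper's own acquisition, conditioning safeguards, and derivative handling are not reproduced here.

```python
import numpy as np
from math import erf, sqrt, pi

def se_kernel(a, b, ell=0.5):
    """Squared-exponential covariance between 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior(x_obs, y_obs, x_test, ell=0.5, jitter=1e-6):
    """Zero-mean GP predictive mean and variance at x_test."""
    K = se_kernel(x_obs, x_obs, ell) + jitter * np.eye(len(x_obs))
    Ks = se_kernel(x_obs, x_test, ell)
    mean = Ks.T @ np.linalg.solve(K, y_obs)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.maximum(var, 1e-12)

def expected_improvement(mean, var, best):
    """Myopic acquisition for minimisation: E[max(best - y, 0)]."""
    std = np.sqrt(var)
    z = (best - mean) / std
    cdf = 0.5 * (1.0 + np.array([erf(v / sqrt(2)) for v in z]))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (best - mean) * cdf + std * pdf

# Toy objective; its true minimum is roughly -0.88 near x = -0.48.
f = lambda x: np.sin(3 * x) + 0.5 * x ** 2
grid = np.linspace(-2, 2, 201)
x_obs = np.array([-1.5, 0.0, 1.5])
y_obs = f(x_obs)
for _ in range(10):
    mean, var = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mean, var, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, f(x_next))
```

After ten evaluations the loop has typically located the basin of the global minimum; the sequential decision structure (evaluate, update posterior, pick the next point) is the part this sketch shares with the paper.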

Citation Context

...in a weighted mixture

p(y⋆ | x⋆, I0) = [∫ p(y⋆ | x⋆, θ, I0) p(y0 | x0, θ, I) p(θ | I) dθ] / [∫ p(y0 | x0, θ, I) p(θ | I) dθ] ≃ ∑_{i∈S} ρ_i N(y⋆; m_i(y⋆ | I0), C_i(y⋆ | I0)),   (5)

with weights ρ as detailed in [6]. We also use the sequential formulation of a GP given by [6], a natural fit for our sequential decision problem. After each new function evaluation, we can efficiently update our predictions in light...
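Equation (5) approximates the intractable integral over hyperparameters θ by a mixture of per-hyperparameter GP predictions, weighted by how well each hyperparameter value explains the data. A minimal numpy sketch of that idea, with a made-up dataset and a small grid of candidate length scales standing in for the hyperparameter samples of [6]:

```python
import numpy as np

def se(a, b, ell):
    """Squared-exponential covariance between 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def log_evidence(x, y, ell, noise=1e-4):
    """log p(y | x, theta) for a zero-mean GP; theta is the length scale."""
    K = se(x, x, ell) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(x) * np.log(2 * np.pi))

def gp_mean(x, y, xs, ell, noise=1e-4):
    """Predictive mean of the GP conditioned on (x, y) for one theta."""
    K = se(x, x, ell) + noise * np.eye(len(x))
    return se(xs, x, ell) @ np.linalg.solve(K, y)

x = np.linspace(0, 1, 8)
y = np.sin(2 * np.pi * x)                       # made-up observations
thetas = np.array([0.05, 0.2, 0.8])             # candidate length scales

logw = np.array([log_evidence(x, y, t) for t in thetas])
rho = np.exp(logw - logw.max())
rho /= rho.sum()                                # mixture weights, as in (5)

xs = np.array([0.25])
mix_mean = sum(r * gp_mean(x, y, xs, t) for r, t in zip(rho, thetas))
```

The weights ρ concentrate on the length scale that best explains the data, so the mixture prediction near x = 0.25 lands close to the true value sin(π/2) = 1.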

Decentralised Coordination of Mobile Sensors Using the Max-Sum Algorithm

by Ruben Stranders, Alessandro Farinelli, Alex Rogers, Nicholas R. Jennings , 2009
Abstract - Cited by 39 (10 self)
In this paper, we introduce an on-line, decentralised coordination algorithm for monitoring and predicting the state of spatial phenomena by a team of mobile sensors. These sensors have their application domain in disaster response, where strict time constraints prohibit path planning in advance. The algorithm enables sensors to coordinate their movements with their direct neighbours to maximise the collective information gain, while predicting measurements at unobserved locations using a Gaussian process. It builds upon the max-sum message passing algorithm for decentralised coordination, for which we present two new generic pruning techniques that result in a speed-up of up to 92% for 5 sensors. We empirically evaluate our algorithm against several on-line adaptive coordination mechanisms, and report a reduction in root mean squared error of up to 50% compared to a greedy strategy.

Kernels for Vector-Valued Functions: a Review

by Mauricio A. Álvarez, Lorenzo Rosasco, Neil D. Lawrence, 2011
Abstract - Cited by 32 (2 self)
Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective they play a central role in regularization theory as they provide a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective they are the key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs, and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partly by frameworks like multitask learning. In this paper, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
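One of the simplest constructions covered by reviews of this kind is the intrinsic coregionalization model, where a single input kernel k(x, x') is shared across outputs and a positive semi-definite matrix B encodes output correlations. A sketch (the data, dimensions, and length scale here are arbitrary):

```python
import numpy as np

def icm_covariance(x, B, ell=0.3):
    """Intrinsic coregionalization model: K((x,i), (x',j)) = B[i,j] * k(x, x')."""
    d = x[:, None] - x[None, :]
    k = np.exp(-0.5 * (d / ell) ** 2)   # shared scalar input kernel
    return np.kron(B, k)                # joint covariance over all outputs

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 5)                # 5 shared input locations
W = rng.standard_normal((2, 1))
B = W @ W.T + 0.1 * np.eye(2)           # PSD 2 x 2 coregionalization matrix
K = icm_covariance(x, B)                # (10, 10) covariance over 2 outputs
```

Because B and k are each positive semi-definite, their Kronecker product is a valid joint covariance; this is the "valid kernel functions for multiple outputs" requirement in its most elementary form.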

Citation Context

...distinct but related problem). In sensor networks, for example, missing signals from certain sensors may be predicted by exploiting their correlation with observed signals acquired from other sensors [65]. In geostatistics, predicting the concentration of heavy pollutant metals, which are expensive to measure, can be done using inexpensive and oversampled variables as a proxy [31]. In computer graphic...

Bounded approximate decentralised coordination using the max-sum algorithm

by Alessandro Farinelli, Alex Rogers, Nick R. Jennings - In Distributed Constraint Reasoning Workshop, 2009
Abstract - Cited by 30 (9 self)
In this paper we propose a novel algorithm that provides bounded approximate solutions for decentralised coordination problems. Our approach removes cycles in any general constraint network by eliminating dependencies between functions and variables which have the least impact on the solution quality. It uses the max-sum algorithm to optimally solve the resulting tree structured constraint network, providing a bounded approximation specific to the particular problem instance. We formally prove that our algorithm provides a bounded approximation of the original problem and we present an empirical evaluation in a synthetic scenario. This shows that the approximate solutions that our algorithm provides are typically within 95% of the optimum and the approximation ratio that our algorithm provides is typically 1.23.
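For intuition, max-sum on a tree-structured constraint network (the form the paper's cycle removal produces) reduces to exact dynamic programming over messages. A toy two-variable instance with invented utilities:

```python
import numpy as np

# Two variables x1, x2 with three states each, unary utilities u1, u2 and one
# pairwise utility U; the goal is argmax of u1[x1] + u2[x2] + U[x1, x2].
u1 = np.array([0.5, 0.0, 1.0])
u2 = np.array([0.0, 2.0, 0.0])
U = np.array([[1.0, 4.0, 0.0],
              [2.0, 3.0, 5.0],
              [0.0, 1.0, 2.0]])

# Factor-to-variable message: maximise the other variable out of the factor.
msg_to_x1 = (U + u2[None, :]).max(axis=1)
x1 = int(np.argmax(u1 + msg_to_x1))       # decode x1 from its belief
x2 = int(np.argmax(U[x1] + u2))           # then decode x2 given x1
value = u1[x1] + u2[x2] + U[x1, x2]
```

On a tree this message passing is exact, which is why the paper can bound the quality lost purely by the dependencies its cycle-removal step eliminates.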

Sparse convolved Gaussian processes for multi-output regression

by Mauricio Alvarez, Neil D. Lawrence - In Advances in Neural Information Processing Systems 21, 2009
Abstract - Cited by 30 (5 self)
We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution process formalism to establish dependencies between output variables, where each latent function is represented as a GP. Based on these latent functions, we establish an approximation scheme using a conditional independence assumption between the output processes, leading to an approximation of the full covariance which is determined by the locations at which the latent functions are evaluated. We show results of the proposed methodology for synthetic data and real-world applications on pollution prediction and a sensor network.

Citation Context

...outputs. In geostatistics this is known as cokriging. Whilst cross covariances allow us to improve our predictions of one output given the others, because the correlations between outputs are modelled [6, 2, 15, 12], they also come with a computational and storage overhead. The main aim of this paper is to address these overheads in the context of convolution processes [6, 2]. One neat approach to account for non...

Computationally efficient convolved multiple output Gaussian processes

by Mauricio A. Álvarez, Neil D. Lawrence, Edward Rasmussen - Journal of Machine Learning Research
Abstract - Cited by 27 (2 self)
Recently there has been an increasing interest in regression methods that deal with multiple outputs. This has been motivated partly by frameworks like multitask learning, multisensor networks or structured output data. From a Gaussian processes perspective, the problem reduces to specifying an appropriate covariance function that, whilst being positive semi-definite, captures the dependencies between all the data points and across all the outputs. One approach to account for non-trivial correlations between outputs employs convolution processes. Under a latent function interpretation of the convolution transform we establish dependencies between output variables. The main drawbacks of this approach are the associated computational and storage demands. In this paper we address these issues. We present different efficient approximations for dependent output Gaussian processes constructed through the convolution formalism. We exploit the conditional independencies present naturally in the model. This leads to a form of the covariance similar in spirit to the so-called PITC and FITC approximations for a single output. We show experimental results with synthetic and real data; in particular, we show results in school exam score prediction, pollution prediction and gene expression data.

Citation Context

...outputs has important applications in several areas. In sensor networks, for example, missing signals from failing sensors may be predicted due to correlations with signals acquired from other sensors (Osborne et al., 2008). In geostatistics, prediction of the concentration of heavy pollutant metals (for example, Copper), that are expensive to measure, can be done using inexpensive and oversampled variables (for exampl...

A Survey on Sensor Networks from a Multi-Agent perspective

by M. Vinyals, J. A. Rodriguez-Aguilar, J. Cerquides
Abstract - Cited by 26 (0 self)
Sensor networks arise as one of the most promising technologies for the next decades. The recent emergence of small and inexpensive sensors based upon microelectromechanical systems (MEMS) eases the development and proliferation of this kind of network in a wide range of real-world applications. Multi-Agent systems (MAS) have been identified as one of the most suitable technologies to contribute to this domain due to their appropriateness for modeling autonomous self-aware sensors in a flexible way. Firstly, this survey summarizes the current challenges and research areas concerning sensor networks while identifying the most relevant MAS contributions. Secondly, we propose a taxonomy for sensor networks that classifies them depending on their features (and the research problems they pose). Finally, we identify some open future research directions and opportunities for MAS research.

Citation Context

...ive sampling by only sensing at the most informative moments and thus avoiding redundant measurements over time. On the other hand, GPs have also been used to model spatial correlations among sensors [46, 47]. Incorporating spatial correlations into the probabilistic model allows agents to perform collective active sensing by avoiding redundant measurements among their neighbours and over time. GPs have bee...

Sequential Bayesian Prediction in the Presence of Changepoints

by Roman Garnett, Michael A. Osborne, Stephen J. Roberts
Abstract - Cited by 19 (6 self)
We introduce a new sequential algorithm for making robust predictions in the presence of changepoints. Unlike previous approaches, which focus on the problem of detecting and locating changepoints, our algorithm focuses on the problem of making predictions even when such changes might be present. We introduce nonstationary covariance functions to be used in Gaussian process prediction that model such changes, then proceed to demonstrate how to effectively manage the hyperparameters associated with those covariance functions. By using Bayesian quadrature, we can integrate out the hyperparameters, allowing us to calculate the marginal predictive distribution. Furthermore, if desired, the posterior distribution over putative changepoint locations can be calculated as a natural byproduct of our prediction algorithm.
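A minimal example of the nonstationary covariances the abstract refers to is a kernel that treats function values on opposite sides of a putative changepoint as independent. The construction below is illustrative only (the paper develops several richer forms), but it shows why such a kernel remains a valid covariance:

```python
import numpy as np

def se(a, b, ell=0.3):
    """Squared-exponential covariance between 1-D input arrays."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def changepoint_cov(a, b, t, ell=0.3):
    """Drastic-change covariance: values on opposite sides of a putative
    changepoint at t are modelled as independent (covariance zero)."""
    same_side = (a[:, None] < t) == (b[None, :] < t)
    return se(a, b, ell) * same_side

x = np.linspace(0, 1, 6)            # inputs straddling the changepoint
K = changepoint_cov(x, x, t=0.5)    # block-diagonal under input ordering
```

Under an ordering of the inputs the resulting Gram matrix is block diagonal, i.e. the direct sum of two SE Gram matrices, so it stays positive semi-definite; the changepoint location t is then just another hyperparameter to integrate out.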

Citation Context

... (1) where we have:

mθ(y⋆ | Id) = µθ(x⋆) + Kθ(x⋆, xd) Kθ(xd, xd)⁻¹ (yd − µθ(xd))
Cθ(y⋆ | Id) = Kθ(x⋆, x⋆) − Kθ(x⋆, xd) Kθ(xd, xd)⁻¹ Kθ(xd, x⋆).

We use the sequential formulation of a GP given by (Osborne et al., 2008) to perform sequential prediction using a moving window. After each new observation, we use rank-one updates to the covariance matrix to efficiently update our predictions in light of the new informa...
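The efficient sequential update mentioned in this excerpt can be illustrated by extending a Cholesky factor of the covariance matrix Kθ(xd, xd) when one new observation arrives, at O(n²) cost instead of an O(n³) refactorization. A generic numpy sketch (not the authors' code):

```python
import numpy as np

def chol_extend(L, k_new, k_self):
    """Given lower-triangular L with K = L @ L.T, return the Cholesky factor
    of [[K, k_new], [k_new.T, k_self]] without refactorising from scratch."""
    l = np.linalg.solve(L, k_new)        # forward solve: L @ l = k_new
    d = np.sqrt(k_self - l @ l)          # Schur complement of the new point
    n = L.shape[0]
    out = np.zeros((n + 1, n + 1))
    out[:n, :n] = L
    out[n, :n] = l
    out[n, n] = d
    return out

# Check against a full refactorisation on a small SE covariance matrix.
x = np.linspace(0, 1, 5)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.3) ** 2) + 1e-8 * np.eye(5)
L4 = np.linalg.cholesky(K[:4, :4])
L5 = chol_extend(L4, K[:4, 4], K[4, 4])
```

With the extended factor in hand, the predictive mean and covariance above follow from two triangular solves, which is what makes windowed sequential prediction cheap.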

Bayesian optimization for sensor set selection

by R. Garnett, M. A. Osborne, S. J. Roberts - In Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks, 2010
Abstract - Cited by 19 (0 self)
We consider the problem of selecting an optimal set of sensors, as determined, for example, by the predictive accuracy of the resulting sensor network. Given an underlying metric between pairs of set elements, we introduce a natural metric between sets of sensors for this task. Using this metric, we can construct covariance functions over sets, and thereby perform Gaussian process inference over a function whose domain is a power set. If the function has additional inputs, our covariances can be readily extended to incorporate them—allowing us to consider, for example, functions over both sets and time. These functions can then be optimized using Gaussian process global optimization (GPGO). We use the root mean squared error (RMSE) of the predictions made using a set of sensors at a particular time as an example of such a function to be optimized; the optimal point specifies the best choice of sensor locations. We demonstrate the resulting method by dynamically selecting the best subset of a given set of weather sensors for the prediction of the air temperature across the United Kingdom.

Citation Context

...weighted mixture

p(y⋆ | zd, I) = [∫ p(y⋆ | zd, θ, I) p(zd | θ, I) p(θ | I) dθ] / [∫ p(zd | θ, I) p(θ | I) dθ] ≃ ∑_{i∈S} ρ_i N(y⋆; m(y⋆ | zd, θ_i, I), C(y⋆ | zd, θ_i, I)),   (9)

with weights ρ as detailed in [10]. We also use the sequential formulation of a GP given by [10], a natural fit for our sequential decision problem. After each new function evaluation, we can efficiently update our predictions in ligh...

Fast Sensor Placement Algorithms for Fusion-based Target Detection

by Zhaohui Yuan, et al.
Abstract - Cited by 15 (8 self)
Mission-critical target detection imposes stringent performance requirements for wireless sensor networks, such as high detection probabilities and low false alarm rates. Data fusion has been shown as an effective technique for improving system detection performance by enabling efficient collaboration among sensors with limited sensing capability. Due to the high cost of network deployment, it is desirable to place sensors at optimal locations to achieve maximum detection performance. However, for sensor networks employing data fusion, optimal sensor placement is a non-linear optimization problem with prohibitive computational complexity. In this paper, we present fast sensor placement algorithms based on a probabilistic data fusion model. Simulation results show that our algorithms can meet the desired detection performance with a small number of sensors while achieving up to 7-fold speedup over the optimal algorithm.
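As a toy illustration of fusion-based placement (not the paper's algorithm or its fusion model), the sketch below greedily places sensors to maximize the worst-case fused detection probability over a few hypothetical target locations. It assumes each sensor independently detects a target at squared distance d² with probability exp(-4 d²), and that fusion multiplies the individual miss probabilities:

```python
import numpy as np
from itertools import product

# Hypothetical target locations and a grid of candidate sensor positions.
targets = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])
candidates = np.array(list(product(np.linspace(0, 1, 5), repeat=2)))

def detection(placed):
    """Worst-case fused detection probability across all targets."""
    d2 = ((targets[:, None, :] - placed[None, :, :]) ** 2).sum(-1)
    miss = np.prod(1.0 - np.exp(-4.0 * d2), axis=1)   # fused miss per target
    return (1.0 - miss).min()

placed = np.empty((0, 2))
for _ in range(3):                                     # place 3 sensors greedily
    gains = [detection(np.vstack([placed, c])) for c in candidates]
    placed = np.vstack([placed, candidates[int(np.argmax(gains))]])
```

Greedy selection like this is the usual cheap heuristic for such placement problems; the paper's contribution is faster algorithms for the non-linear fusion-based objective, which this sketch only caricatures.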

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University