Results 11–20 of 113
Human Activity Recognition Process Using 3D Posture Data
Publisher: IEEE
Cited by 2 (1 self)
Accepted version. It is advisable to refer to the publisher’s version if you intend to cite from the work.
Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies
Journal of Computational Physics, in press, available online
Optimal data split methodology for model validation
Proceedings of the World Congress on Engineering and Computer Science 2011, Vol. II, WCECS 2011, 2011
Cited by 2 (1 self)
Abstract—The decision to incorporate cross-validation into validation processes of mathematical models raises an immediate question: how should one partition the data into calibration and validation sets? We answer this question systematically: we present an algorithm to find the optimal partition of the data subject to certain constraints. While doing this, we address two critical issues: 1) that the model be evaluated with respect to predictions of a given quantity of interest and its ability to reproduce the data, and 2) that the model be highly challenged by the validation set, assuming it is properly informed by the calibration set. This framework also relies on the interaction between the experimentalist and/or modeler, who understand the physical system and the limitations of the model; the decision-maker, who understands and can quantify the cost of model failure; and the computational scientists, who strive to determine whether the model satisfies both the modeler’s and decision-maker’s requirements. We also note that our framework is quite general and may be applied to a wide range of problems. Here, we illustrate it through a specific example involving a data reduction model for an ICCD camera from a shock-tube experiment located at the NASA Ames Research Center (ARC).
Index Terms—Model validation, quantity of interest, Bayesian inference
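The partition search the abstract describes can be illustrated with a toy exhaustive search over candidate validation sets. The `challenge` score below is a hypothetical stand-in for the paper's quantity-of-interest-based criterion, not its actual algorithm:

```python
import itertools
import statistics

def challenge(calibration, validation):
    """Hypothetical challenge score: how far the validation points sit
    from the calibration mean (a toy proxy for a QoI-based criterion)."""
    mu = statistics.mean(calibration)
    return min(abs(v - mu) for v in validation)

def optimal_split(data, k):
    """Exhaustively search all k-point validation sets and keep the one
    that most challenges a model informed by the remaining calibration data."""
    best_score, best_val = None, None
    for val in itertools.combinations(range(len(data)), k):
        cal = [data[i] for i in range(len(data)) if i not in val]
        score = challenge(cal, [data[i] for i in val])
        if best_score is None or score > best_score:
            best_score, best_val = score, val
    return best_score, best_val

data = [1.0, 1.2, 0.9, 3.5, 1.1, 3.8]
score, val_idx = optimal_split(data, 2)  # picks the two outlying points
```

In this toy example the search selects the two points far from the bulk of the data, mimicking the idea that the validation set should stress the calibrated model rather than resemble the calibration set.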
Generalised Density Forecast Combinations
2013
Cited by 2 (0 self)
Density forecast combinations are becoming increasingly popular as a means of improving forecast ‘accuracy’, as measured by a scoring rule. In this paper we generalise this literature by letting the combination weights follow more general schemes. Sieve estimation is used to optimise the score of the generalised density combination, where the combination weights depend on the variable one is trying to forecast. Specific attention is paid to the use of piecewise linear weight functions that let the weights vary by region of the density. We analyse these schemes theoretically, in Monte Carlo experiments and in an empirical study. Our results show that the generalised combinations outperform their linear counterparts.
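A minimal sketch of the region-dependent idea: combine two forecast densities with a piecewise linear weight function of the outcome. The knots and weight values below are hypothetical, and the sketch ignores the renormalisation a properly estimated scheme would require:

```python
import math

def weight(y, knots, values):
    """Piecewise linear weight w(y), linearly interpolating the
    (hypothetical) weight values given at the knot locations."""
    if y <= knots[0]:
        return values[0]
    if y >= knots[-1]:
        return values[-1]
    for (x0, x1), (v0, v1) in zip(zip(knots, knots[1:]), zip(values, values[1:])):
        if x0 <= y <= x1:
            return v0 + (v1 - v0) * (y - x0) / (x1 - x0)

def normal_pdf(y, mu, sigma):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def combined_density(y, f1, f2, knots, values):
    """Combine two forecast densities with an outcome-dependent weight w(y),
    so different regions of the density get different weights."""
    w = weight(y, knots, values)
    return w * f1(y) + (1.0 - w) * f2(y)

p = combined_density(0.0,
                     lambda y: normal_pdf(y, 0.0, 1.0),
                     lambda y: normal_pdf(y, 1.0, 2.0),
                     knots=[-1.0, 1.0], values=[0.9, 0.1])
```

A fixed-weight ("linear") combination is the special case where `values` is constant; letting the weights vary by region is what the abstract's generalised scheme adds.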
An Enhanced Features Extractor for a Portfolio of Constraint Solvers. http://www.cs.unibo.it/~amadini/sac_2014.pdf
In SAC, 2014
Cited by 2 (2 self)
Recent research has shown that a single arbitrarily efficient solver can be significantly outperformed by a portfolio of possibly slower on-average solvers. The solver selection is usually done by means of (un)supervised learning techniques which exploit features extracted from the problem specification. In this paper we present a useful and flexible framework that is able to extract an extensive set of features from a Constraint (Satisfaction/Optimization) Problem defined in possibly different modeling languages: MiniZinc, FlatZinc or XCSP. We also report empirical results showing that the performance obtainable using these features is effective and competitive with state-of-the-art CSP portfolio techniques.
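As an illustration of feature-based solver selection, here is a 1-nearest-neighbour portfolio baseline on hypothetical features and runtimes. This is a common portfolio baseline, not the paper's extractor or learner:

```python
def select_solver(features, training):
    """Pick the solver that was fastest on the nearest training instance
    (1-NN on squared Euclidean distance between feature vectors)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training, key=lambda inst: dist(features, inst["features"]))
    return min(nearest["runtimes"], key=nearest["runtimes"].get)

# Hypothetical training data: feature vectors with per-solver runtimes (seconds).
training = [
    {"features": [0.1, 5.0], "runtimes": {"gecode": 2.0, "chuffed": 9.0}},
    {"features": [0.9, 1.0], "runtimes": {"gecode": 8.0, "chuffed": 1.5}},
]

choice = select_solver([0.2, 4.5], training)  # nearest to the first instance
```

Richer feature sets, as the abstract argues, give such learners more signal to separate instance classes where different solvers dominate.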
Oracle inequalities for cross-validation-type procedures
Cited by 1 (0 self)
We prove oracle inequalities for three different types of adaptation procedures inspired by cross-validation and aggregation. These procedures are then applied to the construction of Lasso estimators and aggregation with exponential weights with data-driven regularization and temperature parameters, respectively. We also prove oracle inequalities for the cross-validation procedure itself under some convexity assumptions.
Key words: adaptation, aggregation, cross-validation, sparsity
Multiple-source cross-validation
Cited by 1 (0 self)
Cross-validation is an essential tool in machine learning and statistics. The typical procedure, in which data points are randomly assigned to one of the test sets, makes an implicit assumption that the data are exchangeable. A common case in which this does not hold is when the data come from multiple sources, in the sense used in transfer learning. In this case it is common to arrange the cross-validation procedure in a way that takes the source structure into account. Although common in practice, this procedure does not appear to have been theoretically analysed. We present new estimators of the variance of the cross-validation, both in the multiple-source setting and in the standard i.i.d. setting. These new estimators allow for much more accurate confidence intervals and hypothesis tests to compare algorithms.
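The source-aware arrangement the abstract refers to can be sketched as leave-one-source-out folds, where training and test indices never share a source (unlike random assignment, which mixes sources across folds):

```python
def source_folds(sources):
    """Leave-one-source-out cross-validation folds: each source in turn
    becomes the test set, so train and test never share a source."""
    labels = sorted(set(sources))
    for held_out in labels:
        train = [i for i, s in enumerate(sources) if s != held_out]
        test = [i for i, s in enumerate(sources) if s == held_out]
        yield train, test

# One source label per data point, e.g. which lab or site produced it.
sources = ["a", "a", "b", "b", "c"]
folds = list(source_folds(sources))
```

Because whole sources are held out together, the per-fold test errors are correlated differently than in i.i.d. cross-validation, which is why the variance estimators in the paper treat the two settings separately.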
Declarative merging of and reasoning about decision diagrams
Workshop on Constraint Based Methods for Bioinformatics (WCB 2011)
Cited by 1 (1 self)
Abstract. Decision diagrams (DDs) are a popular means for decision making, e.g., in clinical guidelines. Some applications require integrating multiple related yet different diagrams into a single one, for which algorithms have been developed. However, existing merging tools are monolithic, application-tailored programs with no clear interface to the actual merging procedures, which makes their reuse hard if not impossible. We present a general, declarative framework for merging and manipulating decision diagram tasks based on a belief set merging framework. Its modular architecture hides details of the merging algorithm and supports pre- and user-defined merging operators, which can be flexibly arranged in merging plans to express complex merging tasks. Changing and restructuring merging tasks becomes easy, and frees the user from (repetitive) manual integration to focus on experimenting with different merging strategies, which is vital for applications, as discussed for an example from DNA classification. Our framework also supports reasoning over DDs using answer set programming (ASP), which allows one to drive the merging process and select results based on the application needs.
Short-Term Power Forecasting Model for Photovoltaic Plants Based on Historical Similarity
Energies, 2013
Optimal and Robust Price Experimentation: Learning by Lottery
Cited by 1 (1 self)
This paper studies optimal price learning for one or more items. We introduce the Schrödinger price experiment (SPE), which superimposes classical price experiments using lotteries and thereby extracts more information from each customer interaction. If buyers are perfectly rational, we show that there exist SPEs that, in the limit of infinite superposition, learn optimally and exploit optimally. We refer to the resulting mechanism as the hopeful mechanism (HM) since, although it is incentive compatible, buyers can deviate with extreme consequences for the seller at very little cost to themselves. For real-world settings we propose a robust version of the approach which takes the form of a Markov decision process where the actions are functions. We provide approximate policies motivated by the best of sampled set (BOSS) algorithm coupled with approximate Bayesian inference. Numerical studies show that the proposed method significantly increases seller revenue compared to classical price experimentation, even for the single-item case.