Results 1–10 of 15
Fast exact inference for recursive cardinality models
In Uncertainty in Artificial Intelligence, 2012
Cited by 8 (4 self)
Cardinality potentials are a generally useful class of high-order potentials that affect probabilities based on how many of D binary variables are active. Maximum a posteriori (MAP) inference for cardinality potential models is well-understood, with efficient computations taking O(D log D) time. Yet efficient marginalization and sampling have not been addressed as thoroughly in the machine learning community. We show that there exists a simple algorithm for computing marginal probabilities and drawing exact joint samples that runs in O(D log² D) time, and we show how to frame the algorithm as efficient belief propagation in a low-order tree-structured model that includes additional auxiliary variables. We then develop a new, more general class of models, termed Recursive Cardinality models, which take advantage of this efficiency. Finally, we show how to do efficient exact inference in models composed of a tree structure and a cardinality potential. We explore the expressive power of Recursive Cardinality models and empirically demonstrate their utility.
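The count distribution at the heart of such models can be illustrated with a divide-and-conquer convolution, mirroring the binary tree of auxiliary count variables the abstract describes. This is a toy sketch of the core idea, not the paper's full inference algorithm:

```python
import numpy as np

def count_distribution(p):
    """Distribution of the number of active variables among independent
    Bernoulli(p_i) variables, computed by recursively convolving the count
    distributions of the two halves (a binary tree of auxiliary counts)."""
    if len(p) == 1:
        return np.array([1.0 - p[0], p[0]])
    mid = len(p) // 2
    return np.convolve(count_distribution(p[:mid]), count_distribution(p[mid:]))

# Example: four variables with different activation probabilities.
dist = count_distribution([0.2, 0.5, 0.9, 0.4])
# dist[k] is P(exactly k variables active); reweighting dist by a
# cardinality potential f(k) gives the model's unnormalized count marginal.
```

Each convolution at tree depth d costs O(D/2^d · log) with FFT-based convolution, which is where the O(D log² D) total comes from.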
Field-Failure Predictions Based on Failure-time Data with Dynamic Covariate Information
Cited by 2 (2 self)
Modern technological developments, such as smart chips, sensors, and wireless networks, have changed many data collection processes. For example, more and more products are being produced with automatic data-collecting devices that track how and under which environments the products are being used. While there is a tremendous amount of dynamic data being collected, there has been little research on using such data to provide more accurate reliability information for products and systems. Motivated by a warranty-prediction application, this paper focuses on using failure-time data with dynamic covariate information to make field-failure predictions. We provide a general framework for prediction using failure-time data with dynamic covariate information. The dynamic covariate information is incorporated into the failure-time distribution through a cumulative exposure model. We develop a procedure to predict field-failure returns up to a specified future time. This procedure accounts for unit-to-unit variability in the covariate process. We also define a metric to quantify the improvements in prediction accuracy obtained by using dynamic information. We conducted simulations to study the effect of different sources of covariate process variability on predictions. We also provide some discussion of future opportunities for using dynamic data.
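The cumulative-exposure idea can be sketched numerically: a unit's "effective age" grows faster when the covariate path is harsher. The exponential link and trapezoidal integration below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def cumulative_exposure(times, covariate, beta):
    """Effective usage age u(t) = integral of exp(beta * x(s)) ds over [0, t],
    so a unit under a harsher covariate path accumulates exposure faster.
    `times` and `covariate` are matching samples of the covariate process."""
    times = np.asarray(times, dtype=float)
    rates = np.exp(beta * np.asarray(covariate, dtype=float))
    # trapezoid-rule integration of the acceleration rate between samples
    steps = np.diff(times) * (rates[:-1] + rates[1:]) / 2.0
    return np.concatenate([[0.0], np.cumsum(steps)])

# With beta = 0 the covariate has no effect and exposure is just elapsed time.
u = cumulative_exposure([0.0, 1.0, 2.0], [3.0, 1.0, 2.0], 0.0)
```

The failure-time distribution is then evaluated at u(t) instead of t, which is how dynamic covariates enter the model.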
GREVE: Genomic Recurrent Event ViEwer to assist the identification of patterns across individual cancer samples
Cited by 1 (0 self)
Motivation: GREVE has been developed to assist with the identification of recurrent genomic aberrations across cancer samples. The exact characterization of such aberrations remains a challenge despite the availability of an increasing amount of data, from SNP-arrays to Next Generation Sequencing. Furthermore, genomic aberrations in cancer are especially difficult to handle because they are, by nature, unique to each patient. However, their recurrence in specific regions of the genome has been shown to reflect their relevance in the development of tumors. GREVE makes use of previously characterized events to identify such regions and focus any further analysis. Availability: GREVE is available via a web interface and as an open-source application.
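Finding regions that recur across samples can be illustrated with a simple sweep-line over per-sample aberration intervals. This is an illustrative sketch only; the abstract does not specify GREVE's actual method:

```python
def recurrent_regions(intervals, min_samples):
    """Sweep-line over aberration intervals (start, end), one per sample;
    returns maximal genomic regions covered by at least `min_samples` samples."""
    events = []
    for start, end in intervals:
        events.append((start, 1))    # a sample's aberration begins
        events.append((end, -1))     # a sample's aberration ends
    events.sort()
    regions, depth, region_start = [], 0, None
    for pos, delta in events:
        depth += delta
        if depth >= min_samples and region_start is None:
            region_start = pos       # recurrence threshold reached
        elif depth < min_samples and region_start is not None:
            regions.append((region_start, pos))
            region_start = None
    return regions
```

For example, intervals from three samples that overlap on [5, 15] yield that region at a recurrence threshold of 2.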
Computational Methods for Probabilistic Inference of Sector Congestion in Air Traffic Management
Author manuscript, published in "Interdisciplinary Science for Innovative Air Traffic Management", 2013
Abstract. This article addresses the issue of computing the expected cost functions from a probabilistic model of air traffic flow and capacity management. The Clenshaw-Curtis quadrature is compared to Monte Carlo algorithms defined specifically for this problem. By tailoring the algorithms to this model, we reduce the computational burden in order to simulate real instances. The study shows that the Monte Carlo algorithm is more sensitive to the amount of uncertainty in the system, but has the advantage of returning a result with the associated accuracy on demand. The performance of the two approaches is comparable for the computation of the expected cost of delay and the expected cost of congestion. Finally, this study shows some evidence that simulation of the proposed probabilistic model is tractable for realistic instances.
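The quadrature-versus-Monte-Carlo comparison can be sketched on a toy expectation. Note the sketch substitutes Gauss-Legendre nodes (readily available in numpy) for the Clenshaw-Curtis rule the paper uses, and an arbitrary stand-in cost function:

```python
import numpy as np

def cost(x):
    return np.exp(x)  # stand-in convex cost, not the paper's cost model

# Deterministic quadrature (Gauss-Legendre here, standing in for
# Clenshaw-Curtis) for E[cost(X)] with X ~ Uniform(-1, 1):
nodes, weights = np.polynomial.legendre.leggauss(8)
quad_estimate = 0.5 * np.dot(weights, cost(nodes))  # density of U(-1,1) is 1/2

# Plain Monte Carlo on the same integrand; accuracy improves as O(1/sqrt(N))
# but the sample standard error gives "accuracy on demand":
rng = np.random.default_rng(0)
mc_estimate = cost(rng.uniform(-1.0, 1.0, 10_000)).mean()

exact = (np.e - np.e ** -1) / 2  # closed form for this toy integrand
```

With a smooth integrand the 8-point quadrature is essentially exact, while the Monte Carlo estimate carries sampling noise, which mirrors the trade-off the abstract reports.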
Thales Air Systems, 2013
Abstract—We investigate a method to deal with sector congestion and delays in the tactical phase of air traffic flow and capacity management. It relies on temporal objectives given for every point of the flight plans and shared among the controllers in order to create a collaborative environment. This would enhance the transition from the network view of flow management to the local view of air traffic control. Uncertainty is modeled at the trajectory level with temporal information on the boundary points of the crossed sectors, and we then infer the probabilistic occupancy count. We can therefore model the accuracy of the trajectory prediction in the optimization process in order to set safety margins. On the one hand, the more accurate our prediction is, the more efficient the proposed solutions will be, because of the tighter safety margins. On the other hand, when uncertainty is not negligible, the proposed solutions will be more robust to disruptions. Furthermore, a multi-objective algorithm is used to find the trade-off between delays and congestion, which are antagonistic in airspace with high traffic density. The flow management position can choose the adequate solution manually, or automatically with a preference-based algorithm. This method is tested on two instances, one with 10 flights and 5 sectors and one with 300 flights and 16 sectors.
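Under an independence assumption across flights, the probabilistic occupancy count of a sector is a Poisson-binomial distribution. A minimal dynamic-programming sketch (a simplification of the paper's setting, where presence probabilities come from the trajectory-level temporal model):

```python
def occupancy_distribution(presence_probs):
    """DP over flights: returns dist where dist[k] = P(exactly k flights
    occupy the sector), given independent per-flight presence probabilities."""
    dist = [1.0]
    for p in presence_probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)      # this flight absent
            new[k + 1] += q * p          # this flight present
        dist = new
    return dist

def overload_probability(presence_probs, capacity):
    """P(occupancy exceeds the declared sector capacity)."""
    return sum(occupancy_distribution(presence_probs)[capacity + 1:])
```

Overload probabilities like this are what an optimizer can compare against safety margins when trading delay against congestion.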
Sliding Windows over Uncertain Data Streams
Under consideration for Knowledge and Information Systems, 2013
Abstract. Uncertain data streams can have tuples with both value and existential uncertainty. A tuple has value uncertainty when it can assume multiple possible values. A tuple is existentially uncertain when the sum of the probabilities of its possible values is less than 1. A situation where existential uncertainty can arise is when applying relational operators to streams with value uncertainty. Several prior works have focused on querying and mining data streams with both value and existential uncertainty. However, none of them has studied, in depth, the implications of existential uncertainty for sliding window processing, even though it naturally arises when processing uncertain data. In this work, we study the challenges arising from existential uncertainty, more specifically the management of count-based sliding windows, which are a basic building block of stream processing applications. We extend the semantics of sliding windows to define the novel concept of uncertain sliding windows, and provide both exact and approximate algorithms for managing windows under existential uncertainty. We also show how current state-of-the-art techniques for answering similarity join queries can be easily adapted to work with uncertain sliding windows. We evaluate our proposed techniques under a variety of configurations using real data. The results show that the algorithms used to maintain uncertain sliding windows can operate efficiently while providing a high-quality approximation in query answering. In addition, we show that …
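One plausible reading of a count-based window under existential uncertainty is to retain the most recent tuples whose existence probabilities sum to the window size in expectation. The abstract does not spell out the exact semantics, so the class below is illustrative only:

```python
from collections import deque

class UncertainCountWindow:
    """Illustrative count-based window over existentially uncertain tuples:
    keeps the most recent tuples whose existence probabilities sum to at
    least `size` in expectation, evicting the oldest surplus tuples.
    (An assumed semantics, not necessarily the paper's definition.)"""

    def __init__(self, size):
        self.size = size
        self.window = deque()        # (value, existence_probability) pairs
        self.expected_count = 0.0

    def insert(self, value, prob):
        self.window.append((value, prob))
        self.expected_count += prob
        # Evict oldest tuples while the window would still hold `size`
        # tuples in expectation without them.
        while self.expected_count - self.window[0][1] >= self.size:
            _, old_prob = self.window.popleft()
            self.expected_count -= old_prob
```

With all probabilities equal to 1 this degenerates to an ordinary count-based window, which is a useful sanity check on the semantics.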
Confidence Interval Procedures for System Reliability and Applications to Competing-Risk Models
System reliability depends on the reliability of the system's components and the structure of the system. For example, in a competing-risk model, the system fails when the weakest component fails. The reliability function and the quantile function of a complicated system are two important metrics for characterizing the system's reliability. When there are data available at the component level, the system reliability can be estimated by using the component-level information. Confidence intervals (CIs) are needed to quantify the statistical uncertainty in the estimation. Obtaining system reliability CI procedures with good properties is not straightforward, especially when the system structure is complicated. In this paper, we develop a general procedure for constructing a CI for the system failure-time quantile function by using the implicit delta method. We also develop general procedures for constructing a CI for the cdf of the system. We show that the recommended procedures are asymptotically valid and have good statistical properties. We conduct simulations to study the finite-sample coverage properties of the proposed procedures and compare them with existing procedures. We apply the proposed procedures to three applications: two applications in competing-risk models and an application with a k-out-of-s system. The paper concludes with some discussion and an outline of areas for future research.
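The flavor of such CI constructions can be shown with the ordinary (explicit) delta method on a simple series system, where system reliability is the product of component reliabilities. This is a sketch of the standard technique, not the paper's implicit delta method:

```python
import numpy as np

def series_reliability_ci(r, se, z=1.96):
    """Delta-method CI for R = prod(r_i), the reliability of a series system,
    given component reliability estimates `r` and their standard errors `se`
    (assumed independent). Returns (R_hat, (lower, upper))."""
    r = np.asarray(r, dtype=float)
    se = np.asarray(se, dtype=float)
    R = np.prod(r)
    grad = R / r                     # dR/dr_i = product of the other terms
    se_R = np.sqrt(np.sum((grad * se) ** 2))
    return R, (R - z * se_R, R + z * se_R)
```

With zero component standard errors the interval collapses to a point, and wider component uncertainty widens it through the gradient weights.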
ML pdf
Nowadays, many consumer products are designed and manufactured so that the probability of failure during the technological life of the product is small. Most product units in the field retire before they fail. Even though the number of failures of such products is small, there is still a need to model and predict field failures for purposes of risk assessment in applications that involve safety. Challenges in modeling and predicting failures arise because the retirement times are often unknown, few failures have been reported, and there are delays in field-failure reporting. Motivated by an application to assess the risk of failure for a particular product, we develop a statistical prediction procedure that considers the impact of product retirements and reporting delays. Based on the developed method, we provide point predictions for the cumulative number of reported failures over a future time period and corresponding prediction intervals to quantify uncertainty. We also conduct sensitivity analysis to assess the effects of different assumptions on the failure-time and retirement distributions.
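The interplay of failure, retirement, and reporting delay can be made concrete with a toy Monte Carlo simulation. All distributions below are arbitrary illustrative choices, not the paper's fitted model:

```python
import numpy as np

# Toy simulation: each unit fails at a Weibull time, retires at an
# independent uniform time, and a failure is reported only if it occurs
# before retirement and its exponential reporting delay fits the horizon.
rng = np.random.default_rng(1)
n = 20_000
failure = rng.weibull(2.0, n) * 5.0      # assumed failure-time distribution
retirement = rng.uniform(0.0, 8.0, n)    # assumed retirement distribution
delay = rng.exponential(0.5, n)          # assumed reporting-delay distribution

horizon = 4.0
reported = (failure < retirement) & (failure + delay <= horizon)
predicted_fraction = reported.mean()     # fraction of units reported failed by `horizon`
```

Repeating the simulation under alternative retirement or delay distributions gives exactly the kind of sensitivity analysis the abstract mentions.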