Results 1 - 10 of 1,608
New results in linear filtering and prediction theory
 Trans. ASME, Ser. D, J. Basic Eng.
, 1961
"... A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this "variance equation" completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary sta ..."
Cited by 607 (0 self)
in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side by side. Properties of the variance equation are of great interest
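The "variance equation" referred to in this snippet is, in standard Kalman-Bucy notation, a matrix Riccati differential equation; the symbols below follow common textbook conventions rather than necessarily the paper's own:

$$
\dot P(t) = F(t)\,P(t) + P(t)\,F^{\mathsf T}(t) - P(t)\,H^{\mathsf T}(t)\,R^{-1}(t)\,H(t)\,P(t) + Q(t),
$$

where \(P\) is the filtering-error covariance, \(F\) the system matrix, \(H\) the observation matrix, and \(Q\), \(R\) the process- and measurement-noise covariances. Its solution specifies the optimal gain, and hence the optimal filter, for the stated smoothing intervals.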
Approximating the Stochastic Knapsack Problem: The Benefit of Adaptivity
, 2005
"... We consider a stochastic variant of the NP-hard 0/1 knapsack problem in which item values are deterministic and item sizes are independent random variables with known, arbitrary distributions. Items are placed in the knapsack sequentially, and the act of placing an item in the knapsack instantiates ..."
by an optimal non-adaptive policy. We bound the adaptivity gap of the Stochastic Knapsack problem by demonstrating a polynomial-time algorithm that computes a non-adaptive policy whose expected value approximates that of an optimal adaptive policy to within a factor of 4. We also devise a polynomial
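As a rough illustration of the non-adaptive policies this abstract describes, the sketch below fixes an insertion order up front by expected value per unit of expected size and then inserts items until the knapsack overflows. All names and the toy instance are invented for illustration; this is a simple greedy stand-in, not the paper's factor-4 algorithm.

```python
import random

def greedy_nonadaptive_order(items, n_est=2000):
    """Fix an insertion order by expected value per unit of expected size.

    `items` is a list of (value, size_sampler) pairs; all names here are
    illustrative.  Mean sizes are estimated by Monte Carlo, standing in
    for the known distributions the paper assumes.
    """
    def mean_size(sampler):
        return sum(sampler() for _ in range(n_est)) / n_est
    return sorted(items, key=lambda it: it[0] / mean_size(it[1]), reverse=True)

def run_policy(order, capacity=1.0):
    """Insert items in the fixed order; stop at the first overflow."""
    total = used = 0.0
    for value, sampler in order:
        size = sampler()          # size is revealed only upon insertion
        if used + size > capacity:
            break                 # overflowing item contributes no value
        used += size
        total += value
    return total

random.seed(0)
items = [(1.0, lambda: random.uniform(0.0, 0.5)),   # ratio ~ 4
         (0.3, lambda: 0.1),                        # ratio = 3
         (0.5, lambda: random.uniform(0.0, 1.0))]   # ratio ~ 1
order = greedy_nonadaptive_order(items)
print(run_policy(order))
```

A non-adaptive policy commits to this order before any size is observed; the adaptivity gap compares its expected value to that of a policy allowed to react to the sizes revealed so far.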
The Dynamic and Stochastic Knapsack Problem
 Operations Research
, 1998
"... The Dynamic and Stochastic Knapsack Problem (DSKP) is defined as follows: Items arrive according to a Poisson process in time. Each item has a demand (size) for a limited resource (the knapsack) and an associated reward. The resource requirements and rewards are jointly distributed according to a kn ..."
Cited by 47 (1 self)
Improved Approximation Results for Stochastic Knapsack Problems
"... In the stochastic knapsack problem, we are given a set of items, each associated with a probability distribution on sizes and a profit, and a knapsack of unit capacity. The size of an item is revealed as soon as it is inserted into the knapsack, and the goal is to design a policy that maximizes the e ..."
Cited by 20 (1 self)
the expected profit of items that are successfully inserted into the knapsack. The stochastic knapsack problem is a natural generalization of the classical knapsack problem, and arises in many applications, including bandwidth allocation, budgeted learning, and scheduling. An adaptive policy for stochastic
Online learning for matrix factorization and sparse coding
, 2010
"... Sparse coding—that is, modelling data vectors as sparse linear combinations of basis elements—is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set in order to ad ..."
Cited by 330 (31 self)
to adapt it to specific data. Variations of this problem include dictionary learning in signal processing, nonnegative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations
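A minimal sketch of the general idea this abstract describes: process one data vector at a time, compute its sparse code (here by a few ISTA iterations), then take a stochastic gradient step on the dictionary. This is a simplified stand-in, not the paper's block-coordinate algorithm; all names are illustrative and NumPy is assumed.

```python
import numpy as np

def ista_code(D, x, lam=0.1, n_iter=50):
    """A few ISTA iterations for the sparse code alpha minimizing
    0.5 * ||x - D @ alpha||^2 + lam * ||alpha||_1."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    alpha = np.zeros(D.shape[1])
    for _ in range(n_iter):
        alpha = alpha - D.T @ (D @ alpha - x) / L            # gradient step
        alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lam / L, 0.0)
    return alpha

def online_dict_learning(X, n_atoms=5, lam=0.1, lr=0.01, n_epochs=3, seed=0):
    """One SGD pass per data vector: sparse-code it, nudge the dictionary,
    then project each atom back onto the unit ball."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[1], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):
            alpha = ista_code(D, X[i], lam)
            D -= lr * np.outer(D @ alpha - X[i], alpha)      # stochastic gradient
            D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)  # unit-ball projection
    return D

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 8))
D = online_dict_learning(X)
print(D.shape)
```

Because each update touches only one data vector, the cost per step is independent of the dataset size, which is what makes the online formulation attractive at large scale.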
The Dynamic and Stochastic Knapsack Problem with Deadlines
 Operations Research
, 1996
"... In this paper a dynamic and stochastic model of the well-known knapsack problem is developed and analyzed. The problem is motivated by a wide variety of real-world applications. Objects of random weight and reward arrive according to a stochastic process in time. The weights and rewards associated w ..."
Cited by 56 (0 self)
The sample average approximation method for stochastic discrete optimization
 SIAM Journal on Optimization
, 2001
"... Abstract. In this paper we study a Monte Carlo simulation based approach to stochastic discrete optimization problems. The basic idea of such methods is that a random sample is generated and consequently the expected value function is approximated by the corresponding sample average function. The ob ..."
Cited by 213 (21 self)
The obtained sample average optimization problem is solved, and the procedure is repeated several times until a stopping criterion is satisfied. We discuss convergence rates and stopping rules of this procedure and present a numerical example of the stochastic knapsack problem. Key words. Stochastic
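The procedure this abstract describes, specialized to a toy stochastic knapsack with an overflow penalty, might be sketched as follows. The model, the penalty form, and all names are assumptions chosen for illustration, not the paper's exact formulation; the key step is replacing the expectation by an average over one fixed batch of sampled scenarios and solving the resulting deterministic problem.

```python
import itertools
import random

def saa_knapsack(values, size_samplers, capacity, penalty,
                 n_samples=500, seed=0):
    """Sample average approximation for a toy stochastic knapsack.

    Maximize (total value) - penalty * E[overflow], with the expectation
    approximated by a sample average over fixed scenarios; the resulting
    deterministic problem is solved by brute force over subsets.
    """
    rng = random.Random(seed)
    scenarios = [[s(rng) for s in size_samplers] for _ in range(n_samples)]
    best_x, best_val = None, float("-inf")
    for bits in itertools.product((0, 1), repeat=len(values)):
        gain = sum(v for v, b in zip(values, bits) if b)
        avg_overflow = sum(
            max(0.0, sum(sz for sz, b in zip(sc, bits) if b) - capacity)
            for sc in scenarios) / n_samples
        val = gain - penalty * avg_overflow
        if val > best_val:
            best_x, best_val = bits, val
    return best_x, best_val

values = [1.0, 0.6, 0.4]
samplers = [lambda r: r.uniform(0.2, 0.6),
            lambda r: 0.3,
            lambda r: r.uniform(0.0, 1.0)]
x, v = saa_knapsack(values, samplers, capacity=1.0, penalty=2.0)
print(x, v)
```

In the full method, this would be repeated with fresh sample batches, and the spread of the resulting optimal values feeds the stopping rule and the convergence-rate analysis.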
Adaptivity and Approximation for Stochastic Packing Problems
"... We study stochastic variants of Packing Integer Programs (PIP): the problems of finding a maximum-value 0/1 vector x satisfying Ax ≤ b, with A and b nonnegative. Many combinatorial problems belong to this broad class, including the knapsack problem, maximum clique, stable set, matching, hyper ..."
Cited by 47 (2 self)
The Wake-Sleep Algorithm for Unsupervised Neural Networks
, 1995
"... We describe an unsupervised learning algorithm for a multilayer network of stochastic neurons. Bottom-up "recognition" connections convert the input into representations in successive hidden layers and top-down "generative" connections reconstruct the representation in one layer ..."
Cited by 283 (39 self)
, neurons are driven by generative connections, and recognition connections are adapted to increase the probability that they would produce the correct activity vector in the layer above. Supervised learning algorithms for multilayer neural networks face two problems: they require a teacher to specify