Results 1–10 of 71
Insufficiency of linear coding in network information flow
 in IEEE Transactions on Information Theory (revised January 2005)
Cited by 91 (14 self)
Abstract:
It is known that every solvable multicast network has a scalar linear solution over a sufficiently large finite-field alphabet. It is also known that this result does not generalize to arbitrary networks. There are several examples in the literature of solvable networks with no scalar linear solution over any finite field. However, each example has a linear solution for some vector dimension greater than one. It has been conjectured that every solvable network has a linear solution over some finite-field alphabet and some vector dimension. We provide a counterexample to this conjecture. We also show that if a network has no linear solution over any finite field, then it has no linear solution over any finite commutative ring with identity. Our counterexample network has no linear solution even in the more general algebraic context of modules, which includes as special cases all finite rings and Abelian groups. Furthermore, we show that the network coding capacity of this network is strictly greater than the maximum linear coding capacity over any finite field (exactly 10% greater), so the network is not even asymptotically linearly solvable. It follows that, even for more general versions of linearity such as convolutional coding, filter-bank coding, or linear time sharing, the network has no linear solution.
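The scalar linear coding that the abstract refers to can be illustrated with the classic butterfly network (a standard textbook example, not the paper's counterexample): two sources multicast one bit each to two sinks through a shared bottleneck edge that carries their XOR, i.e. a linear combination over GF(2).

```python
# Illustrative sketch: scalar linear network coding over GF(2) on the
# butterfly network. Sources hold bits b1, b2; the bottleneck edge
# carries b1 XOR b2; each sink recovers both bits from its two inputs.
def butterfly_multicast(b1: int, b2: int) -> tuple:
    coded = b1 ^ b2            # bottleneck edge: GF(2) linear combination
    sink1 = (b1, coded ^ b1)   # sink 1 sees b1 directly, decodes b2
    sink2 = (coded ^ b2, b2)   # sink 2 sees b2 directly, decodes b1
    return sink1, sink2

# Every source message pair is recovered exactly at both sinks.
for b1 in (0, 1):
    for b2 in (0, 1):
        assert butterfly_multicast(b1, b2) == ((b1, b2), (b1, b2))
```

The paper's contribution is a network where no such linear assignment exists over any finite field or vector dimension.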
Maximum Likelihood Discriminant Feature Spaces
 in Proc. ICASSP
, 2000
Cited by 71 (17 self)
Abstract:
Linear discriminant analysis (LDA) is known to be inappropriate for the case of classes with unequal sample covariances. In recent years, there has been an interest in generalizing LDA to heteroscedastic discriminant analysis (HDA) by removing the equal within-class covariance constraint. This paper presents a new approach to HDA by defining an objective function which maximizes the class discrimination in the projected subspace while ignoring the rejected dimensions. Moreover, we will investigate the link between discrimination and the likelihood of the projected samples and show that HDA can be viewed as a constrained ML projection for a full-covariance Gaussian model, the constraint being given by the maximization of the projected between-class scatter volume. It will be shown that, under diagonal-covariance Gaussian modeling constraints, applying a diagonalizing linear transformation (MLLT) to the HDA space results in increased classification accuracy even though HDA alone actually...
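The homoscedastic baseline that the paper generalizes can be sketched in a few lines: classical two-class Fisher LDA pools the class covariances into a single within-class scatter matrix, which is exactly the equal-covariance constraint HDA removes. The data and names below are illustrative, not from the paper.

```python
import numpy as np

# Minimal two-class Fisher LDA sketch (illustrative data).
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))   # class 0 samples
X1 = rng.normal([3.0, 1.0], 1.0, size=(200, 2))   # class 1 samples

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled within-class scatter: assumes equal class covariances,
# the very constraint that HDA relaxes.
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)                  # LDA projection direction

# The class means are separated along the learned direction.
assert float((m1 - m0) @ w) > 0
```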
The matrix cookbook
, 2006
Cited by 61 (0 self)
Abstract:
What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference. Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books; see the references for a full list. Errors: Very likely there are errors, typos, and mistakes, for which we apologize and would be grateful to receive corrections at
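A numerical spot-check of one identity of the kind such a reference collects, here the reversal rule for the inverse of a product (the specific identity is chosen for illustration, not quoted from the text):

```python
import numpy as np

# Spot-check (AB)^{-1} = B^{-1} A^{-1} for invertible A, B.
rng = np.random.default_rng(1)
# Adding 4*I makes the random matrices diagonally dominant, hence invertible.
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)
B = rng.normal(size=(4, 4)) + 4 * np.eye(4)

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
assert np.allclose(lhs, rhs)
```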
Uncertainty decoding for noise robust speech recognition
 in Proc. Interspeech
, 2004
Cited by 36 (12 self)
Abstract:
This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration. It has not been submitted in whole or in part for a degree at any other university. Some of the work has been published previously in conference proceedings
A framework for validation of computer models
, 2002
Cited by 35 (11 self)
Abstract:
In this paper, we present a framework that enables computer model evaluation oriented towards answering the question: Does the computer model adequately represent reality? The proposed validation framework is a six-step procedure based upon Bayesian statistical methodology. The Bayesian methodology is particularly suited to treating the major issues associated with the validation process: quantifying multiple sources of error and uncertainty in computer models; combining multiple sources of information; and updating validation assessments as new information is acquired. Moreover, it allows inferential statements to be made about predictive error associated with model predictions in untested situations. The framework is implemented in two test bed models (a vehicle crash model and a resistance
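The "updating validation assessments as new information is acquired" step can be sketched with the simplest conjugate case: a normal prior on a model's prediction bias, updated with field observations. All numbers and names here are illustrative assumptions, not the paper's framework.

```python
import numpy as np

# Toy normal-normal Bayesian update of a model's prediction bias.
def update_bias(mu0, var0, data, noise_var):
    """Posterior mean/variance of the bias given noisy discrepancy data."""
    n = len(data)
    post_var = 1.0 / (1.0 / var0 + n / noise_var)
    post_mu = post_var * (mu0 / var0 + np.sum(data) / noise_var)
    return post_mu, post_var

# Prior: bias ~ N(0, 4). Three field-vs-model discrepancies arrive.
mu, var = update_bias(mu0=0.0, var0=4.0, data=[1.2, 0.8, 1.0], noise_var=0.5)
assert var < 4.0   # new data always tightens the assessment
```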
Integrated variance reduction strategies for simulation
 Operations Research
, 1996
Cited by 29 (2 self)
Abstract:
We develop strategies for integrated use of certain well-known variance reduction techniques to estimate a mean response in a finite-horizon simulation experiment. The building blocks for these integrated variance reduction strategies are the techniques of conditional expectation, correlation induction (including antithetic variates and Latin hypercube sampling), and control variates; and all pairings of these techniques are examined. For each integrated strategy, we establish sufficient conditions under which that strategy will yield a smaller response variance than its constituent variance reduction techniques will yield individually. We also provide asymptotic variance comparisons between many of the methods discussed, with emphasis on integrated strategies that incorporate Latin hypercube sampling. An experimental performance evaluation reveals that in the simulation of stochastic activity networks, substantial variance reductions can be achieved with these integrated strategies. Both the theoretical and experimental results indicate that superior performance is obtained via joint application of the techniques of conditional expectation and Latin hypercube sampling. Subject classifications: Simulation, efficiency: conditioning, control variates, correlation induction. Area of review: Simulation.
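Two of the correlation-induction building blocks named above can be sketched on a toy estimand, E[exp(U)] with U uniform on (0,1), whose true value is e - 1. The test function and sample sizes are illustrative, not from the paper.

```python
import numpy as np

# Crude Monte Carlo vs. antithetic variates vs. Latin hypercube sampling
# for estimating E[f(U)], U ~ Uniform(0,1), with f(u) = exp(u).
rng = np.random.default_rng(2)
f = lambda u: np.exp(u)                  # true mean is e - 1
n = 1000

u = rng.random(n)
crude = f(u).mean()

# Antithetic variates: pair each u with 1-u; for monotone f the pair
# is negatively correlated, which reduces variance.
anti = 0.5 * (f(u) + f(1.0 - u)).mean()

# Latin hypercube sampling in 1D: one uniform draw per equal-probability
# stratum [i/n, (i+1)/n), which forces even coverage of (0,1).
lhs = f((np.arange(n) + rng.random(n)) / n).mean()

for est in (crude, anti, lhs):
    assert abs(est - (np.e - 1)) < 0.1
```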
Survey of decision field theory
, 2002
Cited by 28 (5 self)
Abstract:
This article summarizes the cumulative progress of a cognitive-dynamical approach to decision making and preferential choice called decision field theory. This review includes applications to (a) binary decisions among risky and uncertain actions, (b) multi-attribute preferential choice, (c) multi-alternative preferential choice, and (d) certainty equivalents such as prices. The theory provides natural explanations for violations of choice principles including strong stochastic transitivity, independence of irrelevant alternatives, and regularity. The theory also accounts for the relation between choice and decision time, preference reversals between choice and certainty equivalents, and preference reversals under time pressure. Comparisons with other dynamic models of decision making and other random utility models of preference are discussed.
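The core dynamical idea, a preference state that accumulates noisy momentary evaluations until a threshold is crossed, jointly producing a choice and a decision time, can be sketched as a simple sequential-sampling simulation. The parameters are illustrative, not decision field theory's full model.

```python
import numpy as np

# Minimal accumulate-to-threshold sketch in the spirit of decision
# field theory: preference state p drifts between two options.
def simulate_choice(drift=0.1, noise=1.0, threshold=10.0, seed=3):
    rng = np.random.default_rng(seed)
    p, t = 0.0, 0
    while abs(p) < threshold:
        p += drift + noise * rng.normal()   # noisy momentary comparison
        t += 1
    # Returns the chosen option and the decision time together, the
    # joint quantity the theory accounts for.
    return (0 if p > 0 else 1), t

choice, rt = simulate_choice()
assert choice in (0, 1) and rt > 0
```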
ASAP3: A batch means procedure for steadystate simulation analysis
 ACM Transactions on Modeling and Computer Simulation
, 2005
Cited by 26 (18 self)
Abstract:
We introduce ASAP3, a refinement of the batch means algorithms ASAP and ASAP2, that delivers point and confidence-interval estimators for the expected response of a steady-state simulation. ASAP3 is a sequential procedure designed to produce a confidence-interval estimator that satisfies user-specified requirements on absolute or relative precision as well as coverage probability. ASAP3 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality; and then ASAP3 fits a first-order autoregressive (AR(1)) time series model to the batch means. If necessary, the batch size is further increased until the autoregressive parameter in the AR(1) model does not significantly exceed 0.8. Next, ASAP3 computes the terms of an inverse Cornish-Fisher expansion for the classical batch means t-ratio based on the AR(1) parameter estimates; and finally ASAP3 delivers a correlation-adjusted confidence interval based on this expansion. Regarding not only conformance to the precision and coverage-probability requirements but also the mean and variance of the half-length of the delivered confidence interval, ASAP3 compared favorably to other batch means procedures (namely,
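The classical, non-adaptive batch-means t-interval that ASAP3 refines can be sketched directly: partition the output into nonoverlapping batches, treat the batch means as roughly independent and normal, and form a t-interval. The data and batch count below are illustrative; there is no normality testing or AR(1) correction in this sketch.

```python
import numpy as np

# Classical batch-means confidence interval for a steady-state mean.
rng = np.random.default_rng(4)
output = rng.normal(5.0, 2.0, size=3000)    # stand-in simulation output

k, m = 30, 100                              # 30 nonoverlapping batches of 100
means = output.reshape(k, m).mean(axis=1)   # batch means
grand = means.mean()                        # point estimator
# 0.975 quantile of Student's t with k-1 = 29 degrees of freedom.
half = 2.045 * means.std(ddof=1) / np.sqrt(k)

assert half > 0 and abs(grand - 5.0) < 0.5
```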
A Neural Network Primer
, 1994
Cited by 25 (8 self)
Abstract:
Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., those used to model spin glasses). One of the main notio...
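The perceptron, one of the two linear models the primer covers, fits in a few lines: a unit whose response is the sign of a weighted sum of its inputs, with weights adjusted only on misclassified examples. The toy task below (logical AND) is illustrative, not from the primer.

```python
import numpy as np

# Perceptron learning rule on a linearly separable toy problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])                # logical AND with +/-1 labels

w, b = np.zeros(2), 0.0
for _ in range(20):                          # epochs over the training set
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:           # misclassified -> update
            w += yi * xi                     # strengthen toward the label
            b += yi

pred = np.sign(X @ w + b)                    # unit response: sign of activation
assert (pred == y).all()
```

Because the data are separable, the perceptron convergence theorem guarantees the loop stops updating after finitely many mistakes.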
Efficient Particle Filtering for Multiple Target Tracking with Application to Tracking in Structured Images
, 2002
Cited by 19 (1 self)
Abstract:
For many dynamic estimation problems involving nonlinear and/or non-Gaussian models, particle filtering offers improved performance at the expense of computational effort. This paper describes a scheme for efficiently tracking multiple targets using particle filters. The tracking of the individual targets is made efficient through the use of Rao-Blackwellisation. The tracking of multiple targets is made practicable using quasi-Monte Carlo integration. The efficiency of the approach is illustrated on synthetic data.
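The baseline that Rao-Blackwellisation and quasi-Monte Carlo integration improve upon is the plain bootstrap particle filter, sketched here for a single 1D random-walk target. The model and parameters are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

# Bootstrap particle filter for a 1D random-walk target with noisy
# position observations (illustrative single-target baseline).
rng = np.random.default_rng(5)
T, N = 50, 500
truth = np.cumsum(rng.normal(0, 0.5, T))        # latent random-walk state
obs = truth + rng.normal(0, 1.0, T)             # noisy observations

particles = np.zeros(N)
est = []
for t in range(T):
    particles += rng.normal(0, 0.5, N)          # propagate through dynamics
    w = np.exp(-0.5 * (obs[t] - particles)**2)  # Gaussian observation likelihood
    w /= w.sum()
    est.append(np.sum(w * particles))           # weighted posterior-mean estimate
    idx = rng.choice(N, size=N, p=w)            # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(est) - truth) ** 2))
assert rmse < 1.0    # tracks better than the raw observation noise (std 1.0)
```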