Results 1 – 3 of 3
Using Difficulty of Prediction to Decrease Computation: Fast Sort, Priority Queue and Convex Hull on Entropy Bounded Inputs
Abstract

Cited by 17 (4 self)
There has been a recent upsurge of interest in the Markov model, and in more general stationary ergodic stochastic distributions, in the theoretical computer science community (e.g., see [Vitter,Krishnan91], [Karlin,Philips,Raghavan92], [Raghavan92] for the use of Markov models in online algorithms such as caching and prefetching). Those results used the fact that compressible sources are predictable (and vice versa), and showed that online algorithms can improve their performance by prediction. Actual page access sequences are in fact somewhat compressible, so their predictive methods can be of benefit. This paper investigates the interesting idea of decreasing computation by using learning in the opposite way, namely to determine the difficulty of prediction. That is, we approximately learn the input distribution, and then improve the performance of the computation when the input is not too predictable, rather than the reverse. To our knowledge, ...
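The general idea of speeding up computation on learnable inputs can be illustrated, very loosely, by a distribution-sensitive sort: learn the input distribution from a small sample, predict each element's approximate rank, and then only do cheap local work. The sketch below is purely illustrative (the function name, sample size, and bucketing scheme are my assumptions), not the algorithm from the paper.

```python
import bisect
import random

def distribution_sensitive_sort(xs, sample_size=64):
    """Sketch of a distribution-sensitive sort.

    "Learn" the input distribution from a small random sample, predict
    each element's approximate rank by binary search in the sample,
    bucket by predicted rank, then sort the (hopefully small) buckets.
    """
    n = len(xs)
    if n < 2:
        return list(xs)
    sample = sorted(random.sample(xs, min(sample_size, n)))
    buckets = [[] for _ in range(len(sample) + 1)]
    for x in xs:
        # predicted rank = position of x relative to the learned sample
        buckets[bisect.bisect_left(sample, x)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # cheap when the prediction is good
    return out
```

On a low-entropy input most buckets stay small, so the per-bucket sorts are cheap; on an unpredictable input the method degrades gracefully toward an ordinary comparison sort.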
DATA CONDITIONING METHODS FOR AERO ENGINE SIGNALS AND THEIR INFLUENCE ON THE ACCURACY OF FATIGUE LIFE USAGE MONITORING RESULTS
Abstract
During the design of monitoring systems for aircraft engines it is common to include algorithmic provisions to reduce the anticipated noise content of the input signals by applying some sort of filters at a suitable stage of processing. A tacit assumption is often made that such filtering will make the system more robust with respect to noise or even data errors. The elimination of uncorrelated noise in a signal emanating from a physical process may have a large effect on the predictability of this signal, thus enabling high data compression rates in a dedicated or embedded flight data recording system. However, little is known about the influence of filtering the input into engine fatigue life usage calculations on the outcome of various models used in present monitoring systems. A simplified, yet realistic mathematical model is used to describe the thermal response, stress and fatigue behavior of fracture critical parts in an aero engine compressor. Using this model, the consequences of applying digital recursive filters to recorded engine data are investigated. The analysis concentrates on statistical methods to assess the accuracy. From the results some guidelines are derived that allow a more systematic selection of filter parameters when a predefined accuracy of the fatigue life usage results is required.
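The digital recursive filters the abstract refers to can be as simple as a first-order IIR low-pass filter. The sketch below (illustrative, not taken from the paper; the function name and default smoothing constant are assumptions) shows the recurrence and the single parameter whose selection the paper's guidelines concern.

```python
def recursive_lowpass(samples, alpha=0.1):
    """First-order recursive (IIR) low-pass filter:

        y[k] = alpha * x[k] + (1 - alpha) * y[k-1]

    A small alpha rejects more uncorrelated noise but also lags fast
    transients (e.g. thermal excursions) that drive fatigue cycles,
    which is exactly the accuracy trade-off the study quantifies.
    """
    if not samples:
        return []
    y = [samples[0]]  # initialize on the first sample
    for x in samples[1:]:
        y.append(alpha * x + (1 - alpha) * y[-1])
    return y
```

Because the filter is recursive, each output depends on the entire input history through a single state value, so it is cheap enough to run inside an embedded flight data recorder.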
Comparison of Several Smoothing Methods in Statistical Language Model
Abstract
With the development of computer technology and the appearance of huge training text corpora, the performance of language models has improved considerably in recent years, but the intrinsic sparse-data problem still exists. This paper investigates several smoothing methods as applied to Chinese continuous speech recognition. We compare the performance of the different methods, particularly for pruned language models, and conclude that the Kneser-Ney strategy is better for the model without pruning, while its performance decreases for the pruned language model.
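Interpolated Kneser-Ney smoothing, the strategy the comparison favors for unpruned models, has a standard textbook bigram form: discount every observed bigram count and back off to a continuation probability based on how many distinct contexts a word follows. The sketch below is that generic formulation (the function name and the discount value 0.75 are conventional assumptions, not the paper's implementation).

```python
from collections import Counter

def kneser_ney_bigram(tokens, discount=0.75):
    """Interpolated Kneser-Ney for bigrams (textbook form).

    Returns a function prob(prev, word) giving p(word | prev).
    """
    bigrams = Counter(zip(tokens, tokens[1:]))
    # c(v): how often v occurs as a left context
    unigrams = Counter(tokens[:-1])
    # continuation counts: distinct left contexts each word follows
    continuations = Counter(w for (_, w) in bigrams)
    total_bigram_types = len(bigrams)
    # distinct words that follow each context v
    follow_types = Counter(v for (v, _) in bigrams)

    def prob(prev, word):
        p_cont = continuations[word] / total_bigram_types
        c_v = unigrams[prev]
        if c_v == 0:
            return p_cont  # unseen context: pure continuation model
        lam = discount * follow_types[prev] / c_v
        return max(bigrams[(prev, word)] - discount, 0) / c_v + lam * p_cont

    return prob
```

The continuation term is what distinguishes Kneser-Ney from simpler discounting schemes: a word seen in many contexts gets a high backoff probability even if its raw frequency is modest.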