Results 1-10 of 28
Rate-Distortion Optimized Streaming of Packetized Media
 IEEE Trans. Multimedia
, 2001
Cited by 225 (12 self)
This paper addresses the problem of streaming packetized media ...
Evaluating retrieval performance using clickthrough data
, 2003
Cited by 53 (7 self)
This paper proposes a new method for evaluating the quality of retrieval functions. Unlike traditional methods that require relevance judgments by experts or explicit user feedback, it is based entirely on clickthrough data. This is a key advantage, since clickthrough data can be collected at very low cost and without overhead for the user. Taking an approach from experiment design, the paper proposes an experiment setup that generates unbiased feedback about the relative quality of two search results without explicit user feedback. A theoretical analysis shows that the method gives the same results as evaluation with traditional relevance judgments under mild assumptions. An empirical analysis verifies that the assumptions are indeed justified and that the new method leads to conclusive results in a WWW retrieval study.
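The blind paired-comparison setup, in which users see a merged result list and clicks are credited to whichever retrieval function contributed the clicked result, can be sketched roughly as follows (a simplified interleaving scheme; names and details are illustrative, not the paper's exact experiment design):

```python
def interleave(ranking_a, ranking_b):
    """Alternately draw from two rankings, skipping documents already placed.

    Returns the merged ranking and a map from document to the ranker
    ("a" or "b") that contributed it.
    """
    combined, origin = [], {}
    ia = ib = 0
    take_a = True
    while ia < len(ranking_a) or ib < len(ranking_b):
        if take_a:
            if ia < len(ranking_a):
                doc, src = ranking_a[ia], "a"
                ia += 1
            else:
                take_a = False
                continue
        else:
            if ib < len(ranking_b):
                doc, src = ranking_b[ib], "b"
                ib += 1
            else:
                take_a = True
                continue
        take_a = not take_a
        if doc not in origin:
            origin[doc] = src
            combined.append(doc)
    return combined, origin

def preference(origin, clicked_docs):
    """More clicks on one ranker's contributions signals a preference."""
    a = sum(1 for d in clicked_docs if origin.get(d) == "a")
    b = sum(1 for d in clicked_docs if origin.get(d) == "b")
    return "a" if a > b else "b" if b > a else "tie"
```

Because both rankings are drawn from in alternation, neither function gets a systematic position advantage, which is what makes the resulting click counts an (approximately) unbiased relative signal.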
Learning interpretable SVMs for biological sequence classification
 BMC Bioinformatics
, 2005
Cited by 32 (9 self)
We propose novel algorithms for solving the so-called Support Vector Multiple Kernel Learning problem and show how they can be used to understand the resulting support vector decision function. While classical kernel-based algorithms (such as SVMs) are based on a single kernel, in Multiple Kernel Learning a quadratically constrained quadratic program is solved in order to find a sparse convex combination of a set of support vector kernels. We show how this problem can be cast into a semi-infinite linear optimization problem which can in turn be solved efficiently using a boosting-like iterative method in combination with standard SVM optimization algorithms. The proposed method is able to deal with thousands of examples while combining hundreds of kernels within reasonable time. In the second part we show how this technique can be used to understand the obtained decision function in order to extract biologically relevant knowledge about the sequence analysis problem at hand. We consider the problem of splice site identification and combine string kernels at different sequence positions and with various substring (oligomer) lengths. The proposed algorithm computes a sparse weighting over the length and the substring, highlighting which substrings are important for discrimination. Finally, we propose a bootstrap scheme in order to reliably identify a few statistically significant positions, which can then be used for further analysis such as consensus finding.
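At the heart of Multiple Kernel Learning is a sparse convex combination of base kernel matrices; a minimal sketch of that combination step (illustrative only, not the authors' semi-infinite LP solver):

```python
import numpy as np

def combine_kernels(kernels, beta):
    """Convex combination K = sum_k beta_k * K_k of base kernel matrices.

    The MKL weight constraint requires beta_k >= 0 and sum(beta) = 1;
    a sparse beta selects only a few of the base kernels.
    """
    beta = np.asarray(beta, dtype=float)
    assert np.all(beta >= 0) and np.isclose(beta.sum(), 1.0)
    return sum(b * K for b, K in zip(beta, kernels))
```

A convex combination of positive semidefinite kernels is again a valid kernel, so the combined matrix can be handed directly to any standard SVM solver.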
Adjusting the Outputs of a Classifier to New a Priori Probabilities May Significantly Improve Classification Accuracy: Evidence from a Multi-Class Problem in Remote Sensing
 Neural Computation
, 2001
Cited by 22 (2 self)
In the present study, we introduce a simple iterative procedure that allows the outputs of a classifier to be corrected with respect to the a priori probabilities of a new data set to be scored, even when these new a priori probabilities are unknown in advance. We also show that a significant increase in classification accuracy can be observed when using this procedure properly. More specifically, by applying the correcting procedure to the outputs of a simple logistic regression model, we observe an increase of 5.8% in classification rate on a difficult real-world multi-class problem: the automatic labeling of geographical maps based on remote sensing information. Moreover, ...
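A procedure of this kind can be sketched as an EM-style iteration that alternates between reweighting the posteriors by a prior ratio and re-estimating the new priors. The sketch below is illustrative and assumes soft (probabilistic) classifier outputs, not the authors' exact formulation:

```python
import numpy as np

def adjust_to_new_priors(posteriors, train_priors, n_iter=200, tol=1e-8):
    """Iteratively correct classifier outputs for an unknown prior shift.

    posteriors:   (n_samples, n_classes) outputs of the original classifier.
    train_priors: (n_classes,) class frequencies in the training set.
    Returns (adjusted posteriors, estimated priors of the new data set).
    """
    post = np.asarray(posteriors, dtype=float)
    pri_train = np.asarray(train_priors, dtype=float)
    pri_new = pri_train.copy()
    for _ in range(n_iter):
        # E-step: reweight posteriors by the ratio of current to training priors.
        adj = post * (pri_new / pri_train)
        adj /= adj.sum(axis=1, keepdims=True)
        # M-step: re-estimate the new priors from the adjusted posteriors.
        est = adj.mean(axis=0)
        if np.max(np.abs(est - pri_new)) < tol:
            break
        pri_new = est
    # Final adjustment with the converged prior estimate.
    adj = post * (pri_new / pri_train)
    adj /= adj.sum(axis=1, keepdims=True)
    return adj, pri_new
```

The key point is that no labels from the new data set are needed: the new priors are estimated from the classifier's own (adjusted) outputs.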
Articulatory Methods for Speech Production and Recognition
, 1996
Cited by 9 (0 self)
... production-based knowledge into the recognition framework. By using an explicit time-domain articulatory model of the mechanisms of coarticulation, it is hoped to obtain a more accurate model of contextual effects in the acoustic signal, while using fewer parameters than traditional acoustically driven approaches. Separate articulatory and acoustic models are provided, and in each case the parameters of the models are automatically optimised over a training data set. A predictive, statistically based model of coarticulation is described, and found to yield improved articulatory modelling accuracy compared with X-ray articulatory traces. Parameterised acoustic vectors are synthesised by a set of artificial neural networks, and the resulting acoustic representations are used to rescore N-best recognition hypothesis lists produced by an HMM-based recogniser. The system is evaluated on two test databases, one including speaker-specific X-ray training data and the other aco ...
A Theoretical Study on Expert Fusion Strategies
, 2000
Cited by 8 (3 self)
We look at a single point in the feature space, two classes, and L classifiers estimating the posterior probability for class ω_1. Assuming that the estimates are independent and identically distributed (normal or uniform), we give formulas for the classification error for the following fusion methods: average, minimum, maximum, median, majority vote and oracle.

Keywords: Classifier combination, theoretical error, expert fusion, order statistics, majority vote, independent classifiers.

I. Introduction. Classifier combination has received considerable attention in the past decade and is now an established pattern recognition offspring. Recently, the focus has been shifting from practical heuristic solutions of the combination problem towards explaining why combination methods and strategies work so well and in what cases some methods are better than others. Let D = {D_1, ..., D_L} be a set (pool/committee/ensemble) of classifiers, also regarded as "experts". By combining the individual outputs ...
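With a decision threshold of 0.5 on the fused support for class ω_1, the fusion rules listed above can be sketched as follows (illustrative; the paper derives closed-form error formulas for these rules rather than code):

```python
import numpy as np

def fuse(estimates, method):
    """Fuse L classifier estimates of P(omega_1 | x) at a single point.

    estimates: sequence of posterior estimates, one per classifier.
    Returns True if the fused support exceeds 0.5 (predict omega_1).
    """
    p = np.asarray(estimates, dtype=float)
    if method == "average":
        s = p.mean()
    elif method == "minimum":
        s = p.min()
    elif method == "maximum":
        s = p.max()
    elif method == "median":
        s = np.median(p)
    elif method == "majority":
        # Each classifier votes for omega_1 iff its own estimate > 0.5.
        s = (p > 0.5).mean()
    else:
        raise ValueError(f"unknown fusion method: {method}")
    return bool(s > 0.5)
```

(The oracle rule from the paper is omitted, since it needs the true label: it counts a point as correct if any single classifier is correct.)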
Bidding for the Future: Signaling in Auctions with an Aftermarket
 Journal of Economic Theory
, 2003
Cited by 5 (0 self)
This paper considers auctions in which bidders compete for an advantage in future strategic interactions. Examples include bidding for patented innovations that reduce production costs, takeover battles, and the auctioning of licenses to operate in new markets (e.g. the recent spectrum auctions). We show that when bidders have an incentive to exaggerate their private information, equilibrium bids are biased upwards as bidders try to signal via the winning bid. Signaling is most prominent in second-price auctions, where equilibrium bids can be "above value" and may diverge to infinity for a strategic improvement everyone agrees is negligible. In English and first-price auctions, signaling is necessarily less extreme as the winning bidder incurs the cost of her signaling choice. Hence there is no strategic equivalence between the second-price and English auction in this independent private-information context (although revenue equivalence holds). In the English auction, the winner increases the winning bid after everyone else has dropped out. The opportunity to signal via the winning bid lowers bidders' expected payoffs and raises the seller's expected revenue, giving sellers an incentive to conceal information they may have about bidders' private valuations. Losers' profits are unaffected by the ...
Learning More Powerful Test Statistics for Click-Based Retrieval Evaluation
Cited by 4 (3 self)
Interleaving experiments are an attractive methodology for evaluating retrieval functions through implicit feedback. Designed as a blind and unbiased test for eliciting a preference between two retrieval functions, an interleaved ranking of the results of two retrieval functions is presented to the users. It is then observed whether the users click more on results from one retrieval function or the other. While it was shown that such interleaving experiments reliably identify the better of the two retrieval functions, the naive approach of counting all clicks equally leads to a suboptimal test. We present new methods for learning how to score different types of clicks so that the resulting test statistic optimizes the statistical power of the experiment. This can lead to substantial savings in the amount of data required for reaching a target confidence level. Our methods are evaluated on an operational search engine over a collection of scientific articles.
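The idea of scoring click types differently before testing can be sketched as a weighted per-query score difference summarized by a t-like statistic (illustrative; the paper's weight-learning step, which maximizes statistical power, is omitted here):

```python
import numpy as np

def weighted_click_statistic(counts_a, counts_b, weights):
    """t-like statistic for a weighted click-score comparison.

    counts_a, counts_b: (n_queries, n_click_types) clicks credited to each
    retrieval function per query; weights: (n_click_types,) learned score
    for each click type (the naive test would use all-equal weights).
    A larger |t| means the same data yields a more conclusive comparison.
    """
    diff = (np.asarray(counts_a, dtype=float)
            - np.asarray(counts_b, dtype=float)) @ np.asarray(weights, dtype=float)
    return diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
```

Choosing weights that increase this statistic is what reduces the amount of interleaving data needed to reach a target confidence level.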
Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets
, 2011
Cited by 4 (0 self)
Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively.
Limitations of human 3D-force discrimination
 University of Munich
, 2006
Cited by 3 (0 self)
Internet-based telepresence and teleaction systems require packet-based transmission of haptic data and typically generate high packet rates between operator and teleoperator. This leads to the necessity of packet-rate reduction techniques. The so-called deadband approach presented earlier by the authors uses a psychophysically motivated scheme based on Weber's difference threshold (just noticeable difference, JND), where force sample values are only transmitted if the change exceeds this threshold. This approach has been extended to three dimensions, resulting in an additional perceptual domain, namely force direction. An experimental evaluation with human subjects was conducted in order to examine the change of the JND in 3D when force magnitude and force direction are combined. Our results show that the extension into three dimensions leads to an increased JND in certain cases. Thus, higher compression ratios of haptic data and a reduction in the number of packets sent over the network can be reached.
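A one-dimensional version of such a deadband scheme can be sketched as follows (illustrative; the threshold value is an assumption, and the paper's 3D extension additionally accounts for force direction):

```python
def deadband_filter(samples, threshold):
    """Weber-style deadband: send a force sample only when it deviates from
    the last transmitted sample by more than a relative JND threshold.

    samples: sequence of force magnitudes; threshold: relative JND, e.g. 0.1.
    Returns the list of (index, value) pairs that would be transmitted.
    """
    sent = []
    last = None
    for i, f in enumerate(samples):
        # The first sample is always sent; later ones only on a
        # super-threshold change relative to the last transmitted value.
        if last is None or abs(f - last) > threshold * abs(last):
            sent.append((i, f))
            last = f
        # Otherwise the receiver keeps extrapolating the last sent value.
    return sent
```

Samples inside the deadband are simply dropped, which is what trades a perceptually invisible error for a lower packet rate.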