Results 11–20 of 1,524
Learning to Construct Knowledge Bases from the World Wide Web
, 2000
"... The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would ena ..."
Abstract

Cited by 206 (4 self)
 Add to MetaCart
The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer-understandable knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs. The first is an ontology that defines the classes (e.g., company, person, employee, product) and relations (e.g., employed_by, produced_by) of interest when creating the knowledge base. The second is a set of training data consisting of labeled regions of hypertext that represent instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This article describes our general a...
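The two inputs described in this abstract, an ontology of classes and relations plus labeled hypertext regions, can be sketched as plain data structures. All class, relation, and field names here are illustrative only, not the system's actual internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    # classes of interest, e.g. company, person, employee, product
    classes: set = field(default_factory=set)
    # relations as (name, domain_class, range_class) triples
    relations: set = field(default_factory=set)

@dataclass
class LabeledRegion:
    # a labeled region of hypertext used as a training instance
    url: str
    text: str
    label: str  # a class or relation name drawn from the ontology

onto = Ontology(
    classes={"company", "person", "employee", "product"},
    relations={("employed_by", "person", "company"),
               ("produced_by", "product", "company")},
)
example = LabeledRegion("http://example.com/staff", "Jane Doe", "employee")
```

A trained extractor would then map unlabeled page regions onto these class and relation labels.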
A Survey of Socially Interactive Robots
, 2002
"... This paper reviews "socially interactive robots": robots for which social humanrobot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the di#erent forms of "social robots". We ..."
Abstract

Cited by 205 (24 self)
 Add to MetaCart
This paper reviews "socially interactive robots": robots for which social human-robot interaction is important. We begin by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different forms of "social robots". We then present a taxonomy of design methods and system components used to build socially interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [61].
Efficient retrieval of similar time sequences under time warping
 in Data Engineering
, 1998
"... ..."
Hidden Markov processes
 IEEE Trans. Inform. Theory
, 2002
"... Abstract—An overview of statistical and informationtheoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discretetime finitestate homogeneous Markov chain observed through a discretetime memoryless invariant channel. In recent years, the work of Baum and Petrie on finite ..."
Abstract

Cited by 174 (3 self)
 Add to MetaCart
An overview of statistical and information-theoretic aspects of hidden Markov processes (HMPs) is presented. An HMP is a discrete-time finite-state homogeneous Markov chain observed through a discrete-time memoryless invariant channel. In recent years, the work of Baum and Petrie on finite-state finite-alphabet HMPs was expanded to HMPs with finite as well as continuous state spaces and a general alphabet. In particular, statistical properties and ergodic theorems for relative entropy densities of HMPs were developed. Consistency and asymptotic normality of the maximum-likelihood (ML) parameter estimator were proved under some mild conditions. Similar results were established for switching autoregressive processes, which generalize HMPs. New algorithms were developed for estimating the state, parameter, and order of an HMP, for universal coding and classification of HMPs, and for universal decoding of hidden Markov channels. These and other related topics are reviewed in this paper.

Index Terms—Baum–Petrie algorithm, entropy ergodic theorems, finite-state channels, hidden Markov models, identifiability, Kalman filter, maximum-likelihood (ML) estimation, order estimation, recursive parameter estimation, switching autoregressive processes, Ziv inequality.
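The definition in this abstract, a homogeneous Markov chain seen through a memoryless channel, can be illustrated with a toy two-state simulation. All transition and corruption probabilities below are invented for illustration:

```python
import random

def sample_hmp(T, seed=0):
    """Sample T observations from a toy hidden Markov process:
    a 2-state homogeneous Markov chain observed through a
    memoryless channel that flips the emitted symbol w.p. 0.1."""
    rng = random.Random(seed)
    # state transition probabilities (rows sum to 1)
    P = {0: [0.9, 0.1], 1: [0.2, 0.8]}
    state, obs = 0, []
    for _ in range(T):
        # memoryless invariant channel: emit the state, corrupted w.p. 0.1
        obs.append(state if rng.random() > 0.1 else 1 - state)
        # homogeneous Markov transition to the next hidden state
        state = 0 if rng.random() < P[state][0] else 1
    return obs

print(sample_hmp(10))
```

The observation sequence is itself generally not Markov, which is what makes the statistical theory surveyed in the paper nontrivial.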
A stochastic model of human-machine interaction for learning dialogue strategies
 IEEE Trans on Speech and Audio Processing
"... ..."
Improving timbre similarity: How high is the sky
 Results in Speech and Audio Sciences
"... Abstract. We report on experiments done in an attempt to improve the performance of a music similarity measure which we introduced earlier. The technique aims at comparing music titles on the basis of their global “timbre”, which has many applications in the field of Music Information Retrieval. Suc ..."
Abstract

Cited by 148 (14 self)
 Add to MetaCart
We report on experiments done in an attempt to improve the performance of a music similarity measure which we introduced earlier. The technique aims at comparing music titles on the basis of their global “timbre”, which has many applications in the field of Music Information Retrieval. Such measures of timbre similarity have seen growing interest lately, and every contribution (including ours) is yet another instantiation of the same basic pattern recognition architecture, only with different algorithm variants and parameters. Most give encouraging results with little effort, and imply that near-perfect results would follow simply from fine-tuning the algorithms' parameters. However, such systematic testing over large, interdependent parameter spaces is both difficult and costly, as it requires working with a whole general metadatabase architecture. This paper contributes in two ways to the current state of the art. We report on extensive tests over very many parameters and algorithmic variants, whether already envisioned in the literature or not. This leads to an improvement over existing algorithms of about 15% R-precision. But most importantly, we describe many variants that surprisingly do not lead to any substantial improvement. Moreover, our simulations suggest the existence of a “glass ceiling” at about 65% R-precision which probably cannot be overcome by pursuing such variations on the same theme.
Polynomial Splines and Their Tensor Products in Extended Linear Modeling
 Ann. Statist
, 1997
"... ANOVA type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density function, hazard function, conditional hazard function, or spectral density function. Polynomial splines are used to m ..."
Abstract

Cited by 142 (14 self)
 Add to MetaCart
ANOVA-type models are considered for a regression function or for the logarithm of a probability function, conditional probability function, density function, conditional density function, hazard function, conditional hazard function, or spectral density function. Polynomial splines are used to model the main effects, and their tensor products are used to model any interaction components that are included. In the special context of survival analysis, the baseline hazard function is modeled and nonproportionality is allowed. In general, the theory involves the L2 rate of convergence for the fitted model and its components. The methodology involves least squares and maximum likelihood estimation, stepwise addition of basis functions using Rao statistics, stepwise deletion using Wald statistics, and model selection using BIC, cross-validation, or an independent test set. Publicly available software, written in C and interfaced to S/SPLUS, is used to apply this methodology to...
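Among the model selection criteria this abstract lists, BIC has a simple closed form, -2 log L + k log n. A minimal sketch, with invented fitted log-likelihood values standing in for actual spline fits:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: smaller is better."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# hypothetical spline fits: number of basis functions -> fitted log-likelihood
candidates = {5: -120.0, 10: -110.0, 20: -108.0}
scores = {k: bic(ll, k, n_obs=200) for k, ll in candidates.items()}
best = min(scores, key=scores.get)  # the log(n) penalty favors the small model here
```

The log(n) complexity penalty is what lets stepwise addition and deletion stop before overfitting the basis expansion.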
Signal modeling techniques in speech recognition
 PROCEEDINGS OF THE IEEE
, 1993
"... We have seen three important trends develop in the last five years in speech recognition. First, heterogeneous parameter sets that mix absolute spectral information with dynamic, or timederivative, spectral information, have become common. Second, similariry transform techniques, often used to norm ..."
Abstract

Cited by 132 (5 self)
 Add to MetaCart
We have seen three important trends develop in the last five years in speech recognition. First, heterogeneous parameter sets that mix absolute spectral information with dynamic, or time-derivative, spectral information have become common. Second, similarity transform techniques, often used to normalize and decorrelate parameters in some computationally inexpensive way, have become popular. Third, the signal parameter estimation problem has merged with the speech recognition process, so that more sophisticated statistical models of the signal’s spectrum can be estimated in a closed-loop manner. In this paper, we review the signal processing components of these algorithms. These algorithms are presented as part of a unified view of the signal parameterization problem, in which there are three major tasks: measurement, transformation, and statistical modeling. This paper is by no means a comprehensive survey of all possible techniques of signal modeling in speech recognition. There are far too many algorithms in use today to make an exhaustive survey feasible (and cohesive). Instead, this paper is meant to serve as a tutorial on signal processing in state-of-the-art speech recognition systems and to review those techniques most commonly used. In keeping with this goal, a complete mathematical description of each algorithm has been included in the paper.
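The "dynamic, or time-derivative" parameters named in the first trend are commonly computed as regression (delta) coefficients over a short window of frames. A minimal sketch of that standard regression formula, not necessarily the exact variant any one system uses:

```python
def delta(frames, K=2):
    """Delta (time-derivative) features via the standard regression
    formula: d[t] = sum_k k*(x[t+k] - x[t-k]) / (2 * sum_k k^2)."""
    denom = 2.0 * sum(k * k for k in range(1, K + 1))
    out = []
    T = len(frames)
    for t in range(T):
        num = 0.0
        for k in range(1, K + 1):
            # clamp indices at the sequence edges, as is conventional
            num += k * (frames[min(t + k, T - 1)] - frames[max(t - k, 0)])
        out.append(num / denom)
    return out

print(delta([0.0, 1.0, 2.0, 3.0, 4.0]))  # → [0.5, 0.8, 1.0, 0.8, 0.5]
```

In practice each frame is a vector of spectral coefficients and the same formula is applied per dimension; the deltas are then appended to the absolute parameters to form the heterogeneous set the abstract describes.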
Continuous Probabilistic Transforms for Voice Conversion
 IEEE Trans. on Acoustics, Speech and Signal Processing
, 1998
"... ..."
Progressive Image Coding for Noisy Channels
 IEEE SIGNAL PROCESSING LETTERS
, 1997
"... We cascade an existing image coder with carefully chosen error control coding, and thus produce a progressive image compression scheme whose performance on a noisy channel is significantly better than that of previously known techniques. The main idea is to trade off the available transmission rate ..."
Abstract

Cited by 125 (9 self)
 Add to MetaCart
We cascade an existing image coder with carefully chosen error control coding, and thus produce a progressive image compression scheme whose performance on a noisy channel is significantly better than that of previously known techniques. The main idea is to trade off the available transmission rate between source coding and channel coding in an efficient manner. This coding system is easy to implement and has acceptably low complexity. Furthermore, effectively no degradation due to channel noise can be detected; instead, the penalty paid for channel noise is a reduction in source coding resolution. Detailed numerical comparisons are given that can serve as benchmarks for comparisons with future encoding schemes. For example, for the 512 × 512 Lena image, at a transmission rate of 1 b/pixel, and for binary symmetric channels with bit error probabilities 10⁻³, 10⁻², and 10⁻¹, the proposed system outperforms previously reported results by at least 2.6, 2.8, and 8.9 dB, respectively.
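The rate trade-off this abstract describes, splitting a fixed transmission budget between source bits and channel-protection bits, can be illustrated with a toy grid search. The distortion and residual-error models below are invented for illustration and are not the paper's actual rate allocation method:

```python
import math

def best_split(total_rate, p_bit, steps=100):
    """Search for the source/channel rate split minimizing a toy
    expected distortion: source distortion decays with source rate,
    while spending less on channel coding leaves more residual errors."""
    best = (float("inf"), 0.0)
    for i in range(1, steps):
        r_src = total_rate * i / steps        # bits/pixel for the image coder
        r_chan = total_rate - r_src           # bits/pixel for error control
        residual = p_bit * math.exp(-r_chan)  # invented residual-error model
        distortion = math.exp(-r_src) + 10.0 * residual
        best = min(best, (distortion, r_src))
    return best[1]

split = best_split(total_rate=1.0, p_bit=0.1)
```

The qualitative behavior matches the abstract: a noisier channel pushes the optimum toward more protection bits, and the cost shows up as reduced source resolution rather than visible corruption.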