Results 1-8 of 8
Enterprise modeling, 1998
Cited by 146 (6 self)
Abstract: This article motivates the need for enterprise models and introduces the concepts of generic and deductive enterprise models. It reviews research to date on enterprise modeling and considers in detail the Toronto virtual enterprise effort at the University of Toronto.
Nonextensive Information Theoretic Kernels on Measures, 2009
Cited by 17 (5 self)
Abstract: Positive definite kernels on probability measures have recently been applied to classification problems involving text, images, and other types of structured data. Some of these kernels are related to classic information-theoretic quantities, such as (Shannon's) mutual information and the Jensen-Shannon (JS) divergence. Meanwhile, there have been recent advances in nonextensive generalizations of Shannon's information theory. This paper bridges these two trends by introducing nonextensive information-theoretic kernels on probability measures, based on new JS-type divergences. These new divergences result from extending the two building blocks of the classical JS divergence: convexity and Shannon's entropy. The notion of convexity is extended to the wider concept of q-convexity, for which we prove a Jensen q-inequality. Based on this inequality, we introduce Jensen-Tsallis (JT) q-differences, a nonextensive generalization of the JS divergence, and define a k-th order JT q-difference between stochastic processes. We then define a new family of nonextensive mutual information kernels, which allow weights to be assigned to their arguments, and which includes the Boolean, JS, and linear kernels as particular cases. Nonextensive string kernels are also defined that generalize the p-spectrum kernel. We illustrate the performance of ...
Nonextensive Entropic Kernels, 2008
Cited by 7 (2 self)
Abstract: Positive definite kernels on probability measures have recently been applied in structured data classification problems. Some of these kernels are related to classic information-theoretic quantities, such as mutual information and the Jensen-Shannon divergence. Meanwhile, driven by recent advances in Tsallis statistics, nonextensive generalizations of Shannon's information theory have been proposed. This paper bridges these two trends. We introduce the Jensen-Tsallis q-difference, a generalization of the Jensen-Shannon divergence. We then define a new family of nonextensive mutual information kernels, which allow weights to be assigned to their arguments, and which includes the Boolean, Jensen-Shannon, and linear kernels as particular cases. We illustrate the performance of these kernels on text categorization tasks.
Nonextensive Generalizations of the Jensen-Shannon Divergence, 2008
Cited by 1 (0 self)
Abstract: Convexity is a key concept in information theory, namely via the many implications of Jensen's inequality, such as the nonnegativity of the Kullback-Leibler divergence (KLD). Jensen's inequality also underlies the concept of the Jensen-Shannon divergence (JSD), which is a symmetrized and smoothed version of the KLD. This paper introduces new JSD-type divergences by extending its two building blocks: convexity and Shannon's entropy. In particular, a new concept of q-convexity is introduced and shown to satisfy a Jensen q-inequality. Based on this Jensen q-inequality, the Jensen-Tsallis q-difference is built, which is a nonextensive generalization of the JSD based on Tsallis entropies. Finally, the Jensen-Tsallis q-difference is characterized in terms of convexity and extrema.
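The "symmetrized and smoothed" relationship between the JSD and the KLD mentioned above is concrete: the JSD is the average KLD from each distribution to their midpoint. A short sketch (function names are illustrative):

```python
import math

def kld(p, q):
    """Kullback-Leibler divergence D(p || q) in nats, with 0*log(0) := 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence: average KLD of p and q to their midpoint.
    Symmetric, always finite, and bounded above by log(2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kld(p, m) + 0.5 * kld(q, m)
```

Unlike the KLD, the JSD stays finite even when the supports of p and q do not overlap, because the midpoint m is nonzero wherever either distribution is.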
Multimodality and Nonrigid Image Registration, 2012
Abstract: ... complies with the regulations of the University and meets the accepted standards with respect to originality and quality. Signed by the final examining committee: ...
In the field..., 2008
Abstract: Positive definite kernels on probability measures have recently been applied in classification of text, images, and other types of structured data. Some of these kernels are related to classic information-theoretic quantities, such as mutual information and the Jensen-Shannon (JS) divergence. Meanwhile, driven by recent advances in Tsallis statistics, nonextensive generalizations of Shannon's information theory have been proposed. This paper bridges these two trends. We introduce new JS-type divergences by extending the two building blocks of the JS divergence: convexity and Shannon's entropy. These divergences are then used to define new information-theoretic kernels on measures. In particular, we introduce a new concept of q-convexity, for which a Jensen q-inequality is proved. Based on this inequality, we introduce Jensen-Tsallis (JT) q-differences, a nonextensive generalization of the Jensen-Shannon divergence, and define a k-th order JT q-difference between stochastic processes. We then define a new family of nonextensive mutual information kernels, which allow weights to be assigned to their arguments, and which includes the Boolean, Jensen-Shannon, and linear kernels as particular cases. Nonextensive string kernels are also defined that subsume the p-spectrum kernel. We illustrate the performance of these kernels on text categorization tasks, in ...
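The p-spectrum kernel that these string kernels subsume is the inner product of length-p substring counts; a minimal sketch (the name `p_spectrum_kernel` is illustrative):

```python
from collections import Counter

def p_spectrum_kernel(s, t, p):
    """p-spectrum kernel: inner product of the two strings'
    length-p substring (p-gram) count vectors."""
    cs = Counter(s[i:i + p] for i in range(len(s) - p + 1))
    ct = Counter(t[i:i + p] for i in range(len(t) - p + 1))
    return sum(cs[u] * ct[u] for u in cs)
```

For example, with p = 2 the strings "abab" (counts: ab=2, ba=1) and "ab" (ab=1) share only "ab", giving a kernel value of 2.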
Bootstrapping a Spoken Language Identification System Using Unsupervised Integrated Sensing and Processing Decision Trees
Abstract: In many inference and learning tasks, collecting large amounts of labeled training data is time-consuming, expensive, and oftentimes impractical. Thus, being able to efficiently use small amounts of labeled data with an abundance of unlabeled data, the topic of semi-supervised learning (SSL) [1], has garnered much attention. In this paper, we look at the problem of choosing these small amounts of labeled data, the first step in a bootstrapping paradigm. Contrary to traditional active learning, where an initially trained model is employed to select the unlabeled data points that would be most informative if labeled, our selection has to be done in an unsupervised way, as we do not even have labeled data to train an initial model. We propose using unsupervised clustering algorithms, in particular integrated sensing and processing decision trees (ISPDTs) [2], to select small amounts of data to label and subsequently use in SSL (e.g., transductive SVMs). In a language identification task on the CallFriend and 2003 NIST Language Recognition Evaluation corpora [3], we demonstrate that the proposed method results in significantly improved performance over random selection of equivalently sized training data.
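The selection idea above, cluster the unlabeled data first, then label one representative per cluster, can be sketched with plain k-means standing in for the ISPDTs the paper actually uses. This substitution, the 1-D data, and the function names are all assumptions for illustration only:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on 1-D points; a stand-in for the unsupervised
    clusterer (ISPDT in the paper) that partitions the unlabeled data."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:
            clusters[min(range(k), key=lambda j: (x - centers[j]) ** 2)].append(x)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def select_to_label(points, k):
    """Unsupervised selection: pick the point nearest each cluster center
    as the small labeled seed set for subsequent SSL training."""
    centers = kmeans(points, k)
    return [min(points, key=lambda x: (x - c) ** 2) for c in centers]
```

On two well-separated groups this picks one central point from each, which is the contrast with random selection that the paper's experiments quantify.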