Results 1–10 of 39
Heterogeneous Learning in the Doppelgänger User Modeling System
 Interaction
, 1995
Cited by 76 (0 self)
Doppelgänger is a generalized user modeling system that gathers data about users, performs inferences upon the data, and makes the resulting information available to applications. Doppelgänger's learning is called heterogeneous for two reasons: first, multiple learning techniques are used to interpret the data, and second, the learning techniques must often grapple with disparate data types. These computations take place at geographically distributed sites, and make use of portable user models carried by individuals. This paper concentrates on Doppelgänger's learning techniques and their implementation in an application-independent, sensor-independent environment.
Key words: user model, machine learning, server-client architecture, multivariate statistical analysis, Markov models, Beta distribution, linear prediction.
1 Introduction: When users interact with a computer, they provide a great deal of information about themselves. Even when they are not physically at a computer console, ...
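Among the techniques listed in the keywords, the Beta distribution is the textbook conjugate model for a binary user attribute. A minimal sketch of such an update (the function names and the uniform prior are illustrative, not Doppelgänger's actual API):

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate Beta posterior update for a Bernoulli user preference:
    each observed accept/reject simply increments the matching count."""
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Posterior mean estimate of the preference probability."""
    return alpha / (alpha + beta)

# Start from a uniform Beta(1, 1) prior; observe 7 accepts and 3 rejects.
a, b = beta_update(1, 1, 7, 3)
print(beta_mean(a, b))  # 8 / 12 ≈ 0.667
```

The appeal for user modeling is that the update is incremental and cheap, which suits a server that continuously ingests observations.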
Linear Cryptanalysis Using Multiple Approximations
 Advances in Cryptology  CRYPTO '94 Proceedings
, 1994
Cited by 50 (2 self)
We present a technique which aids in the linear cryptanalysis of a block cipher and allows for a reduction in the amount of data required for a successful attack. We note the limits of this extension when applied to DES, but illustrate that it is generally applicable and might be exceptionally successful when applied to other block ciphers. This forces us to reconsider some of the initial attempts to quantify the resistance of block ciphers to linear cryptanalysis, and by taking account of this new technique we cover several issues which have not yet been considered.
Conversational Scene Analysis
, 2002
Cited by 48 (0 self)
In this thesis, we develop computational tools for analyzing conversations based on nonverbal auditory cues. We develop a notion of conversations as being made up of a variety of scenes: in each scene, either one speaker is holding the floor or both are speaking at equal levels. Our goal is to find conversations, find the scenes within them, determine what is happening inside the scenes, and then use the scene structure to characterize entire conversations. We begin by ...
Despain, "Exact and Approximate Methods for Calculating Signal and Transition Probabilities in FSMs"
 Proceedings of the 31st Design Automation Conference
, 1994
Cited by 48 (6 self)
In this paper, we consider the problem of calculating the signal and transition probabilities of the internal nodes of the combinational logic part of a finite state machine (FSM). Given the state transition graph (STG) of the FSM, we first calculate the state probabilities by iteratively solving the Chapman-Kolmogorov equations. Using these probabilities, we then calculate the exact signal and transition probabilities by an implicit state enumeration procedure. For large sequential machines where the STG cannot be explicitly built, we unroll the next-state logic k times and estimate the signal probability of the state bits using an OBDD-based approach. The basic computation step consists of solving a system of nonlinear ...
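The first step described, iteratively solving the Chapman-Kolmogorov equations for the state probabilities, amounts to a fixed-point (power) iteration on the STG's transition matrix. A minimal NumPy sketch, with illustrative names rather than the paper's implementation:

```python
import numpy as np

def stationary_probs(P, tol=1e-12, max_iter=10_000):
    """Iterate pi <- pi @ P until convergence: the Chapman-Kolmogorov
    fixed point gives the stationary state probabilities of the STG.
    P[i, j] is the probability of moving from state i to state j."""
    n = P.shape[0]
    pi = np.full(n, 1.0 / n)          # start from the uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            return nxt
        pi = nxt
    return pi

# 2-state machine: state 0 moves to 1 with prob 0.9; state 1 to 0 with prob 0.5.
P = np.array([[0.1, 0.9],
              [0.5, 0.5]])
print(stationary_probs(P))  # approximately [0.357, 0.643]
```

Once the state probabilities are known, the signal probabilities of internal nodes follow by propagating them through the combinational logic, which is where the paper's exact and OBDD-based approximate procedures differ.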
Statistical models of video structure for content analysis and characterization
 IEEE Trans. on Image Processing
, 2000
Cited by 44 (1 self)
Content structure plays an important role in the understanding of video. In this paper, we argue that knowledge about structure can be used both as a means to improve the performance of content analysis and to extract features that convey semantic information about the content. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models with two practical applications. First, we develop a Bayesian formulation for the shot segmentation problem that is shown to extend the standard thresholding model in an adaptive and intuitive way, leading to improved segmentation accuracy. Second, by applying the transformation into the shot duration/activity feature space to a database of movie clips, we also illustrate how the Bayesian model captures semantic properties of the content. We suggest ways in which these properties can be used as a basis for intuitive content-based access to movie libraries.
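The Bayesian shot segmentation mentioned here replaces a fixed frame-dissimilarity threshold with one that adapts to the elapsed shot time. A rough sketch of that idea using a Weibull duration prior, where the parameters and the exact offset are made up for illustration and the paper's actual priors and decision rule differ in detail:

```python
import math

def weibull_hazard(t, k=2.0, scale=300.0):
    """Hazard of a Weibull shot-duration prior: roughly, how likely a shot
    that has already lasted t frames is to end now (k > 1 gives a hazard
    that grows with t). Defined for t > 0."""
    return (k / scale) * (t / scale) ** (k - 1)

def adaptive_threshold(base, t, k=2.0, scale=300.0):
    """Sketch of the Bayesian idea: the dissimilarity evidence needed to
    declare a shot boundary shrinks as the duration prior makes a boundary
    more likely. Normalized so the threshold equals `base` at t == scale."""
    return base - math.log(weibull_hazard(t, k, scale) / weibull_hazard(scale, k, scale))

# Early in a shot the threshold is high (cuts are unlikely);
# deep into a shot it drops below the fixed baseline.
print(adaptive_threshold(10.0, 100.0), adaptive_threshold(10.0, 600.0))
```

The qualitative behavior, a threshold that relaxes as a shot runs long, is what makes the formulation "adaptive" relative to plain thresholding.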
A Natural Law of Succession
, 1995
Cited by 35 (3 self)
Consider the following problem. You are given an alphabet of k distinct symbols and are told that the i-th symbol occurred exactly n_i times in the past. On the basis of this information alone, you must now estimate the conditional probability that the next symbol will be i. In this report, we present a new solution to this fundamental problem in statistics and demonstrate that our solution outperforms standard approaches, both in theory and in practice.
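For context, the standard approach this report measures against is Laplace's classical law of succession, P(next = i) = (n_i + 1) / (N + k). The report's "natural law" is a different estimator, so the sketch below only illustrates the problem setup and the baseline:

```python
def laplace_estimate(counts, i):
    """Laplace's law of succession: add one pseudo-count to each of the
    k symbols, then normalize. This is the classical baseline, not the
    report's proposed 'natural law' estimator."""
    k = len(counts)
    total = sum(counts)
    return (counts[i] + 1) / (total + k)

counts = [3, 1, 0]  # symbol 0 seen 3 times, symbol 1 once, symbol 2 never
print(laplace_estimate(counts, 2))  # (0 + 1) / (4 + 3) = 1/7
```

Note that even the never-seen symbol gets nonzero probability, and the k estimates sum to one, which is exactly the kind of behavior a law of succession must balance against overweighting unseen events.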
A dynamic location management scheme for next generation multitier PCS systems
 IEEE TRANS. WIRELESS COMMUN
, 2002
Cited by 35 (4 self)
Global wireless networks enable mobile users to communicate regardless of their locations. One of the most important issues is location management in a highly dynamic environment, because mobile users may roam between different wireless systems, network operators, and geographical regions. In this paper, a location-tracking mechanism is introduced that consists of intersystem location updates and intersystem paging. Intersystem update is implemented using the concept of a boundary location area, which is determined by a dynamic location update policy in which the velocity and the quality of service are taken into account on a per-user basis. Intersystem paging is based on the concept of a boundary location register, which is used to maintain the records of mobile users crossing the boundary between systems. This mechanism not only reduces location-tracking costs, but also significantly decreases call-loss rates and average paging delays. The performance evaluation of the proposed schemes demonstrates their effectiveness in multitier personal communication systems.
On the Efficient Evaluation of Probabilistic Similarity Functions for Image Retrieval
 IEEE Trans. Inf. Theory
, 2004
Cited by 28 (1 self)
Probabilistic approaches are a promising solution to the image retrieval problem that, when compared to standard retrieval methods, can lead to a significant gain in retrieval accuracy. However, this occurs at the cost of a significant increase in computational complexity. In fact, closed-form solutions for probabilistic retrieval are currently available only for simple probabilistic models such as the Gaussian or the histogram. We analyze the case of mixture densities and exploit the asymptotic equivalence between likelihood and Kullback-Leibler (KL) divergence to derive solutions for these models. In particular, 1) we show that the divergence can be computed exactly for vector quantizers (VQs), and 2) has an approximate solution for Gauss mixtures (GMs) that, in high-dimensional feature spaces, introduces no significant degradation of the resulting similarity judgments. In both cases, the new solutions are closed-form and have computational complexity equivalent to that of standard retrieval approaches.
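As a concrete instance of the "simple models" for which closed-form solutions exist, the KL divergence between two multivariate Gaussians has a well-known closed form. A sketch with illustrative names, not the paper's code:

```python
import numpy as np

def kl_gaussians(mu0, S0, mu1, S1):
    """Closed-form KL divergence D(N(mu0, S0) || N(mu1, S1)) between two
    d-dimensional Gaussians: trace term + Mahalanobis term - d + log-det ratio."""
    d = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Identical Gaussians have zero divergence; a unit mean shift in 1-D gives 0.5.
print(kl_gaussians(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2)))  # 0.0
print(kl_gaussians(np.zeros(1), np.eye(1), np.ones(1), np.eye(1)))   # 0.5
```

The paper's contribution is extending this kind of tractability to mixture models, where no exact closed form for the KL divergence exists.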
Multiscale Principal Components Analysis for Image Local Orientation Estimation
 Proceedings of the 36th Asilomar Conference on Signals, Systems and Computers
, 2002
Cited by 22 (11 self)
This paper presents an image local orientation estimation method based on a combination of two well-known techniques: principal component analysis (PCA) and the multiscale pyramid decomposition. PCA is applied to find the Maximum Likelihood (ML) estimate of the local orientation. The proposed technique is shown to enjoy excellent robustness against noise. We present both simulated and real image examples to demonstrate the proposed technique.
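The PCA step can be sketched as an eigen-analysis of the patch's gradient covariance (a structure-tensor-style estimate). The names and the single-scale simplification are ours; the paper's multiscale pyramid is omitted here:

```python
import numpy as np

def local_orientation(patch):
    """Estimate the dominant local orientation of an image patch via PCA of
    its gradient vectors: the eigenvector of the gradient covariance with
    the largest eigenvalue points along the dominant gradient direction,
    i.e. normal to the local stripes/edges. Returns degrees in [0, 180)."""
    gy, gx = np.gradient(patch.astype(float))   # np.gradient: axis 0 (y), axis 1 (x)
    G = np.stack([gx.ravel(), gy.ravel()], axis=1)
    cov = G.T @ G                               # 2x2 gradient covariance
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    nx, ny = vecs[:, -1]                        # principal eigenvector
    return np.degrees(np.arctan2(ny, nx)) % 180.0

# Vertical stripes: intensity varies along x only, so the gradient (and the
# estimated normal direction) is close to 0 degrees.
stripes = np.tile(np.sin(np.linspace(0, 6 * np.pi, 32)), (32, 1))
print(local_orientation(stripes))
```

Averaging such estimates across pyramid levels is what gives the paper's method its multiscale robustness to noise.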
Doppelgänger Goes To School: Machine Learning for User Modeling
, 1993
Cited by 20 (0 self)
One characteristic of intelligence is adaptation. Computers should adapt to who is using them, and how, why, when, and where. The computer's representation of the user is called a user model; user modeling is concerned with developing techniques for representing the user and acting upon this information. The Doppelgänger system consists of a set of techniques for gathering, maintaining, and acting upon information about individuals, and illustrates my approach to user modeling. Work on Doppelgänger has been heavily influenced by the field of machine learning. This thesis has a twofold purpose: first, to set forth guidelines for the integration of machine learning techniques into user modeling, and second, to identify particular user modeling tasks for which machine learning is useful.