Results 1–10 of 110
Constructing Free Energy Approximations and Generalized Belief Propagation Algorithms
IEEE Transactions on Information Theory, 2005
Cited by 414 (12 self)
Abstract: Important inference problems in statistical physics, computer vision, error-correcting coding theory, and artificial intelligence can all be reformulated as the computation of marginal probabilities on factor graphs. The belief propagation (BP) algorithm is an efficient way to solve these problems; it is exact when the factor graph is a tree, but only approximate when the factor graph has cycles. We show that BP fixed points correspond to the stationary points of the Bethe approximation of the free energy for a factor graph. We explain how to obtain region-based free energy approximations that improve on the Bethe approximation, and corresponding generalized belief propagation (GBP) algorithms. We emphasize the conditions a free energy approximation must satisfy in order to be a “valid” or “maxent-normal” approximation. We describe the relationship between four different methods that can be used to generate valid approximations: the “Bethe method,” the “junction graph method,” the “cluster variation method,” and the “region graph method.” Finally, we explain how to tell whether a region-based approximation, and its corresponding GBP algorithm, is likely to be accurate, and describe empirical results showing that GBP can significantly outperform BP.
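The abstract above notes that BP is exact on tree-structured factor graphs. A minimal sketch of that claim, using sum-product message passing on a three-variable chain with invented factor values (not taken from the paper), checked against a brute-force marginal:

```python
# A minimal sketch of sum-product belief propagation on a 3-variable chain
# (a tree-structured factor graph), where BP is exact. The unary factors phi
# and pairwise factors psi below are arbitrary illustrative numbers.

import itertools

phi = [[1.0, 2.0], [0.5, 1.5], [2.0, 1.0]]   # unary factors phi[i][x]
psi01 = [[1.0, 0.3], [0.3, 1.0]]             # pairwise factor coupling x0 and x1
psi12 = [[0.7, 1.2], [1.2, 0.7]]             # pairwise factor coupling x1 and x2

def brute_force_marginal(i):
    """Marginal of variable i by summing the joint over all 2^3 assignments."""
    p = [0.0, 0.0]
    for x0, x1, x2 in itertools.product([0, 1], repeat=3):
        w = phi[0][x0] * phi[1][x1] * phi[2][x2] * psi01[x0][x1] * psi12[x1][x2]
        p[(x0, x1, x2)[i]] += w
    z = sum(p)
    return [v / z for v in p]

def bp_marginal_middle():
    """Belief at x1: its unary factor times the two incoming messages."""
    # Message from x0 into x1: sum_x0 phi0(x0) psi01(x0, x1)
    m_left = [sum(phi[0][x0] * psi01[x0][x1] for x0 in (0, 1)) for x1 in (0, 1)]
    # Message from x2 into x1: sum_x2 phi2(x2) psi12(x1, x2)
    m_right = [sum(phi[2][x2] * psi12[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]
    b = [phi[1][x1] * m_left[x1] * m_right[x1] for x1 in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

print(bp_marginal_middle())      # on a tree, matches the brute-force marginal
print(brute_force_marginal(1))
```

On a graph with cycles the same message updates would only approximate the marginals, which is the regime the region-based methods of the paper address.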
Large margin methods for structured and interdependent output variables
Journal of Machine Learning Research, 2005
Cited by 372 (11 self)
Abstract: Learning general functional dependencies between arbitrary input and output spaces is one of the key challenges in computational intelligence. While recent progress in machine learning has mainly focused on designing flexible and powerful input representations, this paper addresses the complementary issue of designing classification algorithms that can deal with more complex outputs, such as trees, sequences, or sets. More generally, we consider problems involving multiple dependent output variables, structured output spaces, and classification problems with class attributes. In order to accomplish this, we propose to appropriately generalize the well-known notion of a separation margin and derive a corresponding maximum-margin formulation. While this leads to a quadratic program with a potentially prohibitive, i.e., exponential, number of constraints, we present a cutting plane algorithm that solves the optimization problem in polynomial time for a large class of problems. The proposed method has important applications in areas such as computational biology, natural language processing, information retrieval/extraction, and optical character recognition. Experiments from various domains involving different types of output spaces emphasize the breadth and generality of our approach.
PEIR: the personal environmental impact report, as a platform for participatory sensing systems research
In Proc. ACM/USENIX Int. Conf. Mobile Systems, Applications, and Services (MobiSys), Krakow, 2009
Cited by 38 (2 self)
Abstract: PEIR, the Personal Environmental Impact Report, is a participatory sensing application that uses location data sampled from everyday mobile phones to calculate personalized estimates of environmental impact and exposure. It is an example of an important class of emerging mobile systems that combine the distributed processing capacity of the web with the personal reach of mobile technology. This paper documents and evaluates the running PEIR system, which includes mobile-handset-based GPS location data collection, and server-side processing stages such as HMM-based activity classification (to determine transportation mode); automatic location data segmentation into “trips”; lookup of traffic, weather, and other context data needed by the models; and environmental impact and exposure calculation using efficient implementations of established models. Additionally, we describe the user interface components of PEIR and present usage statistics from a two-month snapshot of system use. The paper also outlines new algorithmic components developed based on experience with the system and undergoing testing for integration into PEIR, including new map-matching and GSM-augmented activity classification techniques, and a selective hiding mechanism that generates believable proxy traces for times when a user does not want their real location revealed.
Which Codes Have Cycle-Free Tanner Graphs?
IEEE Transactions on Information Theory, 1999
Cited by 33 (1 self)
Abstract: If a linear block code C of length n has a Tanner graph without cycles, then maximum-likelihood soft-decision decoding of C can be achieved in time O(n²). However, we show that cycle-free Tanner graphs cannot support good codes. Specifically, let C be an (n, k, d) linear code of rate R = k/n that can be represented by a Tanner graph without cycles. We prove that if R ≥ 0.5 then d ≤ 2, while if R < 0.5 then C is obtained from a code of rate ≥ 0.5 and distance ≤ 2 by simply repeating certain symbols. In the latter case, we prove that d ≤ ⌊n/(k+1)⌋ + ⌊(n+1)/(k+1)⌋ < 2/R. Furthermore, we show by means of an explicit construction that this bound is tight for all values of n and k. We also prove that binary codes which have cycle-free Tanner graphs belong to the class of graph-theoretic codes known as cut-set codes of a graph. Finally, we discuss the asymptotics for Tanner graphs with cycles and present a number of open problems for future research.
The hardness of the closest vector problem with preprocessing
IEEE Transactions on Information Theory, 2001
Cited by 29 (6 self)
Abstract: We give a new, simple proof of the NP-hardness of the closest vector problem. In addition to being much simpler than all previously known proofs, the new proof yields interesting new results about the complexity of the closest vector problem with preprocessing. This is a variant of the closest vector problem in which the lattice is specified in advance and can be preprocessed for an arbitrarily long amount of time before the target vector is revealed. We show that there are lattices for which the closest vector problem remains hard, regardless of the amount of preprocessing.
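For readers unfamiliar with the problem itself: given a lattice basis B and a target vector t, the closest vector problem (CVP) asks for the lattice point Bx minimizing the distance to t. A hedged illustration on a made-up two-dimensional lattice, using brute-force search over a small coefficient range to stand in for the (NP-hard) general search:

```python
# Brute-force illustration of the closest vector problem (CVP): enumerate
# integer combinations of the basis vectors and keep the one nearest the
# target. The basis and target below are invented examples; real instances
# are high-dimensional and cannot be searched exhaustively like this.

import itertools
import math

def closest_vector(basis, target, coeff_range=range(-5, 6)):
    """Exhaustively search small integer combinations of the basis vectors."""
    best, best_dist = None, math.inf
    for coeffs in itertools.product(coeff_range, repeat=len(basis)):
        point = [sum(c * b[d] for c, b in zip(coeffs, basis))
                 for d in range(len(target))]
        dist = math.dist(point, target)
        if dist < best_dist:
            best, best_dist = point, dist
    return best, best_dist

basis = [[2, 0], [1, 3]]     # a 2-D lattice basis (illustrative)
target = [4.6, 2.2]
point, dist = closest_vector(basis, target)
print(point, round(dist, 3))
```

In the preprocessing variant studied by the paper, the basis is fixed in advance and may be analyzed for unbounded time before the target arrives; the result above shows this does not make the problem easy in general.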
Instrument-specific harmonic atoms for mid-level music representation
IEEE Trans. on Audio, Speech and Lang. Proc., 2008
Cited by 22 (3 self)
Abstract: Several studies have pointed out the need for accurate mid-level representations of music signals for information retrieval and signal processing purposes. In this paper, we propose a new mid-level representation based on the decomposition of a signal into a small number of sound atoms or molecules bearing explicit musical instrument labels. Each atom is a sum of windowed harmonic sinusoidal partials whose relative amplitudes are specific to one instrument, and each molecule consists of several atoms from the same instrument spanning successive time windows. We design efficient algorithms to extract the most prominent atoms or molecules and investigate several applications of this representation, including polyphonic instrument recognition and music visualization.
Index Terms: Mid-level representation, music information retrieval, music visualization, sparse decomposition.
A Tutorial On Hidden Markov Models
Signal Processing and Artificial Neural Networks Laboratory, Department of Electrical Engineering, Indian Institute of Technology Bombay, Powai, Bombay 400 076, India, 1996
Cited by 21 (0 self)
Abstract: In this tutorial we present an overview of (i) what HMMs are, (ii) the different problems associated with HMMs, (iii) the Viterbi algorithm for determining the optimal state sequence, (iv) algorithms associated with training HMMs, and (v) distances between HMMs.
1 Introduction: [1] Suppose a person has, say, three coins and is sitting inside a closed room tossing them in some sequence; what you are shown (on a display outside the room) is only the outcomes of the tossing, TTHTHHTT…, which will be called the observation sequence. You do not know the sequence in which the coins are tossed, nor do you know the bias of the various coins. To appreciate how much the outcome depends on the individual biases and the order of tossing, suppose you are given that the third coin is highly biased toward heads and that all coins are tossed with equal probability. Then we naturally expect a far greater number of heads than tails in the o…
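Item (iii) of the tutorial, the Viterbi algorithm, can be sketched compactly. The two-coin model below (a “fair” and a “biased” coin, with invented probabilities) is in the spirit of the tutorial's coin-tossing setup but is not taken from it:

```python
# A compact Viterbi implementation: find the most likely hidden state sequence
# given HMM parameters and an observation sequence. The two-coin model below
# is an invented example, not the tutorial's own parameters.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the max path probability and the best state path."""
    # v[s] = probability of the best path ending in state s at the current step.
    v = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    back = []                       # back[t][s] = best predecessor of s at step t
    for o in obs[1:]:
        prev = v
        v, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda r: prev[r] * trans_p[r][s])
            v[s] = prev[best_prev] * trans_p[best_prev][s] * emit_p[s][o]
            ptr[s] = best_prev
        back.append(ptr)
    # Trace the best final state back to the start.
    last = max(states, key=lambda s: v[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return v[last], path[::-1]

states = ("fair", "biased")
start_p = {"fair": 0.5, "biased": 0.5}
trans_p = {"fair": {"fair": 0.8, "biased": 0.2},
           "biased": {"fair": 0.2, "biased": 0.8}}
emit_p = {"fair": {"H": 0.5, "T": 0.5},
          "biased": {"H": 0.9, "T": 0.1}}   # biased toward heads

prob, path = viterbi("TTHHHH", states, start_p, trans_p, emit_p)
print(path)
```

The run of heads pulls the decoded path toward the head-biased coin, which is exactly the kind of inference the room-and-coins story motivates.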
Incorporating Syntactic Constraints in Recognizing Handwritten Sentences
In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-93), 1993
Cited by 16 (3 self)
Abstract: The output of handwritten word recognizers (HWRs) tends to be very noisy due to various factors. To compensate for this behaviour, several choices of the HWR must initially be considered. In the case of handwritten sentence/phrase recognition, linguistic constraints may be applied to improve the results of the HWR. This paper discusses two statistical methods of applying syntactic constraints to the output of an HWR on input consisting of sentences/phrases. Both methods are based on syntactic categories (tags) associated with words. The first is a purely statistical method; the second is a hybrid method which combines higher-level syntactic information (hypertags) with statistical information regarding transitions between hypertags.
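A toy sketch of the first (purely statistical) idea: rescore the recognizer's per-position candidate lists with tag-transition probabilities so that syntactically plausible sequences win. All words, tags, scores, and probabilities below are invented for illustration:

```python
# Toy rescoring of HWR candidate lists with tag-bigram probabilities.
# Every value here is invented; a real system would use trained tag-transition
# statistics and dynamic programming rather than full enumeration.

import itertools

# Each sentence position has HWR candidates as (word, tag, recognizer_score).
candidates = [
    [("the", "DET", 0.9), ("she", "PRON", 0.4)],
    [("cat", "NOUN", 0.6), ("eat", "VERB", 0.5)],
    [("sat", "VERB", 0.7), ("sad", "ADJ", 0.6)],
]

# Tag-bigram transition probabilities; unlisted pairs get a small floor.
trans = {("DET", "NOUN"): 0.6, ("NOUN", "VERB"): 0.5,
         ("PRON", "VERB"): 0.5, ("VERB", "VERB"): 0.05}

def best_sentence(candidates, trans, floor=0.01):
    """Pick the sequence maximizing recognizer scores times tag transitions."""
    best, best_score = None, 0.0
    for seq in itertools.product(*candidates):
        score = 1.0
        for _, _, s in seq:
            score *= s
        for (_, t1, _), (_, t2, _) in zip(seq, seq[1:]):
            score *= trans.get((t1, t2), floor)
        if score > best_score:
            best, best_score = [w for w, _, _ in seq], score
    return best

print(best_sentence(candidates, trans))
```

Even when a noisy candidate scores well acoustically, an implausible tag transition (e.g. DET followed by VERB) suppresses it, which is the effect the paper exploits.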
Convergence analysis and optimal scheduling for multiple concatenated codes
IEEE Transactions on Information Theory, 2005
Cited by 16 (5 self)
Abstract: An interesting practical consideration for the decoding of serial or parallel concatenated codes with more than two components is the determination of the lowest-complexity component decoder schedule that results in convergence. This correspondence presents an algorithm that finds such an optimal decoder schedule. A technique is also given for combining and projecting a series of three-dimensional extrinsic information transfer (EXIT) functions onto a single two-dimensional EXIT chart. This is a useful technique for visualizing the convergence threshold for multiple concatenated codes and provides a design tool for concatenated codes with more than two components.
Index Terms: EXIT chart, iterative decoding, multiple concatenated codes, optimal scheduling.
Autonomous terrain mapping and classification using hidden markov models
In Proc. IEEE International Conference on Robotics and Automation (ICRA), 2005
Cited by 16 (2 self)
Abstract: This paper presents a new approach for terrain mapping and classification using mobile robots with 2D laser range finders. Our algorithm generates 3D terrain maps and classifies navigable and non-navigable regions on those maps using Hidden Markov models. The maps generated by our approach can be used for path planning, navigation, local obstacle avoidance, detection of changes in the terrain, and object recognition. We propose a map segmentation algorithm based on Markov Random Fields, which removes small errors in the classification. In order to validate our algorithms, we present experimental results using two robotic platforms.
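The MRF-based segmentation step described above removes small classification errors by encouraging neighboring cells to agree. A much-simplified, ICM-like majority update on an invented binary grid (1 = navigable, 0 = non-navigable) illustrates the idea; it is not the paper's algorithm (in particular, the data term tying labels to observations is omitted):

```python
# Simplified MRF-style label smoothing: flip each cell toward the strict
# majority of its 4-connected neighbors. Invented grid; illustrative only.

def smooth_labels(grid, passes=2):
    rows, cols = len(grid), len(grid[0])
    for _ in range(passes):
        new = [row[:] for row in grid]
        for r in range(rows):
            for c in range(cols):
                # Collect 4-connected neighbor labels inside the grid.
                nbrs = [grid[r + dr][c + dc]
                        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= r + dr < rows and 0 <= c + dc < cols]
                ones = sum(nbrs)
                # Adopt the strict neighborhood majority; ties leave the cell alone.
                if ones > len(nbrs) / 2:
                    new[r][c] = 1
                elif ones < len(nbrs) / 2:
                    new[r][c] = 0
        grid = new
    return grid

noisy = [
    [1, 1, 1, 0],
    [1, 0, 1, 0],   # the lone 0 surrounded by 1s is a likely classification error
    [1, 1, 1, 0],
]
print(smooth_labels(noisy))
```

The isolated cell is flipped while the coherent non-navigable column on the right survives, which is the qualitative behavior the paper's segmentation step is after.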