Results 1–10 of 23,511
Graphical models, exponential families, and variational inference
, 2008
Abstract

Cited by 800 (26 self)
The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances — including the key problems of computing marginals and modes of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
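The conjugate duality this abstract refers to has a compact standard statement: for an exponential family with sufficient statistics φ and log-partition (cumulant) function A, the cumulant function admits a variational representation over the set M of realizable mean parameters. A minimal sketch in the standard notation of this literature:

```latex
% Exponential family: p_\theta(x) = \exp\{\langle \theta, \phi(x) \rangle - A(\theta)\}
% Conjugate (Legendre) duality between the cumulant function A and the entropy:
A(\theta) = \sup_{\mu \in \mathcal{M}} \bigl\{ \langle \theta, \mu \rangle - A^*(\mu) \bigr\},
\qquad
A^*(\mu) = -H\bigl(p_{\theta(\mu)}\bigr) \quad \text{for } \mu \in \mathcal{M}^\circ.
% The supremum is attained at \mu = \nabla A(\theta), the mean parameters
% (i.e. the marginals), which is why marginal computation is a variational problem.
```

The approximate algorithms listed in the abstract arise by relaxing the constraint set M and/or approximating the dual function A*.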
Re-Tiling Polygonal Surfaces
 Computer Graphics
, 1992
Abstract

Cited by 448 (3 self)
This paper presents an automatic method of creating surface models at several levels of detail from an original polygonal description of a given object. Representing models at various levels of detail is important for achieving high frame rates in interactive graphics applications and also for speeding up the offline rendering of complex scenes. Unfortunately, generating these levels of detail is a time-consuming task usually left to a human modeler. This paper shows how a new set of vertices can be distributed over the surface of a model and connected to one another to create a re-tiling of a surface that is faithful to both the geometry and the topology of the original surface. The main contributions of this paper are: 1) a robust method of connecting together new vertices over a surface, 2) a way of using an estimate of surface curvature to distribute more new vertices at regions of higher curvature and 3) a method of smoothly interpolating between models that represent the same object at different levels of detail. The key notion in the re-tiling procedure is the creation of an intermediate model called the mutual tessellation of a surface that contains both the vertices from the original model and the new points that are to become vertices in the re-tiled surface. The new model is then created by removing each original vertex and locally retriangulating the surface in a way that matches the local connectedness of the initial surface. This technique for surface retessellation has been successfully applied to isosurface models derived from volume data, Connolly surface molecular models and a tessellation of a minimal surface of interest to mathematicians. CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation - Display algorithms
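The curvature-weighted point-distribution step (contribution 2 above) can be sketched as weighted sampling: candidate locations with higher estimated curvature receive proportionally more new vertices. This is a minimal stand-in sketch, not the paper's actual algorithm; the candidate names and curvature values below are illustrative assumptions.

```python
import random

def sample_vertices(candidates, curvature, k, seed=0):
    """Pick k candidate points with probability proportional to curvature.

    `candidates` are hypothetical surface locations; `curvature` holds a
    per-candidate curvature estimate (higher -> sampled more often).
    """
    rng = random.Random(seed)
    return rng.choices(candidates, weights=curvature, k=k)

# Illustrative data: flat regions get low weight, a sharp corner a high one.
candidates = ["flat_a", "flat_b", "ridge", "corner"]
curvature = [0.1, 0.1, 2.0, 4.0]
picked = sample_vertices(candidates, curvature, k=100)
```

After sampling, the paper's mutual-tessellation and retriangulation steps would connect these points into a valid mesh; the sketch covers only the distribution of points.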
Realistic Modeling for Facial Animation
, 1995
"... A major unsolved problem in computer graphics is the construction and animation of realistic human facial models. Traditionally, facial models have been built painstakingly by manual digitization and animated by ad hoc parametrically controlled facial mesh deformations or kinematic approximation of ..."
Abstract

Cited by 356 (14 self)
... suitable for animation. In this paper, we present a methodology for automating this challenging task. Starting with a structured facial mesh, we develop algorithms that automatically construct functional models of the heads of human subjects from laser-scanned range and reflectance data. These algorithms ...
The Hero with a Thousand Faces
, 1972
Abstract

Cited by 353 (0 self)
Bollingen Foundation, ... second
Network Centric Warfare: Developing and Leveraging Information Superiority
 Command and Control Research Program (CCRP), US DoD
, 2000
Abstract

Cited by 308 (5 self)
... the mission of improving DoD’s understanding of the national security implications of the Information Age. Focusing upon improving both the state of the art and the state of the practice of command and control, the CCRP helps DoD take full advantage of the opportunities afforded by emerging technologies. The CCRP pursues a broad program of research and analysis in information superiority, information operations, command and control theory, and associated operational concepts that enable us to leverage shared awareness to improve the effectiveness and efficiency of assigned missions. An important aspect of the CCRP program is its ability to serve as a bridge between the operational, technical, analytical, and educational communities. The CCRP provides leadership for the command and control research community by: ...
3D Sound for Virtual Reality and Multimedia
, 2000
Abstract

Cited by 282 (5 self)
This paper gives HRTF magnitude data in numerical form for 43 frequencies between 0.2-12 kHz, the average of 12 studies representing 100 different subjects. However, no phase data is included in the tables; group delay simulation would need to be included in order to account for ITD. In 3D sound applications intended for many users, we might want to use HRTFs that represent the common features of a number of individuals. But another approach might be to use the features of a person who has desirable HRTFs, based on some criteria. (One can sense a future 3D sound system where the pinnae of various famous musicians are simulated.) A set of HRTFs from a good localizer (discussed in Chapter 2) could be used if the criterion were localization performance. If the localization ability of the person is relatively accurate or more accurate than average, it might be reasonable to use these HRTF measurements for other individuals. The Convolvotron 3D audio system (Wenzel, Wightman, and Foster, 1988) has used such sets, particularly because elevation accuracy is affected negatively when listening through a bad localizer's ears (see Wenzel et al., 1988). It is best when any single non-individualized HRTF set is psychoacoustically validated using a statistical sample of the intended user population, as shown in Chapter 2. Otherwise, the use of one HRTF set over another is a purely subjective judgment based on criteria other than localization performance. The technique used by Wightman and Kistler (1989a) exemplifies a laboratory-based HRTF measurement procedure where accuracy and replicability of results were deemed crucial. A comparison of their techniques with those described in Blauert (1983), Shaw (1974), Mehrgardt and Mellert (1977), Middlebrooks, Makous, and Gree...
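The HRTF-based rendering a system like the Convolvotron performs reduces, per ear, to convolving the mono source with that ear's head-related impulse response (HRIR). A minimal pure-Python sketch; the two-tap HRIRs in the usage example are illustrative stand-ins, not measured data, and a real renderer would also apply the ITD/group delay that the magnitude tables above omit.

```python
def convolve(signal, ir):
    """Direct-form FIR convolution of a signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def render_binaural(mono, hrir_left, hrir_right):
    """Return (left, right) ear signals for a mono source at one position."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# Illustrative: a unit impulse through stand-in left/right HRIRs.
left, right = render_binaural([1.0, 0.0], [0.5], [0.25, 0.25])
```

Switching between individualized and generalized HRTFs then amounts to swapping the HRIR tables passed in, which is exactly the design question the passage above weighs.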
Geometric Compression through Topological Surgery
 ACM TRANSACTIONS ON GRAPHICS
, 1998
Abstract

Cited by 280 (28 self)
... this article introduces a new compressed representation for complex triangulated models and simple, yet efficient, compression and decompression algorithms. In this scheme, vertex positions are quantized within the desired accuracy, a vertex spanning tree is used to predict the position of each vertex from 2, 3, or 4 of its ancestors in the tree, and the correction vectors are entropy encoded. Properties, such as normals, colors, and texture coordinates, are compressed in a similar manner. The connectivity is encoded with no loss of information to an average of less than two bits per triangle. The vertex spanning tree and a small set of jump edges are used to split the model into a simple polygon. A triangle spanning tree and a sequence of marching bits are used to encode the triangulation of the polygon. Our approach improves on Michael Deering's pioneering results by exploiting the geometric coherence of several ancestors in the vertex spanning tree, preserving the connectivity with no loss of information, avoiding vertex repetitions, and using about three times fewer bits for the connectivity. However, since decompression requires random access to all vertices, this method must be modified for hardware rendering with limited onboard memory. Finally, we demonstrate implementation results for a variety of VRML models with up to two orders of magnitude compression
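The quantize-predict-correct pipeline described above can be sketched under simplifying assumptions: positions are uniformly quantized, each vertex is predicted as the rounded average of its spanning-tree ancestors (a stand-in for the paper's weighted linear predictor), and only the small correction vector would then be entropy coded. This is an illustrative sketch, not the article's implementation.

```python
def quantize(pos, step=0.01):
    """Uniformly quantize a 3D position to integer grid coordinates."""
    return tuple(round(c / step) for c in pos)

def predict(ancestors):
    """Predict a vertex as the rounded componentwise mean of its
    quantized spanning-tree ancestors (simplified predictor)."""
    n = len(ancestors)
    return tuple(round(sum(a[i] for a in ancestors) / n) for i in range(3))

def correction(vertex_q, ancestors_q):
    """Correction vector: the small residual an entropy coder would store."""
    p = predict(ancestors_q)
    return tuple(vertex_q[i] - p[i] for i in range(3))
```

Because the prediction depends on ancestors in the vertex spanning tree, the decoder must reconstruct vertices in tree order, which is the random-access limitation the abstract notes for hardware rendering.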