Results 1–10 of 104
Funnels, Pathways and the Energy Landscape of Protein Folding: A Synthesis
PROTEINS, 1994
Cited by 111 (7 self)
Abstract
The understanding, and even the description, of protein folding is impeded by the complexity of the process. Much of this complexity can be described and understood by taking a statistical approach to the energetics of protein conformation, that is, to the energy landscape. The statistical energy landscape approach explains when and why unique behaviors, such as specific folding pathways, occur in some proteins, and more generally explains the distinction between folding processes common to all sequences and those peculiar to individual sequences. This approach also gives new, quantitative insights into the interpretation of experiments and simulations of protein folding thermodynamics and kinetics. Specifically, the picture provides simple explanations for folding as a two-state, first-order phase transition, for the origin of metastable collapsed unfolded states, and for the curved Arrhenius plots observed in both laboratory experiments and discrete lattice simulations. The relation of these quantitative ideas to folding pathways, to uniexponential vs. multiexponential behavior in protein folding experiments, and to the effect of mutations on folding is also discussed. The success of energy landscape ideas in protein structure prediction is also described. The use of the energy landscape approach for analyzing data is illustrated with a quantitative analysis of some recent simulations and a qualitative analysis of experiments on the folding of three proteins. The work unifies several previously proposed ideas concerning the mechanism of protein folding and delimits the regions of validity of these ideas under different thermodynamic conditions.
Effect of neutral selection on the evolution of molecular species
Proc. R. Soc. London B, 1998
Cited by 43 (1 self)
Abstract
We introduce a new model of evolution on a fitness landscape possessing a tunable degree of neutrality. The model allows us to study the general properties of molecular species undergoing neutral evolution. We find that a number of phenomena seen in RNA sequence-structure maps are present also in our general model. Examples are the occurrence of “common” structures, which occupy a fraction of the genotype space that tends to unity as the length of the genotype increases, and the formation of percolating neutral networks, which cover the genotype space in such a way that a member of such a network can be found within a small radius of any point in the space. We also describe a number of new phenomena which appear to be general properties of neutrally evolving systems. In particular, we show that the maximum fitness attained during the adaptive walk of a population evolving on such a fitness landscape increases with increasing degree of neutrality, and is directly related to the fitness of the most fit percolating network.
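The kind of adaptive walk on a tunable-neutrality landscape described above can be illustrated with a minimal sketch. This is a toy model, not the authors' construction: fitness values are drawn from a small set of discrete levels, so ties between neighboring genotypes form neutral networks, and fewer levels mean a higher degree of neutrality. The walker accepts any mutation that does not decrease fitness, letting it drift along a neutral network between uphill moves.

```python
import random

def make_fitness(levels, seed=0):
    """Random landscape with tunable neutrality: fitness values are drawn
    from `levels` discrete values, so fewer levels mean more ties between
    neighboring genotypes, i.e. larger neutral networks."""
    rng = random.Random(seed)
    cache = {}  # lazily assigned so the landscape is static across revisits
    def fitness(g):
        if g not in cache:
            cache[g] = rng.randrange(levels) / levels
        return cache[g]
    return fitness

def adaptive_walk(length=12, levels=4, steps=500, seed=1):
    """Point mutations are accepted whenever fitness does not decrease,
    so the walker can drift along neutral networks between uphill moves."""
    rng = random.Random(seed)
    fit = make_fitness(levels, seed)
    g = tuple(rng.randint(0, 1) for _ in range(length))
    trace = [fit(g)]
    for _ in range(steps):
        i = rng.randrange(length)
        g2 = g[:i] + (1 - g[i],) + g[i + 1:]
        if fit(g2) >= fit(g):   # neutral or beneficial: accept
            g = g2
        trace.append(fit(g))
    return trace
```

Because only non-deleterious moves are accepted, the fitness trace is monotone non-decreasing; comparing traces for different `levels` values gives a crude feel for how neutrality affects the fitness reached.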
Rank-Two Relaxation Heuristics for Max-Cut and Other Binary Quadratic Programs
SIAM Journal on Optimization, 2000
Cited by 40 (3 self)
Abstract
The Goemans-Williamson randomized algorithm guarantees a high-quality approximation to the Max-Cut problem, but the cost associated with such an approximation can be excessively high for large-scale problems due to the need for solving an expensive semidefinite relaxation. In order to achieve better practical performance, we propose an alternative, rank-two relaxation and develop a specialized version of the Goemans-Williamson technique. The proposed approach leads to continuous optimization heuristics applicable to Max-Cut as well as other binary quadratic programs, for example the Max-Bisection problem. A computer code based on the rank-two relaxation heuristics is compared with two state-of-the-art semidefinite programming codes that implement the Goemans-Williamson randomized algorithm, as well as with a purely heuristic code for effectively solving a particular Max-Cut problem arising in physics. Computational results show that the proposed approach is fast and scalable and, more importantly, attains a higher approximation quality in practice than that of the Goemans-Williamson randomized algorithm. An extension to Max-Bisection is also discussed, as well as an important difference between the proposed approach and the Goemans-Williamson algorithm, namely that the new approach does not guarantee an upper bound on the Max-Cut optimal value. Key words: binary quadratic programs, Max-Cut and Max-Bisection, semidefinite relaxation, rank-two relaxation, continuous optimization heuristics. AMS subject classifications: 90C06, 90C27, 90C30.
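The rank-two idea admits a compact illustration. In this sketch (an assumption-laden toy, not the paper's code), each vertex is an angle on the unit circle, gradient ascent (up to a constant factor) optimizes the relaxed objective sum_ij w_ij (1 - cos(theta_i - theta_j)) / 2, and a random diameter of the circle rounds the angles to a ±1 cut, Goemans-Williamson style:

```python
import math
import random

def cut_value(w, s):
    """Total weight of edges crossing the cut, for signs s[i] in {-1, +1}."""
    return sum(wij for (i, j), wij in w.items() if s[i] != s[j])

def rank_two_maxcut(w, n, iters=2000, lr=0.05, seed=0):
    """Rank-two relaxation sketch: each vertex is an angle theta_i on the
    unit circle. Gradient ascent on sum w_ij * (1 - cos(theta_i - theta_j)),
    then randomized rounding by a random diameter of the circle."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(iters):
        grad = [0.0] * n
        for (i, j), wij in w.items():
            g = wij * math.sin(theta[i] - theta[j])
            grad[i] += g
            grad[j] -= g
        theta = [t + lr * g for t, g in zip(theta, grad)]
    best_v, best_s = -math.inf, None
    for _ in range(20):                       # try several random cut angles
        alpha = rng.uniform(0.0, math.pi)
        s = [1 if math.cos(t - alpha) >= 0.0 else -1 for t in theta]
        v = cut_value(w, s)
        if v > best_v:
            best_v, best_s = v, s
    return best_v, best_s
```

Note that, as the abstract points out for the real method, nothing here yields an upper bound on the optimal cut; the angles only ever produce feasible cuts to keep.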
An Emulator Network for SIMD Machine Interconnection Networks
Proc. 6th Annual Symposium on Computer Architecture, 1979
Cited by 39 (4 self)
Abstract
Fig. 0.1. [Proposed cover figure.] The largest connected component of a network of network scientists. This network was constructed based on the coauthorship of papers listed in two well-known review articles [13, 83] and a small number of additional papers that were added manually [86]. Each node is colored according to community membership, which was determined using a leading-eigenvector spectral method followed by Kernighan-Lin node-swapping steps [64, 86, 107]. To determine community placement, we used the Fruchterman-Reingold graph visualization [45], a force-directed layout method that is related to maximizing a quality function known as modularity [92]. To apply this method, we treated the communities as if they were themselves the nodes of a (significantly smaller) network with connections rescaled by intercommunity links. We then used the Kamada-Kawai spring-embedding graph visualization algorithm [62] to place the nodes of each individual community (ignoring intercommunity links) and then to rotate and flip the communities for optimal placement (including intercommunity links). We gratefully acknowledge Amanda Traud for preparing this figure.
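Of the layout machinery named in this caption, the Fruchterman-Reingold step is easy to sketch from first principles. The following minimal, unoptimized version is my own (function name and constants are not from the paper): every pair of nodes repels with force proportional to k²/d, each edge attracts with force proportional to d²/k, and a decaying "temperature" caps each move so the layout settles.

```python
import math
import random

def fruchterman_reingold(nodes, edges, iters=200, seed=0):
    """Minimal Fruchterman-Reingold force-directed layout sketch:
    all node pairs repel (~k^2/d), connected pairs attract (~d^2/k)."""
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for v in nodes}
    k = 1.0 / math.sqrt(len(nodes))       # ideal edge length
    t = 0.1                               # temperature caps displacement
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                   # repulsion between every pair
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += dx / d * f
                disp[u][1] += dy / d * f
        for u, v in edges:                # attraction along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            disp[u][0] -= dx / d * f
            disp[u][1] -= dy / d * f
            disp[v][0] += dx / d * f
            disp[v][1] += dy / d * f
        for v in nodes:                   # move, capped by temperature
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            pos[v][0] += dx / d * min(d, t)
            pos[v][1] += dy / d * min(d, t)
        t *= 0.98                         # cool down
    return pos
```

The caption's pipeline applies such a layout twice: once to a coarsened "network of communities", and once (via the related Kamada-Kawai method) within each community.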
Criticality and Parallelism in Combinatorial Optimization
Science, 1995
Cited by 33 (0 self)
Abstract
Local search methods constitute one of the most successful approaches to solving large-scale combinatorial optimization problems. A new result concerning the parallelization of such methods is presented. As parallelism is increased, optimization performance initially improves, but then abruptly degrades to no better than random search beyond a certain point. The existence of this transition is demonstrated for a family of generalized spin-glass models and the Traveling Salesman Problem. Finite-size scaling is used to characterize size-dependent effects near the transition, and analytical insight is obtained through a mean field approximation.
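The degradation with parallelism can be probed with a toy experiment (my construction, not the paper's models): every spin that would individually lower the energy flips simultaneously with probability tau, so small tau approximates sequential local search while large tau makes improving moves conflict with one another.

```python
import random

def energy(J, s):
    """Spin-glass energy E = -sum_{i<j} J_ij s_i s_j."""
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def parallel_local_search(J, n, tau, sweeps=200, seed=0):
    """Each sweep, every spin checks whether flipping it would lower the
    energy given the *current* state; the improving spins then flip
    simultaneously with probability tau (the degree of parallelism)."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    neighbors = {i: [] for i in range(n)}
    for (i, j), Jij in J.items():
        neighbors[i].append((j, Jij))
        neighbors[j].append((i, Jij))
    best = energy(J, s)
    for _ in range(sweeps):
        # local field h_i = sum_j J_ij s_j; flipping i changes E by 2*s_i*h_i
        h = [sum(Jij * s[j] for j, Jij in neighbors[i]) for i in range(n)]
        flips = [i for i in range(n)
                 if s[i] * h[i] < 0 and rng.random() < tau]
        for i in flips:      # simultaneous flips may conflict when tau is large
            s[i] = -s[i]
        best = min(best, energy(J, s))
    return best
```

At tau close to 1 the dynamics can cycle (on a ferromagnetic ring, the alternating state maps back to itself when every improving spin flips at once), which is a caricature of the transition the abstract describes.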
Modeling vocal interaction for text-independent classification of conversation type
Proc. SIGdial, 2007
Cited by 24 (10 self)
Abstract
Indexing, retrieval, and summarization in recordings of meetings have, to date, focused largely on the propositional content of what participants say. Although objectively relevant, such content may not be the sole or even the main aim of potential system users. Instead, users may be interested in information bearing on conversation flow. We explore the automatic detection of one example of such information, namely that of hotspots defined in terms of participant involvement. Our proposed system relies exclusively on low-level vocal activity features, and yields a classification accuracy of 84%, representing a 39% reduction of error relative to a baseline which selects the majority class.
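The two quoted figures jointly pin down the baseline. A quick arithmetic check (the helper name is mine) inverts the definition of relative error reduction to recover the majority-class accuracy implied by 84% accuracy and a 39% error reduction, which comes out just under 74%:

```python
def implied_baseline_accuracy(system_accuracy, relative_error_reduction):
    """Invert 'X% relative error reduction': the system's error rate is
    (1 - reduction) times the baseline's, so the baseline error follows."""
    system_error = 1.0 - system_accuracy
    baseline_error = system_error / (1.0 - relative_error_reduction)
    return 1.0 - baseline_error

# Figures quoted in the abstract: 84% accuracy, 39% relative error reduction.
baseline = implied_baseline_accuracy(0.84, 0.39)
```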
Information Geometry of Mean Field Approximation
1999
Cited by 24 (8 self)
Abstract
I present a general theory of mean field approximation, which is based on information geometry and is applicable not only to Boltzmann machines but also to wider classes of statistical models. Using a perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean field approximation of general orders is derived. It includes in a natural way the "naive" mean field approximation, and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.
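The "naive" first-order approximation mentioned here reduces to a simple self-consistency iteration. A minimal Ising-model sketch (my code, not the paper's information-geometric formulation; the TAP approach would additionally subtract the Onsager reaction term inside the tanh):

```python
import math

def naive_mean_field(J, h, beta, iters=200):
    """Naive (first-order) mean-field self-consistency for an Ising model:
    iterate m_i = tanh(beta * (sum_j J[i][j] * m_j + h[i])) to a fixed point."""
    n = len(h)
    m = [0.0] * n
    for _ in range(iters):
        m = [math.tanh(beta * (sum(J[i][j] * m[j] for j in range(n)) + h[i]))
             for i in range(n)]
    return m
```

The fixed-point magnetizations `m` then stand in for the intractable expectations of the spins, which is exactly the quantity a Boltzmann machine's learning rule needs.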
Analyzing Probabilistic Models in Hierarchical BOA on Traps and Spin Glasses
Genetic and Evolutionary Computation Conference (GECCO-2007), I, 2007
Cited by 21 (15 self)
Abstract
The hierarchical Bayesian optimization algorithm (hBOA) can solve nearly decomposable and hierarchical problems of bounded difficulty in a robust and scalable manner by building and sampling probabilistic models of promising solutions. This paper analyzes probabilistic models in hBOA on two common test problems: concatenated traps and 2D Ising spin glasses with periodic boundary conditions. We argue that although Bayesian networks with local structures can encode complex probability distributions, analyzing these models in hBOA is relatively straightforward, and the results of such analyses may provide practitioners with useful information about their problems. The results show that the probabilistic models in hBOA closely correspond to the structure of the underlying problem, that the models do not change significantly in subsequent iterations of BOA, and that creating adequate probabilistic models by hand is not straightforward even with complete knowledge of the optimization problem.
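The concatenated-traps test problem is simple to state concretely. Below is one standard parameterization of the order-5 deceptive trap (the paper's exact scaling may differ): the all-ones block is the global optimum, but fitness otherwise increases as ones are removed, misleading bitwise hill climbers toward all zeros.

```python
def trap5(block):
    """Order-5 deceptive trap: global optimum at all ones (fitness 5), but
    elsewhere fitness *rises* as the number of ones falls, which deceives
    any optimizer that treats bits independently."""
    u = sum(block)                  # number of ones in the 5-bit block
    return 5 if u == 5 else 4 - u

def concatenated_traps(bits):
    """Sum trap-5 over consecutive, non-overlapping 5-bit blocks."""
    assert len(bits) % 5 == 0
    return sum(trap5(bits[i:i + 5]) for i in range(0, len(bits), 5))
```

Because each block must be treated as a unit to avoid the deceptive gradient, a model-building optimizer like hBOA succeeds only if its learned Bayesian network links the bits within each block, which is exactly the model structure the paper inspects.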
A model of mass extinction
1997
Cited by 18 (3 self)
Abstract
A number of authors have in recent years proposed that the processes of macroevolution may give rise to self-organized critical phenomena which could have a significant effect on the dynamics of ecosystems. In particular, it has been suggested that mass extinction may arise through a purely biotic mechanism as the result of so-called coevolutionary avalanches. In this paper we first explore the empirical evidence which has been put forward in favor of this conclusion. The data center principally on the existence of power-law functional forms in the distribution of the sizes of extinction events and other quantities. We then propose a new mathematical model of mass extinction which does not rely on coevolutionary effects and in which extinction is caused entirely by the action of environmental stresses on species. In combination with a simple model of species adaptation, we show that this process can account for all the observed data without the need to invoke coevolution and critical processes. The model also makes some independent predictions, such as the existence of “aftershock” extinctions in the aftermath of large mass extinction events, which should in theory be testable against the fossil record.
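The stress-driven mechanism can be caricatured in a few lines. This is a toy with assumed ingredients (the stress distribution, replacement rule, and mutation rate are my choices, not the paper's calibration): each species has a fixed stress tolerance, an exogenous stress level is drawn each step, and every species whose tolerance falls below it goes extinct and is replaced. No species interacts with any other, so any structure in the extinction-size record comes from the environment alone.

```python
import random

def stress_extinction_model(n_species=1000, steps=2000, p_mutate=0.01, seed=0):
    """Toy environmental-stress extinction model: species with tolerance
    below the current stress level die and are replaced by new species with
    random tolerances; slow mutation keeps tolerances from ratcheting up."""
    rng = random.Random(seed)
    tol = [rng.random() for _ in range(n_species)]
    extinction_sizes = []
    for _ in range(steps):
        stress = rng.random() ** 4          # assumed skew: big stresses rare
        killed = [i for i, t in enumerate(tol) if t < stress]
        extinction_sizes.append(len(killed))
        for i in killed:                    # repopulate emptied niches
            tol[i] = rng.random()
        for i in range(n_species):          # slow drift of tolerances
            if rng.random() < p_mutate:
                tol[i] = rng.random()
    return extinction_sizes
```

Tabulating `extinction_sizes` is the natural next step: the interesting question, per the abstract, is whether the size distribution develops a heavy tail without any coevolutionary coupling.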