Results 1–10 of 514
Genetic Network Inference: From Co-Expression Clustering to Reverse Engineering
, 2000
Abstract

Cited by 334 (0 self)
Motivation: Advances in molecular biological, analytical and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. We aim here to review data-mining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e. who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks, to continuous linear and non-linear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting and bioengineering.
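The clustering approaches surveyed above start from a distance measure between expression profiles. As a minimal sketch (toy random data, not from the paper), a Pearson-correlation distance in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy expression matrix: 6 genes x 10 arrays (hypothetical data).
expr = rng.normal(size=(6, 10))

# Pearson-correlation distance d(i, j) = 1 - r(i, j): co-expressed
# genes (r near 1) come out close, anti-correlated ones far apart.
r = np.corrcoef(expr)
dist = 1.0 - r
print(dist.shape)  # (6, 6)
```

The resulting matrix can be fed to any standard clustering algorithm; the choice of `1 - r` versus Euclidean distance is exactly the kind of design decision the review discusses.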
ADE4: a multivariate analysis and graphical display software
 Stat. Comput.
, 1997
Abstract

Cited by 181 (13 self)
… searching, zooming, selection of points, and display of data values on factor maps. The user interface is simple and homogeneous among all the programs; this contributes to making the use of ADE4 very easy for non-specialists in statistics, data analysis or computer science. Keywords: multivariate analysis, principal component analysis, correspondence analysis, instrumental variables, canonical correspondence analysis, partial least squares regression, co-inertia analysis, graphics, multivariate graphics, interactive graphics, Macintosh, HyperCard, Windows 95. 1. Introduction: ADE4 is a multivariate analysis and graphical display software for Apple Macintosh and Windows 95 microcomputers. It is made up of several stand-alone applications, called modules, that feature a wide range of multivariate analysis methods, from simple one-table analysis to three-way table analysis and two-table coupling methods. It also provides many possibilities …
Recognizing People by Their Gait: The Shape of Motion
, 1996
Abstract

Cited by 175 (8 self)
… Scale-independent scalar features of each flow, based on moments of the moving point weighted by u, v, or (u, v), characterize the spatial distribution of the flow. We then analyze the periodic structure of these sequences of scalars. The scalar sequences for an image sequence have the same fundamental period but differ in phase, which is a phase feature for each signal. Some phase features are consistent for one person and show significant statistical variation among persons. We use the phase feature vectors to recognize individuals by the shape of their motion. As few as three features out of the full set of twelve lead to excellent discrimination. Keywords: action recognition, gait recognition, motion features, optic flow, motion energy, spatial frequency, analysis
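A phase feature of the kind described can be sketched as the phase of the fundamental frequency bin of each scalar sequence; the frame rate, walking frequency, and signals below are all hypothetical, not the paper's data:

```python
import numpy as np

fs = 30.0                      # frames per second (assumed)
t = np.arange(0, 4, 1 / fs)    # 4 s of a toy periodic "flow" scalar
f0 = 1.5                       # gait frequency in Hz (hypothetical)
sig_a = np.sin(2 * np.pi * f0 * t)          # one scalar sequence
sig_b = np.sin(2 * np.pi * f0 * t + 0.7)    # same period, shifted phase

def phase_at_fundamental(sig):
    """Phase of the dominant (fundamental) frequency component."""
    spec = np.fft.rfft(sig - sig.mean())
    k = np.argmax(np.abs(spec))     # bin of the fundamental
    return np.angle(spec[k])

# Relative phase between two signals sharing the same period:
dphi = phase_at_fundamental(sig_b) - phase_at_fundamental(sig_a)
print(round(dphi, 2))  # 0.7 rad, the phase offset we injected
```

Because all signals share one fundamental period, only relative phases like `dphi` carry person-specific information, matching the feature construction described in the abstract.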
Texture Analysis of SAR Sea Ice Imagery using Gray-Level Co-occurrence Matrices
 IEEE Transactions on Geoscience and Remote Sensing
, 1999
Abstract

Cited by 96 (3 self)
This paper presents a preliminary study for mapping sea ice patterns (texture) with 100-m ERS-1 synthetic aperture radar (SAR) imagery. We used gray-level co-occurrence matrices (GLCM) to quantitatively evaluate textural parameters and representations and to determine which parameter values and representations are best for mapping sea ice texture. We conducted experiments on the quantization levels of the image and the displacement and orientation values of the GLCM by examining the effects textural descriptors such as entropy have in the representation of different sea ice textures. We showed that a complete gray-level representation of the image is not necessary for texture mapping, an eight-level quantization representation is undesirable for textural representation, and the displacement factor in texture measurements is more important than orientation. In addition, we developed three GLCM implementations and …
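A GLCM and its entropy descriptor can be sketched as follows; the quantization level, displacement, and random toy image are assumptions for illustration, not the paper's ERS-1 data:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for displacement (dx, dy),
    normalized to a joint probability table."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def entropy(p):
    """GLCM entropy, one of the classic Haralick texture descriptors."""
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

# Toy 8-level "image"; a real study would quantize SAR backscatter.
rng = np.random.default_rng(1)
img = rng.integers(0, 8, size=(32, 32))
p = glcm(img, levels=8, dx=1, dy=0)
print(round(entropy(p), 2))  # near the 6-bit maximum for random texture
```

Varying `levels`, `dx`, and `dy` reproduces exactly the quantization/displacement/orientation experiments the abstract describes.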
General Notions of Statistical Depth Function
, 2000
Abstract

Cited by 80 (28 self)
Statistical depth functions are being formulated ad hoc with increasing popularity in nonparametric inference for multivariate data. Here we introduce several general structures for depth functions, classify many existing examples as special cases, and establish results on the possession, or lack thereof, of four key properties desirable for depth functions in general. Roughly speaking, these properties may be described as: affine invariance, maximality at center, monotonicity relative to deepest point, and vanishing at infinity. This provides a more systematic basis for selection of a depth function. In particular, from these and other considerations it is found that the halfspace depth behaves very well overall in comparison with various competitors.
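The halfspace depth singled out above can be approximated by minimizing, over many directions, the fraction of data on one side of a hyperplane through the query point; the direction count and synthetic sample below are assumptions:

```python
import numpy as np

def halfspace_depth(point, data, n_dirs=500, seed=0):
    """Approximate Tukey halfspace depth: the minimum, over random
    unit directions u, of the fraction of data in the halfspace
    through `point` with outward normal u."""
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_dirs, data.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    proj = (data - point) @ u.T          # signed offsets, per direction
    frac = np.minimum((proj >= 0).mean(axis=0), (proj <= 0).mean(axis=0))
    return frac.min()

rng = np.random.default_rng(2)
data = rng.normal(size=(400, 2))
# Maximality at center / vanishing at infinity, empirically:
print(halfspace_depth(np.zeros(2), data)
      > halfspace_depth(np.array([4.0, 4.0]), data))  # True
```

The two prints illustrate two of the four key properties named in the abstract: the center of a symmetric cloud is deep, and depth decays toward zero far from the data.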
On clustering of fMRI time series
, 1997
Abstract

Cited by 69 (3 self)
Introduction. The spatiotemporal fMRI signal is a combination of several interacting components: the locally correlated hemodynamic response, the network of neuronal activations, and global components such as the cardiac cycle, breathing, etc. A priori this implies that the signal is correlated in time and space, and that these correlations have both short- and long-range components. Clustering is a classical non-parametric approach to exploratory analysis of data. By clustering we can group signals according to a given objective function. Clustering of waveforms has already been used in fMRI signal analysis, see e.g. (1). Clustering of stochastic data, however, is a hard optimization problem with many potential pitfalls. The "optimal" cluster configuration depends on the particular choice of clustering scheme (e.g. K-means, K-medians, hierarchical clustering; examples are legion (2)), but just as importantly on the choice of distance metric …
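One of the clustering schemes mentioned, K-means, can be sketched on toy time courses; the waveforms, noise level, and deterministic initialization below are assumptions chosen for reproducibility, not the paper's fMRI data:

```python
import numpy as np

def kmeans(X, init_idx, iters=20):
    """Plain Lloyd's K-means on time series treated as vectors.
    `init_idx` fixes the starting centers (deterministic demo)."""
    centers = X[np.array(init_idx)]
    for _ in range(iters):
        # Squared Euclidean distance of every series to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        centers = np.stack([X[labels == j].mean(0)
                            for j in range(len(centers))])
    return labels

# Toy "voxel" time courses: two groups with clearly distinct waveforms.
rng = np.random.default_rng(3)
t = np.linspace(0, 2 * np.pi, 40)
X = np.vstack([np.sin(t) + 0.2 * rng.normal(size=(30, 40)),
               np.cos(t) + 0.2 * rng.normal(size=(30, 40))])
labels = kmeans(X, init_idx=[0, 59])  # seed one center in each group
print(labels[:30].std() == 0 and labels[30:].std() == 0)  # True: clean split
```

On real, noisy fMRI data the outcome depends heavily on initialization and on the distance metric, which is precisely the pitfall the abstract warns about.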
On the concept of depth for functional data
 Journal of the American Statistical Association
, 2009
Abstract

Cited by 50 (8 self)
Functional data are nowadays the usual output of numerous scientific experiments. An important task in the analysis of functional data is to define robust statistics such as the median curve or trimmed mean. We provide a new notion of depth for functional data based on the graphic representation of the functions. Given a collection of curves, this idea allows us to measure the centrality of a function and it provides a natural center-outward order for the sample functions. We show that the finite-dimensional version of this concept of depth can also be interpreted as a new notion of depth for multivariate data that has the advantage of being computationally feasible for high-dimensional observations. Liu (1990) and Zuo and Serfling (2000) introduced in a general framework four key properties a depth should verify: invariance, maximality at the center, monotonicity with respect to the deepest point and vanishing at infinity. The depth introduced here verifies basically all these properties. Some other analytical and statistical properties, such as the uniform consistency of the sample depth, are also established. Simulation results show that the trimmed mean gives a better performance than the mean when we consider contaminated models. Several real data sets are used to illustrate the new concept of depth. Finally, a generalization of the Wilcoxon rank sum test is proposed to decide whether two groups of curves come from the same population.
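A common instance of graph-based functional depth is band depth of order 2; the sketch below follows that definition on three toy curves and is not claimed to match the paper's exact estimator:

```python
import numpy as np

def band_depth2(curves):
    """Sample band depth of order 2: for each curve, the fraction of
    curve pairs whose pointwise min/max band entirely contains it."""
    curves = np.asarray(curves, dtype=float)
    n = len(curves)
    depth = np.zeros(n)
    pairs = 0
    for j in range(n):
        for k in range(j + 1, n):
            lo = np.minimum(curves[j], curves[k])
            hi = np.maximum(curves[j], curves[k])
            depth += np.all((curves >= lo) & (curves <= hi), axis=1)
            pairs += 1
    return depth / pairs

# Three toy "curves" sampled on a common grid; the middle one is deepest.
t = np.linspace(0, 1, 50)
curves = np.stack([t, t + 0.5, t + 1.0])
d = band_depth2(curves)
print(d.argmax())  # 1: the central curve lies inside every band
```

Sorting curves by this depth yields the center-outward order the abstract describes, from which a median curve (deepest) and trimmed means (deepest fraction) follow directly.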
ℓp-norm multiple kernel learning
 Journal of Machine Learning Research
, 2011
Abstract

Cited by 44 (5 self)
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this ℓ1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures that generalize well, we extend MKL to arbitrary norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, that is, ℓp-norms with p ≥ 1. This interleaved optimization is much faster than the commonly used wrapper approaches, as demonstrated on several data sets. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse and ℓ∞-norm MKL in various scenarios. Importantly, empirical applications of ℓp-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies that surpass the state-of-the-art. Data sets, source code to reproduce the experiments, implementations of the algorithms, and …
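The ℓp-norm constraint on kernel weights can be illustrated by normalizing a nonnegative weight vector onto the ℓp unit sphere and forming the combined kernel; the toy kernels and weights below are assumptions, not the paper's optimization procedure:

```python
import numpy as np

def lp_normalize(theta, p):
    """Scale nonnegative kernel weights onto the ℓp unit sphere."""
    theta = np.asarray(theta, dtype=float)
    return theta / (np.sum(theta ** p) ** (1.0 / p))

# Three toy kernel matrices on 5 points (hypothetical features).
rng = np.random.default_rng(4)
feats = [rng.normal(size=(5, 3)) for _ in range(3)]
kernels = [F @ F.T for F in feats]  # linear kernels, PSD by construction

theta = lp_normalize([1.0, 2.0, 3.0], p=2)
K = sum(w * Km for w, Km in zip(theta, kernels))  # combined kernel
print(np.allclose(K, K.T), round(np.sum(theta ** 2), 2))  # True 1.0
```

With p = 1 the constraint drives weights toward sparsity; larger p (here p = 2) keeps all kernels in the mix, which is the non-sparse regime the abstract reports as more accurate in practice.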
Environmental Determinants of Lexical Processing Effort
, 2000
Abstract

Cited by 33 (3 self)
A central concern of psycholinguistic research is explaining the relative ease or difficulty involved in processing words. In this thesis, we explore the connection between lexical processing effort and measurable properties of the linguistic environment. Distributional information (information about a word's contexts of use) is easily extracted from large language corpora in the form of co-occurrence statistics. We claim that such simple distributional statistics can form the basis of a parsimonious model of lexical processing effort.
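Co-occurrence statistics of the kind described are straightforward to extract from a tokenized corpus; a minimal windowed-count sketch (toy sentence and hypothetical window size, standard library only):

```python
from collections import Counter

def cooccurrences(tokens, window=2):
    """Count word co-occurrences within a symmetric context window."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                counts[(w, tokens[j])] += 1
    return counts

toks = "the cat sat on the mat".split()
c = cooccurrences(toks, window=1)
print(c[("cat", "sat")], c[("on", "the")])  # 1 1
```

Normalizing such counts into conditional or joint probabilities gives exactly the distributional statistics the thesis proposes as predictors of processing effort.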