Episodes
Network inference methods based upon sparse Gaussian Graphical Models (GGM) have recently emerged as a promising exploratory tool in genomics. They give a sound representation of direct relationships between genes and come with sparse inference strategies well suited to the high-dimensional setting. They are also versatile enough to include prior structural knowledge to drive the inference. Still, GGMs are now in need of a second breath after showing some limitations: among...
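As a concrete illustration of this kind of sparse inference, here is a minimal sketch using the graphical lasso (an l1-penalized precision-matrix estimator) as implemented in scikit-learn; the expression matrix is simulated and the penalty level is hand-picked, so this is only a stand-in for the approaches discussed in the talk, not the talk's own procedure.

import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_genes = 60, 20
X = rng.standard_normal((n_samples, n_genes))   # stand-in for an expression matrix

model = GraphicalLasso(alpha=0.2)   # alpha controls the sparsity of the precision matrix
model.fit(X)

# Non-zero off-diagonal entries of the estimated precision matrix are the
# inferred direct (conditional) dependencies between genes.
precision = model.precision_
edges = [(i, j) for i in range(n_genes) for j in range(i + 1, n_genes)
         if abs(precision[i, j]) > 1e-8]
print(len(edges), "edges inferred")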
Published 05/16/13
When an unbiased estimator of the likelihood is used within a Markov chain Monte Carlo (MCMC) scheme, it is necessary to trade off the number of samples used against the computing time. Many samples for the estimator will result in an MCMC scheme with properties similar to the case where the likelihood is known exactly, but will be expensive. Few samples for the construction of the estimator will result in faster estimation, but at the expense of slower mixing of the Markov chain. We explore...
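A minimal sketch of this trade-off in a pseudo-marginal Metropolis-Hastings chain, where an importance-sampling estimate of the likelihood replaces the exact one; the toy latent-Gaussian model, the step size and the number of Monte Carlo samples n_mc are illustrative assumptions, and n_mc is exactly the knob whose cost-versus-mixing trade-off is discussed above.

import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(1.5, np.sqrt(2.0), size=50)   # toy data: y_i = z_i + noise, z_i ~ N(theta, 1)

def loglik_hat(theta, n_mc):
    # Unbiased importance-sampling estimate of p(y | theta), averaging over
    # n_mc latent draws z ~ N(theta, 1) for each observation.
    z = rng.normal(theta, 1.0, size=(n_mc, y.size))
    dens = np.exp(-0.5 * (y - z) ** 2) / np.sqrt(2 * np.pi)   # p(y_i | z)
    return np.sum(np.log(dens.mean(axis=0)))

def pseudo_marginal_mh(n_iter=5000, n_mc=10, step=0.3):
    theta, ll = 0.0, loglik_hat(0.0, n_mc)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ll_prop = loglik_hat(prop, n_mc)           # fresh estimate at the proposal
        if np.log(rng.uniform()) < ll_prop - ll:   # flat prior, symmetric proposal
            theta, ll = prop, ll_prop              # keep the estimate attached to the state
        chain.append(theta)
    return np.array(chain)

chain = pseudo_marginal_mh()
print("posterior mean estimate:", chain[1000:].mean())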
Published 05/16/13
Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis has raised serious methodological challenges, mainly relating to the size and complex structure of the data. Considerable experience has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study (GWAS) era, and more recently in transcriptomics and metabolomics. Building upon this wide literature, we present methods used to analyze...
Published 05/16/13
The main goal of this work is to tackle the problem of dimension reduction for high-dimensional supervised classification. The motivation is to handle gene expression data. The proposed method works in two steps. First, one eliminates redundancy using clustering of variables, based on the R package ClustOfVar. This first step is based only on the explanatory variables (the genes). Second, the synthetic variables (summarizing the clusters obtained in the first step) are used to construct a classifier...
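ClustOfVar is an R package, so the sketch below only mimics the same two-step logic in Python: scikit-learn's FeatureAgglomeration stands in for the variable clustering that produces the synthetic variables, and a logistic regression plays the role of the classifier. The data and settings are simulated assumptions, not the authors' pipeline.

import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_genes = 100, 500
X = rng.standard_normal((n_samples, n_genes))   # stand-in for expression data
y = (X[:, :5].sum(axis=1) + rng.standard_normal(n_samples) > 0).astype(int)

pipe = make_pipeline(
    FeatureAgglomeration(n_clusters=20),   # step 1: cluster variables, one synthetic variable per cluster
    LogisticRegression(max_iter=1000),     # step 2: classifier built on the synthetic variables
)
print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())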
Published 05/16/13
Principal component analysis (PCA) is a well-established method commonly used to explore and visualize data. A classical PCA model is the fixed-effect model, where data are generated as a fixed low-rank structure corrupted by noise. Under this model, PCA does not provide the best recovery of the underlying signal in terms of mean squared error. Following the same principle as in ridge regression, we propose a regularized version of PCA that boils down to thresholding the singular values. Each...
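A minimal sketch of the underlying idea, singular-value shrinkage: take the SVD of the data, shrink the singular values, and reconstruct a denoised low-rank signal. The soft-threshold rule and its level below are illustrative assumptions; the talk's regularized PCA derives its own shrinkage of each singular value.

import numpy as np

rng = np.random.default_rng(0)
n, p, rank = 100, 50, 3
signal = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, p))   # fixed low-rank structure
X = signal + 0.5 * rng.standard_normal((n, p))                             # structure corrupted by noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)
tau = 9.0                                 # threshold level, hand-picked near the noise level here
s_shrunk = np.maximum(s - tau, 0.0)       # soft-threshold the singular values
X_denoised = (U * s_shrunk) @ Vt

print("recovery error, plain rank-3 PCA:",
      np.linalg.norm(signal - (U[:, :rank] * s[:rank]) @ Vt[:rank]))
print("recovery error, shrunken SVD:   ", np.linalg.norm(signal - X_denoised))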
Published 05/16/13
The Online Expectation-Maximization (EM) algorithm is a generic method for estimating the parameters of latent data models incrementally from large volumes of data. The general principle of the approach is to use a stochastic approximation scheme, in the domain of sufficient statistics, as a proxy for a limiting, deterministic, population version of the EM recursion. In this talk, I will briefly review the convergence properties of the method and discuss some applications and...
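A minimal sketch of the recursion for a toy two-component Gaussian mixture with known variances and weights: each new observation updates running sufficient statistics by stochastic approximation, and the M-step maps those statistics back to the component means. The model, step-size schedule and initialization are assumptions made for illustration only.

import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([-2.0, 3.0])
n_obs = 20000
data = np.where(rng.uniform(size=n_obs) < 0.5,
                rng.normal(true_means[0], 1.0, n_obs),
                rng.normal(true_means[1], 1.0, n_obs))

mu = np.array([-1.0, 1.0])        # initial component means
s_resp = np.array([0.5, 0.5])     # running statistic: E[1{z = k}]
s_wsum = s_resp * mu              # running statistic: E[1{z = k} * y]

for n, y in enumerate(data, start=1):
    # E-step for a single observation: responsibilities under the current means
    logw = -0.5 * (y - mu) ** 2
    resp = np.exp(logw - logw.max())
    resp /= resp.sum()
    # Stochastic-approximation update of the sufficient statistics
    gamma = 1.0 / (n + 10) ** 0.6
    s_resp += gamma * (resp - s_resp)
    s_wsum += gamma * (resp * y - s_wsum)
    # M-step: map the running statistics back to the means
    mu = s_wsum / s_resp

print("estimated means:", np.sort(mu), "true means:", true_means)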
Published 05/16/13
The exponential random graph is arguably the most popular model for the statistical analysis of network data. However, despite its widespread use, it is very complicated to handle from a statistical perspective, mainly because the likelihood function is intractable for all but trivially small networks. This talk will outline some recent work in this area to overcome this intractability. In particular, we will describe some approaches to carry out Bayesian parameter estimation and show how this...
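One well-known route around the intractable normalizing constant in this literature is the exchange algorithm; the sketch below applies it to a toy ERGM whose only statistic is the edge count, so that the auxiliary network can be drawn exactly. This is an illustrative stand-in and not necessarily one of the approaches covered in the talk.

import numpy as np

rng = np.random.default_rng(0)
n_nodes = 30
n_pairs = n_nodes * (n_nodes - 1) // 2
theta_true = -1.0
y_obs = rng.uniform(size=n_pairs) < 1 / (1 + np.exp(-theta_true))   # observed network (dyad indicators)
s_obs = y_obs.sum()                                                  # sufficient statistic: edge count

def simulate_edge_count(theta):
    # Exact draw from the edge-count ERGM: each dyad is Bernoulli(sigmoid(theta))
    return (rng.uniform(size=n_pairs) < 1 / (1 + np.exp(-theta))).sum()

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal()
    s_aux = simulate_edge_count(prop)          # auxiliary network drawn at the proposed parameter
    # The intractable normalizing constants cancel; flat prior, symmetric proposal
    log_alpha = (prop - theta) * (s_obs - s_aux)
    if np.log(rng.uniform()) < log_alpha:
        theta = prop
    chain.append(theta)

print("posterior mean of theta:", np.mean(chain[1000:]), "truth:", theta_true)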
Published 05/16/13
This work is motivated by the challenges of drawing inferences from presence-only data. For example, when trying to determine what habitat sea turtles "prefer", we only have data on where turtles were observed, not data about where the turtles actually are. Therefore, if we find that our sample contains very few turtles living in regions with tall sea grass, we cannot conclude that these areas are unpopular with the turtles, merely that we are unlikely to observe them there. Similar issues...
Published 05/16/13
The high-dimensional setting is a modern and dynamic research area in statistics. It covers numerous situations where the number of explanatory variables is much larger than the sample size. This is the case in genomics, where one observes the expression of (dozens of) thousands of genes; typically one has at hand a small sample of high-dimensional vectors derived from a large set of covariates. Such datasets will be abbreviated to HDD-I, for High Dimensional Data of type I. A particular setting may...
Published 05/16/13