Research

My PhD research concerned the development of statistical methodology for the integration of multiple ‘omic datasets (e.g. genomic, transcriptomic, proteomic) in personalised medicine.

My goal was to tackle some of the challenges involved in identifying relevant patient subgroups (e.g. patients who might be expected to respond similarly to treatment) on the basis of these datasets.

First, when combining different types of ‘omic datasets, it is crucial to take into account the distinct nature of each dataset. For this reason, I developed integrative clustering methods that explicitly weigh the contribution of each dataset to the final clustering according to the amount of information it contains, and that make it possible to combine datasets of different types (e.g. continuous, categorical). These methods are based on the idea that the output of classical statistical techniques such as model-based Bayesian clustering can be combined with kernel methods from the machine learning literature to find a meaningful global clustering that summarises all the available information.
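As a rough illustration of this idea (not the exact method from the thesis), the sketch below assumes that each ‘omic dataset has already been clustered and summarised by an N × N similarity matrix; the dataset weights are fixed placeholders rather than learned, and spectral clustering stands in for a generic kernel-based clustering step.

```python
# Illustrative sketch: weighted combination of per-dataset similarity kernels.
# All data, weights, and cluster numbers below are placeholders.
import numpy as np
from sklearn.cluster import SpectralClustering

n_patients = 100

def toy_similarity_matrix(n, seed):
    """Stand-in for a per-dataset similarity (co-clustering) matrix."""
    labels = np.random.default_rng(seed).integers(0, 3, size=n)
    return (labels[:, None] == labels[None, :]).astype(float)

# One similarity matrix per 'omic dataset (e.g. genomic, transcriptomic, proteomic).
similarity_matrices = [toy_similarity_matrix(n_patients, s) for s in range(3)]

# Dataset-specific weights reflecting how informative each dataset is;
# here they are fixed for illustration rather than learned from the data.
weights = np.array([0.5, 0.3, 0.2])

# Weighted combination of the per-dataset kernels into a global kernel.
global_kernel = sum(w * K for w, K in zip(weights, similarity_matrices))

# Final clustering on the combined kernel (spectral clustering used here
# as a generic kernel-based clustering step).
final_labels = SpectralClustering(
    n_clusters=3, affinity="precomputed", random_state=0
).fit_predict(global_kernel)
```

In practice, the per-dataset matrices would summarise the output of a model-based Bayesian clustering run on each dataset, and the weights would reflect how much clustering structure each dataset actually carries.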

Second, because ‘omic datasets comprise measurements taken on a very large number of variables, many different patient subgroups can usually be identified, depending on which variables are included in the analysis. For this reason, I also worked on integrating genetic information with data on specific patient outcomes, to ensure that the subgroups we identify are truly relevant. To do so, I generalised the method above to the supervised case. An alternative approach is a variational inference algorithm for outcome-guided model-based Bayesian clustering.
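To illustrate the general idea of outcome-guided, model-based clustering fitted by variational inference (this is not the algorithm developed in the thesis), the sketch below uses scikit-learn's variational Bayesian Gaussian mixture and simply models a patient outcome jointly with the molecular features; all data, dimensions, and settings are placeholders.

```python
# Illustrative sketch: clustering guided by a patient outcome, fitted by
# variational inference. A simple proxy for the idea, not the thesis algorithm.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))    # placeholder molecular features
y = rng.normal(size=(200, 1))    # placeholder patient outcome

# Standardise and model the outcome jointly with the features, so that the
# inferred components are informed by it; the outcome column could be
# up-weighted to control how strongly it guides the clustering.
Z = StandardScaler().fit_transform(np.hstack([X, y]))

model = BayesianGaussianMixture(
    n_components=5,  # upper bound; superfluous components are shrunk away
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
)
subgroups = model.fit_predict(Z)
```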

On a more applied note, I participated in a study on cardiovascular disease. My role in the project was to analyse data collected at the Cambridge Blood Donor Centre using the statistical methods described above, in order to help define a personalised cardiovascular disease risk score.


Previous research
High-performance, large-scale regression

During my internship at The Alan Turing Institute, I explored different methods and libraries for performing high-performance, large-scale regression on a supercomputer, with a particular focus on Apache Spark and TensorFlow. The internship was funded by Cray Inc. and carried out in close collaboration with the Cray EMEA Research Lab. You can find more details about our findings on the blog and on the project's official webpage.
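As a generic example of the kind of distributed regression workflow this involves (standard Spark MLlib usage, not the project's actual code; the file path and column names are hypothetical):

```python
# Minimal sketch of distributed linear regression with Apache Spark MLlib.
# "features.parquet" and the column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("large-scale-regression").getOrCreate()

# Load a (potentially very large) dataset distributed across the cluster.
df = spark.read.parquet("features.parquet")

# Assemble the predictor columns into a single feature vector.
assembler = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")
train = assembler.transform(df)

# Fit an elastic-net-regularised linear regression in parallel.
lr = LinearRegression(featuresCol="features", labelCol="y",
                      regParam=0.1, elasticNetParam=0.5)
model = lr.fit(train)
print(model.coefficients, model.intercept)
```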

Permutation tests for functional and network data
Macroscopic traffic flow models
Team OPALE (now ACUMES), INRIA — Sophia Antipolis, France — Summer 2013.