Category Archives: Statistics

The War of the Roses: Tea Edition
Picture the following: the year is 1923, and it’s a sunny afternoon at a posh garden party in Cambridge. Among the polite chatter, one Muriel Bristol—a psychologist studying the mechanisms by which algae acquire nutrients—mentions she has a preference for tea poured over milk, as opposed to milk poured over tea. In a classic example of women not being able to express even the most insignificant preference without an opinionated man telling them they’re wrong, Ronald A. Fisher, a local statistician (later turned eugenicist who dismissed the notion of smoking cigarettes being dangerous as ‘propaganda’, mind you) decides to put her claim to the test with an experiment. Bristol is given eight cups of tea and asked to classify them as milk first or tea first. Luckily, she correctly identifies all eight of them, and gets to happily go about her life (presumably until the next time she dares mention a similarly outrageous and consequential opinion like a preferred toothpaste brand or a favourite method for filing papers). Fisher, on the other hand, is inspired to develop Fisher’s exact test, a statistical significance test used in the analysis of contingency tables.
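For the statistically curious, the whole experiment fits in a 2×2 contingency table, and the test itself is a one-liner these days. A quick sketch using scipy's implementation, with the table filled in for Bristol's perfect eight out of eight:

```python
# Fisher's exact test on the tea-tasting experiment, via scipy.
from scipy.stats import fisher_exact

# Rows: true pouring order; columns: Bristol's guess (milk first, tea first).
table = [[4, 0],   # truly milk first: all four identified correctly
         [0, 4]]   # truly tea first: all four identified correctly

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"one-sided p-value = {p_value:.4f}")  # 1/70 ≈ 0.0143
```

Getting all eight cups right by luck alone has probability 1/70, so her preference holds up rather well.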
Continue reading

Pitfalls of using Pearson’s correlation for comparing model performance
Pearson’s R (correlation coefficient) is a measure of the linear correlation between two variables, giving a value between -1 and 1, where 1 is total positive linear correlation, 0 is no linear correlation, and -1 is total negative linear correlation. While it is a useful statistic for understanding the relationship between two variables, it is also often used to compare the performance of two or more models. For example, imagine we had experimental values that we are trying to predict, along with the predictions of several models. Obviously, we would prefer the model with the highest Pearson’s R … or perhaps not?
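To make the pitfall concrete, here is a minimal sketch of my own (not taken from the post): a model whose predictions are a perfect linear transform of the truth achieves R = 1, despite being far worse in absolute terms than a noisier but well-calibrated model.

```python
# My own illustration (not from the post): R = 1 does not mean accurate predictions.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
experimental = rng.normal(size=100)

# Model A: small random error around the experimental values.
model_a = experimental + rng.normal(scale=0.3, size=100)

# Model B: a perfect linear transform of the truth, systematically scaled and shifted.
model_b = 3.0 * experimental + 5.0

rmse = lambda pred: np.sqrt(np.mean((pred - experimental) ** 2))
r_a, _ = pearsonr(experimental, model_a)
r_b, _ = pearsonr(experimental, model_b)

print(f"Model A: R = {r_a:.3f}, RMSE = {rmse(model_a):.3f}")
print(f"Model B: R = {r_b:.3f}, RMSE = {rmse(model_b):.3f}")
# Model B scores a perfect R of 1.0 while its predictions are nowhere near the data.
```

Pearson’s R is invariant to positive linear rescaling of the predictions, so it is blind to systematic offsets and scale errors that error metrics such as RMSE would pick up immediately.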
Continue reading

Optimising for PR AUC vs ROC AUC – an intuitive understanding
When training a machine learning (ML) model, our main aim is usually to get the ‘best’ model out the other end in an unbiased manner. Of course, there are other considerations such as quick training and inference, but mostly we want to be good at predicting the right answer.
A number of factors will affect the quality of our final model, including the chosen architecture, optimiser, and – importantly – the metric we are optimising for. So, how should we pick this metric?
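As a small, hand-rolled illustration of why the choice matters (not from the post itself), consider a heavily imbalanced binary classification problem, where ROC AUC and PR AUC can tell quite different stories:

```python
# My own toy example: the same classifier scored with ROC AUC vs PR AUC
# on a dataset where only ~1% of the samples are positive.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]

print(f"ROC AUC: {roc_auc_score(y_test, scores):.3f}")
# average_precision_score summarises the precision-recall curve.
print(f"PR AUC:  {average_precision_score(y_test, scores):.3f}")
# On rare-positive problems, ROC AUC often looks flattering while PR AUC reveals
# how many false positives come along with each true positive.
```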
Continue reading

Taking Equivariance in deep learning for a spin?
I recently went to Sheh Zaidi‘s brilliant introduction to Equivariance and Spherical Harmonics and I thought it would be useful to cement my understanding of it with a practical example. In this blog post I’m going to start with serotonin in two coordinate frames, and build a small equivariant neural network that featurises it.
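To pin down what rotation equivariance asks of a vector-valued featuriser f (namely that f(Rx) = R f(x)), here is a toy sketch of my own, far simpler than a real equivariant network, using offsets from the centroid as a trivially equivariant feature:

```python
# A toy check of rotation equivariance: featurising a rotated molecule should give
# the rotated features of the original molecule.
import numpy as np
from scipy.spatial.transform import Rotation

def centroid_offsets(coords: np.ndarray) -> np.ndarray:
    """A trivially equivariant vector feature: each atom's offset from the centroid."""
    return coords - coords.mean(axis=0)

# Hypothetical 3D coordinates standing in for a small molecule such as serotonin.
coords = np.random.default_rng(0).normal(size=(25, 3))
R = Rotation.random(random_state=1).as_matrix()

features_of_rotated = centroid_offsets(coords @ R.T)   # f(Rx)
rotated_features = centroid_offsets(coords) @ R.T      # R f(x)

print(np.allclose(features_of_rotated, rotated_features))  # True
```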
Continue reading

The Surprising Shape of Normal Distributions in High Dimensions
Multivariate Normal distributions are an essential component of virtually any modern deep learning method—be it to initialise the weights and biases of a neural network, perform variational inference in a probabilistic model, or provide a tractable noise distribution for generative modelling.
What most of us (including—until very recently—me) aren’t aware of, however, is that these Normal distributions begin to look less and less like the characteristic bell curve that we associate them with as their dimensionality increases.
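A quick numerical check of my own makes the point: the norm of a sample from a d-dimensional standard Normal concentrates around sqrt(d), so in high dimensions almost none of the probability mass sits near the mode at the origin.

```python
# Norms of standard Normal samples concentrate around sqrt(d) as d grows.
import numpy as np

rng = np.random.default_rng(0)
for d in (1, 10, 100, 1000):
    norms = np.linalg.norm(rng.standard_normal(size=(10_000, d)), axis=1)
    print(f"d = {d:4d}  mean norm = {norms.mean():6.2f}  "
          f"sqrt(d) = {np.sqrt(d):6.2f}  std of norm = {norms.std():.2f}")
# The spread of the norm stays roughly constant while its mean grows like sqrt(d),
# so the mass ends up in a thin 'soap bubble' far from the origin.
```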
Continue reading

A Simple Way to Quantify the Similarity Between Two Sets of Molecules
When designing machine learning algorithms with the aim of accelerating the discovery of novel and more effective therapeutics, we often care deeply about their ability to generalise to new regions of chemical space and accurately predict the properties of molecules that are structurally or functionally dissimilar to the ones we have already explored. To evaluate the performance of algorithms in such an out-of-distribution setting, it is essential that we are able to quantify the data shift that is induced by the train-test splits that we rely on to decide which model to deploy in production.
For our recent ICML 2023 paper Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions, we chose to quantify the distributional similarity between two sets of molecules through the Maximum Mean Discrepancy (MMD).
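As a rough flavour of the idea, here is a minimal sketch of an MMD estimate with an RBF kernel on precomputed feature vectors; the kernel, bandwidth and estimator below are illustrative choices rather than the exact ones from the paper, and the 'fingerprints' are random stand-ins.

```python
# A minimal MMD^2 estimate with an RBF kernel; the feature vectors below are
# random stand-ins for whatever molecular featurisation one chooses.
import numpy as np

def rbf_kernel(A: np.ndarray, B: np.ndarray, gamma: float) -> np.ndarray:
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd_squared(X: np.ndarray, Y: np.ndarray, gamma: float = 0.01) -> float:
    """Biased (V-statistic) estimate of the squared Maximum Mean Discrepancy."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
train_feats = rng.normal(size=(200, 64))             # 'training set' molecules
similar_feats = rng.normal(size=(150, 64))           # drawn from the same distribution
shifted_feats = rng.normal(loc=0.5, size=(150, 64))  # a covariate-shifted 'test set'

# The bandwidth gamma should match the scale of the features
# (a median-distance heuristic is a common choice).
print(f"MMD^2(train, similar) = {mmd_squared(train_feats, similar_feats):.4f}")
print(f"MMD^2(train, shifted) = {mmd_squared(train_feats, shifted_feats):.4f}")
# The shifted set yields a noticeably larger value, quantifying the data shift.
```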
Continue reading

ggPlotting tips with OPIG data
Ever wondered whether opiglets keep their ketchup in the fridge or the cupboard? Perhaps you’ve wanted to know how to create a nice figure that displays lots of information simultaneously. Publication-quality figures are easy in R with the ggplot package, and we may also learn some good visualisation practice along the way.
Continue reading

PHinally PHunctionalising my PHigures with PHATE feat. Plotly Express.
After a recommendation from a friend, I had really wanted to try Plotly Express, but I never had the inclination to read more documentation when matplotlib already gives me enough grief. While experimenting with ChatGPT, I finally decided to functionalise my figure-making scripts. With these scripts I managed to produce figures that made people question what I had actually been doing with my time – but I promise this will be worth your time.
I have been working with dimensionality reduction techniques recently and came across this paper by Moon et al. PHATE is a technique that represents high-dimensional (i.e. biological) data in a way that aims to preserve connections rather than distances, and I knew I wanted to try it as soon as I saw it. Why should you care? PHATE in 3D is faster than t-SNE in 2D. It would almost be rude not to try it out.
PHATE
In my opinion, PHATE (or potential of heat diffusion for affinity-based transition embedding) does have a lot going on, but the choices at each stage feel quite sensible. It might not come as a surprise that it was primarily designed to make visual inspection of data easier on the eyes.
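As a flavour of what functionalising a figure script can look like, here is a minimal sketch assuming the open-source phate package and Plotly Express; the helper function, column names and random data are illustrative rather than lifted from the scripts described above.

```python
# A reusable helper: embed high-dimensional features with PHATE and return a
# 3D Plotly Express figure (illustrative sketch, hypothetical names throughout).
import numpy as np
import pandas as pd
import phate
import plotly.express as px

def phate_scatter_3d(features: np.ndarray, labels, title: str):
    """Embed features with PHATE in 3D and return an interactive scatter figure."""
    embedding = phate.PHATE(n_components=3, random_state=0).fit_transform(features)
    df = pd.DataFrame(embedding, columns=["PHATE1", "PHATE2", "PHATE3"])
    df["label"] = labels
    return px.scatter_3d(df, x="PHATE1", y="PHATE2", z="PHATE3",
                         color="label", title=title)

# Example usage with random stand-in data.
X = np.random.default_rng(0).normal(size=(500, 50))
fig = phate_scatter_3d(X, labels=np.repeat(["a", "b"], 250), title="PHATE demo")
fig.show()
```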
Continue reading

The Boltzmann Distribution and Gender Stereotypes
Journalist Caitlin Moran recently tweeted the following:
“I feel like every day now, I read/hear something saying “We don’t talk about what’s POSITIVE about masculinity; what’s GOOD about men and boys.” So: what IS the best stuff about boys, and men? Honest, celebratory question.”
What followed was a collection of replies acknowledging and celebrating various traits seen typically as ‘male’, including certain activities, such as knowing about sports or cars, or a desire to do DIY type work, and characteristics such as physical strength, no-nonsense attitudes and a ‘less complicated’ style of friendship between men.
Whilst I commend Moran’s efforts to turn recent discussions surrounding masculinity on their head and frame them in a positive light, to me the responses offered and the discussion that followed felt somewhat stifling. I am biologically male and identify as male, but I do not feel that I personally adhere to most of these stereotypes. I am not physically strong, I know very little about cars and sports, and I find there to be just as much nuance and drama in male-male friendships as there is in friendships between other genders.
Continue reading