LightningCLI, my new best friend

If you’ve ever worked on machine learning projects, you’ll know that training models is just one aspect of the process. Code setup, configuration management, and ensuring reproducibility can also take up a lot of time. I’m a big fan of PyTorch Lightning primarily because it hides most of the boilerplate code you usually need, making your code more modular and readable. It even allows you to train your models on multiple GPUs with ease. All of this comes with the minor trade-off of learning an intuitive API, which can be easily extended to tweak any low-level details for those rare cases where the standard API falls short.

However, despite finding PyTorch Lightning incredibly useful, there’s one aspect that has always bothered me: configuring the model and training hyperparameters in a flexible and reproducible manner. In my view, the best approach is to use configuration files for the various modules involved, which can be easily overridden at runtime using command-line arguments or environment variables. To achieve this, I developed my own packages, configfile and argParseFromDoc, which facilitate this process.

But now, there’s a tool within the Lightning suite that offers all these features in a seamlessly integrated package. Allow me to introduce you to LightningCLI. This tool streamlines the process of hyperparameter configuration, making it both flexible and reproducible. With LightningCLI, you get the best of both worlds: the power of PyTorch Lightning and a hassle-free setup.

The core idea here is to write a config file (or several) containing the required parameters for the trainer, the model and the dataset. These are written as YAML files with the following structure:

trainer:
  logger: true
  ...
model:
  out_dim: 10
  learning_rate: 0.02
data:
  data_dir: ./
  image_size: 256
ckpt_path: null

Here the YAML fields correspond to the parameters of the PyTorch Lightning Trainer and of your custom model and data classes, which inherit from LightningModule and LightningDataModule. A full self-contained example could be:

import torch
import lightning.pytorch as pl
from lightning.pytorch.cli import LightningCLI


class MyModel(pl.LightningModule):
    def __init__(self, out_dim: int, learning_rate: float):
        super().__init__()
        self.save_hyperparameters()
        self.out_dim = out_dim
        self.learning_rate = learning_rate
        self.model = create_my_model(out_dim)  # your model-building function

    def training_step(self, batch, batch_idx):
        out = self.model(batch.x)
        loss = self.compute_loss(out, batch.y)  # your loss computation
        return loss

    def configure_optimizers(self):
        # Required by Lightning: without it, fit() cannot train
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)


class MyDataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str, image_size: int):
        super().__init__()
        self.data_dir = data_dir
        self.image_size = image_size

    def train_dataloader(self):
        return create_dataloader(self.image_size, self.data_dir)  # your dataloader function


def main():
    cli = LightningCLI(model_class=MyModel, datamodule_class=MyDataModule)


if __name__ == "__main__":
    main()

This can then be run easily as

python script.py fit --config config.yaml
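
Any value in the config can also be overridden directly from the command line, which is handy for quick experiments without editing the file. For example, to change the learning rate for a single run:

python script.py fit --config config.yaml --model.learning_rate 0.1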

What is even better is that you can split the configuration across several config files, and that config files can refer to Python classes to be instantiated. This makes the configuration system flexible enough to configure practically anything you can imagine.

model:
  class_path: model.MyModel2
  init_args:
    learning_rate: 0.2
    loss:
      class_path: torch.nn.CrossEntropyLoss
      init_args:
        reduction: mean
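
For the class_path mechanism to work, the referenced class just needs type-annotated constructor arguments that LightningCLI (via jsonargparse) can inspect. As a minimal sketch, model.MyModel2 might look like this (the names here are illustrative, matching the config above):

import torch
import lightning.pytorch as pl

class MyModel2(pl.LightningModule):
    def __init__(self, learning_rate: float, loss: torch.nn.Module):
        # Because `loss` is annotated as torch.nn.Module, the nested
        # class_path/init_args block in the config is instantiated
        # automatically and passed in here.
        super().__init__()
        self.learning_rate = learning_rate
        self.loss = loss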

In conclusion, LightningCLI brings the convenience of configuration management, command-line flexibility, and reproducibility to your PyTorch Lightning projects. With simple yet powerful features, it’s a tool that should be part of any machine learning engineer’s toolkit.

A Simple Way to Quantify the Similarity Between Two Sets of Molecules

When designing machine learning algorithms with the aim of accelerating the discovery of novel and more effective therapeutics, we often care deeply about their ability to generalise to new regions of chemical space and accurately predict the properties of molecules that are structurally or functionally dissimilar to the ones we have already explored. To evaluate the performance of algorithms in such an out-of-distribution setting, it is essential that we are able to quantify the data shift that is induced by the train-test splits that we rely on to decide which model to deploy in production.

For our recent ICML 2023 paper Drug Discovery under Covariate Shift with Domain-Informed Prior Distributions over Functions, we chose to quantify the distributional similarity between two sets of molecules through the Maximum Mean Discrepancy (MMD).
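
As a rough illustration of the idea (not the exact implementation from the paper), MMD compares the average kernel similarity within each set to that between the sets. A minimal sketch with an RBF kernel over precomputed feature vectors, e.g. molecular fingerprints, could look like this:

import numpy as np

def mmd_squared(X, Y, gamma=1.0):
    """Biased estimate of MMD^2 between two samples X and Y,
    where rows are feature vectors (e.g. molecular fingerprints)."""
    def rbf(A, B):
        # Pairwise squared Euclidean distances, then Gaussian kernel
        d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return rbf(X, X).mean() + rbf(Y, Y).mean() - 2 * rbf(X, Y).mean()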

Continue reading

AI Can’t Believe It’s Not Butter

Recently, I’ve been using a Convolutional Neural Network (CNN), and other methods, to predict the binding affinity of antibodies from their sequence. However, nine months ago, I applied a CNN to a far more important task – distinguishing images of butter from margarine. Please check out the GitHub link below to learn moo-re.

https://github.com/lewis-chinery/AI_cant_believe_its_not_butter

A simple criterion can conceal a multitude of chemical and structural sins

We’ve been investigating deep learning-based protein-ligand docking methods, which often claim to be able to generate ligand binding modes within 2Å RMSD of the experimental one. We found, however, that this simple criterion can conceal a multitude of chemical and structural sins…

DeepDock attempted to generate the ligand binding mode from PDB ID 1t9b
(light blue carbons, left), but gave pretzeled rings instead (white carbons, right).
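
For context, the 2Å criterion is usually computed as a symmetry-corrected heavy-atom RMSD between the predicted and experimental poses, without re-aligning them. A minimal sketch using RDKit (file names are placeholders):

from rdkit import Chem
from rdkit.Chem import rdMolAlign

ref = Chem.MolFromMolFile("crystal_ligand.sdf")    # experimental pose
pred = Chem.MolFromMolFile("predicted_pose.sdf")   # docked pose

# Symmetry-aware RMSD in the original coordinate frames (no alignment)
rmsd = rdMolAlign.CalcRMS(pred, ref)
print(f"RMSD = {rmsd:.2f} Å: {'within' if rmsd < 2.0 else 'outside'} the 2 Å cutoff")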

Continue reading

Placeholder compounds: distraction vs. accuracy

When showcasing an approach in computational chemistry, an example molecule is required as a placeholder. But which one to choose? I would classify three different approaches: choosing a recognisable molecule, a top-selling drug, or a randomly sketched compound.

At a recent conference, Sheffield Cheminformatics 2023, I saw examples of all three, and one problem I had was that some placeholders distracted me into searching to figure out what they were.

Continue reading

The dangers of Conda-Pack and OpenMM

If you are running lots of little jobs in SLURM and want to make use of free nodes that suddenly become available, it is helpful to have a way of rapidly shipping your environments that does not rely on installing conda or rebuilding the environment from scratch every time. This is useful with complex rebuilds where exported .yml files do not always work as expected, even when specifying exact versions and source locations.

In these situations, a tool such as conda-pack becomes incredibly useful. Once you have perfected the house of cards that is your conda environment, you can use conda-pack to save that exact state as a tar.gz file:

conda-pack -n my_precious_env -o my_precious_env.tar.gz

This can provide you with a backup to be used when you accidentally delete conda from your system, or if you irreparably corrupt the environment and cannot roll back to the point in time when everything worked. These tar.gz files can also be copied to distant locations with rsync or scp, then unpacked, sourced and used without installing conda…
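
For reference, unpacking on the destination machine looks roughly like this (paths are placeholders):

mkdir -p my_precious_env
tar -xzf my_precious_env.tar.gz -C my_precious_env
source my_precious_env/bin/activate
conda-unpack  # rewrites hard-coded prefix paths so the env works in its new home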

Continue reading

Lucubration or Gaslighting?​

Or: The best lies have a nugget of truth in them.​

Lucubration – The action or occupation of intensive study originally by candle or lamplight.

Gaslighting – Psychological abuse in which a person or group causes someone to question their own sanity, memories, or perception.

I was recently having a play with Google Bard. Bard, unlike ChatGPT, has access to live data. It also undergoes live feedback and quality control. I was hoping to see if it would find me any journals with articles on prion research which I’d previously overlooked.

Me: Please show me some recent articles about prion research.
(Because always be polite to our AI overlords, they’ll remember!)

Continue reading

Cosmology via Structural Biology and Half-Lives of Teaspoons: Bizarre Papers from Around the Internet

I don’t know if anyone out there shares this peculiar hobby of mine (God, I hope not!), but I often find myself scouring the depths of the internet for some truly bizarre academic papers. Though there is an endless supply of such content to keep one entertained (read: distract yourself during those afternoons you planned to be productive but ended up succumbing to the lunch food coma), I’ve managed to compile a short list of the most fascinating ones for your enjoyment!

  • The case of the disappearing teaspoons: longitudinal cohort study of the displacement of teaspoons in an Australian research institute (Lim et al, 2005, BMJ, doi: https://doi.org/10.1136/bmj.331.7531.1498)

This fantastic and robust study examines the enigmatic phenomenon of disappearing teaspoons in a shared kitchen: an issue of acute importance to all of us who rely on these tiny utensils. The authors reveal the truth about teaspoons’ shockingly short half-life in research institutes. The question remains: does this phenomenon extend to other cutlery as well?

Continue reading