Last year I wrote a post about deploying Flask apps with Apache/mod_wsgi when your app’s dependencies are installed in a conda environment. The year before, in the dark times, I wrote a post about the black magic invocations required to get multiple apps running stably using mod_wsgi. I’ve since moved away from mod_wsgi entirely and switched to running Flask apps from containers using the gunicorn WSGI server behind an Apache reverse proxy, which has made life immeasurably easier. In this post we’ll cover running a Flask app on localhost using gunicorn; in Part II we’ll run our app as a service using Singularity and deploy it to production using Apache as an HTTP proxy server.
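To give a flavour of where we’re heading, if your Flask application object is called app and lives in app.py (placeholder names standing in for your own module and object), the minimal localhost invocation looks like this:

gunicorn --bind 127.0.0.1:8000 app:app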
GitHub.dev: Just press “.”
GitHub.dev is an incredibly useful feature of GitHub which allows you to view and edit code directly in the browser as a remote VSCode session.
To access this remote VSCode session, either:
- Press “.”
- Change “.com” to “.dev” in the URL
This is a great way to quickly explore someone’s code without having to clone it onto your machine or go through the GitHub UI.
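For example, to open GitHub’s public octocat/Hello-World demo repository this way, https://github.com/octocat/Hello-World becomes https://github.dev/octocat/Hello-World.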

Current strategies to predict structures of multiple protein conformational states
Since the release of AlphaFold2 (AF2), protein structure prediction is widely believed to be a solved problem. Current structure prediction tools, such as AF2, can model most proteins with high accuracy. These methods, however, have a major limitation: they have been trained to predict a single structure for a given protein. Proteins are highly dynamic molecules, and their function often depends on transitions between several conformational states. Despite research focusing on the task of predicting the structures of multiple conformations of a protein, no accurate and reliable method is currently available. In this blog post, I will provide a short overview of the strategies developed for predicting protein conformations, which I have grouped into three sets of related approaches. To conclude, I will also demonstrate how to run one of these strategies on your own.
LightningCLI, my new best friend
If you’ve ever worked on machine learning projects, you’ll know that training models is just one aspect of the process. Code setup, configuration management, and ensuring reproducibility can also take up a lot of time. I’m a big fan of PyTorch Lightning primarily because it hides most of the boilerplate code you usually need, making your code more modular and readable. It even allows you to train your models on multiple GPUs with ease. All of this comes with the minor trade-off of learning an intuitive API, which can be easily extended to tweak any low-level details for those rare cases where the standard API falls short.
However, despite finding PyTorch Lightning incredibly useful, there’s one aspect that has always bothered me: configuring the model and training hyperparameters in a flexible and reproducible manner. In my view, the best approach is to use configuration files for the various modules involved, which can be easily overridden at runtime using command-line arguments or environment variables. To achieve this, I developed my own packages, configfile and argParseFromDoc, which facilitate this process.
But now, there’s a tool within the Lightning suite that offers all these features in a seamlessly integrated package. Allow me to introduce you to LightningCLI. This tool streamlines the process of hyperparameter configuration, making it both flexible and reproducible. With LightningCLI, you get the best of both worlds: the power of PyTorch Lightning and a hassle-free setup.
The core idea here is to write a config file (or several) that contains the required parameters for the trainer, the model and the dataset. These are YAML files with the following structure:
trainer:
  logger: true
  ...
model:
  out_dim: 10
  learning_rate: 0.02
data:
  data_dir: ./
  image_size: 256
ckpt_path: null
…
Here the YAML fields should correspond to the parameters of the PyTorch Lightning Trainer and of your custom model and data classes, which inherit from LightningModule and LightningDataModule. So a full self-contained example could be:
import lightning.pytorch as pl
from lightning.pytorch.cli import LightningCLI

class MyModel(pl.LightningModule):
    def __init__(self, out_dim: int, learning_rate: float):
        super().__init__()
        self.save_hyperparameters()
        self.out_dim = out_dim
        self.learning_rate = learning_rate
        self.model = create_my_model(out_dim)

    def training_step(self, batch, batch_idx):
        out = self.model(batch.x)
        loss = self.compute_loss(out, batch.y)
        return loss

class MyDataModule(pl.LightningDataModule):
    def __init__(self, data_dir: str, image_size: int):
        super().__init__()
        self.data_dir = data_dir
        self.image_size = image_size

    def train_dataloader(self):
        return create_dataloader(self.image_size, self.data_dir)

def main():
    cli = LightningCLI(model_class=MyModel, datamodule_class=MyDataModule)

if __name__ == "__main__":
    main()
This can then be run easily as:

python script.py fit --config config.yaml
What is even better is that you can split the configuration across several config files, and that config files can refer to Python classes to be instantiated, making this configuration system flexible enough to configure practically anything you can imagine.
model:
  class_path: model.MyModel2
  init_args:
    learning_rate: 0.2
    loss:
      class_path: torch.nn.CrossEntropyLoss
      init_args:
        reduction: mean
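Because every field is also exposed as a command-line flag, any configured value can be overridden at runtime without editing the file. For instance (using the hypothetical script and config above), something along these lines:

python script.py fit --config config.yaml --model.init_args.learning_rate 0.1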
In conclusion, LightningCLI brings the convenience of configuration management, command-line flexibility, and reproducibility to your PyTorch Lightning projects. With simple yet powerful features, it’s a tool that should be part of any machine learning engineer’s toolkit.
The dangers of Conda-Pack and OpenMM
If you are running lots of little jobs in SLURM and want to make use of free nodes that suddenly become available, it is helpful to have a way of rapidly shipping your environments that does not rely on installing conda or rebuilding the environment from scratch every time. This is especially useful for complex environments, where exported .yml files do not always rebuild as expected, even when specifying exact versions and source locations.
In these situations a tool such as conda-pack becomes incredibly useful. Once you have perfected the house of cards that is your conda environment, you can use conda-pack to save that exact state as a tar.gz file.
conda-pack -n my_precious_env -o my_precious_env.tar.gz
This can provide you with a backup to be used when you accidentally delete conda from your system, or if you irreparably corrupt the environment and cannot roll back to the point in time when everything worked. These tar.gz files can also be copied to distant locations using rsync or scp, unpacked, sourced and used without installing conda…
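As a concrete sketch (using the placeholder environment name from the command above), the standard conda-pack recipe on the receiving machine is:

mkdir -p my_precious_env
tar -xzf my_precious_env.tar.gz -C my_precious_env
source my_precious_env/bin/activate
conda-unpack

The final conda-unpack step, a script shipped inside the packed archive, fixes up the hard-coded path prefixes so the environment works from its new location.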
What can you do with the OPIG Immunoinformatics Suite? v3.0
OPIG’s growing immunoinformatics team continues to develop and openly distribute a wide variety of databases and software packages for antibody/nanobody/T-cell receptor analysis. Below is a summary of all the latest updates (following on from v1.0 and v2.0).
Exploring the Observed Antibody Space (OAS)
The Observed Antibody Space (OAS) [1,2] is an amazing resource for investigating observed antibodies and for training antibody-specific models; however, its size (over 2.4 billion unpaired and 1.5 million paired antibody sequences as of June 2023) can make it painful to work with. Additionally, OAS is extremely information rich, with nearly 100 columns for each antibody heavy or light chain, further complicating how to handle the data.
Having spent a lot of time working with OAS, I wanted to share a few tricks and insights, which I hope will reduce the pain and increase the joy of working with OAS!
Coding a Progress Bar for your Google Slides Presentation
Presentations are a great opportunity to explain your work to a new audience and receive valuable feedback. A vital aspect of a presentation is keeping the audience’s attention, which, I have found from experience, is generally quite tricky.
One thing that I have noticed other presenters using, which has helped maintain my focus, is an indication of how far through the presentation they are. Including in your slides an indication that only a few slides remain encourages the listeners to keep their focus for a little longer.
In this post I will show you how to do this using Apps Script, Google’s cloud platform that allows you to write JavaScript code which can work with its online products such as Docs or Slides.
Writing a BLOPIG Post With ChatGPT: A Personal Take on Using AI for Assisted Writing
Disclaimer: I used ChatGPT to improve the writing style of this article, in combination with some personal curation before obtaining a final version.
You’ve probably heard it all already, from ChatGPT writing code and proofreading for you to a rap battle between OPIG’s Antibodies and Small Molecules groups, and more.
Whether you like it or not, ChatGPT has unleashed people’s creative side when it comes to applications and attempts to find shortcuts. Questionable? Absolutely!
In this BLOPIG post, I show how I used ChatGPT to easily write a post summarising some of my own work, which I presented as part of my group meeting talk. I also list some personal thoughts on the ethical concerns around using ChatGPT to assist your writing.
To start off, I passed content from my own publication draft to ChatGPT, asking it to generate a blog post in plain English for BLOPIG. The outcome:

Not bad.
But it made me realise a number of things:
- With great power comes great responsibility [Uncle Ben – Spider-Man]. You are responsible for the ethics that go into using ChatGPT. Are you faking expertise? Are you actually being lazy, or just being efficient? Think twice (or many more times) about whether you’re doing the right thing.
- It can significantly reduce the number of writing iterations, but don’t take it at face value. Can you actually trust the plain output? No. Never take its output as the ground truth, as large language models such as ChatGPT often produce biased writing. Keep in mind that whatever you produce as a scientist will be picked up by others and, if incorrect, is prone to drive misinformation. It is OK to reduce mechanical iterations, but it’s NOT OK to skip quality control.
- Be open about it. You don’t want to set the wrong example for your colleagues. So mention if you used it and how you used it; it is fine to encourage efficiency, but not to incentivise a culture of scientific misconduct and plagiarism. Don’t skip the step of producing quality ideas on your own. This is such a concern that publishers like Elsevier have already reacted by publishing guidelines contemplating this possibility, while Springer Nature is working on ways to spot AI-generated outputs.
The bottom line
What are the dos and don’ts of using ChatGPT?
- Yes, use it to have fun.
- Yes, use it to proofread or polish your writing.
- Yes, use it to summarise your own ideas.
- No, don’t use it to do the analysis and interpretation of your results.
- No, don’t copy and paste its direct output into your publication.
- No, don’t hide that you used it.
- Finally, NO, you can’t add ChatGPT as a contributing author!
Train Your Own Protein Language Model In Just a Few Lines of Code
Language models have taken the world by storm recently and, given the already explored analogies between protein primary sequence and text, there’s been a lot of interest in applying these models to protein sequences. Interest is coming not only from academia and the pharmaceutical industry, but also from some very unlikely suspects such as ByteDance – yes, the same ByteDance of TikTok fame. So if you also fancy trying your hand at building a protein language model, then read on; it’s surprisingly easy.
Training your own protein language model from scratch is made remarkably easy by the HuggingFace Transformers library, which allows you to specify a model architecture, tokenise your training data, and train a model in only a few lines of code. Under the hood, the Transformers library uses PyTorch (or optionally TensorFlow) models, allowing you to dig deeper into customising training or model architecture, or simply leave it to the highly abstracted Transformers library to handle it all for you.
For this article, I’ll assume you already understand how language models work, and are now looking to implement one yourself, trained from scratch.
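To make that concrete, here is a minimal sketch of what such a training script can look like. This is my own illustrative example rather than the code from the post that follows: it borrows the ESM-2 tokeniser from the facebook/esm2_t6_8M_UR50D checkpoint, builds a deliberately tiny, randomly initialised ESM architecture, and trains it as a masked language model on two placeholder sequences; every hyperparameter here is a toy value.

from datasets import Dataset
from transformers import (AutoTokenizer, DataCollatorForLanguageModeling,
                          EsmConfig, EsmForMaskedLM, Trainer, TrainingArguments)

# Toy corpus; in practice you would load millions of sequences (e.g. UniRef).
sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
             "MADEEKLPPGWEKRMSRSSGRVYYFNHITNASQ"]

# Borrow ESM-2's amino-acid vocabulary instead of training a tokeniser.
tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")

# A tiny, randomly initialised architecture: this is training from scratch.
config = EsmConfig(vocab_size=tokenizer.vocab_size,
                   pad_token_id=tokenizer.pad_token_id,
                   mask_token_id=tokenizer.mask_token_id,
                   hidden_size=64, num_hidden_layers=2,
                   num_attention_heads=2, intermediate_size=128)
model = EsmForMaskedLM(config)

# Tokenise the sequences into input_ids and attention masks.
dataset = Dataset.from_dict({"sequence": sequences}).map(
    lambda batch: tokenizer(batch["sequence"], truncation=True, max_length=64),
    batched=True, remove_columns=["sequence"])

# Randomly mask 15% of tokens so the model learns to reconstruct them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="protein_lm",
                                         num_train_epochs=1,
                                         per_device_train_batch_size=2),
                  train_dataset=dataset,
                  data_collator=collator)
trainer.train()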