Tales of an OPIG Jamboree

Jamboree
(1) a large gathering, as of a political party or the teams of a sporting league, often including a program of speeches and entertainment;
(2) a large gathering of members of the Boy Scouts or Girl Scouts, usually nationwide or international in scope.

Oxford Dictionary

This October marks twenty years since our supreme leader, Charlotte Deane, came to Oxford to start the first protein informatics group in this university.

Twenty years is a really long time, and at OPIG we like to celebrate things in style. From the beginning, it was clear that we would be doing what we know best: getting together, consuming lots of food and drink, and perhaps talking about science. But, frankly, that’s what we do all the time. That alone simply wasn’t enough to celebrate two decades of scientific output. So Charlotte entrusted several of us with an ambitious goal: to reach out to our former members and ask them to join us in Oxford to celebrate two decades of protein informatics. And that’s what we did.

For two months, we painstakingly tracked down every person who has ever been part of our group, and attempted to gather their contact details to invite them to Oxford. Attempted to, for the most part. While LinkedIn gave us some early victories, some alumni had managed to cover their tracks very well, including one person we could only find after tracking down their three previous jobs. Nevertheless, after much digging, we managed to find up-to-date contact details for every person who has ever passed through our lab, and nearly thirty of these alumni (almost 50% of them!) made their way to Oxford on October 8th* to hold the first OPIG Jamboree.

From the first student (Sanne Abeln, rightmost in the second row) to the most recent (Kate, whose hair can barely be seen at the far left of the third row), we are all here!
Continue reading

Llamas and nanobodies

Nanobodies are an exciting area of increasing interest in the biotherapeutics domain. They consist only of a heavy-chain variable domain, so they are much smaller than conventional antibodies (about a tenth of their mass), but despite this they manage to achieve comparable affinity for their targets, in addition to being more soluble and stable – good things come in small packages! Nanobodies are not naturally produced in humans, but they can be derived from camelids (VHHs) or sharks (vNARs) and then humanised by engineering. For the rest of this blog post we will skip over the science entirely and learn how to draw a llama, a great example of a camelid species.

Graphormer: Merging GNNs and Transformers for Cheminformatics

This is my first OPIG blog! I’m going to start with a summary of the Graphormer, a Graph Neural Network (GNN) that borrows concepts from Transformers to boost performance on graph tasks. This post is largely based on the NeurIPS paper Do Transformers Really Perform Bad for Graph Representation? by Ying et al., which introduces the Graphormer and which we read for our last deep learning journal club. The model has since been integrated into a Microsoft Research project.

I’ll start with a cheap and cheerful summary of Transformers and GNNs before diving into the changes in the Graphormer. Enjoy!
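
To make the Transformer side concrete, here is a minimal, single-head sketch of the spatial-encoding idea: a learnable attention bias indexed by the shortest-path distance between each pair of nodes. The names here are my own, and the real Graphormer adds much more (centrality encoding, edge encoding, multiple heads), so treat this as a toy illustration rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Single-head self-attention with a learnable shortest-path-distance
    bias, in the spirit of Graphormer's spatial encoding (toy sketch)."""

    def __init__(self, dim, max_dist=10):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # One learnable scalar bias per shortest-path-distance bucket.
        self.spatial_bias = nn.Embedding(max_dist + 1, 1)
        self.scale = dim ** -0.5

    def forward(self, x, spd):
        # x:   (n_nodes, dim) node feature vectors
        # spd: (n_nodes, n_nodes) integer shortest-path distances, capped at max_dist
        scores = (self.q(x) @ self.k(x).T) * self.scale
        scores = scores + self.spatial_bias(spd).squeeze(-1)  # structural bias
        return torch.softmax(scores, dim=-1) @ self.v(x)
```

The bias lets the attention between two nodes depend on their graph distance, which is how the Graphormer injects graph structure into an otherwise permutation-agnostic Transformer.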

Continue reading

Using Conda environments with Flask and Apache

With the advent of ABlooper, we’ve recently introduced OpenMM as a new dependency for the SAbDab-SAbPred antibody modelling platform. By far the easiest way to install the OpenMM Python API is via Conda, so we’ve moved to Conda environments for the entire platform. This has made installation of the platform much easier, but introduces complications when it comes to running its web applications under Apache. In this post, I’ll briefly explain the reason for this, and provide a basic guide for running Flask apps using Conda environments under Apache.
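
As a taster, the WSGI entry point itself stays tiny: with mod_wsgi, the Conda environment is typically selected in the Apache config by pointing the WSGIDaemonProcess directive's python-home option at the environment's prefix. A minimal sketch, with hypothetical paths and module names:

```python
# app.wsgi -- minimal WSGI entry point for Apache + mod_wsgi.
# The Conda environment is chosen in the Apache config, e.g.
#   WSGIDaemonProcess myapp python-home=/opt/conda/envs/myenv
# (paths and names here are hypothetical); this file only needs to
# expose the Flask application object under the name 'application'.
import sys

sys.path.insert(0, "/var/www/myapp")  # directory containing the app package

from myapp import app as application  # mod_wsgi looks for 'application'
```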

Continue reading

Running code that fails with style

We have all been there, working on code that continuously fails while staring at a dull and colorless command line. However, we are in luck, as there is a way to make the constant error messages look less depressing. By changing your shell to one that enables a colorful, themed command line and fancy features like automatic text completion and web search, your code won't just fail with ease, but also with style!

A shell is your command-line interpreter: it processes the commands you type and outputs their results. The shell therefore also holds the power to add a little zest to the command line. The most well-known shell is bash, which comes pre-installed on most UNIX systems. However, many different shells exist, each with its own pros and cons. The one we will focus on is called Z Shell, or zsh for short.

Zsh was initially available only on UNIX and UNIX-like systems, but its popularity has since made it accessible on most systems. Like bash, zsh is extremely customizable, and the two are so similar in syntax that most bash commands will work in zsh. The benefit of zsh is that it comes with additional features, plugins, and options, as well as open-source frameworks with large communities. The framework we will look into is called Oh My Zsh.

Continue reading

Universal graph pooling for GNNs

Graph neural networks (GNNs) have quickly become one of the most important tools in computational chemistry and molecular machine learning. GNNs are a type of deep learning architecture designed for the adaptive extraction of vectorial features directly from graph-shaped input data, such as low-level molecular graphs. The feature-extraction mechanism of most modern GNNs can be decomposed into two phases:

  • Message-passing: In this phase, the node feature vectors of the graph are iteratively updated following a trainable local neighbourhood-aggregation scheme, often referred to as message-passing. Each iteration delivers a set of updated node feature vectors, which can be imagined as forming a new “layer” on top of all the previous sets of node feature vectors.
  • Global graph pooling: After a sufficient number of layers has been computed, the updated node feature vectors are used to generate a single vectorial representation of the entire graph. This step is known as global graph readout or global graph pooling. Usually only the top layer (i.e. the final set of updated node feature vectors) is used for global graph pooling, but variations exist that involve all computed graph layers and even the set of initial node feature vectors. Commonly employed global graph pooling strategies include taking the sum or the average of the node features in the top graph layer; a minimal sketch of both phases follows this list.
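
For illustration, here is a deliberately minimal sketch of both phases. The mean-neighbour aggregation and all names are my own choices for the sake of the example, not any particular published architecture:

```python
import torch

def message_passing_layer(h, adj, w):
    """One toy message-passing update: each node aggregates the mean of its
    neighbours' features and mixes it with its own via a linear map."""
    # h: (n, d) node features; adj: (n, n) 0/1 adjacency; w: (2*d, d) weights
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
    neighbour_mean = (adj @ h) / deg
    return torch.relu(torch.cat([h, neighbour_mean], dim=1) @ w)

def global_pool(h, mode="sum"):
    """Naive global readout over the top-layer node features."""
    return h.sum(dim=0) if mode == "sum" else h.mean(dim=0)

# Two message-passing layers followed by sum pooling of the top layer.
n, d = 5, 8
h, adj = torch.randn(n, d), (torch.rand(n, n) > 0.5).float()
w1, w2 = torch.randn(2 * d, d), torch.randn(2 * d, d)
graph_vector = global_pool(message_passing_layer(message_passing_layer(h, adj, w1), adj, w2))
```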

While a lot of research attention has been focused on designing novel and more powerful message-passing schemes for GNNs, the global graph pooling step has often been treated with relative neglect. As mentioned in my previous post on the issues of GNNs, I believe this to be problematic. Naive global pooling methods (such as simply summing up all final node feature vectors) can potentially form dangerous information bottlenecks within the neural graph learning pipeline. In the worst case, such information bottlenecks pose the risk of largely cancelling out the information signal delivered by the message-passing step, no matter how sophisticated the message-passing scheme.

Continue reading

Musings on Digital Nomaddery from Seoul

The languorous, muggy heat of the Korean afternoon sun was what greeted me after a 13-hour cattle-class flight from a cool, sensible Helsinki night. The goings-on in Ukraine, and the associated political turmoil, meant taking the scenic route – avoiding Russia and instead passing over Turkey, Kazakhstan and Mongolia – with legs contorted into unnatural positions and sleep an unattainable dream. Tired and disoriented, I relied less on Anna’s expert knowledge of the Korean language than on her patience for my jet-lag-induced bad mood and brain fog. We waited an hour for a bus to take us from Incheon airport to Yongsan central station in the heart of the capital. It was 35 °C.

I’ve been here for a month. Anna has found work, starting in November; I have found the need to modify my working habits. Gone are the comfortable, temperate offices on St Giles’, replaced by an ever-changing diorama of cafés, hotel rooms and libraries. Lugging around my enormous HP Pavilion, known affectionately by some as ‘The Dominator’, proved to be unsustainable.

It’s thesis-writing time for me, so any programming I do is just tinkering and tweaking and fixing the litany of bugs that Lucy Vost has so diligently exposed. I had planned to run Ubuntu on Parallels using my MacBook Air; I discovered to my dismay that a multitude of Conda packages, including PyTorch, are not supported on Apple’s M1 chip. That plan has been replaced by a combination of Anna’s old Intel MacBook Pro and rewriting my codebase to install and run without a GPU – adversity is the great innovator, as the saying goes.
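
For what it’s worth, the GPU-free rewrite mostly amounts to the standard PyTorch device-fallback idiom; a sketch (the model below is just a placeholder):

```python
import torch

# Fall back to the CPU when CUDA isn't available (e.g. on a laptop).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(16, 1).to(device)  # placeholder model
x = torch.randn(4, 16, device=device)      # tensors follow the same device
y = model(x)
```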

Continue reading

Vaccines and vino

Recently, I was fortunate enough to attend and present at GSK’s PhD and Postdoc workshop in Siena, Italy. The workshop spanned two days and I had a brilliant time there – Siena itself is beautiful, I ate fantastic food, and I learnt a huge amount about all stages of vaccine production.

Unfortunately, due to confidentiality, I can’t go into great detail about others’ current research; however, I have provided below a short overview of the five main areas the workshop focused on.

Continue reading

An evolutionary lens for understanding cancer and its treatment

I recently found myself in Blackwell’s Norrington Room in Oxford, browsing the shelves for some holiday reading. One book in particular caught my eye: a blend of evolution – a topic that has long interested me – and cancer biology, a topic I’m increasingly exposed to in immune repertoire analysis collaborations but on which I am assuredly “non-expert”!

Paperback cover of “The Cheating Cell” by Athene Aktipis.

The Cheating Cell by Athene Aktipis provides a theoretical framework for understanding cancer by considering it as a logical sequitur of the advent of successful multicellular life.

Continue reading