The Oxford Medical Research Council Doctoral Training Partnership (MRC DTP), the program through which my DPhil is funded, hosts an annual Symposium to highlight research being conducted by DTP students and offer insights into the career paths of external speakers.
This year, I was on the committee organising the Symposium and was involved in selecting student presenters, as well as deciding on and inviting external speakers. It was a great experience!
Over the past year, I have been working on building a graph-based paratope (antibody binding site) prediction tool – Paragraph. Fortunately, I have had moderate success with this and you can now check out the preprint of this work here.
However, for a long time, I struggled with a highly unstable network, where different random seeds yielded very different results. I believe this instability was largely due to the high class imbalance in my data – only ~10% of all residues in the Fv (variable region of the antibody) belong to the paratope.
I tried many different things in an attempt to stabilise my training, most of which failed. I will share all of these ideas with you, though – successful or not – as what works for one person/network is never guaranteed to work for another. I hope the ideas below give others facing similar issues something to try. Where possible, I also provide example hyperparameter values that could act as sensible starting points.
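As a flavour of the kind of idea I mean (a hypothetical sketch, not the exact setup used for Paragraph): with only ~10% of residues in the positive class, one common first step is to up-weight the positive class in the loss, e.g. via the pos_weight argument of PyTorch's BCEWithLogitsLoss, starting from roughly the negative:positive ratio (~9 here) and tuning from there.

import torch
import torch.nn as nn

# up-weight the rare positive (paratope) class by roughly the
# negative:positive ratio; the exact value is a hyperparameter to tune
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([9.0]))

logits = torch.randn(32, 1)                      # stand-in model outputs
labels = torch.randint(0, 2, (32, 1)).float()    # stand-in residue labels
loss = criterion(logits, labels)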
Eve, Brennan and I were delighted to attend the sixth AIRR (adaptive immune receptor repertoire) Community Meeting: Exploring New Frontiers in San Diego. Eve and I had been awaiting this meeting for a mere 3 years, since it was announced during the last in-person AIRR Community Meeting back in 2019. Fortunately, San Diego did not disappoint.
After a rocky start (featuring many hours stuck in traffic on the M40, one missed flight and one delayed flight), we made it to California! The three-day conference had ~230 participants (remote and in-person) and featured great talks from academia and industry. We particularly enjoyed keynote talks from Dennis Burton on rational vaccine design using broadly neutralising antibodies, Gunilla Karlsson Hedestam on the functional consequences of allelic variation, Shane Crotty on COVID-19 and HIV vaccine design, and Atul Butte on uses of electronic health record data and why we should all found start-ups.
We had fun delivering a tutorial on OPIG antibody tools and, most importantly, we all won AIRR t-shirts in the raffle (potentially we were the only people who noticed how to enter on the conference app). Highlights outside of the conference included paddle boarding and seeing hummingbirds, pelicans, sea lions, seals, ‘Garibaldi’ the state fish, and meeting Bob the golden retriever at a surfing shop. We’re now off to find jobs on the West Coast so we can live at the beach…
Understanding what’s going on when you’ve started training your shiny new ML model is hard enough. Will it work? Have I got the right parameters? Is it the data? Probably. Any tool that can help with that process is a godsend. Weights & Biases is a great tool to help you visualise and track your model throughout your production cycle. In this blog post, I’m going to detail some basics on how you can initialise and use it to visualise your next project.
Installation
To use Weights & Biases (wandb), you need to make an account. It is free for individuals; however, you will have to pay for team-oriented features. wandb can then be installed using pip or conda:
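pip install wandb
# or, via the conda-forge channel
conda install -c conda-forge wandb

Once installed, a run is started with wandb.init and metrics are streamed with wandb.log. A minimal sketch (the project name and logged values here are placeholders, not from the original post):

import wandb

# start a run; config holds whatever hyperparameters you want tracked
run = wandb.init(project="demo-project", config={"learning_rate": 1e-3, "epochs": 5})

for epoch in range(run.config.epochs):
    fake_loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
    wandb.log({"epoch": epoch, "loss": fake_loss})

run.finish()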
So you’ve submitted your paper, made your code publicly available, and maybe even provided documentation to ensure somebody can reproduce your work. But what about the data your work is based on? Is that readily available to your readers, too?
Maybe it’s too large to put on GitHub alongside your code. Maybe it’s sensitive, or subject to GDPR restrictions, so you can’t just stick a download link on your website. Maybe it’s in a proprietary format that needs non-open software to read. There are many reasons sharing data can be less straightforward than sharing code, and often it’s not entirely clear what ‘best practices’ are for a given situation. Data management is a complicated topic, and to do it justice would require far more than a quick blog post. Instead, I’d like to focus on a single source of guidance that serves as a useful starting point for thinking about responsible data management: the FAIR principles (Findable, Accessible, Interoperable, Reusable).
As a reasonably new RDKit user, I was relieved to find that its built-in functionality for generating basic images from molecules is quite easy to use. However, over time I have picked up some additional tricks to make the generated images slightly more pleasing on the eye!
The first of these (which I definitely stole from another blog post at some point…) is to ask it to produce SVG images rather than PNG:
# ensure the molecule visualisation uses SVG rather than PNG format
from rdkit.Chem.Draw import IPythonConsole
IPythonConsole.ipython_useSVG = True
Now for something slightly more interesting: as a fragment elaborator, I often need to look at a long list of elaborations that have been made to a starting fragment. As these have usually been docked, they don’t look particularly nice when loaded straight into RDKit and drawn:
from rdkit import Chem
from rdkit.Chem import Draw

# load several mols from a single sdf file using SDMolSupplier,
# skipping any entries that fail to parse
elabs = [mol for mol in Chem.SDMolSupplier('frag2/elabsTestNoRefine_Docked_0.sdf') if mol is not None]

# get list of ligand efficiencies so these can be displayed alongside the molecules
LEs = [float(mol.GetProp('Gold.PLP.Fitness')) / mol.GetNumHeavyAtoms() for mol in elabs]

Draw.MolsToGridImage(elabs, legends=[str(LE) for LE in LEs])
Fig. 1: Images generated without doing any tinkering
Two quick changes that will immediately make this image more useful are aligning the elaborations by a supplied substructure (here I supplied the original fragment so that it’s always in the same place) and calculating the 2D coordinates of the molecules so we don’t see the twisty business happening in the bottom right of Fig. 1:
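A minimal sketch of those two changes, continuing from the snippet above (the fragment SMILES below is a placeholder, not the actual starting fragment):

from rdkit.Chem import AllChem

# placeholder SMILES; substitute your actual starting fragment
fragment = Chem.MolFromSmiles('c1ccc(cc1)C(=O)N')
AllChem.Compute2DCoords(fragment)

for mol in elabs:
    # generate clean 2D coordinates for each elaboration, oriented so the
    # shared fragment substructure always lands in the same place as the
    # template (raises an error if the fragment substructure is not found)
    AllChem.GenerateDepictionMatching2DStructure(mol, fragment)

Draw.MolsToGridImage(elabs, legends=[str(LE) for LE in LEs])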
Between the 27th of April and the 1st of May, I was very fortunate to be able to attend the Antibodies as Drugs Keystone Symposium and give my first international conference talk, in which I spoke about the methods our group has developed for using structure to make predictions about where an antibody binds relative to other antibodies. This included paratyping [1], Ab-Ligity [2] and, most recently, SPACE [3].
I will preface this by saying that much of the work people spoke about was unpublished, which was so exciting, but makes for a difficult blog post to write. To avoid any possibility of putting my foot in my mouth, I will keep the science very surface-level. The conference was held at the Keystone resort in Colorado, and the science, combined with a kind of landscape I have never experienced before, made for an extremely cool experience. This meeting was originally combined with a protein design meeting, and the two were split by COVID. This meant that in silico methods were in the minority on the program, but I didn’t mind, as the computational work that was presented was quite diverse and still a good representation of the field. I also really enjoyed the large number of infectious disease talks, which covered a good range of the major human pathogens: ebolaviruses, SARS-CoV-2 (of course), dengue, hantaviruses, metapneumovirus, HIV, TB and malaria all featured. The bispecifics session was another highlight for me. The conference was very well organised, and I liked how we were all asked to share a fun fact about ourselves; one speaker shared that he is a Christmas tree farmer in his spare time (I won’t share his name in case he is keeping that under wraps). That made me reconsider how fun I can truly consider myself…
Without turning this into a travel blog, I also want to add that Keystone was insanely beautiful, and to make you look at some of the pics I got.
The default plots made by Python’s matplotlib.pyplot module are almost always insufficient for publication. With ~20 extra lines of code, however, you can generate high-quality plots suitable for inclusion in your next article.
The fake data I generated for the plot look something like Root Mean Square Deviation (RMSD) versus time for a converged molecular dynamics simulation, so let’s pretend they are. There are a number of problems with this plot: it’s ugly overall, the color scheme is not very attractive and may not be color-blind friendly, the y-axis range of the data extends outside the range of the tick labels, etc.
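A sketch of the kinds of fixes involved (the data, colours and dimensions below are invented for illustration, not taken from the original plot):

import numpy as np
import matplotlib.pyplot as plt

# fake RMSD-vs-time data for a 'converged' simulation
rng = np.random.default_rng(0)
time = np.linspace(0, 100, 500)                                   # ns
rmsd = 2.0 * (1 - np.exp(-time / 5)) + rng.normal(0, 0.1, 500)    # Angstrom

fig, ax = plt.subplots(figsize=(4, 3), dpi=300)
ax.plot(time, rmsd, color='#0072B2', linewidth=1)   # colour-blind-friendly blue

ax.set_xlabel('Time (ns)', fontsize=10)
ax.set_ylabel(r'RMSD ($\AA$)', fontsize=10)
ax.set_xlim(0, 100)
ax.set_ylim(0, 2.5)              # tick labels now span the full data range
ax.tick_params(labelsize=8, direction='out')

# drop the top and right spines for a cleaner frame
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)

fig.tight_layout()
fig.savefig('rmsd_vs_time.png', bbox_inches='tight')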
I don’t know about you, but when I am compiling a complicated program and everything goes smoothly, I feel a mixture of joy and surprise. Let’s face it, compiling can be quite frustrating, and if you need to compile something relatively old, chances are that you will spend hours and hours trying to understand the compiler error messages.
Many such compiler errors, which can be quite convoluted, boil down to your program requiring an older compiler version, so you first need to install one. I am going to assume that you have sudo rights; otherwise, we would be playing the game of compiling a compiler, something that I recommend you do at least, and at most, once in your life.
In common Linux distributions like Ubuntu, installing an older compiler is as easy as using apt or yum:
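For example (the version numbers below are illustrative; available packages vary by distribution and release):

# Ubuntu/Debian: install an older GCC alongside the system default
sudo apt install gcc-9 g++-9

# RHEL/CentOS 8: older toolchains are packaged as 'gcc-toolset'
sudo yum install gcc-toolset-9

# then point your build at the older compiler explicitly, e.g.
CC=gcc-9 CXX=g++-9 ./configure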
Benford’s law is an observation that in numerical data (produced by many kinds of process), the leading digit tends to be small. Wikipedia tells you that in datasets obeying Benford’s law, the number 1 appears as the leading digit about 30% of the time, while 9 appears less than 5% of the time (p(n) = log10(1 + 1/n), where n is the leading digit). Wikipedia further lists multiple kinds of data where this tends to be true, such as electricity bills, population numbers and physical and mathematical constants, particularly where the data can be described by a power law.
Power laws and antibodies have been co-discussed in reference to network descriptions of antigen-experienced BCR repertoires [1], which are often described as scale-free, to use the network terminology (i.e. following a power law). This means the network has a few highly connected nodes and lots of nodes with few or no connections. Such data are an obvious candidate for Benford’s law.
This is of no practical relevance, but I wondered if I could see Benford’s law in other kinds of data besides clone counts in the Observed Antibody Space (OAS). For example, I looked at the leading digit in the number of sequences in all of the data units in OAS. It looks like a good fit for Benford’s law (though with more density at the smaller leading digits) and has a chi-squared value of 0.007 (Figure 1A).
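Checking for Benford-like behaviour is easy to do yourself. A minimal sketch (the toy data and the simple goodness-of-fit statistic below are my own illustration, not the exact analysis behind Figure 1A):

import numpy as np

def benford_expected(d):
    # Benford's predicted frequency for leading digit d
    return np.log10(1 + 1 / d)

def leading_digit(x):
    # scientific notation puts the first significant digit up front
    return int(f'{x:.6e}'[0])

# toy heavy-tailed data standing in for e.g. sequence counts per OAS data unit
counts = np.random.default_rng(1).pareto(1.5, size=10_000) * 100 + 1

digits = np.array([leading_digit(c) for c in counts])
observed = np.array([(digits == d).mean() for d in range(1, 10)])
expected = np.array([benford_expected(d) for d in range(1, 10)])

# simple chi-squared-style goodness of fit on the frequencies
chi_sq = np.sum((observed - expected) ** 2 / expected)
print(f'chi-squared = {chi_sq:.4f}')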