The Oxford Tube is a bus service that shuttles people between Oxford and London, taking approximately 1 hour and 30 minutes. I have now taken the bus over 250 times, which amounts to approximately 375 hours, or a fortnight of my life.
In this time spent on the bus, I have discovered some tips and tricks that make the journey ever so slightly more bearable. I shall share them so that others can optimise their experience. Enjoy!
Training a large transformer model can be a multi-day, if not multi-week, ordeal. Especially if you're using cloud compute, this can be a very expensive affair, not to mention the environmental impact. It's therefore worth spending a couple of days trying to optimise your training efficiency before embarking on a large-scale training run. Here, I'll run through three strategies you can take which (hopefully) shouldn't degrade performance, while giving you some free speed. These strategies will also work for any other model that uses linear layers.
I won't go into too much technical detail on any of the techniques, but if you'd like to dig into them further I'd highly recommend the Nvidia Deep Learning Performance Guide.
Training With Mixed Precision
Training with mixed precision can be as simple as adding a few lines of code, depending on your deep learning framework. It also potentially provides the biggest boost to performance of any of these techniques. Training throughput can be increased by up to three-fold with little degradation in performance – and who doesn't like free speed?
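To give a concrete flavour, here is a minimal sketch of a mixed-precision training loop in PyTorch using torch.cuda.amp; the model, data and hyperparameters below are toy placeholders, so swap in your own.

import torch
import torch.nn as nn

# Toy setup purely for illustration; replace with your own model, data and loss.
model = nn.Linear(512, 512).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
batches = [(torch.randn(32, 512), torch.randn(32, 512)) for _ in range(10)]

scaler = torch.cuda.amp.GradScaler()  # scales the loss so small FP16 gradients don't underflow

for inputs, targets in batches:
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()

    # Forward pass under autocast: eligible ops (e.g. the linear layers) run in half
    # precision, while numerically sensitive ops stay in FP32.
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(inputs), targets)

    scaler.scale(loss).backward()  # backward pass on the scaled loss
    scaler.step(optimizer)         # unscales gradients, skips the step if they overflowed
    scaler.update()                # adjusts the scale factor for the next iteration

Other frameworks have equivalent switches, but the idea is the same: run the bulk of the maths in half precision while keeping a safety net for the numerically fragile parts.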
On the western side of Oxfordshire are the Cotswolds, a pleasant hill range with a curious etymology: the hills of the goddess Cuda (maybe, see footnote). Cuda is a powerful yet wrathful goddess, and staying on her good side does feel like druidry. The first druidic test is getting the software to work: the wild magic makes the rules of this test change continually. Therefore, I am writing a summary of what works as of late 2023.
So the servers you use have Slurm as their job scheduler? Blopig has very good resources to help you navigate a Slurm environment.
If you are new to SLURMing, I highly recommend Alissa Hummer's post. There, she explains in detail what you will need to submit, check or cancel a job, and even how to run a job with more than one script in parallel by dividing it into tasks. Her post is so good that by reading it you will also learn how to move files across the servers, create and manage SSH keys, and set up Miniconda and GitHub on a Slurm server.
And Blopig has even more to offer, with Maranga Mokaya's and Oliver Turnbull's posts as nice complements for more advanced use of Slurm. They help with the use of array jobs, more efficient file copying and creating aliases (shortcuts) for frequently used commands.
So… What could I possibly have to add to that?
Well, suppose you are concerned that you or one of your mates might flood the server (not that it has ever happened to me, but just in case).
How would you go about figuring out how many cores are active? How much memory is left? Which GPU does that server use? Fear not, as I have some basic tricks that might help you.
Get information about the servers and nodes:
A pretty straightforward way of getting some basic information on Slurm servers is the command:
sinfo -M ALL
This will give you information on partition names, whether each partition is available, how many nodes it has, its usage state, and a list of those nodes.
CLUSTER: name_of_cluster
PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
low up 7-00:00:00 1 idle node_name.server.address
The -M ALL argument is used to show every cluster. If you know the name of the cluster, you can use:
sinfo -M name_of_cluster
But what if you want to know not only whether it is up and being used, but also how much of its resources are free? Fear not, there is much to learn.
You can use the same sinfo command followed by some format arguments that will give you what you want. And the magic command is:
sinfo -o "%all" -M all
This will show you a lot of information about every partition of every cluster.
So, how can you make it more digestible and filter only the info that you want?
Always start with:
sinfo -M ALL -o "%n"
Inside the quotation marks you should add the information you would like to see. The %n argument serves to show the hostname of every node in each cluster. If you want to know how much free memory there is on each node, you can use:
sinfo -M ALL -o "%n %e"
If you would like to know how the CPUs are being used (how many are allocated, idle, other, and the total), you should use:
sinfo -M ALL -o "%n %e %C"
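As an illustration, the output looks something like this (the hostname and numbers below are made up):

CLUSTER: name_of_cluster
HOSTNAMES FREE_MEM CPUS(A/I/O/T)
node_name.server.address 191000 12/36/0/48

meaning this node has roughly 191 GB of memory free and 12 of its 48 CPUs allocated.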
Well, I could give more and more examples, but it is more efficient to just leave the table of possible arguments here, followed by a combined example. They come from the Slurm documentation.
Argument  What does it do?
%all      Print all fields available for this data type with a vertical bar separating each field.
%a        State/availability of a partition.
%A        Number of nodes by state in the format "allocated/idle". Do not use this with a node state option ("%t" or "%T") or the different node states will be placed on separate lines.
%b        Features currently active on the nodes; also see %f.
%B        The maximum number of CPUs per node available to jobs in the partition.
%c        Number of CPUs per node.
%C        Number of CPUs by state in the format "allocated/idle/other/total". Do not use this with a node state option ("%t" or "%T") or the different node states will be placed on separate lines.
%d        Size of temporary disk space per node, in megabytes.
%D        Number of nodes.
%e        Total memory, in MB, currently free on the node as reported by the OS. This value is for informational use only and is not used for scheduling.
%E        The reason a node is unavailable (down, drained, or draining states).
%f        Features available on the nodes; also see %b.
%F        Number of nodes by state in the format "allocated/idle/other/total". Note that using this with a node state format option ("%t" or "%T") will result in the different node states being reported on separate lines.
%g        Groups which may use the nodes.
%G        Generic resources (gres) associated with the nodes (this is typically where you can see which graphics card a node uses).
%h        Print the OverSubscribe setting for the partition.
%H        Print the timestamp of the reason a node is unavailable.
%i        If a node is in an advanced reservation, print the name of that reservation.
%I        Partition job priority weighting factor.
%l        Maximum time for any job, in the format "days-hours:minutes:seconds".
%L        Default time for any job, in the format "days-hours:minutes:seconds".
%m        Size of memory per node, in megabytes.
%M        PreemptionMode.
%n        List of node hostnames.
%N        List of node names.
%o        List of node communication addresses.
%O        CPU load of a node as reported by the OS.
%p        Partition scheduling tier priority.
%P        Partition name, followed by "*" for the default partition; also see %R.
%r        Only user root may initiate jobs, "yes" or "no".
%R        Partition name; also see %P.
%s        Maximum job size in nodes.
%S        Allowed allocating nodes.
%t        State of nodes, compact form.
%T        State of nodes, extended form.
%u        Print the user name of who set the reason a node is unavailable.
%U        Print the user name and uid of who set the reason a node is unavailable.
%v        Print the version of the running slurmd daemon.
%V        Print the cluster name if running in a federation.
%w        Scheduling weight of the nodes.
%X        Number of sockets per node.
%Y        Number of cores per socket.
%z        Extended processor information: number of sockets, cores, threads (S:C:T) per node.
%Z        Number of threads per core.
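As one suggestion for the combined example, putting a few of these fields together lets you see at a glance which nodes have spare capacity and which GPUs (gres) they carry:

sinfo -M ALL -o "%P %n %t %C %e %G"

For every node in every cluster this prints its partition, hostname, state, CPU usage (allocated/idle/other/total), free memory in MB and its generic resources, which is typically where the GPU type shows up.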
And there you have it! Now you know what is going on in your Slurm clusters and can avoid job-blocking your peers.
If you want to know more about Slurm, keep an eye on Blopig!
Have you ever needed to find a reaction SMARTS pattern for a certain reaction but don't have it already written out? Do you have a reaction SMARTS pattern but need to test it on a set of reactants and products to make sure it transforms them correctly and doesn't let odd reactants react? I recently did, and I spent some time developing functions that can:
Generate a reaction SMARTS for a reaction given two reactants, a product, and a reaction name.
Check the reaction SMARTS on a list of reactants and products that have the same reaction name (a rough sketch of this check is shown below).
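To give an idea of that checking step, here is a rough RDKit sketch using a hypothetical amide-coupling reaction SMARTS and a single made-up reactant/product example; it is not the full functions from this post, just the core calls.

from rdkit import Chem
from rdkit.Chem import AllChem

# Hypothetical amide coupling: carboxylic acid + amine -> amide
rxn_smarts = "[C:1](=[O:2])[OX2H1].[NX3;H2,H1;!$(NC=O):3]>>[C:1](=[O:2])[N:3]"
rxn = AllChem.ReactionFromSmarts(rxn_smarts)

acid = Chem.MolFromSmiles("CC(=O)O")     # acetic acid
amine = Chem.MolFromSmiles("CN")         # methylamine
expected = Chem.CanonSmiles("CNC(C)=O")  # the amide we expect

# RunReactants returns one tuple of products per way the templates matched.
outcomes = set()
for products in rxn.RunReactants((acid, amine)):
    for p in products:
        Chem.SanitizeMol(p)
        outcomes.add(Chem.MolToSmiles(p))

print("expected product recovered:", expected in outcomes)

The real check loops over a whole list of reactant/product records sharing the same reaction name and flags any that fail to transform as expected.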
In an attempt to ease the transition from Word to LaTeX for some of my colleagues (*cough* Alex *cough*), this blog post covers some of the LaTeX tricks I use most frequently when preparing manuscripts. It's pitched at someone who is already familiar with the basic syntax of paragraphs, figures and tables.
Pandas is one of the most used packages for data analysis in Python. The library provides functionality that allows you to perform complex data manipulation operations in a few lines of code. However, as the number of functions provided is huge, it is impossible to keep track of all of them. More often than we'd like to admit, we end up writing lines and lines of code only to discover later that the same operation can be performed with a single pandas function.
To help avoid this problem in the future, I will run through some of my favourite pandas functions and demonstrate their use on an example data set containing information on crystal structures in the PDB.
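To give a flavour of the kind of one-liners I mean, here is a tiny hypothetical example; the column names and values are made up and are not the data set used in the post.

import pandas as pd

# Hypothetical PDB-style table; the post uses its own, larger data set.
df = pd.DataFrame({
    "pdb_id":     ["1ABC", "2XYZ", "3DEF", "4GHI"],
    "method":     ["X-RAY", "X-RAY", "NMR", "CRYO-EM"],
    "resolution": [1.8, 2.3, None, 3.1],
})

# One-liners that often replace hand-written loops:
print(df["method"].value_counts())                # count structures per method
print(df.groupby("method")["resolution"].mean())  # mean resolution per method
print(df.query("resolution < 2.5"))               # filter with a readable expression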
During the lead optimisation stage of the drug discovery pipeline, we might wish to make mutations to an initially identified binding antibody to improve properties such as developability, immunogenicity, and affinity.
There are many ways we could go about suggesting these mutations, including using Large Language Models (e.g. ESM and AbLang) or inverse folding methods (e.g. ProteinMPNN and AntiFold). However, some of our recent work (soon to be pre-printed) has shown that classical non-Machine-Learning approaches, such as BLOSUM, could also be worth considering at this stage.
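As a toy illustration of the classical idea (and not the method from the upcoming preprint), one can rank candidate substitutions at a single, hypothetical position by their BLOSUM62 scores using Biopython:

from Bio.Align import substitution_matrices

# Toy example: rank substitutions for one hypothetical wild-type residue by BLOSUM62 score.
blosum62 = substitution_matrices.load("BLOSUM62")
wild_type = "S"  # made-up wild-type residue for illustration
amino_acids = "ACDEFGHIKLMNPQRSTVWY"

scores = {aa: blosum62[wild_type, aa] for aa in amino_acids if aa != wild_type}
for aa, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{wild_type} -> {aa}: {score}")

Higher-scoring substitutions are those the matrix considers more conservative, which is one simple way to shortlist mutations before more expensive modelling.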
Over the last few months my bicycle's steering axle started freezing up, to the point where the first thing I did before getting on my bike in the morning was jerk the handlebars from side to side aggressively to loosen it up. It made atrocious guttural sounds and bangs when I did, and navigating Oxford by bike was becoming more treacherous by the day as I swerved from left to right trying to wrestle my front wheel's fork in the right direction. It was time to undertake some DIY…
Fragmenstein is a Python module that combines hits, or places a derivative compound following given templates, while being very strict in obeying them. It does this by creating a "monster", a compound that has the atomic positions of the templates, which is then reanimated by very strict energy minimisation. This is done in two steps: first in RDKit, with an extracted frozen neighbourhood, and then in PyRosetta, within a flexible protein. The mapping for both combinations and placements is complicated, but I will focus here on one particular step, the minimisation, primarily in answer to an enquiry: how does the RDKit minimisation work?
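Fragmenstein's own code is considerably more involved, but to give a flavour of what a restrained minimisation in RDKit looks like in general, here is a generic sketch on a toy molecule with arbitrarily chosen restrained atoms; it is not Fragmenstein's implementation.

from rdkit import Chem
from rdkit.Chem import AllChem

# Generic sketch of a restrained MMFF minimisation in RDKit.
mol = Chem.AddHs(Chem.MolFromSmiles("c1ccccc1CCO"))
AllChem.EmbedMolecule(mol, randomSeed=0xf00d)

props = AllChem.MMFFGetMoleculeProperties(mol)
ff = AllChem.MMFFGetMoleculeForceField(mol, props)

# Pretend atoms 0-5 (the ring) came from a template hit: restrain them near their
# current coordinates with a flat-bottomed positional restraint.
for idx in range(6):
    ff.MMFFAddPositionConstraint(idx, 0.1, 100.0)  # max displacement (Å), force constant

ff.Initialize()
converged = ff.Minimize()  # returns 0 on convergence
print("converged:", converged == 0)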