Through our work in OPIG, many of our projects take the form of code bases written in Python: databases, machine learning models, and other software tools. Often, the user interface for these tools is developed as both a web app and a command-line application. Here, I will discuss one of my favourite tools for testing command-line applications: prysk!
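To give a flavour of what this looks like: a prysk test is a plain-text .t file in which indented lines starting with $ are shell commands, and the indented lines that follow are the expected output. A minimal sketch (hello.t is a hypothetical file name):

  $ echo "Hello, OPIG"
  Hello, OPIG

Running prysk hello.t executes the command and reports a diff if the actual output deviates from what the file records.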
Aider and Cheap, Free, and Local LLMs
Aider and the Future of Coding: Open-Source, Affordable, and Local LLMs
The landscape of AI coding is rapidly evolving, with tools like Cursor gaining popularity for multi-file editing and Copilot for AI-assisted autocomplete. However, both solutions are closed-source and require a subscription.
This blog post will explore Aider, an open-source AI coding tool that offers flexibility, cost-effectiveness, and impressive performance, especially when paired with affordable or free LLMs like DeepSeek and Google Gemini, or local models served through Ollama.
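For a taste of the workflow, pointing Aider at one of these models is a one-liner once it is installed (a minimal sketch: the API key and file name are placeholders, and flags can vary between Aider releases):

pip install aider-chat
export DEEPSEEK_API_KEY=your-key-here               # placeholder key
aider --model deepseek/deepseek-chat myscript.py    # myscript.py is hypothetical

From there you describe the change you want in plain English, and Aider edits the file and commits the result to git.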
Converting or renaming files, whilst still maintaining the directory structure
For various reasons we might need to convert files from one format to another, for instance from lossless FLAC to MP3:
ffmpeg -i lossless-audio.flac -acodec libmp3lame -ab 128k compressed-audio.mp3
This could be any conversion, but it implies that the input file and the output file are in the same directory. What if we have a carefully curated directory structure and we want to convert (or rename) every file within that structure?
find . -name "*.whateveryouneed" -exec somecommand {} \; is the tool for you.
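For instance, the FLAC-to-MP3 conversion above can be applied across a whole tree while leaving every output file next to its source (a minimal sketch, reusing the bitrate and extension from the example above):

# Convert every FLAC file under the current directory to MP3, writing
# each MP3 into the same directory as its source file
find . -name "*.flac" -exec sh -c '
  for f in "$@"; do
    ffmpeg -i "$f" -acodec libmp3lame -ab 128k "${f%.flac}.mp3"
  done
' sh {} +

The ${f%.flac}.mp3 expansion swaps the extension while keeping the full path, so the directory structure is preserved.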
Mounting a remote file system with SSHFS
If you’re working with data stored on a remote server, you might not want to (or even have the space to) copy data to your local file system when you work on it. Instead, we can use SSHFS to mount a remote file system via SSH, allowing us to read and write data on the remote file system without manually copying files.
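In its simplest form this is a one-liner (a minimal sketch; the host, remote path, and mount point are placeholders):

# Mount /data on the remote server at ~/remote_data locally
mkdir -p ~/remote_data
sshfs user@server.example.com:/data ~/remote_data

# ... read and write files under ~/remote_data as if they were local ...

# Unmount when finished (use umount ~/remote_data on macOS)
fusermount -u ~/remote_data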
Dockerized Colabfold for large-scale batch predictions
AlphaFold is great; however, it is not suited to large batch predictions, for two main reasons. Firstly, there is no native functionality for predicting structures from multiple FASTA sequences (although a custom batch prediction script can be written fairly easily). Secondly, the multiple sequence alignment (MSA) step is heavy, and running MSAs for, say, 10,000 sequences at a tractable speed requires some serious hardware.
Fortunately, an alternative to AlphaFold has been released and is now widely used: ColabFold. For many, ColabFold's primary strength is being cloud-based: prediction requests can be submitted on Google Colab, making it extremely user-friendly by avoiding local installations. However, I would argue the greatest value ColabFold brings is a massive MSA speed-up (40-60 fold) by replacing HHblits and BLAST with MMseqs2. This, together with the fact that batches of sequences can be processed natively, makes it a realistic option for predicting thousands of structures (this could still take days on a pair of V100s depending on sequence length etc., but it's workable).
In my opinion, the cleanest local installation and simplest usage of ColabFold is via Docker containers, for which both a Dockerfile and a pre-built Docker image have been released. Unfortunately, the Docker image does not come packaged with the necessary setup_databases.sh script, which is required to build a local sequence database. By default, MSAs are run on the ColabFold public server, which is a shared resource and can only process a total of a few thousand MSAs per day.
The following accordingly outlines the preparatory steps for 100% local batch predictions. (Setting up the database can in theory be done in one line via a mount, but I was getting a strange wget permissions error, so I have broken it up to first fetch the script locally.)
Pull the relevant ColabFold Docker image from the container registry:
docker pull ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2
Create a cache to store weights:
mkdir cache
Download the model weights:
docker run -ti --rm -v path/to/cache:/cache ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2 python -m colabfold.download
Fetch the setup_databases.sh script (note: use the raw file URL; the GitHub blob URL returns an HTML page rather than the script itself):
wget https://raw.githubusercontent.com/sokrypton/ColabFold/main/setup_databases.sh
Spin up a container. A container exits as soon as its main command finishes, so we need to be a bit hacky and keep it alive with a never-ending command:
CONTAINER_ID=$(docker run -d ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2 /bin/bash -c "tail -f /dev/null")
Copy the setup_databases.sh script to the relevant path in the container and create a databases directory:
docker cp ./setup_databases.sh $CONTAINER_ID:/usr/local/envs/colabfold/bin/
docker exec $CONTAINER_ID mkdir /databases
Run the setup script. This will download and prepare the databases (~2TB once extracted):
docker exec $CONTAINER_ID /usr/local/envs/colabfold/bin/setup_databases.sh /databases/
Copy the databases back to the host and clean up:
docker cp $CONTAINER_ID:/databases ./
docker stop $CONTAINER_ID
docker rm $CONTAINER_ID
You should now be at a stage where batch predictions can be run; I have provided a template script below (it takes a FASTA file containing multiple sequences). It's worth noting that maximum search speeds can be achieved by loading the database into memory and pre-indexing, but this requires about 1 TB of RAM, which I don't have.
There are two key processes that I prefer to log separately: colabfold_search and colabfold_batch.
#!/bin/bash
# Define the paths for database, input FASTA, and outputs
db_path="path/to/database"
input_fasta="path/to/fasta/file.fasta"
output_path="path/to/output/directory"
log_path="path/to/logs/directory"
cache_path="path/to/weights/cache"
# Run Docker container to execute colabfold_search and colabfold_batch
time docker run --gpus all \
    -v "${db_path}:/database" \
    -v "${input_fasta}:/input.fasta" \
    -v "${output_path}:/predictions" \
    -v "${log_path}:/logs" \
    -v "${cache_path}:/cache" \
    ghcr.io/sokrypton/colabfold:1.5.5-cuda12.2.2 \
    /bin/bash -c "colabfold_search --mmseqs /usr/local/envs/colabfold/bin/mmseqs /input.fasta /database msas > /logs/search.log 2>&1 && colabfold_batch msas /predictions > /logs/batch.log 2>&1"
Tips and Tricks to correct a CUDA Toolkit installation in Conda
On the western side of Oxfordshire are the Cotswolds, a pleasant hill range with a curious etymology: the hills of the goddess Cuda (maybe, see footnote). Cuda is a powerful yet wrathful goddess, and staying on her good side does feel like druidry. The first druidic test is getting software to work: the wild magic makes the rules of this test change continually. Therefore, I am writing a summary of what works as of late 2023.
SSH, the boss-fight level: Jupyter notebooks from compute nodes
Secure shell (SSH) is an essential tool for remote operations. However, not everything about it is smooth sailing, especially when you want to do things like reverse-port-forward a Jupyter notebook to your local machine from a compute node on a no-home container, via a proxy-jump or two. Even if it sounds less plausible than the exploits on Mr Robot, it actually can work, and it requires zero social engineering or sneaking into server rooms to install Raspberry Pis while using a baseball cap as a disguise.
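The core trick is chaining a local port forward through an intermediate host (a minimal sketch; the host names and port 8888 are placeholders, and this ignores the no-home container complications covered in the post):

# Forward port 8888 on a compute node to localhost:8888, jumping
# through the cluster's login node; -N opens no remote shell
ssh -N -L 8888:localhost:8888 -J user@login.cluster.example.com user@compute-node

# Then browse to http://localhost:8888 and paste the notebook token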
The dangers of Conda-Pack and OpenMM
If you are running lots of little jobs in SLURM and want to make use of free nodes that suddenly become available, it is helpful to have a way of rapidly shipping your environments that does not rely on installing conda or rebuilding the environment from scratch every time. This is useful with complex rebuilds where exported .yml files do not always work as expected, even when specifying exact versions and source locations.
In these situations a tool such as conda-pack becomes incredibly useful. Once you have perfected the house of cards that is your conda environment, you can use conda-pack to save that exact state as a tar.gz file.
conda-pack -n my_precious_env -o my_precious_env.tar.gz
This can provide you with a backup to be used when you accidentally delete conda from your system, or if you irreparably corrupt the environment and cannot roll back to the point in time when everything worked. These tar.gz files can also be copied to distant locations with rsync or scp, then unpacked, sourced, and used without installing conda…
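The unpacking end looks roughly like this (a minimal sketch; the host name and /scratch paths are placeholders):

# Ship the archive and unpack it on the remote machine
scp my_precious_env.tar.gz user@cluster.example.com:/scratch/envs/
ssh user@cluster.example.com
mkdir -p /scratch/envs/my_precious_env
tar -xzf /scratch/envs/my_precious_env.tar.gz -C /scratch/envs/my_precious_env

# Activate without conda, then fix up the hard-coded path prefixes
source /scratch/envs/my_precious_env/bin/activate
conda-unpack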
Pairwise sequence identity and Tanimoto similarity in PDBbind
In this post I will cover how to calculate the sequence identity and Tanimoto similarity between any pair of complexes in PDBbind 2020. I used RDKit in Python for the Tanimoto similarity and the MMseqs2 software for the sequence identity calculations.
A few weeks back I wanted to cluster the protein-ligand complexes in PDBbind 2020, but to achieve this I first needed to precompute the sequence identity between all pairs of sequences in PDBbind, and the Tanimoto similarity between all pairs of ligands. PDBbind 2020 includes 19,443 complexes, but there are far fewer distinct ligands and proteins than that. However, I kept things simple and calculated the similarities for all 19,443 × 19,443 pairs. Calculating the Tanimoto similarity is relatively easy thanks to the BulkTanimotoSimilarity function in RDKit. The following code should do the trick:
from rdkit.Chem import AllChem, MolFromMol2File
from rdkit.DataStructs import BulkTanimotoSimilarity
import numpy as np
import os

# Compute a Morgan fingerprint (radius 3) for each ligand
fps = []
for pdb in pdbs:
    mol = MolFromMol2File(os.path.join('data', pdb, f'{pdb}_ligand.mol2'))
    fps.append(AllChem.GetMorganFingerprint(mol, 3))

# Compare each fingerprint against all fingerprints in one call
sims = []
for i in range(len(fps)):
    sims.append(BulkTanimotoSimilarity(fps[i], fps))

# Save the full similarity matrix
arr = np.array(sims)
np.savez_compressed('data/tanimoto_similarity.npz', arr)
Sequence identity calculations in Python with Biopandas turned out to be too slow for this amount of data, so I used the ultra-fast MMseqs2. The first step in running MMseqs2 is to create a .fasta file of all the sequences, which I call QUERY.fasta. This is what the first few lines look like:
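A protein FASTA file of this kind simply alternates a > header line with the sequence itself (an illustrative sketch only; the identifiers and residues below are invented, not the actual PDBbind entries):

>hypothetical_entry_1 illustrative only
MKVLAAGIVALLAWTVQA
>hypothetical_entry_2 illustrative only
STEYKLVVVGAGGVGKSA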
Streamlining Your Terminal Commands With Custom Bash Functions and Aliases
If you’ve ever found yourself typing out the same long commands over and over again, or if you’ve ever wished you could teleport directly to your favourite directories, then this post is for you.
Before we jump into some useful examples, let’s go over what bash functions and aliases are, and how to set them up.
Bash Functions vs Aliases
A bash function is like a mini script stored in your .bashrc or .bash_profile file. It can accept arguments, execute a series of commands, and even return a value.
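As a quick sketch of both (the names ll and mkcd are just illustrative choices), you might add the following to your .bashrc:

# Alias: a simple shorthand, expanded verbatim before the command runs
alias ll='ls -lah'

# Function: takes arguments -- create a directory and cd into it
mkcd() {
    mkdir -p "$1" && cd "$1" || return
}

After a source ~/.bashrc, ll lists files in long format, and mkcd new/project/dir creates the nested path and jumps straight into it.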