Revealing Nature’s Quantum Compass – Kickoff Day

Yesterday marked the kickoff for the BBSRC-funded Strategic Longer and Larger (sLoLa) scheme “Revealing Nature’s Quantum Compass”1. The sLoLa grants are a laudable endeavor by the UK government to fund “ambitious research projects that will deepen our understanding of life’s most fundamental processes”. It is wonderful to see the UK government taking blue-sky basic research seriously, appreciating that asking deep questions is what drives scientific progress, often leading to unexpected breakthroughs with applications down the line.

At the kickoff event, principal investigators presented what their research can bring to the table. Much like entering a bakery2 where everything smells delicious and it seems impossible to choose, an overwhelming range of experimental and computational techniques were presented, each bringing its own unique approach to bear on the outstanding problem: mechanistically, how is it that birds (and other animals) can navigate distances of up to thousands of kilometers using the Earth’s magnetic field? Alongside this, my own group is interested in how we can develop biotechnologies that take advantage of magnetic-field-sensitive biochemistry, which has a host of applications in both the near and the long term.

The challenge of linking the biochemistry of a single protein known to be magnetic field sensitive to a behavioral phenotype will require a highly interdisciplinary approach, and excitingly for this community, machine learning is involved from the start. Prof. Degiacomi, a member of the core team, presented how his lab is developing ML techniques to reduce the computational burden of linking experimental results to protein dynamics informed by molecular dynamics simulation. On the flip side, I hope such techniques will develop into methods we can use for design. As with enzymes, the proteins we are interested in have functions that depend on mechanisms far more complex than structure and binding alone (not to trivialize either of these!). Magnetic field sensing in this context depends on creating an environment in which quantum entanglement can exist, and on being able to transduce the state of this entanglement into a biological signal – thus far, this second step in particular has remained highly elusive.

Ultimately, the day concluded with much enthusiasm and excitement for all that is to come. Watch this space!

  1. https://www.ox.ac.uk/news/2025-11-19-new-project-aims-reveal-nature-s-quantum-compass ↩︎
  2. Yes, I just returned from a symposium in Germany ↩︎

Three Resources I Keep Coming Back to for Learning Deep Learning

There is no shortage of AI content online, but over time I have found myself returning to the same handful of resources, and I wanted to share the three that have helped me the most.

AI Summer

This one I would recommend to anyone who is earlier in their journey. AI Summer at theaisummer.com is a free platform run by Sergios Karagiannakos and Nikolas Adaloglou, and it covers everything from the basics of neural networks through to building and deploying real ML systems. The tone is friendly and practical, and there are proper code examples throughout. It is one of those rare resources that manages to be beginner-friendly without feeling watered down.

Continue reading

What I wish I knew before applying and moving to Oxford from the US

The first time I ever visited the UK was when I moved to Oxford for my PhD (or DPhil in Oxford speak). I was nervous, excited, and thought I could assimilate easily after growing up watching Sherlock, Midsomer Murders, and Doc Martin. After all, my native language is English; how different could the UK really be? Oh, how wrong I was.

Continue reading

A Golden Age of Nanomedicine

As someone who spent their entire academic career, from B.Sc. to M.Sc. to Ph.D., within a Kavli Institute for Nanoscience Discovery (first in Delft and now in Oxford), I’ve had the privilege of seeing firsthand just how beautifully intricate the nanoscale world can be. Now, as my research focuses on lipid nanoparticles for genetic therapeutics and vaccines, I would like to use this platform to advocate for what I believe is one of the most transformative frontiers in modern medicine: the rational design of nanomaterials for therapeutic delivery.

Continue reading

A first for PROTACs

Last week marked a major milestone in small-molecule drug discovery with the first FDA approval of a proteolysis targeting chimera (PROTAC). After a modest but successful phase 3 clinical trial demonstrated a 2.9-month improvement in median progression-free survival1 for a type of advanced breast cancer1, the FDA has approved Veppanu (vepdegestrant), co-developed by Arvinas and Pfizer, as the first PROTAC protein degrader therapy2. So what is a PROTAC?

Continue reading

Curiosity might not kill the cat

Unlike most members of OPIG, I don’t work on small molecules, antibodies, or protein structure; I use hypergraph representations of protein complexes to predict gene essentiality and drug targets. I have also had an unconventional route to get here, and on the way, discovered my love for learning and research.

Friends and family had noticed I jumped around with my interests, so much so that when we used to meet up, they took great delight in teasing me about what my current adventure was – ‘you don’t settle do you!’, ‘when are you going to find what you’re looking for?’, ‘why can’t you just stick to something’. Looking back, there was a pattern; I just couldn’t see it yet.

Continue reading

Speeding up python through profiling

Python is a shockingly slow language. A test on a Raspberry Pi of simply “turning a pin on and off as fast as you can” gave the results below.

| System | Library               | Speed         |
|--------|-----------------------|---------------|
| Shell  | /proc/mem access      | 2.8 kHz       |
| Shell  | WiringPi gpio utility | 40 Hz         |
| Python | RPi.GPIO              | 70 kHz        |
| Python | wiringPi2 bindings    | 28 kHz        |
| Ruby   | wiringPi bindings     | 21 kHz        |
| C      | Native library        | 22 MHz        |
| C      | BCM 2835              | 5.4 MHz       |
| C      | wiringPi              | 4.1 – 4.6 MHz |
| Perl   | BCM 2835              | 48 kHz        |
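To see where the time actually goes in your own code, Python’s built-in cProfile and pstats modules are a natural first step. Below is a minimal, self-contained sketch; the `slow_sum` function is just an illustrative stand-in for whatever hot loop you want to inspect.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately naive accumulation, to give the profiler something to find
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Summarize the profile, sorted by cumulative time, keeping the top 5 entries
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The report lists, per function, how many times it was called and how much time was spent in it, which is usually enough to decide which loop is worth rewriting (or pushing into a C extension, as the table above suggests).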
Continue reading

Pitfalls of AI-Generated Reviews: Case Study of a Frontiers in Microbiology Review on Anti-Influenza A bnAbs

In the last five or so years, large language models (LLMs) have transformed from a novel regurgitator of haphazardly stitched together sentences to an almost ‘human’ personality standing by our side as we tackle life. Whilst the perceived humanity of these models is the topic for perhaps a future blogpost, it is almost impossible to overstate the impact of LLMs on our daily lives. Do you need someone to proofread an essay you’ve spent hours drafting? GPT (or one of its many counterparts) has you covered. Need help drafting an email from scratch? No problem. Want to write and/or heavily edit an entire academic article which would typically require days, if not weeks, of research? Surely just needs a push of a button… right?

Despite tremendous advances in LLMs, key issues mean they are not yet a fully dependable addition to our writing endeavours. They are known to fail when asked to generate new content from only a basic prompt, and some of these failures have made headlines1. Among the scariest instances are hallucinations2–4: the phenomenon whereby AI tools generate convincing information that is factually inaccurate or simply fabricated2. In Belgium, the Ghent University rector came under fire for citing quotes, supposedly from influential thinkers, which were later found to be AI hallucinations1.
Whilst there are numerous examples of poorly cited and often AI-hallucinated papers falling through the cracks of the peer-review process, today we focus on a Frontiers in Microbiology review titled ‘Broadly neutralizing monoclonal antibodies against influenza A viruses: current insights and future directions’5. The paper attempts to provide an overview of the current landscape of monoclonal antibodies (mAbs) being developed to confer protection against influenza A, highlighting ‘technological advances, clinical performance, and scalability’. It bears many of the hallmarks of text created or edited with generative AI, despite its generative AI statement declaring: ‘The author(s) declared that Generative AI was not used in the creation of this manuscript.’

Continue reading

Analyzing AlphaFold 3’s Diffusion Trajectory

A useful way to understand AlphaFold 3’s sampling behavior is to look not only at the final predicted structure, but at what happens along the reverse diffusion trajectory itself. If we track quantities such as the physical energy of samples, noise scale, and update magnitude over time, a very clear pattern emerges: structures remain physically imperfect for most of sampling, and only take proper global shape in the final low-noise steps.

This behavior is a result of the diffusion procedure implemented in Algorithm 18, Sample Diffusion, which follows an EDM-style sampler with churn. Rather than marching monotonically from noise to structure, the sampler repeatedly perturbs the current coordinates, denoises them, and then takes a Euler-like update step. Because of the churn mechanism, AlphaFold 3 deliberately injects additional noise during part of the trajectory, which encourages exploration but also delays local geometric convergence. This mechanism appears in steps 4–7 of the Sample Diffusion algorithm in the AlphaFold 3 Supplementary Information.
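To make the churn/denoise/update loop concrete, here is a toy NumPy sketch of an EDM-style sampler with churn. Everything here (the `sample_with_churn` name, the `gamma` churn factor, the stand-in denoiser) is an illustrative assumption, not AlphaFold 3’s actual implementation or parameterization.

```python
import numpy as np

def sample_with_churn(denoise, x, sigmas, gamma=0.8, rng=None):
    """Toy EDM-style sampler with churn: at each step, re-inject noise
    (churn), denoise at the elevated noise level, then take a Euler-like
    update toward the next (lower) noise level.

    `denoise(x, sigma)` is assumed to return an estimate of the clean
    coordinates; here it is a stand-in for the trained network."""
    rng = rng or np.random.default_rng(0)
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        # 1) Churn: raise the noise level from sigma to sigma_hat
        sigma_hat = sigma * (1 + gamma)
        x = x + np.sqrt(sigma_hat**2 - sigma**2) * rng.standard_normal(x.shape)
        # 2) Denoise the perturbed coordinates
        x0_hat = denoise(x, sigma_hat)
        # 3) Euler-like step along the score-like direction
        d = (x - x0_hat) / sigma_hat
        x = x + (sigma_next - sigma_hat) * d
    return x

# Toy usage: the "clean structure" is the origin, and an idealized
# denoiser simply returns it regardless of the noisy input.
target = np.zeros(3)
denoiser = lambda x, sigma: target
sigmas = np.append(np.geomspace(10.0, 1e-3, 50), 0.0)
x_init = 10.0 * np.random.default_rng(1).standard_normal(3)
x_final = sample_with_churn(denoiser, x_init, sigmas)
```

Even in this toy setting, the pattern from the post is visible: because churn keeps re-injecting noise, the coordinates stay perturbed for most of the schedule and only collapse onto the target in the final low-noise steps.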

Continue reading

No Pretraining, No Equivariant Architecture – Learning MLIPs without Explicit Equivariance

Paper
🤗 TransIP-L checkpoint
Code

Machine-learned interatomic potentials (MLIPs) have become a cornerstone of modern computational chemistry, enabling simulations that approach quantum accuracy at a fraction of the cost of traditional methods such as density functional theory (DFT). However, a central challenge in designing MLIPs lies in respecting the fundamental symmetries of molecular systems, especially rotational and translational invariance, while maintaining scalability and flexibility.

In our recent work, we introduced TransIP, a novel framework that rethinks how symmetry is incorporated into molecular models: rather than hard-coding equivariance into the neural network architecture, symmetry is learned directly in the latent space of an atomic transformer model in which atoms are treated as tokens.

At the core of TransIP is a simple yet powerful idea: instead of enforcing SO(3) equivariance through specialized layers, the model is trained with a contrastive objective that aligns representations of rotated molecular configurations. A learned transformation network maps latent embeddings under rotations, encouraging the model to discover symmetry-consistent representations implicitly. This design preserves the flexibility and scalability of standard Transformers while still capturing the geometric structure of molecular systems.
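As a rough illustration of the idea (not the actual TransIP model or objective), the sketch below substitutes a toy linear “encoder” for the atomic transformer and shows only the positive-pair alignment term of a contrastive loss: a learned latent map `T` should carry the embedding of a rotated molecule onto the embedding of the original pose. All function names and shapes are hypothetical, and the negative-pair terms of a full contrastive objective are omitted.

```python
import numpy as np

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix gives a random orthogonal matrix
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))   # standard sign fix for uniformity
    if np.linalg.det(q) < 0:   # ensure det = +1, i.e. SO(3) not O(3)
        q[:, 0] *= -1
    return q

def encode(coords, W):
    # Stand-in "encoder": one linear layer pooled over atoms.
    # The real model would be a transformer over atom tokens.
    return np.tanh(coords @ W).mean(axis=0)

def alignment_loss(coords, W, T, rng):
    """Positive-pair alignment: the embedding of a rotated molecule,
    mapped through the learned latent transformation T, should match
    the embedding of the original pose."""
    R = random_rotation(rng)
    z = encode(coords, W)              # embedding of the original pose
    z_rot = encode(coords @ R.T, W)    # embedding of the rotated pose
    return np.sum((T @ z_rot - z) ** 2)

rng = np.random.default_rng(0)
coords = rng.standard_normal((5, 3))     # toy 5-atom molecule
W = rng.standard_normal((3, 8)) * 0.1    # encoder weights
T = np.eye(8)                            # learned latent map (identity here)
loss = alignment_loss(coords, W, T, rng)
```

In training, `W` and `T` would be optimized jointly so that this loss is driven to zero across random rotations, which is what pushes the encoder toward symmetry-consistent representations without any equivariant layers.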

Continue reading