Cosmology via Structural Biology and Half-Lives of Teaspoons: Bizarre Papers from Around the Internet

I don’t know if anyone out there shares this peculiar hobby of mine (God, I hope not!), but I often find myself scouring the depths of the internet for some truly bizarre academic papers. Though there is an endless supply of such content to keep one entertained (read: distract yourself during those afternoons you planned to be productive but ended up succumbing to the lunch food coma), I’ve managed to compile a short list of the most fascinating ones for your enjoyment!

  • The case of the disappearing teaspoons: longitudinal cohort study of the displacement of teaspoons in an Australian research institute (Lim et al., 2005, BMJ, doi: https://doi.org/10.1136/bmj.331.7531.1498)

This fantastic and robust study examines the enigmatic phenomenon of disappearing teaspoons in a shared kitchen, an issue of acute importance to all of us who rely on these tiny utensils. The authors reveal the truth about teaspoons’ shockingly short half-life in research institutes. The question remains: does this phenomenon extend to other cutlery as well?

Continue reading

What can you do with the OPIG Immunoinformatics Suite? v3.0

OPIG’s growing immunoinformatics team continues to develop and openly distribute a wide variety of databases and software packages for antibody/nanobody/T-cell receptor analysis. Below is a summary of all the latest updates (follows on from v1.0 and v2.0).

Continue reading

A match made in heaven: academic writing with LaTeX and Git

Alternative titles:

  • A match made in ~~heaven~~ hell: academic writing with LaTeX and Git
  • Procrastinating writing by over-engineering my workflow

If you are like me, you can happily write code for hours on end, but as soon as you need to write a paper you end up staring at a blank page. Luckily, I have come up with a foolproof way to trick myself into thinking I am coding when in reality I am finally getting around to writing up the work my supervisor has been wanting for the last month. Introducing LaTeX and Git: this was my approach to drafting a review paper recently, and in this blopig post I will go through some of the ups and downs I had using these tools.

Continue reading

“The Rise of ChatGPT 4.0: Is the Future of Work in Jeopardy?”

In my previous blog post, I explored the capabilities of ChatGPT 3.5, testing its skills as a programmer and mathematician’s assistant. The results were mixed, to say the least. While it could handle simple coding tasks with ease, it faltered when faced with more complex mathematical problems and image manipulation tasks. I concluded that while ChatGPT 3.5 was impressive, it was far from replacing us in our jobs. It was a tool, yes, but one that needed careful supervision and frequent corrections.

Fast forward to today, and we have ChatGPT 4.0 with code interpreter enabled. I approached this new version with the same scepticism I had for its predecessor, but I must admit, my initial impressions have been challenged. The leap from 3.5 to 4.0 is not just a step forward; it’s a giant leap. You can see my experiment here: https://chat.openai.com/share/0099a12f-e577-44bd-aa10-7c7508f15f19.

In my previous experiment, I asked ChatGPT 3.5 to translate a 2D image, a task that combines both programming and a bit of maths. The results were less than satisfactory. It was able to write code to translate images in real space, a task whose solution can easily be found on the web. However, it struggled to handle images represented in Fourier space correctly, and even with multiple hints, the best it could do was a rough approximation of the correct solution. It took six attempts to arrive at automatically generated code, and even then, it was far from perfect.

With ChatGPT 4.0, however, the story is different. When asked to perform a similar task, it provided a clear, concise, and correct solution. It understood the concept of image translation in the context of computer vision and image processing, and it was able to generate Python code using the scikit-image package to perform the task. The code was not only syntactically correct but also logically sound, demonstrating a significant improvement in understanding and problem-solving capabilities.

"""
#Me: Can you now provide some python code to translate (pixel shifts) images? And can you execute it?
#ChatGPT: Here's a simple Python example using scikit-image to translate an image:
"""

import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage.transform import AffineTransform, warp

# Load an example image
image = data.camera()

# Define a translation transform
tform = AffineTransform(translation=(50, 50))

# Apply the transform to the image
translated_image = warp(image, tform.inverse)

# Plot the original and translated images
fig, ax = plt.subplots(1, 2)
ax[0].imshow(image, cmap='gray')
ax[0].set_title('Original Image')
ax[1].imshow(translated_image, cmap='gray')
ax[1].set_title('Translated Image')
plt.show()

"""
#ChatGPT:
In this code, AffineTransform is used to define a translation of 50 pixels to the right and 50 pixels down, and warp is used to apply this transform to the image. The result is an image that has been shifted.
However, I must clarify that the current environment doesn't support displaying plots, so the matplotlib part of the code won't work here. You should be able to run it in a Jupyter notebook or similar environment on your local machine. Let me know if you need further help or clarification!
"""


"""
#Me: Perform the same operation as before, but with an image represented in Fourier space.
#ChatGPT:
Sure! However, we need to keep in mind that translating an image in the Fourier domain is not the same as translating an image in the spatial domain. In the Fourier domain, translations are performed by multiplying the image by a complex exponential, which essentially changes the phase of the Fourier transform.

Here is a Python code snippet demonstrating translation in the Fourier domain:


"""

import numpy as np
from skimage import data
from scipy.fft import fft2, ifft2, fftshift, ifftshift

# Load an example image
image = data.camera()

# Compute the 2-dimensional FFT of the image
image_fft = fft2(image)

# Define a translation
dy, dx = 30, 30

# Generate an array with the same size as the image
x = np.arange(0, image.shape[1])
y = np.arange(0, image.shape[0])
xv, yv = np.meshgrid(x, y)

# Create the translation function in Fourier space
translation = np.exp(-2j * np.pi * (dx * xv / image.shape[1] + dy * yv / image.shape[0]))

# Translate the image in Fourier space
translated_image_fft = image_fft * translation

# Transform back to spatial domain
translated_image = np.abs(ifft2(translated_image_fft))

# Now you could plot the images
"""
#ChatGPT:
This code first computes the Fourier transform of the image using fft2. Then it generates a complex exponential function that represents a shift in the Fourier domain. This function is multiplied with the Fourier-transformed image to achieve the translation. Finally, the inverse Fourier transform ifft2 is used to transform the image back to the spatial domain.

As before, the current environment doesn't support displaying plots, so you would need to run this on your local machine to see the translated image.
"""

Another test I revisited was a mathematical problem I had posed to ChatGPT 3.5: “What is the intersection between the unit circle and the curve y = x^2?” The previous version had failed to provide a correct answer. However, ChatGPT 4.0 not only understood the problem but also wrote its own code to solve it, correctly identifying the intersection points. This was a clear demonstration of the AI’s improved mathematical problem-solving capabilities. It’s important to note that only the premium version of ChatGPT 4.0 can run the code it suggests, as it did in the example of the intersection of the unit circle and the parabola.

  • ME: Can you tell me which is the intersection between the unit circle and the curve “y= x*x”?
  • ChatGPT: The unit circle is defined by the equation x² + y² = 1. The curve y = x² is a parabola. To find their intersection, we can substitute y = x² into the equation of the circle and solve for x. So we have x² + (x²)² = 1, i.e. x² + x⁴ = 1. This is a quartic equation and can be difficult to solve, but let’s give it a try. [AND IT STARTS WRITING AND EXECUTING CODE]
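
For completeness, the quartic reduces to a quadratic in u = x², so the intersection can be verified in a few lines of plain Python (my own sketch, not the chat’s code):

```python
import math

# Substituting y = x^2 into x^2 + y^2 = 1 gives x^4 + x^2 - 1 = 0.
# With u = x^2 this is u^2 + u - 1 = 0, whose non-negative root is:
u = (math.sqrt(5) - 1) / 2          # y-coordinate of both intersections
x = math.sqrt(u)

# The two intersection points, symmetric about the y-axis
points = [(x, u), (-x, u)]

for px, py in points:
    assert abs(px**2 + py**2 - 1) < 1e-12  # lies on the unit circle
    assert abs(py - px**2) < 1e-12         # lies on the parabola

print(points)  # approximately (±0.7862, 0.6180)
```

These are the points ChatGPT 4.0 found numerically; note that u is the reciprocal of the golden ratio.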

So, is my job in danger now? Well, it’s a complex question. The advancements in AI, as demonstrated by the jump from ChatGPT 3.5 to 4.0, are indeed impressive. The AI’s ability to understand complex tasks and generate accurate solutions is growing quite fast. However, it’s important to remember that AI, at its core, is a tool. It’s a tool that can augment our capabilities, automate mundane tasks, and help us solve complex problems. In the end, whether AI becomes a threat or an ally in our jobs depends largely on how we choose to use it. If we see it as a tool to enhance our skills and productivity, then there’s no danger, only opportunity. But if we see it as a replacement for human intelligence and creativity, then we might indeed have cause for concern. For now, though, I believe we’re safe. The Turing test might be a thing of the past, but the “human test” is still very much alive.

PHinally PHunctionalising my PHigures with PHATE feat. Plotly Express.

After a friend’s recommendation, I really wanted to try Plotly Express, but I never had the inclination to read more documentation when matplotlib gives me enough grief. While experimenting with ChatGPT I finally decided to functionalise my figure-making scripts. With these scripts I managed to produce figures that made people question what I had actually been doing with my time, but I promise this will be worth your time.

I have been working with dimensionality reduction techniques recently and I came across this paper by Moon et al. PHATE is a technique that represents high-dimensional (i.e. biological) data in a way that aims to preserve connections over preserving distances, and I knew I wanted to try it as soon as I saw it. Why should you care? PHATE in 3D is faster than t-SNE in 2D. It would almost be rude not to try it out.

PHATE

In my opinion, PHATE (or potential of heat diffusion for affinity-based transition embedding) does have a lot going on, but the choices at each stage feel quite sensible. It might not come as a surprise that this was primarily designed to make visual inspection of data easier on the eyes.

Continue reading

Le Tour de Farce 2023

16:30 BST 27/06/2023 Oxford, UK. A large number of scientists were spotted riding bicycles across town, to the consternation of onlookers. The event was the Oxford Protein Informatics Group (OPIG) “tour de farce” 2023: a circular bike ride from the Department of Statistics to The Up in Arms (Marston), The Trout Inn (Godstow), The Perch (Port Meadow) and The Holly Bush (Osney Island). This spurred great bystander-anxiety due to any one of a multitude of factors: the impressive size of the jovial horde, the erraticism of the cycling, the deplorable maintenance of certain bikes, and the unchained bizarrerie of the overheard dialogue.

Dissociated Press.
Continue reading

KAUST Computational Advances in Structural Biology

Last month, I had the privilege of being invited to the KAUST Research Conference on Computational Advances in Structural Biology, held from May 1-3, 2023. This gave me the opportunity to present some of OPIG’s latest work on small molecules while visiting an exceptional campus with state-of-the-art facilities in one of those corners of the world that are not widely known. Moreover, the experience went beyond the impressive surroundings, as I had the chance to attend a highly engaging conference and meet many scientists from different backgrounds.

KAUST Library (left) and Dining Hall (right)

The conference brought together experts in the field to explore cutting-edge developments in computational structural biology. It had a primary focus on advancements in protein structure prediction, multi-scale simulations, and integrative structural biology. Cryo-electron microscopy (cryo-EM) was the most popular experimental technique, with more than a third of the talks dedicated to its applications. These talks showcased impressive examples where structure prediction, simulations, and mid-resolution cryo-EM maps were combined to construct atomic models of large macromolecular complexes.

Notable examples of integrative works were presented by Jan Kosinski and Thomas Miller, among others. Jan Kosinski shared insights into the model of the human nuclear pore complex, highlighting the integration of cryo-electron tomography (cryo-ET), prior experimental knowledge, and AlphaFold predictions. Thomas Miller, on the other hand, presented his work on EM-based visual biochemistry, which combines single-particle cryo-electron microscopy (cryo-EM), and time-resolved experiments, as a tool to study the molecular mechanisms of eukaryotic DNA replication.

There were also several talks about novel algorithms. Nazim Bouatta presented some lesser-known details about OpenFold and introduced some of their approaches to tackling the problem of multimer modelling. He also announced the future release of folding methods for predicting protein-ligand complexes. Jianlin Cheng presented MULTICOM, their new protein structure predictor based on consensus predictions from AlphaFold. Sergei Grudinin showed deep-learning tools able to predict protein dynamics, as well as some integrative modelling tools driven by low-resolution experimental observations, such as small-angle scattering.

On the cryo-EM methods side, Mikhail Kudryashev presented TomoBEAR and SUSAN, cryo-EM tools developed to automate the analysis of tomographic data. Johannes Schwab presented DynaMight, a deep learning-based approach for heterogeneity analysis in single-particle cryo-EM. On the computational chemistry side, Haribabu Arthanari showed their ultra-large virtual screening platform, and Jean-Louis Reymond talked about tools to enumerate, visualize and search the vast chemical space of drug-like molecules.

Overall, the conference provided a quite diverse set of talks that facilitated multidisciplinary views and discussions. From protein structure prediction to integrative approaches combining experimental and computational methods, the talks showed the transformative potential of computational analysis in unravelling the complexities of biological macromolecules.

9th Joint Sheffield Conference on Cheminformatics

Over the next few days, researchers from around the world will be gathering in Sheffield for the 9th Joint Sheffield Conference on Cheminformatics. As one of the organizers (wearing my Molecular Graphics and Modeling Society ‘hat’), I can say we have an exciting array of speakers and sessions:

  • De Novo Design
  • Open Science
  • Chemical Space
  • Physics-based Modelling
  • Machine Learning
  • Property Prediction
  • Virtual Screening
  • Case Studies
  • Molecular Representations

It has traditionally taken place every three years, but despite the global pandemic it is returning this year, once again in person in the excellent conference facilities at The Edge. You can download the full programme in iCal format, and here is the conference calendar:

Continue reading

Customising MCS mapping in RDKit

Finding the parts in common between two molecules appears to be straightforward, but is actually a maze of layers. The task, maximum common substructure (MCS) searching, is done in RDKit by Chem.rdFMCS.FindMCS, which is highly customisable with lots of presets. What if one wanted to control in minute detail whether a given atom X is a match for atom Y? There is a way, and this is how.
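
As a taste of that customisation, here is a minimal sketch of my own (using phenol and aniline as toy inputs, not an example from the post) contrasting the default element-based matching with a user-defined atom comparator that treats N and O as interchangeable:

```python
from rdkit import Chem
from rdkit.Chem import rdFMCS

phenol = Chem.MolFromSmiles("c1ccccc1O")
aniline = Chem.MolFromSmiles("c1ccccc1N")

# Default element-based matching: only the benzene ring is common (6 atoms)
res_default = rdFMCS.FindMCS([phenol, aniline])
print(res_default.numAtoms, res_default.smartsString)

# A custom atom comparator: match identical elements, but also treat
# N (7) and O (8) as equivalent, so the exocyclic heteroatom is kept
class NOEquivalent(rdFMCS.MCSAtomCompare):
    def __call__(self, p, mol1, atom1, mol2, atom2):
        a1 = mol1.GetAtomWithIdx(atom1)
        a2 = mol2.GetAtomWithIdx(atom2)
        if a1.GetAtomicNum() == a2.GetAtomicNum():
            return True
        return {a1.GetAtomicNum(), a2.GetAtomicNum()} == {7, 8}

params = rdFMCS.MCSParameters()
params.AtomTyper = NOEquivalent()
res_custom = rdFMCS.FindMCS([phenol, aniline], params)
print(res_custom.numAtoms)  # now 7: the ring plus the N/O position
```

The same MCSParameters object also exposes a BondTyper, thresholds, and ring-matching options, which is where the "maze of layers" really opens up.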

Continue reading

Machine learning strategies to overcome limited data availability

Machine learning (ML) for biological/biomedical applications is very challenging – in large part due to limitations in publicly available data (something we recently published about [1]). Substantial amounts of time and resources may be required to generate the types of data (e.g. protein structures, protein-protein binding affinities, microscopy images, gene expression values) needed to train ML models.

In cases where there is sufficient data available to provide signal, but not enough for the desired performance, ML strategies can be employed:

Continue reading