Category Archives: Code

In MATLAB, it’s colormaps all the way down

My overriding emotion, working in R, has been incomprehension: incomprehension at the gallery of ugly gnomes that populate the namespace, and worried puzzlement over the strange incantations required to get them to dance in a statistically harmonious way. Still, on joining the group, I resolved to put my misgivings aside and give the gnomes another try.

Soon, I found myself engaged in a reassessment of my life choices. I realized that life’s too short to spend it tickling gnomes – especially when only one of them knows how to do linear regression, but he won’t tell you your p value unless you give him the right kinds of treats. I fired up MATLAB and I haven’t looked back.

However, one issue continued to perplex me, and I’m not referring to why MATLAB insists on shouting itself at you. I need to make a lot of 2-D plots of protein distance matrices. The trouble is that I like to highlight parts of them, and that’s not straightforward in MATLAB. Let’s have a look at an example:

>> dists=dlmread('1hel.distances');
>> colormap gray;
>> imagesc(dists>8);
>> axis square;

Contact map

Now, let’s load up a set of residues and try to overlay them on top of the first image:

>> resn=dlmread('1hel.resn');
>> mask = zeros(size(dists));
>> mask(resn,resn)=1;
>> hold on
>> imagesc(1-mask, 'AlphaData',mask*.5);

So far, so easy. To review the main points:

mask is a matrix which has a one at all the pixels that we want to highlight. But we use imagesc(1-mask) because the gray colormap displays black at 0 and white at 1. If we did imagesc(mask), we would end up with grey everywhere and white only where we hoped to highlight – the opposite effect from the one that we sought.

AlphaData is a property which sets the transparency of the image. We want the image to be fully transparent where mask is 0 – so as not to fog out the underlying image – and partially transparent where mask is 1. 0.5*mask is a matrix which is 0.5 everywhere that mask is 1 and 0 everywhere else.  If we set 0.5*mask as the AlphaData property, then the colour we add will be at half transparency and the white areas will be fully transparent.
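For anyone who would rather prototype this logic outside MATLAB, the same mask-and-alpha idea can be sketched with numpy/matplotlib RGBA arrays. This is an illustration on synthetic stand-ins for the distance matrix and residue mask, not the MATLAB code above:

import numpy as np
import matplotlib.pyplot as plt

dists = np.random.rand(50, 50) * 20        # stand-in for the distance matrix
mask = np.zeros_like(dists)
mask[10:20, 10:20] = 1                     # stand-in for the residue selection

plt.imshow(dists > 8, cmap='gray')         # the "contact map"
overlay = np.zeros(dists.shape + (4,))     # an RGBA image, black everywhere
overlay[..., 3] = 0.5 * mask               # alpha channel: 0.5 on the mask, 0 elsewhere
plt.imshow(overlay)                        # darkens only the masked region
plt.show()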

But this isn’t a very pleasant image. We want to be able to highlight the regions in some colour other than grey. Let’s try.

>> close all
>> imagesc(dists>8)
>> colormap gray
>> axis square
>> imagesc(1-mask, 'AlphaData',mask*.3,'ColorMap','jet');
Error using image
There is no ColorMap property on the Image class.

Error in imagesc (line 39)
hh = image(varargin{:},'CDataMapping','scaled');

No luck! What’s more, setting the colormap between calls to image() and imagesc() also doesn’t work. Here’s the problem: the colormap is a property of the figure – not of the image data, and not even of the MATLAB axes. When you change the colormap, you change the colors of every datapoint in the image.

The fix

MATLAB’s colormap mechanism is just simple enough to be confusing. MATLAB stores colours as 1×3 vectors, where each element in the vector is the proportion of red, green, or blue, respectively. [1 1 1] is white, [0 0 0] is black, and [1 0 0] is a frightfully iridescent red. A colormap is just a list of colors – 64 will normally do – which change smoothly from one colour to another. To have a look at the built-in MATLAB colormaps, see here.

image rounds every value in the matrix to the nearest whole number (call that number i) and plots that pixel with the color given by colormap(i,:). Zero or below gets the first entry in the colormap, and any index higher than the maximum is displayed with the last color in the colormap. So: if we construct a new colormap by concatenating two colormaps – the first occupying rows 1 to 64 and the second rows 65 to 128 – then scaling our data so that the minimum is 65 and the maximum is 128 guarantees the data never touch the first set of colors. Likewise, if we scale so that the lowest value is 1 and the highest is 64, we use only the first colormap. This seems like the sort of thing that we could manage automatically – and should, in fact. So I set myself to replace image and imagesc so that they would accept a ColorMap parameter.
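Before looking at how the replacement behaves, here is the same scale-and-concatenate idea sketched in Python/matplotlib, where the row bookkeeping is explicit. This is an illustration on synthetic data, not the MATLAB implementation described below:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

# Stack two 64-row colormaps into one 128-row colormap
lower = plt.get_cmap('gray')(np.linspace(0, 1, 64))
upper = plt.get_cmap('jet')(np.linspace(0, 1, 64))
combined = ListedColormap(np.vstack([lower, upper]))

background = np.random.rand(50, 50)   # values in [0, 1)
highlight = np.random.rand(50, 50)

# Scale the background into [0, 0.5) so it only uses the gray rows,
# and the overlay into [0.5, 1] so it only uses the jet rows
fig, ax = plt.subplots()
ax.imshow(0.5 * background, cmap=combined, vmin=0, vmax=1)
ax.imshow(0.5 + 0.5 * highlight, cmap=combined, vmin=0, vmax=1, alpha=0.3)
plt.show()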

How would it work?

>> colormap bone
>> imagesc(dists>8)
>> hold on
>> imagesc(mask,'ColorMap',[0.5 0 0.5],'AlphaData',0.5*(mask>0))
>> axis square

Beautiful!

Implementation notes

  • image is implemented in the MATLAB Java source code, but imagesc is a wrapper to image, written directly in MATLAB code. Therefore, overloading image requires the new function to be placed in a special directory called @double, while imagesc can be placed anywhere (except that it cannot be placed in @double). If you then want to call the original version of image(), you can use builtin('image',arg1,arg2,…), whereas if you want to call the original imagesc, it is a right pain. Instead, I used type imagesc to extract the source of imagesc and modified that source directly – obviating any need to call the original imagesc. For reference, though, the most efficient way works out to be to find the function with which('imagesc'), cd into the containing directory, create a function handle to imagesc, and then cd out. As I said, it’s a mess.
  • These edits break colorbars. I added a spacer entry in each colormap which stores the maximum and minimum ‘real’ values of the data – in case that is useful for when I get around to extending colorbar. colormap entries must be inside [0,1] so these data are stored in the first twelve decimal places of the colormap entries: a strange burlesque on floating points. It’s a hack, but for my purposes it works.
  • In addition to the standard colormaps, I often require a mask in a particular color. For this purpose it helps to have a colormap that smoothly varies from white to the color in question. It actually doesn’t matter if it varies from white or any other color – ultimately, I only use the full colour value, since I set the transparency of all other pixels to maximum – but either way, passing the colour on [0,1] scale or [0,255] scale sets a colormap which varies from white to that color.

The code is available on MATLAB File Exchange at this link and is installable by copying imagesc.m, bootleg_fp.m, and the directory @double into your working directory. The idea to concatenate colormaps is widely available online – for example, here.

Using bare git repos

Git is a fantastic tool for version control of your code. Whether it’s to share with collaborators or just for your own reference, it acts as an almost absolute point of reference for a wide variety of applications and needs. The basic concept of git is that you have your own folder (in which you edit your code, etc.) and you commit/push those changes to a git repository. Note that Git is a version control SYSTEM, and GitHub/BitBucket etc. are services that host repositories using Git as their backend!

The basic procedure of git can be summarised to:

1. Change/add/delete files in your current working directory as necessary. This is followed by a git add or git rm command.
2. “Commit” those changes; we usually put a message reflecting the change from step 1. e.g. git commit -m "I changed this file because it had a bug before."
3. You “push” those changes with git push to a git repository (e.g. hosted by BitBucket, GitHub, etc.); this is sort of like saying “save” that change.

Typically we use services like GitHub to HOST a repository. We then push our changes to that repository (or git pull from it) and all is good. However, a powerful concept to bear in mind is the ‘bare’ git repository. This is especially useful if you have code that’s private and should be strictly kept within your company/institution’s server, yet you don’t want people messing about too much with the master version of the code. The diagram below makes the bare git repository concept quite clear:

The bare repo acts as a “master” version of sorts, and every other “working”, or non-bare repo pushes/pulls changes out of it.

Let’s start with the easy stuff first. Every git repository (e.g. the one you’re working on in your machine) is a WORKING/NON-BARE git repository. This shows the files in your code as you expect them, e.g. *.py or *.c files, etc. A BARE repository is a folder hosted by a server which only holds git OBJECTS. In it, you’ll never see a single .py or .c file, but a bunch of folders and text files that look nothing like your code. By the magic of git, these are easily translated back into .py or .c files (basically a version of the working repo) when you git clone it. Since the bare repo doesn’t contain any of the actual code, you can safely assume that no one can really mess with the master version without having gone through the process of git add/commit/push, making everything documented. To start a bare repo…

# Start up a bare repository in a server
user@server:$~  git init --bare name_to_repo.git

# Go back to your machine, then clone it
user@machine:$~ git clone user@server:/path/to/repo/name_to_repo.git

# This will clone an empty git repo into ./name_to_repo
cd name_to_repo
ls
# Nothing should come up.

touch README
echo "Hello world" >> README
git add README
git commit -m "Adding a README to initialise the bare repo."
git push origin master # This pushes to your origin, which is user@server:/path/to/repo/name_to_repo.git

If we check our folders, we will see the following:

user@machine:$~ ls name_to_repo/
README # only the readme exists

user@server:$~ ls /path/to/repo/name_to_repo.git/
branches/ config description HEAD hooks/ info/ objects/ refs/

Magic! README isn’t listed in the server’s copy. Again, this is because the repo there is BARE: the file we pushed is stored as git objects, not as a checked-out file. But when we clone it on a different machine…

user@machine2:$~ git clone user@server:/path/to/repo/name_to_repo.git
ls name_to_repo/
README
cat name_to_repo/README
Hello world #magic!

This was a bit of a lightning tour, but hopefully you can see that the purpose of a bare repo is to let you host code as a “master version” without worrying that people will see the contents directly until they do a git clone. Once they clone and push changes, everything is documented via git, so you’ll know exactly what’s going on!
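As an aside, if you ever need to script this setup rather than type it, the GitPython package wraps the same operations. A minimal sketch, with illustrative paths (here everything lives on one machine for simplicity):

from git import Repo

# Equivalent of `git init --bare name_to_repo.git` (server side)
bare = Repo.init('/path/to/repo/name_to_repo.git', bare=True)
print(bare.bare)    # True: no working tree, just git objects

# Equivalent of `git clone` (machine side); this one has a working tree
clone = Repo.clone_from('/path/to/repo/name_to_repo.git', 'name_to_repo')

# Stage, commit and push a file, as in the shell session above
open('name_to_repo/README', 'w').write('Hello world\n')
clone.index.add(['README'])
clone.index.commit('Adding a README to initialise the bare repo.')
clone.remote('origin').push('master')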

Bitbucket and PyCharm – Tools to make a DPhil less problematic

I find Git a wonderful tool for my work, with version control providing much-needed damage control to my projects. I also find Git incredibly powerful at making my working life easier, with the ability to use git push and git pull to synchronise my code between the various computers that I use for my DPhil. Via a BitBucket account providing a remote Git repository, I am able to move my code around to wherever I am working, leaving more room for either procrastination or staring at my screen in confusion.

As simple as Git is, it can be a fiddle entering the git commands on the command line, as well as remembering to do so as you rush to leave the building. This has all been made much easier with PyCharm, from JetBrains. This IDE (integrated development environment) has many tools, including version control support, handling of a variety of file types, PEP8 checks to ensure good-quality code, and the ability to work with IPython notebooks.

I’ve put the following mini-tutorial together for those who want to make or bring in an existing repository to PyCharm and get version control working:

Continue reading

A very basic introduction to Random Forests using R

Random Forests is a powerful tool used extensively across a multitude of fields. As a matter of fact, it is hard to come upon a data scientist who has never had to resort to this technique at some point. Motivated by the fact that I have been using Random Forests quite a lot recently, I decided to give a quick intro to Random Forests using R.

So what are Random Forests? Well, I am probably not the best-suited person to answer this question (a google search will reveal much more interesting answers), still I shall give it a go. Random Forests is a learning method for classification (and other applications – see below). It is based on generating a large number of decision trees, each constructed using a different subset of your training set. These subsets are usually selected by sampling at random and with replacement from the original data set. The decision trees are then used to identify a classification consensus by selecting the most common output (mode). While random forests can be used for other applications (e.g. regression), for the sake of keeping this post short, I shall focus solely on classification.

Why R? Well, the quick and easy answer is that I do all my plotting in R (mostly because I think ggplot2 looks very pretty). I decided to explore Random Forests in R and to assess their advantages and shortcomings. I am planning to compare Random Forests in R against the python implementation in scikit-learn. Do expect a post about this in the near future!

The data: to keep things simple, I decided to use Edgar Anderson’s Iris Data set. You can have a look at it by inspecting the contents of iris in R. This data set contains observations for four features (sepal length and width, and petal length and width – all in cm) of 150 flowers, equally split between three different iris species. This data set is fairly canonical in classification and data analysis. Let us take a look at it, shall we:

As you can observe, there seems to be some separation between the different features for our three species of irises [note: this set is not very representative of a real-world data set, and results should be taken with a grain of salt].

Training and Validation sets: great care needs to be taken to ensure clear separation between training and validation sets. I tend to save the cases for which I am actually interested in performing predictions as a second validation set (Validation 2). Then I split the remaining data evenly into Training and Validation 1.

Let us split our data set then, shall we?

# Set random seed to make results reproducible:
set.seed(17)
# Calculate the size of each of the data sets:
data_set_size <- floor(nrow(iris)/2)
# Generate a random sample of "data_set_size" indexes
indexes <- sample(1:nrow(iris), size = data_set_size)

# Assign the data to the correct sets
training <- iris[indexes,]
validation1 <- iris[-indexes,]

Before we can move on, here are some things to consider:

1- The size of your data set usually imposes a hard limit on how many features you can consider. This occurs due to the curse of dimensionality, i.e. your data becomes sparser and sparser as you increase the number of features considered, which usually leads to overfitting. While there is no rule of thumb relating to how many features vs.  the number of observations you should use, I try to keep e^Nf < No (Nf = number of features, No = number of observations) to minimise overfitting [this is not always possible and it does not ensure that we won’t overfit]. In this case, our training set has 75 observations, which suggests that using four features (e^4 ~ 54.6) is not entirely absurd. Obviously, this depends on your data, so we will cover some further overfitting checks later on.

2- An important thing to consider when assembling training sets is the proportion of negatives vs. positives in your data. Think of an extreme scenario where you have many, many more observations for one class vs. the others. How will this affect classification? This would make it more likely for the classifier to predict the dominant class when given new values. I mentioned before that the iris set is quite nice to play with. It comes with exactly 50 observations for each species of irises. What happens if you have a data set with a much higher number of observations for a particular class? You can bypass any imbalance in the representation of each class by carefully constructing your training set so as not to favour any particular class (see the stratified-split sketch after this list). In this case, our randomly selected set has 21 observations for species setosa and 27 observations for each of species versicolor and virginica, so we are good to go.

3- Another common occurrence that is not represented by the iris data set is missing values (NAs) for observations. There are many ways of dealing with missing values, including assigning the median or the mode for that particular feature to the missing observation or even disregarding some observations entirely, depending on how many observations you have. There are even ways to use random forests to estimate a good value to assign to the missing observations, but for the sake of brevity, this will not be covered here.
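As promised above, here is a sketch of the class-balance point in Python: scikit-learn’s train_test_split can stratify on the labels so that both halves keep the 50/50/50 class proportions of iris. This is a Python aside, not part of the R workflow below:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# stratify=y keeps the class proportions identical in both halves
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.5, random_state=17, stratify=y)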

Right, data sets prepared and no missing values, it is time to fire up our random forests algorithm. I am using the randomForest package. You can click the link for additional documentation. Here is the example usage code:

#import the package
library(randomForest)
# Perform training:
rf_classifier = randomForest(Species ~ ., data=training, ntree=100, mtry=2, importance=TRUE)

Note some important parameters:

-Species ~ . is our formula: we want to predict Species using each of the remaining columns of data.
-ntree defines the number of trees to be generated. It is typical to test a range of values for this parameter (e.g. 100, 200, 300, 400, 500) and choose the one that minimises the OOB estimate of error rate.
-mtry is the number of features used in the construction of each tree. These features are selected at random, which is where the “random” in “random forests” comes from. The default value for this parameter, when performing classification, is sqrt(number of features).
-importance enables the algorithm to calculate variable importance.
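As a teaser for the promised scikit-learn comparison, here is a rough Python equivalent of the call above – parameter names per scikit-learn’s documented API (ntree maps to n_estimators, mtry to max_features):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, max_features=2,
                            oob_score=True, random_state=17)
rf.fit(X, y)
print(rf.oob_score_)            # 1 - (OOB estimate of error rate)
print(rf.feature_importances_)  # Gini-based variable importance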

We can quickly look at the results of our classifier for our training set by printing the contents of rf_classifier:

> rf_classifier

Call:
 randomForest(formula = Species ~ ., data = training,ntree=100,mtry=2, importance = TRUE) 
               Type of random forest: classification
                     Number of trees: 100
No. of variables tried at each split: 2

        OOB estimate of  error rate: 5.33%
Confusion matrix:
           setosa versicolor virginica class.error
setosa         21          0         0  0.00000000
versicolor      0         25         2  0.07407407
virginica       0          2        25  0.07407407


As you can see, it lists the call used to build the classifier, the number of trees (100), the variables tried at each split (2), and it outputs a very useful confusion matrix and OOB estimate of error rate. This estimate is calculated by counting how many points in the training set were misclassified (2 versicolor and 2 virginica observations = 4) and dividing this number by the total number of observations (4/75 ~= 5.33%).

The OOB estimate of error rate is a useful measure to discriminate between different random forest classifiers. We could, for instance, vary the number of trees or the number of variables to be considered, and select the combination that produces the smallest value for this error rate. For more complicated data sets, i.e. when a higher number of features is present, a good idea is to use cross-validation to perform feature selection using the OOB error rate (see rfcv from randomForest for more details).

Remember the importance parameter? Let us take a look at the importance that our classifier has assigned to each variable:

varImpPlot(rf_classifier)

Each feature’s importance is assessed based on two criteria:

-MeanDecreaseAccuracy: gives a rough estimate of the loss in prediction performance when that particular variable is omitted from the training set. Caveat: if two variables are somewhat redundant, then omitting one of them may not lead to a massive drop in prediction performance, but it would make the second variable more important.

-MeanDecreaseGini: GINI is a measure of node impurity. Think of it like this: if you use this feature to split the data, how pure will the nodes be? The highest purity means that each node contains only elements of a single class. Assessing the decrease in GINI when that feature is omitted leads to an understanding of how important that feature is to split the data correctly.

Do note that these measures are used to rank variables in terms of importance and, thus, their absolute values could be disregarded.

Ok, great. Looks like we have a classifier that was properly trained and is producing somewhat good predictions for our training set. Shall we evaluate what happens when we try to use this classifier to predict classes for our  validation1 set?

# Validation set assessment #1: looking at confusion matrix
prediction_for_table <- predict(rf_classifier,validation1[,-5])
table(observed=validation1[,5],predicted=prediction_for_table)

            predicted
observed     setosa versicolor virginica
  setosa         29          0         0
  versicolor      0         20         3
  virginica       0          1        22

The confusion matrix is a good way of looking at how well our classifier performs when presented with new data.

Another way of assessing the performance of our classifier is to generate a ROC curve and compute the area under the curve:

# Validation set assessment #2: ROC curves and AUC

# Needs to import ROCR package for ROC curve plotting:
library(ROCR)

# Calculate the probability of new observations belonging to each class
# prediction_for_roc_curve will be a matrix with dimensions data_set_size x number_of_classes
prediction_for_roc_curve <- predict(rf_classifier,validation1[,-5],type="prob")

# Use pretty colours:
pretty_colours <- c("#F8766D","#00BA38","#619CFF")
# Specify the different classes 
classes <- levels(validation1$Species)
# For each class
for (i in 1:3)
{
 # Define which observations belong to class[i]
 true_values <- ifelse(validation1[,5]==classes[i],1,0)
 # Assess the performance of classifier for class[i]
 pred <- prediction(prediction_for_roc_curve[,i],true_values)
 perf <- performance(pred, "tpr", "fpr")
 if (i==1)
 {
     plot(perf,main="ROC Curve",col=pretty_colours[i]) 
 }
 else
 {
     plot(perf,main="ROC Curve",col=pretty_colours[i],add=TRUE) 
 }
 # Calculate the AUC and print it to screen
 auc.perf <- performance(pred, measure = "auc")
 print(auc.perf@y.values)
}

Here is the final product (ROC curve):

And here are the values for our AUCs:

Setosa
AUC = 1

Versicolor
AUC = 0.98

Virginica
AUC = 0.98

Voila! I hope this was somewhat useful!

Parallel Computing: GNU Parallel

Recently I started using the OPIG servers to run the algorithm I have developed (CRANkS) on datasets from DUDE (Database of Useful Decoys Enhanced).

This required learning how to run jobs in parallel. Previously I had been using computer clusters with their own queuing system (Torque/PBS), which allowed me to submit each molecule to be scored by the algorithm as a separate job. The queuing system would then automatically allocate nodes to jobs and execute them accordingly. On a side note, I learnt how to submit these jobs as an array, which was preferable to submitting ~150,000 separate jobs:

qsub -t 1:X array_submit.sh

where the contents of array_submit.sh would be:

#!/bin/bash
./$SGE_TASK_ID.sh

which would submit jobs 1.sh to X.sh, where X is the total number of jobs.

However, the OPIG servers do not have a global queuing system to use. I needed a way of running the code I already had in parallel, with minimal changes to the workflow or the code itself. There are many ways to run jobs in parallel, but to minimise work for myself, I decided to use GNU parallel [1].

This is an easy-to-use shell tool, which I found quick and easy to install onto my home server, allowing me to access it on each of the OPIG servers.

To use it I simply run the command:

cat submit.sh | parallel -j Y

where Y is the number of cores to run the jobs on, and submit.sh contains:

./1.sh
./2.sh
...
./X.sh

This executes each job, making use of up to Y cores at a time to run the jobs in parallel.
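For completeness, roughly the same fan-out can be done from Python with a worker pool, though GNU parallel keeps everything at the shell level. A sketch, assuming executable scripts 1.sh … X.sh with X = 10 and Y = 4:

import subprocess
from multiprocessing.dummy import Pool  # a thread pool; the real work runs in the scripts

scripts = ['./%d.sh' % i for i in range(1, 11)]  # ./1.sh ... ./10.sh
with Pool(4) as pool:                            # at most 4 jobs at a time
    pool.map(subprocess.call, scripts)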

Quick, easy, simple and minimal modifications needed! Thanks to Jin for introducing me to GNU Parallel!

[1] O. Tange (2011): GNU Parallel – The Command-Line Power Tool, The USENIX Magazine, February 2011:42-47.

Interesting Jupyter and IPython Notebooks

Here’s a treasure trove of interesting Jupyter and IPython notebooks, with lots of diverse examples relevant to OPIG, including an RDKit notebook, but also:

  • Entire books or other large collections of notebooks on a topic (introductory tutorials; programming and computer science; statistics, machine learning and data science; mathematics, physics, chemistry and biology; linguistics and text mining; signal, sound and image processing; scientific computing and data analysis with the SciPy Stack; data visualization and plotting; natural language processing; pandas for data analysis)
  • General Python programming
  • Notebooks in languages other than Python (Julia; Haskell; Ruby; Perl; F#; C#)
  • Miscellaneous topics about doing various things with the Notebook itself
  • Reproducible academic publications
  • and lots more!

Using RDKit to load ligand SDFs into Pandas DataFrames

If you have downloaded lots of ligand SDF files from the PDB, then a good way of viewing/comparing all their properties is to load them into a Pandas DataFrame.

RDKit has a very handy function just for this – it’s found under the PandasTools module.

I show an example below within a Jupyter notebook, in which I load in the SDF file, view the table of molecules and apply other RDKit functions to the molecules.

First import the PandasTools module:

from rdkit.Chem import PandasTools

Read in the SDF file:

SDFFile = "./Ligands_noHydrogens_noMissing_59_Instances.sdf"
BRDLigs = PandasTools.LoadSDF(SDFFile)

You can see the whole table by calling the dataframe:

BRDLigs

The ligand properties in the SDF file are stored as columns. You can view what these properties are, and in my case I have loaded 59 ligands each having up to 26 properties:

BRDLigs.info()

It is also very easy to perform other RDKit functions on the dataframe. For instance, I noticed there is no heavy atom column, so I added my own called ‘NumHeavyAtoms’:

BRDLigs['NumHeavyAtoms']=BRDLigs.apply(lambda x: x['ROMol'].GetNumHeavyAtoms(), axis=1)

Here is the column added to the table, alongside columns containing the molecules’ SMILES and RDKit molecule:

BRDLigs[['NumHeavyAtoms','SMILES','ROMol']]
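A couple of natural follow-ups (not from the original analysis – MolWt is simply one of RDKit’s standard descriptors): add a molecular-weight column and sort the table by heavy-atom count:

from rdkit.Chem import Descriptors

BRDLigs['MolWt'] = BRDLigs['ROMol'].map(Descriptors.MolWt)
BRDLigs.sort_values('NumHeavyAtoms', ascending=False).head()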

R or Python for data vis?

Python users: ever wanted to learn R?
R users: ever wanted to learn Python?
Check out: http://mathesaurus.sourceforge.net/r-numpy.html

Both languages are incredibly powerful for doing large-scale data analyses. They both have amazing data visualisation platforms, allowing you to make custom graphs very easily (e.g. with your own set of fonts, colour palette choices, etc.). Here’s a quick run-down of the good, the bad, and the ugly:

R

  • The good:
    • More established in statistical analyses; if you can’t find an R package for something, chances are it won’t be available in Python either.
    • Data frame parsing is fast and efficient, and incredibly easy to use (e.g. indexing specific rows, which is surprisingly hard in Pandas)
    • If GUIs are your thing, there are programs like Rstudio that mesh the console, plotting, and code.
  • The bad:
    • For loops are traditionally slow, meaning that you have to use lots of apply commands (e.g. tapply, sapply).
  • The ugly:
    • Help documentation can be challenging to read and follow, leading to (potentially) a steep learning curve.

Python

  • The good:
    • If you have existing code in Python (e.g. analysing protein sequences/structures), then you can plot straight away without having to save it as a separate CSV file for analysis, etc.
    • Lots of support for different packages such as NumPy, SciPy, Scikit Learn, etc., with good documentation and lots of help on forums (e.g. Stack Overflow)
    • It’s more useful for string manipulation (e.g. parsing out the ordering of IMGT numbering for antibodies, which goes from 111A->111B->112B->112A->112)
  • The bad:
    • Matplotlib, which is the go-to for data visualisation, has a pretty steep learning curve.
  • The ugly:
    • For statistical analyses, model building can have an unusual syntax. For example, building a linear model in R is incredibly easy (lm), whereas Python involves sklearn.linear_model.LinearRegression().fit (see the short example after this list). Otherwise you have to code up a lot of things yourself, which might not be practical.
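To make that last point concrete, here is a minimal sketch of the Python ceremony on toy data – compare with R’s one-liner lm(y ~ x):

import numpy as np
from sklearn.linear_model import LinearRegression

x = np.arange(10).reshape(-1, 1)              # sklearn wants a 2-D feature matrix
y = 3 * x.ravel() + np.random.normal(size=10)

model = LinearRegression().fit(x, y)
print(model.coef_, model.intercept_)          # slope and intercept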

For me, Python wins because I find it’s much easier to create an analysis pipeline where you can go from raw data (e.g. PDB structures) to analysing it (e.g. with BioPython) then plotting custom graphics. Another big selling point is that Python packages have great documentation. Of course, there are libraries to do the analyses in R but the level of freedom, I find, is a bit more restricted, and R’s documentation means you’re often stuck interpreting what the package vignette is saying, rather than doing actual coding.

As for plotting (because pretty graphs are where it’s at!), here’s a very simple implementation of plotting the densities of two normal distributions, along with their means and standard deviations.

import numpy as np
from matplotlib import rcParams

# plt.style.use('xkcd') # A cool feature of matplotlib is stylesheets, e.g. make your plots look XKCD-like

# change font to Arial
# you can change this to any TrueType font that you have in your machine
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']

import matplotlib.pyplot as plt
# Generate two sets of numbers from a normal distribution
# one with mean = 4 sd = 0.5, another with mean (loc) = 1 and sd (scale) = 2
randomSet = np.random.normal(loc = 4, scale = 0.5, size = 1000)
anotherRandom = np.random.normal(loc = 1, scale = 2, size = 1000)

# Define a Figure and Axes object using plt.subplots
# Axes object is where we do the actual plotting (i.e. draw the histogram)
# Figure object is used to configure the actual figure (e.g. the dimensions of the figure)
fig, ax = plt.subplots()

# Plot a histogram with custom-defined bins, with a blue colour, transparency of 0.4
# Plot the density rather than the raw count using density = True
ax.hist(randomSet, bins = np.arange(-3, 6, 0.5), color = '#134a8e', alpha = 0.4, density = True)
ax.hist(anotherRandom, bins = np.arange(-3, 6, 0.5), color = '#e8291c', alpha = 0.4, density = True)

# Plot solid lines for the means
plt.axvline(np.mean(randomSet), color = 'blue')
plt.axvline(np.mean(anotherRandom), color = 'red')

# Plot dotted lines for the std devs
plt.axvline(np.mean(randomSet) - np.std(randomSet), linestyle = '--', color = 'blue')
plt.axvline(np.mean(randomSet) + np.std(randomSet), linestyle = '--', color = 'blue')

plt.axvline(np.mean(anotherRandom) - np.std(anotherRandom), linestyle = '--', color = 'red')
plt.axvline(np.mean(anotherRandom) + np.std(anotherRandom), linestyle = '--', color = 'red')

# Set the title, x- and y-axis labels
plt.title('A fancy plot')
ax.set_xlabel("Value of $x$") 
ax.set_ylabel("Density")

# Set the Figure's size as a 5in x 5in figure
fig.set_size_inches((5,5))

Figure made by matplotlib using the code above.

And here is the equivalent in R:

randomSet = rnorm(mean = 4, sd = 0.5, n = 1000)
anotherRandom = rnorm(mean = 1, sd = 2, n = 1000)

# Let's define a range to plot the histogram for binning;
limits = range(randomSet, anotherRandom)
lbound = limits[1] - (diff(limits) * 0.1)
ubound = limits[2] + (diff(limits) * 0.1)
# use freq = F to plot density
# in breaks, we define the bins of the histogram by providing a vector of values using seq
# xlab, ylab define axis labels; main sets the title
# rgb defines the colour in RGB values from 0-1, with the fourth digit setting transparency
# e.g. rgb(0,1,0,1) is R = 0, G = 1, B = 0, with a alpha of 1 (i.e. not transparent)
hist(randomSet, freq = F, breaks = seq(lbound, ubound, 0.5), col = rgb(0,0,1,0.4), xlab = 'Value of x', ylab = 'Density', main = 'A fancy plot')
# Use add = T to keep both histograms in one graph
# other parameters, such as breaks, etc., can be introduced here
hist(anotherRandom, freq = F, breaks = seq(lbound, ubound, 0.5), col = rgb(1,0,0,0.4), add = T)

# Plot vertical lines with v =
# lty = 2 generates a dashed line
abline(v = c(mean(randomSet), mean(anotherRandom)), col = c('blue', 'red'))

abline(v = c(mean(randomSet)-sd(randomSet), mean(randomSet)+sd(randomSet)), col = 'blue', lty = 2)
abline(v = c(mean(anotherRandom)-sd(anotherRandom), mean(anotherRandom)+sd(anotherRandom)), col = 'red', lty = 2)

Similar figure made using R code from above.

*Special thanks go out to Ali and Lyuba for helpful fixes to make the R code more efficient!

How to Calculate PLIFs Using RDKit and PLIP

Protein-Ligand interaction fingerprints (PLIFs) are becoming more widely used to compare small molecules in the context of a protein target. A fingerprint is a bit vector that is used to represent a small molecule. Fingerprints of molecules can then be compared to determine the similarity between two molecules. Rather than using the features of the ligand to build the fingerprint, a PLIF is based on the interactions between the protein and the small molecule. The conventional method of building a PLIF is that each bit of the bit vector represents a residue in the binding pocket of the protein. The bit is set to 1 if the molecule forms an interaction with the residue, whereas it is set to 0 if it does not.

Constructing a PLIF therefore consists of two parts:

  1. Calculating the interactions formed by a small molecule from the target
  2. Collating this information into a bit vector.

Step 1 can be achieved using the Protein-Ligand Interaction Profiler (PLIP). PLIP is an easy-to-use tool that, given a pdb file, will calculate the interactions between the ligand and the protein. This can be done using the online web-tool or, alternatively, the command-line tool. Six different interaction types are calculated: hydrophobic, hydrogen bonds, water-mediated hydrogen bonds, salt bridges, pi-pi and pi-cation. The command-line version outputs an xml report file containing all the information required to construct a PLIF.

Step 2 involves manipulating the output of the report file into a bit vector. RDKit is an amazingly useful cheminformatics toolkit with great documentation. Reading the PLIF into an RDKit bit vector allows it to be manipulated as an RDKit fingerprint. The fingerprints can then be compared very easily using RDKit functionality, for example using Tanimoto similarity.
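To make the idea concrete before the full example, here is a toy sketch: two hypothetical five-residue pockets encoded as RDKit bit vectors and compared with Tanimoto similarity (intersection of 2 bits, union of 4, giving 0.5):

from rdkit import DataStructs

fp1 = DataStructs.ExplicitBitVect(5)
fp2 = DataStructs.ExplicitBitVect(5)
for bit in (0, 2, 3):
    fp1.SetBit(bit)   # molecule 1 interacts with residues 0, 2 and 3
for bit in (0, 3, 4):
    fp2.SetBit(bit)   # molecule 2 interacts with residues 0, 3 and 4

print(DataStructs.TanimotoSimilarity(fp1, fp2))  # 2/4 = 0.5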

EXAMPLE:

Let’s take 3 pdb files as an example. Fragment screening data from the SGC is a great source of data for this analysis, as it contains lots of pdb structures of small hits bound to the same target. The data can be found here. For this example I will use 3 protein-ligand complexes from the BRD1 dataset: BRD1A-m004.pdb, BRD1A-m006.pdb and BRD1A-m009.pdb.

The three BRD1 fragment complexes from the SGC dataset

1. PLIP: First we need to run PLIP to generate a report file for each protein-ligand complex. This is done using:

plipcmd -f BRD1A-m004.pdb -o m004 -x
plipcmd -f BRD1A-m006.pdb -o m006 -x
plipcmd -f BRD1A-m009.pdb -o m009 -x

A report file (‘report.xml’) is created for each pdb file, within the directories m004, m006 and m009.

2. Get Interactions: Using a python script, the results of the reports can be collated by running the function “generate_plif_lists” (shown below) on each report file. The function takes in the report file name, the residues already found to be in the binding site (residue_list) and the ligand identifier (lig_ident). “residue_list” must be updated for each molecule to be compared, as the residues used to define the binding site can vary between report files. The function returns a list of residues found to interact with the ligand (“plif_list_all”), as well as the updated “residue_list”.

 


import xml.etree.ElementTree as ET

################################################################################

def generate_plif_lists(report_file, residue_list, lig_ident):
    # Uses report.xml from PLIP to return the list of interacting residues
    # and to update the list of residues in the binding site
    plif_list_all = []
    tree = ET.parse(report_file)
    root = tree.getroot()
    # Collect the residue keys that form an interaction
    for binding_site in root.findall('bindingsite'):
        nest = binding_site.find('identifiers')
        lig_code = nest.find('hetid')
        if str(lig_code.text) == str(lig_ident):
            nest_residue = binding_site.find('bs_residues')
            residue_list_tree = nest_residue.findall('bs_residue')
            for residue in residue_list_tree:
                res_id = residue.text
                dict_res_temp = residue.attrib
                if res_id not in residue_list:
                    residue_list.append(res_id)
                if dict_res_temp['contact'] == 'True':
                    if res_id not in plif_list_all:
                        plif_list_all.append(res_id)
    return plif_list_all, residue_list

################################################################################

residue_list = []  # built up across reports so all PLIFs share one bit layout
plif_list_m006, residue_list = generate_plif_lists('m006/report.xml', residue_list, 'LIG')
plif_list_m009, residue_list = generate_plif_lists('m009/report.xml', residue_list, 'LIG')
plif_list_m004, residue_list = generate_plif_lists('m004/report.xml', residue_list, 'LIG')


3. Read Into RDKit: Now that we have the list of binding-site residues and the residues interacting with the ligand, a PLIF can be generated. This is done using the function shown below (“generate_rdkit_plif”):


from rdkit import DataStructs

################################################################################

def generate_rdkit_plif(residue_list, plif_list_all):
    # Generates an RDKit PLIF given the list of residues in the binding site
    # and the list of interacting residues
    plif_rdkit = DataStructs.ExplicitBitVect(len(residue_list), False)
    for index, res in enumerate(residue_list):
        if res in plif_list_all:
            plif_rdkit.SetBit(index)
    return plif_rdkit

#########################################################################

plif_m006 = generate_rdkit_plif(residue_list, plif_list_m006)
plif_m009 = generate_rdkit_plif(residue_list, plif_list_m009)
plif_m004 = generate_rdkit_plif(residue_list, plif_list_m004)


4. Play! These PLIFs can now be compared using RDKit functionality. For example, the Tanimoto similarity between the ligands can be computed:


def similarity_plifs(plif_1, plif_2):
    sim = DataStructs.TanimotoSimilarity(plif_1, plif_2)
    print(sim)
    return sim

###################################################################

print(similarity_plifs(plif_m006, plif_m009))
print(similarity_plifs(plif_m006, plif_m004))
print(similarity_plifs(plif_m009, plif_m004))


The output is: 0.2, 0.5, 0.0.

All files used to generate the PLIFs can be found here. Happy PLIF-making!

Viewing 3D molecules interactively in Jupyter iPython notebooks

Greg Landrum, curator of the invaluable open-source cheminformatics API, RDKit, recently blogged about viewing molecules in a 3D window within a Jupyter-hosted IPython notebook (as long as your browser supports WebGL, that is).

The trick is to use py3Dmol. It’s easy to install:

pip install py3Dmol

This is built on 3Dmol.js, the object-oriented, WebGL-based JavaScript library for online molecular visualization (Rego & Koes, 2015); here’s a nice summary of the capabilities of 3Dmol.js. Its features include:

  • support for pdb, sdf, mol2, xyz, and cube formats
  • parallelized molecular surface computation
  • sphere, stick, line, cross, cartoon, and surface styles
  • atom property based selection and styling
  • labels
  • clickable interactivity with molecular data
  • geometric shapes including spheres and arrows

I tried a simple example and it worked beautifully:

import py3Dmol
view = py3Dmol.view(query='pdb:1hvr')
view.setStyle({'cartoon':{'color':'spectrum'}})
view

py3Dmol rendering PDB entry 1HVR inline in a Jupyter/IPython notebook
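Building on that, the documented selection/style API lets you style subsets of atoms and add surfaces. A sketch I haven’t verified here – in particular, the het code ‘XK2’ for the 1HVR ligand is an assumption:

view = py3Dmol.view(query='pdb:1hvr')
view.setStyle({'cartoon':{'color':'spectrum'}})
view.setStyle({'resn':'XK2'}, {'stick':{}})                     # render the ligand as sticks
view.addSurface(py3Dmol.VDW, {'opacity':0.6}, {'chain':'A'})    # translucent surface on chain A
view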

The 3Dmol.js website summarizes how to view molecules, along with how to choose representations, how to embed it, and even how to develop with it.

References

Nicholas Rego & David Koes (2015). “3Dmol.js: molecular visualization with WebGL”.
Bioinformatics, 31 (8): 1322-1324. doi:10.1093/bioinformatics/btu829