Optimising machine learning models requires extensive comparison of architectures and hyperparameter combinations. There are many frameworks that make it easier to log and visualise performance metrics across model runs, and I recently started using Weights & Biases. In the following, I give a brief overview of some basic code snippets for getting started with this tool in your machine learning Python code.
For account setup, see here. The basic account is free.
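Once you have an account, you need to authenticate the machine you are running on. As a minimal sketch, assuming the wandb package is already installed (pip install wandb):

import wandb

# Prompts for your API key on first use and caches it locally;
# equivalently, you can run `wandb login` once from the command line.
wandb.login()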
First, initialise the run together with all parameters you want to keep track of before you start the model training:
import wandb

wandb.init(
    project='demo_project',
    name='demo_run',
    config={
        'learning_rate': 0.001,
        'dropout': 0.2,
        'layers': 6,
        'train epochs': 3
    }
)
This will set up a project ‘demo_project’ in your Weights & Biases account and log the following code as the run ‘demo_run’, associating the hyperparameters passed to config with the run’s metrics.
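The values passed to config are also available inside your code through wandb.config, so the training loop can use exactly the hyperparameters that were logged. A small sketch, assuming the wandb.init call above has already run:

# Read the hyperparameters back from the run's config; both attribute
# and dictionary-style access work.
learning_rate = wandb.config.learning_rate
dropout = wandb.config['dropout']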
We can then log any metrics to this run as demonstrated here:
import math
import random

mult = 1  # scaling factor for the dummy metrics; assumed to be 1 here

for epoch in range(1, 5):
    for step in range(1, 100):
        ####
        # Model training goes here
        ####

        # simulating loss and accuracy of a train step
        dummy_loss = mult * (1 / (step + 5 * epoch) + random.random() / (step + 5 * epoch))
        dummy_accuracy = 1 - mult * (1 / math.sqrt((step + 5 * epoch) + random.random()))

        # log to Weights & Biases
        wandb.log({'Train loss': dummy_loss})
        wandb.log({'Train accuracy': dummy_accuracy})

    ####
    # Model testing per epoch goes here
    ####

    # simulating loss and accuracy of the per-epoch test run
    dummy_test_loss = mult * (1 / (epoch * 10) + random.random() / (epoch * 10))
    dummy_test_accuracy = 1 - mult * (1 / math.sqrt(epoch * 10 + random.random()))

    wandb.log({'Test loss': dummy_test_loss})
    wandb.log({'Test accuracy': dummy_test_accuracy})
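One detail worth knowing: by default, each wandb.log call advances the run's internal step counter, so logging related metrics in separate calls (as above) places them at different steps. To keep them aligned on the same x-axis, they can be passed in a single dictionary. A short sketch, reusing the names from the loop above:

# Log related metrics together so they share the same step
wandb.log({'Train loss': dummy_loss, 'Train accuracy': dummy_accuracy})

# Explicitly mark the run as finished; scripts do this automatically
# on exit, but in notebooks it is worth calling yourself.
wandb.finish()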
This can then be used for easy visualisation of training and testing progress:
And even to generate quick visualisations of hyperparameter sweeps:
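Such sweep views simply draw on multiple runs in the same project that were initialised with different config values. As a minimal, hypothetical sketch, here is one way to launch several runs from a single script (the learning-rate values are placeholders):

import wandb

for lr in [0.01, 0.001, 0.0001]:
    run = wandb.init(project='demo_project',
                     name=f'demo_run_lr_{lr}',
                     config={'learning_rate': lr},
                     reinit=True)  # allow several runs in one process

    ####
    # Train the model with this learning rate and log metrics here
    ####

    run.finish()  # close this run before starting the next one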
Weights & Biases has a number of additional features, including automated hyperparameter optimisation and integrations with standard machine learning frameworks, which I will cover in a later post.