Organise Your ML Projects With Hydra

One of the most annoying parts of ML research is keeping track of all the different experiments you’re running – quickly changing and recording changes to your model, data or hyper-parameters can turn into an organisational nightmare. I’m normally a fan of avoiding too many libraries/frameworks, as they often break down if you try to do anything even a little bit custom, and days can be wasted trying to adapt yourself to a new framework or adapt the framework to you. However, my last codebase strayed pretty far into the chaotic side of things, so I thought it was worth trying something different for my next project. In my quest to instil a bit more order, I’ve started using Hydra, which strikes a nice balance between giving you more structure to organise a project and not rigidly insisting on it, and I’d highly recommend checking it out yourself.

Hydra is an open-source Python framework, originally developed by Meta, that allows easy configuration of any Python project and is particularly well suited to ML projects. Hydra uses hierarchical configuration files written in YAML and lets you override individual values from the command line or select a different configuration file at runtime. As an example, you might use one overall `train.yaml` file, which then points to separate config files defining your model, dataset, trainer and so on. You can then choose which model config file to use from the command line. This lets you quickly set up multiple experiments, easily mixing and matching different config file combinations.
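As a sketch of what this looks like in practice (the file and group names here are illustrative, following Hydra’s config-group convention):

```yaml
# configs/train.yaml
defaults:
  - model: resnet       # resolves to configs/model/resnet.yaml
  - dataset: cifar10    # resolves to configs/dataset/cifar10.yaml
  - trainer: default    # resolves to configs/trainer/default.yaml

seed: 42
```

Swapping in a different model config or overriding a single field is then a one-line change at the command line, e.g. `python train.py model=transformer trainer.max_epochs=50`.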

Another powerful feature of Hydra is its ability to instantiate objects directly from a class path given in the config file. This is particularly useful if you want to try out multiple model architectures, for example, without resorting to a pile of flags and if statements:

In your `train.yaml` file:

```yaml
defaults:
  - model: my_model
```

In your `model/my_model.yaml` file:

```yaml
_target_: mypackage.models.MyPyTorchModule
n_hidden_layers: 10
```

In your main training script:

```python
model: LightningModule = hydra.utils.instantiate(cfg.model)
```
This keeps your code very modular: if you want to try a different dataset or model architecture, most of the changes happen in YAML files.
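To see why this works, here is a minimal sketch of roughly what `hydra.utils.instantiate` does under the hood (this is an illustration, not Hydra’s actual implementation, which also handles nesting, partial instantiation and more):

```python
import importlib


def instantiate(cfg: dict):
    """Build the object described by a config dict with a `_target_` key.

    The `_target_` value is a dotted import path; every other key is
    passed to the class constructor as a keyword argument.
    """
    module_path, _, class_name = cfg["_target_"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), class_name)
    kwargs = {k: v for k, v in cfg.items() if k != "_target_"}
    return cls(**kwargs)


# Example with a stdlib class standing in for a model: the config alone
# decides which class gets built and with which arguments.
obj = instantiate({"_target_": "collections.Counter", "a": 2, "b": 1})
print(obj)  # Counter({'a': 2, 'b': 1})
```

Because the class path lives in the config rather than the code, switching architectures is just a matter of pointing `_target_` at a different class.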

When combined with PyTorch Lightning, Hydra makes a very flexible solution to ML experimentation. The popular Lightning-Hydra-Template provides a nice starting point to organise your own project and also serves as a good guide to other best practices you might want to use.
