Lightning Wrapper

As explained in Training and Evaluation, Asteroid provides a thin wrapper on top of PyTorch Lightning for training your models.
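
A minimal usage sketch (not from the Asteroid docs): the linear model, random dataset and MSE loss below are hypothetical stand-ins for a real separation model, dataset and loss. Any nn.Module, DataLoader and callable with the (est_targets, targets) signature would fit.

import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

from asteroid.engine.system import System

model = nn.Linear(16, 16)                       # stand-in for a separation model
dataset = TensorDataset(torch.randn(64, 16),    # (input, target) pairs
                        torch.randn(64, 16))
train_loader = DataLoader(dataset, batch_size=8)
val_loader = DataLoader(dataset, batch_size=8)

optimizer = optim.Adam(model.parameters(), lr=1e-3)
loss_func = nn.MSELoss()                        # any callable(est_targets, targets)

system = System(
    model=model,
    optimizer=optimizer,
    loss_func=loss_func,
    train_loader=train_loader,
    val_loader=val_loader,
)
pl.Trainer(max_epochs=1).fit(system)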

class asteroid.engine.system.System(model, optimizer, loss_func, train_loader, val_loader=None, scheduler=None, config=None)[source]

Bases: pytorch_lightning.core.module.LightningModule

Base class for deep learning systems. Contains a model, an optimizer, a loss function, training and validation dataloaders and learning rate scheduler.

Note that by default, PyTorch-Lightning hooks are not passed through to the model. If you want to use Lightning hooks defined on the model, forward them in a subclass:

class MySystem(System):
    def on_train_batch_start(self, batch, batch_idx, dataloader_idx):
        return self.model.on_train_batch_start(batch, batch_idx, dataloader_idx)
Parameters:
  • model (torch.nn.Module) – Instance of model.
  • optimizer (torch.optim.Optimizer) – Instance or list of optimizers.
  • loss_func (callable) – Loss function with signature (est_targets, targets).
  • train_loader (torch.utils.data.DataLoader) – Training dataloader.
  • val_loader (torch.utils.data.DataLoader) – Validation dataloader.
  • scheduler (torch.optim.lr_scheduler._LRScheduler) – Instance or list of learning rate schedulers. Also supports a dict or list of dicts as {"interval": "step", "scheduler": sched}, where interval=="step" for step-wise schedulers and interval=="epoch" for classical ones (see the sketch after this parameter list).
  • config – Anything to be saved with the checkpoints during training, e.g. the config dictionary needed to re-instantiate the run.
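
A hedged sketch of the scheduler argument (model, loss_func and the loaders are the placeholders from the example above; the scheduler choices are only illustrative):

from torch import optim

optimizer = optim.Adam(model.parameters(), lr=1e-3)

# Step-wise scheduler: wrap it in a dict with interval="step" so it is stepped every batch.
step_sched = optim.lr_scheduler.OneCycleLR(optimizer, max_lr=1e-3, total_steps=1000)
scheduler = {"interval": "step", "scheduler": step_sched}

# Epoch-wise ("classical") scheduler: can be passed directly, or with interval="epoch".
# scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10)

system = System(model, optimizer, loss_func, train_loader,
                val_loader=val_loader, scheduler=scheduler)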

Note

By default, training_step (used by pytorch-lightning in the training loop) and validation_step (used for the validation loop) share common_step. If you want different behavior for the training loop and the validation loop, overwrite both training_step and validation_step instead.

For more info on its methods, properties and hooks, have a look at lightning’s docs: https://pytorch-lightning.readthedocs.io/en/stable/lightning_module.html#lightningmodule-api

forward(*args, **kwargs)[source]

Applies forward pass of the model.

Returns: torch.Tensor
common_step(batch, batch_nb, train=True)[source]

Common forward step between training and validation.

The function of this method is to unpack the data given by the loader, forward the batch through the model and compute the loss. Pytorch-lightning handles all the rest.

Parameters:
  • batch – The object returned by the loader (a list of torch.Tensor in most cases), but it can be something else.
  • batch_nb (int) – The number of the batch in the epoch.
  • train (bool) – Whether in training mode. Needed only if the training and validation steps are fundamentally different, otherwise, pytorch-lightning handles the usual differences.
Returns:

torch.Tensor – The loss value on this batch.

Note

This is typically the method to overwrite when subclassing System (see the sketch below). If the training and validation steps are somehow different (except for loss.backward() and optimizer.step()), the argument train can be used to switch behavior. Otherwise, training_step and validation_step can be overwritten.
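
A hedged sketch of such a subclass (the three-element batch and the masked loss are hypothetical, only meant to show where custom unpacking and loss computation go):

class MaskedLossSystem(System):
    def common_step(self, batch, batch_nb, train=True):
        inputs, targets, masks = batch              # custom unpacking of the loader output
        est_targets = self(inputs)                  # forward pass through self.model
        loss = self.loss_func(est_targets * masks, targets * masks)
        return loss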

training_step(batch, batch_nb)[source]

Pass data through the model and compute the loss.

Backprop is not performed (meaning PL will do it for you).

Parameters:
  • batch – The object returned by the loader (a list of torch.Tensor in most cases), but it can be something else.
  • batch_nb (int) – The number of the batch in the epoch.
Returns:

torch.Tensor, the value of the loss.

validation_step(batch, batch_nb)[source]

Overwrites PL's validation_step, which needs to be defined for validation to run.

Parameters:
  • batch – The object returned by the loader (a list of torch.Tensor in most cases), but it can be something else.
  • batch_nb (int) – The number of the batch in the epoch.
on_validation_epoch_end()[source]

Log hp_metric to tensorboard for hparams selection.

configure_optimizers()[source]

Initialize optimizers, batch-wise and epoch-wise schedulers.

lr_scheduler_step(scheduler, optimizer_idx, metric)[source]

Override this method to adjust the default way the Trainer calls each scheduler. By default, Lightning calls step() on each scheduler based on its interval, as shown in the example below.

Parameters:
  • scheduler – Learning rate scheduler.
  • optimizer_idx – Index of the optimizer associated with this scheduler.
  • metric – Value of the monitor used for schedulers like ReduceLROnPlateau.

Examples:

# DEFAULT
def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
    if metric is None:
        scheduler.step()
    else:
        scheduler.step(metric)

# Alternative way to update schedulers if they require an epoch value
def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
    scheduler.step(epoch=self.current_epoch)
train_dataloader()[source]

Training dataloader

val_dataloader()[source]

Validation dataloader

on_save_checkpoint(checkpoint)[source]

Overwrite if you want to save more things in the checkpoint.
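
A hedged sketch of such a subclass (the extra sample_rate field is a hypothetical example of something worth persisting with the checkpoint):

class MySystem(System):
    def on_save_checkpoint(self, checkpoint):
        super().on_save_checkpoint(checkpoint)      # keep System's default checkpoint content
        checkpoint["sample_rate"] = 8000            # hypothetical extra field
        return checkpoint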

static config_to_hparams(dic)[source]

Sanitizes the config dict so it can be handled correctly by torch's SummaryWriter. It flattens the config dict, converts None to "None" and any list or tuple into torch.Tensors.

Parameters: dic (dict) – Dictionary to be transformed.
Returns: dict – Transformed dictionary.
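
An illustrative call (the toy config is made up, and the exact flattened key names depend on Asteroid's internal flattening):

conf = {
    "optim": {"lr": 1e-3, "weight_decay": None},
    "filterbank": {"kernel_size": [16, 32]},
}
hparams = System.config_to_hparams(conf)
# Nested keys are flattened, None becomes the string "None",
# and the list becomes a torch.Tensor.
print(hparams)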