Hooks

Inference

XPM Config xpmir.context.Hook

Bases: Config

Submit type: xpmir.context.Hook

Base class for all hooks

XPM Config xpmir.context.InitializationHook

Bases: Hook

Submit type: xpmir.context.InitializationHook

Base class for hooks before/after initialization

after(context: Context)

Called after initialization

before(context: Context)

Called before initialization
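As an illustration, a hook is defined by subclassing InitializationHook and overriding either method. The sketch below only logs around initialization; it assumes that Context is importable from xpmir.context, and the class name and logging behaviour are purely illustrative, not part of the library.

    import logging

    from xpmir.context import Context, InitializationHook

    logger = logging.getLogger(__name__)


    class LogInitialization(InitializationHook):
        """Illustrative hook that logs around initialization"""

        def before(self, context: Context):
            # Called just before initialization starts
            logger.info("Initialization is about to start")

        def after(self, context: Context):
            # Called once initialization is done
            logger.info("Initialization has completed")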

Learning

Hooks can be used to modify the learning process

XPM Config xpmir.learning.context.TrainingHook

Bases: Hook

Submit type: xpmir.learning.context.TrainingHook

Base class for all training hooks

XPM Config xpmir.learning.context.InitializationTrainingHook

Bases: TrainingHook, InitializationHook

Submit type: xpmir.learning.context.InitializationTrainingHook

Base class for hooks called at initialization

after(state: TrainerContext)

Called after initialization

before(state: TrainerContext)

Called before initialization
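For example, an initialization training hook could fix the random seed before the learner is initialized. The sketch below is illustrative only: it assumes experimaestro's Param annotation for declaring the seed parameter and relies solely on torch.manual_seed.

    import torch

    from experimaestro import Param
    from xpmir.learning.context import InitializationTrainingHook, TrainerContext


    class SeedHook(InitializationTrainingHook):
        """Illustrative hook that fixes the torch RNG seed before initialization"""

        seed: Param[int] = 0
        """Seed passed to torch.manual_seed"""

        def before(self, state: TrainerContext):
            torch.manual_seed(self.seed)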

XPM Config xpmir.learning.context.StepTrainingHook

Bases: TrainingHook

Submit type: xpmir.learning.context.StepTrainingHook

Base class for hooks called at each step (before/after)

after(state: TrainerContext)

Called after a training step

before(state: TrainerContext)

Called before a training step
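For instance, a step hook can measure the wall-clock time of every training step, as in the sketch below (illustrative only; the timing attribute is a plain Python attribute, not an experimaestro parameter).

    import logging
    import time

    from xpmir.learning.context import StepTrainingHook, TrainerContext

    logger = logging.getLogger(__name__)


    class TimeSteps(StepTrainingHook):
        """Illustrative hook that measures the duration of each training step"""

        def before(self, state: TrainerContext):
            # Plain instance attribute used only for timing
            self._start = time.perf_counter()

        def after(self, state: TrainerContext):
            logger.debug("Training step took %.3fs", time.perf_counter() - self._start)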

Distributed

Hooks can be used to distribute a model over GPUs

XPM Config xpmir.distributed.DistributableModel

Bases: Config

Submit type: xpmir.distributed.DistributableModel

A model that can be distributed over GPUs

Subclasses must implement distribute_models()

distribute_models(update: Callable[[TorchModule], TorchModule])

This method is called with an update function that should be applied to every torch module that needs to be distributed over GPUs
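A typical implementation simply applies the update function to the modules it owns, as in this sketch (MyScorer and its encoder attribute are hypothetical, and torch.nn.Module stands in for the TorchModule type used in the signature):

    from typing import Callable

    from torch import nn

    from xpmir.distributed import DistributableModel


    class MyScorer(DistributableModel):
        """Hypothetical model owning a single torch module, self.encoder"""

        def distribute_models(self, update: Callable[[nn.Module], nn.Module]):
            # Replace the module by the (possibly wrapped) module returned by
            # `update`, so later calls go through the distributed wrapper
            self.encoder = update(self.encoder)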

XPM Config xpmir.distributed.DistributedHook(*, models)

Bases: InitializationHook

Submit type: xpmir.distributed.DistributedHook

Hook to distribute the model processing

When running with several processes/devices, uses torch.nn.parallel.DistributedDataParallel; otherwise, uses torch.nn.DataParallel.

models: List[xpmir.distributed.DistributableModel]

The models to distribute over GPUs
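Building on the hypothetical MyScorer sketched above, the hook is configured by listing the models to distribute; how the hook is then registered (for instance on a learner) depends on the rest of the experiment and is not shown in this section.

    from xpmir.distributed import DistributedHook

    # MyScorer is the hypothetical DistributableModel sketched above
    hook = DistributedHook(models=[MyScorer()])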