Alignment

XPM Config: xpmir.letor.trainers.alignment.AlignmentLoss(*, weight)

Bases: Config, ABC, Generic[AlignementLossInput, AlignmentLossTarget]

Submit type: xpmir.letor.trainers.alignment.AlignmentLoss

weight: float = 1.0

Weight for this loss
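
The weight scales this loss's contribution when several losses are combined by the trainer. Below is a minimal sketch of that convention with placeholder values; the loss names and values are illustrative only and not part of xpmir's API:

    import torch

    # Each configured loss value is scaled by its weight before being
    # summed into the total training loss. Values here are placeholders.
    loss_values = {"mse": torch.tensor(0.42), "contrastive": torch.tensor(1.30)}
    weights = {"mse": 1.0, "contrastive": 0.5}

    total = sum(weights[name] * value for name, value in loss_values.items())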

XPM Config: xpmir.letor.trainers.alignment.AlignmentTrainer(*, hooks, model, batcher, sampler, batch_size, losses, target_model)

Bases: LossTrainer

Submit type: xpmir.letor.trainers.alignment.AlignmentTrainer

Compares two representations

Both representations are expected to be in a vector space.

hooks: List[xpmir.learning.context.TrainingHook] = []

Hooks for this trainer: this includes the losses, but can be adapted for other uses. The specific list of hooks depends on the trainer.

model: xpmir.learning.optim.Module

If the model to optimize is different from the model passed to the learner, this parameter can be used; initialization is still expected to be done at the learner level.

batcher: xpmir.learning.batchers.Batcher = xpmir.learning.batchers.Batcher.XPMValue()

How to batch samples together

sampler: xpmir.learning.base.BaseSampler

The sampler providing the training samples

batch_size: int = 16

Number of samples per batch

losses: Dict[str, xpmir.letor.trainers.alignment.AlignmentLoss]

The loss function(s)

target_model: xpmir.learning.optim.Module

The target model producing the representations to align with
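
A minimal configuration sketch using the parameters documented above; student, teacher, and my_sampler are hypothetical configs (the Module to optimize, the target Module, and a BaseSampler) assumed to be defined elsewhere in the experiment:

    from xpmir.letor.trainers.alignment import AlignmentTrainer, MSEAlignmentLoss

    trainer = AlignmentTrainer(
        model=student,           # model whose representations are trained
        target_model=teacher,    # model providing the target representations
        sampler=my_sampler,      # provides the samples to align on
        losses={"mse": MSEAlignmentLoss(weight=1.0)},  # documented below
        batch_size=16,
    )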

XPM Config: xpmir.letor.trainers.alignment.MSEAlignmentLoss(*, weight)

Bases: AlignmentLoss[RepresentationOutput, RepresentationOutput]

Submit type: xpmir.letor.trainers.alignment.MSEAlignmentLoss

Computes the MSE between the contextualized query representation and the gold representation

weight: float = 1.0

Weight for this loss
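
For illustration, the same computation on raw tensors: the MSE between a batch of contextualized representations and the target model's gold representations. Shapes and values are placeholders, not xpmir defaults:

    import torch
    import torch.nn.functional as F

    # (batch, dim) representations; random placeholders for illustration
    contextualized = torch.randn(16, 768)  # from the trained model
    gold = torch.randn(16, 768)            # from the target model

    loss = F.mse_loss(contextualized, gold)  # mean squared error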