Pointwise

Trainers are responsible for defining the way to train a learnable scorer.

XPM Config xpmir.learning.trainers.multiple.MultipleTrainer(*, hooks, model, trainers)[source]

Bases: Trainer

Submit type: xpmir.learning.trainers.multiple.MultipleTrainer

This trainer can be used to combine various trainers

hooks: List[xpmir.learning.context.TrainingHook] = []

Hooks for this trainer: this includes the losses, but can be adapted for other uses. The specific list of hooks depends on the specific trainer.

model: xpmir.learning.optim.Module

If the model to optimize is different from the model passed to Learn, this parameter can be used – initialization is still expected to be done at the learner level

trainers: Dict[str, xpmir.learning.trainers.Trainer]

The trainers
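The delegation pattern can be pictured with a plain-Python sketch. This is illustrative only: `MockTrainer`, `CombinedTrainer`, and `train_step` are hypothetical names, not the xpmir API; the point is that a combined trainer simply forwards each training step to every trainer in its dict.

```python
class MockTrainer:
    """Stand-in for a single trainer: returns a fixed loss each step."""

    def __init__(self, loss_value):
        self.loss_value = loss_value

    def train_step(self):
        return self.loss_value


class CombinedTrainer:
    """Sketch of combining trainers: delegate to every sub-trainer by name."""

    def __init__(self, trainers):
        self.trainers = trainers  # Dict[str, MockTrainer]

    def train_step(self):
        # Each named sub-trainer contributes its own loss
        return {name: t.train_step() for name, t in self.trainers.items()}


combined = CombinedTrainer({"pointwise": MockTrainer(0.5), "pairwise": MockTrainer(0.3)})
losses = combined.train_step()  # one loss per sub-trainer
```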

XPM Config xpmir.letor.trainers.LossTrainer(*, hooks, model, batcher, sampler, batch_size)[source]

Bases: Trainer

Submit type: xpmir.letor.trainers.LossTrainer

Trainer based on a loss function

This trainer assumes that:

  • the sampler_iter attribute is initialized, and is a serializable iterator over batches

hooks: List[xpmir.learning.context.TrainingHook] = []

Hooks for this trainer: this includes the losses, but can be adapted for other uses. The specific list of hooks depends on the specific trainer.

model: xpmir.learning.optim.Module

If the model to optimize is different from the model passed to Learn, this parameter can be used – initialization is still expected to be done at the learner level

batcher: xpmir.learning.batchers.Batcher = xpmir.learning.batchers.Batcher.XPMValue()

How to batch samples together

sampler: xpmir.learning.base.Sampler

The sampler to use

batch_size: int = 16

Number of samples per batch
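Conceptually, a batcher groups the sampler's stream into chunks of batch_size samples. A minimal stdlib sketch of that grouping (not the xpmir Batcher interface):

```python
from itertools import islice


def batches(samples, batch_size=16):
    """Group an iterable of samples into lists of at most batch_size items,
    mirroring what a batcher does conceptually."""
    it = iter(samples)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch


# 35 samples with the default batch_size of 16 -> batches of 16, 16 and 3
sizes = [len(b) for b in batches(range(35), batch_size=16)]
```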

process_microbatch(records: BaseRecords)[source]

Combines a forward and a backward pass

This method can be implemented by specific trainers that use the gradient. In that case, the regularizer losses should be taken into account with self.add_losses.
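The interplay between the micro-batch loss and the registered regularizer losses can be sketched in plain Python. All names here (`SketchTrainer`, the arithmetic standing in for forward/backward) are illustrative, not the real implementation:

```python
class SketchTrainer:
    """Illustrative loss trainer: regularizer losses registered via
    add_losses are folded into the loss of the current micro-batch."""

    def __init__(self):
        self.extra_losses = []

    def add_losses(self, *losses):
        # Regularizer losses registered during the forward pass
        self.extra_losses.extend(losses)

    def process_microbatch(self, records):
        # "Forward": compute the main loss on this micro-batch
        main_loss = sum(records) / len(records)
        # Fold in regularizer losses, then reset them for the next batch
        total = main_loss + sum(self.extra_losses)
        self.extra_losses.clear()
        return total


trainer = SketchTrainer()
trainer.add_losses(0.1)                          # e.g. an L2 regularizer
loss = trainer.process_microbatch([1.0, 2.0, 3.0])  # mean 2.0 + 0.1
```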

Trainer

XPM Config xpmir.letor.trainers.pointwise.PointwiseTrainer(*, hooks, model, batcher, sampler, batch_size, lossfn)[source]

Bases: LossTrainer

Submit type: xpmir.letor.trainers.pointwise.PointwiseTrainer

Pointwise trainer

hooks: List[xpmir.learning.context.TrainingHook] = []

Hooks for this trainer: this includes the losses, but can be adapted for other uses. The specific list of hooks depends on the specific trainer.

model: xpmir.learning.optim.Module

If the model to optimize is different from the model passed to Learn, this parameter can be used – initialization is still expected to be done at the learner level

batcher: xpmir.learning.batchers.Batcher = xpmir.learning.batchers.Batcher.XPMValue()

How to batch samples together

sampler: xpmir.letor.samplers.PointwiseSampler

The pointwise sampler

batch_size: int = 16

Number of samples per batch

lossfn: xpmir.letor.trainers.pointwise.PointwiseLoss = xpmir.letor.trainers.pointwise.MSELoss.XPMValue(weight=1.0)

Loss function to use
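The pointwise setting itself can be summarized with a small self-contained sketch (the toy scorer and loss below are hypothetical, not xpmir code): each (query, document) record carries its own relevance label, and the loss is computed on each record independently, with no comparison between documents.

```python
def pointwise_step(records, score_fn, loss_fn):
    """One pointwise training step sketch: score each (query, document)
    record independently and average the per-record losses."""
    losses = [loss_fn(score_fn(q, d), y) for q, d, y in records]
    return sum(losses) / len(losses)


# Toy scorer: fraction of query terms found in the document
score = lambda q, d: len(set(q) & set(d)) / max(len(set(q)), 1)
# Toy pointwise loss: squared error against the relevance label
mse = lambda s, y: (s - y) ** 2

records = [
    (["cat"], ["cat", "pet"], 1.0),  # relevant document
    (["cat"], ["dog", "run"], 0.0),  # non-relevant document
]
loss = pointwise_step(records, score, mse)
```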

Losses

XPM Config xpmir.letor.trainers.pointwise.PointwiseLoss(*, weight)[source]

Bases: Config

Submit type: xpmir.letor.trainers.pointwise.PointwiseLoss

weight: float = 1.0

XPM Config xpmir.letor.trainers.pointwise.MSELoss(*, weight)[source]

Bases: PointwiseLoss

Submit type: xpmir.letor.trainers.pointwise.MSELoss

weight: float = 1.0

XPM Config xpmir.letor.trainers.pointwise.BinaryCrossEntropyLoss(*, weight)[source]

Bases: PointwiseLoss

Submit type: xpmir.letor.trainers.pointwise.BinaryCrossEntropyLoss

Computes binary cross-entropy

Uses a BCE with logits if the scorer output type is not a probability

weight: float = 1.0

Sampler

XPM Config xpmir.letor.samplers.PointwiseSampler[source]

Bases: Sampler

Submit type: xpmir.letor.samplers.PointwiseSampler

pointwise_iter() → SerializableIterator[PointwiseRecord, Any][source]

Iterable over pointwise records
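A serializable iterator is one whose position can be saved and restored, so that sampling resumes where a checkpoint left off. A minimal sketch of the idea (the `state_dict`/`load_state_dict` method names are hypothetical, not the actual SerializableIterator interface):

```python
class SerializableRecordIterator:
    """Sketch of a serializable iterator: its position can be saved
    and restored, so sampling resumes after a checkpoint."""

    def __init__(self, records):
        self.records = records
        self.position = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.position >= len(self.records):
            raise StopIteration
        record = self.records[self.position]
        self.position += 1
        return record

    def state_dict(self):  # hypothetical name
        return {"position": self.position}

    def load_state_dict(self, state):  # hypothetical name
        self.position = state["position"]


it = SerializableRecordIterator(["r0", "r1", "r2"])
next(it); next(it)
state = it.state_dict()  # checkpoint after two records

restored = SerializableRecordIterator(["r0", "r1", "r2"])
restored.load_state_dict(state)
resumed = next(restored)  # continues at the third record
```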

XPM Config xpmir.letor.samplers.PointwiseModelBasedSampler(*, dataset, retriever, relevant_ratio)[source]

Bases: PointwiseSampler, ModelBasedSampler

Submit type: xpmir.letor.samplers.PointwiseModelBasedSampler

dataset: datamaestro_text.data.ir.Adhoc

The IR adhoc dataset

retriever: xpmir.rankers.Retriever

A retriever to sample negative documents

relevant_ratio: float = 0.5

The target relevance ratio
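The effect of relevant_ratio can be sketched as follows (illustrative only, not the xpmir implementation): with the default ratio of 0.5, roughly half of the sampled records are relevant documents from the assessments, and the rest are negatives drawn from the retriever's results.

```python
import random


def sample_records(relevant, negatives, n, relevant_ratio=0.5, rng=None):
    """Draw n pointwise records, picking a relevant document with
    probability relevant_ratio and a retrieved negative otherwise."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    out = []
    for _ in range(n):
        if rng.random() < relevant_ratio:
            out.append((rng.choice(relevant), 1.0))  # relevant, label 1
        else:
            out.append((rng.choice(negatives), 0.0))  # negative, label 0
    return out


records = sample_records(["dA", "dB"], ["dX", "dY", "dZ"], n=1000)
# The empirical fraction of relevant records approaches relevant_ratio
ratio = sum(label for _, label in records) / len(records)
```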