Learning to rank

Learning to rank is handled by various classes; some of them are located in the xpmir.learning module.

Listeners

XPM Config xpmir.letor.learner.ValidationListener(*, id, metrics, dataset, retriever, warmup, validation_interval, early_stop, hooks)[source]

Bases: LearnerListener

Submit type: Any

Learning validation early-stopping

Computes a validation metric and stores the best result. If early_stop is set (> 0), then it signals to the learner that the learning process can stop.

id: str

Unique ID to identify the listener (ignored for signature)

metrics: Dict[str, bool] = {'map': True}

Dictionary whose keys are the metrics to record (in a format [parseable by ir-measures](https://ir-measur.es/)) and whose boolean values indicate whether the best-performing checkpoint should be kept for the associated metric

dataset: datamaestro_text.data.ir.Adhoc

The dataset to use

retriever: xpmir.rankers.Retriever

The retriever for validation

warmup: int = -1

How many epochs before actually computing the metric

bestpath: Path (generated)

Path to the best checkpoints

info: Path (generated)

Path to the JSON file that contains the metric values at each epoch

validation_interval: int = 1

Epochs between each validation

early_stop: int = 0

Number of epochs without improvement after which we stop learning. Should be a multiple of validation_interval or 0 (no early stopping)

hooks: List[xpmir.learning.context.ValidationHook] = []

The list of hooks called during validation
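
A minimal configuration sketch based on the parameters documented above; `my_validation_dataset` (an Adhoc dataset) and `my_validation_retriever` (a Retriever wrapping the scorer being trained) are placeholders for objects configured elsewhere, and the metric names follow the ir-measures syntax.

```python
from xpmir.letor.learner import ValidationListener

validation = ValidationListener(
    id="validation",
    dataset=my_validation_dataset,      # placeholder: datamaestro_text.data.ir.Adhoc
    retriever=my_validation_retriever,  # placeholder: xpmir.rankers.Retriever
    # Record AP and nDCG@10; keep the best checkpoint for nDCG@10 only
    metrics={"AP": False, "nDCG@10": True},
    validation_interval=4,  # validate every 4 epochs
    early_stop=20,          # stop after 20 epochs without improvement (a multiple of 4)
)
```

The listener is then given to the learner; the best checkpoints end up under bestpath and the per-epoch metric values in the JSON file at info.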

Scorers

Scorers are able to give a score to a (query, document) pair. Among the scorers, some have learnable parameters.

XPM Config xpmir.rankers.Scorer[source]

Bases: Config, Initializable, EasyLogger

Query-document scorer

A model able to give a score to a list of documents given a query

eval()[source]

Put the model in inference/evaluation mode

getRetriever(retriever: Retriever, batch_size: int, batcher: Batcher = Config[xpmir.learning.batchers.batcher], top_k=None, device=None)[source]

Returns a two-stage re-ranker built from the given retriever and this scorer

Parameters:
  • device – Device for the ranker or None if no change should be made

  • batch_size – The number of documents in each batch

  • top_k – Number of documents to re-rank (or None for all)

initialize(*args, **kwargs)

Main initialization

Calls __initialize__() only once

rsv(query: str, documents: Iterable[ScoredDocument], keepcontent=False) → List[ScoredDocument][source]

Score all the documents (inference mode, no training)

to(device)[source]

Move the scorer to another device
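
A short usage sketch for the two entry points above; `scorer` (a Scorer), `first_stage` (a Retriever) and `candidates` (ScoredDocument instances from a first-stage run) are placeholders for objects built elsewhere.

```python
# Placeholders: `scorer` (xpmir.rankers.Scorer), `first_stage` (xpmir.rankers.Retriever)
# and `candidates` (an iterable of ScoredDocument from a first-stage run).

# Turn the scorer into a two-stage re-ranker: the first stage retrieves
# candidates, and the scorer re-scores the top 100 of them.
reranker = scorer.getRetriever(first_stage, batch_size=64, top_k=100)

# Or score a given candidate list directly (inference mode, no training)
scorer.eval()
rescored = scorer.rsv("learning to rank with neural models", candidates)
```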

XPM Config xpmir.rankers.RandomScorer(*, random)[source]

Bases: Scorer

A random scorer

random: xpmir.learning.base.Random

The random number generator
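
A random scorer is mainly useful as a sanity-check baseline; the sketch below assumes the Random configuration can be instantiated with its default seed.

```python
from xpmir.learning.base import Random
from xpmir.rankers import RandomScorer

# Baseline scorer: scores are drawn from the given random number generator
random_scorer = RandomScorer(random=Random())
```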

XPM Config xpmir.rankers.AbstractModuleScorer[source]

Bases: Scorer, Module

Base class for all learnable scorers

XPM Config xpmir.rankers.LearnableScorer[source]

Bases: AbstractModuleScorer

Learnable scorer

A scorer with parameters that can be learnt

xpmir.rankers.scorer_retriever(documents: Documents, *, retrievers: RetrieverFactory, scorer: Scorer, **kwargs)[source]

Helper function that returns a two-stage retriever. This is useful in combination with functools.partial, when the scorer is not yet known (see the sketch below).

Parameters:
  • documents – The document collection

  • retrievers – A retriever factory

  • scorer – The scorer

Returns:

A retriever, built by calling Scorer.getRetriever()
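
A sketch of the partial pattern mentioned above; `documents` and `retriever_factory` are placeholders for the collection and the first-stage retriever factory, and `my_scorer` stands for the scorer that only becomes available later (e.g. after training).

```python
from functools import partial

from xpmir.rankers import scorer_retriever

# Fix the collection and the first-stage retrievers now...
make_reranker = partial(scorer_retriever, documents, retrievers=retriever_factory)

# ...and plug in the scorer once it is known (e.g. after training)
reranker = make_reranker(scorer=my_scorer)
```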

Retrievers

Scorers can be used as retrievers through a xpmir.rankers.TwoStageRetriever

Samplers

Samplers provide samples in the form of records. They all inherit from:

class xpmir.letor.samplers.SerializableIterator[source]

Bases: Iterator[T], Generic[T, State]

An iterator that can be serialized through state dictionaries.

This is used when saving the sampler state
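
For illustration only, here is a standalone sketch of the idea: an iterator whose position can be captured and restored through a state dictionary. The method names below follow the usual state_dict/load_state_dict convention and are an assumption; check the SerializableIterator base class for the exact interface.

```python
from typing import Dict, Iterator


class CountingIterator:
    """Toy iterator whose state can be saved and restored (illustration only)."""

    def __init__(self) -> None:
        self.position = 0

    def __iter__(self) -> Iterator[int]:
        return self

    def __next__(self) -> int:
        self.position += 1
        return self.position

    def state_dict(self) -> Dict[str, int]:
        # Everything needed to resume iteration after a restart
        return {"position": self.position}

    def load_state_dict(self, state: Dict[str, int]) -> None:
        self.position = state["position"]
```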

XPM Config xpmir.letor.samplers.ModelBasedSampler(*, dataset, retriever)[source]

Bases: Sampler

Base class for retriever-based samplers

dataset: datamaestro_text.data.ir.Adhoc

The IR adhoc dataset

retriever: xpmir.rankers.Retriever

A retriever to sample negative documents

Records for training

class xpmir.letor.records.PairwiseRecord(query: TopicRecord, positive: DocumentRecord, negative: DocumentRecord)[source]

Bases: object

A pairwise record is composed of a query, a positive and a negative document

class xpmir.letor.records.PointwiseRecord(topic: TopicRecord, document: DocumentRecord, relevance: float | None = None)[source]

Bases: object

A record from a pointwise sampler
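
As an illustration of how these records are consumed, the sketch below assumes that the record attributes mirror the constructor arguments; `topic`, `pos_doc` and `neg_doc` are placeholder TopicRecord/DocumentRecord instances obtained from a sampler or a dataset.

```python
from xpmir.letor.records import PairwiseRecord, PointwiseRecord

# Placeholders built elsewhere: `topic` (TopicRecord), `pos_doc` / `neg_doc` (DocumentRecord)
pair = PairwiseRecord(topic, pos_doc, neg_doc)
point = PointwiseRecord(topic, pos_doc, relevance=1.0)

# A pairwise loss would typically score pair.positive and pair.negative for
# the same pair.query, while a pointwise loss regresses point.relevance.
```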

Document samplers

Useful for pre-training or when learning index parameters (e.g. for FAISS).

XPM Config xpmir.documents.samplers.DocumentSampler(*, documents)[source]

Bases: Config

How to sample from a document store

documents: datamaestro_text.data.ir.DocumentStore

XPM Config xpmir.documents.samplers.HeadDocumentSampler(*, documents, max_count, max_ratio)[source]

Bases: DocumentSampler

A basic sampler that iterates over the first documents

documents: datamaestro_text.data.ir.DocumentStore

max_count: int = 0

Maximum number of documents (if 0, no limit)

max_ratio: float = 0

Maximum ratio of documents (if 0, no limit)

XPM Config xpmir.documents.samplers.RandomDocumentSampler(*, documents, max_count, max_ratio, random)[source]

Bases: DocumentSampler

A sampler that picks documents at random from the document store

Either max_count or max_ratio should be non-zero

documents: datamaestro_text.data.ir.DocumentStore

max_count: int = 0

Maximum number of documents (if 0, no limit)

max_ratio: float = 0

Maximum ratio of documents (if 0, no limit)

random: xpmir.learning.base.Random

The random number generator used for sampling
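
A configuration sketch for the two samplers above; `my_documents` is a placeholder for a DocumentStore configured elsewhere, and the Random configuration is assumed to work with its defaults.

```python
from xpmir.documents.samplers import HeadDocumentSampler, RandomDocumentSampler
from xpmir.learning.base import Random

# Iterate over the first 10% of the collection
head_sampler = HeadDocumentSampler(documents=my_documents, max_ratio=0.1)

# Or sample 100,000 documents at random
random_sampler = RandomDocumentSampler(
    documents=my_documents, max_count=100_000, random=Random()
)
```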

Adapters

XPM Config xpmir.letor.samplers.hydrators.SampleTransform[source]

Bases: Config

XPM Config xpmir.letor.samplers.hydrators.SampleHydrator(*, documentstore, querystore)[source]

Bases: SampleTransform

Base class for document/topic hydrators

documentstore: datamaestro_text.data.ir.DocumentStore

The store for document texts if needed

querystore: xpmir.datasets.adapters.TextStore

The store for query texts if needed
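
A configuration sketch, assuming the hydrator is used to fill in document and query texts for samples that only carry identifiers; `my_documents` (a DocumentStore) and `my_queries` (a TextStore) are placeholders configured elsewhere.

```python
from xpmir.letor.samplers.hydrators import SampleHydrator

hydrator = SampleHydrator(
    documentstore=my_documents,  # placeholder: datamaestro_text.data.ir.DocumentStore
    querystore=my_queries,       # placeholder: xpmir.datasets.adapters.TextStore
)
```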