Conversation

Learning

XPM Config xpmir.conversation.learning.DatasetConversationEntrySampler(*, datasets)

Bases: BaseSampler, DatasetConversationBase

Submit type: xpmir.conversation.learning.DatasetConversationEntrySampler

Uses a conversation dataset and its topic record entries

datasets: List[datamaestro_text.data.conversation.base.ConversationDataset]

The conversation datasets
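
A minimal usage sketch, using only the `datasets` parameter from the signature above. `conversation_dataset` is a placeholder for a ConversationDataset prepared elsewhere (e.g. through datamaestro) and is not defined here:

```python
from xpmir.conversation.learning import DatasetConversationEntrySampler

# conversation_dataset: a datamaestro_text ConversationDataset prepared
# elsewhere (placeholder, not defined in this snippet)
sampler = DatasetConversationEntrySampler(datasets=[conversation_dataset])
```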

XPM Config xpmir.conversation.learning.reformulation.ConversationRepresentationEncoder

Bases: TextEncoderBase[List[Record], RepresentationOutput], ABC

Submit type: xpmir.conversation.learning.reformulation.ConversationRepresentationEncoder

XPM Config xpmir.conversation.learning.reformulation.DecontextualizedQueryConverter

Bases: Converter[Record, str]

Submit type: xpmir.conversation.learning.reformulation.DecontextualizedQueryConverter

CoSPLADE

XPM Config xpmir.conversation.models.cosplade.AsymetricMSEContextualizedRepresentationLoss(*, weight)

Bases: AlignmentLoss[CoSPLADEOutput, TextsRepresentationOutput]

Submit type: xpmir.conversation.models.cosplade.AsymetricMSEContextualizedRepresentationLoss

Computes the asymmetric loss for CoSPLADE

weight: float = 1.0

Weight for this loss

version: int = 2 (constant)

Current version
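
A minimal configuration sketch, using only the `weight` parameter from the signature above (the value shown is the default and purely illustrative):

```python
from xpmir.conversation.models.cosplade import (
    AsymetricMSEContextualizedRepresentationLoss,
)

# weight=1.0 matches the default; adjust it to change how much this loss
# contributes when combined with other training losses.
loss = AsymetricMSEContextualizedRepresentationLoss(weight=1.0)
```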

XPM Config xpmir.conversation.models.cosplade.CoSPLADE(*, history_size, queries_encoder, history_encoder)

Bases: ConversationRepresentationEncoder

Submit type: xpmir.conversation.models.cosplade.CoSPLADE

CoSPLADE model

history_size: int = 0

Size of history to take into account (0 for infinite)

queries_encoder: xpmir.neural.splade.SpladeTextEncoderV2[List[List[str]]]

Encoder for the query history (the first query being the current one)

history_encoder: xpmir.neural.splade.SpladeTextEncoderV2[Tuple[str, str]]

Encoder for (query, answer) pairs

version: int = 2 (constant)

Current version
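
A minimal sketch of wiring a CoSPLADE configuration from the parameters listed above. `queries_encoder` and `history_encoder` stand for SpladeTextEncoderV2 configurations built elsewhere; their construction depends on the chosen tokenizer and encoder and is not shown here:

```python
from xpmir.conversation.models.cosplade import CoSPLADE

# queries_encoder: SpladeTextEncoderV2[List[List[str]]] (placeholder, built elsewhere)
# history_encoder: SpladeTextEncoderV2[Tuple[str, str]] (placeholder, built elsewhere)
cosplade = CoSPLADE(
    history_size=0,  # 0 = take the full conversation history into account
    queries_encoder=queries_encoder,
    history_encoder=history_encoder,
)
```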