LightningIRTokenizer

class lightning_ir.base.tokenizer.LightningIRTokenizer(*args, query_length: int = 32, doc_length: int = 512, **kwargs)[source]

Bases: object

Base class for LightningIR tokenizers. Derived classes implement the tokenize method to handle query and document tokenization. This class acts as a mixin for a transformers.PreTrainedTokenizer backbone tokenizer.
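For example, a concrete tokenizer returned by from_pretrained() is an instance of both this mixin and the backbone tokenizer (a minimal sketch; the top-level import of BiEncoderTokenizer is an assumption, not part of this API reference):

>>> # Minimal sketch of the mixin pattern; BiEncoderTokenizer and its
>>> # top-level import path are assumptions
>>> from lightning_ir import BiEncoderTokenizer, LightningIRTokenizer
>>> from transformers import PreTrainedTokenizerBase
>>> tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")
>>> isinstance(tokenizer, LightningIRTokenizer), isinstance(tokenizer, PreTrainedTokenizerBase)
(True, True)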

__init__(*args, query_length: int = 32, doc_length: int = 512, **kwargs)[source]

Initializes the tokenizer.

Parameters:
  • query_length (int, optional) – Maximum number of tokens per query, defaults to 32

  • doc_length (int, optional) – Maximum number of tokens per document, defaults to 512
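Derived tokenizers are usually created via from_pretrained(), which forwards these keyword arguments to __init__ (a hedged sketch; BiEncoderTokenizer and the attribute names mirroring the parameter names are assumptions):

>>> # Hedged sketch: assumes from_pretrained() forwards query_length and
>>> # doc_length and that they are stored under the same attribute names
>>> from lightning_ir import BiEncoderTokenizer
>>> tokenizer = BiEncoderTokenizer.from_pretrained(
...     "bert-base-uncased", query_length=16, doc_length=256
... )
>>> tokenizer.query_length, tokenizer.doc_length
(16, 256)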

Methods

__init__(*args[, query_length, doc_length])

Initializes the tokenizer.

from_pretrained(model_name_or_path, *args, ...)

Loads a pretrained tokenizer.

tokenize([queries, docs])

Tokenizes queries and documents.

config_class

Configuration class for the tokenizer.

alias of LightningIRConfig

classmethod from_pretrained(model_name_or_path: str, *args, **kwargs) → LightningIRTokenizer[source]

Loads a pretrained tokenizer. Wraps the transformers.PreTrainedTokenizer.from_pretrained method to return a derived LightningIRTokenizer class. See LightningIRTokenizerClassFactory for more details.

>>> # Loading using model class and backbone checkpoint
>>> type(BiEncoderTokenizer.from_pretrained("bert-base-uncased"))
<class 'lightning_ir.base.class_factory.BiEncoderBertTokenizerFast'>
>>> # Loading using base class and backbone checkpoint
>>> type(LightningIRTokenizer.from_pretrained("bert-base-uncased", config=BiEncoderConfig()))
<class 'lightning_ir.base.class_factory.BiEncoderBertTokenizerFast'>
Parameters:
  • model_name_or_path (str) – Name or path of the pretrained tokenizer

Raises:
  ValueError – If called on the abstract class LightningIRTokenizer and no config is passed

Returns:
  A derived LightningIRTokenizer consisting of a backbone tokenizer and a LightningIRTokenizer mixin

Return type:
  LightningIRTokenizer

tokenize(queries: Sequence[str] | str | None = None, docs: Sequence[str] | str | None = None, **kwargs) → Dict[str, BatchEncoding][source]

Tokenizes queries and documents.

Parameters:
  • queries (str | Sequence[str] | None, optional) – Queries to tokenize, defaults to None

  • docs (str | Sequence[str] | None, optional) – Documents to tokenize, defaults to None

Raises:
  NotImplementedError – Must be implemented by the derived class

Returns:
  Dictionary of tokenized queries and documents

Return type:
  Dict[str, BatchEncoding]
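A usage sketch (the concrete subclass and the returned dictionary keys are assumptions; calling tokenize on the abstract base class raises NotImplementedError):

>>> # Sketch only: call tokenize() on a derived class such as
>>> # BiEncoderTokenizer; the key names shown are illustrative and
>>> # depend on the subclass
>>> from lightning_ir import BiEncoderTokenizer
>>> tokenizer = BiEncoderTokenizer.from_pretrained("bert-base-uncased")
>>> encodings = tokenizer.tokenize(
...     queries="what is information retrieval?",
...     docs=["Information retrieval finds relevant documents for a query."],
... )
>>> sorted(encodings.keys())
['doc_encoding', 'query_encoding']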