MistralTokenizer¶
- class torchtune.models.mistral.MistralTokenizer(path: str)[source]¶
Mistral’s implementation of the SentencePiece tokenizer
- Parameters:
path (str) – Path to pretrained tokenizer file.
Examples
>>> tokenizer = MistralTokenizer("/path/to/spm_model")
>>> tokenized_text = tokenizer.encode("Hello world!", add_bos=True, add_eos=True)
>>> print(tokenized_text)
[1, 31587, 29644, 102, 2]
- encode(text: str, add_bos: bool = True, add_eos: bool = True, trim_leading_whitespace: bool = False) List[int] [source]¶
Encode a string into a list of token IDs
- Parameters:
text (str) – The input text to be encoded, unbatched.
add_bos (bool) – Whether to prepend BOS special token (Beginning of Sentence) to the input, defaults to True.
add_eos (bool) – Whether to append EOS special token (End of Sentence) to the input, defaults to True.
trim_leading_whitespace (bool) – Whether to trim leading whitespace from the underlying SentencePiece tokenization. SentencePiece normally prepends whitespace to any tokenized text, which can cause mismatches where encode(s1) + encode(s2) != encode(s1 + s2) because of the leading whitespace added to s2 (see the sketch after this method). Default: False
- Returns:
The encoded token IDs.
- Return type:
List[int]
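A minimal sketch of how trim_leading_whitespace interacts with piecewise encoding, assuming a local SentencePiece model at the placeholder path "/path/to/spm_model"; token IDs are not shown because they depend on the model file.
>>> from torchtune.models.mistral import MistralTokenizer
>>> tokenizer = MistralTokenizer("/path/to/spm_model")
>>> s1, s2 = "Hello", " world!"
>>> # Encoding the pieces separately generally differs from encoding the
>>> # joined string, because SentencePiece prepends whitespace to s2.
>>> separate = tokenizer.encode(s1, add_eos=False) + tokenizer.encode(s2, add_bos=False)
>>> joined = tokenizer.encode(s1 + s2)
>>> separate == joined  # often False without trimming
>>> # Trimming the leading whitespace on s2 is intended to make the
>>> # piecewise encoding match the encoding of the joined string.
>>> trimmed = tokenizer.encode(s1, add_eos=False) + tokenizer.encode(
...     s2, add_bos=False, trim_leading_whitespace=True
... )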
- tokenize_messages(messages: List[Message], max_seq_len: Optional[int] = None) Tuple[List[int], List[bool]] [source]¶
Tokenize a list of messages one at a time then concatenate them, returning a list of tokens and a list of masks.
Note: with SentencePiece, in general encode(s1 + s2) != encode(s1) + encode(s2) due to whitespace handling. We work around this by prepending s2 with a known token and slicing that token off the beginning of the tokenized s2. A sketch of consuming the returned tokens and mask follows the example below.
Example
>>> tokenizer = MistralTokenizer(tokenizer_path)
>>> messages = [
...     Message(role="system", content="system message\n", masked=True),
...     Message(role="user", content="user prompt\n", masked=True),
...     Message(role="assistant", content="assistant response\n"),
... ]
>>> # tokenize_messages encodes messages separately and concatenates them
>>> tokenizer.tokenize_messages(messages, max_seq_len)[0]
[1, 1788, 2643, 13, 1792, 9508, 13, 465, 22137, 2933, 2]
>>> # Same result as encoding the full string in one go
>>> tokenizer.encode(''.join([message.content for message in messages]))
[1, 1788, 2643, 13, 1792, 9508, 13, 465, 22137, 2933, 2]
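A minimal sketch of consuming the (tokens, mask) pair returned by tokenize_messages, assuming Message is importable from torchtune.data; the ignore index of -100 is an assumed loss-masking convention used for illustration, not something prescribed by this class.
>>> from torchtune.data import Message
>>> from torchtune.models.mistral import MistralTokenizer
>>> tokenizer = MistralTokenizer("/path/to/spm_model")  # placeholder path
>>> messages = [
...     Message(role="user", content="user prompt\n", masked=True),
...     Message(role="assistant", content="assistant response\n"),
... ]
>>> tokens, mask = tokenizer.tokenize_messages(messages, max_seq_len=512)
>>> # Positions where mask is True (here, the user prompt) are typically
>>> # excluded from the training loss.
>>> labels = [-100 if m else t for t, m in zip(tokens, mask)]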