alpaca_dataset
- torchtune.datasets.alpaca_dataset(tokenizer: ModelTokenizer, *, source: str = 'tatsu-lab/alpaca', column_map: Optional[Dict[str, str]] = None, train_on_input: bool = True, packed: bool = False, filter_fn: Optional[Callable] = None, split: str = 'train', **load_dataset_kwargs: Dict[str, Any]) → Union[SFTDataset, PackedDataset]
Support for the family of Alpaca-style datasets from Hugging Face Datasets, using the data input format and prompt template from the original alpaca codebase, where instruction, input, and output are fields from the dataset. This template is automatically applied, independent of any prompt template configured in the tokenizer.

Masking of the prompt during training is controlled by the train_on_input flag, which is set to True by default.
- If train_on_input is True, the prompt is used during training and contributes to the loss.
- If train_on_input is False, the prompt is masked out (tokens replaced with -100).
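For reference, the prompt format from the original alpaca codebase looks roughly like the sketch below. This is an illustration only; the exact template is applied internally by AlpacaToMessages, and samples without an input field use a shorter variant.

>>> # Illustrative only: approximate Alpaca prompt format for samples
>>> # that include an "input" field. This is applied for you by
>>> # AlpacaToMessages, not supplied by the user.
>>> prompt = (
...     "Below is an instruction that describes a task, paired with an input "
...     "that provides further context. Write a response that appropriately "
...     "completes the request.\n\n"
...     "### Instruction:\n{instruction}\n\n"
...     "### Input:\n{input}\n\n"
...     "### Response:\n"
... )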
- Parameters:
  - tokenizer (ModelTokenizer) – Tokenizer used by the model that implements the tokenize_messages method.
  - source (str) – Path to the dataset repository on Hugging Face. For local datasets, define source as the data file type (e.g. "json", "csv", "text") and pass the filepath in data_files (see the local-file example below). See Hugging Face's load_dataset for more details. Default is tatsu-lab/alpaca.
  - column_map (Optional[Dict[str, str]]) – A mapping from the expected columns in the message transform AlpacaToMessages to the new column names in the dataset. Keys should be "instruction", "input", and "output", and values should be the actual column names. If None, uses the default column names "instruction", "input", and "output" in tatsu-lab/alpaca.
  - train_on_input (bool) – Whether the model is trained on the prompt or not. Default is True.
  - packed (bool) – Whether or not to pack the dataset to max_seq_len prior to training. Default is False.
  - filter_fn (Optional[Callable]) – Callable used to filter the dataset prior to any pre-processing. See the Hugging Face docs for more details.
  - split (str) – The split argument for datasets.load_dataset. You can use this argument to load a subset of a given split, e.g. split="train[:10%]". Default is "train".
  - **load_dataset_kwargs (Dict[str, Any]) – Additional keyword arguments to pass to load_dataset. See Hugging Face's API ref for more details.
- Returns:
dataset configured with source data and transform
- Return type:
Union[SFTDataset, PackedDataset]
- Raises:
ValueError – If packed is True and max_seq_len is not set on the tokenizer.
Example
>>> alpaca_ds = alpaca_dataset(tokenizer=tokenizer)
>>> for batch in DataLoader(alpaca_ds, batch_size=8):
...     print(f"Batch size: {len(batch)}")
Batch size: 8
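The following is a hedged sketch of loading a local Alpaca-format file with renamed columns; the file path ("my_data/alpaca_format.json") and the column names ("prompt", "context", "answer") are hypothetical stand-ins for your own data:

>>> # Hypothetical local file and column names, shown for illustration.
>>> # data_files is forwarded to load_dataset via **load_dataset_kwargs.
>>> alpaca_ds = alpaca_dataset(
...     tokenizer=tokenizer,
...     source="json",
...     data_files="my_data/alpaca_format.json",
...     column_map={"instruction": "prompt", "input": "context", "output": "answer"},
...     split="train",
... )

With the default packed=False this returns an SFTDataset; passing packed=True returns a PackedDataset instead, provided max_seq_len is set on the tokenizer.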