chat_dataset
- torchtune.datasets.chat_dataset(tokenizer: ModelTokenizer, *, source: str, conversation_column: str, conversation_style: str, train_on_input: bool = False, new_system_prompt: Optional[str] = None, packed: bool = False, **load_dataset_kwargs: Dict[str, Any]) → Union[SFTDataset, PackedDataset]
Configure a custom dataset with conversations between user and model assistant.
This builder function can be used to configure a custom chat dataset directly from the yaml config as an alternative to SFTDataset, as it is made to be config friendly. The dataset is expected to contain a single column with the conversations:
| conversations                          |
|----------------------------------------|
| [{"role": "user", "content": Q1},      |
|  {"role": "assistant", "content": A1}] |
This will be converted to:
messages = [
    Message(role="user", content="Q1"),
    Message(role="assistant", content="A1"),
]
This list of messages is then tokenized for model training.
You may have a different structure for your conversations, such as different role names or different keys in the json structure. You can use the conversation_style parameter to choose from standard formats such as "sharegpt" (see ShareGPTToMessages) or "openai" (see OpenAIToMessages). If your dataset is not in one of these formats, we recommend creating a custom message transform and using it in a custom dataset builder function similar to chat_dataset.

If your column names are different, use the conversation_column parameter to point towards the column with the conversations.

Masking of the prompt during training is controlled by the train_on_input flag, which is set to False by default.

- If train_on_input is True, the prompt is used during training and contributes to the loss.
- If train_on_input is False, the prompt is masked out (tokens replaced with -100).
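Conceptually, a custom message transform just maps one raw sample to a list of role/content messages. Below is a minimal plain-Python sketch of that idea; the Message dataclass is a stand-in for torchtune.data.Message (not the real class), and the "from"/"value" keys are a hypothetical non-standard format:

```python
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class Message:
    # Stand-in for torchtune.data.Message; only role and content are modeled.
    role: str
    content: str


# Hypothetical mapping from non-standard role names to the standard ones.
ROLE_MAP = {"human": "user", "gpt": "assistant"}


class MyMessageTransform:
    """Convert one raw sample into a dict holding a list of Messages."""

    def __call__(self, sample: Dict[str, Any]) -> Dict[str, List[Message]]:
        messages = [
            Message(role=ROLE_MAP[turn["from"]], content=turn["value"])
            for turn in sample["conversations"]
        ]
        return {"messages": messages}


transform = MyMessageTransform()
out = transform(
    {
        "conversations": [
            {"from": "human", "value": "Q1"},
            {"from": "gpt", "value": "A1"},
        ]
    }
)
```

A transform like this would then be passed to a custom dataset builder in place of the built-in "sharegpt"/"openai" converters.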
- Parameters:
  - tokenizer (ModelTokenizer) – Tokenizer used by the model that implements the tokenize_messages method.
  - source (str) – path to dataset repository on Hugging Face. For local datasets, define source as the data file type (e.g. "json", "csv", "text"), pass in the filepath in data_files, and set split="train". See Hugging Face's load_dataset for more details.
  - conversation_column (str) – name of the column containing the conversations.
  - conversation_style (str) – string specifying the expected style of conversations in the dataset for automatic conversion to the Message structure. Supported styles are: "sharegpt", "openai".
  - train_on_input (bool) – Whether the model is trained on the prompt or not. Default is False.
  - new_system_prompt (Optional[str]) – if specified, prepend a system message. This can serve as instructions to guide the model response. Default is None.
  - packed (bool) – Whether or not to pack the dataset to max_seq_len prior to training. Default is False.
  - **load_dataset_kwargs (Dict[str, Any]) – additional keyword arguments to pass to load_dataset, such as data_files or split.
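The -100 value used when train_on_input=False is PyTorch's default ignore_index for cross-entropy, so masked prompt positions contribute nothing to the loss. A minimal sketch of the labeling pattern (the token ids and prompt length here are made up for illustration):

```python
from typing import List

IGNORE_INDEX = -100  # PyTorch's default ignore_index for CrossEntropyLoss


def build_labels(tokens: List[int], prompt_len: int, train_on_input: bool) -> List[int]:
    """Copy tokens as labels, masking the prompt when train_on_input is False."""
    if train_on_input:
        return list(tokens)
    # Replace prompt positions with IGNORE_INDEX so they are skipped by the loss.
    return [IGNORE_INDEX] * prompt_len + list(tokens[prompt_len:])


# Hypothetical ids: the first 4 tokens are the user prompt, the rest the answer.
tokens = [12, 87, 5, 33, 901, 44, 7]
labels = build_labels(tokens, prompt_len=4, train_on_input=False)
```

With train_on_input=True the labels are simply a copy of the tokens, so the prompt also contributes to the loss.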
Examples:
my_dataset.json
[
    {
        "conversations": [
            {
                "from": "human",
                "value": "What time is it in London?",
            },
            {
                "from": "gpt",
                "value": "It is 10:00 AM in London.",
            },
        ],
    },
    {
        "conversations": [
            ...
        ],
    },
    ...,
]
>>> from torchtune.datasets import chat_dataset
>>> dataset = chat_dataset(
...     tokenizer=tokenizer,
...     source="json",
...     data_files="my_dataset.json",
...     conversation_column="conversations",
...     conversation_style="sharegpt",
...     train_on_input=False,
...     packed=False,
...     split="train",
... )
>>> tokens = dataset[0]["tokens"]
>>> tokenizer.decode(tokens)
"What time is it in London?It is 10:00 AM in London."
This can also be accomplished via the yaml config:
dataset:
  _component_: torchtune.datasets.chat_dataset
  source: json
  data_files: my_dataset.json
  conversation_column: conversations
  conversation_style: sharegpt
  train_on_input: False
  packed: False
  split: train
- Returns:
  the configured SFTDataset, or PackedDataset if packed=True
- Return type:
  Union[SFTDataset, PackedDataset]
- Raises:
ValueError – if the conversation format is not supported