Instruction-tune Llama 2 with TRL and SFTTrainer
This blog post is an extended guide on instruction-tuning Llama 2 from Meta AI. The post focuses on creating the instruction dataset, which we then use to fine-tune the base Llama 2 model to follow our instructions.
The goal is to create a model which can generate instructions based on an input, which can then be used to create instruction data from existing inputs. That's especially helpful if you want to personalize models for, e.g., tweeting or email writing: you could generate an instruction dataset from your own emails and then train a model to mimic your email writing.
Let's get started. In this blog, we are going to:
- Define our use case in detail and create a prompt template for our instructions
- Create an instruction dataset
- Instruction-tune Llama 2 using `trl` and the `SFTTrainer`
- Test the Model and run Inference
1. Define our use case in detail and create a template for our instructions
Before we describe our use case, we need to understand what an instruction actually is.
An instruction is a piece of text or prompt that is provided to an LLM, like Llama, GPT-4, or Claude, to guide it to generate a response. Instructions allow humans to steer the conversation and constrain the language model’s output to be more natural, useful, and aligned with the user’s goals. Crafting clear, well-formulated instructions is key to productive conversations.
Examples of instructions are listed below in the table.
| Capability | Example Instruction |
| --- | --- |
| Brainstorming | Provide a diverse set of creative ideas for new flavors of ice cream. |
| Classification | Categorize these movies as either comedy, drama, or horror based on the plot summary. |
| Closed QA | Answer the question ‘What is the capital of France?’ with a single word. |
| Generation | Write a poem in the style of Robert Frost about nature and the changing seasons. |
| Information Extraction | Extract the names of the main characters from this short story. |
| Open QA | Why do leaves change color in autumn? Explain the scientific reasons. |
| Summarization | Summarize this article on recent advancements in renewable energy in 2-3 sentences. |
As described in the beginning, we want to fine-tune a model to be able to generate instructions based on an input (an existing output/response). We want to use this as a way to create synthetic datasets to personalize LLMs and Agents.
Converting the idea into a basic prompt template following the Alpaca format, we get:
### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.

### Input:
Dear [boss name],

I'm writing to request next week, August 1st through August 4th, off as paid time off.

I have some personal matters to attend to that week that require me to be out of the office. I wanted to give you as much advance notice as possible so you can plan accordingly while I am away.

Please let me know if you need any additional information from me or have any concerns with me taking next week off. I appreciate you considering this request.

Thank you, [Your name]

### Response:
Write an email to my boss that I need next week 08/01 - 08/04 off.
2. Create an instruction dataset
After defining our use case and prompt template, we need to create our instruction dataset. Creating a high-quality instruction dataset is key to a good-performing model. Research such as “Less Is More for Alignment” shows that a high-quality, low-quantity (~1,000 samples) dataset can achieve the same performance as lower-quality, high-quantity datasets.
There are several ways to create an instruction dataset, including:
- Using an existing dataset and converting it into an instruction dataset, e.g., FLAN
- Using existing LLMs to synthetically create instruction datasets, e.g., Alpaca
- Using humans to create instruction datasets, e.g., Dolly.
Each of these methods has its own advantages and disadvantages, and the right choice depends on your budget, time, and quality requirements. For example, using an existing dataset is the easiest but might not be tailored to your specific use case, while using humans might be the most accurate but can be time-consuming and expensive. It is also possible to combine several methods to create an instruction dataset, as shown in Orca: Progressive Learning from Complex Explanation Traces of GPT-4.
To keep it simple, we are going to use Dolly, an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories outlined in the InstructGPT paper, including brainstorming, classification, closed QA, generation, information extraction, open QA, and summarization.
Let’s start coding, but first, let’s install our dependencies.
!pip install "transformers==4.34.0" "datasets==2.13.0" "peft==0.4.0" "accelerate==0.23.0" "bitsandbytes==0.41.1" "trl==0.4.7" "safetensors>=0.3.1" --upgrade
To load the `databricks/databricks-dolly-15k` dataset, we use the `load_dataset()` method from the 🤗 Datasets library.
from datasets import load_dataset
from random import randrange
# Load dataset from the hub
dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
print(f"dataset size: {len(dataset)}")
print(dataset[randrange(len(dataset))])
# dataset size: 15011
Found cached dataset json (/home/ubuntu/.cache/huggingface/datasets/databricks___json/databricks--databricks-dolly-15k-7427aa6e57c34282/0.0.0/e347ab1c932092252e717ff3f949105a4dd28b27e842dd53157d2f72e276c2e4)
dataset size: 15011
{'instruction': 'On what month and day was Antwan Deon Odom born?', 'context': 'Antwan Deon Odom (born September 24, 1981) is a former American football defensive end. He was drafted by the Tennessee Titans in the second round of the 2004 NFL Draft. He played college football at Alabama. He has also played for the Cincinnati Bengals.', 'response': 'September 24', 'category': 'closed_qa'}
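If you want a quick sense of how the 15k samples are spread across the Dolly categories before formatting them, a small optional check like the following works (this snippet is my addition, not part of the original walkthrough):
```python
from collections import Counter

# Optional: count how many samples fall into each Dolly category
# (open_qa, general_qa, classification, closed_qa, brainstorming, ...)
category_counts = Counter(dataset["category"])
print(category_counts.most_common())
```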
To instruction-tune our model, we need to convert our structured examples into a collection of tasks described via instructions. We define a formatting function `format_instruction` that takes a sample and returns a string with our format instruction.
def format_instruction(sample):
return f"""### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.
### Input:
{sample['response']}
### Response:
{sample['instruction']}
"""
Let’s test our formatting function on a random example.
from random import randrange
print(format_instruction(dataset[randrange(len(dataset))]))
### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.
### Input:
Sir Dorabji Tata and Allied Trusts and Sir Ratan Tata Trust
### Response:
What are the names of Tata trusts which Ratan Tata heads?
3. Instruction-tune Llama 2 using `trl` and the `SFTTrainer`
We will use the recently introduced method from the paper “QLoRA: Efficient Finetuning of Quantized LLMs” by Tim Dettmers et al. QLoRA is a new technique to reduce the memory footprint of large language models during finetuning, without sacrificing performance. The TL;DR of how QLoRA works is:
- Quantize the pre-trained model to 4 bits and freeze it.
- Attach small, trainable adapter layers (LoRA).
- Finetune only the adapter layers while using the frozen quantized model for context.
If you want to learn more about QLoRA and how it works, I recommend reading the Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA blog post.
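As a rough back-of-the-envelope illustration of why this saves so much memory (my own sketch, not from the paper): for a single 4096x4096 attention projection in Llama 2 7B, and the rank r=64 we use later in the LoraConfig, the LoRA adapter adds only about 3% as many trainable parameters as the frozen weight it augments.
```python
# Back-of-the-envelope LoRA parameter count for one 4096x4096 projection
# (4096 is the Llama 2 7B hidden size; r=64 matches the LoraConfig below)
hidden_size = 4096
r = 64

frozen_params = hidden_size * hidden_size  # 16,777,216 weights, kept 4-bit quantized and frozen
lora_params = 2 * hidden_size * r          # 524,288 trainable weights (A: r x 4096, B: 4096 x r)
print(f"trainable fraction per projection: {lora_params / frozen_params:.2%}")  # ~3.13%
```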
Flash Attention
Flash Attention is a method that reorders the attention computation and leverages classical techniques (tiling, recomputation) to significantly speed it up and reduce memory usage from quadratic to linear in sequence length. It is based on the paper “FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness”. The TL;DR: it accelerates training by up to 3x. Learn more at FlashAttention. Flash Attention is currently only available for Ampere (A10, A40, A100, …) & Hopper (H100, …) GPUs. You can check if your GPU is supported and install it using the following command:
Note: If your machine has less than 96GB of RAM and lots of CPU cores, reduce the number of `MAX_JOBS`. On the `g5.2xlarge` we used `4`.
python -c "import torch; assert torch.cuda.get_device_capability()[0] >= 8, 'Hardware not supported for Flash Attention'"
pip install ninja packaging
MAX_JOBS=4 pip install flash-attn --no-build-isolation
Installing flash attention can take quite a bit of time (10-45 minutes).
The example supports the use of Flash Attention for all Llama checkpoints, but it is not enabled by default. To use Flash Attention, change the value of `use_flash_attention` to `True`:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
use_flash_attention = False

# Hugging Face model id
model_id = "NousResearch/Llama-2-7b-hf"  # non-gated
# model_id = "meta-llama/Llama-2-7b-hf"  # gated

# BitsAndBytesConfig int-4 config
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    use_cache=False,
    use_flash_attention_2=use_flash_attention,
    device_map="auto",
)
model.config.pretraining_tp = 1

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
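If you want to sanity-check that the 4-bit quantization actually kicked in, you can look at the model's memory footprint (this check is my addition; the exact number will vary slightly):
```python
# Optional sanity check: the 4-bit quantized 7B model should occupy roughly 3.5-4 GB
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```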
The `SFTTrainer` supports a native integration with `peft`, which makes it super easy to efficiently instruction-tune LLMs. We only need to create our `LoraConfig` and provide it to the trainer.
from peft import LoraConfig, prepare_model_for_kbit_training, get_peft_model
# LoRA config based on QLoRA paper
peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM",
)

# prepare model for training
model = prepare_model_for_kbit_training(model)
Before we can start our training, we need to define the hyperparameters (`TrainingArguments`) we want to use.
from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="llama-7-int4-dolly",
    num_train_epochs=3,
    per_device_train_batch_size=6 if use_flash_attention else 4,
    gradient_accumulation_steps=2,
    gradient_checkpointing=True,
    optim="paged_adamw_32bit",
    logging_steps=10,
    save_strategy="epoch",
    learning_rate=2e-4,
    bf16=True,
    fp16=False,
    tf32=True,
    max_grad_norm=0.3,
    warmup_ratio=0.03,
    lr_scheduler_type="constant",
    disable_tqdm=True,  # disable tqdm since with packing values are incorrect
)

# Upcast layers for flash attention
if use_flash_attention:
    from utils.llama_patch import upcast_layer_for_flash_attention
    torch_dtype = torch.bfloat16 if args.bf16 else torch.float16 if args.fp16 else torch.float32
    model = upcast_layer_for_flash_attention(model, torch_dtype)

model = get_peft_model(model, peft_config)
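After wrapping the model with `get_peft_model`, it can be reassuring to confirm that only the LoRA adapter weights are trainable. `peft` provides a helper for this (optional check, not part of the original walkthrough):
```python
# Prints the number of trainable LoRA parameters vs. the total parameter count;
# only the adapter weights should show up as trainable
model.print_trainable_parameters()
```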
We now have every building block we need to create our `SFTTrainer` and start training our model.
from trl import SFTTrainer
max_seq_length = 2048  # max sequence length for model and packing of the dataset

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    packing=True,
    formatting_func=format_instruction,
    args=args,
)
Start training our model by calling the `train()` method on our `Trainer` instance.
# train
# there will not be a progress bar since tqdm is disabled
trainer.train()
# save model
trainer.save_model()
You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
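Because tqdm is disabled, you won't see a live progress bar; if you want to check the loss curve after the run, the logged metrics are still available on the trainer (a small optional sketch):
```python
# The Trainer still records metrics every `logging_steps`, even with tqdm disabled
for entry in trainer.state.log_history:
    if "loss" in entry:
        print(f"step {entry['step']}: loss {entry['loss']:.4f}")
```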
The training without Flash Attention took 03:08:00 on a `g5.2xlarge`. The instance costs 1.212$/h, which brings us to a total cost of ~3.7$. The training with Flash Attention took 02:08:00 on a `g5.2xlarge`, which brings us to a total cost of ~2.6$.
The results with Flash Attention are impressive: training is roughly 1.5x faster and 30% cheaper.
4. Test Model and run Inference
After the training is done we want to run and test our model. We will use `peft` and `transformers` to load our LoRA adapter into our model.
if use_flash_attention:
    # unpatch flash attention
    from utils.llama_patch import unplace_flash_attn_with_attn
    unplace_flash_attn_with_attn()
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
= "llama-7-int4-dolly"
args.output_dir
# load base LLM model and tokenizer
= AutoPeftModelForCausalLM.from_pretrained(
model
args.output_dir,=True,
low_cpu_mem_usage=torch.float16,
torch_dtype=True,
load_in_4bit
) = AutoTokenizer.from_pretrained(args.output_dir) tokenizer
Let’s load the dataset again with a random sample to try to generate an instruction.
from datasets import load_dataset
from random import randrange
# Load dataset from the hub and get a sample
= load_dataset("databricks/databricks-dolly-15k", split="train")
dataset = dataset[randrange(len(dataset))]
sample
= f"""### Instruction:
prompt Use the Input below to create an instruction, which could have been used to generate the input using an LLM.
### Input:
{sample['response']}
### Response:
"""
= tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
input_ids # with torch.inference_mode():
= model.generate(input_ids=input_ids, max_new_tokens=100, do_sample=True, top_p=0.9,temperature=0.9)
outputs
print(f"Prompt:\n{sample['response']}\n")
print(f"Generated instruction:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):]}")
print(f"Ground truth:\n{sample['instruction']}")
Nice, our model works! If we want to accelerate our model, we can deploy it with Text Generation Inference. To do so, we first need to merge our adapter weights into the base model.
from peft import AutoPeftModelForCausalLM
model = AutoPeftModelForCausalLM.from_pretrained(
    args.output_dir,
    low_cpu_mem_usage=True,
)

# Merge LoRA and base model
merged_model = model.merge_and_unload()

# Save the merged model
merged_model.save_pretrained("merged_model", safe_serialization=True)
tokenizer.save_pretrained("merged_model")

# push merged model to the hub
# merged_model.push_to_hub("user/repo")
# tokenizer.push_to_hub("user/repo")
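To double-check that the merge worked, you can load the merged weights like any regular Transformers checkpoint and run a quick generation. This is a minimal sketch; the variable names and the example input are mine, not part of the original guide:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint like any other Hugging Face model
merged = AutoModelForCausalLM.from_pretrained("merged_model", torch_dtype=torch.float16, device_map="auto")
merged_tokenizer = AutoTokenizer.from_pretrained("merged_model")

# Same prompt format we trained on, with a hypothetical example input
prompt = """### Instruction:
Use the Input below to create an instruction, which could have been used to generate the input using an LLM.
### Input:
Paris is the capital of France.
### Response:
"""
inputs = merged_tokenizer(prompt, return_tensors="pt").to(merged.device)
outputs = merged.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9, temperature=0.9)
print(merged_tokenizer.decode(outputs[0], skip_special_tokens=True)[len(prompt):])
```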