NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment

Tags: llm, research paper

Author: Santosh Sawant

Published: May 7, 2024

Aligning Large Language Models (LLMs) with human values and preferences is essential for making them helpful and safe. However, building efficient tools to perform alignment can be challenging, especially for the largest and most competent LLMs, which often contain tens or hundreds of billions of parameters.

To simplify LLM alignment, Nvidia has released NeMo-Aligner, a toolkit for model alignment that efficiently scales to training on hundreds of GPUs. NeMo-Aligner ships highly optimized and scalable implementations of the major model alignment paradigms: Reinforcement Learning from Human Feedback (RLHF), Direct Preference Optimization (DPO), SteerLM, and Self-Play Fine-Tuning (SPIN). The toolkit also supports running most of these alignment techniques in a Parameter-Efficient Fine-Tuning (PEFT) setting, and it is designed for extensibility, so additional alignment techniques can be supported with minimal effort.
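
To make one of these paradigms concrete, here is a minimal PyTorch sketch of the standard DPO objective (Rafailov et al.). This illustrates the loss itself, not NeMo-Aligner's actual API; the function name and arguments are hypothetical.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss: train the policy to prefer the chosen response
    over the rejected one, relative to a frozen reference model."""
    # Implicit rewards are the policy/reference log-probability ratios
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```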

NeMo-Aligner addresses these scalability challenges by (1) building on Megatron-LM's 3D (data, tensor, and pipeline) parallelism for training, (2) taking a distributed approach to Proximal Policy Optimization (PPO) training in RLHF, and (3) integrating TensorRT-LLM-based inference optimizations into the PPO rollout stage. Combined, these optimizations let users efficiently train the largest models across hundreds of GPUs, reducing research iteration time.
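
The policy update inside that RLHF loop is the standard PPO clipped surrogate objective. The sketch below shows the objective in plain PyTorch, with hypothetical names; it is illustrative, not NeMo-Aligner's distributed implementation.

```python
import torch

def ppo_clipped_loss(logprobs, old_logprobs, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective for the RLHF policy update."""
    # Per-token probability ratio pi_new / pi_old
    ratio = torch.exp(logprobs - old_logprobs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Pessimistic (element-wise minimum) objective, negated for descent
    return -torch.min(unclipped, clipped).mean()
```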

Paper: https://lnkd.in/gMh2vqqc

GitHub: https://lnkd.in/gvqsWZQq