Generative AI Model Alignment (1fca595d-b140-4ce0-8fd8-c4c6bee87540)
When training or fine-tuning a generative AI model, use techniques that improve the model's alignment with safety, security, and content policies.
Fine-tuning can weaken or remove a generative AI model's built-in safety mechanisms. Techniques such as Supervised Fine-Tuning, Reinforcement Learning from Human Feedback (RLHF) or from AI Feedback (RLAIF), and Targeted Safety Context Distillation can restore and improve the model's safety alignment.
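As one concrete illustration of the RLHF technique mentioned above: the reward model used in RLHF is commonly trained with a pairwise (Bradley-Terry) preference loss that pushes it to score the human-preferred response higher than the rejected one. The sketch below is a minimal, framework-free example of that loss in plain Python; the function name and the reward scores are hypothetical, not from any specific library.

```python
import math


def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise (Bradley-Terry) preference loss used to train RLHF
    reward models: -log(sigmoid(r_chosen - r_rejected)).

    The loss is small when the reward model already scores the
    human-preferred (e.g. safer) response higher, and large when the
    ordering is reversed.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))


# Hypothetical reward scores: a safe refusal vs. a harmful completion.
aligned = preference_loss(reward_chosen=2.0, reward_rejected=-1.0)
misaligned = preference_loss(reward_chosen=-1.0, reward_rejected=2.0)
print(aligned < misaligned)  # → True: the aligned ordering has lower loss
```

Minimizing this loss over a dataset of human preference pairs yields the reward model whose scores then guide the reinforcement-learning stage of RLHF (or RLAIF, when the preferences come from an AI judge instead of humans).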