
Poster

Risk-Averse Finetuning of Large Language Models

Sapana Chaudhary · Ujwal Dinesha · Dileep Kalathil · Srinivas Shakkottai


Abstract:

We consider the challenge of mitigating the generation of negative or toxic content by Large Language Models (LLMs) in response to certain prompts. We propose integrating risk-averse principles into LLM fine-tuning to minimize the occurrence of harmful outputs, particularly rare but significant events. By optimizing the Conditional Value at Risk (CVaR) risk measure, our methodology trains LLMs to avoid toxic outputs more effectively while maintaining performance on generative tasks. Empirical evaluations on sentiment modification and toxicity mitigation tasks demonstrate the efficacy of risk-averse reinforcement learning from human feedback (RLHF) in promoting a safer and more constructive online discourse environment.
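The abstract's central object, CVaR, can be illustrated with a minimal sketch: the empirical CVaR at level alpha is the mean of the worst alpha-fraction of a batch of rewards. This is a generic illustration of the risk measure, not the authors' training code; the function name and example values are hypothetical.

```python
def cvar(rewards, alpha=0.1):
    """Empirical CVaR_alpha: mean of the worst alpha-fraction of rewards.

    Maximizing this quantity (rather than the plain mean) focuses the
    optimization on rare, lowest-reward (e.g. toxic) generations.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must be in (0, 1]")
    sorted_r = sorted(rewards)          # ascending: worst rewards first
    k = max(1, int(len(sorted_r) * alpha))  # size of the worst tail
    return sum(sorted_r[:k]) / k

# One rare toxic completion dominates the tail:
rewards = [0.9, 0.8, 0.7, -1.0]
print(cvar(rewards, alpha=0.25))  # mean of the worst 25% -> -1.0
```

Because the mean of these four rewards is positive (0.35), a risk-neutral objective would barely register the toxic outlier, whereas CVaR at alpha = 0.25 is driven entirely by it.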
