

Poster
in
Workshop: Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS'23)

DPZero: Dimension-Independent and Differentially Private Zeroth-Order Optimization

Liang Zhang · Kiran Thekumparampil · Sewoong Oh · Niao He

Keywords: [ Zeroth-order optimization ] [ Dimension-Independent ] [ Large language models ] [ differential privacy ]


Abstract:

Today’s widespread practice of fine-tuning pretrained large language models (LLMs) on domain-specific data faces two grand challenges in memory and privacy. First, as LLMs continue to expand, encompassing billions of parameters, the memory demands of gradient-based training methods via backpropagation become prohibitively high. Second, given the tendency of LLMs to memorize and disclose sensitive training data, the privacy of fine-tuning data must be respected. To this end, we explore the potential of zeroth-order methods in differentially private optimization for fine-tuning LLMs. Zeroth-order methods, which rely solely on forward passes, substantially reduce memory consumption during training. However, directly combining them with standard differential privacy mechanisms incurs dimension-dependent complexity. To bridge the gap, we introduce DPZero, a novel differentially private zeroth-order algorithm with nearly dimension-independent rates. Our theoretical analysis reveals that its complexity hinges primarily on the problem's intrinsic dimension and exhibits only a logarithmic dependence on the ambient dimension. This renders DPZero a highly practical option for real-world LLM deployments.
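The general recipe the abstract describes—estimating gradients from forward passes alone and privatizing the result—can be illustrated with a minimal sketch. This is not the authors' DPZero algorithm; the function name, hyperparameters, and the simple clip-and-noise scheme below are all illustrative assumptions. The key point it mirrors is that the quantity being clipped and noised is a *scalar* directional derivative, so the injected noise need not grow with the number of parameters:

```python
import numpy as np

def dp_zo_step(loss_fn, theta, rng, mu=1e-3, lr=1e-2, clip=1.0, sigma=1.0):
    """One differentially private zeroth-order update (illustrative sketch).

    Two forward passes estimate a directional derivative along a random
    direction u; the scalar estimate is clipped (bounding sensitivity)
    and perturbed with Gaussian noise before the parameter update.
    """
    u = rng.standard_normal(theta.shape)        # random perturbation direction
    # Two-point finite-difference estimate of the directional derivative.
    g = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu)
    g = float(np.clip(g, -clip, clip))          # clip the scalar estimate
    g += sigma * rng.standard_normal()          # Gaussian mechanism on a scalar
    return theta - lr * g * u                   # step along the shared direction

# Toy demo: minimize ||theta||^2 in 50 dimensions with no backpropagation.
rng = np.random.default_rng(0)
theta = np.ones(50)
for _ in range(2000):
    theta = dp_zo_step(lambda t: float(t @ t), theta, rng)
```

Because only a scalar is privatized per step, memory stays at the cost of two forward passes, which is the practical appeal for fine-tuning billion-parameter models.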
