Poster in Workshop: Safe Generative AI
Differential Privacy of Cross-Attention with Provable Guarantee
Yingyu Liang · Zhenmei Shi · Zhao Song · Yufa Zhou
Abstract:
Cross-attention has become a fundamental module in many important artificial intelligence applications, e.g., retrieval-augmented generation (RAG), system prompts, guided stable diffusion, and many more. Ensuring the privacy of cross-attention is crucial and urgently needed because its key and value matrices may contain sensitive information about model providers and their users. In this work, we design a novel differential privacy (DP) data structure to address the privacy of cross-attention with a theoretical guarantee. In detail, let $n$ be the input token length of the system prompt/RAG data, $d$ be the feature dimension, $0 < \alpha \le 1$ be the relative error parameter, $R$ be the maximum value of the query and key matrices, $R_w$ be the maximum value of the value matrix, and $r, s, \epsilon_s$ be parameters of polynomial kernel methods. Then, our data structure requires $\widetilde{O}(ndr^2)$ memory consumption with $\widetilde{O}(nr^2)$ initialization time complexity and $\widetilde{O}(\alpha^{-1} r^2)$ query time complexity for a single token query. In addition, our data structure can guarantee that the process of answering a user query satisfies $(\epsilon, \delta)$-DP with $\widetilde{O}(n^{-1} \epsilon^{-1} \alpha^{-1/2} R^{2s} R_w r^2)$ additive error and $n^{-1}(\alpha + \epsilon_s)$ relative error between our output and the true answer. Furthermore, our result is robust to adaptive queries, in which users can intentionally attack the cross-attention system. To our knowledge, this is the first work to provide DP for cross-attention, and it is promising to inspire more privacy algorithm design in large generative models (LGMs).
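To make the setting concrete, the sketch below illustrates the general idea of privatizing a single-token cross-attention query with the Gaussian mechanism: the value rows are clipped to bound sensitivity, and Gaussian noise is added to the aggregated output. This is only a hypothetical toy illustration of DP noise on an attention output under loose assumptions (only the value matrix is treated as private, and a heuristic sensitivity bound is used); it is not the paper's polynomial-kernel data structure, and the function and parameter names are invented for this example.

```python
import numpy as np

def dp_cross_attention(q, K, V, eps=1.0, delta=1e-5, clip=1.0, rng=None):
    """Toy (eps, delta)-DP cross-attention answer for one query token q.

    Hypothetical illustration only: clips each value row to L2 norm <= clip,
    computes standard softmax cross-attention, then adds Gaussian noise
    calibrated to a loose L2 sensitivity bound of 2 * clip (replacing one
    clipped value row shifts the weighted average by at most 2 * clip,
    assuming keys are treated as fixed).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Clip value rows so each has L2 norm <= clip (bounds sensitivity).
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    V_clipped = V * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # Standard softmax cross-attention for a single query vector q of shape (d,).
    scores = K @ q / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    out = weights @ V_clipped
    # Gaussian mechanism: sigma calibrated to the heuristic sensitivity 2*clip.
    sigma = 2.0 * clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return out + rng.normal(0.0, sigma, size=out.shape)
```

The key trade-off the abstract quantifies shows up even in this toy version: smaller $\epsilon$ (stronger privacy) forces larger noise, i.e., larger additive error in the returned attention output.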