We propose a general and efficient framework to control auto-regressive generation models with a NeurAlly-Decomposed Oracle (NADO). Given a pre-trained base language model and a sequence-level boolean oracle function, we aim to decompose the oracle function into token-level guidance that steers the base model during text generation. Specifically, the token-level guidance is provided by NADO, a neural model trained on examples sampled from the base model, requiring no additional labeled data. Based on posterior regularization, we present the closed-form optimal solution for incorporating the decomposed token-level guidance into the base model for controllable generation. We further discuss how the neural approximation affects the quality of the solution. Experiments on two applications, (1) text generation with lexical constraints and (2) machine translation with formality control, demonstrate that our framework efficiently guides the base model toward the given oracle while maintaining high generation quality.
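The closed-form solution described above reweights the base model's next-token distribution by the ratio of NADO's oracle-satisfaction estimates before and after appending each candidate token. A minimal sketch of this combination step is below; the function and variable names are illustrative (not from the paper's released code), and NADO's estimates are passed in as plain arrays rather than computed by a trained network:

```python
import numpy as np

def guided_next_token_probs(p_next, r_prefix, r_next):
    """Sketch of the closed-form guided distribution:
        q(x_t | x_<t) ∝ p(x_t | x_<t) * R(x_<t, x_t) / R(x_<t)

    p_next   : base model distribution p(x_t | x_<t), shape (V,)
    r_prefix : NADO's estimate R(x_<t) that a continuation of the
               current prefix satisfies the oracle (scalar)
    r_next   : NADO's estimates R(x_<t, x_t) for each candidate
               next token, shape (V,)
    """
    # Reweight the base distribution by how much each candidate token
    # raises the estimated probability of eventually satisfying the oracle.
    unnormalized = p_next * r_next / max(r_prefix, 1e-12)
    # Renormalize to obtain a valid next-token distribution.
    return unnormalized / unnormalized.sum()

# Toy example over a 3-token vocabulary: token 1 is unlikely under the
# base model but strongly favored by the oracle estimate.
p = np.array([0.5, 0.3, 0.2])
r_tok = np.array([0.1, 0.9, 0.5])
q = guided_next_token_probs(p, r_prefix=0.4, r_next=r_tok)
```

In the toy example, the guided distribution shifts probability mass toward token 1, since its high oracle estimate outweighs its lower base-model probability.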
Author Information
Tao Meng (University of California, Los Angeles)
Sidi Lu (University of California, Los Angeles)
Nanyun Peng (University of California, Los Angeles)
Kai-Wei Chang (University of California, Los Angeles)
More from the Same Authors
- 2022 : Parameter-Efficient Low-Resource Dialogue State Tracking by Prompt Tuning »
  Mingyu Derek Ma · Jiun-Yu Kao · Shuyang Gao · Arpit Gupta · Di Jin · Tagyoung Chung · Nanyun Peng
- 2022 : Group Excess Risk Bound of Overparameterized Linear Regression with Constant-Stepsize SGD »
  Arjun Subramonian · Levent Sagun · Kai-Wei Chang · Yizhou Sun
- 2022 : Empowering Language Models with Knowledge Graph Reasoning for Question Answering »
  Ziniu Hu · Yichong Xu · Wenhao Yu · Shuohang Wang · Ziyi Yang · Chenguang Zhu · Kai-Wei Chang · Yizhou Sun
- 2022 Poster: On the Discrimination Risk of Mean Aggregation Feature Imputation in Graphs »
  Arjun Subramonian · Kai-Wei Chang · Yizhou Sun
- 2022 Poster: Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone »
  Zi-Yi Dou · Aishwarya Kamath · Zhe Gan · Pengchuan Zhang · Jianfeng Wang · Linjie Li · Zicheng Liu · Ce Liu · Yann LeCun · Nanyun Peng · Jianfeng Gao · Lijuan Wang
- 2022 Poster: Semantic Probabilistic Layers for Neuro-Symbolic Learning »
  Kareem Ahmed · Stefano Teso · Kai-Wei Chang · Guy Van den Broeck · Antonio Vergari
- 2022 Poster: InsNet: An Efficient, Flexible, and Performant Insertion-based Text Generation Model »
  Sidi Lu · Tao Meng · Nanyun Peng
- 2022 Poster: Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering »
  Pan Lu · Swaroop Mishra · Tanglin Xia · Liang Qiu · Kai-Wei Chang · Song-Chun Zhu · Oyvind Tafjord · Peter Clark · Ashwin Kalyan
- 2020 : Invited talk - Creative Language Generation: Stories, Sarcasms, and Similes - Nanyun Peng »
  Nanyun Peng
- 2020 Poster: Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond »
  Kaidi Xu · Zhouxing Shi · Huan Zhang · Yihan Wang · Kai-Wei Chang · Minlie Huang · Bhavya Kailkhura · Xue Lin · Cho-Jui Hsieh
- 2016 Poster: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings »
  Tolga Bolukbasi · Kai-Wei Chang · James Y Zou · Venkatesh Saligrama · Adam T Kalai
- 2016 Poster: A Credit Assignment Compiler for Joint Prediction »
  Kai-Wei Chang · He He · Stephane Ross · Hal Daumé III · John Langford