Learning Energy Networks with Generalized Fenchel-Young Losses
Energy-based models, a.k.a. energy networks, perform inference by optimizing an energy function, typically parametrized by a neural network. This allows one to capture potentially complex relationships between inputs and outputs. To learn the parameters of the energy function, the solution to that optimization problem is typically fed into a loss function. The key challenge for training energy networks lies in computing loss gradients, as this typically requires argmin/argmax differentiation. In this paper, building upon a generalized notion of conjugate function, which replaces the usual bilinear pairing with a general energy function, we propose generalized Fenchel-Young losses, a natural loss construction for learning energy networks. Our losses enjoy many desirable properties and their gradients can be computed efficiently without argmin/argmax differentiation. We also prove the calibration of their excess risk in the case of linear-concave energies. We demonstrate our losses on multilabel classification and imitation learning tasks.
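As a concrete illustration of the construction (a minimal sketch, not the authors' released code): writing the energy as phi(v, y) and the regularizer as Omega(y), the generalized conjugate is Omega^phi(v) = max_y phi(v, y) - Omega(y), and the generalized Fenchel-Young loss is L(v; y) = Omega^phi(v) + Omega(y) - phi(v, y), which is nonnegative and vanishes when y attains the max. By the envelope theorem, the gradient in v only involves gradients of phi at the maximizer y* and at the target y, so no differentiation through the argmax is needed. The JAX sketch below assumes a bilinear energy and an entropic regularizer, in which case the loss reduces to the logistic loss; the names phi, omega, y_star, generalized_fy_loss and the mirror-ascent inner solver are illustrative choices, not the paper's API.

# Minimal sketch of a generalized Fenchel-Young loss (illustrative; not
# the paper's released code). With the bilinear energy and entropic
# regularizer below, the loss reduces to the familiar logistic loss.
import jax
import jax.numpy as jnp
from jax.scipy.special import xlogy

def phi(v, y):
    # Energy function pairing parameters v with output y. The paper
    # allows general (e.g., neural) energies that are concave in y;
    # here we use the classical bilinear pairing.
    return jnp.dot(v, y)

def omega(y):
    # Shannon negentropy on the probability simplex; xlogy(0, 0) = 0.
    return jnp.sum(xlogy(y, y))

def y_star(v, num_steps=100, step_size=0.1):
    # Inner problem y*(v) = argmax_y phi(v, y) - omega(y), solved by
    # entropic mirror ascent on the simplex. For this phi/omega pair
    # the solution is softmax(v); the iterative solver mimics the
    # general case where no closed form exists.
    y = jnp.ones_like(v) / v.shape[0]
    for _ in range(num_steps):
        g = jax.grad(lambda u: phi(v, u) - omega(u))(y)
        y = jax.nn.softmax(jnp.log(y) + step_size * g)
    return y

def generalized_fy_loss(v, y_true):
    # L(v; y) = [phi(v, y*) - omega(y*)] + omega(y) - phi(v, y) >= 0.
    # stop_gradient on y* is the envelope theorem at work: the loss
    # gradient in v is the gap between phi's gradients at y* and at y,
    # with no differentiation through the inner argmax.
    ys = jax.lax.stop_gradient(y_star(v))
    return phi(v, ys) - omega(ys) + omega(y_true) - phi(v, y_true)

v = jnp.array([0.5, -1.0, 2.0])
y = jnp.array([0.0, 0.0, 1.0])                # one-hot target
print(generalized_fy_loss(v, y))              # ~= logsumexp(v) - v[2]
print(jax.grad(generalized_fy_loss)(v, y))    # ~= softmax(v) - y

Swapping in a neural energy for phi changes only the inner solver; the stop-gradient step and the loss expression stay the same, which is what makes the construction attractive for energy networks.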
Author Information
Mathieu Blondel (NTT)
Research scientist at NTT CS Labs.
Felipe Llinares-Lopez (Google Research, Brain Team)
Robert Dadashi (Google Brain)
Leonard Hussenot (Google Research, Brain Team)
Matthieu Geist (Google Research, Brain Team)
More from the Same Authors
- 2021 : Continuous Control with Action Quantization from Demonstrations »
Robert Dadashi · Leonard Hussenot · Damien Vincent · Anton Raichuk · Matthieu Geist · Olivier Pietquin
- 2022 : A Connection between One-Step Regularization and Critic Regularization in Reinforcement Learning »
Benjamin Eysenbach · Matthieu Geist · Sergey Levine · Russ Salakhutdinov
- 2023 Poster: Differentiable Clustering with Perturbed Spanning Forests »
Lawrence Stewart · Francis Bach · Felipe Llinares-Lopez · Quentin Berthet
- 2023 Poster: The Curious Price of Distributional Robustness in Reinforcement Learning with a Generative Model »
Laixi Shi · Gen Li · Yuting Wei · Yuxin Chen · Matthieu Geist · Yuejie Chi
- 2023 Poster: On Imitation in Mean-field Games »
Giorgia Ramponi · Pavel Kolev · Olivier Pietquin · Niao He · Mathieu Lauriere · Matthieu Geist
- 2023 Poster: Policy Gradient for Rectangular Robust Markov Decision Processes »
Navdeep Kumar · Esther Derman · Matthieu Geist · Kfir Y. Levy · Shie Mannor
- 2022 Poster: Efficient and Modular Implicit Differentiation »
Mathieu Blondel · Quentin Berthet · Marco Cuturi · Roy Frostig · Stephan Hoyer · Felipe Llinares-Lopez · Fabian Pedregosa · Jean-Philippe Vert
- 2021 Poster: What Matters for Adversarial Imitation Learning? »
Manu Orsini · Anton Raichuk · Leonard Hussenot · Damien Vincent · Robert Dadashi · Sertan Girgin · Matthieu Geist · Olivier Bachem · Olivier Pietquin · Marcin Andrychowicz
- 2020 Poster: Munchausen Reinforcement Learning »
Nino Vieillard · Olivier Pietquin · Matthieu Geist
- 2020 Poster: Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning »
Nino Vieillard · Tadashi Kozuno · Bruno Scherrer · Olivier Pietquin · Remi Munos · Matthieu Geist
- 2020 Oral: Leverage the Average: an Analysis of KL Regularization in Reinforcement Learning »
Nino Vieillard · Tadashi Kozuno · Bruno Scherrer · Olivier Pietquin · Remi Munos · Matthieu Geist
- 2020 Poster: Fictitious Play for Mean Field Games: Continuous Time Analysis and Applications »
Sarah Perrin · Julien Perolat · Mathieu Lauriere · Matthieu Geist · Romuald Elie · Olivier Pietquin
- 2019 Poster: Wasserstein Weisfeiler-Lehman Graph Kernels »
Matteo Togninalli · Elisabetta Ghisu · Felipe Llinares-Lopez · Bastian Rieck · Karsten Borgwardt
- 2019 Poster: A Geometric Perspective on Optimal Representations for Reinforcement Learning »
Marc Bellemare · Will Dabney · Robert Dadashi · Adrien Ali Taiga · Pablo Samuel Castro · Nicolas Le Roux · Dale Schuurmans · Tor Lattimore · Clare Lyle
- 2017 Poster: Multi-output Polynomial Networks and Factorization Machines »
Mathieu Blondel · Vlad Niculae · Takuma Otsuka · Naonori Ueda
- 2017 Poster: A Regularized Framework for Sparse and Structured Neural Attention »
Vlad Niculae · Mathieu Blondel
- 2016 Poster: Higher-Order Factorization Machines »
Mathieu Blondel · Akinori Fujino · Naonori Ueda · Masakazu Ishihata