A significant effort has been made to train neural networks that replicate algorithmic reasoning, but they often fail to learn the abstract concepts underlying these algorithms. This is evidenced by their inability to generalize to data distributions that are outside of their restricted training sets, namely larger inputs and unseen data. We study these generalization issues at the level of numerical subroutines that comprise common algorithms like sorting, shortest paths, and minimum spanning trees. First, we observe that transformer-based sequence-to-sequence models can learn subroutines like sorting a list of numbers, but their performance rapidly degrades as the length of lists grows beyond those found in the training set. We demonstrate that this is due to attention weights that lose fidelity with longer sequences, particularly when the input numbers are numerically similar. To address the issue, we propose a learned conditional masking mechanism, which enables the model to strongly generalize far outside of its training range with near-perfect accuracy on a variety of algorithms. Second, to generalize to unseen data, we show that encoding numbers with a binary representation leads to embeddings with rich structure once trained on downstream tasks like addition or multiplication. This allows the embedding to handle missing data by faithfully interpolating numbers not seen during training.
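The second contribution above, encoding numbers in binary so that embeddings interpolate to unseen values, can be illustrated with a minimal sketch. The function and parameter names (`binary_encode`, `num_bits`) are hypothetical and not taken from the paper; the idea is only that nearby integers share most of their bits, so a learned embedding of the binary code has structure that supports interpolation.

```python
import numpy as np

def binary_encode(n, num_bits=8):
    """Encode a non-negative integer as a fixed-width binary vector.

    Hypothetical sketch of a binary number representation; the paper's
    actual encoding details are not given in this listing.
    """
    return np.array([(n >> i) & 1 for i in range(num_bits)], dtype=np.float32)

# Nearby integers share most bits, so their codes (and hence any smooth
# embedding of them) stay close -- e.g. 5 and 7 differ in a single bit.
codes = np.stack([binary_encode(n) for n in range(16)])
print(codes.shape)  # (16, 8)
```

By contrast, a one-hot encoding of each integer gives every unseen number a brand-new row with no relation to its neighbors, which is one plausible reason the binary representation handles missing values more gracefully.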
Author Information
Yujun Yan (University of Michigan)
Kevin Swersky (Google)
Danai Koutra (University of Michigan)
Parthasarathy Ranganathan (Google)
Milad Hashemi (Google)
More from the Same Authors
- 2021 : Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks »
  Yujun Yan · Milad Hashemi · Kevin Swersky · Yaoqing Yang · Danai Koutra
- 2021 : A Graph Perspective on Neural Network Dynamics »
  Fatemeh Vahedian · Ruiyu Li · Puja Trivedi · Di Jin · Danai Koutra
- 2021 : Data-Driven Offline Optimization for Architecting Hardware Accelerators »
  Aviral Kumar · Amir Yazdanbakhsh · Milad Hashemi · Kevin Swersky · Sergey Levine
- 2021 : Interpretability of Machine Learning in Computer Systems: Analyzing a Caching Model »
  Leon Sixt · Evan Liu · Marie Pellat · James Wexler · Milad Hashemi · Been Kim · Martin Maas
- 2020 Poster: Big Self-Supervised Models are Strong Semi-Supervised Learners »
  Ting Chen · Simon Kornblith · Kevin Swersky · Mohammad Norouzi · Geoffrey E Hinton
- 2020 Poster: Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs »
  Jiong Zhu · Yujun Yan · Lingxiao Zhao · Mark Heimann · Leman Akoglu · Danai Koutra
- 2019 Workshop: ML For Systems »
  Milad Hashemi · Azalia Mirhoseini · Anna Goldie · Kevin Swersky · Xinlei XU · Jonathan Raiman
- 2019 Poster: Graph Normalizing Flows »
  Jenny Liu · Aviral Kumar · Jimmy Ba · Jamie Kiros · Kevin Swersky
- 2018 Workshop: Machine Learning for Systems »
  Anna Goldie · Azalia Mirhoseini · Jonathan Raiman · Kevin Swersky · Milad Hashemi