We aim to understand grokking, a phenomenon where models generalize long after overfitting their training set. We present both a microscopic analysis anchored by an effective theory and a macroscopic analysis of phase diagrams describing learning performance across hyperparameters. We find that generalization originates from structured representations, whose training dynamics and dependence on training set size can be predicted by our effective theory (in a toy setting). We observe empirically the presence of four learning phases: comprehension, grokking, memorization, and confusion. We find representation learning to occur only in a "Goldilocks zone" (including comprehension and grokking) between memorization and confusion. Compared to the comprehension phase, the grokking phase stays closer to the memorization phase, leading to delayed generalization. The Goldilocks phase is reminiscent of "intelligence from starvation" in Darwinian evolution, where resource limitations drive discovery of more efficient solutions. This study not only provides intuitive explanations of the origin of grokking, but also highlights the usefulness of physics-inspired tools, e.g., effective theories and phase diagrams, for understanding deep learning.
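The toy settings referenced in the abstract are small algorithmic tasks on which grokking can be reproduced with a small network. Below is a minimal, hypothetical sketch (not the authors' code) of one such setup, modular addition, where the modulus, architecture, training fraction, and weight decay are illustrative assumptions only.

```python
# Minimal sketch of a grokking-style toy experiment (assumptions, not the paper's code):
# train a small network on a + b (mod p) from a fraction of all pairs and watch
# held-out accuracy during training.
import torch
import torch.nn as nn

p = 59                                              # modulus for the toy task (assumed)
pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
labels = (pairs[:, 0] + pairs[:, 1]) % p

# Small training fraction; grokking studies typically vary this fraction.
perm = torch.randperm(len(pairs))
n_train = int(0.4 * len(pairs))
train_idx, test_idx = perm[:n_train], perm[n_train:]

embed = nn.Embedding(p, 64)                         # learned token representations
mlp = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, p))
params = list(embed.parameters()) + list(mlp.parameters())
opt = torch.optim.AdamW(params, lr=1e-3, weight_decay=1.0)  # regularization strength is a key knob
loss_fn = nn.CrossEntropyLoss()

def forward(idx):
    x = embed(pairs[idx])                           # (batch, 2, 64)
    return mlp(x.reshape(len(idx), -1))             # concatenate the two embeddings

for step in range(20000):
    opt.zero_grad()
    loss = loss_fn(forward(train_idx), labels[train_idx])
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            acc = (forward(test_idx).argmax(-1) == labels[test_idx]).float().mean()
        print(f"step {step}: train loss {loss.item():.3f}, test acc {acc:.3f}")
```

In setups like this, held-out accuracy can jump to near 100% long after the training loss has converged, which is the delayed generalization ("grokking") the abstract refers to; the learned embeddings can also be inspected to see whether structured representations have formed.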
Author Information
Ziming Liu (MIT)
Ouail Kitouni (MIT)
Niklas S Nolte (MIT)
Eric Michaud (University of California, Berkeley)
Max Tegmark (MIT)
Max Tegmark is a professor doing physics and AI research at MIT, and advocates for positive use of technology as president of the Future of Life Institute. He is the author of over 250 publications as well as the New York Times bestsellers “Life 3.0: Being Human in the Age of Artificial Intelligence” and “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality”. His AI research focuses on intelligible intelligence. His work with the Sloan Digital Sky Survey on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.”
Mike Williams (MIT)
More from the Same Authors
- 2021 : Physics-Augmented Learning: A New Paradigm Beyond Physics-Informed Learning »
  Ziming Liu · Yuanqi Du · Yunyue Chen · Max Tegmark
- 2021 : Robust and Provably Monotonic Networks »
  Niklas S Nolte · Ouail Kitouni · Mike Williams
- 2022 : Finding NEEMo: Geometric Fitting using Neural Estimation of the Energy Mover’s Distance »
  Ouail Kitouni · Mike Williams · Niklas S Nolte
- 2022 Spotlight: Poisson Flow Generative Models »
  Yilun Xu · Ziming Liu · Max Tegmark · Tommi Jaakkola
- 2022 Spotlight: Lightning Talks 6B-1 »
  Yushun Zhang · Duc Nguyen · Jiancong Xiao · Wei Jiang · Yaohua Wang · Yilun Xu · Zhen LI · Anderson Ye Zhang · Ziming Liu · Fangyi Zhang · Gilles Stoltz · Congliang Chen · Gang Li · Yanbo Fan · Ruoyu Sun · Naichen Shi · Yibo Wang · Ming Lin · Max Tegmark · Lijun Zhang · Jue Wang · Ruoyu Sun · Tommi Jaakkola · Senzhang Wang · Zhi-Quan Luo · Xiuyu Sun · Zhi-Quan Luo · Tianbao Yang · Rong Jin
- 2022 Panel: Panel 1C-3: Towards Understanding Grokking:… & Approximation with CNNs… »
  Ziming Liu · Guohao Shen
- 2022 Poster: Poisson Flow Generative Models »
  Yilun Xu · Ziming Liu · Max Tegmark · Tommi Jaakkola
- 2021 Workshop: AI for Science: Mind the Gaps »
  Payal Chandak · Yuanqi Du · Tianfan Fu · Wenhao Gao · Kexin Huang · Shengchao Liu · Ziming Liu · Gabriel Spadon · Max Tegmark · Hanchen Wang · Adrian Weller · Max Welling · Marinka Zitnik
- 2020 Poster: AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity »
  Silviu-Marian Udrescu · Andrew Tan · Jiahai Feng · Orisvaldo Neto · Tailin Wu · Max Tegmark
- 2020 Oral: AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity »
  Silviu-Marian Udrescu · Andrew Tan · Jiahai Feng · Orisvaldo Neto · Tailin Wu · Max Tegmark
- 2015 : Machine Learning in HEP »
  Mike Williams