
Workshop: Machine Learning with New Compute Paradigms

Scaling-up Memristor Monte Carlo with magnetic domain-wall physics

Thomas Dalgaty · Shogo Yamada · Anca Molnos · Eiji Kawasaki · Thomas Mesquida · François Rummens · Tatsuo Shibata · Yukihiro Urakawa · Yukio Terasaki · Tomoyuki Sasaki · Marc Duranton

[ Project Page ]
Sat 16 Dec 7:40 a.m. PST — 7:50 a.m. PST


By exploiting the intrinsic randomness of nanoscale devices, Memristor Monte Carlo (MMC) is a promising enabler of edge learning systems. However, due to multiple algorithmic and device-level limitations, existing demonstrations have been restricted to very small neural network models and datasets. We discuss these limitations and describe how they can be overcome by mapping the stochastic gradient Langevin dynamics (SGLD) algorithm onto the physics of magnetic domain-wall memristors, scaling up MMC models by five orders of magnitude. We propose the push-pull pulse programming method, which realises SGLD in-physics, and use it to train a domain-wall-based ResNet18 on the CIFAR-10 dataset. On this task, we observe no performance degradation relative to a floating-point model down to an update precision of between 6 and 7 bits, indicating a step towards a large-scale edge learning system that leverages noisy analogue devices.
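The push-pull pulse programming scheme itself is device-specific and not detailed in this abstract, but the SGLD update it realises in-physics has a standard software form: a scaled gradient step plus Gaussian noise whose variance matches the step size. The sketch below is illustrative only (the function name, toy loss, and hyperparameters are our own assumptions, not the authors' implementation):

```python
import numpy as np

def sgld_step(theta, grad, step_size, rng):
    """One stochastic gradient Langevin dynamics (SGLD) update:
    theta <- theta - (step_size / 2) * grad + N(0, step_size).
    In the paper's setting, the injected noise would come from the
    devices' intrinsic stochasticity rather than a software RNG."""
    noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
    return theta - 0.5 * step_size * grad + noise

# Toy example: L(theta) = 0.5 * ||theta||^2, so grad L = theta,
# and SGLD samples approximately from the posterior N(0, I).
rng = np.random.default_rng(0)
theta = np.ones(4)
for _ in range(1000):
    theta = sgld_step(theta, theta, step_size=0.01, rng=rng)
```

Because the noise scale is tied to the step size, the iterates do not converge to a point estimate but wander around the posterior mode, which is what makes SGLD a Monte Carlo sampler rather than a plain optimiser.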
