

Poster
in
Workshop: Machine Learning with New Compute Paradigms

The Benefits of Self-Supervised Learning for Training Physical Neural Networks

Jeremie Laydevant · Peter McMahon · Davide Venturelli · Paul Lott

[ Project Page ]
Sat 16 Dec 9:25 a.m. PST — 10:30 a.m. PST

Abstract:

Physical Neural Networks (PNNs) are energy-efficient alternatives to their digital counterparts. Because they are inherently variable, noisy, and hardly differentiable, PNNs require tailored training methods. Additionally, while the properties of PNNs make them good candidates for edge computing, where memory and computational resources are constrained, most training algorithms developed for PNNs focus on supervised learning, even though labeled data may not be available at the edge. Here, we propose Self-Supervised Learning (SSL) as an ideal framework for training PNNs, focusing on computer vision tasks: 1. SSL eliminates the reliance on labeled data, and 2. because SSL forces the network to extract high-level concepts, networks trained with SSL should be highly robust to noise and device variability. We investigate and show with simulations that the latter property indeed emerges when a network is trained on MNIST in the SSL setting, whereas it does not under supervised training. We also show empirically that we can optimize layer-wise SSL objectives rather than a single global one while still matching the performance of global optimization on MNIST and CIFAR-10. This could enable fully local learning without backpropagation, especially in the scheme we propose based on stochastic optimization. We expect this preliminary, simulation-based work to pave the way for a robust paradigm for training PNNs and hope to stimulate interest in the unconventional-computing community and beyond.
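The abstract does not specify the SSL objective or the details of the layer-wise training scheme. The sketch below is one possible reading, assuming a SimCLR-style contrastive (NT-Xent) loss as the SSL objective, a small linear projection head per layer, and additive Gaussian noise in each layer as a stand-in for PNN device noise. The names NoisyLayer, nt_xent, augment, and train_step are illustrative, not from the paper; it shows only the idea of greedy, gradient-isolated layer-wise SSL, not the authors' actual method.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.5):
        """Contrastive loss between two augmented views (one positive pair per sample)."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, d)
        sim = z @ z.t() / temperature                           # pairwise cosine similarities
        n = z1.shape[0]
        mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        sim.masked_fill_(mask, float('-inf'))                   # exclude self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(sim, targets)

    class NoisyLayer(nn.Module):
        """One 'physical' layer: linear transform plus additive Gaussian noise
        (a crude stand-in for device noise and variability)."""
        def __init__(self, d_in, d_out, noise_std=0.1):
            super().__init__()
            self.fc = nn.Linear(d_in, d_out)
            self.noise_std = noise_std
        def forward(self, x):
            h = torch.relu(self.fc(x))
            return h + self.noise_std * torch.randn_like(h)

    # A stack of layers, each with its own local SSL head and optimizer.
    dims = [784, 512, 256, 128]                                 # MNIST-sized input assumed
    layers = nn.ModuleList(NoisyLayer(dims[i], dims[i + 1]) for i in range(len(dims) - 1))
    heads = nn.ModuleList(nn.Linear(dims[i + 1], 64) for i in range(len(dims) - 1))
    opts = [torch.optim.Adam(list(l.parameters()) + list(h.parameters()), lr=1e-3)
            for l, h in zip(layers, heads)]

    def augment(x):
        """Hypothetical augmentation: small additive noise on flattened images."""
        return x + 0.1 * torch.randn_like(x)

    def train_step(x):
        """Greedy layer-wise update: each layer minimizes its own contrastive loss;
        detach() stops gradients from reaching earlier layers (no global backprop)."""
        v1, v2 = augment(x), augment(x)
        for layer, head, opt in zip(layers, heads, opts):
            h1, h2 = layer(v1), layer(v2)
            loss = nt_xent(head(h1), head(h2))
            opt.zero_grad()
            loss.backward()
            opt.step()
            v1, v2 = h1.detach(), h2.detach()                   # gradient-isolated inputs to the next layer
        return loss.item()

    # Example usage with random data standing in for an MNIST batch:
    x = torch.rand(32, 784)
    print(train_step(x))

In this sketch each layer still uses local gradients through its own weights; a fully backpropagation-free variant, as hinted at in the abstract, would replace those local gradient steps with stochastic (e.g., perturbation-based) updates of the same per-layer objectives.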
