Recently, research has increasingly focused on developing efficient neural network architectures. In this work, we explore logic gate networks for machine learning tasks by learning combinations of logic gates. These networks comprise logic gates such as "AND" and "XOR", which allow for very fast execution. The difficulty in learning logic gate networks is that they are conventionally non-differentiable and therefore cannot be trained with gradient descent. Thus, to enable effective training, we propose differentiable logic gate networks, an architecture that combines real-valued logics and a continuously parameterized relaxation of the network. The resulting discretized logic gate networks achieve fast inference speeds, e.g., beyond a million images of MNIST per second on a single CPU core.
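To illustrate the idea of relaxing discrete logic gates into a differentiable form, below is a minimal sketch. It assumes a probabilistic relaxation of two-input gates (e.g., soft AND as a·b, soft OR as a+b−ab) and a learned softmax distribution over candidate gates per neuron; the names `soft_gates` and `DiffLogicNeuron` are illustrative and not the paper's actual API.

```python
import torch
import torch.nn as nn

# Real-valued relaxations of a few two-input logic gates.
# Inputs a, b in [0, 1] are interpreted as probabilities.
def soft_gates(a, b):
    return torch.stack([
        a * b,              # AND
        a + b - a * b,      # OR
        a + b - 2 * a * b,  # XOR
        1 - a * b,          # NAND
    ], dim=-1)

class DiffLogicNeuron(nn.Module):
    """One neuron: a learned soft choice over candidate gates for a fixed
    pair of inputs. After training, it is discretized to the argmax gate."""
    def __init__(self, num_gates=4):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_gates))

    def forward(self, a, b):
        weights = torch.softmax(self.logits, dim=-1)      # distribution over gates
        return (soft_gates(a, b) * weights).sum(dim=-1)   # expected gate output

# Toy usage: push the neuron toward behaving like XOR via gradient descent.
neuron = DiffLogicNeuron()
opt = torch.optim.Adam(neuron.parameters(), lr=0.1)
a = torch.tensor([0., 0., 1., 1.])
b = torch.tensor([0., 1., 0., 1.])
target = torch.tensor([0., 1., 1., 0.])                   # XOR truth table
for _ in range(200):
    loss = ((neuron(a, b) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(neuron.logits.argmax().item())                      # typically 2, i.e. XOR
```

After training, each neuron is discretized to its most probable gate, which is what enables the fast, purely Boolean inference reported in the abstract.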
Author Information
Felix Petersen (Stanford University)
Christian Borgelt (Paris-Lodron-University of Salzburg)
Hilde Kuehne (Goethe University Frankfurt)
Oliver Deussen (University of Konstanz)
More from the Same Authors
- 2022 Poster: Domain Adaptation meets Individual Fairness. And they get along.
  Debarghya Mukherjee · Felix Petersen · Mikhail Yurochkin · Yuekai Sun
- 2022 Poster: How Transferable are Video Representations Based on Synthetic Data?
  Yo-whan Kim · Samarth Mishra · SouYoung Jin · Rameswar Panda · Hilde Kuehne · Leonid Karlinsky · Venkatesh Saligrama · Kate Saenko · Aude Oliva · Rogerio Feris
- 2021 Poster: Learning with Algorithmic Supervision via Continuous Relaxations
  Felix Petersen · Christian Borgelt · Hilde Kuehne · Oliver Deussen