Machine Learning with New Compute Paradigms

Jannes Gladrow · Benjamin Scellier · Eric Xing · Babak Rahmani · Francesca Parmigiani · Paul Prucnal · Cheng Zhang

Room 235 - 236

Sat 16 Dec, 7 a.m. PST

As GPU computing approaches a plateau in efficiency and cost, with Moore's law reaching its limit, there is a growing need to explore alternative computing paradigms, such as (opto-)analog, neuromorphic, and low-power computing. This NeurIPS workshop aims to unite researchers from machine learning and alternative-computation fields to establish a new hardware-ML feedback loop. By co-designing models with specialized accelerators, we can leverage the benefits of increased throughput and lower per-FLOP power consumption. Novel devices hold the potential to further accelerate standard deep learning, or even to enable efficient inference and training for model classes that have so far been compute-constrained. However, new compute paradigms typically present challenges such as intrinsic noise, restricted sets of compute operations, or limited bit depth, and thus require model-hardware co-design. This workshop's goal is to foster cross-disciplinary collaboration to capitalize on the opportunities offered by emerging AI accelerators.
