The expanding horizons of music AI research
Abstract
In the blink of an eye, music AI has leapt out of the research lab and into the real world. This development heralds both profound opportunities and profound risks, and is already redefining relationships among diverse stakeholders including musicians, listeners, the music and tech industries, policymakers, and even researchers. As researchers, we face a pressing question: how should our work adapt to confront the rapidly expanding horizons of music AI? In this talk, I will discuss some of my lab’s recent work, which aims to take a holistic view of music AI research, from developing core capabilities (AI/ML), to surfacing those capabilities to users (HCI), to studying their broader societal consequences. In particular, I will introduce our work on SingSong and Music ControlNet, which aims to improve the controllability of core generative modeling methods. I will also share our recent work on Hookpad Aria and AMUSE, “Copilots” for musicians that have been used by thousands of songwriters. Finally, I will discuss our ongoing work on training data attribution and in-the-wild evaluation (Copilot Arena, Music Arena), which seeks to bring clarity to some of the broader societal questions surrounding music AI.