Panel: Machine learning and audio signal processing: State of the art and future perspectives
in
Workshop: Machine Learning for Audio Signal Processing (ML4Audio)
Abstract
How can end-to-end audio processing be further optimized? How can we build an audio processing system that generalizes across domains, in particular across different languages, music styles, or acoustic environments? How can complex hierarchical musical structure be learned? How can we use machine learning to build a music system that reacts the way an improvisation partner would? Can we build a system that puts a composer in the role of a perceptual engineer?
Panelists
Sepp Hochreiter (Johannes Kepler University Linz, http://www.bioinf.jku.at/people/hochreiter/)
Bo Li (Google, https://research.google.com/pubs/BoLi.html)
Karen Livescu (Toyota Technological Institute at Chicago, http://ttic.uchicago.edu/~klivescu/)
Arindam Mandal (Amazon Alexa, https://scholar.google.com/citations?user=tV1hW0YAAAAJ&hl=en)
Oriol Nieto (Pandora, http://urinieto.com/about/)
Malcolm Slaney (Google, http://www.slaney.org/malcolm/pubs.html)
Moderator: Hendrik Purwins (Aalborg University Copenhagen, http://personprofil.aau.dk/130346?lang=en)