While traditional computer security relies on well-defined attack models and proofs of security, a science of security for machine learning systems has proven more elusive. This is due to a number of obstacles, including (1) the highly varied angles of attack against ML systems, (2) the lack of a clearly defined attack surface (because the source of the data analyzed by ML systems is not easily traced), and (3) the lack of clear formal definitions of security that are appropriate for ML systems. At the same time, the security of ML systems is of great import due to the recent trend of using ML systems as a line of defense against malicious behavior (e.g., network intrusion, malware, and ransomware), as well as the prevalence of ML systems as parts of sensitive and valuable software systems (e.g., sentiment analyzers for predicting stock prices). This workshop will bring together experts from the computer security and machine learning communities in an attempt to highlight recent work in this area, as well as to clarify the foundations of secure ML and chart out important directions for future work and cross-community collaborations.
Author Information
Jacob Steinhardt (UC Berkeley)
Nicolas Papernot (Google Brain)
Bo Li (University of Illinois at Urbana–Champaign (UIUC))
Chang Liu (Citadel)
Percy Liang (Stanford University)
Dawn Song (UC Berkeley)
More from the Same Authors
- 2020 Poster: Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis » Kavi Gupta · Peter Ebert Christensen · Xinyun Chen · Dawn Song
- 2020 Poster: Compositional Generalization via Neural-Symbolic Stack Machines » Xinyun Chen · Chen Liang · Adams Wei Yu · Dawn Song · Denny Zhou
- 2018 Workshop: Workshop on Security in Machine Learning » Nicolas Papernot · Jacob Steinhardt · Matt Fredrikson · Kamalika Chaudhuri · Florian Tramer
- 2018 Poster: Semidefinite relaxations for certifying robustness to adversarial examples » Aditi Raghunathan · Jacob Steinhardt · Percy Liang
- 2018 Poster: Tree-to-tree Neural Networks for Program Translation » Xinyun Chen · Chang Liu · Dawn Song
- 2017 Workshop: Aligned Artificial Intelligence » Dylan Hadfield-Menell · Jacob Steinhardt · David Duvenaud · David Krueger · Anca Dragan
- 2017 Poster: Certified Defenses for Data Poisoning Attacks » Jacob Steinhardt · Pang Wei Koh · Percy Liang
- 2016 Workshop: Reliable Machine Learning in the Wild » Dylan Hadfield-Menell · Adrian Weller · David Duvenaud · Jacob Steinhardt · Percy Liang
- 2016 Poster: Latent Attention For If-Then Program Synthesis » Chang Liu · Xinyun Chen · Richard Shin · Mingcheng Chen · Dawn Song
- 2015 Poster: Learning with Relaxed Supervision » Jacob Steinhardt · Percy Liang
- 2009 Poster: Tracking Dynamic Sources of Malicious Activity at Internet Scale » Shobha Venkataraman · Avrim Blum · Dawn Song · Subhabrata Sen · Oliver Spatscheck
- 2009 Spotlight: Tracking Dynamic Sources of Malicious Activity at Internet Scale » Shobha Venkataraman · Avrim Blum · Dawn Song · Subhabrata Sen · Oliver Spatscheck