

Poster in Affinity Workshop: Women in Machine Learning

Security, IP Protection, and Privacy on Federated Learning and Machine Learning Edge Devices

Mahdieh Grailoo


Abstract:

Neural networks (NNs) on edge devices have seen rapid adoption in many security-critical applications, including autonomous cars, facial recognition, surveillance, medical devices, drones, and robotics, making the associated security and privacy issues an urgent and severe concern. On the privacy side, leakage of a patient’s genomic information from medical devices, of users’ locations from autonomous cars, or of confidential information in smart cities and smart homes may cause substantial economic losses to data owners and, in extreme cases, endanger their lives. On the security side, an autonomous car that misclassifies a stop sign as an 80 km/h speed-limit sign may cause a crash; in facial and fingerprint recognition, an unauthorized person can gain access; and in skin cancer screening, skin lesion images can be misdiagnosed [1,2]. At the same time, the unprecedented success of NNs is largely supported by advances in specialized hardware (HW) for tackling data-intensive computational workloads [2,3]. When considering security and privacy in NNs, it is therefore impossible to ignore that the HW itself is a key factor in the equation. Moreover, the assumption that HW is trustworthy and that security efforts need only encompass networks and software (SW) is no longer valid, because attacks mounted on HW give the adversary capabilities that bypass SW constraints. In our research, we therefore study the possible vulnerable spots and security threats of NN systems and develop novel HW solutions for designing trustworthy and secure NN systems.
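To make the stop-sign misclassification threat concrete, below is a minimal sketch of one well-known class of such attacks, the Fast Gradient Sign Method (FGSM). The abstract does not name a specific attack, so the choice of FGSM, the `model` argument, and the `epsilon` budget are illustrative assumptions rather than the author's method.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: perturb input x so the classifier's loss on
    the true label y increases, within an L-infinity budget epsilon.

    model:   any torch.nn.Module classifier returning logits (assumed)
    x:       input batch, pixel values in [0, 1]
    y:       ground-truth labels for x
    epsilon: maximum per-pixel perturbation (assumed budget)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel in the sign of the gradient, i.e. the direction
    # that most increases the loss, then keep pixels in valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage sketch: a perturbed stop-sign image that may be misclassified.
# x_adv = fgsm_attack(traffic_sign_classifier, stop_sign_batch, labels)
```

Even such a small, visually imperceptible perturbation can flip the predicted class, which is why defenses cannot rely on input validation at the SW level alone.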
