Machine learning security and privacy: a review of threats and countermeasures
Paracha, Anum and Arshad, Junaid and Farah, Mohamed and Ismail, Khalid (2024) Machine learning security and privacy: a review of threats and countermeasures. EURASIP Journal on Information Security, 2024 (1). ISSN 2510-523X
Text: s13635-024-00158-3.pdf - Published Version. Available under License Creative Commons Attribution. Download (2MB).
Abstract
Machine learning has become prevalent in transforming diverse aspects of our daily lives through intelligent digital solutions. Advanced disease diagnosis, autonomous vehicular systems, and automated threat detection and triage are some prominent use cases. Furthermore, the increasing use of machine learning in critical national infrastructures such as smart grids, transport, and natural resources makes it an attractive target for adversaries. The threat to machine learning systems is aggravated by the ability of malicious actors to reverse engineer publicly available models, gaining insight into the algorithms underpinning them. Focusing on the threat landscape for machine learning systems, we have conducted an in-depth analysis to critically examine the security and privacy threats to machine learning and the factors involved in developing these adversarial attacks. Our analysis highlighted that feature engineering, model architecture, and knowledge of the targeted system are crucial aspects in formulating these attacks. Furthermore, one successful attack can lead to others; for instance, poisoning attacks can lead to membership inference and backdoor attacks. We have also reviewed the literature concerning methods and techniques to mitigate these threats, including data sanitization, adversarial training, and differential privacy, whilst identifying their limitations. Cleaning and sanitizing datasets may introduce other challenges, such as underfitting and degraded model performance, whereas differential privacy does not completely preserve a model's privacy. Leveraging the analysis of attack surfaces and mitigation techniques, we identify potential research directions to improve the trustworthiness of machine learning systems.
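Among the mitigations the abstract names, differential privacy is the most readily illustrated in a few lines. The sketch below shows the classic Laplace mechanism for a numeric query (not a method from this paper, just a standard textbook construction): noise drawn from a Laplace distribution with scale `sensitivity / epsilon` is added to the true answer, giving epsilon-differential privacy for that query. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Return an epsilon-differentially-private release of a numeric query.

    Noise is sampled from Laplace(0, sensitivity / epsilon); smaller epsilon
    means stronger privacy but a noisier (less accurate) answer.
    """
    rng = rng or np.random.default_rng()
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privatize a count query. Adding or removing one record changes
# a count by at most 1, so its sensitivity is 1.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
```

This also makes concrete the abstract's caveat that differential privacy "does not completely preserve a model's privacy": each release leaks a bounded amount (epsilon), and repeated queries compose, so the total privacy loss grows with use.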
| Item Type: | Article |
|---|---|
| Identification Number: | 10.1186/s13635-024-00158-3 |
| Dates: | Accepted: 2 April 2024; Published Online: 23 April 2024 |
| Uncontrolled Keywords: | Adversarial attacks, Scrutiny-by-design, Poisoned dataset, Exploiting integrity, Data sanitization, Differential privacy |
| Subjects: | CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science |
| Divisions: | Faculty of Computing, Engineering and the Built Environment > College of Computing |
| Depositing User: | Gemma Tonks |
| Date Deposited: | 15 Jan 2025 11:55 |
| Last Modified: | 15 Jan 2025 11:55 |
| URI: | https://www.open-access.bcu.ac.uk/id/eprint/16078 |