Outlier-oriented poisoning attack: a grey-box approach to disturb decision boundaries by perturbing outliers in multiclass learning
Paracha, Anum and Arshad, Junaid and Ben Farah, Mohamed and Ismail, Khalid (2025) Outlier-oriented poisoning attack: a grey-box approach to disturb decision boundaries by perturbing outliers in multiclass learning. International Journal of Information Security, 24. ISSN 1615-5262
Text: research paper.pdf - Accepted Version (830kB). Restricted to Repository staff only until 26 February 2026.
Abstract
Poisoning attacks are a primary threat to machine learning (ML) models, aiming to compromise their performance and reliability by manipulating training datasets. This paper introduces a novel attack, the outlier-oriented poisoning (OOP) attack, which manipulates the labels of the samples most distant from the decision boundaries. To ascertain the severity of the OOP attack across different degrees of poisoning (5–25%), we analyzed the variance, accuracy, precision, recall, F1-score, and false positive rate of the chosen ML models. Benchmarking the OOP attack, we analyzed key characteristics of multiclass machine learning algorithms and their sensitivity to poisoning attacks. Our analysis helps in understanding the behaviour of multiclass models under data poisoning attacks and contributes to effective mitigation against such attacks. Utilizing three publicly available datasets (IRIS, MNIST, and ISIC), our analysis shows that KNN and GNB are the most affected algorithms, with decreases in accuracy of 22.81% and 56.07% for the IRIS dataset at 15% poisoning, whereas, for the same poisoning level and dataset, Decision Trees and Random Forest are the most resilient algorithms with the least accuracy disruption (12.28% and 17.52%). We also analyzed the correlation between the number of dataset classes and the performance degradation of the models. Our analysis highlighted that the number of classes is inversely proportional to the performance degradation, specifically the decrease in model accuracy, which normalizes as the number of classes increases. Further, our analysis identified that an imbalanced dataset distribution can aggravate the impact of poisoning on machine learning models.
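To illustrate the idea described in the abstract, the sketch below shows one way an outlier-oriented label-flipping poisoner could be approximated: train a grey-box surrogate classifier, rank training samples by how far they lie from the decision boundary, and flip the labels of the most distant ones. This is a minimal, illustrative sketch only; the surrogate model (LinearSVC), the margin-based distance measure, and the random choice of replacement label are assumptions made for the example, not the paper's exact OOP algorithm.

```python
# Minimal sketch of an outlier-oriented label-flipping poisoner.
# Assumptions: a LinearSVC surrogate, margin-to-boundary as the distance
# measure, and random replacement labels (illustrative only).
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

def oop_poison(X, y, poison_rate=0.15, random_state=0):
    """Flip labels of the samples lying farthest from the decision boundaries."""
    rng = np.random.default_rng(random_state)
    surrogate = LinearSVC(dual=False).fit(X, y)  # grey-box surrogate classifier

    # Larger margin for the true class = farther from the decision boundary,
    # i.e. an "outlier" in the sense used here.
    scores = surrogate.decision_function(X)
    if scores.ndim == 1:                       # binary case
        margins = np.abs(scores)
    else:                                      # multiclass: score of the true class
        margins = scores[np.arange(len(y)), y]

    n_poison = int(poison_rate * len(y))
    targets = np.argsort(margins)[-n_poison:]  # indices of the most distant samples

    y_poisoned = y.copy()
    classes = np.unique(y)
    for i in targets:
        # Assign any label other than the current one.
        y_poisoned[i] = rng.choice(classes[classes != y[i]])
    return y_poisoned, targets

X, y = load_iris(return_X_y=True)
y_bad, flipped = oop_poison(X, y, poison_rate=0.15)
print(f"Flipped {len(flipped)} of {len(y)} labels")
```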
| Item Type | Article |
|---|---|
| Identification Number | 10.1007/s10207-025-00998-1 |
| Dates | Accepted: 7 February 2025; Published: 26 February 2025 |
| Uncontrolled Keywords | data poisoning attack, outliers manipulation, multiclass poisoning, confidence disruption, optimal poisoning, behavioural analysis, integrity violation |
| Subjects | CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science |
| Divisions | Faculty of Computing, Engineering and the Built Environment > College of Computing |
| Depositing User | Anum Paracha |
| Date Deposited | 17 Mar 2025 11:51 |
| Last Modified | 17 Mar 2025 11:51 |
| URI | https://www.open-access.bcu.ac.uk/id/eprint/16198 |