Enhancing AI transparency in IoT intrusion detection using explainable AI techniques

Wang, Yifan and Azad, Muhammad Ajmal and Zafar, Maham and Gul, Ammara (2025) Enhancing AI transparency in IoT intrusion detection using explainable AI techniques. Internet of Things, 33. p. 101714. ISSN 2542-6605

Full text: 1-s2.0-S2542660525002288-main.pdf (Published Version). Available under a Creative Commons Attribution Non-commercial No Derivatives licence.

Abstract

Internet of Things (IoT) networks continue to grow and have been integrated into critical applications such as healthcare, industrial control, and national infrastructure. Their interconnected nature and resource-constrained devices create numerous entry points for malicious actors, who can cause data breaches, unauthorised access, service disruptions, and even compromise critical infrastructure. Securing these networks is essential to maintaining the integrity and availability of services whose failure could have serious social, economic, or operational consequences. Automated Intrusion Detection Systems (IDSs) have been widely used to identify threats with high accuracy and reduced detection time. However, the complexity of machine learning and deep learning models poses a serious challenge to the transparency and interpretability of the detection results they produce. The lack of explainability in AI-driven IDSs undermines user confidence and limits their practical deployment, especially among non-expert stakeholders. To address these challenges, this paper investigates the use of Explainable AI (XAI) techniques to enhance the interpretability of AI-based IDSs within IoT ecosystems. Specifically, it applies SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) to different machine learning models, whose performance is evaluated using standard metrics such as accuracy, precision, and recall. The results show that incorporating XAI techniques significantly improves the transparency of IDS results, allowing users to understand and trust the reasoning behind AI decisions. This enhanced interpretability not only supports more informed cybersecurity practices but also makes AI systems more accessible to non-specialist users.
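The general workflow described in the abstract can be outlined as a short sketch: train a conventional classifier on IoT traffic features, report accuracy, precision, and recall, and then attach SHAP and LIME explainers to its predictions. This is a minimal illustration only; the feature names, synthetic data, and random-forest model below are assumed placeholders, not the datasets or models evaluated in the paper.

```python
# Illustrative sketch only: feature names and data are synthetic placeholders,
# not the paper's IoT intrusion datasets or models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score
import shap
from lime.lime_tabular import LimeTabularExplainer

# Placeholder flow-level features standing in for IoT traffic records.
feature_names = ["duration", "pkt_count", "byte_count", "dst_port", "flag_syn"]
rng = np.random.default_rng(0)
X = rng.random((1000, len(feature_names)))
y = (X[:, 1] + X[:, 4] > 1.0).astype(int)  # synthetic "attack"/"benign" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train a standard classifier and report the metrics named in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall:", recall_score(y_test, y_pred))

# SHAP: feature attributions for the tree ensemble (global and per-prediction).
shap_explainer = shap.TreeExplainer(clf)
shap_values = shap_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names)

# LIME: local, model-agnostic explanation of a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["benign", "attack"],
    mode="classification",
)
exp = lime_explainer.explain_instance(X_test[0], clf.predict_proba, num_features=5)
print(exp.as_list())
```

In this kind of setup, SHAP gives a consistent global ranking of which traffic features drive the detector overall, while LIME explains why one specific flow was flagged, which is the per-decision transparency the abstract emphasises for non-expert stakeholders.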

Item Type: Article
Identification Number: 10.1016/j.iot.2025.101714
Dates:
Accepted: 18 July 2025
Published Online: 29 July 2025
Uncontrolled Keywords: Interpretable artificial intelligence, Intrusion detection system, Internet of Things, Machine learning, Cyber security, Explainable AI
Subjects: CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science
Divisions: Architecture, Built Environment, Computing and Engineering > Computer Science
Depositing User: Gemma Tonks
Date Deposited: 22 Aug 2025 10:56
Last Modified: 22 Aug 2025 10:57
URI: https://www.open-access.bcu.ac.uk/id/eprint/16619
