Artificial Intelligence Life Cycle: The Detection and Mitigation of Bias

Aninze, Ashionye and Bhogal, Jagdev (2024) Artificial Intelligence Life Cycle: The Detection and Mitigation of Bias. Proceedings of the International Conference on AI Research, 4 (1). pp. 40-49. ISSN 3049-5628


Abstract

The rapid expansion of Artificial Intelligence (AI) has outpaced the development of ethical guidelines and regulations, raising concerns about the potential for bias in AI systems. These biases can manifest in real-world applications, leading to unfair or discriminatory outcomes in areas such as job hiring, loan approvals, or criminal justice predictions. For example, a biased AI model used for loan prediction may deny loans to qualified applicants based on demographic factors such as race or gender. This paper investigates the presence and mitigation of bias in Machine Learning (ML) models trained on the Adult Census Income dataset, which is known to have limitations with respect to gender and race. Through comprehensive data analysis focusing on sensitive attributes such as gender, race, and relationship status, this research sheds light on the complex relationships between societal biases and algorithmic outcomes, and on how societal biases can become rooted in and amplified by ML algorithms. Utilising fairness metrics such as demographic parity (DP) and equalised odds (EO), this paper quantifies the impact of bias on model predictions. The results demonstrate that biased datasets often lead to biased models even after pre-processing techniques are applied. The effectiveness of mitigation techniques such as reweighting (Exponential Gradient (EG)) in reducing disparities was examined, resulting in a measurable reduction in bias disparities. However, these improvements came with trade-offs in accuracy and sometimes in other fairness metrics, highlighting the complex nature of bias mitigation and the need for careful consideration of ethical implications. The findings of this research highlight the critical importance of addressing bias at all stages of the AI life cycle, from data collection to model deployment. The limitations of this research, especially the use of EG, demonstrate the need for further development of bias mitigation techniques that can address complex relationships while maintaining accuracy.
This paper concludes with recommendations for best practices in Artificial Intelligence development, emphasising the need for ongoing research and collaboration to mitigate bias by prioritising ethical considerations, transparency, explainability, and accountability to ensure fairness in AI systems.
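The two fairness metrics named in the abstract, demographic parity difference and equalised odds difference, can be sketched in plain Python on toy binary predictions split by a sensitive attribute. This is a minimal illustration of what the metrics measure; the function names and toy values below are illustrative and are not taken from the paper:

```python
# Minimal sketch of the two fairness metrics used in the paper.
# All names and data here are illustrative, not the authors' code.

def rate(vals):
    """Mean of a list of 0/1 values (0.0 if the list is empty)."""
    return sum(vals) / len(vals) if vals else 0.0

def demographic_parity_difference(y_pred, sensitive):
    """Gap in positive-prediction rate P(y_hat=1 | group) across groups."""
    groups = sorted(set(sensitive))
    rates = [rate([p for p, s in zip(y_pred, sensitive) if s == g])
             for g in groups]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, sensitive):
    """Largest across-group gap in either TPR or FPR."""
    groups = sorted(set(sensitive))
    gaps = []
    for label in (1, 0):  # label=1 gives the TPR gap, label=0 the FPR gap
        rates = []
        for g in groups:
            preds = [p for p, t, s in zip(y_pred, y_true, sensitive)
                     if s == g and t == label]
            rates.append(rate(preds))
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy predictions over two demographic groups "A" and "B"
y_true    = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred    = [1, 0, 1, 0, 1, 0, 1, 1]
sensitive = ["A", "A", "A", "A", "B", "B", "B", "B"]

dp = demographic_parity_difference(y_pred, sensitive)
eo = equalized_odds_difference(y_true, y_pred, sensitive)
print(f"Demographic parity difference: {dp:.2f}")  # 0.25
print(f"Equalised odds difference:     {eo:.2f}")  # 0.67
```

A value of 0 on either metric would indicate parity across groups; mitigation approaches such as the exponentiated-gradient reduction examined in the paper search for a classifier that drives these gaps toward zero, typically at some cost in accuracy.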

Item Type: Article
Identification Number: 10.34190/icair.4.1.3131
Dates:
  Accepted: 4 December 2024
  Published Online: 4 December 2024
Uncontrolled Keywords: Artificial Intelligence, Machine Learning, AI ethics, data bias, machine learning fairness, bias in AI
Subjects: CAH11 - computing > CAH11-01 - computing > CAH11-01-05 - artificial intelligence
Divisions: Faculty of Computing, Engineering and the Built Environment > College of Computing
Depositing User: Gemma Tonks
Date Deposited: 08 Jul 2025 13:29
Last Modified: 08 Jul 2025 13:29
URI: https://www.open-access.bcu.ac.uk/id/eprint/16500
