Enhancing Deepfake Detection: Leveraging StyleGAN3 for Robust AI-Generated Forgery Identification
Hafezi, Mohammad and Shahra, Essa and Basurra, Shadi and Aneiba, Adel and Devey, Jack (2026) Enhancing Deepfake Detection: Leveraging StyleGAN3 for Robust AI-Generated Forgery Identification. IEEE Access, 13. ISSN 2169-3536
Full text: Enhancing_Deepfake_Detection_Leveraging_StyleGAN3_for_Robust_AI-Generated_Forgery_Identification.pdf (Published Version, 2MB). Available under a Creative Commons Attribution license.
Abstract
The rapid advancement of generative models has significantly increased the realism of AI-generated Deepfake content, posing serious challenges to digital media integrity and forensic analysis. A key difficulty in Deepfake detection lies in achieving robust generalization when confronted with synthetic images generated by previously unseen models that exhibit reduced visual artifacts. This study investigates the effectiveness of augmenting training data with StyleGAN3-generated images to enhance the generalization capability of Deepfake detection systems. Unlike earlier generative models, StyleGAN3 mitigates common artifacts such as texture sticking and aliasing, producing highly realistic synthetic faces that better represent modern forgery characteristics. We train a convolutional neural network (ResNet-18) under two controlled conditions: using a standard Deepfake dataset and using a dataset augmented with StyleGAN3-generated images. Experimental results demonstrate that the proposed augmentation strategy yields a 20.5% absolute improvement in test accuracy, along with a substantial increase in true positive rate and a significant reduction in false negatives. These findings indicate that exposure to more realistic synthetic samples enables the model to learn deeper and more transferable representations of manipulated content. However, the improvement in fake detection performance is accompanied by a moderate rise in false positives, highlighting an important trade-off that must be considered in practical deployment. Overall, this work demonstrates that incorporating artifact-reduced synthetic images during training can improve the robustness of Deepfake detection models. The study contributes to ongoing efforts in digital media forensics by emphasizing the importance of realistic data augmentation strategies for strengthening detection systems against evolving generative techniques.
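The abstract reports results in terms of test accuracy, true positive rate, false negatives, and a rise in false positives. As an illustrative sketch (not the authors' code), the snippet below shows how these detection metrics are derived from a confusion matrix, using the common convention that label 1 means "fake" and label 0 means "real"; the example labels and predictions are hypothetical.

```python
# Illustrative sketch, not the paper's implementation.
# Computes the metrics named in the abstract from binary predictions.
# Convention (assumed): 1 = fake (positive class), 0 = real.

def detection_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "tpr": tp / (tp + fn) if (tp + fn) else 0.0,  # fakes correctly flagged
        "fnr": fn / (tp + fn) if (tp + fn) else 0.0,  # fakes missed
        "fpr": fp / (fp + tn) if (fp + tn) else 0.0,  # real images flagged as fake
    }

# Hypothetical example of the trade-off the abstract describes:
# catching every fake (TPR = 1.0) at the cost of one false positive.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0]
m = detection_metrics(y_true, y_pred)
print(m["accuracy"], m["tpr"], m["fpr"])  # 0.875 1.0 0.25
```

A detector tuned toward a higher true positive rate typically shifts its decision boundary so that some genuine images also cross it, which is the accuracy/false-positive trade-off the abstract flags for practical deployment.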
| Item Type: | Article |
|---|---|
| Identification Number: | 10.1109/ACCESS.2026.3680327 |
| Dates: | Accepted: 20 March 2026; Published Online: 3 April 2026 |
| Uncontrolled Keywords: | Deepfake, AI generation, deep learning, StyleGAN3 |
| Subjects: | CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science |
| Divisions: | Architecture, Built Environment, Computing and Engineering > Computer Science |
| Depositing User: | Gemma Tonks |
| Date Deposited: | 12 May 2026 12:19 |
| Last Modified: | 12 May 2026 12:19 |
| URI: | https://www.open-access.bcu.ac.uk/id/eprint/17034 |