Chain of Thoughts Based Prompting for Incident Detection Using Large Language Model
Udokisai, Affiong and Chambers, Lorraine and Mahmoud, Haitham and Ismail, Khalid N. and Mohammed, Nazim and Cervantes-Solis, Waldo and Gaber, Mohamed Medhat and Bhana, Rehan (2025) Chain of Thoughts Based Prompting for Incident Detection Using Large Language Model. In: 2025 International Joint Conference on Neural Networks: International Neural Network Society, 30th June - 5th July 2025, Rome, Italy. (In Press)
Text: IJCNN_COT_Based_Prompting_for_Incident_Detection_Using_Large_Language_Model_3_.pdf - Accepted Version (restricted to repository staff only)
Abstract
Warehouse safety is crucial but challenging due to the variety of incidents that can occur, such as human errors and equipment failures. Traditional incident reporting often focuses only on the event itself, lacking context on the actions leading up to the incident. To address this, this work introduces a system that uses video analysis to track and document the sequence of events before, during, and after an incident. The system integrates You Only Look Once (YOLO) models (YOLOv9 and YOLOv11) and Faster R-CNN (Region-based Convolutional Neural Network) to detect and annotate events in real time, capturing the entire incident sequence. A language model then automatically generates detailed, clear health and safety reports from the detected actions, ensuring their accuracy and relevance. Testing demonstrated the system's effectiveness in detecting key incidents such as forklift mishandling and falling goods. YOLOv11 achieved a precision of 0.806, a recall of 0.955, and a mean Average Precision at an IoU threshold of 0.50 (mAP50) of 0.972. The system also showed strong sequence detection accuracy, identifying key events with a recall of 1.0 in some cases. Reports generated with Generative Pre-trained Transformer (GPT)-based models aligned closely with human-written text, achieving a cosine similarity score of 0.874 and a Bidirectional Encoder Representations from Transformers (BERT)-based F1 score of 0.879. These results indicate that the system improves safety practices by providing comprehensive, actionable insights.
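The abstract evaluates generated reports against human-written text using cosine similarity. As a rough illustration of that kind of comparison, the sketch below computes cosine similarity over simple bag-of-words count vectors; the paper's reported score presumably comes from embedding-based representations (alongside a BERT-based F1), and the example report texts here are invented for illustration.

```python
import math
from collections import Counter


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity over bag-of-words counts.

    A simple stand-in for the embedding-based similarity used to
    compare generated and human-written incident reports.
    """
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0


# Hypothetical generated report vs. human reference (not from the paper).
generated = "forklift mishandling caused goods to fall in aisle three"
reference = "goods fell in aisle three after the forklift was mishandled"
score = cosine_similarity(generated, reference)
print(round(score, 3))
```

A score near 1.0 indicates close lexical overlap; embedding-based variants additionally credit paraphrases that share no surface words.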
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Dates: | 1 June 2025 (Accepted) |
| Subjects: | CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science |
| Divisions: | Faculty of Computing, Engineering and the Built Environment > College of Computing |
| Depositing User: | Gemma Tonks |
| Date Deposited: | 17 Jun 2025 13:29 |
| Last Modified: | 17 Jun 2025 13:30 |
| URI: | https://www.open-access.bcu.ac.uk/id/eprint/16432 |