FedNets: Federated Learning on Edge Devices Using Ensembles of Pruned Deep Neural Networks

Alhalabi, Besher and Basurra, Shadi and Gaber, Mohamed Medhat (2023) FedNets: Federated Learning on Edge Devices Using Ensembles of Pruned Deep Neural Networks. IEEE Access, 11. pp. 30726-30738. ISSN 2169-3536

FedNets_Federated_Learning_on_Edge_Devices_Using_Ensembles_of_Pruned_Deep_Neural_Networks.pdf - Published Version
Available under License Creative Commons Attribution Non-commercial No Derivatives.



Federated Learning (FL) is an innovative area of machine learning that enables different clients to collaboratively generate a shared model while preserving their data privacy. In a typical FL setting, a central model is updated by aggregating the parameters of the clients’ local artificial neural networks, and the aggregated parameters are then sent back to the clients. However, two main challenges are associated with this central aggregation approach. Firstly, most state-of-the-art strategies are not optimised to operate in the presence of certain types of non-iid (not independent and identically distributed) applications and datasets. Secondly, federated learning is vulnerable to various privacy and security concerns, notably model inversion attacks, which can be used to recover sensitive information from the training data. To address these issues, we propose a novel federated learning strategy, FedNets, based on ensemble learning. Instead of sharing the parameters of the clients over the network to update a single global model, our approach allows clients to hold ensembles of diverse, lightweight models and to collaborate by sharing ensemble members. FedNets utilises graph embedding theory to reduce the complexity of running Deep Neural Networks (DNNs) on resource-limited devices. Each DNN is treated as a graph, from which graph embeddings are generated and clustered to determine which part of the DNN should be shared with other clients. Our approach outperforms state-of-the-art FL algorithms such as Federated Averaging (Fed-Avg) and Adaptive Federated Optimisation (Fed-Yogi) in terms of accuracy; on the Federated CIFAR100 dataset (non-iid), FedNets demonstrates a remarkable 63% and 92% improvement in accuracy, respectively. Furthermore, FedNets does not compromise the clients’ privacy, as this is safeguarded by the design of the method.
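The abstract's sharing step (treat the DNN as a graph, embed its parts, cluster the embeddings, share only representatives) can be sketched in miniature. The following is a hypothetical illustration only: the per-layer feature vectors and the greedy similarity clustering below stand in for the paper's actual graph embeddings and affinity propagation, and all function names are invented for this sketch.

```python
# Hypothetical sketch of a FedNets-style sharing decision. Each layer of a
# small network is summarised as an embedding vector, layers are grouped by
# similarity, and one representative per group would be shared with peers.
# The greedy threshold clustering here is a stand-in assumption for the
# affinity propagation used in the paper.
import math

def layer_embedding(weights):
    """Summarise a layer's flat weight list as (size, mean, std)."""
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / n
    return (float(n), mean, math.sqrt(var))

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_layers(embeddings, threshold=0.99):
    """Greedy clustering: join the first cluster whose representative
    embedding is close enough, else start a new cluster."""
    clusters = []   # lists of layer indices
    reps = []       # representative embedding per cluster
    for i, e in enumerate(embeddings):
        for c, r in zip(clusters, reps):
            if cosine(e, r) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
            reps.append(e)
    return clusters

# Toy network: four layers, three of which have near-identical statistics.
layers = [
    [0.1, 0.2, 0.1, 0.2],
    [0.1, 0.2, 0.1, 0.2],
    [5.0, -5.0, 5.0, -5.0],
    [0.1, 0.2, 0.1, 0.21],
]
embeddings = [layer_embedding(layer) for layer in layers]
groups = cluster_layers(embeddings)
# Only one representative layer per group is marked for sharing.
shared = [g[0] for g in groups]
```

With this toy input, layers 0, 1 and 3 fall into one group and layer 2 into another, so only two layers (`shared == [0, 2]`) would be transmitted. The design intent this mirrors is the one stated in the abstract: clustering reduces what must leave the device, which both lightens the network load and limits what an observer can reconstruct.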

Item Type: Article
Identification Number: https://doi.org/10.1109/ACCESS.2023.3261266
Accepted: 1 March 2023
Published Online: 24 March 2023
Uncontrolled Keywords: Federated learning, ensemble learning, convolutional neural networks, graph embedding, affinity propagation, non-IID datasets, privacy
Subjects: CAH11 - computing > CAH11-01 - computing > CAH11-01-01 - computer science
Divisions: Faculty of Computing, Engineering and the Built Environment > School of Computing and Digital Technology
Depositing User: Gemma Tonks
Date Deposited: 20 Apr 2023 14:09
Last Modified: 20 Apr 2023 14:09
URI: https://www.open-access.bcu.ac.uk/id/eprint/14345
