TY - JOUR
T1 - FedAdapt: Adaptive Offloading for IoT Devices in Federated Learning
T2 - IEEE Internet of Things Journal
AU - Wu, Di
AU - Ullah, Rehmat
AU - Harvey, Paul
AU - Kilpatrick, Peter
AU - Spence, Ivor
AU - Varghese, Blesson
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/5/19
Y1 - 2022/5/19
N2 - Applying federated learning (FL) on Internet of Things (IoT) devices is necessitated by the large volumes of data they produce and growing concerns over data privacy. However, three challenges need to be addressed to make FL efficient: 1) execution on devices with limited computational capabilities; 2) accounting for stragglers due to the computational heterogeneity of devices; and 3) adaptation to changing network bandwidth. This article presents FedAdapt, an adaptive offloading FL framework that mitigates the aforementioned challenges. FedAdapt accelerates local training on computationally constrained devices by offloading layers of deep neural networks (DNNs) to servers. Furthermore, FedAdapt adopts reinforcement learning (RL)-based optimization and clustering to adaptively identify which layers of the DNN should be offloaded from each individual device onto a server, thereby tackling the challenges of computational heterogeneity and changing network bandwidth. Experimental studies carried out on a lab-based testbed demonstrate that, by offloading a DNN from the device to the server, FedAdapt reduces the training time of a typical IoT device by more than half compared to classic FL. The training time of extreme stragglers and the overall training time can be reduced by up to 57%. Furthermore, under changing network bandwidth, FedAdapt reduces the training time by up to 40% compared to classic FL, without sacrificing accuracy.
AB - Applying federated learning (FL) on Internet of Things (IoT) devices is necessitated by the large volumes of data they produce and growing concerns over data privacy. However, three challenges need to be addressed to make FL efficient: 1) execution on devices with limited computational capabilities; 2) accounting for stragglers due to the computational heterogeneity of devices; and 3) adaptation to changing network bandwidth. This article presents FedAdapt, an adaptive offloading FL framework that mitigates the aforementioned challenges. FedAdapt accelerates local training on computationally constrained devices by offloading layers of deep neural networks (DNNs) to servers. Furthermore, FedAdapt adopts reinforcement learning (RL)-based optimization and clustering to adaptively identify which layers of the DNN should be offloaded from each individual device onto a server, thereby tackling the challenges of computational heterogeneity and changing network bandwidth. Experimental studies carried out on a lab-based testbed demonstrate that, by offloading a DNN from the device to the server, FedAdapt reduces the training time of a typical IoT device by more than half compared to classic FL. The training time of extreme stragglers and the overall training time can be reduced by up to 57%. Furthermore, under changing network bandwidth, FedAdapt reduces the training time by up to 40% compared to classic FL, without sacrificing accuracy.
KW - Edge computing
KW - Internet of Things (IoT)
KW - federated learning (FL)
KW - reinforcement learning (RL)
UR - http://www.scopus.com/inward/record.url?scp=85130475380&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2022.3176469
DO - 10.1109/JIOT.2022.3176469
M3 - Article
AN - SCOPUS:85130475380
SN - 2327-4662
VL - 9
SP - 20889
EP - 20901
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 21
ER -