TY - JOUR
T1 - FedFly: Toward Migration in Edge-Based Distributed Federated Learning
T2 - IEEE Communications Magazine
AU - Ullah, Rehmat
AU - Wu, Di
AU - Harvey, Paul
AU - Kilpatrick, Peter
AU - Spence, Ivor
AU - Varghese, Blesson
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/8/1
Y1 - 2022/8/1
N2 - Federated learning (FL) is a privacy-preserving distributed machine learning technique that trains models while keeping all of the original data generated on devices local. Since devices may be resource constrained, offloading can improve FL performance by transferring computational workload from devices to edge servers. However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server. This is challenging because the computations offloaded to an edge server need to be migrated. To address this challenge, we present FedFly, which is, to the best of our knowledge, the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training. Our empirical results on the CIFAR-10 dataset, with both balanced and imbalanced data distributions, support our claim that FedFly reduces training time by up to 33 percent when a device moves after 50 percent of the training is completed, and by up to 45 percent when it moves after 90 percent of the training is completed, compared with the state-of-the-art offloading approach in FL. FedFly incurs a negligible migration overhead of up to two seconds and does not compromise accuracy. Finally, we highlight a number of open research issues for further investigation.
AB - Federated learning (FL) is a privacy-preserving distributed machine learning technique that trains models while keeping all of the original data generated on devices local. Since devices may be resource constrained, offloading can improve FL performance by transferring computational workload from devices to edge servers. However, due to mobility, devices participating in FL may leave the network during training and need to connect to a different edge server. This is challenging because the computations offloaded to an edge server need to be migrated. To address this challenge, we present FedFly, which is, to the best of our knowledge, the first work to migrate a deep neural network (DNN) when devices move between edge servers during FL training. Our empirical results on the CIFAR-10 dataset, with both balanced and imbalanced data distributions, support our claim that FedFly reduces training time by up to 33 percent when a device moves after 50 percent of the training is completed, and by up to 45 percent when it moves after 90 percent of the training is completed, compared with the state-of-the-art offloading approach in FL. FedFly incurs a negligible migration overhead of up to two seconds and does not compromise accuracy. Finally, we highlight a number of open research issues for further investigation.
UR - http://www.scopus.com/inward/record.url?scp=85135760268&partnerID=8YFLogxK
U2 - 10.48550/arXiv.2111.01516
DO - 10.48550/arXiv.2111.01516
M3 - Article
AN - SCOPUS:85135760268
SN - 0163-6804
VL - 60
SP - 42
EP - 48
JO - IEEE Communications Magazine
JF - IEEE Communications Magazine
IS - 11
ER -