What is it about?
Federated Learning (FL) is transforming AI by enabling collaborative, distributed training of ML models: the computational effort is shared among participating clients without any local (private) data ever leaving them. Our work introduces FedBed, a testing framework for rapid and reproducible benchmarking of FL deployments on virtualized testbeds such as cloud and edge computing offerings. With FedBed, users can quickly evaluate the trade-offs that arise from combining different FL software stacks (e.g., TensorFlow, PyTorch, Flower) with different infrastructure (e.g., varying compute capabilities and network links). This shortens an otherwise time-consuming process that spans setting up a virtual, physical, or emulated testbed, configuring experiments, and monitoring the resulting FL deployment.
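To make the "training without sharing data" idea concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical FL aggregation step: each client trains on its own data, and a server combines only the resulting model weights. The function and values below are illustrative assumptions, not part of FedBed itself, which orchestrates full FL stacks such as TensorFlow, PyTorch, and Flower.

```python
def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model weights (FedAvg).

    client_weights: one weight vector (list of floats) per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    aggregated = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        # Each client's contribution is proportional to its data volume;
        # only the weights, never the raw data, reach the server.
        for i, w in enumerate(weights):
            aggregated[i] += w * (size / total)
    return aggregated

# Three hypothetical clients with different data volumes.
global_model = fed_avg(
    client_weights=[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    client_sizes=[10, 30, 60],
)
```

Because clients contribute in proportion to their sample counts, heterogeneous (non-uniform) data distributions directly shape the global model — one of the trade-offs FedBed lets users benchmark.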
Why is it important?
Designing and provisioning FL deployments, especially over the edge-cloud continuum, is extremely challenging: resource and network heterogeneity, diverse AI models and libraries, and non-uniform data distributions all hamper QoS and innovation potential. FedBed provides the means to test early in the design process, so users can rapidly set up FL experiments, try alternatives, discover performance inefficiencies, and detect resource wastage during model training.
Read the Original
This page is a summary of: FedBed: Benchmarking Federated Learning over Virtualized Edge Testbeds, December 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3603166.3632138.