Federated Learning often involves coordinating hundreds or thousands of devices, each potentially training large models. Sending full model updates or gradients between these clients and the central server during each communication round can consume significant bandwidth and time, frequently becoming the primary performance constraint. The uplink communication, from client to server, is particularly constrained on many edge devices.
This chapter focuses on techniques to mitigate this communication overhead. We will examine methods for reducing the size of the data transmitted, such as:
- Gradient compression through quantization and sparsification
- Error accumulation and compensation methods that counteract the accuracy lost to compression
- Strategies for compressing full model updates
- Reducing communication frequency through additional local computation and asynchronous update schemes
We will analyze the trade-offs associated with these techniques, considering their impact on communication cost, computation overhead, and model convergence speed. You will also learn how to implement some of these methods in practice.
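To make the core idea concrete before the detailed sections, the sketch below shows one simple form of gradient quantization, the technique implemented hands-on in section 5.7. It maps float32 gradients to 8-bit integer codes plus a single scale factor, roughly quartering the uplink payload. This is a minimal illustration assuming NumPy is available; the function names quantize_gradient and dequantize_gradient are placeholders for this example, not part of any particular federated learning framework.

```python
import numpy as np

def quantize_gradient(grad, num_bits=8):
    """Uniformly quantize a gradient tensor to signed integer codes.

    Returns the codes plus the scale factor needed to reconstruct
    approximate float values on the server side.
    """
    levels = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8-bit codes
    max_abs = np.max(np.abs(grad))
    if max_abs == 0.0:                          # all-zero gradient: nothing to encode
        return np.zeros_like(grad, dtype=np.int8), 1.0
    scale = max_abs / levels                    # map the largest magnitude to the top level
    codes = np.clip(np.round(grad / scale), -levels, levels).astype(np.int8)
    return codes, scale

def dequantize_gradient(codes, scale):
    """Reconstruct an approximate float32 gradient from the integer codes."""
    return codes.astype(np.float32) * scale

# Example: a client compresses its update before the uplink transfer.
grad = np.random.randn(1_000_000).astype(np.float32)   # ~4 MB of float32 gradients
codes, scale = quantize_gradient(grad)                  # ~1 MB of int8 codes plus one float
recovered = dequantize_gradient(codes, scale)
print("compression ratio:", grad.nbytes / codes.nbytes)
print("max abs error:", np.max(np.abs(grad - recovered)))
```

The recovered gradient differs slightly from the original; the error accumulation and compensation methods in section 5.3 address exactly this kind of compression error.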
5.1 Communication Bottlenecks in Federated Learning
5.2 Gradient Compression Techniques
5.3 Error Accumulation and Compensation Methods
5.4 Model Update Compression Strategies
5.5 Optimizing Local Computation
5.6 Asynchronous Federated Learning Optimizations
5.7 Hands-on Practical: Implementing Gradient Quantization