The standard Federated Averaging (FedAvg) algorithm serves as a foundational technique for combining model updates from multiple clients. However, its effectiveness degrades when client data is not independent and identically distributed (Non-IID) or when some participants are unreliable. This chapter introduces aggregation algorithms designed to perform more effectively under these challenging, yet common, conditions.
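As a baseline for the methods this chapter covers, the sketch below shows the core of FedAvg: a data-size-weighted average of client model parameters. It is a minimal illustration assuming parameters arrive as NumPy arrays; the function name `fedavg_aggregate` and the two-client example are illustrative, not from any particular library.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_weights: one list per client, each holding that client's
        model parameters layer by layer as np.ndarray objects.
    client_sizes: local training-set size per client, used as the
        aggregation weight.
    """
    total = sum(client_sizes)
    aggregated = []
    # For each layer, sum client parameters weighted by each
    # client's share of the total training data.
    for layer_idx in range(len(client_weights[0])):
        layer = sum(
            (size / total) * weights[layer_idx]
            for weights, size in zip(client_weights, client_sizes)
        )
        aggregated.append(layer)
    return aggregated

# Example: two clients with a single-layer model.
w1 = [np.array([1.0, 2.0])]
w2 = [np.array([3.0, 4.0])]
print(fedavg_aggregate([w1, w2], client_sizes=[100, 300]))
# -> [array([2.5, 3.5])]: client 2 holds 3x the data, so its
#    parameters dominate the average.
```

Every method in this chapter can be read as a modification of this weighted average: changing what the clients send, how the server weights it, or how outliers are filtered out before averaging.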
You will examine:

2.1 Limitations of Federated Averaging (FedAvg)
2.2 FedProx: Addressing Statistical Heterogeneity
2.3 SCAFFOLD: Variance Reduction in Federated Optimization
2.4 FedNova: Normalized Averaging for Heterogeneous Systems
2.5 Byzantine-Robust Aggregation Methods
2.6 Adaptive Federated Optimization Techniques
2.7 Practice: Implementing Advanced Aggregation

Upon completing this chapter, you will be equipped to select, implement, and analyze aggregation strategies beyond FedAvg, enabling the development of more reliable and performant federated learning systems in diverse environments.