To build effectively on federated learning (FL), this chapter first revisits its core concepts. We start by recapping the standard FL workflow and its basic principles.
You will examine the inherent difficulties of federated settings, specifically statistical heterogeneity (non-IID data) and systems heterogeneity (variations in client hardware, network conditions, and availability). We will define the mathematical objective function typically used in federated optimization, often expressed as minimizing a global loss aggregated across clients:
$$F(w) = \sum_{k=1}^{N} p_k F_k(w)$$

where $F_k(w)$ is the local objective for client $k$ and $p_k$ represents its contribution weight.
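As a concrete illustration, the sketch below evaluates this weighted objective in Python. The helper name `global_objective`, the `clients` structure, and the data-proportional choice of weights $p_k = n_k / n$ are illustrative assumptions, not definitions from this chapter.

```python
import numpy as np

def global_objective(w, clients):
    """Weighted global loss F(w) = sum_k p_k * F_k(w).

    `clients` is a list of (F_k, n_k) pairs: a client's local objective
    function and its dataset size. The weights use the common
    data-proportional choice p_k = n_k / n.
    """
    total = sum(n_k for _, n_k in clients)
    return sum((n_k / total) * F_k(w) for F_k, n_k in clients)

# Toy example: two clients with simple quadratic local objectives.
clients = [
    (lambda w: float(np.mean((w - 1.0) ** 2)), 100),  # client 1, 100 examples
    (lambda w: float(np.mean((w + 2.0) ** 2)), 300),  # client 2, 300 examples
]
print(global_objective(np.array([0.0]), clients))  # 0.25 * 1.0 + 0.75 * 4.0 = 3.25
```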
Further sections compare synchronous and asynchronous training paradigms, analyze potential security and privacy threats within the FL threat model, and discuss essential metrics and methodologies for evaluating the performance, efficiency, and fairness of these distributed systems. This review establishes the necessary groundwork for the advanced techniques covered in subsequent chapters.
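To preview the synchronous/asynchronous distinction, the sketch below shows one synchronous round in the FedAvg style: the server waits for every selected client before averaging, whereas an asynchronous server would instead apply each update as it arrives, typically down-weighted by staleness. The function names and the toy quadratic local update are assumptions for illustration, not this book's reference implementation.

```python
import numpy as np

def fedavg_round(w, client_data, local_update):
    """One synchronous round: collect an update from every selected client,
    then average the results weighted by local dataset size."""
    updates = [(local_update(w, data), len(data)) for data in client_data]
    total = sum(n for _, n in updates)
    return sum((n / total) * w_k for w_k, n in updates)

# Toy local update: one gradient step on the loss (w - mean(data))^2.
def local_update(w, data, lr=0.1):
    return w - lr * 2.0 * (w - np.mean(data))

client_data = [np.array([1.0, 1.2]), np.array([-2.0, -1.8, -2.2])]
w = np.zeros(1)
for _ in range(10):
    w = fedavg_round(w, client_data, local_update)
print(w)  # converges toward the data-weighted mean of the client targets
```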
1.1 Federated Learning Principles: A Recap
1.2 Challenges in Federated Environments
1.3 Mathematical Formulation of Federated Optimization
1.4 Synchronous vs. Asynchronous Federated Learning Models
1.5 Threat Models in Federated Learning
1.6 Evaluating Federated Learning Systems