Metric-based meta-learning shifts the focus away from directly adapting model parameters, as gradient-based methods do, toward learning an embedding space f_φ(·) in which new classification tasks reduce to simple distance calculations. This chapter examines advanced techniques within this framework.
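To make the idea concrete, here is a minimal sketch of distance-based classification against class prototypes, assuming the embeddings have already been produced by some embedding function; the function name `prototype_classify` and the use of squared Euclidean distance are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def prototype_classify(support, support_labels, query, n_classes):
    """Assign each query embedding to the class with the nearest prototype.

    support: (n_support, d) array of support-set embeddings f_phi(x).
    support_labels: (n_support,) integer class labels in [0, n_classes).
    query: (n_query, d) array of query embeddings.
    Returns: (n_query,) predicted class indices.
    """
    # Prototype = mean embedding of each class's support examples.
    prototypes = np.stack([
        support[support_labels == c].mean(axis=0) for c in range(n_classes)
    ])
    # Squared Euclidean distance from every query to every prototype.
    dists = ((query[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    # Classification is simply the argmin over prototype distances.
    return dists.argmin(axis=1)
```

Once the embedding space is learned, no further parameter updates are needed at test time; a new task only requires computing prototypes from its support set.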
You will study the mechanisms behind key algorithms such as Prototypical Networks, Relation Networks, and Matching Networks, including architectural variations and the integration of attention. A significant part of the discussion covers adapting these methods to work effectively with the high-dimensional embeddings produced by large foundation models. We also review relevant deep metric learning objectives, such as contrastive and triplet loss (L_triplet), which are often used to train the embedding function f_φ. The chapter closes with practical guidance on implementing these concepts, specifically Prototypical Networks built on features extracted from pre-trained models.
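As a preview of the training objectives discussed later, the triplet loss mentioned above can be sketched as follows. This is a standard formulation, L_triplet = max(0, d(a, p) − d(a, n) + margin); the batch-averaged NumPy version and the margin default of 1.0 are illustrative assumptions, not values prescribed by this chapter.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Batch-averaged triplet loss on embedding arrays of shape (batch, d).

    Pulls each anchor toward its positive (same class) and pushes it away
    from its negative (different class) by at least `margin`.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # d(a, p)
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # d(a, n)
    # Hinge: zero loss once the negative is margin farther than the positive.
    return np.maximum(0.0, d_pos - d_neg + margin).mean()
```

Minimizing this objective shapes the embedding space so that the simple distance-based classification used by the metric-learning methods above becomes reliable.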
3.1 Prototypical Networks Revisited
3.2 Relation Networks for Few-Shot Learning
3.3 Matching Networks with Attention
3.4 Deep Metric Learning Techniques
3.5 Adapting Metric Learning for High-Dimensional Embeddings
3.6 Practice: Implementing Prototypical Networks with Foundation Model Embeddings
© 2025 ApX Machine Learning