Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning, Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, Michal Valko, 2020, Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.2006.07733 - Introduces BYOL, a self-supervised learning method that avoids using negative pairs by predicting the output of a momentum-updated target network, achieving strong performance.
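The "momentum-updated target network" mentioned above is an exponential moving average (EMA) of the online network's weights. A minimal sketch of that update rule, using plain Python lists as stand-ins for parameter tensors (the function name and `tau` value are illustrative, not from the paper):

```python
def ema_update(target_params, online_params, tau=0.99):
    """Momentum (EMA) update: target <- tau * target + (1 - tau) * online.

    With tau close to 1, the target network changes slowly, providing a
    stable regression target for the online network's predictor.
    """
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_params, online_params)]
```

The target network receives no gradients; it is refreshed only through this update after each training step.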
Momentum Contrast for Unsupervised Visual Representation Learning, Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick, 2020, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). DOI: 10.48550/arXiv.1911.05722 - Proposes Momentum Contrast (MoCo), a self-supervised learning framework that builds a dynamic dictionary with a momentum encoder to facilitate contrastive learning without massive batch sizes.
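MoCo's "dynamic dictionary" decouples the pool of negatives from the batch size: keys produced by the momentum encoder are pushed into a fixed-size FIFO queue, and the oldest keys are evicted as new ones arrive. A minimal sketch of that queue, assuming each key is an encoded feature vector (the class name and methods are illustrative, not MoCo's actual implementation):

```python
from collections import deque


class KeyQueue:
    """Fixed-size FIFO dictionary of encoded keys.

    New keys from the momentum encoder are enqueued each step;
    deque's maxlen automatically evicts the oldest entries, so the
    dictionary can be much larger than any single mini-batch.
    """

    def __init__(self, size):
        self.queue = deque(maxlen=size)

    def enqueue(self, keys):
        self.queue.extend(keys)

    def negatives(self):
        return list(self.queue)
```

During training, the current batch's keys serve as positives while the queued keys from earlier batches serve as negatives in the contrastive loss.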