References for Research Papers

Refer to the following references for more details:


  1. [Predicting Relative Position of Patches] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised Visual Representation Learning by Context Prediction. In ICCV 2015.

  2. [Predicting Rotation] Spyros Gidaris, Praveer Singh, Nikos Komodakis. Unsupervised Representation Learning by Predicting Image Rotations. In ICLR 2018.

  3. [Solving Jigsaw Puzzles] Mehdi Noroozi, Paolo Favaro. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. In ECCV 2016.

  4. [Contrastive Learning - The SimCLR Algorithm] Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton. A Simple Framework for Contrastive Learning of Visual Representations. In ICML 2020.

  5. [The MoCo-v2 Algorithm] Xinlei Chen, Haoqi Fan, Ross Girshick, Kaiming He. Improved Baselines with Momentum Contrastive Learning. arXiv preprint, 2020.

  6. [Clustering: The DeepCluster Algorithm] Mathilde Caron, Piotr Bojanowski, Armand Joulin, Matthijs Douze. Deep Clustering for Unsupervised Learning of Visual Features. In ECCV 2018.

  7. [Distillation: The BYOL Algorithm] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, koray kavukcuoglu, Remi Munos, Michal Valko. Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning. In NeurIPS 2020.

  8. [The SimSiam Algorithm] Xinlei Chen, Kaiming He. Exploring Simple Siamese Representation Learning. In CVPR 2021.

  9. [Barlow Twins] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, Stéphane Deny. Barlow Twins: Self-Supervised Learning via Redundancy Reduction. In ICML 2021.

  10. [Simple Masked Image Modeling] Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, Han Hu. SimMIM: A Simple Framework for Masked Image Modeling. In CVPR 2022.

  11. [Masked Autoencoders: Part 1/Part 2] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick. Masked Autoencoders Are Scalable Vision Learners. arXiv preprint, 2021.

  12. [Masked Siamese Networks: Part 1/Part 2] Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas. Masked Siamese Networks for Label-Efficient Learning. arXiv preprint, 2022.
