# References for Research Papers

Refer to the following references for more details.

- **[Predicting Relative Position of Patches]** Carl Doersch, Abhinav Gupta, and Alexei A. Efros. *Unsupervised Visual Representation Learning by Context Prediction.* In ICCV 2015.
- **[Predicting Rotation]** Spyros Gidaris, Praveer Singh, and Nikos Komodakis. *Unsupervised Representation Learning by Predicting Image Rotations.* In ICLR 2018.
- **[Solving Jigsaw Puzzles]** Mehdi Noroozi and Paolo Favaro. *Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles.* In ECCV 2016.
- **[Contrastive Learning - The SimCLR Algorithm]** Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. *A Simple Framework for Contrastive Learning of Visual Representations.* In ICML 2020.
- **[The MoCo-v2 Algorithm]** Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. *Improved Baselines with Momentum Contrastive Learning.* arXiv 2020.
- **[Clustering: The DeepCluster Algorithm]** Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. *Deep Clustering for Unsupervised Learning of Visual Features.* In ECCV 2018.
- **[Distillation: The BYOL Algorithm]** Jean-Bastien Grill et al. *Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning.* In NeurIPS 2020.
- **[The SimSiam Algorithm]** Xinlei Chen and Kaiming He. *Exploring Simple Siamese Representation Learning.* In CVPR 2021.
- **[Barlow Twins]** Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. *Barlow Twins: Self-Supervised Learning via Redundancy Reduction.* In ICML 2021.
- **[Simple Masked Image Modeling]** Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. *SimMIM: A Simple Framework for Masked Image Modeling.* In CVPR 2022.
- **[Masked Autoencoders: Part 1/Part 2]** Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. *Masked Autoencoders Are Scalable Vision Learners.* arXiv 2021.
- **[Masked Siamese Networks: Part 1/Part 2]** Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, and Nicolas Ballas. *Masked Siamese Networks for Label-Efficient Learning.* arXiv 2022.