Introduction to Transformers in Computer Vision
Explore the foundational concepts of transformers and self-attention as they apply to computer vision. Understand how attention mechanisms generalize beyond NLP, how they compare with convolution operations, and how they enrich feature analysis across spatial and channel dimensions, as well as temporal relations in video.
Having familiarized ourselves with attention and transformers in NLP, where the attention mechanism was first established, we're now ready to expand our focus and apply transformers in the context of computer vision (CV), the central theme of this course.
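To make the transition concrete, here is a minimal sketch, assuming PyTorch, of the core idea we'll build on: an image is split into patches, each patch is embedded as a token, and standard multi-head self-attention relates every spatial position to every other. The patch size, embedding dimension, and head count below are arbitrary illustrative choices, not values from any particular model.

```python
import torch
import torch.nn as nn

batch, channels, height, width = 1, 3, 32, 32
patch = 8                    # 8x8 patches -> a 4x4 grid of 16 tokens
embed_dim, num_heads = 64, 4 # illustrative sizes, not from a specific model

image = torch.randn(batch, channels, height, width)

# Patchify: carve the image into non-overlapping 8x8 patches, flatten each.
patches = image.unfold(2, patch, patch).unfold(3, patch, patch)  # (B, C, 4, 4, 8, 8)
tokens = patches.permute(0, 2, 3, 1, 4, 5).reshape(
    batch, -1, channels * patch * patch)                         # (B, 16, 192)

# Linear patch embedding, then standard multi-head self-attention.
embed = nn.Linear(channels * patch * patch, embed_dim)
attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = embed(tokens)              # (B, 16, 64): one token per patch
out, weights = attn(x, x, x)   # every patch attends to every other patch
print(out.shape, weights.shape)  # (1, 16, 64) and (1, 16, 16)
```

Unlike a convolution, whose receptive field is fixed and local, the `(16, 16)` attention map lets any patch draw on any other patch in a single layer. How this plays out in practice is what the rest of the lesson explores.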
Bridging the gap: Self-attention in computer vision
Let's start by planning our path ahead with a ...