Vision Transformers
Learn about task-agnostic vision transformers.
Introduction
Foundation models, as we saw earlier in this course, have two distinct properties:
Emergence: Transformer models that qualify as foundation models can perform tasks they were not explicitly trained for. They are large models trained on supercomputers, and rather than being trained to learn specific tasks like many other models, they learn how to understand sequences.
Homogenization: The same fundamental architecture can be used across many domains, and foundation models can learn new skills through data faster and more effectively than other models.
GPT-3 and Google BERT (only the BERT models trained by Google) are task-agnostic foundation models. These task-agnostic models lead directly to the ViT, CLIP, and DALL-E models. Transformers have uncanny sequence-analysis abilities.
This level of abstraction in transformer models leads to multi-modal neurons.
Multi-modal neurons can process images, which can be tokenized as pixels or image patches and then processed as words in vision transformers. Once an image has been encoded, transformer models treat the resulting tokens just like any word token, as shown in the figure below.
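To make the patch-to-token step concrete, here is a minimal sketch in PyTorch. It assumes a 224×224 RGB image split into 16×16 patches, as in the original ViT design; the variable names and sizes (such as patch_embed and embed_dim) are illustrative rather than taken from any particular implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch: turning an image into "word-like" tokens for a vision transformer.
# Assumes a 224x224 RGB image and 16x16 patches; names and sizes are illustrative.
image = torch.randn(1, 3, 224, 224)          # (batch, channels, height, width)

patch_size = 16
embed_dim = 768                               # illustrative token embedding size

# A convolution whose stride equals its kernel size cuts the image into
# non-overlapping 16x16 patches and linearly projects each patch to embed_dim.
patch_embed = nn.Conv2d(in_channels=3, out_channels=embed_dim,
                        kernel_size=patch_size, stride=patch_size)

tokens = patch_embed(image)                   # (1, 768, 14, 14)
tokens = tokens.flatten(2).transpose(1, 2)    # (1, 196, 768): 196 patch tokens

# From here, the 196 patch tokens are handled exactly like word tokens:
# positional embeddings are added and the sequence is fed to a standard
# transformer encoder.
print(tokens.shape)                           # torch.Size([1, 196, 768])
```

Each of the 196 rows in the resulting tensor plays the same role as a word embedding in a text transformer, which is why the rest of the architecture can remain unchanged.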