Overview of the Course

Explore what you will learn in the course.

Techniques employed

Here are some of the techniques you will learn, either by building them yourself or by using them to solve a task:

  • Linear layers
  • Backpropagation and optimization
  • Convolutional neural networks
  • Recurrent neural networks
  • Variational autoencoders
  • Generative adversarial networks
  • Attention
  • Transformers
  • Graph convolutional networks

Besides these main architectures, you will also learn methods for improving a model's accuracy and performance. These techniques apply across many architectures and are not tied to a specific model. Examples include:

  • Optimization algorithms
  • Activation functions
  • Batch normalization
  • Skip connections
  • Dropout
  • Latent variables
  • Image and text representation

After completing this course, these skills will be part of your arsenal. Be sure to dedicate enough time to each chapter so that you feel comfortable discussing these topics later.

Throughout the course, a lot of emphasis is placed on providing you with inspiration along the way.

Course outline

Each of the building blocks described above will be an independent lesson that you can master at your own pace. At the end of the course, there will be a coding exercise as well as a quiz to solidify your understanding. The chapters are organized as follows:

  1. Neural networks
  2. Training neural networks
  3. Convolutional neural networks
  4. Recurrent neural networks
  5. Autoencoders
  6. Generative adversarial networks
  7. Attention and transformers
  8. Graph neural networks
  9. Conclusion

Libraries and setup

Alongside the theory, you will learn the basics of one of the most famous deep learning libraries: PyTorch.
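To give a feel for what working with PyTorch looks like, here is a minimal sketch of the first technique on the list, a linear layer, applied to a random batch of inputs. The layer sizes and batch size are illustrative choices, not values from the course.

```python
import torch
import torch.nn as nn

# A single linear layer: computes y = x @ W.T + b
layer = nn.Linear(in_features=4, out_features=2)

# A batch of 3 samples, each with 4 features
x = torch.randn(3, 4)
y = layer(x)

print(y.shape)  # torch.Size([3, 2])
```

Every architecture in the course outline above is ultimately built from components like this one, composed and trained with the optimization techniques covered in the early chapters.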

You will have time to load your data and solve problems directly inside the platform.

We try to introduce every concept smoothly, but some work is always left to you. You will often need to look things up on your own, so keep the official PyTorch documentation in your bookmarks.