Where to go from here?
Revisit what you learned in this course, explore some of the latest advancements in deep learning, and find out where to go from here.
Recap
In this admittedly long and demanding course, you covered everything you need to know about deep learning as of 2021.
First, we examined the basic principles behind neural networks, starting with the linear classifier. Then, we proceeded to discuss feedforward networks as a natural extension of linear models.
We also discussed how to train a deep network using backpropagation and gradient descent as well as the most commonly used optimization algorithms and activation functions.
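As a reminder of how those pieces fit together, here is a minimal sketch of gradient descent on a single-parameter linear model with a mean-squared-error loss. The data and learning rate are hypothetical, and no framework is used so the update rule stays visible:

```python
# Minimal gradient descent sketch: fit y = w * x by minimizing MSE.
# Hypothetical toy data; in practice a framework like PyTorch computes
# the gradient for you via backpropagation.

def fit_linear(xs, ys, lr=0.1, steps=100):
    """Fit y = w * x by gradient descent on the MSE loss."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # dL/dw for L = (1/n) * sum((w*x - y)^2)
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad  # step against the gradient
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by y = 2x
print(fit_linear(xs, ys))  # converges toward 2.0
```

The same loop generalizes to deep networks: only the model, the loss, and how the gradient is computed change.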
Convolutional Neural Networks (CNNs) were the architecture that started the whole deep learning hype back in 2012. We covered them in considerable detail to ensure that you would have a good grasp of them. In computer vision applications, they are as relevant as ever.
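The operation at the heart of a CNN layer can be sketched in 1D with plain Python. The signal and kernel below are hypothetical; real layers use learned multi-channel 2D kernels (e.g. `torch.nn.Conv2d`):

```python
# Minimal sketch of the convolution (cross-correlation) operation
# used in CNN layers, shown in 1D with a hypothetical signal/kernel.

def conv1d(signal, kernel):
    """Valid (no padding) 1D cross-correlation."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A difference kernel responds where the signal changes (an "edge").
print(conv1d([0, 0, 1, 1, 0], [-1, 1]))  # [0, 1, 0, -1]
```

In a trained CNN, the kernel values are not hand-picked like this; they are learned by gradient descent.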
Next in line were Recurrent Neural Networks and LSTMs, which were major breakthroughs in Natural Language Processing and sequence processing.
Then, we made a turn towards Generative Learning and learned about Variational Autoencoder and Generative Adversarial Networks.
Finally, we touched upon the two major architectures popular at this time: attention-based networks and Transformers. These architectures have radically changed the field in the last few years. They have already outperformed every other model in NLP tasks and have slowly entered the computer vision field as well.
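The core operation of these models, scaled dot-product attention, softmax(QKᵀ/√d)·V, can be sketched for tiny hypothetical 2-dimensional queries, keys, and values:

```python
import math

# Minimal sketch of scaled dot-product attention, the core operation
# of Transformers: softmax(Q K^T / sqrt(d)) V. Toy 2-D vectors only;
# real models batch this over many heads and long sequences.

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Output is the attention-weighted average of the values.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

Q = [[1.0, 0.0]]                      # one query
K = [[1.0, 0.0], [0.0, 1.0]]          # two keys
V = [[10.0, 0.0], [0.0, 10.0]]        # two values
print(attention(Q, K, V))  # the first value dominates the output
```

Because the query aligns with the first key, the first value receives the larger attention weight, which is exactly the "soft lookup" intuition behind attention.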
Graph Neural Networks, on the other hand, aren’t very mature yet, but they have a vast variety of potential applications.
What’s next for deep learning
The good news is that, by now, you definitely have a very wide view of the field and all major models present in it. The bad news is that deep learning research is expanding rapidly, which means that we have only scratched the surface here. Here are some topics you might want to look into next:
- Reinforcement Learning
- TensorFlow, MXNet, JAX, and other frameworks
- State-of-the-art Computer Vision architectures such as EfficientNet
- Natural Language Processing systems based on Transformers such as BERT and GPT
- Self-supervised learning
- Representation learning
- DL in autonomous vehicles and AI-assisted driving
- Medical applications
- Privacy in AI
- AutoML
- MLOps
- Artificial General Intelligence (AGI)
Of course, one cannot master everything. You will learn more and more as you face more challenging projects.
The path forward
Throughout the course, you have familiarized yourself with PyTorch and have gained a solid understanding of the mathematics and intuition behind the algorithms. You are now ready to start applying deep learning to solve real-world problems. You can choose to go down any of the three routes listed below:
- Go to kaggle.com, find a problem that interests you, and try to develop a solution using everything you learned in this course.
- If you are more interested in the research aspect of deep learning, you can start reading research papers and exploring their implementations and applications. There is no better place for that than paperswithcode.com.
- Finally, if you want to dive even deeper into deep learning and stay updated on the latest trends in the field, our blog AI Summer is the place for you. There, you will find implementations of different algorithms, more articles on the intuition and math behind them, and guides on how to build deep learning-based applications.
And that’s it. We wish you all the best in your future endeavors. For feedback or questions, don’t hesitate to reach out to us on our social media.