Auto Vectorization

This lesson will provide an introduction to auto vectorization in JAX.


Those familiar with stochastic gradient descent (SGD) know that, in its pure form, it updates the parameters one sample at a time, which is computationally inefficient. In practice, we instead apply it to batches of samples, a technique usually known as mini-batch gradient descent.
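As a minimal sketch of the idea, here is a mini-batch SGD step in JAX. The linear model, loss, data, and learning rate are all illustrative assumptions, not part of the lesson; the point is that the per-sample loss is averaged over a batch before taking the gradient:

```python
import jax
import jax.numpy as jnp

# Hypothetical per-sample squared-error loss for a linear model
# (names and model are illustrative assumptions).
def loss(w, x, y):
    return (jnp.dot(x, w) - y) ** 2

# One mini-batch SGD step: average the loss over the batch,
# then move against the gradient of that batch loss.
def sgd_step(w, xs, ys, lr=0.1):
    batch_loss = lambda w: jnp.mean(
        jax.vmap(loss, in_axes=(None, 0, 0))(w, xs, ys)
    )
    return w - lr * jax.grad(batch_loss)(w)

w = jnp.zeros(2)
xs = jnp.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # a batch of 3 samples
ys = jnp.array([1.0, 2.0, 3.0])
w = sgd_step(w, xs, ys)
print(w)
```

Note that `jax.vmap` is used here to evaluate the per-sample loss across the whole batch at once; this is exactly the auto-vectorization mechanism this lesson covers.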

Batching is common practice throughout deep learning and applies to many operations, such as convolution and optimization.

Let’s have a look at a convolution function for 1D vectors:
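The code itself is not shown in this excerpt, so here is a sketch of what such a function might look like: a simple "valid" 1D convolution of a vector with a three-element kernel, written with an explicit Python loop (the function and variable names are assumptions):

```python
import jax
import jax.numpy as jnp

def convolve(x, w):
    # Slide a length-3 kernel w over x and take a dot product
    # at each interior position ("valid" 1D convolution).
    output = []
    for i in range(1, len(x) - 1):
        output.append(jnp.dot(x[i - 1 : i + 2], w))
    return jnp.array(output)

x = jnp.arange(5.0)             # [0, 1, 2, 3, 4]
w = jnp.array([2.0, 3.0, 4.0])  # kernel
print(convolve(x, w))           # → [11. 20. 29.]

# jax.vmap auto-vectorizes convolve over a leading batch axis,
# so it handles a whole batch of vectors and kernels at once.
xs = jnp.stack([x, x])
ws = jnp.stack([w, w])
print(jax.vmap(convolve)(xs, ws))
```

Wrapping the function in `jax.vmap` is what makes this "auto" vectorization: the loop-based, single-sample function is transformed into a batched one without rewriting it by hand.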
