Introduction to Model Deployment
Learn how to deploy a trained model by exploring deployment frameworks such as TensorRT, TensorFlow Lite, PyTorch Mobile, ONNX, and OpenVINO.
Deployment of an AI model
It’s time to learn about the deployment stage and the real-world environment. We will use our trained model’s ready-to-use weights to ask it for predictions. This stage is also called inference time.
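To make the idea concrete, here is a minimal sketch of inference with frozen, ready-to-use weights. The weight values and the tiny one-layer classifier are hypothetical, standing in for a real checkpoint loaded from disk:

```python
import numpy as np

# Hypothetical trained weights for a tiny one-layer classifier;
# in a real deployment these would be loaded from a checkpoint file.
W = np.array([[0.5, -0.2],
              [0.1,  0.4]])   # shape: (in_features, out_features)
b = np.array([0.0, 0.1])

def predict(x: np.ndarray) -> int:
    """Inference time: a single forward pass with frozen weights."""
    logits = x @ W + b
    return int(np.argmax(logits))

# Ask the "deployed" model for a prediction on one input sample.
sample = np.array([1.0, 3.0])
print(predict(sample))  # → 1
```

Note that nothing here updates `W` or `b`: at inference time the weights stay fixed, which is exactly what distinguishes deployment from training.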
Usually, a deployment environment has different hardware characteristics than the training environment. We might train our models on a local machine with a GPU, but then want to run the trained model on an embedded system or a mobile phone with less memory and a less powerful processor. Or we might have to run it on another machine, such as a computer or server, with less compute power, since stronger machines are far more expensive than our deployment budget allows. In another case, even if the hardware in the training and deployment environments is similar, we might want to run inference faster, which raises the need to optimize our model.
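One common optimization for constrained deployment targets is post-training quantization: storing weights in a smaller numeric type. The sketch below shows one simple variant, symmetric int8 quantization, with hypothetical weight values; real frameworks like TensorRT or TensorFlow Lite implement far more sophisticated schemes:

```python
import numpy as np

# Hypothetical float32 weights from a trained model.
w_fp32 = np.array([0.52, -1.30, 0.07, 2.41], dtype=np.float32)

# Symmetric post-training quantization: map the float range to int8
# using a single scale factor, shrinking storage by 4x.
scale = np.abs(w_fp32).max() / 127.0
w_int8 = np.round(w_fp32 / scale).astype(np.int8)

# At inference time we dequantize (or compute directly in int8 on
# hardware that supports it).
w_restored = w_int8.astype(np.float32) * scale

# The rounding error per weight is bounded by the scale factor.
max_err = float(np.max(np.abs(w_fp32 - w_restored)))
print(max_err < scale)  # → True
```

The trade-off is a small loss of precision in exchange for less memory and, on many processors, faster integer arithmetic.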