Model Deployment
Explore how to deploy AI models effectively in production environments, taking into account DevOps workflows, data management policies, and customer expectations. Understand team roles in deployment and learn how to train end users to interpret AI outputs, ensuring a smooth product experience in both B2B and B2C contexts.
Managing deployments
In this lesson, we’ll look at the options available from a DevOps perspective for using and deploying models in production, outside of the training workstation or training environment itself. Perhaps we’re using something such as GitLab to manage the branches of our code repository for the various AI/ML applications in our product and experimenting there.
However, once we retrain and update our models, we’ll need to push the new versions into production regularly. This means we need a pipeline that can support this ongoing cycle of experimentation, retraining, and deployment. This section focuses primarily on the considerations that arise after we place a finished ML model into production (a live environment) where it will be accessed by end users.
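As a sketch of one step such a pipeline might include, here is a minimal, hypothetical "promotion gate" in Python: before a retrained model is pushed to production, its validation metric is compared against the current production model's. The function name, metric, and threshold are illustrative assumptions, not part of GitLab or any specific CI tool.

```python
# Hypothetical promotion gate for a retraining pipeline.
# A retrained model replaces the production model only if it scores
# at least as well on a held-out validation set. All names and
# thresholds here are illustrative, not from a specific tool.

def should_promote(new_accuracy: float,
                   prod_accuracy: float,
                   min_improvement: float = 0.0) -> bool:
    """Return True if the retrained model should go to production."""
    return new_accuracy >= prod_accuracy + min_improvement

# Example: the retrained model scores 0.91 vs. 0.89 in production.
print(should_promote(0.91, 0.89))   # promote the new model
print(should_promote(0.88, 0.89))   # keep the current model
```

In a real pipeline this check would run as a CI job after retraining, with the metrics pulled from an experiment tracker or model registry rather than passed in by hand.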