Endpoints and Deployment
Explore how to deploy machine learning models using Azure through various methods including online, batch, and local endpoints. Understand how to configure environments, scoring code, and inference setups. This lesson guides you through no-code, system-image, and custom-image deployment topologies, offering practical insights for managing Azure ML services efficiently.
How do we integrate our ML models into our product? Let's talk about deploying the model. Deployment has multiple aspects, and there are multiple ways to approach each of them. The first aspect is online vs. offline (batch) deployment. Let's compare both methods.
Online vs. batch deployment
| Online deployment | Batch deployment |
| --- | --- |
| Serves predictions in real time over a request/response interface (e.g., HTTP) | Scores large volumes of data in scheduled or on-demand jobs |
| Optimized for low latency on one or a few records per request | Optimized for throughput; higher latency per record is acceptable |
| The endpoint must be continuously available | Compute is provisioned only while the job runs |
| Typical uses: interactive applications and APIs | Typical uses: periodic reports and bulk scoring of stored data |
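To make the distinction concrete, here is a toy Python sketch (not Azure-specific; the model and function names are illustrative assumptions) contrasting per-request online scoring with bulk batch scoring:

```python
from typing import Iterable, List


def predict(x: float) -> float:
    """Stand-in for a trained model: simply doubles its input."""
    return 2 * x


def score_online(request: float) -> float:
    """Online style: one request in, one prediction out, immediately."""
    return predict(request)


def score_batch(requests: Iterable[float]) -> List[float]:
    """Batch style: score an entire dataset in one job, return all results."""
    return [predict(x) for x in requests]


print(score_online(3.0))        # a single real-time prediction: 6.0
print(score_batch([1.0, 2.0]))  # bulk predictions from one job: [2.0, 4.0]
```

The trade-off is the same in any framework: the online path keeps a service running so each request is answered quickly, while the batch path amortizes startup cost over many records.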
Components in service deployment
What are the essential components needed to run a model as a service? Let's understand them using the diagram below.
An ML model needs the following details for deployment:
An environment configuration: Specifications about which software and packages to install.
Scoring code: The code that scores the requested data using the model.
Inference configuration: Specifications that tie the deployment together, such as which scoring script to use and the environment and settings needed to run it.
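The scoring code mentioned above follows a well-known contract in Azure ML: an `init()` function that loads the model once when the service starts, and a `run()` function that is called for each scoring request. The sketch below uses a trivial stand-in model so it runs anywhere; in a real deployment, `init()` would load your registered model artifact (typically from the path in the `AZUREML_MODEL_DIR` environment variable) instead:

```python
import json

model = None  # populated once by init()


def init():
    """Called once when the deployment starts; load the model here.
    In Azure ML you would typically load the registered model file from
    the AZUREML_MODEL_DIR path (assumption: a real artifact exists there,
    e.g., loaded with joblib). Here we use a trivial stand-in model."""
    global model
    model = lambda x: 2 * x  # stand-in for a loaded model object


def run(raw_data: str) -> str:
    """Called for each scoring request with the raw JSON payload;
    returns the predictions as a JSON string."""
    inputs = json.loads(raw_data)["data"]
    predictions = [model(x) for x in inputs]
    return json.dumps({"predictions": predictions})


# Simulate what the serving runtime does: initialize, then score a request.
init()
print(run(json.dumps({"data": [1, 2, 3]})))
```

Keeping model loading in `init()` rather than `run()` matters for online endpoints: the expensive load happens once at startup, so each request pays only the cost of inference.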