
ML as a Service (MLaaS)

Explore how to deploy and serve machine learning models using Docker containers within AWS Lambda's serverless environment. Understand the event-driven process from model storage to inference, address challenges like cold start latency, and discover how autoscaling enhances performance and resource efficiency in ML pipelines using serverless computing.

Machine learning pipeline

The machine learning pipeline consists of several recurring steps, such as data preprocessing, model training/retraining, and model inference. Serverless architecture can be applied to many of these steps, for example:

  • For pulling and pushing data/objects to and from the backend or data buckets (for example, Amazon S3); see the sketch after this list.

  • For building APIs to transform and clean data (the data processing stage of ML).

  • For new training or retraining when the conditions for concept drift are met. In machine learning, a model's prediction accuracy may degrade over time due to gradual drifts in the underlying physical phenomenon, which change the statistical properties of the target variable in unexpected ways; retraining with new training data is one straightforward solution to this problem.

  • For batch/ensemble predictions.
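
As a concrete illustration of the inference step, below is a minimal sketch of a Lambda handler that pulls a serialized model from Amazon S3 on the first (cold) invocation, caches it in memory for subsequent warm invocations, and returns predictions for the features in the incoming event. The bucket and key names, the `MODEL_BUCKET`/`MODEL_KEY` environment variables, and the use of a joblib-serialized scikit-learn model are assumptions made for illustration only.

```python
# A minimal sketch of an event-driven inference handler, assuming a scikit-learn
# model serialized with joblib and stored in S3. Bucket and key names are placeholders.
import json
import os

import boto3
import joblib

s3 = boto3.client("s3")

MODEL_BUCKET = os.environ.get("MODEL_BUCKET", "my-model-bucket")  # hypothetical bucket
MODEL_KEY = os.environ.get("MODEL_KEY", "models/model.joblib")    # hypothetical object key
LOCAL_PATH = "/tmp/model.joblib"                                  # Lambda's writable scratch space

_model = None  # cached across warm invocations to soften cold-start latency


def _load_model():
    """Download the model artifact from S3 once and keep it in memory."""
    global _model
    if _model is None:
        s3.download_file(MODEL_BUCKET, MODEL_KEY, LOCAL_PATH)
        _model = joblib.load(LOCAL_PATH)
    return _model


def handler(event, context):
    """Parse features from the triggering event and return model predictions."""
    model = _load_model()
    features = json.loads(event["body"])["features"]  # e.g., [[5.1, 3.5, 1.4, 0.2], ...]
    predictions = model.predict(features).tolist()
    return {
        "statusCode": 200,
        "body": json.dumps({"predictions": predictions}),
    }
```

The same pattern extends to the other steps above: the handler can be packaged in a Docker image, and the S3 download/in-memory cache keeps the model load off the critical path for warm invocations.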

Figure: Machine learning steps

In ...