
Hyperparameter Tuning

Explore how hyperparameters affect model bias and variance, and learn to configure Amazon SageMaker Automatic Model Tuning with strategies like random search and Bayesian optimization. Understand how to select objective metrics and implement tuning workflows to identify optimal configurations that improve ML model performance on AWS.

With training infrastructure optimized through distributed training, Spot Training, Training Compiler, and Warm Pools, the next challenge shifts from how to train to which configuration to use. Every machine learning model depends on configuration values that you set before training begins. These values, called hyperparameters, are distinct from model parameters such as weights and biases, which the algorithm learns from data during training. Choosing the right hyperparameters determines whether a model underfits, overfits, or generalizes well to unseen data.
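To make the distinction concrete, here is a minimal sketch in plain Python (not tied to any AWS service). The function names and data are illustrative only: `learning_rate` and `epochs` are hyperparameters fixed before training, while the weight `w` is a model parameter the algorithm learns from data.

```python
# Fit y = w * x with gradient descent on mean squared error.
# learning_rate and epochs are hyperparameters (chosen before training);
# w is a model parameter (learned from the data during training).
def fit(xs, ys, learning_rate=0.05, epochs=200):
    w = 0.0  # learned parameter, updated on every training step
    for _ in range(epochs):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= learning_rate * grad
    return w

xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]  # underlying rule: y = 2x

w_good = fit(xs, ys, learning_rate=0.05, epochs=200)  # converges near 2.0
w_bad = fit(xs, ys, learning_rate=0.15, epochs=200)   # too large: diverges
```

The same code and data produce a good or useless model depending only on the hyperparameter choice: with a well-chosen learning rate the learned weight lands near 2.0, while a rate that is too large makes the updates overshoot and diverge.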

Consider concrete examples within the AWS ML exam scope. In XGBoost, a SageMaker built-in algorithm, max_depth controls tree complexity, num_round sets the number of boosting iterations, and eta governs the learning rate. In neural networks, batch size, number of hidden layers, and the number of units per layer are all hyperparameters. Manually experimenting with these values through trial and error is a common pitfall that wastes compute resources and often produces suboptimal models.
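To show what a search strategy does with such values, here is a simplified random-search sketch in plain Python. The range names mirror the XGBoost hyperparameters above, but `fake_auc` is a hypothetical stand-in for the validation metric a real tuning job would obtain by actually training a model; in practice, SageMaker Automatic Model Tuning handles the sampling and training runs for you.

```python
import random

def random_search(ranges, score_fn, n_trials, seed=0):
    """Sample n_trials configurations from the given ranges and keep the best."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {
            "max_depth": rng.randint(*ranges["max_depth"]),   # integer range
            "eta": rng.uniform(*ranges["eta"]),               # continuous range
            "num_round": rng.randint(*ranges["num_round"]),
        }
        score = score_fn(cfg)  # in a real job: train a model, evaluate it
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical validation score, peaked at max_depth=6 and eta=0.1;
# a synthetic stand-in for the objective metric of a real training run.
def fake_auc(cfg):
    return 1.0 - 0.01 * abs(cfg["max_depth"] - 6) - abs(cfg["eta"] - 0.1)

ranges = {"max_depth": (1, 10), "eta": (0.01, 0.3), "num_round": (50, 500)}
best_cfg, best_score = random_search(ranges, fake_auc, n_trials=50)
```

Random search evaluates independently sampled configurations, so trials can run in parallel; Bayesian optimization instead uses the scores of earlier trials to choose more promising configurations for later ones, which typically finds good values in fewer trials.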

...