Coding: Bayesian Optimization from Scratch
Explore how to implement Bayesian optimization from scratch by building a Gaussian process surrogate model and defining an acquisition function. Understand the iterative process of selecting sample points, updating the surrogate model, and optimizing expensive functions that arise in machine learning. This lesson equips learners with the practical skills to write Bayesian optimization code manually, enabling fine control over model complexity and parameter tuning.
Bayesian optimization is a model-based method for finding the minimum of a function that is expensive to evaluate. It involves constructing a probabilistic model for the function and then exploiting this model to determine where to sample next.
General steps of Bayesian optimization
The general steps to implement Bayesian optimization are:
Specifying a surrogate model (usually a GP).
Defining an acquisition function based on this model.
Iterating the following steps for a number of rounds:
Using the acquisition function to decide where to sample.
Updating the surrogate model to incorporate the new sample.
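The steps above can be sketched as a minimal from-scratch loop. Everything here is an illustrative assumption rather than part of the lesson so far: the toy objective, the RBF kernel with a fixed length scale, expected improvement as the acquisition function, and maximizing it over a dense grid instead of a proper optimizer.

```python
import numpy as np
from scipy.stats import norm

def objective(x):
    # Hypothetical expensive function to minimize (stands in for f(x)).
    return np.sin(3 * x) + 0.5 * x ** 2

def rbf_kernel(a, b, length=0.5):
    # Squared-exponential covariance between two sets of 1-D points.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    # GP posterior mean and standard deviation at test points Xs,
    # conditioned on observations (X, y).
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    Kss = rbf_kernel(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # Expected improvement over the current best value (minimization).
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=3)       # a few initial samples
y = objective(X)
grid = np.linspace(-2, 2, 200)       # candidate points for the acquisition

for _ in range(10):                  # iterate for a number of rounds
    mu, sigma = gp_posterior(X, y, grid)
    ei = expected_improvement(mu, sigma, y.min())
    x_next = grid[np.argmax(ei)]     # acquisition decides where to sample
    X = np.append(X, x_next)         # update the surrogate with the sample
    y = np.append(y, objective(x_next))

print(X[np.argmin(y)])               # best point found so far
```

A grid works only in low dimensions; in practice the acquisition function is maximized with a continuous optimizer (e.g. multi-start L-BFGS), and the kernel hyperparameters are fitted rather than fixed.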
A machine learning example
In practical machine learning, the expensive function f(x) is often a model's performance as a function of settings that control its complexity during training. Imagine we're training a machine learning model, for example a neural network, to make predictions, and we want ...
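To make this concrete, f(x) could be a model's validation error as a function of a single complexity hyperparameter. The ridge-regression setup below is a hypothetical illustration (synthetic data, made-up dimensions), not part of the lesson: each call to f is one "expensive" evaluation that Bayesian optimization would try to spend sparingly.

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y_train = X_train @ w_true + rng.normal(scale=0.5, size=50)
X_val = rng.normal(size=(30, 5))
y_val = X_val @ w_true + rng.normal(scale=0.5, size=30)

def f(log_alpha):
    # Validation MSE of ridge regression as a function of the (log)
    # regularization strength -- the black-box objective f(x) to minimize.
    alpha = 10.0 ** log_alpha
    w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(5),
                        X_train.T @ y_train)
    return np.mean((X_val @ w - y_val) ** 2)

print(f(-2.0) < f(4.0))  # over-regularizing hurts on this data
```

Here x = log_alpha controls model complexity: too large an alpha underfits, too small an alpha can overfit, and Bayesian optimization searches for the value minimizing validation error.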