What is Bayesian optimization?

Bayesian optimization is a global optimization technique designed for complex, noisy, and expensive objective functions. It is particularly useful when the true objective function is difficult or time-consuming to evaluate, such as in hyperparameter tuning for machine learning models or parameter calibration in simulation-based studies.

Key idea behind Bayesian optimization

The key idea behind Bayesian optimization is to model the unknown objective function as a probability distribution and to update this model iteratively as new observations arrive. The method employs a probabilistic surrogate model, typically a Gaussian process, to represent the uncertainty about the true objective function. The Gaussian process provides a prediction of the objective function along with an estimate of the uncertainty in that prediction.
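
For instance, here is a minimal sketch of fitting such a surrogate with scikit-learn’s GaussianProcessRegressor; the one-dimensional toy objective and the Matérn kernel are illustrative assumptions, not part of any particular application:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# A few (expensive) evaluations of a toy black-box objective
X_obs = np.array([[0.1], [0.4], [0.9]])
y_obs = np.sin(3 * X_obs).ravel()  # stand-in for the true objective

# Fit the Gaussian process surrogate to the observed data
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), random_state=42)
gp.fit(X_obs, y_obs)

# The surrogate returns both a prediction and an uncertainty estimate
X_new = np.linspace(0, 1, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
print(mean, std)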

Optimization process

The optimization process involves a balance between exploration and exploitation. In the exploration phase, the algorithm samples points where the surrogate model is most uncertain about the objective function, while in the exploitation phase, it focuses on points with high predicted values. The balance is achieved using an acquisition function, which quantifies the utility of sampling a point based on the surrogate model’s predictions and uncertainties.
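
For example, the widely used expected improvement acquisition function captures this trade-off directly: it is large where the predicted mean is promising or where the uncertainty is high. Below is a minimal sketch for a minimization problem, assuming a fitted surrogate such as the gp object from the previous snippet:

import numpy as np
from scipy.stats import norm

def expected_improvement(X_cand, gp, y_best, xi=0.01):
    # Surrogate prediction and uncertainty at the candidate points
    mean, std = gp.predict(X_cand, return_std=True)
    # Improvement over the best observation so far (minimization)
    imp = y_best - mean - xi
    z = imp / np.maximum(std, 1e-9)  # guard against zero uncertainty
    # The first term rewards promising means (exploitation),
    # the second rewards high uncertainty (exploration)
    return imp * norm.cdf(z) + std * norm.pdf(z)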

Summary of general steps

The general steps of Bayesian optimization can be summarized as follows:

  • Modeling: Build a probabilistic surrogate model, usually a Gaussian process, to represent the objective function and its uncertainty.

  • Selection: Use an acquisition function to determine the next evaluation point based on the current surrogate model.

  • Evaluation: Evaluate the true objective function at the selected point.

  • Update: Update the surrogate model with the new observation and repeat the process.

  • Termination: Repeat the selection, evaluation, and update steps until a stopping criterion is met, such as a convergence threshold or a maximum number of iterations. A minimal loop tying these steps together is sketched after this list.
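
The sketch below ties these steps together on a toy one-dimensional objective, reusing the GaussianProcessRegressor surrogate and the expected_improvement helper from the earlier snippets:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    return np.sin(3 * x[0]) + 0.1 * x[0]  # stand-in for an expensive function

X_obs = [[0.1], [0.5], [0.9]]  # initial design points
y_obs = [objective(x) for x in X_obs]

for _ in range(10):  # Termination: fixed evaluation budget
    # Modeling: refit the surrogate on all observations so far
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), random_state=42)
    gp.fit(X_obs, y_obs)
    # Selection: maximize the acquisition function over a candidate grid
    X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, min(y_obs))
    x_next = X_cand[np.argmax(ei)]
    # Evaluation: query the true objective at the selected point
    y_next = objective(x_next)
    # Update: add the new observation and repeat
    X_obs.append(list(x_next))
    y_obs.append(y_next)

print("Best point found:", X_obs[int(np.argmin(y_obs))])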

Bayesian optimization effectively optimizes black-box functions with a limited number of evaluations. It is widely used in applications such as hyperparameter tuning, experimental design, and robotics.

[Figure: Working of Bayesian optimization]

Implementation of Bayesian optimization

Bayesian optimization is a probabilistic model-based approach for optimizing black-box functions, often used in scenarios where evaluating the objective function is expensive. Here’s an example using the scikit-optimize library in Python, which provides a convenient interface for Bayesian optimization, applied here to tune a random forest classifier:

# scikit-optimize is already installed in this environment; to run
# the example locally, install it first with the command below:
# pip install scikit-optimize

from skopt import BayesSearchCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Create a synthetic dataset for classification
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, n_clusters_per_class=2, random_state=42)

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define the hyperparameter search space
param_space = {
    'n_estimators': (10, 100),
    'max_depth': (1, 20),
    'min_samples_split': (2, 10),
    'min_samples_leaf': (1, 10),
}

# Define a standalone objective function (illustrative; not used by BayesSearchCV below)
def objective_function(params):
    model = RandomForestClassifier(**params, random_state=42)
    model.fit(X_train, y_train)
    return -model.score(X_test, y_test)  # Minimize negative accuracy

# Perform Bayesian optimization
opt = BayesSearchCV(
    RandomForestClassifier(),
    param_space,
    n_iter=10,  # Number of optimization steps
    random_state=42,
    n_jobs=-1,  # Use all available cores
)

opt.fit(X, y)

# Print the best parameters found
print("Best parameters:", opt.best_params_)

Code explanation

  • Lines 5–8: We import the required libraries.

  • Line 11: Now, we generate a synthetic dataset for classification using the make_classification function with the required parameters.

  • Line 14: We split the data into training and testing sets.

  • Lines 17–22: The param_space dictionary defines the hyperparameter search space for the random forest classifier. It specifies ranges for hyperparameters such as the number of estimators, maximum depth, minimum samples split, and minimum samples leaf; integer tuples like (10, 100) are interpreted by scikit-optimize as inclusive integer ranges. An equivalent declaration using explicit dimension classes is sketched after this list.

  • Lines 25–28: The objective_function shows what a standalone objective looks like: it takes a set of hyperparameters as input (params), creates a random forest classifier with those hyperparameters, fits the model on the training data, and returns the negative accuracy on the test data, so minimizing the negative accuracy is equivalent to maximizing the accuracy. Note that BayesSearchCV never calls this function; it scores candidate configurations internally via cross-validation. A sketch of optimizing such a function directly with gp_minimize follows this list.

  • Lines 31–37: Now, we create a BayesSearchCV object, which performs Bayesian optimization using cross-validation. It takes the random forest classifier as the estimator to tune, the hyperparameter search space, and other configuration parameters such as the number of optimization steps (n_iter), the random seed (random_state), and the number of parallel jobs (n_jobs).

  • Line 39: We run the optimization by calling the fit method on the full synthetic dataset.

  • Line 42: Finally, we print the best parameters found by the Bayesian search; a short note on inspecting the other search results follows this list.
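
As mentioned in the note on lines 17–22, the tuple-based search space can equivalently be declared with skopt’s explicit dimension classes, which also support real-valued and categorical hyperparameters; the commented-out dimensions are hypothetical examples:

from skopt.space import Categorical, Integer, Real

# Equivalent search space using explicit skopt dimension types
param_space = {
    'n_estimators': Integer(10, 100),
    'max_depth': Integer(1, 20),
    'min_samples_split': Integer(2, 10),
    'min_samples_leaf': Integer(1, 10),
    # Real and Categorical dimensions are also supported, e.g.:
    # 'max_features': Real(0.1, 1.0),
    # 'criterion': Categorical(['gini', 'entropy']),
}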
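
As noted for lines 25–28, BayesSearchCV never calls objective_function itself. To optimize such a standalone objective directly, scikit-optimize provides gp_minimize; here is a rough sketch, reusing the classifier and the train/test split from the code above:

from skopt import gp_minimize
from skopt.space import Integer
from skopt.utils import use_named_args

# The search space as a list of named dimensions
dimensions = [
    Integer(10, 100, name='n_estimators'),
    Integer(1, 20, name='max_depth'),
    Integer(2, 10, name='min_samples_split'),
    Integer(1, 10, name='min_samples_leaf'),
]

# use_named_args converts the list of values passed by gp_minimize
# into keyword arguments for the objective
@use_named_args(dimensions)
def objective_function(**params):
    model = RandomForestClassifier(**params, random_state=42)
    model.fit(X_train, y_train)
    return -model.score(X_test, y_test)  # Minimize negative accuracy

# n_calls includes the initial random evaluations that seed the surrogate
result = gp_minimize(objective_function, dimensions, n_calls=20, random_state=42)
print("Best parameters:", result.x)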
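
Finally, beyond best_params_, the fitted BayesSearchCV object exposes the standard scikit-learn search attributes, so the tuned model can be inspected and reused directly:

# Best cross-validated accuracy found during the search
print("Best CV score:", opt.best_score_)

# The estimator refitted with the best parameters, ready for predictions
best_model = opt.best_estimator_
print("Sample predictions:", best_model.predict(X_test[:5]))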

Bayesian optimization is a probabilistic model-based optimization technique used to efficiently find the optimal configuration of parameters for complex, expensive-to-evaluate functions. By iteratively updating a surrogate model based on observed results, it intelligently explores the parameter space, making it particularly effective for optimizing machine learning hyperparameters or other real-world optimization problems.
