Getting to Know Dragonfly Better

Dragonfly is a Python library designed for scalable Bayesian optimization and experimental design. It offers a wide range of features that support high-dimensional problems, multifidelity evaluations, multitask settings, parallel evaluations, and derivative evaluations.

The library is designed to be both modular and flexible, allowing users to plug and play different components such as optimizers, acquisition functions, and surrogate models. It also provides an interface for customizing various parameters and settings to suit specific use cases.

Modes in Dragonfly

  • Bayesian optimization mode: This mode focuses on optimizing expensive black-box functions. Dragonfly builds GP surrogate models of the objective and supports several acquisition functions, such as expected improvement (EI), upper confidence bound (UCB), and Thompson sampling (TS). A minimal call for this mode is sketched after this list.

  • Multifidelity optimization mode: This mode is useful when we have access to multiple versions of our objective function, each with a different cost and accuracy. Dragonfly uses the cheaper, less accurate fidelities to guide the search for the optimum at the most expensive, most accurate fidelity, which significantly reduces the overall optimization cost.

  • Multitask optimization mode: This mode is applicable when optimizing functions under multiple related tasks simultaneously. Dragonfly exploits similarities across tasks to speed up the optimization process.

  • Parallel optimization mode: If we have access to a system that can evaluate multiple points simultaneously, this mode is for us. Dragonfly can suggest batches of points to be evaluated in parallel, speeding up the optimization process.

  • Derivative-based optimization mode: This mode leverages gradient information, if available, to guide the optimization process, achieving faster convergence rates.
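To make the basic Bayesian optimization mode concrete, here is a minimal sketch that calls Dragonfly’s high-level minimise_function API. The quadratic objective, domain, and budget below are illustrative choices for this example, not something prescribed by the library.

from dragonfly import minimise_function

# Illustrative objective: a simple quadratic bowl over a 2D box.
# Dragonfly passes the candidate point as an array-like of coordinates.
def objective(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2

# Search domain: the box [-1, 1] x [-1, 1]; the third argument is the
# evaluation budget (capital).
domain = [[-1, 1], [-1, 1]]
min_val, min_pt, history = minimise_function(objective, domain, 30)
print('Minimum value found:', min_val)
print('Minimizer found:', min_pt)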

Methods of operation in Dragonfly

As a Python library for Bayesian optimization, Dragonfly gives the end user two ways to develop large, scalable optimization code. The first way requires minimal code: we call a single API function, and the options are handled automatically by Dragonfly’s optimization functions. In this case, we have little autonomy over the execution of the code, and the complete optimization loop is handled by Dragonfly’s default structures and methods. The second way is the ask-tell mode, which gives us the autonomy to handle and tailor the optimization code ourselves.

Scalable Bayesian optimization through the API

The scalability of Bayesian optimization, as offered in Dragonfly, primarily stems from its design for high-dimensional optimization problems. Traditional Bayesian optimization can be computationally expensive for high-dimensional problems, but Dragonfly implements scalable variants of Bayesian optimization algorithms to handle such cases effectively. This includes methods like additive GPs, Bayesian optimization with random embeddings, and partitioning-based Bayesian optimization.

Furthermore, Dragonfly’s support for parallel and multifidelity evaluations also contributes to its scalability. These features allow it to optimize functions more efficiently by evaluating multiple points simultaneously or by using cheaper, less accurate evaluations to guide the optimization process.
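As a sketch of how the multifidelity interface can be used, the following example defines a toy two-fidelity objective, where the fidelity z might represent something like the number of training iterations. The objective, fidelity space, cost function, and budget are all illustrative assumptions, and the exact argument names and order may differ slightly between Dragonfly versions.

from dragonfly import maximise_multifidelity_function

# Illustrative two-fidelity objective: z is the fidelity, x is the point in the
# search domain. Lower fidelities are cheaper but give a biased estimate.
def mf_objective(z, x):
    return -((x[0] - 0.5) ** 2 + (x[1] - 0.2) ** 2) - 1.0 / z[0]

fidel_space = [[1, 100]]              # fidelity range (e.g., 1 to 100 iterations)
fidel_to_opt = [100]                  # the fidelity we ultimately care about
fidel_cost = lambda z: z[0] / 100.0   # evaluation cost grows with fidelity
domain = [[0, 1], [0, 1]]             # search domain for x

max_val, max_pt, history = maximise_multifidelity_function(
    mf_objective, fidel_space, domain, fidel_to_opt, fidel_cost, 60)
print('Maximum value found:', max_val)
print('Maximizer found:', max_pt)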

The ask-tell mode

The ask-tell mode is a flexible interface provided in Dragonfly and other similar Bayesian optimization libraries. It is designed to give users greater control over the optimization loop, allowing them to intervene in the process at any point. This is particularly useful when users need to manage the evaluations of their objective function themselves, such as in cases where the evaluations might involve complex procedures or human decisions.

How is the ask-tell mode different?

In traditional Bayesian optimization, the optimization loop involves suggesting a candidate for evaluation, evaluating the objective function at that point, and updating the model with the results. This loop is typically managed by the library and is opaque to the user.

In contrast, the ask-tell mode breaks this loop into distinct ask and tell steps that the user can control:

  1. Ask: The library suggests a candidate point for evaluation. This is the ask step. The user can then take this suggested point and evaluate it whenever they are ready.

  2. Tell: After the user has evaluated the suggested point, they return the result to the library. This is the tell step. The library then updates its model with the result. A minimal loop illustrating both steps is sketched after this list.
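To make the two steps concrete, here is a minimal sketch of an ask-tell loop in Dragonfly. The one-dimensional objective and domain are made up for illustration, and the module paths, class names, and the exact format of the point returned by ask may differ between Dragonfly versions. Note that Dragonfly’s optimizers maximize by default, so the objective is written as something to be maximized.

from dragonfly import load_config
from dragonfly.exd.experiment_caller import CPFunctionCaller
from dragonfly.opt.gp_bandit import CPGPBandit

# Illustrative objective to maximize over a single float variable.
def objective(x):
    return -(x[0] ** 4) + x[0] ** 2 - 0.1 * x[0]

# Describe the domain and build an optimizer in ask-tell mode.
domain_vars = [{'name': 'x', 'type': 'float', 'min': -10, 'max': 10}]
config = load_config({'domain': domain_vars})
func_caller = CPFunctionCaller(None, config.domain,
                               domain_orderings=config.domain_orderings)
opt = CPGPBandit(func_caller, ask_tell_mode=True)
opt.initialise()

for _ in range(20):
    x = opt.ask()         # ask: Dragonfly suggests the next point to evaluate
    y = objective(x)      # we evaluate it ourselves, whenever we are ready
    opt.tell([(x, y)])    # tell: report the observation back so the model updates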

The ask-tell mode gives users the flexibility to do the following:

  • Controlling the evaluation of the objective function: This is useful in scenarios where evaluations might require specific conditions to be met or might be affected by certain variables that the library is not aware of.

  • Introducing manual evaluations: There might be cases where a user wants to evaluate certain points based on their own intuition or knowledge. The ask-tell mode allows them to incorporate these manual evaluations into the optimization process.

  • Pausing and resuming the optimization: The optimization process can be paused after the ask step and resumed at a later time with the tell step. This allows users to manage the optimization process according to their own schedules.

  • Handling failed evaluations: If the evaluation of a suggested point fails for some reason, the user can simply skip the tell step for that point and move on to the next ask step.

In Dragonfly, the ask-tell mode can be accessed through the dragonfly.exd.cp_domain_utils and dragonfly.exd.experiment_caller modules. Users can create an ExperimentDesigner object and use the ask and tell methods to control the optimization process.

Implementation in Python

Let’s consider a simple example of basic optimization using Dragonfly. Suppose we’re traveling on a circular path and want to find the point on the path that is closest to a fixed point A = (0.5, 0). We can do this using the ask-tell mode in Dragonfly.
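One way to set this up is sketched below. The sketch assumes the path is the unit circle centered at the origin and parameterizes positions on the path by the angle theta, so the quantity to minimize is the distance from the point (cos theta, sin theta) to A = (0.5, 0); these modeling choices, the budget, and the variable names are assumptions made for illustration, and the class names follow the ask-tell pattern shown earlier, which may vary slightly across Dragonfly versions. The loop also shows how a failed evaluation can simply skip the tell step.

import numpy as np

from dragonfly import load_config
from dragonfly.exd.experiment_caller import CPFunctionCaller
from dragonfly.opt.gp_bandit import CPGPBandit

# Assumption for this sketch: the path is the unit circle centered at the origin,
# parameterized by the angle theta, and A = (0.5, 0) is the fixed reference point.
A = np.array([0.5, 0.0])

def distance_to_A(theta):
    point_on_path = np.array([np.cos(theta), np.sin(theta)])
    return np.linalg.norm(point_on_path - A)

# One float variable: the angle theta in [0, 2*pi].
domain_vars = [{'name': 'theta', 'type': 'float', 'min': 0.0, 'max': 2 * np.pi}]
config = load_config({'domain': domain_vars})
func_caller = CPFunctionCaller(None, config.domain,
                               domain_orderings=config.domain_orderings)
opt = CPGPBandit(func_caller, ask_tell_mode=True)
opt.initialise()

best_theta, best_dist = None, float('inf')
for _ in range(30):
    x = opt.ask()                     # ask: get the next suggested angle
    try:
        dist = distance_to_A(x[0])    # evaluate the objective ourselves
    except Exception:
        continue                      # a failed evaluation: skip the tell step
    if dist < best_dist:
        best_theta, best_dist = x[0], dist
    opt.tell([(x, -dist)])            # tell: Dragonfly maximizes, so report -distance

print('Best angle found:', best_theta)
print('Minimum distance found:', best_dist)  # analytic answer: 0.5 at theta = 0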
