
Evaluating MuLan: Performance and Design Insights

Explore how MuLan's multi-step architecture improves text-to-image generation by accurately handling complex prompts with multiple objects, attributes, and spatial relationships. Understand the evaluation framework, performance benefits of feedback control, and practical design principles for building reliable multimodal AI agents.

We’ve explored MuLan’s innovative, multi-step architecture. But how do we show that this agentic system design is actually more effective than a standard, one-shot approach? To answer this, the researchers needed a rigorous way to evaluate its performance on complex, multi-object prompts.

A benchmark for compositional prompts

To create a fair and challenging test for MuLan, the researchers curated a new benchmark of 200 hard prompts. It wasn’t drawn from a single source; it was deliberately constructed to probe the specific failure points of modern text-to-image models. The creation process involved the steps outlined below.

  • Foundation: They began by collecting complex spatial prompts from an existing benchmark, T2I-CompBench.

  • Expansion: To broaden the scope, they used ChatGPT to generate hundreds of new prompts with diverse objects, relationships, and attributes.

  • Curation: Finally, they manually selected the most difficult prompts that state-of-the-art models like SDXL ...