
How Microsoft’s WHAMM Uses AI to Render Gameplay in Real Time

Microsoft’s WHAMM model renders real-time game frames using AI instead of a graphics engine. Trained on just one week of Quake II footage, it predicts each frame from player input, opening the door to a future of interactive, AI-generated worlds.
10 min read
May 12, 2025

Traditional game graphics are built through simulation: every shadow is calculated, every polygon is mapped, and every collision is modeled using hardcoded rules.

But what if we skipped the simulation entirely?

What if, instead of calculating every detail through physics and rendering engines, an AI model could predict the game’s next frame based purely on patterns learned from gameplay?

That’s the promise of a new direction in Generative AI: systems that don’t just create static content like art or text, but generate dynamic, interactive environments. At the cutting edge of that idea is WHAMM, a project from Microsoft Research.
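To make the core idea concrete, here is a minimal sketch of what a generative game loop looks like when prediction replaces rendering. Everything here is illustrative: `WorldModel`, its `predict` method, and the context-window size are hypothetical stand-ins, not Microsoft’s actual WHAMM API.

```python
import numpy as np

class WorldModel:
    """Hypothetical stand-in for a trained frame-prediction model (not the real WHAMM API)."""

    def predict(self, frames: list[np.ndarray], action: dict) -> np.ndarray:
        """Return the predicted next frame (H x W x 3) given recent frames and player input."""
        raise NotImplementedError  # a trained generative model would produce the frame here


def play(model: WorldModel, initial_frames: list[np.ndarray], get_input, steps: int = 600):
    """Run the generative game loop: no physics engine, no rasterizer, only prediction."""
    context = list(initial_frames)
    for _ in range(steps):
        action = get_input()                # e.g. {"move": "forward", "fire": False}
        frame = model.predict(context, action)
        context = (context + [frame])[-8:]  # sliding window of recent frames (size illustrative)
        yield frame                         # display the predicted frame directly
```

The key design point is the feedback loop: each predicted frame is appended to the model’s context, so the world the player sees is generated frame by frame from learned patterns rather than simulated state.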

Today, I'll share:

  • What WHAMM is (and why it matters)

  • How WHAMM works under the hood

  • WHAMM vs. WHAM: A performance leap

Let's get started.