Neural Radiance Fields (NeRF)

Learn about the Neural Radiance Fields technique and how it can be used to generate novel views of a given scene.


Neural Radiance Fields are a widely used technique in the field of computer vision. A Neural Radiance Field (NeRF) is a differentiable neural network that models light transport through a continuous 3D scene. NeRFs are among the state-of-the-art models for novel view synthesis. Since their introduction in 2020, hundreds of variants of the technique have added new methods and capabilities, such as faster training, relighting, animation, text-to-3D generation, and more.

Introduction to NeRFs

The NeRF architecture attempts to solve the novel view synthesis problem. Given a collection of 2D views of a scene and their 6DoF poses, we wish to render images from unobserved poses. Taking 2D images and poses as input, the NeRF learns an implicit representation of a scene that can be used to predict the appearance of 2D images from previously unseen positions.
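The setup above can be sketched in code. The snippet below is an illustrative example (the image size, focal length, and pixel coordinates are arbitrary choices, not values from any particular dataset): a training example pairs a 2D image with its 6DoF camera pose, here expressed as a 4x4 camera-to-world matrix, from which a viewing ray through any pixel can be derived.

```python
import numpy as np

# Hypothetical training example for novel view synthesis: a 2D image
# paired with its 6DoF camera pose (a 4x4 camera-to-world matrix).
image = np.zeros((100, 100, 3))   # H x W RGB view of the scene
pose = np.eye(4)                  # rotation is identity here
pose[:3, 3] = [0.0, 0.0, 2.0]     # camera placed 2 units along +z

# A ray through pixel (i, j) follows from the pose and the camera
# intrinsics; the focal length below is an illustrative value.
focal = 50.0
i, j = 50, 50  # center pixel of the 100 x 100 image
direction_cam = np.array([(j - 50) / focal, -(i - 50) / focal, -1.0])
direction_world = pose[:3, :3] @ direction_cam  # rotate into world frame
origin_world = pose[:3, 3]                      # camera center in world frame
```

Rendering a novel view then amounts to casting one such ray per pixel from an unobserved pose and querying the learned scene representation along each ray.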

How NeRFs work

The NeRF model is a representation of a radiance field, a function that describes the transport of light through a continuously defined 3D space. The 5D radiance field function is parameterized by position $\mathbf{x} = (x, y, z) \in \mathbb{R}^3$ and view direction $\mathbf{d} = (\phi, \theta)$, where $\phi$ is the elevation and $\theta$ is the azimuth of the light direction. The view direction $\mathbf{d}$ is often represented as a 3D unit vector for convenience. Thus, the radiance function maps from a position and view direction to radiance:

$$F : (\mathbf{x}, \mathbf{d}) \to \mathbf{c}$$

where $\mathbf{c} = (r, g, b)$ is the radiance (color) emitted at $\mathbf{x}$ in direction $\mathbf{d}$.
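This mapping from position and view direction to radiance can be sketched as a small MLP. The network below is a toy stand-in, not the architecture from the original NeRF paper: the layer sizes and random weights are purely illustrative. Following the common NeRF formulation, it outputs both an RGB color and a volume density $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP standing in for the radiance field F: (x, d) -> (c, sigma).
# Layer sizes are illustrative; real NeRFs use deeper networks with
# positional encodings of the inputs.
W1 = rng.normal(size=(6, 64)) * 0.1   # input: 3D position + 3D unit view direction
b1 = np.zeros(64)
W2 = rng.normal(size=(64, 4)) * 0.1   # output: RGB color (3) + volume density (1)
b2 = np.zeros(4)

def radiance_field(x, d):
    """Map a 3D position and a unit view direction to (color, density)."""
    h = np.tanh(np.concatenate([x, d]) @ W1 + b1)
    out = h @ W2 + b2
    color = 1.0 / (1.0 + np.exp(-out[:3]))  # sigmoid keeps RGB in [0, 1]
    sigma = np.log1p(np.exp(out[3]))        # softplus keeps density non-negative
    return color, sigma

# Query the field at one point, looking straight down the -z axis.
c, s = radiance_field(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, -1.0]))
```

The activation choices mirror a common convention: a sigmoid constrains color to valid RGB values, and a softplus keeps density non-negative, which matters when the field is later integrated along camera rays during volume rendering.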
