# Optimizing Feature Map Pooling

Uncover the effectiveness of tailored maximum likelihood estimators for pooling in convolutional networks.


A feature map follows a distribution, and that distribution varies from sample to sample. For example, an object with sharp edges at the center of an image will produce a different feature map distribution than an object with blurred edges or one located at a corner.

The distribution's maximum likelihood estimator (MLE) is the most statistically efficient pooling statistic for that distribution. Below are a few distributions that feature maps typically follow, along with their MLEs.
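To make the idea concrete, here is a minimal sketch (an illustration of the general principle, not code from the source): average pooling is the MLE of a Gaussian's location parameter, while max pooling is the MLE of a uniform distribution's upper bound. The patch sizes and distribution parameters below are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4x4 feature-map patch drawn from a Gaussian: the sample mean is the
# MLE of the Gaussian location parameter, which is exactly average pooling.
gaussian_patch = rng.normal(loc=2.0, scale=0.5, size=(4, 4))
avg_pool = gaussian_patch.mean()

# A 4x4 patch drawn from Uniform(0, beta): the sample maximum is the
# MLE of beta, which is exactly max pooling.
uniform_patch = rng.uniform(0.0, 3.0, size=(4, 4))
max_pool = uniform_patch.max()
```

In both cases the familiar pooling operation falls out as the MLE of the assumed feature-map distribution, which is the sense in which the MLE "makes the most efficient pooling statistic."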

## Uniform distribution

A **uniform distribution** belongs to the family of symmetric location probability distributions. It describes a process where the random variable takes any value in an interval $(\alpha, \beta)$ with equal probability. Its pdf is

$f(x)=\begin{cases} \frac{1}{\beta-\alpha}, & \text{if } \alpha < x < \beta \\ 0, & \text{otherwise} \end{cases}$

The following illustration shows uniform distributions of different shapes as examples. Feature maps can follow a uniform distribution under some circumstances, for example, when the object of interest is scattered across an image.
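For $\text{Uniform}(\alpha, \beta)$, the MLEs of the bounds are the sample extremes: $\hat{\alpha} = \min_i x_i$ and $\hat{\beta} = \max_i x_i$. A quick numerical check (a sketch with arbitrary bounds chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw samples from Uniform(1, 5); the true bounds are alpha=1, beta=5.
samples = rng.uniform(1.0, 5.0, size=1000)

# MLEs of the uniform bounds are the sample minimum and maximum.
alpha_hat = samples.min()
beta_hat = samples.max()
```

With enough samples, `alpha_hat` and `beta_hat` sit very close to the true bounds. This is why max pooling is the natural statistic when a feature-map patch is modeled as uniform: taking the maximum over the patch is precisely the MLE of the upper bound $\beta$.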
