Maximizing Efficiency with Complete Statistics
Understand the concept of complete statistics and how they relate to minimal sufficient statistics and maximum likelihood estimators. Explore properties like unbiasedness and minimum variance that make certain statistics ideal for pooling in convolutional neural networks. Learn about ancillary statistics and their complementary role in improving model efficiency and stability.
Complete statistics
Having many minimal sufficient statistics to choose from can make selection confusing. This section introduces complete statistics, which narrow the choice of pooling statistic down to the maximum likelihood estimator (MLE) of the feature map distribution.
A complete statistic is a bridge between minimal sufficient statistics and the MLE. Next, we lay out the attributes and the path that lead to the relationship between complete minimal sufficient statistics and the MLE.
Completeness
Let $f(t \mid \theta)$ be a family of pdfs or pmfs for a statistic $T(\mathbf{X})$. The family of probability distributions is called complete if, for every measurable, real-valued function $g$, $E_\theta[g(T)] = 0$ for all $\theta$ implies $g(T) = 0$ almost surely with respect to $\theta$, that is, $P_\theta(g(T) = 0) = 1$ for all $\theta$. The statistic $T(\mathbf{X})$ is boundedly complete if $g$ is bounded.
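To make the definition concrete, here is a small numerical sketch. The family and the function $g$ are illustrative assumptions, not from the text: for $X \sim N(0, \theta)$ with variance $\theta$, the function $g(x) = x$ satisfies $E_\theta[g(X)] = 0$ for every $\theta$ even though $g$ is not the zero function, so this particular family is not complete.

```python
import numpy as np

# Illustrative counterexample (an assumption for this sketch, not from the
# text): the family X ~ N(0, theta), with theta > 0 the variance, is NOT
# complete, because g(x) = x is a non-zero function whose expectation is
# zero under every parameter value.

xs, dx = np.linspace(-12.0, 12.0, 200001, retstep=True)

for theta in (0.5, 1.0, 4.0):
    # density f(x | theta) of N(0, theta)
    pdf = np.exp(-xs**2 / (2.0 * theta)) / np.sqrt(2.0 * np.pi * theta)
    expectation = np.sum(xs * pdf) * dx      # numerical E_theta[g(X)]
    assert abs(expectation) < 1e-9           # vanishes for every theta
    print(f"theta={theta}: E[g(X)] is approximately {expectation:.2e}")
```

Because a non-zero $g$ slips through the "zero expectation for all $\theta$" test, completeness fails for this family; a complete family admits no such $g$.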
In simple words, a family of distributions is complete if the only function of the statistic whose expectation is zero under every parameter value is the function that is itself zero (almost surely).
It becomes clearer by considering a discrete case. In this case, completeness means $E_\theta[g(T)] = \sum_t g(t)\,P_\theta(T = t) = 0$ for all $\theta$ implies $g(t) = 0$ for every $t$ in the support, because by definition $P_\theta(T = t)$ is non-zero there.
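The discrete case can also be checked numerically. In this hedged sketch (the binomial family, the parameter grid, and all names are illustrative assumptions, not from the text), the expectations $E_p[g(T)]$ for several values of $p$ form a linear system in the unknown values $g(0), \dots, g(n)$; full column rank means the only solution with $E_p[g(T)] = 0$ for all $p$ is $g \equiv 0$, which is exactly completeness.

```python
import numpy as np
from math import comb

# Hedged sketch (binomial family chosen for illustration, not from the text):
# T ~ Binomial(n, p) is complete. E_p[g(T)] = sum_t g(t) * P_p(T = t) is
# linear in the vector (g(0), ..., g(n)); stacking it over several values
# of p gives a matrix A such that A @ g is the vector of expectations.

n = 3
p_grid = np.linspace(0.1, 0.9, 9)

# A[j, t] = P_{p_j}(T = t) = C(n, t) * p_j^t * (1 - p_j)^(n - t)
A = np.array([[comb(n, t) * p**t * (1.0 - p)**(n - t) for t in range(n + 1)]
              for p in p_grid])

# Full column rank: A @ g = 0 only for g = 0, so the only function of T
# that is an unbiased estimator of zero for every p is the zero function.
print(np.linalg.matrix_rank(A))  # prints 4, i.e. n + 1
```

The rank check mirrors the algebraic argument: $E_p[g(T)]$ is a polynomial in $p$, and a polynomial vanishing at every $p$ must have all coefficients, hence all $g(t)$, equal to zero.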
For example, suppose $X$ is observed from a normal distribution $N(\theta, \sigma^2)$, and there is a statistic $T(X)$. Then, the density $f(t \mid \theta)$ is not equal to $0$ for all $t$. Therefore, ...