
Maximum Likelihood Estimation (MLE)

Explore maximum likelihood estimation (MLE) to find model parameters that maximize the likelihood of observed data. Understand the principles behind MLE, including log-likelihood and gradient solving, and apply it to Gaussian distributions with practical examples using NumPy.

What is MLE?

Maximum likelihood estimation (MLE) is a method for estimating the parameters of the process or model assumed to have generated the observed data. The parameters are chosen so that they maximize the likelihood (equivalently, the log-likelihood) of the observed data under the model.

For example, suppose a math teacher wants to grade students based on their performance on a math test. They can use MLE to fit a suitable model, such as a normal, binomial, or Poisson distribution, to the test scores and find the most likely values of its parameters (mean, standard deviation, proportion, rate, and so on) needed to grade the students.
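For a normal distribution, the MLE has a closed form: the estimated mean is the sample mean, and the estimated standard deviation uses the biased variance (dividing by N rather than N − 1). A minimal NumPy sketch, using hypothetical test scores invented for illustration:

```python
import numpy as np

# Hypothetical test scores for illustration (not from the article)
scores = np.array([72.0, 85.0, 90.0, 64.0, 78.0, 88.0, 95.0, 70.0, 81.0, 76.0])

# MLE for a normal distribution in closed form:
#   mu_hat    = sample mean
#   sigma_hat = sqrt of the *biased* sample variance (divide by N, not N - 1)
mu_hat = scores.mean()
sigma_hat = np.sqrt(np.mean((scores - mu_hat) ** 2))

print(mu_hat, sigma_hat)
```

The teacher could then grade students by where their score falls under the fitted normal curve (for instance, by z-score bands).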


To understand this better, let’s assume we have $N$ observed data points $D = \{x_1, x_2, \ldots, x_N\}$ generated by a known distribution $P(x, \beta)$ parameterized by an unknown parameter $\beta$. We wish to estimate the optimal parameter $\beta^*$ ...
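As a preview of this setup, the log-likelihood $\sum_i \log P(x_i, \beta)$ can be evaluated numerically for candidate values of $\beta$ and maximized. The sketch below assumes, for illustration only, that $P(x, \beta)$ is a Gaussian with unknown mean $\beta$ and known standard deviation $\sigma = 1$; the grid search stands in for the gradient-based solving discussed later:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: N draws from a Gaussian with unknown mean beta
# (sigma assumed known and equal to 1 for this illustration)
data = rng.normal(loc=2.0, scale=1.0, size=500)

def log_likelihood(beta, x, sigma=1.0):
    # Sum over i of log N(x_i | beta, sigma^2)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - beta) ** 2 / (2 * sigma**2))

# Evaluate the log-likelihood on a grid of candidate betas and take the argmax
betas = np.linspace(0.0, 4.0, 401)
lls = np.array([log_likelihood(b, data) for b in betas])
beta_star = betas[np.argmax(lls)]

# The grid maximizer lands next to the sample mean, which is the
# analytical MLE for the mean of a Gaussian
print(beta_star, data.mean())
```

With a grid step of 0.01, the recovered $\beta^*$ matches the sample mean to within half a step, illustrating that maximizing the log-likelihood recovers the closed-form estimate.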