Maximum A Posteriori (MAP)

Explore the concept of Maximum A Posteriori estimation, how it differs from Maximum Likelihood Estimation, and its application in convex optimization. Understand Bayesian approaches using priors to improve parameter estimates, and apply MAP to practical examples such as the Bernoulli distribution with coin toss data.

Limitation of MLE

Imagine $N$ coin tosses where $x_i \in \{H, T\}\ \forall\ i = 1, 2, \ldots, N$ denotes the outcome of the $i^{th}$ toss. Let $p$ be the probability of the outcome being heads ($H$) and $1 - p$ be the probability of the outcome being tails ($T$). The negative log-likelihood of the observed tosses $D = \{x_1, x_2, \ldots, x_N\}$ ...
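
To make the setup concrete, here is a minimal sketch (in Python with NumPy, using a hypothetical set of observed tosses) of the Bernoulli negative log-likelihood and its closed-form maximum-likelihood estimate $\hat{p} = N_H / N$, where $N_H$ is the number of heads. With a small sample such as three heads in three tosses, the MLE returns $\hat{p} = 1$, the kind of overconfident estimate this section's heading points to.

```python
import numpy as np

# Hypothetical observed tosses: 1 = heads (H), 0 = tails (T).
tosses = np.array([1, 1, 1])  # three tosses, all heads

N = len(tosses)
n_heads = int(tosses.sum())
n_tails = N - n_heads

def negative_log_likelihood(p):
    """Negative log-likelihood of the observed tosses under a Bernoulli(p) model."""
    return -(n_heads * np.log(p) + n_tails * np.log(1.0 - p))

# The MLE has the closed form p_hat = N_H / N, the value minimizing the NLL.
p_mle = n_heads / N

print(f"NLL at p = 0.5: {negative_log_likelihood(0.5):.3f}")
print(f"NLL at p = 0.9: {negative_log_likelihood(0.9):.3f}")
print(f"MLE estimate:   p_hat = {p_mle}")  # 1.0 -- predicts heads with certainty
```

The closed-form estimate is exactly what minimizing the negative log-likelihood yields; the printed NLL values shrink as $p$ approaches the MLE, illustrating why the data alone drives the estimate to an extreme value when the sample is small.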