
Maximum A Posteriori (MAP)

Explore the concept of Maximum A Posteriori (MAP) estimation within convex optimization. Understand how MAP incorporates prior knowledge to improve parameter estimates, avoid overfitting common with Maximum Likelihood Estimation, and apply these techniques to real-world problems such as the Bernoulli distribution. Gain practical skills computing MAP estimates using gradient methods and Bayesian inference.

Limitation of MLE

Imagine $N$ coin tosses where $x_i \in \{H, T\}\ \forall\ i = 1, 2, \ldots, N$ denotes the outcome of the $i^{th}$ toss. Let $p$ be the probability of the outcome being heads ($H$) and $1-p$ be the probability of the outcome being tails ($T$). The negative log-likelihood of the observed tosses $D = \{x_1, x_2, \ldots, x_N\}$ ...
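To make the limitation concrete, here is a minimal sketch (the function name `mle_bernoulli` is an illustrative choice, not from the original) showing that the MLE for a Bernoulli parameter is just the sample fraction of heads, which can badly overfit a small sample:

```python
def mle_bernoulli(tosses):
    """MLE for the heads probability p: the fraction of observed heads.

    This maximizes the likelihood p^{N_H} (1-p)^{N_T}, equivalently
    minimizes the negative log-likelihood, over p in [0, 1].
    """
    return sum(1 for x in tosses if x == "H") / len(tosses)

# A balanced sample gives a reasonable estimate:
print(mle_bernoulli(["H", "T", "H", "T"]))  # 0.5

# But three heads in a row yield p = 1.0: the MLE assigns zero
# probability to ever seeing tails -- the overfitting MAP addresses.
print(mle_bernoulli(["H", "H", "H"]))  # 1.0
```

With only a handful of tosses the estimate collapses to an extreme value, which is exactly the failure mode that motivates adding a prior via MAP.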