The link:
https://haitaodu.wordpress.com/2012/10/05/maximum-a-posteriori-map-vs-bayesian/
Here we use an example to illustrate the difference between MAP estimation and fully Bayesian inference.
Problem
Suppose we have a jar of coins. There are two types of coins in the jar, type A and type B. For a type A coin, each flip has a 60% chance of coming up heads (H); for a type B coin it is 40%. In addition, we know the jar contains equal numbers of type A and type B coins.
Someone selects one coin (it could be type A or type B) and flips it twice; this gives us our data $D$. Given the data, we want to make a prediction about the next flip, i.e., the probability of getting heads on the next toss.
Analysis
We can understand this problem as a parameter estimation problem. Each flip is a Bernoulli trial with $P(H) = \theta$, where $\theta$ can only be $0.6$ or $0.4$, and the prior is uniform: $P(\theta = 0.6) = P(\theta = 0.4) = 0.5$.
MAP will select the "right parameter" for the distribution, i.e., decide whether it is a type A or a type B coin, and make the prediction with that single value. For example, if the two flips both came up heads, the calculation below gives $P(\theta = 0.6 \mid D) \approx 0.69$ and $P(\theta = 0.4 \mid D) \approx 0.31$; since the coin is more likely to be type A, we predict the next flip has a $0.6$ chance of heads. Note that $\hat{\theta}_{\mathrm{MAP}} = \arg\max_{\theta} P(\theta \mid D)$.
Bayesian inference will NOT try to get the single "right parameter", but will use the whole posterior distribution of $\theta$ to make the prediction. With the same posterior, $P(\theta = 0.6 \mid D) \approx 0.69$ and $P(\theta = 0.4 \mid D) \approx 0.31$, the prediction would be $P(H \mid D) = 0.6 \times 0.69 + 0.4 \times 0.31 \approx 0.538$, instead of $0.6$.
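To make the two predictions concrete, here is a minimal Python sketch. One caveat: the post never states what the two observed flips were, so the code assumes the data is two heads, $D = (H, H)$, and the variable names are purely illustrative.

```python
# MAP vs. Bayesian prediction for the two-coin jar example.
# Assumed data: two heads in two flips (the post does not state the data).

thetas = [0.6, 0.4]      # P(H) for a type A coin and a type B coin
prior  = [0.5, 0.5]      # uniform prior: equal numbers of each type in the jar
n_heads, n_flips = 2, 2  # assumed observations

# Unnormalized posterior: P(D | theta) * P(theta) for each candidate theta
unnorm = [t**n_heads * (1 - t)**(n_flips - n_heads) * p
          for t, p in zip(thetas, prior)]
evidence = sum(unnorm)                      # P(D), the normalizing constant
posterior = [u / evidence for u in unnorm]  # [~0.692, ~0.308]

# MAP: commit to the single most probable theta and predict with it
map_theta = thetas[posterior.index(max(posterior))]
print("MAP prediction:     ", map_theta)             # 0.6

# Bayesian: average the per-theta predictions, weighted by the posterior
bayes_pred = sum(t * p for t, p in zip(thetas, posterior))
print("Bayesian prediction:", round(bayes_pred, 3))  # ~0.538
```

The gap between the two outputs is the point of the example: MAP commits entirely to type A, while the Bayesian prediction is pulled toward $0.5$ by the roughly 31% posterior probability that the coin is actually type B.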
Calculation details
According to Bayes' Theorem,
$$P(\theta \mid D) = \frac{P(D \mid \theta)\,P(\theta)}{P(D)}.$$
Note that for the given data, $P(D)$ is a constant. Therefore we only need to calculate the numerator $P(D \mid \theta)\,P(\theta)$ for each candidate $\theta$ and normalize.
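Written out in full, under the same illustrative assumption as above that both observed flips were heads:
$$
\begin{aligned}
P(D \mid \theta = 0.6)\,P(\theta = 0.6) &= 0.6^2 \times 0.5 = 0.18,\\
P(D \mid \theta = 0.4)\,P(\theta = 0.4) &= 0.4^2 \times 0.5 = 0.08,\\
P(D) &= 0.18 + 0.08 = 0.26,\\
P(\theta = 0.6 \mid D) &= 0.18 / 0.26 \approx 0.69,\qquad
P(\theta = 0.4 \mid D) = 0.08 / 0.26 \approx 0.31.
\end{aligned}
$$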