Monday, March 4, 2019

Bayes' Theorem

The joint probability can be factored either way, which gives the starting identity:

P(A,B) = P(A|B)P(B) = P(B,A) = P(B|A)P(A)
P(A|B)P(B) = P(B|A)P(A)

P(A|B) = P(B|A)P(A) / P(B)  # divide by P(B) on both sides
P(event|T) = P(T|event)P(event) / P(T)  # e.g., probability of an event given a positive test T

P(A|B) is called the posterior
P(B|A) is called the likelihood
P(A) is called the prior
P(B) is called the marginal likelihood
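The formula above can be sketched numerically. This is a minimal example with made-up numbers (a hypothetical disease test: 1% prevalence, 95% sensitivity, 10% false-positive rate), not from the original post:

```python
# Bayes' theorem for a hypothetical diagnostic test T and an event (having a disease).
# All numbers below are illustrative assumptions.

p_event = 0.01              # prior P(event): 1% prevalence
p_t_given_event = 0.95      # likelihood P(T|event): sensitivity
p_t_given_not_event = 0.10  # P(T|not event): false-positive rate

# marginal likelihood P(T) via the law of total probability
p_t = p_t_given_event * p_event + p_t_given_not_event * (1 - p_event)

# posterior P(event|T) = P(T|event) * P(event) / P(T)
p_event_given_t = p_t_given_event * p_event / p_t

print(round(p_event_given_t, 4))  # → 0.0876
```

Even with a fairly accurate test, the posterior is small because the prior is small, which is exactly the intuition Bayes' theorem captures.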

