In practice, we often face modeling problems that require knowledge of unknown parameters. The Expectation-Maximization (EM) algorithm was designed to break the gridlock where one parameter cannot be inferred without knowing the other, and vice versa.
Strictly speaking, the EM algorithm finds maximum-likelihood parameters of a statistical model in cases where the equations cannot be solved directly. In practice, the EM way of thinking is a powerful tool for resolving such “gridlocks” and shows up in many classical solutions. For example, the most popular clustering approach, K-means, is pure EM without the fancy math. Add some more sophisticated models to the mix and you are already in the territory of mixture models.
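As a minimal illustration of that claim, here is a sketch of K-means written explicitly as alternating E and M steps, using NumPy (the function name `kmeans_em` and the toy data are my own, chosen for illustration):

```python
import numpy as np

def kmeans_em(X, k, n_iters=50, seed=0):
    """K-means phrased as EM: the E-step assigns points, the M-step refits centroids."""
    rng = np.random.default_rng(seed)
    # Initialize centroids by picking k distinct data points at random.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # E-step: hard-assign each point to its nearest centroid
        # (the "expectation" collapses to an argmin over distances).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # M-step: each centroid becomes the mean of its assigned points,
        # which maximizes the likelihood given the fixed assignments.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Toy data: two well-separated Gaussian blobs around (0, 0) and (5, 5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centroids, labels = kmeans_em(X, k=2)
```

Neither step can be computed without the output of the other, which is exactly the gridlock EM resolves by alternating between them until convergence.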
In this talk we will cover the fundamentals of EM and work through a few examples, from toy problems to PhD-thesis-level ones. The final goal is to walk away with a powerful framework that lets you combine other algorithms in much more effective ways.