Monday, February 11, 2013

Thoughts on EM - Clustering: Growing A Mixture

Since I am still working a lot on my clustering algorithms, especially Expectation Maximization (EM) for Gaussian Mixture Models, I will share another thought on how to improve EM's performance. The general problem is that EM converges to a local maximum: when we start with badly placed initial Gaussians, we end up at a local maximum close to where we started. Known remedies are to run k-means ten times and use the best clustering as the initial means (as Weka does), or to cluster hierarchically first (see my older blog post). The problem with k-means is that it is localized, too; the problem with the hierarchical version is that it is slow. In my experiments the initialization took longer than the actual clustering. Something I found today is "growing" the mixture incrementally [1].
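As an aside, the k-means restart strategy mentioned above can be sketched in a few lines with scikit-learn's GaussianMixture; that library is my own choice for illustration here, the post itself does not use it, and the data matrix X is a placeholder:

    # Sketch of the restart initialization: run EM n_init times from
    # k-means starting points and keep the best-scoring model.
    # (scikit-learn is an illustrative assumption, not the tool used above.)
    import numpy as np
    from sklearn.mixture import GaussianMixture

    X = np.random.randn(500, 2)  # placeholder data: 500 points in 2-D
    gmm = GaussianMixture(n_components=3, init_params='kmeans', n_init=10)
    gmm.fit(X)  # fits 10 models from k-means inits, keeps the best one

This hides the restarts behind one call, but it does not remove the underlying weakness: every restart still depends on where k-means itself lands.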

Just to remind you, EM is a two-stage algorithm. First we calculate, for each data point, the probability of belonging to each component under the current model parameters (E-step, expectation), and then we use these probability estimates to refine the parameters (M-step, maximization). The trick is now to start with fewer components than needed, split the Gaussian with the highest rank (highest weight or prior) after each M-step, and delete all components whose prior or variance has become too small. We repeat that process until we reach the desired k.
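To make that concrete, here is a minimal sketch in Python (NumPy/SciPy) of such a split-and-prune loop. The function names, the thresholds min_prior and min_var, and the split heuristic of moving the two new means apart along the dominant eigenvector are my own illustrative assumptions, not prescribed by [1]:

    # Minimal sketch of growing a Gaussian mixture: run EM, prune weak
    # components, split the heaviest one, repeat until k components remain.
    # Thresholds and the split heuristic are illustrative choices.
    import numpy as np
    from scipy.stats import multivariate_normal


    def em_step(X, weights, means, covs):
        # E-step: responsibilities r[i, j] = P(component j | x_i).
        r = np.column_stack([
            w * multivariate_normal.pdf(X, mean=m, cov=c)
            for w, m, c in zip(weights, means, covs)
        ])
        r = np.maximum(r, 1e-300)          # guard against all-zero rows
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate priors, means, and covariances.
        nk = r.sum(axis=0)
        weights = nk / len(X)
        means = [r[:, j] @ X / nk[j] for j in range(len(nk))]
        covs = [(r[:, j, None] * (X - means[j])).T @ (X - means[j]) / nk[j]
                + 1e-6 * np.eye(X.shape[1]) for j in range(len(nk))]
        return weights, means, covs


    def grow_mixture(X, k, em_steps=20, min_prior=0.01, min_var=1e-4):
        d = X.shape[1]
        # Start with a single Gaussian fitted to the whole data set.
        weights = np.array([1.0])
        means = [X.mean(axis=0)]
        covs = [np.cov(X.T) + 1e-6 * np.eye(d)]
        while len(weights) < k:
            for _ in range(em_steps):
                weights, means, covs = em_step(X, weights, means, covs)
            # Delete components with a too-small prior or variance.
            keep = [j for j in range(len(weights))
                    if weights[j] > min_prior and np.trace(covs[j]) / d > min_var]
            weights = weights[keep] / weights[keep].sum()
            means = [means[j] for j in keep]
            covs = [covs[j] for j in keep]
            # Split the component with the highest prior along its main axis.
            j = int(np.argmax(weights))
            eigval, eigvec = np.linalg.eigh(covs[j])
            offset = np.sqrt(eigval[-1]) * eigvec[:, -1]
            means.append(means[j] + offset)
            means[j] = means[j] - offset
            covs.append(covs[j].copy())
            weights = np.append(weights, weights[j] / 2.0)
            weights[j] /= 2.0
        # Final refinement with the full set of k components.
        for _ in range(em_steps):
            weights, means, covs = em_step(X, weights, means, covs)
        return weights, means, covs

Splitting along the dominant eigenvector is a common heuristic: that is the direction in which the component is most stretched, so the two children immediately divide up the spread the parent was covering.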

[1] Kevin P. Murphy: Machine Learning: A Probabilistic Perspective, MIT Press, 2012.
