Wednesday, January 2, 2013

Forward Recursion of Variable Duration Hidden Markov Models

Sometime last October/November I started wrapping my head around variable duration Hidden Markov Models, especially the forward recursion. Again, the formulas and most of my Hidden Markov Model knowledge come from the Rabiner paper [1]. In regular Hidden Markov Models the duration of staying in a state is modeled by a geometric distribution, imposed by the structure of HMMs:

\[ p_i(d) = (a_{ii})^{d-1}\,(1 - a_{ii}) \]
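
As a quick sanity check of this geometric behavior, here is a small Python snippet (my own illustration, not from the paper) that evaluates the duration mass for an assumed self-transition probability of 0.8:

```python
# A self-transition probability a_ii implies a geometric duration distribution:
# stay for d-1 more steps, then leave with probability 1 - a_ii.
a_ii = 0.8
p = [a_ii ** (d - 1) * (1 - a_ii) for d in range(1, 6)]
print(p)  # ~ [0.2, 0.16, 0.128, 0.1024, 0.0819] -- the mass decays geometrically
```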
However, in practice the duration is not geometrically distributed. Variable duration Hidden Markov Models fix this weakness by allowing each state to generate multiple observations at once. The length of this generated segment is modeled by a non-parametric distribution p(d). Furthermore, these models forbid self-transitions, that is a_{ii} = 0. The forward recursion then becomes:

\[ \alpha_t(j) = \sum_{i \neq j} \sum_{d=1}^{D} \alpha_{t-d}(i)\, a_{ij}\, p_j(d) \prod_{s=t-d+1}^{t} b_j(O_s) \]
The first difference is that we have to sum out the duration on top of the states, since we don't know how long the observed segment for that state is. The recursion therefore does not only look at the last state but looks back up to D steps, so the model is not truly "Markovian" but "semi-Markovian". The other difference is that the observation probability is no longer just the current frame scored under the current state's observation distribution, but the product over the whole segment from d steps back up to the current frame.
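
To make the recursion concrete, here is a minimal NumPy sketch of this forward pass. The function name forward_vdhmm, the array layout, and the handling of segments that start at t = 0 (scored with an initial distribution pi) are my own assumptions rather than anything from the paper; Rabiner's initialization is slightly more involved, and a practical implementation would also work in log space to avoid underflow.

```python
import numpy as np

def forward_vdhmm(A, p_dur, B, pi, D):
    """Forward recursion for a variable duration HMM (sketch).

    A     : (N, N) transition matrix with zero diagonal (no self-transitions)
    p_dur : (N, D) duration distributions, p_dur[j, d-1] = p_j(d)
    B     : (N, T) observation likelihoods, B[j, t] = b_j(O_t)
    pi    : (N,) initial state distribution (assumed convention for t = 0)
    D     : maximum state duration
    Returns alpha of shape (T, N).
    """
    N, T = B.shape
    alpha = np.zeros((T, N))
    for t in range(T):
        for j in range(N):
            # sum out the duration d of the segment ending at time t
            for d in range(1, min(D, t + 1) + 1):
                # observation probability of the whole d-frame segment
                obs = np.prod(B[j, t - d + 1 : t + 1])
                if t - d < 0:
                    # segment covers the start of the sequence:
                    # score it with the initial distribution (assumption)
                    alpha[t, j] += pi[j] * p_dur[j, d - 1] * obs
                else:
                    # sum over predecessor states i; A[j, j] == 0
                    # enforces i != j
                    alpha[t, j] += alpha[t - d] @ A[:, j] * p_dur[j, d - 1] * obs
    return alpha
```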

The plus side of this model is that it can capture the duration of each state much better. The downside is that inference gets considerably more expensive: since we additionally sum over all possible durations d = 1...D, the forward pass costs roughly a factor of D more than for a regular HMM.

[1] Lawrence R. Rabiner: "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition", Proceedings of the IEEE, Vol. 77, No. 2, 1989.
