EstiHMM: heat plots for the number of maximal sequences

by gertekoo

In a previous post, I mentioned an efficient algorithm for predicting the maximal state sequences for a given output sequence in an imprecise hidden Markov model (iHMM). Jasper De Bock and I have since given this algorithm a name, EstiHMM, and have written an implementation in Python that we intend to make public as soon as possible, also via this channel. We also hope to be able to present our work at the coming ISIPTA’11 conference in Innsbruck.

We now know that EstiHMM’s complexity is cubic in the number of states of the hidden variables, quadratic in the number of hidden variables (the length of the chain), and linear in the number of maximal sequences. This is comparable to the complexity of the Viterbi algorithm, once we take into account that Viterbi resolves ties arbitrarily, something we are not allowed to do for iHMMs.

While linear complexity in the number of maximal sequences is probably as good as it gets, it also means that we can only hope to find all maximal sequences efficiently when their number is reasonably small. Should that number, say, grow exponentially with the length of the chain, then no algorithm, however cleverly designed, could overcome this hurdle.

Because this number of maximal sequences is so important, we decided to study its behaviour in more detail.

In a preliminary analysis, we used Monte Carlo methods to study the distribution of the number of maximal elements for randomly chosen transition and emission models. We found that the expected number of maximal elements is typically rather small, but its variance can become quite large with increasing imprecision, indicating roughly that high numbers of maximal elements only occur in restricted regions of (imprecise) probability space.
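To make this concrete, here is a minimal Monte Carlo sketch in Python. The helper count_maximal_sequences(transition, emission, epsilon, outputs) is a hypothetical stand-in for the EstiHMM implementation, which is not yet public; only the sampling of random precise models is spelled out.

```python
# Monte Carlo sketch: sample random precise transition and emission models,
# contaminate them with a given epsilon, and record the number of maximal
# state sequences. count_maximal_sequences(...) is a hypothetical stand-in
# for the (not yet public) EstiHMM implementation.
import numpy as np

def random_stochastic_matrix(rows, cols, rng):
    """Draw a random row-stochastic matrix (each row sums to one)."""
    matrix = rng.random((rows, cols))
    return matrix / matrix.sum(axis=1, keepdims=True)

def sample_counts(n_samples, n_states, n_outputs, epsilon, outputs, seed=0):
    """Record the number of maximal sequences for randomly chosen models."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(n_samples):
        transition = random_stochastic_matrix(n_states, n_states, rng)
        emission = random_stochastic_matrix(n_states, n_outputs, rng)
        counts.append(count_maximal_sequences(transition, emission, epsilon, outputs))
    return np.array(counts)

# Example use (outputs is the observed output sequence):
# counts = sample_counts(10_000, n_states=2, n_outputs=2,
#                        epsilon=0.2, outputs=(0, 1, 1, 0))
# print(counts.mean(), counts.var())
```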

To get a more definite picture, we decided to concentrate on binary hidden Markov chains, where the state variables have two possible values: say, 0 and 1. The output variables are binary as well: again 0 and 1, say. This means that a precise transition matrix for going from one state to the next is completely determined by two values: the probability p of going from state 0 to state 0, and the probability q of going from state 1 to state 1. We obtain an imprecise model by contamination: we take convex mixtures of such a precise model, with mixture coefficient 1-\epsilon, and the vacuous model, with mixture coefficient \epsilon. As \epsilon ranges from zero to one, the model moves from precise to vacuous.
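The contamination step itself is easy to spell out. Here is a minimal sketch (an illustration of the construction, not an excerpt from our code) that computes the lower and upper transition probabilities of the linear-vacuous mixture for the binary model (p, q).

```python
# Lower and upper transition probabilities of the linear-vacuous mixture of a
# precise binary transition model (p, q) with the vacuous model.

def contaminate(p, q, epsilon):
    """Return lower and upper transition probabilities for the contaminated model.

    p: precise probability of going from state 0 to state 0
    q: precise probability of going from state 1 to state 1
    epsilon: contamination (imprecision) coefficient in [0, 1]
    """
    # For a (non-trivial) event A, the lower probability is (1 - epsilon) * P(A)
    # and the upper probability is (1 - epsilon) * P(A) + epsilon.
    lower = {
        (0, 0): (1 - epsilon) * p,
        (0, 1): (1 - epsilon) * (1 - p),
        (1, 1): (1 - epsilon) * q,
        (1, 0): (1 - epsilon) * (1 - q),
    }
    upper = {key: value + epsilon for key, value in lower.items()}
    return lower, upper

# epsilon = 0 recovers the precise model, epsilon = 1 gives the vacuous one.
low, up = contaminate(p=0.7, q=0.4, epsilon=0.1)
print(low[(0, 0)], up[(0, 0)])  # 0.63 0.73
```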

We kept the emission model fixed and precise, and for a fixed value of the output sequence and of \epsilon, we allowed the probabilities (p,q) to range over a 1000 by 1000 grid in the unit square. For each of these values, we used EstiHMM to calculate the corresponding number of maximal state sequences. The results can be represented quite efficiently and elegantly in a heat plot.
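As an illustration, such a sweep could be coded along the following lines, with count_maximal_sequences(transition, emission, epsilon, outputs) again a hypothetical stand-in for the EstiHMM implementation.

```python
# Sketch of the (p, q) sweep and the resulting heat plot.
# count_maximal_sequences(...) is a hypothetical stand-in for EstiHMM.
import numpy as np
import matplotlib.pyplot as plt

def heat_plot(emission, outputs, epsilon, resolution=1000):
    """Heat plot of the number of maximal state sequences over a (p, q) grid."""
    ps = np.linspace(0.0, 1.0, resolution)  # a smaller resolution is much faster
    qs = np.linspace(0.0, 1.0, resolution)
    counts = np.empty((resolution, resolution), dtype=int)
    for i, q in enumerate(qs):        # rows: q, probability of staying in state 1
        for j, p in enumerate(ps):    # columns: p, probability of staying in state 0
            transition = np.array([[p, 1.0 - p], [1.0 - q, q]])
            counts[i, j] = count_maximal_sequences(transition, emission, epsilon, outputs)
    plt.imshow(counts, origin='lower', extent=(0, 1, 0, 1), aspect='equal')
    plt.colorbar(label='number of maximal sequences')
    plt.xlabel('p')
    plt.ylabel('q')
    plt.title(f'epsilon = {epsilon}')
    plt.show()
```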

Here are a few of the more spectacular ones:

To see how the number of maximal elements evolves as the imprecision increases, take a look at the following animation:
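Such an animation can be stitched together from one heat-plot frame per value of \epsilon; the sketch below (again with the hypothetical count_maximal_sequences stand-in) uses matplotlib's FuncAnimation to save the frames as a GIF.

```python
# Sketch: animate the heat plot as epsilon increases.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

def animate(emission, outputs, epsilons, resolution=200):
    """Save one heat-plot frame per value of epsilon as an animated GIF."""
    ps = np.linspace(0.0, 1.0, resolution)
    qs = np.linspace(0.0, 1.0, resolution)

    def frame(epsilon):
        # Number of maximal sequences on the (p, q) grid for this epsilon.
        return np.array([[count_maximal_sequences(
                              np.array([[p, 1.0 - p], [1.0 - q, q]]),
                              emission, epsilon, outputs)
                          for p in ps] for q in qs])

    fig, ax = plt.subplots()
    image = ax.imshow(frame(epsilons[0]), origin='lower', extent=(0, 1, 0, 1))
    ax.set_xlabel('p')
    ax.set_ylabel('q')

    def update(epsilon):
        image.set_data(frame(epsilon))
        ax.set_title(f'epsilon = {epsilon:.2f}')
        return (image,)

    anim = FuncAnimation(fig, update, frames=list(epsilons))
    anim.save('maximal_sequences.gif', writer=PillowWriter(fps=2))

# Example use: animate(emission, outputs, epsilons=np.linspace(0.0, 0.5, 26))
```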

We see that, even for appreciable imprecision, there are large regions of transition probability space where the number of maximal elements remains fairly small. The plots display quite interesting and intricate behaviour. And the nice thing is: we can explain it, and even predict it to some extent. But for that, you’ll have to wait until we see each other at the coming ISIPTA meeting. Looking forward to it!

Update: Call us fickle, but we have decided to rename the algorithm iHMMPredict to EstiHMM. It just tastes better.
