Once again, the short-term memory does provide some self-consistency,
but it is only a rudimentary form of state information. The memory is
finite and covers a fixed window of a few seconds. It is not a full
state model since the *states* here do not correspond to
meaningful transitions or fundamentally different modes of
operation. Thus, if no events of significance occur for a few seconds,
the ARL system forgets its current state and starts off fresh. A
finite state automaton will not 'forget' its current discrete state
and might remain in it indefinitely until an appropriate transition is
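This contrast can be made concrete with a minimal sketch (hypothetical classes and event names, not the actual ARL implementation): a fixed-window memory whose summary evaporates after a quiet stretch, versus a finite state automaton that holds its discrete state until a transition fires.

```python
# Sketch: fixed-window short-term memory vs. finite state automaton.
# All names (WindowMemory, FSA, the "greet" event) are illustrative.
from collections import deque

class WindowMemory:
    """Keeps only the last `window` observations; once no events arrive
    for `window` steps, its summary decays and it starts off fresh."""
    def __init__(self, window=5):
        self.buffer = deque(maxlen=window)

    def observe(self, event):          # event may be None (nothing happened)
        self.buffer.append(event)

    def state(self):
        events = [e for e in self.buffer if e is not None]
        return events[-1] if events else "fresh"   # window forgot everything

class FSA:
    """Remains in its current discrete state indefinitely until an
    appropriate transition is triggered."""
    def __init__(self, transitions, start):
        self.transitions = transitions
        self.state_ = start

    def observe(self, event):
        if event is not None:
            self.state_ = self.transitions.get((self.state_, event), self.state_)

    def state(self):
        return self.state_

mem = WindowMemory(window=5)
fsa = FSA({("idle", "greet"): "engaged"}, start="idle")
for ev in ["greet"] + [None] * 10:     # one event, then ten quiet steps
    mem.observe(ev)
    fsa.observe(ev)

print(mem.state())   # "fresh": the window memory has forgotten the event
print(fsa.state())   # "engaged": the FSA keeps its discrete state
```

After ten empty steps the window memory reports `"fresh"` while the automaton still reports `"engaged"`, mirroring the forgetting behaviour described above.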
triggered. In addition, the continuous representation in the ARL's
short-term memory causes some spatially driven clustering in the
eigenspace. In a Hidden Markov Model (HMM), on the other hand,
states are clustered in terms of their output probabilities and their
generative characteristics. This is a more functional notion of
state. Therefore, events that occur adjacently in time but have very
different outputs will be partitioned by an HMM. The ARL
clustering, however, might not separate the two, instead grouping the
events together due to their spatio-temporal proximity (i.e. *not*
their functional
proximity). The notion of state can be included in the CEM algorithm
if it is extended to include Hidden Markov Models in the pdf (in
addition to Gaussians).
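The functional-versus-spatial distinction can be illustrated with a toy sketch (hypothetical data, not the ARL eigenspace): events that are adjacent in time but drawn from two very different output distributions. Grouping by output statistics, as an HMM's output probabilities would, recovers the two modes; grouping by temporal proximity does not.

```python
# Sketch: clustering by output statistics vs. by temporal proximity.
# The data and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T = 20
true_label = np.arange(T) % 2                 # modes alternate every step
outputs = np.where(true_label == 0,
                   rng.normal(0.0, 0.1, T),   # mode A: output near 0
                   rng.normal(5.0, 0.1, T))   # mode B: output near 5

# "Functional" grouping: assign each event to the nearer output mean.
functional = (np.abs(outputs - 5.0) < np.abs(outputs - 0.0)).astype(int)

# "Spatio-temporal" grouping: first half vs. second half of the window.
temporal = (np.arange(T) >= T // 2).astype(int)

print((functional == true_label).mean())  # 1.0: output-based grouping recovers the modes
print((temporal == true_label).mean())    # 0.5: time-based grouping is at chance
```

Because the two modes are interleaved in time, any temporally contiguous partition agrees with the true labels only at chance level, while the output-based partition separates them perfectly.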