Learning a Dynamical Systems Tree with Toy Data

 

 

For this toy problem, we chose a very simple Dynamical Systems Tree and sampled it for 200 time steps. We then attempted to learn the model and infer the hidden states from this data.

 

 

The DST used has two SLDS leaf processes and one higher-level aggregator tying them together. The discrete vertices each have 2 hidden states, the continuous hidden variables are each 2-dimensional Gaussians with full covariances, and the observed emission variables are one-dimensional.
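As a concrete illustration of this structure, the NumPy sketch below samples such a toy DST: a 2-state aggregator chain on top of two SLDS leaves. The parameter values and the exact form of the parent-child coupling (the aggregator's state selecting each leaf's switching-transition matrix) are illustrative assumptions, not the settings actually used to generate the data sets on this page.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200   # number of time steps sampled
K = 2     # discrete states per vertex
D = 2     # continuous hidden-state dimension per leaf

# Aggregator: a plain 2-state Markov chain at the top of the tree.
A_root = np.array([[0.95, 0.05],
                   [0.05, 0.95]])

# Assumed coupling: each leaf's switching transition matrix is selected by the
# aggregator's current state; A_leaf[parent_state] is the K x K matrix used.
A_leaf = np.array([[[0.90, 0.10],
                    [0.20, 0.80]],
                   [[0.60, 0.40],
                    [0.05, 0.95]]])

# Per switching state: linear dynamics, process covariance, and a 1-D emission.
F = [np.array([[0.9, 0.1], [-0.1, 0.9]]),
     np.array([[0.7, -0.3], [0.3, 0.7]])]
Q = [0.10 * np.eye(D), 0.30 * np.eye(D)]   # full covariances in general
C = np.array([[1.0, 0.5]])                 # 1-D emission matrix
R = 0.05                                   # emission noise variance

def sample_dst(T):
    z_root = np.zeros(T, dtype=int)       # aggregator switching states
    z_leaf = np.zeros((2, T), dtype=int)  # leaf switching states
    x = np.zeros((2, T, D))               # continuous hidden states
    y = np.zeros((2, T))                  # 1-D observations
    for t in range(T):
        z_root[t] = rng.choice(K, p=A_root[z_root[t - 1]]) if t else rng.choice(K)
        for leaf in range(2):
            prev = z_leaf[leaf, t - 1] if t else rng.choice(K)
            s = rng.choice(K, p=A_leaf[z_root[t], prev])
            z_leaf[leaf, t] = s
            x_prev = x[leaf, t - 1] if t else np.zeros(D)
            x[leaf, t] = F[s] @ x_prev + rng.multivariate_normal(np.zeros(D), Q[s])
            y[leaf, t] = (C @ x[leaf, t]).item() + rng.normal(0.0, np.sqrt(R))
    return z_root, z_leaf, x, y

z_root, z_leaf, x, y = sample_dst(T)
```

Each leaf is a standard SLDS: its 2-state switching chain picks the linear dynamics and process noise for the 2-dimensional Gaussian hidden state, which is then projected down to a one-dimensional observation.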

 

Data set 1:

Data set 2:

The first and second plots of each data set are the outputs of the two leaf processes. The third plot is the sampled switching state of the aggregator variable, and the fourth plot is the inferred probability of that hidden state under our learned model. The fifth and seventh plots are the sampled switching states for each leaf process, and the sixth and eighth are the corresponding inferred probabilities from our learned model.

The lower-level leaf processes had no problem recovering the switching sequence that generated the data. The higher-level aggregator also learned the proper switching state, although it had a little trouble with quick switches, such as the one at time step 96 in data set 1, and not all of its probabilities locked at 0 or 1. Overall it learned the switching states nearly perfectly.
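One rough way to quantify "nearly perfectly" is to threshold the inferred probability curves at 0.5 and compare them against the sampled switching sequences. The helper below is a hypothetical sketch: the names p_root and z_root are assumptions about how the inferred curve and sampled states are stored, and the score allows for the usual arbitrary relabelling of the learned states.

```python
import numpy as np

def switch_accuracy(p_inferred, z_true):
    """Fraction of time steps where the thresholded inferred probability
    P(state = 1) agrees with the sampled 0/1 switching sequence."""
    z_hat = (np.asarray(p_inferred) > 0.5).astype(int)
    z_true = np.asarray(z_true)
    acc = float(np.mean(z_hat == z_true))
    return max(acc, 1.0 - acc)   # learned state labels may be permuted

# e.g. switch_accuracy(p_root, z_root) for the aggregator of data set 1
```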

 

 

 


 
