Visual Hide and Seek


We train embodied agents to play Visual Hide and Seek, where a prey must navigate a simulated environment to avoid capture by a predator. We place a variety of obstacles in the environment for the prey to hide behind, and we give the agents only partial observations of their environment from an egocentric perspective. Although we train the model to play this game from scratch without any prior knowledge of its visual world, experiments and visualizations show that a representation of other agents automatically emerges in the learned representation. Furthermore, we quantitatively analyze how agent weaknesses, such as slower speed, affect the learned policy. Our results suggest that, although agent weaknesses make the learning problem more challenging, they also cause useful features to emerge in the representation.
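To make the setup concrete, here is a minimal sketch of a hide-and-seek environment with partial egocentric observations. This is a toy 2D gridworld, not the paper's 3D simulator: the class name, grid layout, reward values, and the greedy predator are all illustrative assumptions. The prey sees only a small window around itself, one obstacle blocks both movement and pursuit, and the prey earns reward for each step it survives.

```python
import numpy as np

# Action deltas: right, left, down, up, stay (illustrative convention).
MOVES = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0), 4: (0, 0)}

class HideAndSeekGrid:
    """Toy hide-and-seek gridworld with a partial egocentric observation.

    A sketch of the task structure only; the paper uses a 3D simulator
    with visual observations, not this gridworld.
    """

    def __init__(self, size=9, view=2):
        self.size, self.view = size, view
        self.reset()

    def reset(self):
        c = self.size // 2
        self.obstacles = {(c, c)}                 # one obstacle to hide behind
        self.prey = (0, 0)
        self.predator = (self.size - 1, self.size - 1)
        return self._observe()

    def _observe(self):
        """(2*view+1)^2 egocentric window: -1 out of bounds, 0 empty,
        1 obstacle, 2 predator."""
        v = self.view
        obs = np.full((2 * v + 1, 2 * v + 1), -1, dtype=int)
        px, py = self.prey
        for dx in range(-v, v + 1):
            for dy in range(-v, v + 1):
                x, y = px + dx, py + dy
                if 0 <= x < self.size and 0 <= y < self.size:
                    if (x, y) in self.obstacles:
                        obs[dx + v, dy + v] = 1
                    elif (x, y) == self.predator:
                        obs[dx + v, dy + v] = 2
                    else:
                        obs[dx + v, dy + v] = 0
        return obs

    def _move(self, pos, delta):
        x, y = pos[0] + delta[0], pos[1] + delta[1]
        if 0 <= x < self.size and 0 <= y < self.size and (x, y) not in self.obstacles:
            return (x, y)
        return pos                                # blocked moves are no-ops

    def step(self, prey_action):
        self.prey = self._move(self.prey, MOVES[prey_action])
        # Greedy predator: pick the move minimizing Manhattan distance to prey.
        best = min(MOVES.values(),
                   key=lambda d: abs(self._move(self.predator, d)[0] - self.prey[0])
                               + abs(self._move(self.predator, d)[1] - self.prey[1]))
        self.predator = self._move(self.predator, best)
        caught = self.predator == self.prey
        reward = -10.0 if caught else 1.0         # +1 per step survived
        return self._observe(), reward, caught
```

Because the observation window is small and obstacles block line of travel, a policy trained in even this toy setting must infer predator position from limited evidence, which is the core difficulty the paper studies.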


We systematically intervene on the learning process to understand the mechanisms behind the emergent features. Our model exhibits diverse behaviors across multiple variants of the environment. Further details can be found in our paper published at ALife 2020 (Best Poster Award).






Videos: learned policies in the VisibilityReward & FasterHider variants, compared with a Random (no training) baseline.



  1. We introduce the problem of visual hide-and-seek, where an agent receives a partial observation of its visual environment and must navigate to avoid capture.
  2. We empirically demonstrate that this task causes representations of other agents in the scene to emerge. (Does the agent learn to recognize other agents? Does it recognize its own self-visibility?)
  3. We analyze the underlying reasons why these representations emerge, and show that they are due to imperfections in the agent's abilities.
  4. We present a set of evaluation metrics to quantify the behaviors and representations that emerge through interaction. We believe these are useful for further studying the dynamics between agents.
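As one example of the kind of metric such an evaluation could use, here is a hedged sketch of a grid-based line-of-sight visibility check and a derived "hidden fraction" statistic. The function names, the sampling-based occlusion test, and the grid representation are all illustrative assumptions, not the paper's actual metrics.

```python
import numpy as np

def visible(a, b, obstacles, n_samples=50):
    """Line-of-sight on a grid: b is visible from a if the straight segment
    between the two cell centers crosses no obstacle cell.

    Illustrative sketch only; the paper's simulator is 3D, not a grid.
    """
    (ax, ay), (bx, by) = a, b
    for t in np.linspace(0.0, 1.0, n_samples):
        x = round(ax + t * (bx - ax))
        y = round(ay + t * (by - ay))
        if (x, y) in obstacles and (x, y) not in (a, b):
            return False
    return True

def hidden_fraction(trajectory, obstacles):
    """Fraction of timesteps at which the prey was occluded from the predator.

    `trajectory` is a list of (prey_position, predator_position) pairs.
    """
    hidden = [not visible(prey, pred, obstacles) for prey, pred in trajectory]
    return sum(hidden) / len(hidden)
```

A metric like this separates *hiding* behavior (staying occluded) from mere *fleeing* (maximizing distance), which is one way to quantify whether the learned policy exploits obstacles.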



We will release our environment and code.


[1] All authors are from Columbia University.


@inproceedings{chen2020visual,
  title={Visual Hide and Seek},
  author={Chen, Boyuan and Song, Shuran and Lipson, Hod and Vondrick, Carl},
  booktitle={Artificial Life Conference Proceedings},
  year={2020},
  organization={MIT Press}
}

Related Works

Our work builds on many great ideas. We link some of them here and refer the reader to the more complete list in our paper. Please check them out! (And please feel free to send us any references we may have accidentally missed.)


This research is supported by DARPA MTO grant L2M Program HR0011-18-2-0020 and NSF NRI 1925157. We would also like to thank NVIDIA for the GPU donation.


If you have any questions, please feel free to contact Boyuan Chen.