Our group studies computer vision and machine learning. By training machines to observe and interact with their surroundings, we aim to create robust and versatile models for perception. We often investigate visual models that capitalize on large amounts of unlabeled data and transfer across tasks and modalities. Our other interests include scene dynamics, sound, language, interpretable models, and perception for robotics. Our group is part of the Visual Computing and Machine Learning ecosystem at Columbia.
We have openings for a postdoc working broadly in computer vision. To apply, please reach out with your CV.
Prospective PhD students should apply here and mention my name on the application. The lab accepts one or two students every year.
Current Columbia students should email me directly with their CV.
Hui Lu (now at Facebook), Jillian Ross (now PhD student at MIT), Amogh Gupta (now at Amazon Research), Dave Epstein (CRA Honorable Mention, now PhD student at Berkeley)
Robust Perception through Equivariance New!
Chengzhi Mao, Lingyu Zhang, Abhishek Joshi, Junfeng Yang, Hao Wang, Carl Vondrick
ICML 2023
Paper Project Page
SurfsUp: Learning Fluid Simulation for Novel Surfaces New!
Arjun Mani*, Ishaan Preetam Chandratreya*, Elliot Creager, Carl Vondrick, Richard Zemel
arXiv 2023
Paper Project Page
Zero-1-to-3: Zero-shot One Image to 3D Object New!
Ruoshi Liu, Rundi Wu, Basile Van Hoorick, Pavel Tokmakov, Sergey Zakharov, Carl Vondrick
arXiv 2023
Paper Project Page Code Demo
ViperGPT: Visual Inference via Python Execution for Reasoning New!
Dídac Surís*, Sachit Menon*, Carl Vondrick
arXiv 2023
Paper Project Page Code
Humans as Light Bulbs: 3D Human Reconstruction from Thermal Reflection New!
Ruoshi Liu, Carl Vondrick
CVPR 2023
Paper Project Page
What You Can Reconstruct from a Shadow New!
Ruoshi Liu, Sachit Menon, Chengzhi Mao, Dennis Park, Simon Stent, Carl Vondrick
CVPR 2023
Paper Blog Post
Tracking through Containers and Occluders in the Wild New!
Basile Van Hoorick, Pavel Tokmakov, Simon Stent, Jie Li, Carl Vondrick
CVPR 2023
Paper Project Page Datasets Code
FLEX: Full-Body Grasping Without Full-Body Grasps New!
Purva Tendulkar, Dídac Surís, Carl Vondrick
CVPR 2023
Paper Project Page
Doubly Right Object Recognition: A Why Prompt for Visual Rationales New!
Chengzhi Mao, Revant Teotia, Amrutha Sundar, Sachit Menon, Junfeng Yang, Xin Wang, Carl Vondrick
CVPR 2023
Paper
Affective Faces for Goal-Driven Dyadic Communication New!
Scott Geng*, Revant Teotia*, Purva Tendulkar, Sachit Menon, Carl Vondrick
arXiv 2023
Paper Project Page
Visual Classification via Description from Large Language Models New!
Sachit Menon, Carl Vondrick
ICLR 2023 (Oral)
Paper Project Page Code Demo
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models New!
Chengzhi Mao, Scott Geng, Junfeng Yang, Xin Wang, Carl Vondrick
ICLR 2023
Paper
Adversarially Robust Video Perception by Seeing Motion New!
Lingyu Zhang*, Chengzhi Mao*, Junfeng Yang, Carl Vondrick
arXiv 2022
Paper Project Page
Muscles in Action New!
Mia Chiquier, Carl Vondrick
arXiv 2022
Paper Project Page
Task Bias in Vision-Language Models New!
Sachit Menon*, Ishaan Preetam Chandratreya*, Carl Vondrick
arXiv 2022
Paper
Private Multiparty Perception for Navigation New!
Hui Lu, Mia Chiquier, Carl Vondrick
NeurIPS 2022
Paper Project Page Code
Representing Spatial Trajectories as Distributions New!
Dídac Surís, Carl Vondrick
NeurIPS 2022
Paper Project Page
Landscape Learning for Neural Network Inversion New!
Ruoshi Liu, Chengzhi Mao, Purva Tendulkar, Hao Wang, Carl Vondrick
arXiv 2022
Paper Blog Post
Forget-me-not! Contrastive Critics for Mitigating Posterior Collapse
Sachit Menon, David Blei, Carl Vondrick
UAI 2022
Paper
Revealing Occlusions with 4D Neural Fields
Basile Van Hoorick, Purva Tendulkar, Dídac Surís, Dennis Park, Simon Stent, Carl Vondrick
CVPR 2022 (Oral)
Paper Project Page Talk
Globetrotter: Connecting Languages by Connecting Images
Dídac Surís, Dave Epstein, Carl Vondrick
CVPR 2022 (Oral)
Paper Project Page Code
Causal Transportability for Visual Recognition
Chengzhi Mao*, Kevin Xia*, James Wang, Hao Wang, Junfeng Yang, Elias Bareinboim, Carl Vondrick
CVPR 2022
Paper
It's Time for Artistic Correspondence in Music and Video
Dídac Surís, Carl Vondrick, Bryan Russell, Justin Salamon
CVPR 2022
Paper Project Page
UnweaveNet: Unweaving Activity Stories
Will Price, Carl Vondrick, Dima Damen
CVPR 2022
Paper
There is a Time and Place for Reasoning Beyond the Image
Xingyu Fu, Ben Zhou, Ishaan Preetam Chandratreya, Carl Vondrick, Dan Roth
ACL 2022 (Oral)
Paper Code + Data
Real-Time Neural Voice Camouflage
Mia Chiquier, Chengzhi Mao, Carl Vondrick
ICLR 2022 (Oral)
Paper Project Page Science
Discrete Representations Strengthen Vision Transformer Robustness
Chengzhi Mao, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa
ICLR 2022
Paper
Full-Body Visual Self-Modeling of Robot Morphologies
Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, Hod Lipson
Science Robotics 2022
Paper Project Page Code
The Boombox: Visual Reconstruction from Acoustic Vibrations
Boyuan Chen, Mia Chiquier, Hod Lipson, Carl Vondrick
CoRL 2021
Paper Project Page Video Overview
Adversarial Attacks are Reversible with Natural Supervision
Chengzhi Mao, Mia Chiquier, Hao Wang, Junfeng Yang, Carl Vondrick
ICCV 2021
Paper Code
Dissecting Image Crops
Basile Van Hoorick, Carl Vondrick
ICCV 2021
Paper Code
Learning the Predictability of the Future
Dídac Surís*, Ruoshi Liu*, Carl Vondrick
CVPR 2021
Paper Project Page Code Models Talk
Generative Interventions for Causal Learning
Chengzhi Mao, Amogh Gupta, Augustine Cha, Hao Wang, Junfeng Yang, Carl Vondrick
CVPR 2021
Paper Code
Learning Goals from Failure
Dave Epstein, Carl Vondrick
CVPR 2021
Paper Project Page Data Code Talk
Visual Behavior Modelling for Robotic Theory of Mind
Boyuan Chen, Carl Vondrick, Hod Lipson
Scientific Reports 2021
Paper Project Page
Listening to Sounds of Silence for Speech Denoising
Ruilin Xu, Rundi Wu, Yuko Ishiwaka, Carl Vondrick, Changxi Zheng
NeurIPS 2020
Paper Project Page
Multitask Learning Strengthens Adversarial Robustness
Chengzhi Mao, Amogh Gupta, Vikram Nitin, Baishakhi Ray, Shuran Song, Junfeng Yang, Carl Vondrick
ECCV 2020 (Oral)
Paper
We Have So Much In Common: Modeling Semantic Relational Set Abstractions in Videos
Alex Andonian, Camilo Fosco, Mathew Monfort, Allen Lee, Carl Vondrick, Rogerio Feris, Aude Oliva
ECCV 2020
Paper Project Page
Learning to Learn Words from Visual Scenes
Dídac Surís*, Dave Epstein*, Heng Ji, Shih-Fu Chang, Carl Vondrick
ECCV 2020
Paper Project Page Code Talk
Oops! Predicting Unintentional Action in Video
Dave Epstein, Boyuan Chen, Carl Vondrick
CVPR 2020
Paper Project Page Data Code Talk
Metric Learning for Adversarial Robustness
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray
NeurIPS 2019
Paper Code
VideoBERT: A Joint Model for Video and Language Representation Learning
Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, Cordelia Schmid
ICCV 2019
Paper Blog
Multi-level Multimodal Common Semantic Space for Image-Phrase Grounding
Hassan Akbari, Svebor Karaman, Surabhi Bhargava, Brian Chen, Carl Vondrick, Shih-Fu Chang
CVPR 2019
Paper Code
Relational Action Forecasting
Chen Sun, Abhinav Shrivastava, Carl Vondrick, Rahul Sukthankar, Kevin Murphy, Cordelia Schmid
CVPR 2019 (Oral)
Paper
Moments in Time Dataset: one million videos for event understanding
Mathew Monfort et al.
PAMI 2019
Paper Project Page
Tracking Emerges by Colorizing Videos
Carl Vondrick, Abhinav Shrivastava, Alireza Fathi, Sergio Guadarrama, Kevin Murphy
ECCV 2018
Paper Blog
The Sound of Pixels
Hang Zhao, Chuang Gan, Andrew Rouditchenko, Carl Vondrick, Josh McDermott, Antonio Torralba
ECCV 2018
Paper Project Page
Actor-centric Relation Network
Chen Sun, Abhinav Shrivastava, Carl Vondrick, Kevin Murphy, Rahul Sukthankar, Cordelia Schmid
ECCV 2018
Paper
AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
Chunhui Gu et al.
CVPR 2018 (Spotlight)
Paper Project Page
Following Gaze in Video
Adria Recasens, Carl Vondrick, Aditya Khosla, Antonio Torralba
ICCV 2017
Paper
Generating the Future with Adversarial Transformers
Carl Vondrick, Antonio Torralba
CVPR 2017
Paper Project Page
Cross-Modal Scene Networks
Yusuf Aytar*, Lluis Castrejon*, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
PAMI 2017
Paper Project Page
See, Hear, and Read: Deep Aligned Representations
Yusuf Aytar, Carl Vondrick, Antonio Torralba
arXiv 2017
Paper Project Page
Generating Videos with Scene Dynamics
Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
NeurIPS 2016
Paper Project Page Code NBC Scientific American New Scientist MIT News
SoundNet: Learning Sound Representations from Unlabeled Video
Yusuf Aytar*, Carl Vondrick*, Antonio Torralba
NeurIPS 2016
Paper Project Page Code NPR New Scientist Week Junior MIT News
Anticipating Visual Representations with Unlabeled Video
Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
CVPR 2016 (Spotlight)
Paper Project Page NPR CNN AP Wired Stephen Colbert MIT News
Predicting Motivations of Actions by Leveraging Text
Carl Vondrick, Deniz Oktay, Hamed Pirsiavash, Antonio Torralba
CVPR 2016
Paper Dataset
Learning Aligned Cross-Modal Representations from Weakly Aligned Data
Lluis Castrejon*, Yusuf Aytar*, Carl Vondrick, Hamed Pirsiavash, Antonio Torralba
CVPR 2016
Paper Project Page Demo
Visualizing Object Detection Features
Carl Vondrick, Aditya Khosla, Hamed Pirsiavash, Tomasz Malisiewicz, Antonio Torralba
IJCV 2016
Paper Project Page Slides MIT News
Do We Need More Training Data?
Xiangxin Zhu, Carl Vondrick, Charless C. Fowlkes, Deva Ramanan
IJCV 2015
Paper Dataset
Learning Visual Biases from Human Imagination
Carl Vondrick, Hamed Pirsiavash, Aude Oliva, Antonio Torralba
NeurIPS 2015
Paper Project Page Technology Review
Where are they looking?
Adria Recasens*, Aditya Khosla*, Carl Vondrick, Antonio Torralba
NeurIPS 2015
Paper Project Page Demo
Assessing the Quality of Actions
Hamed Pirsiavash, Carl Vondrick, Antonio Torralba
ECCV 2014
Paper Project Page
HOGgles: Visualizing Object Detection Features
Carl Vondrick, Aditya Khosla, Tomasz Malisiewicz, Antonio Torralba
ICCV 2013 (Oral)
Paper Project Page Slides MIT News
Do We Need More Training Data or Better Models for Object Detection?
Xiangxin Zhu, Carl Vondrick, Deva Ramanan, Charless C. Fowlkes
BMVC 2012
Paper Dataset
Efficiently Scaling Up Crowdsourced Video Annotation
Carl Vondrick, Donald Patterson, Deva Ramanan
IJCV 2012
Paper Project Page
Video Annotation and Tracking with Active Learning
Carl Vondrick, Deva Ramanan
NeurIPS 2011
Paper Project Page
A Large-scale Benchmark Dataset for Event Recognition
Sangmin Oh et al.
CVPR 2011
Paper Project Page
Efficiently Scaling Up Video Annotation with Crowdsourced Marketplaces
Carl Vondrick, Deva Ramanan, Donald Patterson
ECCV 2010
Paper Project Page