My research focuses on developing algorithms that enable robots to intelligently interact with the physical world and improve themselves over time. In particular, I am interested in building reactive, perception-driven systems that can autonomously acquire the sensorimotor skills necessary to execute complex tasks in unstructured environments. My work lies at the intersection of robotics, computer vision, and machine learning. See my new Twitter for updates.



Robotic Pick-and-Place of Novel Objects in Clutter with Multi-Affordance Grasping and Cross-Domain Image Matching

We built a robotic pick-and-place system that can grasp and recognize novel objects (appearing for the first time during testing) in cluttered environments without any additional data collection or re-training. It achieves this with affordance-based, object-agnostic grasping and one-shot learning that recognizes objects using only product images (e.g., from the web). The approach was part of the MIT-Princeton Team system that took 1st place in the stowing task at the 2017 Amazon Robotics Challenge. A rough code sketch of the one-shot recognition idea follows the links below.

Andy Zeng, Shuran Song, Kuan-Ting Yu, Elliott Donlon, Francois R. Hogan, Maria Bauza, Daolin Ma, Orion Taylor, Melody Liu, Eudald Romo, Nima Fazeli, Ferran Alet, Nikhil Chavan Dafle, Rachel Holladay, Isabella Morona, Prem Qu Nair, Druck Green, Ian Taylor, Weber Liu, Thomas Funkhouser, Alberto Rodriguez
IEEE International Conference on Robotics and Automation (ICRA) 2018
Project  •   PDF  •   Code (Github)  •   MIT News  •   Amazon News  •   Engadget
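A minimal sketch of that one-shot recognition idea, assuming two hypothetical learned embedding functions (embed_observed for robot camera crops, embed_product for web product images) that map both domains into a shared feature space. Matching then reduces to nearest-neighbor lookup, so a new product only requires new product images, not re-training; this is an illustration, not the team's actual implementation.

```python
import numpy as np

def recognize(observed_crop, product_images, embed_observed, embed_product):
    """Match a cropped observation of a grasped object to its closest product image."""
    query = embed_observed(observed_crop)                            # (d,) embedding of the robot's view
    keys = np.stack([embed_product(img) for img in product_images])  # (N, d) product-image embeddings
    dists = np.linalg.norm(keys - query, axis=1)                     # L2 distances in the shared space
    return int(np.argmin(dists))                                     # index of the best-matching product
```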

Im2Pano3D: Extrapolating 360° Structure and Semantics Beyond the Field of View

We explore the limits of leveraging strong contextual priors learned from large-scale synthetic and real-world indoor scenes. To this end, we train a network that, given only a partial observation (≤ 50%) of an indoor scene in the form of an RGB-D image, generates a dense prediction of 3D structure and a probability distribution over semantic labels for the full 360° panoramic view. In other words, it can infer what's behind you. A schematic sketch of the input/output setup follows the links below.

Shuran Song, Andy Zeng, Angel X. Chang, Manolis Savva, Silvio Savarese, Thomas Funkhouser
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018
Oral Presentation
Project  •   PDF
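The sketch below illustrates only the input/output setup under assumed shapes; panonet is a placeholder for the trained network, and the equirectangular layout, channel order, and fusion of measured versus predicted depth are illustrative assumptions rather than the paper's interface.

```python
import numpy as np

def infer_whats_behind_you(rgbd, observed, panonet):
    """Complete a 360° panorama from a partial RGB-D observation.

    rgbd:     (H, W, 4) equirectangular color + depth panorama, zeros where unobserved
    observed: (H, W) boolean mask of pixels the camera actually saw (<= 50% of the panorama)
    panonet:  placeholder for the trained network -> (depth_pred (H, W), sem_prob (H, W, K))
    """
    depth_pred, sem_prob = panonet(rgbd, observed)
    depth = np.where(observed, rgbd[..., 3], depth_pred)  # keep measured depth where available
    labels = np.argmax(sem_prob, axis=-1)                 # most likely semantic class per pixel
    return depth, labels
```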

Matterport3D: Learning from RGB-D Data in Indoor Environments

We introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and scene classification.

Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Nießner, Manolis Savva, Shuran Song, Andy Zeng, Yinda Zhang
IEEE International Conference on 3D Vision (3DV) 2017
Project  •   PDF  •   Code (Github)  •   Matterport Blog

3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions

We present a data-driven model that learns a local 3D shape descriptor for establishing correspondences between partial and noisy 3D/RGB-D data. To amass training data for our model, we propose an unsupervised feature-learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Our learned descriptor not only matches local geometry in new scenes for reconstruction, but also generalizes to different tasks and spatial scales (e.g., instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). A sketch of the underlying metric-learning objective follows the links below.

Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, Thomas Funkhouser
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017
Oral Presentation
Project  •   PDF  •   Code (Github)  •   Talk  •   2 Minute Papers
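A minimal sketch of the metric-learning idea behind a learned local descriptor: corresponding patches (positive pairs mined from existing RGB-D reconstructions) should map to nearby descriptors, while non-corresponding patches should map far apart. Here describe stands in for the learned network over local volumetric patches; the specific loss and training details are in the paper.

```python
import numpy as np

def contrastive_loss(describe, patch_a, patch_b, is_match, margin=1.0):
    """Contrastive objective over a pair of local 3D patches.

    describe: placeholder for the learned network mapping a local TSDF patch to a descriptor
    is_match: True if the two patches come from corresponding 3D points in a reconstruction
    """
    d = np.linalg.norm(describe(patch_a) - describe(patch_b))  # descriptor distance
    if is_match:
        return d ** 2                     # pull corresponding patches together
    return max(0.0, margin - d) ** 2      # push non-corresponding patches at least `margin` apart
```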

Semantic Scene Completion from a Single Depth Image

We present an end-to-end model that infers a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. To train our model, we construct SUNCG, a manually created, large-scale dataset of synthetic 3D scenes with dense volumetric annotations. A sketch of the output representation follows the links below.

Shuran Song, Fisher Yu, Andy Zeng, Angel X. Chang, Manolis Savva, Thomas Funkhouser
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017
Oral Presentation
Project  •   PDF  •   SUNCG Dataset  •   Code (Github)  •   Talk  •   2 Minute Papers
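A small sketch of that output representation, under assumed shapes: the network scores every voxel in the view frustum against free space plus K semantic classes, so a single argmax yields both volumetric occupancy and semantic labels at once. The class indexing here is an assumption for illustration.

```python
import numpy as np

EMPTY = 0  # class index 0 reserved for free space (an assumption for this sketch)

def complete_scene(voxel_logits):
    """voxel_logits: (K + 1, D, H, W) per-voxel class scores predicted from a single depth map."""
    labels = np.argmax(voxel_logits, axis=0)   # per-voxel semantic label, including "empty"
    occupancy = labels != EMPTY                # occupied wherever a non-empty class wins
    return occupancy, labels
```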

Multi-view Self-supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge

We developed a vision system that recognizes objects and estimates their 6D poses in cluttered environments, despite partial data, sensor noise, multiple instances of the same object, and a wide variety of object categories. Our approach uses fully convolutional networks to segment and label multiple RGB-D views of a scene, then fits pre-scanned 3D object models to the resulting segmentations to estimate their poses. We also propose a scalable, self-supervised method that leverages precise and repeatable robot motions to generate a large labeled dataset without tedious manual annotation. The approach was part of the MIT-Princeton Team system that took 3rd place at the 2016 Amazon Picking Challenge. A sketch of the model-fitting step follows the links below.

Andy Zeng, Kuan-Ting Yu, Shuran Song, Daniel Suo, Ed Walker Jr., Alberto Rodriguez, Jianxiong Xiao
IEEE International Conference on Robotics and Automation (ICRA) 2017
Project  •   PDF  •   Shelf & Tote Dataset  •   Code (Github)
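A minimal sketch of the model-fitting stage, using a generic point-to-point ICP (Kabsch alignment with nearest neighbors from scipy.spatial.cKDTree). This illustrates the idea of registering a pre-scanned object point cloud to the segmented scene points; it is not the exact fitting procedure used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src points onto dst points (Kabsch)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(model_pts, segment_pts, iters=30):
    """Align a pre-scanned object model (N, 3) to segmented scene points (M, 3)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(segment_pts)
    pts = model_pts.copy()
    for _ in range(iters):
        _, idx = tree.query(pts)             # closest scene point for each model point
        R, t = best_rigid_transform(pts, segment_pts[idx])
        pts = pts @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total                  # 6D pose of the object in the scene
```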


Oral presentation on 3DMatch at CVPR 2017 (slides here)


My research has been graciously funded by

Design: HTML5 UP  •  My old website resides here