Model-Driven Feed-Forward Prediction
for Manipulation of Deformable Objects

Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun, Peter K. Allen

Abstract

Robotic manipulation of deformable objects is a difficult problem, especially because of the complexity of the many different ways an object can deform. Searching such a high-dimensional state space makes it difficult to recognize, track, and manipulate deformable objects. In this paper, we introduce a predictive, model-driven approach to address this challenge, using a pre-computed, simulated database of deformable object models. Mesh models of common deformable garments are simulated with the garments picked up in multiple different poses under gravity, and stored in a database for fast and efficient retrieval. To validate this approach, we developed a comprehensive pipeline for manipulating clothing as in a typical laundry task. First, the database is used for category and pose estimation of a garment in an arbitrary position. A fully featured 3D model of the garment is constructed in real time, and volumetric features are then used to obtain the most similar model in the database, predicting the object category and pose. Second, the database can significantly benefit the manipulation of deformable objects via non-rigid registration, providing accurate correspondences between the reconstructed object model and the database models. Third, the accurate model simulation can also be used to optimize trajectories for manipulation of deformable objects, such as the folding of garments. Extensive experimental results are shown for the tasks above using a variety of different clothing.
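
As a concrete illustration of the retrieval step described above, the Python sketch below matches a reconstructed query model against the database by nearest-neighbor search over volumetric features. This is a minimal sketch only: the binary-occupancy feature, the Euclidean distance, and the dictionary layout of the database entries are illustrative assumptions, not the paper's actual features or matching scheme.

    import numpy as np

    def volumetric_feature(voxel_grid):
        # Flatten a binary occupancy grid into a feature vector
        # (a stand-in for the paper's volumetric features).
        return voxel_grid.astype(np.float32).ravel()

    def predict_category_and_pose(query_grid, database):
        # The nearest database model under feature distance supplies
        # both the category and the grasped pose of the query garment.
        q = volumetric_feature(query_grid)
        best = min(database, key=lambda e: np.linalg.norm(q - e["feature"]))
        return best["category"], best["grasp_label"]

Here `database` is assumed to be a list of dictionaries, each holding a precomputed `feature` vector together with its `category` and `grasp_label` annotations.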

Database

We have developed an offline simulation pipeline that uses an advanced simulator, Maya, to simulate deformable objects, with results accurate enough to support a variety of applications. In this way, we can produce thousands of exemplars efficiently, which serve as a corpus for learning the visual appearance of deformed garments. Compared with acquiring data from real objects via sensors, offline simulation is time-efficient, noise-free, and more accurate: simulated models do not suffer from the occlusion and noise that affect physically scanned models. In the offline simulation, we use a few well-defined garment mesh models such as sweaters, jeans, and short pants.
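
The batch nature of this pipeline is easy to picture with a short driver loop. The sketch below is a hypothetical standalone script: `simulate_hanging`, the garment list, and the number of grasp points per garment are placeholder assumptions, and in practice the simulation runs inside Maya rather than from such a script.

    import itertools
    import pathlib
    import pickle

    GARMENTS = ["sweater", "jeans", "short_pants"]  # mesh models used in the paper
    GRASP_POINTS_PER_GARMENT = 80                   # hypothetical sampling density

    def build_database(simulate_hanging, out_dir="garment_db"):
        # Run the offline simulation for every (garment, grasp point)
        # pair and store the settled mesh for later retrieval.
        # `simulate_hanging` wraps the cloth simulator and is assumed
        # to return the garment mesh after it comes to rest.
        pathlib.Path(out_dir).mkdir(exist_ok=True)
        for garment, gp in itertools.product(GARMENTS, range(GRASP_POINTS_PER_GARMENT)):
            mesh = simulate_hanging(garment, grasp_point=gp)
            with open(f"{out_dir}/{garment}_{gp:03d}.pkl", "wb") as f:
                pickle.dump({"garment": garment, "grasp_point": gp, "mesh": mesh}, f)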

For each grasping point, we compute the garment layout by simulating the garment hanging under gravity. The settled garment models are shown in the figure below. We manually label each garment in the database with key grasping points such as the sleeve end, elbow, shoulder, chest, and waist.

Example simulated models in the database.
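
One way to organize the resulting records, with the manual grasp-point labels attached, is sketched below. The `DatabaseEntry` type and its field names are illustrative assumptions, not the paper's actual storage format.

    from dataclasses import dataclass, field
    from typing import List

    # Manually assigned grasp-point labels, as described above.
    GRASP_LABELS = ["sleeve_end", "elbow", "shoulder", "chest", "waist"]

    @dataclass
    class DatabaseEntry:
        category: str        # e.g., "sweater", "jeans", "short_pants"
        grasp_label: str     # one of GRASP_LABELS, assigned by hand
        mesh_path: str       # settled mesh from the hanging simulation
        feature: List[float] = field(default_factory=list)  # precomputed volumetric feature

    # Example record for a sweater grasped at the elbow:
    entry = DatabaseEntry(category="sweater", grasp_label="elbow",
                          mesh_path="garment_db/sweater_elbow.obj")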

The database supports multiple applications, which are demonstrated in the videos below.

Citation

Yinxiao Li, Yan Wang, Yonghao Yue, Danfei Xu, Michael Case, Shih-Fu Chang, Eitan Grinspun and Peter Allen, "Model-Driven Feed-Forward Prediction for Manipulation of Deformable Objects," IEEE Transactions on Automation Science and Engineering, 2017

BibTeX

    @article{li_tase,
        Author = {Li, Yinxiao and Wang, Yan and Yue, Yonghao and Xu, Danfei and Case, Michael and Chang, Shih-Fu and Grinspun, Eitan and Allen, Peter},
        Title = {Model-Driven Feed-Forward Prediction for Manipulation of Deformable Objects},
        Journal = {IEEE Transactions on Automation Science and Engineering},
        Publisher = {IEEE},
        Year = {2017}
    }