Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images

Fudan University, Google Research

Overview of our framework. Our method takes as input a small number of color images with ground-truth or predicted camera poses and generates 3D meshes.

Abstract

We study the problem of shape generation in 3D mesh representation from a small number of color images, with or without camera poses. While many previous works learn to hallucinate the shape directly from priors, we choose to further improve the shape quality by leveraging cross-view information with a graph convolution network. Instead of building a direct mapping function from images to 3D shape, our model learns to predict a series of deformations that improve a coarse shape iteratively. Inspired by traditional multiple-view geometry methods, our network samples the area near the initial mesh's vertex locations and reasons about an optimal deformation using perceptual feature statistics built from multiple input images. Extensive experiments show that our model produces accurate 3D shapes that are not only visually plausible from the input perspectives, but also well aligned to arbitrary viewpoints. Thanks to its physically driven architecture, our model also generalizes across semantic categories and numbers of input images. Model analysis experiments show that our model is robust to the quality of the initial mesh and to errors in camera pose, and that it can be combined with a differentiable renderer for test-time optimization.
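To make the refinement step concrete, below is a minimal, illustrative PyTorch sketch of the idea described above: sample deformation hypotheses around each vertex, pool perceptual features for every hypothesis from each input view, aggregate cross-view feature statistics, and move the vertex toward a weighted combination of hypotheses. The names HypothesisRefiner and project_to_image, the 3x3x3 hypothesis grid, the mean/variance statistics, and the scoring MLP are simplifying assumptions for exposition, not the released implementation; the actual model reasons with a graph convolutional network and richer feature statistics.

import torch
import torch.nn as nn
import torch.nn.functional as F


def project_to_image(points, pose, intrinsics):
    """Project world-space points (V, 3) into normalized image coordinates.

    `pose` is a (3, 4) world-to-camera matrix; `intrinsics` is a (3, 3) K.
    Returns (V, 2) coordinates roughly in [-1, 1] for grid_sample, assuming
    the principal point sits at the image center.
    """
    cam = points @ pose[:, :3].T + pose[:, 3]           # world -> camera
    uv = cam @ intrinsics.T                             # camera -> pixels
    uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)          # perspective divide
    center = torch.stack([intrinsics[0, 2], intrinsics[1, 2]])
    return uv / center - 1.0                            # pixels -> [-1, 1]


class HypothesisRefiner(nn.Module):
    def __init__(self, feat_dim=64, num_hyp=27, radius=0.02):
        super().__init__()
        # Fixed 3x3x3 grid of deformation hypotheses around each vertex.
        g = torch.linspace(-radius, radius, 3)
        offsets = torch.stack(torch.meshgrid(g, g, g, indexing="ij"), -1)
        self.register_buffer("offsets", offsets.reshape(num_hyp, 3))
        # Score each hypothesis from cross-view statistics (mean and variance).
        self.score = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, verts, feat_maps, poses, intrinsics):
        # verts: (V, 3); feat_maps: (N, C, H, W) per-view perceptual features,
        # with C equal to feat_dim.
        hyp = verts[:, None, :] + self.offsets[None]        # (V, K, 3)
        V, K, _ = hyp.shape
        per_view = []
        for fm, pose in zip(feat_maps, poses):
            uv = project_to_image(hyp.reshape(-1, 3), pose, intrinsics)
            f = F.grid_sample(fm[None], uv[None, None], align_corners=False)
            per_view.append(f.reshape(-1, V, K).permute(1, 2, 0))  # (V, K, C)
        f = torch.stack(per_view)                           # (N, V, K, C)
        stats = torch.cat([f.mean(0), f.var(0, unbiased=False)], -1)
        w = self.score(stats).squeeze(-1).softmax(-1)       # (V, K)
        return (w[..., None] * hyp).sum(1)                  # deformed vertices


# Example with made-up shapes: refine 100 vertices from 3 views of
# 64-channel feature maps (all inputs here are random placeholders).
refiner = HypothesisRefiner()
verts = torch.randn(100, 3)
feats = torch.randn(3, 64, 56, 56)
poses = torch.eye(3, 4).expand(3, 3, 4)
K = torch.tensor([[100.0, 0.0, 56.0], [0.0, 100.0, 56.0], [0.0, 0.0, 1.0]])
new_verts = refiner(verts, feats, poses, K)     # (100, 3)

In this sketch the vertex update is a soft-argmax over hypotheses, so the whole step stays differentiable and can be stacked for iterative coarse-to-fine refinement, mirroring the iterative deformation described above.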

Network Architecture


Results


BibTeX


@inproceedings{wen2019pixel2mesh++,
  title     = {Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation},
  author    = {Wen, Chao and Zhang, Yinda and Li, Zhuwen and Fu, Yanwei},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages     = {1042--1051},
  year      = {2019}
}