This repo is the official implementation of InstantMesh, a feed-forward framework for efficient 3D mesh generation from a single image. We will release all the code, weights, and demo here.
We provide 4 sparse-view reconstruction model variants and a customized Zero123++ UNet for white-background image generation in the [model card](https://huggingface.co/TencentARC/InstantMesh).
Please download the models and put them under the `ckpts/` directory.
By default, we use the `instant-mesh-large` reconstruction model variant.
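If helpful, the whole model card can be fetched in one step; below is a minimal sketch using the `huggingface-cli` tool shipped with `huggingface_hub` (the target directory simply mirrors the `ckpts/` convention above):

```bash
# Download all weights from the TencentARC/InstantMesh model card into ckpts/.
huggingface-cli download TencentARC/InstantMesh --local-dir ckpts
```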
## Start a local gradio demo
To start a Gradio demo on your local machine, simply run:
```bash
python app.py
```
## Running with the command line
To generate 3D meshes from images via the command line, run the inference script as sketched below.
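A minimal sketch of the invocation (the script name `run.py`, the example image path, and the `--save_video` flag are assumptions about the repository layout, not stated in this section):

```bash
# Assumed entry point: reconstruct a mesh from a single image with the default model variant.
python run.py configs/instant-mesh-large.yaml examples/input.png --save_video
```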
We use [rembg](https://github.com/danielgatis/rembg) to segment the foreground object. If the input image already has an alpha mask, specify the `--no_rembg` flag to skip this step, as sketched below.
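For example (same assumptions about the script name and image path as above):

```bash
# Skip background removal when the input image already carries an alpha mask.
python run.py configs/instant-mesh-large.yaml examples/input.png --no_rembg
```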
By default, our script exports a `.obj` mesh with vertex colors. Specify the `--export_texmap` flag if you want to export a mesh with a texture map instead (this takes longer), as sketched below.
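For example (script name and image path assumed as above):

```bash
# Export a mesh with a UV texture map instead of vertex colors.
python run.py configs/instant-mesh-large.yaml examples/input.png --export_texmap
```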
Please use a different `.yaml` config file from the [configs](./configs) directory if you want to use another reconstruction model variant. For example, to use the `instant-nerf-large` model for generation, see the sketch below.
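A hedged example (the config filename `instant-nerf-large.yaml` is inferred from the variant name; the script name and image path are assumed as above):

```bash
# Switch to the instant-nerf-large reconstruction model variant via its config file.
python run.py configs/instant-nerf-large.yaml examples/input.png --save_video
```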
**Note:** When using the `NeRF` model variants for image-to-3D generation, exporting a mesh with a texture map via `--export_texmap` may take a long time in the UV unwrapping step, since the default iso-surface extraction resolution is `256`. You can set a lower iso-surface extraction resolution in the config file.
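A hypothetical sketch of such a config change (the section and key names below are assumptions; check the actual keys in your `.yaml` file):

```yaml
# Hypothetical config snippet: lower the iso-surface extraction resolution to speed up UV unwrapping.
infer_config:
  mesh_resolution: 128   # default is 256; key name is an assumption
```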
## Training

We provide our training code to facilitate future research, but we cannot release the training dataset due to its size. Please refer to our [dataloader](src/data/objaverse.py) for more details.
To train the sparse-view reconstruction models, please run:
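A minimal sketch (the training entry point `train.py`, the `--base` config flag, the training config filename, and the GPU layout are assumptions following common PyTorch Lightning-style launchers, not confirmed by this section):

```bash
# Assumed training launcher: train the large reconstruction model on 8 GPUs of a single node.
python train.py --base configs/instant-mesh-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```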