# InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models
This repo is the official implementation of InstantMesh, a feed-forward framework for efficient 3D mesh generation from a single image, based on the LRM architecture.
https://github.com/TencentARC/InstantMesh/assets/20635237/737bba2d-df45-4707-8557-1dd84f248764
## 🚩 Todo List
- Release inference and training code.
- Release model weights.
- Release Hugging Face Gradio demo (we are waiting for a GPU grant and will make it available as soon as possible).
- Add support for more multi-view diffusion models.
## ⚙️ Dependencies and Installation
We recommend using `Python>=3.10`, `PyTorch>=2.1.0`, and `CUDA=12.1`.
```bash
conda create --name instantmesh python=3.10
conda activate instantmesh
pip install -U pip

# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7

# Install other requirements
pip install -r requirements.txt
```
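After installation, a quick sanity check helps catch PyTorch/CUDA/xformers mismatches early. This is a minimal sketch; the expected version strings assume the pinned install above:

```python
# Sanity check for the pinned PyTorch/CUDA/xformers stack installed above.
import torch
import xformers

print("torch:", torch.__version__)        # expect 2.1.0+cu121 with the pins above
print("xformers:", xformers.__version__)  # expect 0.0.22.post7
print("CUDA available:", torch.cuda.is_available())
```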
## 💫 How to Use

### Download the models
We provide 4 sparse-view reconstruction model variants and a customized Zero123++ UNet for white-background image generation in the model card. Please download the models and put them under the `ckpts/` directory. By default, we use the `instant-mesh-large` reconstruction model variant.
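If you prefer scripting the download rather than fetching files manually, a minimal sketch with `huggingface_hub` follows; the `repo_id` and `filename` below are assumptions, so verify them against the model card:

```python
# Sketch: fetch a checkpoint into ckpts/ with huggingface_hub.
# The repo_id and filename are assumptions; verify them against the model card.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="TencentARC/InstantMesh",    # assumed HF repo, see the model card
    filename="instant_mesh_large.ckpt",  # assumed checkpoint name
    local_dir="ckpts",
)
```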
### Start a local Gradio demo
To start a Gradio demo on your local machine, simply run:
```bash
python app.py
```
### Running from the command line
To generate 3D meshes from images via the command line, simply run:
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video
```
We use rembg to segment the foreground object. If the input image already has an alpha mask, please specify the `--no_rembg` flag:
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video --no_rembg
```
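For reference, the same kind of foreground segmentation our script performs can be reproduced with rembg directly. A minimal sketch, with placeholder input/output paths:

```python
# Sketch: strip the background from an input image with rembg,
# producing an RGBA image whose alpha channel masks the foreground object.
from PIL import Image
from rembg import remove

image = Image.open("examples/hatsune_miku.png")  # example image from this repo
image_rgba = remove(image)                       # returns a PIL image with alpha
image_rgba.save("hatsune_miku_rgba.png")         # placeholder output path
```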
By default, our script exports an `.obj` mesh with vertex colors. Please specify the `--export_texmap` flag if you want to export a mesh with a texture map instead (this takes longer):
```bash
python run.py configs/instant-mesh-large.yaml examples/hatsune_miku.png --save_video --export_texmap
```
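To sanity-check an exported mesh afterwards, a library such as trimesh (not part of this repo) works. This is only a sketch; `output.obj` is a placeholder, not the script's actual output path:

```python
# Sketch: load an exported .obj and print basic statistics with trimesh.
# "output.obj" is a placeholder; use the path your run actually produced.
import trimesh

mesh = trimesh.load("output.obj", force="mesh")  # force a single Trimesh
print("vertices:", mesh.vertices.shape)
print("faces:", mesh.faces.shape)
```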
Please use a different `.yaml` config file in the `configs` directory if you want to use another reconstruction model variant. For example, to use the `instant-nerf-large` model for generation:
```bash
python run.py configs/instant-nerf-large.yaml examples/hatsune_miku.png --save_video
```
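The configs are plain YAML, so you can inspect a variant before running it. A minimal sketch using OmegaConf, assuming it is installed by the requirements:

```python
# Sketch: peek at a reconstruction-model config before running it.
# Assumes omegaconf is available in the environment.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/instant-nerf-large.yaml")
print(OmegaConf.to_yaml(cfg))
```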
Note: When using the NeRF model variants for image-to-3D generation, exporting a mesh with a texture map by specifying `--export_texmap` may take a long time in the UV unwrapping step, since the default iso-surface extraction resolution is `256`. You can set a lower iso-surface extraction resolution in the config file.
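For example, the resolution can be lowered by editing the YAML before a run. In the sketch below the key `model.params.grid_res` is hypothetical; locate the actual resolution field in the config file:

```python
# Sketch: lower the iso-surface extraction resolution and save a new config.
# The key "model.params.grid_res" is HYPOTHETICAL; find the real field
# in the config file and adjust it there.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/instant-mesh-large.yaml")
OmegaConf.update(cfg, "model.params.grid_res", 128)  # hypothetical key
OmegaConf.save(cfg, "configs/instant-mesh-large-lowres.yaml")
```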
## 💻 Training
We provide our training code to facilitate future research, but we cannot provide the training dataset due to its size. Please refer to our dataloader for more details.
To train the sparse-view reconstruction models, please run:
```bash
# Training on NeRF representation
python train.py --base configs/instant-nerf-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1

# Training on Mesh representation
python train.py --base configs/instant-mesh-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```
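Since the dataset is not released, training on your own data means matching the dataloader's sample format. The sketch below is hypothetical; every field name and shape in it is an assumption, and the actual keys are defined in our dataloader:

```python
# HYPOTHETICAL sample layout for a sparse-view reconstruction dataset;
# the real field names and shapes are defined in the repo's dataloader.
import torch

sample = {
    "input_images": torch.rand(6, 3, 320, 320),   # N posed input views
    "input_c2ws": torch.eye(4).repeat(6, 1, 1),   # camera-to-world poses per view
    "target_images": torch.rand(4, 3, 512, 512),  # novel views for supervision
}
print({k: tuple(v.shape) for k, v in sample.items()})
```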
## 📚 Citation
If you find our work useful for your research or applications, please cite using this BibTeX:
```bibtex
@article{xu2024instantmesh,
  title={InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models},
  author={Xu, Jiale and Cheng, Weihao and Gao, Yiming and Wang, Xintao and Gao, Shenghua and Shan, Ying},
  journal={arXiv preprint arXiv:2404.07191},
  year={2024}
}
```
## 🤗 Acknowledgements
We thank the authors of the following projects for their excellent contributions to 3D generative AI!