InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models


This repo is the official implementation of InstantMesh, a feed-forward framework for efficient 3D mesh generation from a single image. We will release all the code, weights, and demo here.

https://github.com/TencentARC/InstantMesh/assets/20635237/737bba2d-df45-4707-8557-1dd84f248764

⚙️ Dependencies and Installation

We recommend using Python>=3.10, PyTorch>=2.1.0, and CUDA 12.1.

```bash
conda create --name instantmesh python=3.10
conda activate instantmesh
pip install -U pip

# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7

# Install other requirements
pip install -r requirements.txt
```
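
To quickly verify the installation, you can print the relevant package versions from Python. This is a minimal sanity-check sketch, assuming the versions pinned above:

```python
# Quick environment check for the versions recommended above.
import torch
import xformers

print("PyTorch:", torch.__version__)          # expected 2.1.0+cu121
print("CUDA available:", torch.cuda.is_available())
print("CUDA build:", torch.version.cuda)      # expected 12.1
print("xformers:", xformers.__version__)      # expected 0.0.22.post7
```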

💫 How to Use

Download the models

We provide 4 sparse-view reconstruction model variants and a customized Zero123++ UNet for white-background image generation in the model card.

Please download the models and put them under the ckpts/ directory.

By default, we use the instant-mesh-large reconstruction model variant.
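
If you prefer to script the download, the sketch below uses huggingface_hub; the repo ID is an assumption, so substitute the one listed in the model card:

```python
# Sketch: fetch the released checkpoints into ckpts/ with huggingface_hub.
# The repo ID below is an assumption; use the one listed in the model card.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TencentARC/InstantMesh",
    local_dir="ckpts",
    allow_patterns=["*.ckpt", "*.safetensors", "*.bin", "*.json"],  # skip unrelated files
)
```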

Start a local gradio demo

To start a Gradio demo on your local machine, simply run:

```bash
python app.py
```

Running with command line

To generate 3D meshes from images via the command line, simply run:

```bash
python run.py configs/instant-mesh-large.yaml examples/ --save_video
```

By default, our script exports a .obj mesh with vertex colors. Please specify the --export_texmap flag if you want to export a mesh with a texture map instead:

```bash
python run.py configs/instant-mesh-large.yaml examples/ --save_video --export_texmap
```

Please use a different .yaml config file in the configs directory if you want to use other reconstruction model variants. For example, to use the instant-nerf-large model for generation:

```bash
python run.py configs/instant-nerf-large.yaml examples/ --save_video --export_texmap
```
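
To sanity-check an exported mesh, you can load it with trimesh (install it with pip if it is not already in your environment). The output path below is a placeholder; point it at wherever run.py writes its results:

```python
# Sketch: inspect a generated .obj with trimesh (`pip install trimesh`).
# The path below is a placeholder; replace it with the mesh produced by run.py.
import trimesh

mesh = trimesh.load("outputs/instant-mesh-large/meshes/example.obj", force="mesh")
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
print("has vertex colors:", mesh.visual.kind == "vertex")
```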

💻 Training

We provide our training code to facilitate future research, but we cannot provide the training dataset due to its size. Please refer to our dataloader for more details.

To train the sparse-view reconstruction models, please run:

```bash
# Training on NeRF representation
python train.py --base configs/instant-nerf-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1

# Training on Mesh representation
python train.py --base configs/instant-mesh-large-train.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```
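
Before launching, you may want to review the training settings (e.g. dataset paths used by the dataloader mentioned above). A minimal sketch, assuming the YAML configs passed to --base can be parsed with OmegaConf (key names may differ in practice):

```python
# Sketch: load and inspect a training config before launching train.py.
# Assumes the config parses with OmegaConf; edit the YAML file itself to change settings.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/instant-mesh-large-train.yaml")
print(OmegaConf.to_yaml(cfg))  # review model/data settings here
```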

📚 Citation

If you find our work useful for your research or applications, please cite using this BibTeX:

```bibtex
@article{xu2024instantmesh,
  title={InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models},
  author={Jiale Xu and Weihao Cheng and Yiming Gao and Xintao Wang and Shenghua Gao and Ying Shan},
  journal={arXiv preprint},
  year={2024}
}
```

🤗 Acknowledgements

We thank the authors of the following projects for their excellent contributions to 3D generative AI!