Introducing GET3D, a generative model that creates explicit textured 3D meshes with complex topology and high-quality geometry and textures. It generates diverse shapes, including cars, chairs, animals, motorbikes, human characters, and buildings, and achieves significant improvements over previous methods. The model is end-to-end trainable and combines recent advances in differentiable surface modeling, differentiable rendering, and 2D Generative Adversarial Networks (GANs) to train from 2D image collections. GET3D can generate similar-looking shapes with slight local differences and, when combined with DIB-R++, can produce meaningful view-dependent lighting effects in an unsupervised manner.
GET3D (Nvidia) Features
- 3D Shape Generation: GET3D generates explicit textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures. Generated shapes range from cars, chairs, animals, motorbikes, and human characters to buildings.
- Latent Codes: GET3D uses two latent codes, one conditioning the 3D SDF (geometry) and one conditioning the texture field. Because geometry is represented implicitly as an SDF and then extracted as a mesh, the model can generate diverse shapes with arbitrary topology.
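As a rough sketch of this two-code design (not NVIDIA's implementation; the layer sizes, latent dimension, and function names below are illustrative), geometry and texture can be pictured as two separate networks, each conditioned on its own latent code and queried at 3D points:

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_mlp(in_dim, out_dim, seed):
    # Random-weight toy MLP standing in for the real generator branches.
    r = np.random.default_rng(seed)
    w1 = r.standard_normal((in_dim, 64)) * 0.1
    w2 = r.standard_normal((64, out_dim)) * 0.1
    return lambda x: np.tanh(x @ w1) @ w2

Z_DIM = 128  # illustrative latent size
sdf_branch = tiny_mlp(Z_DIM + 3, 1, seed=1)  # geometry: (point, z_geom) -> SDF value
tex_branch = tiny_mlp(Z_DIM + 3, 3, seed=2)  # texture:  (point, z_tex)  -> RGB

def generate(points, z_geom, z_tex):
    """points: (N, 3) query locations; returns per-point SDF values and colors."""
    n = points.shape[0]
    g_in = np.concatenate([points, np.tile(z_geom, (n, 1))], axis=1)
    t_in = np.concatenate([points, np.tile(z_tex, (n, 1))], axis=1)
    return sdf_branch(g_in), tex_branch(t_in)

points = rng.standard_normal((5, 3))
z_geom = rng.standard_normal((1, Z_DIM))
z_tex = rng.standard_normal((1, Z_DIM))
sdf, rgb = generate(points, z_geom, z_tex)
print(sdf.shape, rgb.shape)
```

Separating the two codes is what lets a user swap the texture latent while holding the geometry latent fixed (or vice versa) to mix and match shape and appearance.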
- Differentiable Renderer: A rasterization-based differentiable renderer produces RGB images and silhouettes from the generated meshes. Two 2D discriminators, one for RGB images and one for silhouettes, classify whether the inputs are real or fake.
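The two-discriminator setup can be illustrated with a standard non-saturating GAN loss, summed over both discriminators. This is a hedged sketch under that assumption; the logit values and variable names below are made up for illustration and are not from GET3D's code:

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def d_loss(real_logits, fake_logits):
    # Non-saturating GAN discriminator loss: push real logits up, fake logits down.
    return np.mean(softplus(-real_logits)) + np.mean(softplus(fake_logits))

def g_loss(fake_logits):
    # Generator wants the discriminator to score its outputs as real.
    return np.mean(softplus(-fake_logits))

# Illustrative logits from the two discriminators (RGB and silhouette).
rgb_fake = np.array([-0.5, 0.2])
mask_fake = np.array([0.1, -0.3])

# The generator's adversarial objective sums the terms from both discriminators.
total_g = g_loss(rgb_fake) + g_loss(mask_fake)
print(total_g)
```

Because the renderer is differentiable, gradients from both 2D losses flow back through the rendered pixels into the 3D geometry and texture networks.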
- End-to-End Trainable: The whole model is trained end to end directly from 2D image collections, without requiring 3D supervision, so the entire pipeline from images to 3D generation is optimized jointly.
- Text-Guided Generation: Users can provide a text prompt and GET3D can generate 3D shapes guided by it.
- High Quality: GET3D generates high-quality 3D textured meshes, with significant improvements in geometry and texture fidelity over previous 3D generative methods.