Farm3D is a method that uses an image generator, Stable Diffusion, to learn an articulated 3D object category entirely from virtual supervision. It proposes a framework in which the image generator produces the training data used to train a monocular reconstruction network from scratch.
The method yields a monocular reconstruction network capable of generating controllable 3D assets from a single input image, whether real or generated. Farm3D allows for adjusting lighting, swapping textures between models of the same category, and even animating the shape.
- Monocular 3D Reconstruction: Capable of generating controllable 3D assets from a single input image, whether real or generated, in a matter of seconds.
- 3D Shape from a Single Image: Factorises an input image of an object instance into articulated shape, appearance (albedo with diffuse and ambient light intensities), viewpoint, and light direction.
- Textured Controllable Synthesis: Enables the generation of controllable 3D assets from either a real image or an image synthesised using Stable Diffusion.
- Animation: Allows for the animation of the generated 3D assets.
- Relighting: Allows for the adjustment of lighting on the generated 3D assets.
- Texture Swapping: Enables the swapping of textures between models of the same category.
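The photometric factorisation above (albedo plus diffuse and ambient intensities, with an explicit light direction) corresponds to a standard Lambertian shading model, which is what makes relighting possible after reconstruction. A minimal sketch of such a shading step, assuming per-pixel albedo and surface normals have already been predicted (the function name and signature are illustrative, not Farm3D's actual API):

```python
import numpy as np

def lambertian_shade(albedo, normals, light_dir, k_ambient, k_diffuse):
    """Shade per-pixel albedo with an ambient + diffuse Lambertian model.

    albedo:    (H, W, 3) base colour in [0, 1]
    normals:   (H, W, 3) unit surface normals
    light_dir: (3,) unit vector pointing towards the light
    """
    # Clamp the cosine term so back-facing surfaces receive no diffuse light.
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)          # (H, W)
    shading = k_ambient + k_diffuse * n_dot_l                  # (H, W)
    return albedo * shading[..., None]                         # (H, W, 3)

# Toy example: a flat patch facing the light head-on.
albedo = np.full((2, 2, 3), 0.5)
normals = np.zeros((2, 2, 3))
normals[..., 2] = 1.0                                          # normals along +z
light = np.array([0.0, 0.0, 1.0])
shaded = lambertian_shade(albedo, normals, light, k_ambient=0.2, k_diffuse=0.8)
# shaded is 0.5 * (0.2 + 0.8 * 1.0) = 0.5 everywhere
```

Relighting then amounts to re-running this shading step with a new `light_dir` (or new intensities) while keeping the predicted shape and albedo fixed; texture swapping replaces `albedo` with that of another instance of the same category.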