Overview
One-2-3-45 represents a significant architectural shift in 3D generative AI, designed to address the latency and consistency issues of earlier Score Distillation Sampling (SDS) methods. Developed by researchers at UC San Diego, the system uses a feed-forward pipeline that leverages the 2D priors of a large-scale diffusion model (specifically Zero123) to synthesize multi-view images from a single input image of an object. These views are then fused by a cost-volume-based 3D reconstruction module. Unlike optimization-based approaches that can take hours per object, One-2-3-45 lifts a 2D image into a full 3D mesh in approximately 45 seconds.

By 2026, the architecture has become a benchmark for real-time spatial computing, providing the foundational logic for rapid asset creation in XR environments and game development pipelines. Its key strength is maintaining high geometric fidelity and cross-view consistency while avoiding the 'Janus problem' (duplicated front faces) common in early-stage 3D generators. The system also scales down well, supporting deployment on consumer-grade GPUs with at least 24GB of VRAM, which makes it attractive for local development and private enterprise deployment.
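The two-stage pipeline described above (multi-view synthesis followed by cost-volume reconstruction) can be sketched as follows. This is an illustrative skeleton, not the actual implementation: the function names `synthesize_view` and `reconstruct` are hypothetical stand-ins for the Zero123 sampler and the reconstruction module, and the view count, elevation, and radius are placeholder values rather than the system's real camera configuration.

```python
import math

def camera_poses(n_views=8, elevation_deg=30.0, radius=1.5):
    """Place cameras evenly spaced in azimuth on a ring around the object.
    (Illustrative only: counts and angles here are assumed placeholder values,
    not the configuration used by One-2-3-45.)"""
    poses = []
    elev = math.radians(elevation_deg)
    for i in range(n_views):
        azim = 2.0 * math.pi * i / n_views
        # Spherical-to-Cartesian conversion for the camera position.
        x = radius * math.cos(elev) * math.cos(azim)
        y = radius * math.cos(elev) * math.sin(azim)
        z = radius * math.sin(elev)
        poses.append((x, y, z))
    return poses

def lift_image_to_mesh(image, synthesize_view, reconstruct):
    """Feed-forward sketch of the two-stage pipeline:
    1) synthesize one image per target pose (Zero123-style conditioning),
    2) fuse the posed views into a mesh via the reconstruction module."""
    poses = camera_poses()
    views = [synthesize_view(image, pose) for pose in poses]
    return reconstruct(views, poses)
```

Because both stages are single forward passes rather than per-object optimization loops, the total cost is a fixed number of network evaluations, which is what keeps end-to-end runtime in the tens of seconds rather than hours.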
