
Home-made Diffusion Model from Scratch to Hatch

Shih-Ying Yeh
National Tsing Hua University
arXiv:2509.06068 [cs.CV], 7 Sep 2025

@misc{yeh2025homemadediffusionmodelscratch,
   title={Home-made Diffusion Model from Scratch to Hatch},
   author={Shih-Ying Yeh},
   year={2025},
   eprint={2509.06068},
   archivePrefix={arXiv},
   primaryClass={cs.CV},
   url={https://arxiv.org/abs/2509.06068}
}

We introduce the Home-made Diffusion Model (HDM), an efficient yet powerful text-to-image diffusion model optimized for training (and inference) on consumer-grade hardware. HDM achieves competitive 1024×1024 generation quality while maintaining a remarkably low training cost of $535–620 using four RTX 5090 GPUs, a significant reduction in computational requirements compared to traditional approaches. Our key contributions are: (1) Cross-U-Transformer (XUT), a novel U-shaped transformer that employs cross-attention for skip connections, providing superior feature integration that leads to remarkable compositional consistency; (2) a comprehensive training recipe that incorporates TREAD acceleration, a novel shifted square crop strategy for efficient arbitrary-aspect-ratio training, and progressive resolution scaling; and (3) an empirical demonstration that smaller models (343M parameters) with carefully crafted architectures can achieve high-quality results and emergent capabilities, such as intuitive camera control. Our work offers an alternative scaling paradigm, demonstrating a viable path toward democratizing high-quality text-to-image generation for individual researchers and smaller organizations with limited computational resources.
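The headline architectural idea above, replacing the concatenation-based skip connections of a classic U-Net with cross-attention from decoder to encoder features, can be illustrated with a short PyTorch sketch. Everything below (module names, dimensions, normalization placement) is an illustrative assumption for exposition, not the paper's actual XUT implementation.

    import torch
    import torch.nn as nn

    class CrossAttentionSkip(nn.Module):
        """Fuse encoder ("skip") features into the decoder via cross-attention,
        instead of the channel concatenation used by a classic U-Net."""

        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            self.norm_q = nn.LayerNorm(dim)   # normalize decoder queries
            self.norm_kv = nn.LayerNorm(dim)  # normalize encoder keys/values
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

        def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
            # x:    decoder tokens, shape (batch, n_tokens, dim)
            # skip: encoder tokens from the matching U-level
            kv = self.norm_kv(skip)
            attended, _ = self.attn(self.norm_q(x), kv, kv)
            return x + attended  # residual keeps the decoder stream intact

    # Usage: fuse a decoder state with its matching encoder state.
    block = CrossAttentionSkip(dim=256)
    decoder_tokens = torch.randn(2, 64, 256)
    encoder_tokens = torch.randn(2, 64, 256)
    print(block(decoder_tokens, encoder_tokens).shape)  # torch.Size([2, 64, 256])

One design note: because attention matches tokens by content rather than by fixed position, the decoder can draw on any encoder token at that U-level, which is one plausible reading of the abstract's claim about improved feature integration and compositional consistency.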