
Using Stable Diffusion models for Colorization
#3
The DDColor version looks more realistic to me, to be frank.
Stable Diffusion probably only got the colors right in this case because it was trained on images of Disneyland, while DDColor was not.
So figuring out how to train additional models for DDColor might be the better, more resource-friendly goal.

> 1) model storage: Qwen Image Edit requires about 30GB of storage for the diffusion model, text encoder and VAE
> 2) speed: on my RTX 5070 Ti the colorization process takes about 20sec to get the colored picture.
Hmm... so in ~20 years your GPU could do the coloring live. Smile
(Assuming GPU compute roughly doubles every 2 years and real-time means 25-30 fps: going from ~20 s per frame to real time needs a ~500-600× speedup, and 2^10 ≈ 1024× over 20 years would cover that.)
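A quick back-of-the-envelope sketch of that estimate (the doubling period and target frame rate are just the assumptions above, not measured values):

```python
# Rough estimate: years until colorization at ~20 s/frame becomes real-time,
# assuming GPU compute multiplies by `growth` every `period` years.
import math

seconds_per_frame_today = 20.0
target_fps = 25.0  # lower end of the 25-30 fps real-time range

# Speedup needed: from 1 frame per 20 s up to 25 frames per second.
needed_speedup = seconds_per_frame_today * target_fps  # 500x

growth, period = 2.0, 2.0  # assumed doubling every 2 years
years = period * math.log(needed_speedup) / math.log(growth)
print(f"needed speedup: {needed_speedup:.0f}x, roughly {years:.1f} years")
```

With these numbers it lands just under 20 years; a slower growth rate (e.g. 1.5× per 2 years) pushes the estimate out considerably.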


Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.


Messages In This Thread
RE: Using Stable Diffusion models for Colorization - by Selur - 25.12.2025, 20:36
