25.12.2025, 20:36
The DDColor version seems more realistic to me, to be frank.
Stable Diffusion probably only got the color right in this case because it was trained on images of Disneyland and DDColor was not.
So maybe figuring out how to create more models for DDColor might be a better, more resource-friendly goal.
> 1) model storage: Qwen Image Edit requires about 30GB of storage for the diffusion model, text encoder and VAE
> 2) speed: on my RTX 5070 Ti the colorization process takes about 20sec to get the colored picture.
Hmm... so in ~20 years your GPU can do the coloring for you live.
(Assuming GPU compute roughly doubles every 2 years, and real-time means 25-30 fps: going from 20 s/frame to 25 fps needs a ~500× speedup, and ~18-20 years of doubling gives ×512-×1024.)
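The back-of-the-envelope estimate above can be checked with a few lines of Python. This is just a sketch of the same arithmetic; the 20 s/frame figure is from the quoted post, and the growth factors (2× or 1.5× per 2-year step) are assumptions, not measurements.

```python
import math

current_time_per_frame = 20.0  # seconds per frame (RTX 5070 Ti, from the post)
target_fps = 25                # lower bound of "real-time"

# speedup needed to go from 20 s/frame to 25 fps
required_speedup = current_time_per_frame * target_fps  # 500.0

def years_needed(growth_per_2y):
    """Years until GPUs reach the required speedup,
    assuming a fixed growth factor every 2 years."""
    steps = math.log(required_speedup) / math.log(growth_per_2y)
    return 2 * steps

print(required_speedup)              # 500.0
print(round(years_needed(2.0), 1))   # ~17.9 years if compute doubles every 2 years
print(round(years_needed(1.5), 1))   # ~30.7 years at only 1.5x every 2 years
```

So the "~20 years" guess only works out if compute roughly doubles every 2 years; at 1.5× per step it would take closer to 30 years.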
Cu Selur
----
Dev versions are in the 'experimental' folder of my Google Drive, which is linked on the download page.