
Using Stable Diffusion models for Colorization
Recently I received some requests to include Stable Diffusion models in the HAVC colorization process.

So I decided to analyze the problem and write this post to describe my findings.

First of all, it is necessary to understand that Stable Diffusion models were developed for the text-to-image process: they build an image from a text description.
This "specialization" represents the main problem, because I want to use them to color an image that is already available.

For example, if I try to describe the following image to a Stable Diffusion model

 [Image: attachment.php?aid=3419]

in the best case I can obtain something like this:

[Image: attachment.php?aid=3420]

So I had to develop a complex pipeline, and after many attempts I was able to obtain "decent" colored images using the following models:

1) Juggernaut-XL_v9_RunDiffusionPhoto_v2 (for the text-to-image colorization)
2) control-LoRA-recolor-rank256 (a LoRA specialized to force the Stable Diffusion model to produce an image equal to the source in the gray-space)
3) DDColor_Modelscope (to provide a colored image as reference)
4) Qwen3-VL-2B (to describe the image provided by DDColor and to provide the "text" to Juggernaut, which tries to "mimic" DDColor)
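To make the gray-space constraint in step 2 concrete, here is a minimal NumPy sketch (my own illustration, not part of the actual workflow) of how one can measure whether a colorized candidate stays "equal to the source in the gray-space", by comparing Rec.601 luma values:

```python
import numpy as np

# Rec.601 luma weights (the usual RGB -> grayscale projection).
LUMA = np.array([0.299, 0.587, 0.114])

def gray_space_error(source_gray, colorized):
    """Mean absolute luma deviation between a gray source (H, W) and a
    colorized candidate (H, W, 3); a pure recolor should keep this near zero."""
    return float(np.mean(np.abs(colorized @ LUMA - source_gray)))

# Tiny synthetic check: a flat mid-gray source ...
source = np.full((4, 4), 0.5)
# ... a candidate that is trivially luma-preserving ...
good = np.repeat(source[..., None], 3, axis=2)
# ... and one that brightens the red channel, which changes the luma.
bad = good.copy()
bad[..., 0] += 0.1

print(gray_space_error(source, good))  # ~0
print(gray_space_error(source, bad))   # ~0.0299 (= 0.299 * 0.1)
```

A recolor model that only adds chroma should keep this error near zero; a plain text-to-image generation, as shown above, does not.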

Using this pipeline I obtained the following result (the source image, on the left, was generated with AI):

[Image: attachment.php?aid=3421]

The full description of the pipeline is too complex to be included in this post, but I can say that to build the colorization pipeline I used ComfyUI.

For those familiar with it, I've attached an image (Recolor_Workflow.png) containing the workflow that I used.
It is necessary to drag and drop the image into ComfyUI to view the workflow (it is very big).
Of course, it will be necessary to install all the missing nodes and models (available on Hugging Face).

In summary, apart from the speed (Stable Diffusion models are about 50x slower), I don't see any significant improvement in using Stable Diffusion models for the colorization process.
They could be used with an image-to-image or image-edit process to change the colors of an already colored image, to be used as a reference.
But that process is totally manual and cannot be included in the automatic colorization pipeline used by HAVC.
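As an illustration of the "colored image as reference" idea, here is a minimal NumPy sketch (again my own illustration, not HAVC code) of the basic chroma-transfer step: keep the luma (Y) of the grayscale source and borrow the chroma (Cb, Cr) from a colored reference, using the Rec.601/JPEG full-range YCbCr conversion:

```python
import numpy as np

# Rec.601 full-range RGB -> YCbCr matrix (as used by JPEG); the inverse
# converts back.
RGB2YCC = np.array([[ 0.299,     0.587,     0.114   ],
                    [-0.168736, -0.331264,  0.5     ],
                    [ 0.5,      -0.418688, -0.081312]])
YCC2RGB = np.linalg.inv(RGB2YCC)

def recolor_from_reference(source_gray, reference_rgb):
    """Keep the source's luma (Y) and borrow chroma (Cb, Cr) from the
    reference image; both inputs are float arrays in [0, 1]."""
    ref_ycc = reference_rgb @ RGB2YCC.T
    out_ycc = ref_ycc.copy()
    out_ycc[..., 0] = source_gray          # overwrite Y with the gray source
    return np.clip(out_ycc @ YCC2RGB.T, 0.0, 1.0)

# Synthetic example: dark gray source, mildly reddish reference.
source = np.full((2, 2), 0.25)
reference = np.empty((2, 2, 3))
reference[...] = [0.6, 0.3, 0.3]
result = recolor_from_reference(source, reference)

# By construction the result's luma matches the source (when no clipping
# occurs), while the hue comes from the reference.
print(np.allclose(result @ np.array([0.299, 0.587, 0.114]), source))  # True
```

This is essentially what the reference-guided path does conceptually; the hard part, of course, is producing a plausible reference in the first place.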
 

Dan




Messages In This Thread
Using Stable Diffusion models for Colorization - by Dan64 - 14.12.2025, 20:08
