Selur's Little Message Board
Using Stable Diffusion models for Colorization - Printable Version

+- Selur's Little Message Board (https://forum.selur.net)
+-- Forum: Talk, Talk, Talk (https://forum.selur.net/forum-5.html)
+--- Forum: Small Talk (https://forum.selur.net/forum-7.html)
+--- Thread: Using Stable Diffusion models for Colorization (/thread-4287.html)

Pages: 1 2 3


RE: Using Stable Diffusion models for Colorization - Dan64 - 04.01.2026

Attaching the workflow to the image isn't a bizarre idea of mine; it's the standard way ComfyUI saves its images.
The reason is simple: anyone receiving an image generated by ComfyUI can see how it was generated and possibly reproduce the same result.

I downloaded the image from the forum and uploaded it to ComfyUI as described, and the workflow is perfectly visible. 
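
By the way, if you want to read the embedded workflow outside of ComfyUI, here is a minimal Python sketch (ComfyUI writes the editable graph into the PNG text chunk "workflow" and the API-format graph into "prompt"; the file name is only illustrative):

    import json
    from PIL import Image

    img = Image.open("comfyui_output.png")  # any image saved by ComfyUI
    # ComfyUI stores the editable graph under "workflow" and the
    # API-format graph under "prompt" in the PNG text chunks.
    workflow = json.loads(img.info["workflow"])
    print(len(workflow["nodes"]), "nodes in the workflow")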

In any case, I have attached a zip file containing the workflow in JSON format. Again, just load the file into ComfyUI via drag-and-drop to see the workflow.

I also included a legend with the links where it is possible to download the models used in the workflow.

Dan


RE: Using Stable Diffusion models for Colorization - Selur - 04.01.2026

Unlike drag&dropping the png, drag&dropping the json works for me.
¯\_(ツ)_/¯

Cu Selur


RE: Using Stable Diffusion models for Colorization - Dan64 - 24.01.2026

In case it could be useful: the workflow uses the model svdq-fp4, available on this page: https://huggingface.co/nunchaku-ai/nunchaku-qwen-image-edit-2509
The fp4 quantization is supported only by RTX 50-series GPUs; for older GPUs it is necessary to use the svdq-int4 quantization, which you can find at the link above.

Some suggestions: given that the colorization of a B&W image is a much simpler task than text-to-image generation, it is possible to use the rank32 version of the svdq-int4 quantization and lower the number of steps from 4 to 2.
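
If you want to script the download, here is a minimal sketch using huggingface_hub (the exact file name inside the repository is an assumption; check the repo page for the real names of the fp4/int4 variants):

    from huggingface_hub import hf_hub_download

    # Repository from the link above; the file name is hypothetical --
    # browse the repo to pick the exact quantized weights you need
    # (fp4, int4, rank32, ...).
    path = hf_hub_download(
        repo_id="nunchaku-ai/nunchaku-qwen-image-edit-2509",
        filename="svdq-int4_r32-qwen-image-edit-2509.safetensors",  # hypothetical
    )
    print("downloaded to:", path)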

Dan


RE: Using Stable Diffusion models for Colorization - Dan64 - 25.01.2026

Hello,

For those interested in applying DiT models to colorize old B&W films: I've recolored the films I've published on the Internet Archive using the new pipeline, which uses a DiT model to automatically color the reference frames.
The list is available at this link: havc-colorized-movies

The new pipeline for colorizing the movies is the following:

    1) extract the reference frames (with HAVC)
    2) colorize the reference frames with a DiT model (currently I'm using qwen-image-edit)
    3) propagate the colors of the reference frames to the full clip (with HAVC)
    4) generate an alternative stable colored clip (with HAVC)

Unfortunately, even if the frames colored with DiT models have more stable and better colors than DDColor, the problem of color stability and consistency is still present. To mitigate it, I need to merge the clip colored at step 3 with the clip generated at step 4.
In my list of colored movies, there is only one movie that was not merged with an alternative stable colored clip: a-night-to-remember-colorized-1958-720p (*)
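
For the merge itself, a minimal VapourSynth sketch; the BlankClips are stand-ins for the real step 3/step 4 outputs, and the equal-weight std.Merge is just one plausible way to combine the two clips, not necessarily what HAVC does internally:

    import vapoursynth as vs
    core = vs.core

    # Stand-ins for the real clips: in practice clip_a is the clip with
    # colors propagated from the DiT-colored reference frames (step 3)
    # and clip_b is the alternative stable colored clip (step 4).
    clip_a = core.std.BlankClip(format=vs.YUV420P8, length=100)
    clip_b = core.std.BlankClip(format=vs.YUV420P8, length=100)

    # Average the two colorizations to damp frame-to-frame color swings.
    merged = core.std.Merge(clipa=clip_a, clipb=clip_b, weight=0.5)
    merged.set_output()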

Dan

(*) left as a reference to the problems of stability and consistency of colors when using DiT models. The movie was colored with HAVC (ColorMNet, max_memory_frames=150); despite its shortcomings, ColorMNet is the only model that is able to mitigate this problem automatically.


RE: Using Stable Diffusion models for Colorization - XxBo0oMxX - 08.02.2026

Can Qwen-IE be added to the Hybrid program?
This would eliminate the need for external programs and simplify the coloring process, making it easier to adopt the new coloring technology.


RE: Using Stable Diffusion models for Colorization - Selur - 08.02.2026

No. At this time, I don't know how to add Qwen-IE or similar stuff to Hybrid (in a good way).

comfy-cli and ComfyUI-to-Python-Extension might be suited to automate some of it, by running the ComfyUI generation as an external step rather than using ComfyUI as a 'native' VapourSynth module.
Ignoring that: at least with current consumer/local hardware, and unless it is used just for generating a few key frames, the process would be too slow.
If someone comes up with something like 'vs-mlrt' or a 'native' VapourSynth module, things might be different, but atm this all does not seem feasible to me.
(Also, temporal consistency could be a problem with how Qwen-Image-Edit & co. work atm.)
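
As a rough illustration of such an external step, a minimal sketch that queues a job on a locally running ComfyUI server (the /prompt endpoint is ComfyUI's standard queue API; the workflow file, exported via "Save (API Format)", has a hypothetical name here):

    import json
    import urllib.request

    # Workflow exported from ComfyUI via "Save (API Format)".
    with open("colorize_workflow_api.json", "r", encoding="utf-8") as f:
        workflow = json.load(f)

    # Queue the job on a ComfyUI server running on the default port.
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # response contains the prompt_id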

Cu Selur


RE: Using Stable Diffusion models for Colorization - XxBo0oMxX - 08.02.2026

Thank you so much for your heroic work in coloring technology, but please, is it possible to make a video explaining how to use the Qwen-iE program? Because even though I've downloaded everything, I still don't know how to use it.


RE: Using Stable Diffusion models for Colorization - Selur - 08.02.2026

There is already documentation and a video for it: https://docs.comfy.org/tutorials/image/qwen/qwen-image-edit

Cu Selur


RE: Using Stable Diffusion models for Colorization - Dan64 - 09.02.2026

(08.02.2026, 12:35)XxBo0oMxX Wrote: Thank you so much for your heroic work in coloring technology, but please, is it possible to make a video explaining how to use the Qwen-iE program? Because even though I've downloaded everything, I still don't know how to use it.

When using Qwen-IE to color B&W pictures, it is very important to use an appropriate prompt. The best that I have found so far is the following: "Colorize this image, natural colors. Strictly preserve all shapes, edges and background details."

By using only Python code, and adding some optimization tricks, I was able to lower the coloring time from 22 sec/image (ComfyUI) to 4 sec/image (Python only), so speed is not really an issue anymore. I decided not to release an HAVC extension with Qwen-IE because the HW requirements to reach this speed are very high (RTX 5070 Ti and 64 GB RAM). If in the future a DiT model with lower HW requirements is released that is able to color at a speed of 4 sec/image or better, I will evaluate the possibility of adding it to HAVC.
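
For reference, plain (unoptimized) Python usage looks roughly like this; it assumes the diffusers QwenImageEditPipeline as shown on the Qwen-Image-Edit model card, and it is NOT the optimized setup described above, which has not been published:

    import torch
    from PIL import Image
    from diffusers import QwenImageEditPipeline

    # Plain diffusers usage; without quantization, fewer steps and other
    # tricks this will be far slower than the 4 sec/image quoted above.
    pipe = QwenImageEditPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
    ).to("cuda")

    bw = Image.open("bw_frame.png").convert("RGB")  # illustrative file name
    prompt = ("Colorize this image, natural colors. "
              "Strictly preserve all shapes, edges and background details.")

    colorized = pipe(image=bw, prompt=prompt, num_inference_steps=8).images[0]
    colorized.save("colorized_frame.png")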

Dan


RE: Using Stable Diffusion models for Colorization - XxBo0oMxX - 10.02.2026

Thank you very much, firstly for the quick response.

Secondly, for considering adding Qwen-IE to the coloring project and the program. Please try to add it as soon as possible so we can benefit from this technology. It would also be helpful to release two versions of the HAVC program: one for high-end specifications and another for lower-end specifications, allowing users to choose between the two.

Sincerely,