Using Stable Diffusion models for Colorization
#11
Attaching the workflow to the image isn't a bizarre idea of mine; it's the standard way ComfyUI saves images.
The reason is simple: anyone who receives an image generated by ComfyUI can see how it was generated and possibly reproduce the same result.

I downloaded the image from the forum and uploaded it to ComfyUI as described, and the workflow is perfectly visible. 
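If you want to inspect the metadata without opening ComfyUI, here is a minimal Python sketch (assuming Pillow is installed; the file name is just an example). ComfyUI stores the graph in PNG text chunks named "workflow" (UI format) and "prompt" (API format):

Code:
import json
from PIL import Image

# ComfyUI writes the workflow into PNG text chunks, which Pillow
# exposes through Image.info.
img = Image.open("recolored_frame.png")   # example file name
workflow = img.info.get("workflow")       # UI-format graph, if present

if workflow:
    data = json.loads(workflow)
    print(f"embedded workflow has {len(data.get('nodes', []))} nodes")
else:
    print("no embedded workflow (the image may have been re-encoded)")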

However, I've also attached a zip file containing the workflow in JSON format. Again, just drag and drop the file into ComfyUI to see the workflow.

I also included a legend with the links where you can download the models used in the workflow.

Dan


Attached Files
.zip   Qwen_IE-2511_recolor_v3.zip (Size: 3.46 KB / Downloads: 31)
#12
Unlike drag&dropping the PNG, drag&dropping the JSON works for me.
¯\_(ツ)_/¯

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#13
In case it could be useful: the workflow uses the svdq-fp4 model available at this page: https://huggingface.co/nunchaku-ai/nunch...-edit-2509
The fp4 quantization is supported only by RTX 50 GPUs; for older GPUs it is necessary to use the svdq-int4 quantization, which you can find at the link above.

Some suggestions: given that colorizing a B&W image is a much simpler task than text-to-image generation, it is possible to use the rank32 version of the svdq-int4 quantization and lower the number of steps from 4 to 2.
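If you want to pick the quantization automatically, here is a hedged sketch using PyTorch. It assumes fp4 requires a Blackwell-class GPU (RTX 50, CUDA compute capability 10 or higher); that threshold is my assumption to verify, not an official Nunchaku check.

Code:
import torch

# Assumption: RTX 50 (Blackwell) cards report compute capability >= 10,
# and only those support the fp4 quantization.
major, _minor = torch.cuda.get_device_capability(0)
quant = "svdq-fp4" if major >= 10 else "svdq-int4"
print(f"compute capability {major}.x -> use {quant}")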

Dan
#14
Hello,

For those interested in applying DiT models to color old B&W films: I've recolored the films I've published on the Internet Archive using the new pipeline, which uses a DiT model to automatically color the reference frames.
The list is available at this link: havc-colorized-movies

The new pipeline for colorizing the movies is the following:

1) extract the reference frames (with HAVC)
2) colorize the reference frames with a DiT model (currently I'm using qwen-image-edit)
3) propagate the color of the reference frames to the full clip (with HAVC)
4) generate an alternative stable colored clip (with HAVC)

Unfortunately, even though the frames colored with DiT models have more stable and better colors than DDColor, the problem of color stability and consistency is still present. To mitigate it, I need to merge the clip colored at step 3 with the clip generated at step 4; a minimal sketch of the merge is shown below.
In my list of colored movies, there is only one movie that was not merged with an alternative stable colored clip: a-night-to-remember-colorized-1958-720p (*)
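As an illustration of the merge step, here is a minimal VapourSynth sketch; the file names are examples, and ffms2 is just one way to load the clips, not necessarily what HAVC does internally:

Code:
import vapoursynth as vs
core = vs.core

step3 = core.ffms2.Source("propagated_colors.mkv")  # clip from step 3
step4 = core.ffms2.Source("stable_colors.mkv")      # clip from step 4

# std.Merge blends the two clips frame by frame; weight=0.5 averages them.
merged = core.std.Merge(step3, step4, weight=0.5)
merged.set_output()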

Dan

(*) left as a reference to the problems of color stability and consistency when using DiT models. The movie was colored with HAVC (ColorMNet, max_memory_frames=150); despite its shortcomings, ColorMNet is the only model able to mitigate this problem automatically.
#15
Can Qwen-IE be added to the Hybrid program?
That would eliminate the need for external programs and simplify the coloring process, making it easier to adopt this new coloring technology in Hybrid.
#16
No. At this time, I don't know how to add Qwen-iE or similar stuff to Hybrid (in a good way).

comfy-cli and ComfyUI-to-Python-Extension might be suited to automating some of this, running the ComfyUI generation as an external step rather than using ComfyUI as a 'native' VapourSynth module. For example, a generation can be queued through ComfyUI's HTTP API, as sketched below.
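This is a hedged sketch, assuming a ComfyUI instance listening on 127.0.0.1:8188 and a workflow saved with ComfyUI's "Save (API Format)" option (the normal UI-format JSON will not work here):

Code:
import json
import urllib.request

# Load an API-format workflow export (not the UI-format JSON).
with open("workflow_api.json") as f:
    prompt = json.load(f)

# POST the graph to a locally running ComfyUI instance; the response
# contains the prompt_id of the queued generation.
payload = json.dumps({"prompt": prompt}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))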
Ignoring that, at least with current consumer/local hardware, the process would be too slow unless it is used just for generating a few key frames.
If someone comes up with something like 'vs-mlrt' or a 'native' VapourSynth module, things might be different, but atm. this all does not seem feasible to me.
(Also, temporal consistency could be a problem with how Qwen-image-edit & co. work atm.)

Cu Selur
----
Dev versions are in the 'experimental'-folder of my GoogleDrive, which is linked on the download page.
#17
Thank you so much for your heroic work on coloring technology. Please, is it possible to make a video explaining how to use Qwen-iE? Even though I've downloaded everything, I still don't know how to use it.

