25.12.2025, 18:19
Good news, I tried a new approach that seems promising.
I used the latest version of Qwen Image Edit in ComfyUI, with the simple prompt: "restore and colorize this photo. Repair the damaged white background. Maintain the consistency between the characters and the background", and I obtained the following result:
![[Image: attachment.php?aid=3458]](https://forum.selur.net/attachment.php?aid=3458)
As you can see, Qwen recognized the castle and colorized it properly. In all my tests, Qwen Image Edit produced colorized images that were better than the ones colorized with DDColor (any model) or DeOldify (any model).
I'm sure this approach represents the future of colorization.
But there are some big problems that need to be addressed:
1) model storage: Qwen Image Edit requires about 30 GB of storage for the diffusion model, text encoder and VAE;
2) speed: on my RTX 5070 Ti the colorization process takes about 20 seconds per picture.
Moreover, it will be necessary to write all the required Python code from scratch, because I cannot use ComfyUI with VapourSynth.
I will start investigating the possibility of developing a filter using this model; in the meantime, for those familiar with ComfyUI, I've attached an image (QwenIE_Recolor_small_workflow.png) containing the workflow that I used.
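For anyone who wants to experiment outside ComfyUI before a proper VapourSynth filter exists, here is a rough sketch of what the standalone Python path might look like via Hugging Face diffusers. Note that the `QwenImageEditPipeline` class name, the `Qwen/Qwen-Image-Edit` repo id and the call parameters are my assumptions based on recent diffusers releases, not something taken from the workflow above, so treat this as a starting point rather than working code:

```python
# Hypothetical single-image colorization sketch using diffusers.
# QwenImageEditPipeline and "Qwen/Qwen-Image-Edit" are ASSUMPTIONS --
# verify both against the diffusers version you have installed.

PROMPT = (
    "restore and colorize this photo. Repair the damaged white background. "
    "Maintain the consistency between the characters and the background"
)

def colorize_image(path_in: str, path_out: str, steps: int = 40) -> None:
    # Heavy imports kept inside the function so the module stays
    # importable on machines without a GPU stack installed.
    import torch
    from diffusers import QwenImageEditPipeline
    from PIL import Image

    pipe = QwenImageEditPipeline.from_pretrained(
        "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = Image.open(path_in).convert("RGB")
    result = pipe(image=image, prompt=PROMPT,
                  num_inference_steps=steps).images[0]
    result.save(path_out)

# Usage (requires ~30 GB of model files and a CUDA GPU):
#   colorize_image("damaged_bw.png", "colorized.png")
```

For video work the pipeline would have to be loaded once and reused per frame; at ~20 seconds per image, batching or a distilled/quantized variant would be needed for anything beyond short clips.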
Merry Christmas,
Dan

