Selur's Little Message Board

Full Version: Deoldify Vapoursynth filter
Hi, I hope you're well. Have you seen this yet? https://github.com/KIMGEONUNG/BigColor

Very good colorization, but unstable on video! Is there a way to stabilize it with DeOldify?

Best
Djilay
Hi, check this video for reference

https://www.youtube.com/watch?v=JXtRKEUPB2o&t=12s
@Dan64: Is there a way to lower the VRAM usage of DeOldify? (please, see: https://forum.selur.net/thread-3841.html)
(16.08.2024, 14:45)Selur Wrote: Looking at the GitHub page https://github.com/yyang181/colormnet: the OpenXLab page they wanted to set up after 1 or 2 weeks doesn't exist.
4 months without an update
=> it does not seem likely that there will ever be a release,...

The code was released about 5 hours ago. 
Crossing my fingers...

Dan
There is a link to a pretrained model.
Moscow State University also runs a comparison of colorization techniques: the MSU Video Colorization Benchmark.
DeOldify and DeepRemaster are not among the leaders on the board; overall, the best results seem to come from LVVCP.
I read the methodology used by MSU. They state that the benchmarked methods "are mainly focused on color propagation algorithms" and that they "minimized the appearance of new objects in the frames, information about which was missing in the first anchor frame". So they are testing a specific feature, "color propagation", providing (where possible) a reference image. It is no surprise that DeOldify, which does not use reference images, ranks only 7th on their scale. I don't understand why DDColor was not considered, but in any case even DDColor is not able to use reference images.
ColorMNet is a significant improvement over BiSTNet and would probably rank 1st under the MSU methodology.
The problem with models using frame-based color propagation is that they need a reference image and are unable to handle the situation where new objects appear in the frames.
To develop an automatic colorization tool, models are needed that can properly colorize a B&W image without a reference image. Both DeOldify and DDColor perform this task well, but they are unable to maintain temporal color consistency across frames.
One possible solution to this problem is to use a frame-based color propagation model, with DeOldify and/or DDColor providing the reference images. The tool should be smart enough to provide a new reference image every time new objects appear in the frames. The Hybrid Automatic Video Colorizer was developed with this intent. Now I'm working to include ColorMNet as a new and improved frame-based color propagation model.
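The reference-refresh idea above can be sketched as follows. This is only an illustration, not the actual HAVC code: the function name, the per-frame scene-change scores, and the threshold are my assumptions. Given scores that rise when new content enters the frame, a new reference image (e.g. from DeOldify/DDColor) is produced whenever the score exceeds the threshold:

```python
def pick_reference_frames(scene_scores, threshold=0.5):
    """Return the frame indices where a new reference image should be
    generated and handed to the color-propagation model.

    Frame 0 (the first anchor frame) always gets a reference; after that,
    a new one is produced whenever the scene-change score suggests that
    new objects have appeared in the frame.
    """
    refs = [0]  # the first anchor frame always needs a reference
    for i, score in enumerate(scene_scores[1:], start=1):
        if score > threshold:  # new objects likely entered the frame
            refs.append(i)
    return refs
```

A real implementation would compute the scores from a scene-change metric (e.g. frame differencing) rather than taking them as input.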
Please read post #500 to get a better understanding of the problem.

Dan
vsdeoldify-4.5.0_RC1

Hello Selur,

  I managed to get ColorMNet  inside Hybrid.
  These are the steps to install the v4.5.0 RC1:

  1) unzip the file spatial_correlation_sampler-0.5.0-py312-cp312-win_amd64.whl.zip under "Hybrid\64bit\Vapoursynth\Lib\site-packages"
  2) unzip the file vsdeoldify-4.5.0_RC1.zip under "Hybrid\64bit\Vapoursynth\Lib\site-packages"  (override the folder "vsdeoldify")   
  3) download the file DINOv2FeatureV6_LocalAtten_s2_154000.pth and save it in "Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\weights"

On the first run, additional files will be downloaded to: "Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\models"
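As a quick sanity check for the three install steps above, one could verify that the installed pieces are in place before starting Hybrid. This helper is hypothetical (the function name and the exact folder names produced by unzipping are my assumptions, not part of the release):

```python
import os

# Relative paths, under site-packages, that the three install steps
# above should have produced (folder names are assumptions).
EXPECTED = [
    "spatial_correlation_sampler",   # step 1: unzipped wheel
    "vsdeoldify",                    # step 2: unzipped package
    os.path.join("vsdeoldify", "colormnet", "weights",
                 "DINOv2FeatureV6_LocalAtten_s2_154000.pth"),  # step 3
]

def check_havc_install(site_packages):
    """Return the list of expected paths that are missing."""
    return [p for p in EXPECTED
            if not os.path.exists(os.path.join(site_packages, p))]
```

Calling it with r"Hybrid\64bit\Vapoursynth\Lib\site-packages" should return an empty list when the install is complete.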

I added the following 2 fields to the function HAVC_deepx():
  • ex_model: exemplar model to use for the color propagation; available models are:
                    0 : ColorMNet
                    1 : Deep-Exemplar
  • max_memory_frames: parameter used by the ColorMNet model; specifies the maximum number of encoded frames to keep in memory.
                    Suggested values are:
                        4 or 5 : for an 8GB GPU
                        8 or 9 : for a 12GB GPU
                        15 or 16 : for a 24GB GPU
                    If set to 0 (zero), all the encoded video frames are stored in memory.
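The suggested values above could be turned into a small helper that picks a max_memory_frames value from the available VRAM. This is just a sketch of the table; the function itself is hypothetical and not part of vsdeoldify:

```python
def suggest_max_memory_frames(vram_gb):
    """Map available GPU VRAM (in GB) to a conservative
    max_memory_frames value, following the suggested table."""
    if vram_gb >= 24:
        return 15   # 24GB GPU: 15 or 16 suggested
    if vram_gb >= 12:
        return 8    # 12GB GPU: 8 or 9 suggested
    if vram_gb >= 8:
        return 4    # 8GB GPU: 4 or 5 suggested
    # Below 8GB no value is suggested in the table; this fallback
    # is my own (very conservative) assumption.
    return 2
```

The lower bound of each suggested pair is used, trading a little quality of long-range propagation for headroom against out-of-memory errors.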

A few changes are required in the GUI:

1) the frame "DeepEX" should be renamed to "Exemplar Models"
2) an option should be provided to select one of the 2 exemplar models available: ColorMNet, Deep-Exemplar
3) the following parameters are specific to a given exemplar model:
     render_vivid, render_speed: used only by Deep-Exemplar
     max_memory_frames: used only by ColorMNet
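The model-specific parameters from point 3 could be expressed as a simple lookup the GUI uses to enable or disable fields. The mapping function below is my own sketch, not existing Hybrid code; only the parameter names and ex_model values come from the post:

```python
# ex_model values as exposed by HAVC_deepx()
COLORMNET, DEEP_EXEMPLAR = 0, 1

# Which GUI parameters apply to which exemplar model.
MODEL_PARAMS = {
    COLORMNET: {"max_memory_frames"},
    DEEP_EXEMPLAR: {"render_vivid", "render_speed"},
}

def enabled_params(ex_model):
    """Return the set of parameter names the GUI should enable
    for the selected exemplar model."""
    return MODEL_PARAMS[ex_model]
```

Parameters not in the returned set would be greyed out when that model is selected.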

I hope that it will also work on your side.

Crossing my fingers...

Dan
      


Nice.
I'll do some testing and report back.

Cu Selur
There also seem to be ref_weight and ref_thresh parameters, which were not present before,...
=> this will take some time (I have to read everything again to see what else changed)