Deoldify Vapoursynth filter - Printable Version

+- Selur's Little Message Board (https://forum.selur.net)
+-- Forum: Talk, Talk, Talk (https://forum.selur.net/forum-5.html)
+--- Forum: Small Talk (https://forum.selur.net/forum-7.html)
+--- Thread: Deoldify Vapoursynth filter (/thread-3595.html)
RE: Deoldify Vapoursynth filter - djilayeden - 21.08.2024

Hi, I hope you're well. Have you seen this yet? https://github.com/KIMGEONUNG/BigColor
Very good colorization, but unstable on video! Is there a way to stabilize it with DeOldify?
Best
Djilay

RE: Deoldify Vapoursynth filter - hallomanbh - 21.08.2024

Hi, check this video for reference: https://www.youtube.com/watch?v=JXtRKEUPB2o&t=12s

RE: Deoldify Vapoursynth filter - Selur - 02.09.2024

@Dan64: Is there a way to lower the VRAM usage of DeOldify? (please see: https://forum.selur.net/thread-3841.html)

RE: Deoldify Vapoursynth filter - Dan64 - 07.09.2024

(16.08.2024, 14:45)Selur Wrote: Looking at the Github page https://github.com/yyang181/colormnet and the OpenXLab page they wanted to set up after 1 or 2 weeks doesn't exist.

The code was released about 5 hours ago. Crossing my fingers...

Dan

RE: Deoldify Vapoursynth filter - Selur - 07.09.2024

There is a link to a pretrained model.

RE: Deoldify Vapoursynth filter - Akila - 09.09.2024

Moscow State University also runs a comparison of colorization techniques: the MSU Video Colorization Benchmark. DeOldify and DeepRemaster are not among the leaders on the board; overall, the best results seem to come from LVVCP.

RE: Deoldify Vapoursynth filter - Dan64 - 10.09.2024

I read the methodology used by MSU. They state that they "are mainly focused on color propagation algorithms" and that they "minimized the appearance of new objects in the frames, information about which was missing in the first anchor frame". So they are testing a specific feature, "color propagation", providing (where possible) a reference image. It is no surprise that DeOldify, which does not use reference images, ranks only 7th on their scale. I don't understand why DDColor was not considered, but in any case even DDColor is not able to use reference images. ColorMNet is a significant improvement over BiSTNet and would probably rank 1st under the MSU methodology.
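To make the anchor-frame limitation concrete: a propagation model copies colors from a reference (anchor) frame, so once content appears that the anchor never saw, there is nothing to propagate from. Below is a minimal sketch in plain Python (illustrative only; the frame representation, function names, and threshold are made up and this is not the actual filter or benchmark code) of deciding when a fresh reference frame is needed:

```python
def mean_abs_diff(frame_a, frame_b):
    """Average absolute luma difference between two equally sized frames
    (each frame is a flat list of pixel values)."""
    assert len(frame_a) == len(frame_b)
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def reference_frame_indices(frames, threshold=40.0):
    """Return indices of frames that should become new reference (anchor) frames.

    A frame becomes a new anchor when it differs too much from the current
    anchor, i.e. when it likely contains content the anchor never saw.
    Purely illustrative; a real pipeline would use proper scene-change
    detection rather than a raw difference threshold.
    """
    if not frames:
        return []
    anchors = [0]                 # the first frame is always an anchor
    current = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        if mean_abs_diff(current, frame) > threshold:
            anchors.append(i)     # big change: request a fresh reference
            current = frame
    return anchors
```

In the approach discussed in this thread, each selected anchor would then be colorized by an automatic model (DeOldify/DDColor) and propagated forward by the exemplar model until the next anchor.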
The problem with models using frame-based color propagation is that they need a reference image and cannot handle the situation where new objects appear in the frames. Developing an automatic colorization tool requires models that can properly colorize a B&W image without a reference image. Both DeOldify and DDColor are good at this task, but they are unable to maintain temporal color consistency across frames. A possible solution to this problem is to use a frame-based color propagation model and let DeOldify and/or DDColor provide the reference images. The tool should be smart enough to provide a new reference image every time new objects appear in the frames. The Hybrid Automatic Video Colorizer was developed with this intent. I'm now working to include ColorMNet as a new and improved frame-based color propagation model. Please read post #500 to get a better understanding of the problem.

Dan

RE: Deoldify Vapoursynth filter - Dan64 - 15.09.2024

vsdeoldify-4.5.0_RC1

Hello Selur,
I managed to get ColorMNet inside Hybrid. These are the steps to install v4.5.0 RC1:

1) unzip the file spatial_correlation_sampler-0.5.0-py312-cp312-win_amd64.whl.zip under "Hybrid\64bit\Vapoursynth\Lib\site-packages"
2) unzip the file vsdeoldify-4.5.0_RC1.zip under "Hybrid\64bit\Vapoursynth\Lib\site-packages" (overwrite the folder "vsdeoldify")
3) download the file DINOv2FeatureV6_LocalAtten_s2_154000.pth and save it in "Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\colormnet\weights"

On the first run, additional files will be downloaded into "Hybrid\64bit\Vapoursynth\Lib\site-packages\vsdeoldify\models".

I added the following 2 fields to the function HAVC_deepx():
1 : Deep-Exemplar

4 or 5 : for an 8GB GPU
8 or 9 : for a 12GB GPU
15 or 16 : for a 24GB GPU
if set equal to 0 (zero), all the encoded video frames are stored in memory

A few changes are required in the GUI:

1) the frame "DeepEX" should be renamed to "Exemplar Models"
2) there should be an option to select one of the 2 available Exemplar Models: ColorMNet, Deep-Exemplar
3) the following parameters are specific to a given Exemplar Model:
   render_vivid, render_speed: used only by Deep-Exemplar
   max_memory_frames: used only by ColorMNet

I hope it will work on your side too. Crossing my fingers...

Dan

RE: Deoldify Vapoursynth filter - Selur - 15.09.2024

Nice. I'll do some testing and report back.
Cu Selur

RE: Deoldify Vapoursynth filter - Selur - 15.09.2024

There also seem to be ref_weight and ref_thresh, which were not present before... => this will take some time (I have to read everything again to see what else changed)
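As a side note, Dan's max_memory_frames guidance above can be condensed into a small helper. This function is hypothetical (it is not part of the vsdeoldify API); the returned values come straight from the table in the post, and the fallback for smaller cards is an assumption of mine:

```python
def suggest_max_memory_frames(vram_gb):
    """Suggest a ColorMNet max_memory_frames value from available GPU VRAM.

    Values follow the guidance posted in the thread:
    8 GB -> 4-5, 12 GB -> 8-9, 24 GB -> 15-16 (lower bound returned here);
    0 would mean "keep all encoded frames in memory".
    Hypothetical helper, not part of the vsdeoldify API.
    """
    if vram_gb >= 24:
        return 15
    if vram_gb >= 12:
        return 8
    if vram_gb >= 8:
        return 4
    return 2  # conservative fallback for smaller cards (my assumption)
```

Picking the lower bound of each range leaves some VRAM headroom for the rest of the VapourSynth pipeline.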