11.05.2024, 18:52
It was released 6 hrs ago.
Deoldify Vapoursynth filter
11.05.2024, 19:11
Wow! You are very fast, but I'm faster: I already released a new version: https://github.com/dan64/vs-deoldify/rel...tag/v3.5.3
I added the option "MoreVivid" to VideoTune. There is no urgency to release a new version; I added this option mainly to keep the symmetry in the range of options. Meanwhile I discovered that in the NTIRE 2023 Video Colorization Challenge the benchmark to beat was DeOldify. The winner of the challenge has not released any code yet, but in case it is made available, I could try to create a new Vapoursynth filter for it. Dan
11.05.2024, 19:13
Next release will probably be either end of May or in June.
12.05.2024, 10:49
Uploaded a new test version which supports "MoreVivid".
Cu Selur
12.05.2024, 13:07
I tested the winner of the NTIRE 2023 Video Colorization Challenge (code here: BiSTNet).
Unfortunately the results were not as good as expected. The approach followed by this model is to use some reference frames to map the object colors and then try to keep the temporal consistency. Here is a simple comparison:
1) Start frame (for BiSTNet this is an external reference image). In this case the true starting frame was used as reference, while DDeoldify had to guess the colors.
2) Frame 25: in this case (due to temporal consistency) the pants of the boy on the right start to turn red (like the girl's skirt).
3) Frame 80: in this case the pants of the boy on the left start to turn red (like the girl's skirt).
4) Frame 125: in this case the girl's skirt starts to turn green (like the girl's pants).
In summary it is not so good: the DDeoldify(Stable) colors are less rich, but they are more stable. Moreover BiSTNet is very slow: only 0.15 fps. Dan
So it is probably more suited to fixing color loss in VHS transfers (where only a few frames are missing color).
12.05.2024, 16:23
The previous comparison was unfair, because a ground-truth image was used as the reference image.
In a real situation the "ground truth" image is not available and cannot be provided manually inside an automatic video coloring process. The most realistic solution is to provide, at every scene change, a reference image generated by another automatic image coloring model. In this case the best candidate for providing the reference images is DDColor. Given that the comparison with "DDeoldify(Stable)" was already provided, I will show the comparison with "DDColor(rf=32)".
1) Start frame (for BiSTNet the DDColor image was provided as reference): in this case the two images are the same by construction.
2) Frame 25: in this case BiSTNet is stable, while in DDColor the boys' shirts are starting to turn green.
3) Frame 80: even in this case BiSTNet is stable, while in DDColor the boy's shirt has become green and his trousers are almost blue.
4) Frame 113: in this case BiSTNet is still stable, while in DDColor the girl's skirt has also turned blue.
This example shows that BiSTNet could improve the stability of the images provided by DDColor when the DDColor images are used (only) as reference, but the comparison also provides a clear example of the DDColor instability. Comparing the BiSTNet images obtained in this example with the "DDeoldify(Stable)" images provided previously, it is possible to see that (apart from the choice of colors) they both present good color stability. But "DDeoldify(Stable)" is about 33x faster than BiSTNet, which is too slow and in practice cannot be used. The author wrote that he will provide a new version called ColorMNet that should be faster and less memory hungry; let's see... Dan
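The pipeline sketched above (run the expensive image colorizer only at scene changes, then let the exemplar model propagate those colors) needs a scene-change detector to decide where to inject a new reference. A minimal sketch of that scheduling step, assuming a simple mean-luminance-difference detector; the function name and threshold are illustrative, not part of any filter's API:

```python
import numpy as np

def reference_frames(lumas, threshold=0.10):
    """Return indices of frames that should get a fresh reference image
    (e.g. from an image colorizer such as DDColor). A scene change is
    flagged when the mean absolute luminance difference between two
    consecutive frames exceeds `threshold` (luminance assumed in 0..1).
    Real scene detection uses more robust metrics; this is a toy."""
    refs = [0]  # the first frame always needs a reference
    for i in range(1, len(lumas)):
        if np.mean(np.abs(lumas[i] - lumas[i - 1])) > threshold:
            refs.append(i)
    return refs
```

Every index returned would trigger one call to the image colorizer; all frames in between are colored by propagation only, which is what makes the combination faster than colorizing each frame independently.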
12.05.2024, 16:25
The NTIRE 2024 Video Colorization Challenge shouldn't be far away either... iirc NTIRE was always in the summer.
13.05.2024, 20:12
(are there plans for a direct implementation of deoldify into hybrid?)
More directly than through the torch-addon: No.
(see: https://selur.de/downloads/ for a link to the GoogleDrive where the addons are available for download, and read the README.md on how to install the packages) Cu Selur